
Artificial Intelligence as Part of Digital Responsibility

Enabling Small and Medium-Sized Enterprises


Artificial intelligence is a divisive topic.

On one side you have fangirls and fanboys who go wild about the breathtaking possibilities of artificial intelligence. On the other are the critics who worry about its devastating consequences. This may be slightly exaggerated, but it’s generally true. As a consulting company that deals extensively with technology and its impact on the economy and society, we take a much more nuanced view of AI. And we firmly believe that objective discussion and debate are vital if we are to seize the opportunities that artificial intelligence offers while remaining aware of the risks. This is how we interpret our corporate digital responsibility.

Using AI to Benefit Society

One of the opportunities for companies is that they can use AI to improve the efficiency of their processes and to create completely new products, services and business models. This reduces costs and increases sales, which has a two-fold effect on profit. When companies really commit to AI – because they know that they have to take on their responsibility – the resulting innovations can also have enormous benefits for society and every single person in it. The spectrum is broad: more efficient production and logistics processes, for example, reduce energy consumption, which conserves resources and cuts emissions. This is both “lean” and “green.” Artificial intelligence can be used to improve control over mobility in cities, which helps to protect the environment and relieves the burden on infrastructure. The technology can also be used to reduce food waste – our MEALLY application is a practical example of how this can work. To put it simply, when used correctly, artificial intelligence can help us create a better tomorrow!

The key words here are “used correctly.” As with almost any technology, AI can be deliberately used for purposes that are less than worthwhile – it has been used in weapons systems, for instance, for a long time. But there are also unintended negative effects. One important aspect is that machine learning is susceptible to bias: if the training data is skewed, the results delivered by the artificial intelligence will be systematically flawed. One frequently cited example is that, in an HR context, an AI that learns from historical data will conclude that middle-aged, white men are the best candidates for a position on the executive board – simply because that is how it has always been in the past.
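To make that bias mechanism concrete, here is a deliberately minimal sketch in Python (not any real HR system – the groups, figures and function names are invented for illustration): a naive “model” that simply recommends whichever group had the highest hire rate in the historical data will faithfully reproduce whatever bias that history contains.

```python
# Toy illustration of bias in learning from historical data.
# All data and names below are hypothetical.
from collections import defaultdict

def hire_rate_by_group(history):
    """Compute the historical hire rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

def biased_recommendation(history):
    """Recommend the group with the best historical hire rate --
    which is exactly how a skewed past becomes a skewed prediction."""
    rates = hire_rate_by_group(history)
    return max(rates, key=rates.get)

# A (fictional) history of board appointments skewed toward one group:
history = [
    ("middle-aged man", True), ("middle-aged man", True),
    ("middle-aged man", False),
    ("woman", True), ("woman", False), ("woman", False),
]

print(biased_recommendation(history))  # prints "middle-aged man"
```

Real machine-learning models are far more sophisticated than this frequency count, but the failure mode is the same: without deliberate countermeasures, the model optimizes for what the past looked like, not for what a fair decision should look like.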

New legal challenges will arise if decisions are delegated to AI, especially in relation to responsibility and liability – the current rules don’t really provide a framework for making these judgments. This issue is currently under the microscope in the context of autonomous driving. There is also another factor that is seen as problematic, particularly by critics. Their concern is that if machines are given too much autonomy they could ultimately become impossible to control.

Two Sides of CDR

In light of all of this, we believe that corporate digital responsibility means two things in an AI context: firstly, it is important to consistently explore how technology can help create a better world and then apply that knowledge. Secondly, companies must actively tackle ethical issues, address them clearly in public debate, and communicate their findings and plans transparently. This is the only way to create the level of acceptance that artificial intelligence needs in order to succeed.

There is also political support for achieving this.

In November 2018, the German Government adopted an artificial intelligence strategy designed to make Germany a leading hub for this technological megatrend. However, according to the Federal Ministry for Economic Affairs and Energy, only 5.8 per cent of German companies are actually using AI successfully. A boost is needed from medium-sized companies, which still generate the bulk of Germany’s economic output.

SMEs in particular face three interrelated challenges when it comes to using artificial intelligence:

  1. Decision-makers in medium-sized companies often have little experience with AI but considerable reservations and uncertainty. From our perspective, taking the mystery out of artificial intelligence is vital.
  2. Although many companies are still cautious, there is great demand for employees with AI skills. In 2019, there were 22,500 vacancies for AI-related roles, 43 per cent of which could not be filled. When it comes to competing for talent, SMEs are often at a disadvantage. Universities are turning out more and more young people with the appropriate training, but with a certain time lag.
  3. You can never know for certain in advance whether an AI project will actually yield the success you’re hoping for, so stakeholders need to be prepared for the possibility that they won’t see an immediate return on their investment. As a result, medium-sized companies with tight R&D budgets prefer to focus on more predictable developments.

Mutual Progress with Initiatives

MHP is involved in various initiatives aimed at changing this situation and in particular at helping medium-sized companies to make use of AI.

One example is our collaboration on the regional AI labs, which are part of the “AI for SMEs” action plan. The aim of the labs is to make it easier for SMEs to gain access to AI and to guide them as they take their first steps toward using it effectively. Working with Esslingen University and alongside the relevant SMEs in workshops, we develop the initial, specific ideas for how to use the technology. We also show the SMEs how they can become fully data-driven and what role an AI strategy can play.

In addition, MHP is actively involved in implementing the German Government’s AI strategy mentioned earlier. Together with the German Institute for Standardization (DIN), we want to use customer-focused funding projects to create a framework for implementing AI projects that puts the emphasis on European value propositions and ethics guidelines throughout the entire life cycle of an AI solution. The result for our customers will be digital products that stand out in the global market and lastingly strengthen the “AI made in Germany” quality seal. The framework will also produce genuine DIN standards that make it possible to quantify aspects such as the transparency, reliability and fairness of AI.

Author

    Vanessa Viellieber

    Head of Business Area Data Science & Advanced Analytics
