
AI Act – Do we need a gold standard for artificial intelligence?


The discussion surrounding the AI Act, with which the European Union (EU) aims to regulate the development and use of artificial intelligence (AI), is currently causing considerable concern. The proposed regulation is a source of unease not only in the start-up scene but also among many established companies and associations.

The key point is the risk-based approach with four tiers: the higher the risk posed by an AI application, the more strictly it is regulated.

  • Unacceptable risk: Systems that pose a clear threat, such as social scoring or manipulative systems; these are banned outright.
  • High risk: Systems that pose a threat to the safety or fundamental rights of EU citizens; these are subject to strict requirements.
  • Limited risk: Systems used in non-critical areas of life, such as chatbots; these are subject to transparency obligations.
  • Minimal risk: Spam filters, computer games, etc.; these remain largely unregulated.

I think it makes sense to have these discussions and to work out appropriate regulations. The difficulty will be striking a balance between the capacity to innovate on the one hand and a defined level of protection for citizens on the other.

The biggest problem we see as a company in this context is the broad definition of the term AI. In addition to machine learning, the draft law also covers statistical approaches as well as search and optimization methods. Taken literally, this could encompass almost any software currently being developed.

Whatever final form the AI Act takes, we pursue the approach of making the behavior of an AI solution as comprehensible as possible from the user's point of view. After all, AI solutions are often black boxes, and we know that their behavior cannot always be fully understood. This is precisely where solution providers need to develop alternatives that allow users to understand the behavior and limitations of an AI system and assess it for themselves. We have followed this approach at Datalyxt from the very beginning and also apply it within SdbHub when reading and extracting data from safety data sheets (SDSs). Only transparency creates trust in a new technology.

As part of the KARL research project, which is funded by the German Federal Ministry of Education and Research (BMBF), we are also working with other companies and research partners to explain the behavior of AI in a way that is transparent to users.

We hope that the EU will come up with a proposal that is acceptable to all parties involved.