Why companies should care about the AI Act

Mika Aldaba
5 min read · Jan 26, 2023

If you work with data, you need to be aware of the AI Act, a proposed legal framework for regulating AI in the EU. It covers areas such as transparency, accountability and human oversight, in line with EU values such as respect for human rights, privacy and democracy. Simply put, a colleague has described it as “GDPR, but for AI.” The latest compromise text of the AI Act was adopted by the Council of the EU last December. After a parliament vote in March, it could be enacted in full later this year or early next year. Most companies are not yet prepared to manage the risks attached to their AI usage and development.

On one hand, this will increase costs for both providers and users of AI systems because of the risk-management obligations involved. The financial implications are huge: companies can be fined up to €30,000,000 or 6% of total worldwide annual turnover (3% in the case of an SME or start-up). Even so, it is worth investing in human-centred, responsible AI. No fine is worth a dystopian Black Mirror future.
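
To make the exposure concrete, here is a back-of-the-envelope Python sketch. It assumes, as in the original Commission proposal, that the applicable figure is the higher of the fixed cap and the turnover percentage; the exact mechanics (especially for SMEs) vary between drafts, so the `max_fine_eur` helper below is an illustration, not legal advice.

```python
# Rough sketch of maximum AI Act fine exposure for the most serious breaches.
# Assumption: the fine is the fixed cap or the turnover percentage,
# whichever is HIGHER (per the original proposal's wording); drafts differ
# on the details, particularly for SMEs. Not legal advice.

def max_fine_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound on a fine given worldwide annual turnover in euros."""
    fixed_cap = 30_000_000
    rate = 0.03 if is_sme else 0.06  # 3% for SMEs/start-ups, 6% otherwise
    return max(fixed_cap, rate * annual_turnover_eur)

# A company with €2bn worldwide turnover risks up to €120m:
print(f"{max_fine_eur(2_000_000_000):,.0f}")            # 120,000,000
# A start-up with €10m turnover is bounded by the fixed cap:
print(f"{max_fine_eur(10_000_000, is_sme=True):,.0f}")  # 30,000,000
```

For any company with turnover above €500m, the percentage dominates the fixed cap, which is why large providers are paying close attention.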

On the other hand, this could become a differentiator for companies that can show their products are more trustworthy. As people grow more skeptical of technology, they will be drawn to companies that place humans at the core of their products. Detractors of the AI Act say it will harm innovation and startups, but the opposite may well be true: you have to be creative to develop within this framework.

And if you're in the US, the same issues with AI are being addressed, but with different policies and frameworks. Ted Lieu, one of only three members of the US Congress with a computer science degree, argues that if the FDA exists for food and drugs, where the interaction of molecules is not fully understood, we should have an equivalent agency for AI, where algorithmic results can be just as hard to explain.

It took him and his team two years to pass a law on facial recognition, which is only one use case in a sea of many. Legislating each application separately would take more than a lifetime, given that we don't even know the full range of AI's capabilities, which is why a general approach like the EU AI Act is needed to set the initial benchmark. It is also worth taking a look at the second draft of the AI Risk Management Framework just released by the US National Institute of Standards and Technology (NIST).

Here are six key themes to take note of in the latest compromise version (a toy sketch of the underlying risk-tier logic follows the list):

1. The key definitions of AI and high-risk systems have been updated
    – The definition of AI has been narrowed to differentiate AI from classical software and to allow for further changes
    – Prohibited AI practices have been extended to cover social scoring by private actors
    – Vulnerable groups now include people with social and economic disadvantages
    – The high-risk classification has added, removed and fine-tuned several use cases
    – The requirements for high-risk systems have been simplified so that stakeholders can more easily understand and comply with them

2. General-purpose AI (e.g. image recognition, translation, video generation) is now defined and will be subject to some of the same requirements as high-risk systems

3. The scope of the proposed AI Act has been clarified to specifically exclude national security, defence and military purposes. Research and development are also excluded from the scope. Provisions relating to law enforcement authorities have been added.

4. Conformity assessments, governance framework, market surveillance, enforcement and penalties have been clarified and simplified for easier implementation

5. To increase transparency, affected persons must be informed when they are exposed to an emotion recognition system. People will be able to file complaints about violations of any provision and expect them to be handled. Users of high-risk systems that are public authorities will need to register in the EU database of high-risk systems.

6. Measures in support of innovation have been added, such as allowing AI systems to be tested in real-world conditions within regulatory sandboxes
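
If it helps to internalise the structure running through these themes, here is a toy Python sketch of the Act's risk-based logic. The tier names, the example use cases and the `EXAMPLE_USE_CASES` lookup are my own simplified assumptions for illustration; the Act itself works from detailed definitions and annexes, not a lookup table.

```python
# Illustrative only: a toy model of the AI Act's tiered, risk-based approach
# as reflected in the themes above. Categories and examples are simplified
# assumptions, not the legal text.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring, now incl. private actors
    HIGH_RISK = "high-risk"        # conformity assessment, registration, oversight
    TRANSPARENCY = "transparency"  # e.g. emotion recognition: people must be informed
    MINIMAL = "minimal"            # no new obligations

# Hypothetical mapping for illustration purposes.
EXAMPLE_USE_CASES = {
    "social scoring by a private platform": RiskTier.PROHIBITED,
    "CV screening for hiring": RiskTier.HIGH_RISK,
    "emotion recognition in a call centre": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> str:
    """Return the assumed risk tier for a use case (defaults to minimal)."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case!r} -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(classify(case))
```

The point of the sketch is the shape of the framework: obligations scale with risk tier, so the first compliance question for any AI product is which tier each use case falls into.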

It may be challenging to keep up with all the updates, especially if your company doesn't have a dedicated team tracking them or if your experts don't have the bandwidth. In that case, it is worth getting external help to stay on top of compliance. My colleagues Bodil Hald Brabæk, Mayra Futema, Myra Abbasi, Tarek Ghanoum, Iulia-Gabriela Popescu and Henriette Jepsen have just held a workshop explaining the AI Act to a Scandinavian client. Tarek has also recorded two videos (in Danish): one on how AI can discriminate and one on the regulation of AI in the EU.

To get an in-depth explanation of how your company might be affected, I highly suggest getting in touch with a multidisciplinary team such as ours, as this is a complex topic. We can explain the new regulations and any further amendments from legal, cybersecurity, data and UX perspectives. That is one of the advantages of working with a large consultancy rather than a smaller, more specialised agency.

On my end, I focus on finding tools and exercises from the field of design to make AI and legal frameworks more accessible to non-technical people. We have had success with the nightmare scenario workshop, where you try to come up with worst-case scenarios for AI use cases. This was inspired by Reza Arkan's master's thesis. I am also looking forward to seeing the toolkit from @dataactivism, because it takes a participatory, bottom-up, grassroots approach, in contrast to the top-down regulation of the AI Act and the official risk frameworks published by governments.

Public Miro board for mapping data resistance actions

In summary, the AI Act may begin implementation as soon as a year from now, and companies should plan ahead in order to differentiate and innovate rather than merely comply to avoid fines. A cross-functional approach is recommended to get more people on board with using AI responsibly, making society better rather than causing further harm.
