
EU AI Act passed

The draft EU AI Act comfortably passed the vote last week, with 84 votes in favor out of a possible 103. Along with Canada, the EU is one of the first jurisdictions to pass laws specifically regulating the use of advanced analytics. Like the GDPR before it, the Act is likely to have considerable influence on the international laws that follow.


Risk-based legislation

The original draft focused on the level of risk. While no single technology is prohibited outright, the law restricts the use of some types of algorithms in specific applications. This means that those using more established analytics processes in low-risk situations are relatively free to carry on with their work. However, some uses of AI are banned entirely, and many more will involve additional obligations for the development team.

In particular, the use of ‘subliminal or intentionally manipulative’ techniques to exploit vulnerabilities is deemed an unacceptable risk. It is unclear at the moment what this will mean in practice. Many are braced for it to cover personalized messaging during elections, designed to manipulate how people vote.

In addition, social scoring, in other words classifying people based on their social behavior, socioeconomic status, and personal characteristics, will be strictly prohibited. Social scoring techniques reduce the sum of a person’s actions to a number between 1 and 5, with the idea that a high score opens up opportunities, while a lower rating leaves the individual isolated and with limited options. The classic example is the social credit system piloted in China. It is unclear how, for example, insurance company ratings would be viewed under the new laws: many life insurers are allowed to use social media data to learn more about their clients, and businesses such as Airbnb and taxi services are fundamentally based on social scoring.
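To make the pattern concrete, here is a toy sketch of what a social-scoring function can look like. The signal names and the 3.5 approval threshold are invented purely for illustration; they do not come from the Act or from any real system.

```python
from statistics import mean

# Purely illustrative behavioral signals, each already normalized to 1-5.
# Real systems draw on far more data; these names are hypothetical.
signals = {
    "payment_history": 4.2,
    "social_media_sentiment": 3.1,
    "neighborhood_rating": 2.8,
}

def social_score(signals: dict[str, float]) -> float:
    """Collapse many behavioral signals into a single score between 1 and 5."""
    score = mean(signals.values())
    return round(min(max(score, 1.0), 5.0), 1)

score = social_score(signals)
# A threshold then gates access to opportunities -- the pattern the Act prohibits.
print(f"score={score}, approved={score >= 3.5}")
```

The concern is precisely this reduction: a single number, however it is computed, ends up gating access to services.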

In last-minute amendments, the following practices were added to the list of banned applications of AI:

  • Real-time remote biometric identification in public spaces
  • ‘Post’ remote biometric identification systems (identification after the fact, from recorded footage), unless used for the prosecution of serious crimes
  • Biometric categorization on the basis of sensitive characteristics (gender, race, ethnicity, citizenship status, religion, or political affiliation)
  • Emotion recognition systems in law enforcement, border management, the workplace and educational institutions
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create databases to train facial recognition algorithms

There are few surprises in this list, and the controls very much target governmental use in member states, rather than common corporate applications of data analytics.


Foundation models

The more recent amendments to the Act, made this year, introduce the concept of ‘foundation models’. These are sophisticated technologies like ChatGPT, trained on vast data sets and applicable across multiple use cases.

Most of us will never work with these models directly. Few companies have access to a billion-dollar machine to train such an algorithm, or the legal right to use the billions of documents needed to feed it. The onus of responsible development therefore falls on the handful of companies working with these technologies: they are required to ensure that their machines will not cause harm to health, safety, fundamental rights, or the environment. It is unclear how this might be regulated, as the teams developing the algorithms currently have little to no contact with the companies making use of them.


Environmental burden of generative AI

The new law may draw a line in the development and deployment of foundational technologies for use in the EU zone. For example, GPT-3 alone has 175 billion parameters, which have to be cycled through each time one of us asks the machine to create a workout playlist or write our history homework. A query to one of these foundational technologies may consume four to five times the energy of a Google search. We don’t have specific figures, as these are currently not disclosed, but it is estimated that 552 tons of carbon dioxide were produced just to get GPT-3 ready to launch. Around 25,000 trees would be needed to offset that figure, and we don’t know how much carbon is now being produced by ChatGPT’s 100 million users.
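As a rough sanity check on those numbers, the sketch below reproduces the offset estimate. The 552-ton training figure comes from the text above; the roughly 22 kg of CO2 a mature tree absorbs per year is our own assumption, a commonly cited rate rather than anything from the Act or the disclosed data.

```python
# Back-of-envelope check on the offset figure above. The 552-ton training
# estimate is from the text; the ~22 kg of CO2 a mature tree absorbs per
# year is an assumed rate, not a disclosed figure.
TRAINING_EMISSIONS_KG = 552 * 1000       # 552 metric tons of CO2
CO2_PER_TREE_PER_YEAR_KG = 22            # assumed annual uptake per tree

trees_needed = TRAINING_EMISSIONS_KG / CO2_PER_TREE_PER_YEAR_KG
print(f"Trees needed to offset one year of uptake: {trees_needed:,.0f}")  # ~25,000
```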

The requirement to limit harm to the environment may encourage the use of more efficient foundation model architectures, which can produce similar results with 100 to 1,000 times less energy consumption.

 

Give supervised machines credit for their work

There are additional requirements in the law for generative AI. In particular, users will need to disclose that content was created by a machine. Companies using these tools will need safeguards in place to ensure that the machine does not generate illegal content, such as reproducing copyrighted material. It is unclear how this would affect the business models of, say, Snapchat, where an unsupervised intelligent agent currently interacts with children on the app and has given completely inappropriate advice. Would warnings alone be enough to evidence safeguards?
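As a sketch of how the disclosure requirement might look in practice, the example below wraps model output in a machine-generated label before a user sees it. The `generate_reply` function and the wording of the label are hypothetical placeholders, not anything specified by the Act.

```python
# A minimal sketch of the disclosure obligation: label any model output as
# machine-generated before a user sees it. `generate_reply` is a hypothetical
# stand-in for whatever model API a company actually uses.
DISCLOSURE = "[This content was generated by an AI system.]"

def generate_reply(prompt: str) -> str:
    return "Here is a draft answer..."  # placeholder for a real model call

def safeguarded_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    # Other safeguards (copyright filters, age checks) would slot in here,
    # before the labeled reply is returned to the user.
    return f"{DISCLOSURE}\n{reply}"

print(safeguarded_reply("Write my history homework"))
```

Whether a label of this kind would count as sufficient evidence of safeguards is exactly the open question above.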

 

Best practice

The law should give companies applying best practices very little to worry about. Much of it is common sense: our own Generative AI Ethics Policy, produced earlier this year, already requires disclosure of the use of machines, along with human supervision.

It may also facilitate communication between those in the complex supply chain of AI. Making it illegal to use certain algorithms in certain situations may lead to greater transparency, with both service providers and customers sharing more openly how algorithms were built and what they will be used for. One of the big challenges in our field has been the use of open-source and pre-trained algorithms with little to no understanding of how they were built, and few to no rights to inspect, say, the training data. Compliance with this law will require greater cooperation and less hiding behind IP laws. Teams with well-documented data processes have nothing to fear, and for anyone concerned, our training materials cover how to ensure best practice.
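One lightweight way to support that kind of supply-chain transparency is a structured provenance record that travels with the model. The sketch below is a hypothetical example; the field names and values are our own invention, not anything mandated by the Act.

```python
from dataclasses import dataclass, field

# A hypothetical provenance record: one way a supplier and a customer might
# document how an algorithm was built. Field names are illustrative only.
@dataclass
class ModelProvenance:
    model_name: str
    training_data_sources: list[str]
    intended_uses: list[str]
    known_limitations: list[str] = field(default_factory=list)

record = ModelProvenance(
    model_name="credit-risk-v2",
    training_data_sources=["internal loan book, 2015-2022"],
    intended_uses=["consumer credit pre-screening"],
    known_limitations=["not validated for applicants under 21"],
)
print(record)
```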

There is also provision in the Act for best practice, with the promotion of sandbox environments for companies and special support for research activities. This may avoid the risk that excessive control hampers legitimate innovation in the field in Europe, and it gives companies a space to work out how the law should be interpreted while case studies are built up.

 

Enforcement

The real test of the EU AI Act will be enforcement. GDPR enforcement was slowed by the requirement that the nation where a company has its headquarters is responsible for prosecuting it. This meant that a tiny country like Luxembourg, with around half a million citizens, became responsible for processing every complaint made against the giant Amazon across the EU. The new act will instead be the responsibility of the EU AI Office, which will oversee the complaints process. The hope is that the law provides legal certainty of obligations and rights and will promote trust in human-centric AI. The AI Office will soon be going up against some organizations with very big legal budgets.