Europe has one of the most progressive, human-centric policies for managing artificial intelligence in the world. Compared to the heavy-handed government surveillance in China or the Wild West, anything-goes approach in the US, the EU’s strategy is designed to fuel academic and business innovation while protecting citizens from harm and overreach. But that doesn’t mean it’s perfect.
The 2018 initiative
In 2018, the European Commission launched its European AI Alliance Initiative. The alliance exists to allow different stakeholders to weigh in and be heard as the EU considers its ongoing policy on the development and application of AI technologies.
Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.
The comments, concerns and advice of these stakeholders have been considered by the EU’s High-Level Expert Group on Artificial Intelligence, which ultimately produced four key documents that serve as the basis for EU policy discussions on AI:
1. Ethics Guidelines for Trustworthy AI
2. Policy and Investment Recommendations for Trustworthy AI
3. The Assessment List for Trustworthy AI
4. Sectoral Considerations on the Policy and Investment Recommendations
This article focuses on point one: the EU’s “Ethics Guidelines for Trustworthy AI”.
This document, published in 2019, provides an overview of the ethical concerns and best practices for the EU. While I wouldn’t exactly call it a ‘living document’, it is supported by a continuously updated reporting system through the European AI Alliance initiative.
The Ethics Guidelines for Trustworthy AI put forward a “set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.”
Human agency and oversight
According to the document:
AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop and human-in-command approaches.
Neural’s rating: poor.
Human-in-the-loop, human-on-the-loop, and human-in-command are all hugely subjective approaches to AI governance that almost always rely on marketing strategies, business jargon, and disingenuous framings of how AI models actually work in order to appear effective.
Essentially, the “human in the loop” myth involves the idea that an AI system is safe as long as a human is ultimately responsible for “pressing the button” or authorizing the execution of a machine learning function that could potentially have a negative effect on humans.
The problem: Human-in-the-loop relies on competent people at every level of the decision-making process to ensure fairness. Unfortunately, studies show that people are easily manipulated by machines.
We also tend to ignore warnings when they become routine.
Think about it: when was the last time you read all the fine print on a website before agreeing to the terms presented? How often do you ignore the “check engine” light on your car or the “time for an update” warning on software that still seems to be functioning properly?
Automating programs or services that influence human outcomes under the pretense that having a “human in the loop” is enough to prevent misalignment or abuse is, in this author’s opinion, a worthless approach to regulation: it gives companies carte blanche to develop harmful models, so long as they meet a “human-in-the-loop” requirement for use.
As an example of what could go wrong: ProPublica’s award-winning “Machine Bias” article exposed the human-in-the-loop paradigm’s tendency to create additional bias by showing how AI used to recommend criminal sentences can perpetuate and amplify racism.
A solution: The EU should move away from the idea that “proper oversight mechanisms” are enough, and instead focus on policies that prevent black-box AI systems from being deployed in situations where they can affect human outcomes, unless there is a human authority who can be held ultimately responsible.
Technical robustness and safety
According to the document:
AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
Neural’s rating: needs work.
Without a definition of “safe,” the whole statement is fluff. Furthermore, “accuracy” is a malleable term in the AI world that almost always refers to arbitrary benchmarks that don’t hold up outside the lab.
A solution: At an absolute minimum, the EU should require developers to demonstrate that AI models deployed in Europe that can influence human outcomes perform equivalently across demographic groups. An AI model that achieves lower reliability or “accuracy” on tasks involving minorities should not be considered safe or reliable.
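To make that concrete, here’s a rough sketch of what such an equivalence check could look like in code. The data, column names, and tolerance below are made up for illustration; this is not an official EU benchmark of any kind.

```python
# Minimal sketch of a per-group accuracy check (hypothetical data,
# column names, and threshold; not an official EU standard).
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, group_col: str,
                      label_col: str, pred_col: str) -> pd.Series:
    """Accuracy of the model's predictions within each demographic group."""
    correct = (df[label_col] == df[pred_col])
    return correct.groupby(df[group_col]).mean()

def equivalence_gap(per_group: pd.Series) -> float:
    """Largest accuracy gap between any two groups."""
    return float(per_group.max() - per_group.min())

if __name__ == "__main__":
    # Toy evaluation set: true labels vs. model predictions per group.
    df = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b", "b", "c", "c"],
        "label": [1, 0, 1, 1, 0, 1, 0, 1, 0],
        "pred":  [1, 0, 1, 0, 0, 0, 1, 1, 0],
    })
    per_group = accuracy_by_group(df, "group", "label", "pred")
    print(per_group)
    MAX_GAP = 0.05  # assumed tolerance; a regulator would have to set this
    if equivalence_gap(per_group) > MAX_GAP:
        print("Model fails the equivalence check for at least one group.")
```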
Privacy and data governance
According to the document:
In addition to ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimate access to data.
Neural’s rating: good, but could be better.
Luckily the General Data Protection Regulation (GDPR) does most of the heavy lifting here. However, the terms ‘quality and integrity’ are highly subjective, as is the term ‘legitimate access’.
A solution: The EU should define a standard requiring that training data be obtained with consent and verified by humans, so that the databases used to train models contain only data that is correctly labeled and used with the consent of the people or groups who generated it.
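To picture what that standard might look like at the data level, here’s a small sketch using a hypothetical record schema (the field names are mine, not from any EU specification) that keeps only records with documented consent and a human-verified label.

```python
# Sketch of a consent-aware training-data filter (hypothetical schema).
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    content: str             # the raw example (text, image path, etc.)
    label: str               # the assigned label
    consent_obtained: bool   # the data subject consented to this use
    label_verified: bool     # a human reviewer confirmed the label

def usable_records(records):
    """Keep only records that meet the consent and verification standard."""
    return [r for r in records if r.consent_obtained and r.label_verified]

records = [
    TrainingRecord("photo_001.jpg", "pedestrian", True, True),
    TrainingRecord("photo_002.jpg", "cyclist", False, True),     # no consent
    TrainingRecord("photo_003.jpg", "pedestrian", True, False),  # unverified label
]
print(len(usable_records(records)))  # only the first record survives
```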
Transparency
According to the document:
The data, system and AI business models must be transparent. Traceability mechanisms can help with this. In addition, AI systems and their decisions should be explained in a way that is adapted to the stakeholder concerned. People should be aware that they are interacting with an AI system and should be informed about the system’s capabilities and limitations.
Neural’s rating: this is hot garbage.
Only a small percentage of AI models lend themselves to transparency. Most AI models in production today are “black box” systems that, by the very nature of their architecture, produce outputs with far too many steps of abstraction, deduction, or merging for a human to parse.
In other words, a given AI system can use billions of different parameters to produce an output. To understand why it produced that particular outcome rather than another, we would have to review each of those parameters step by step so that we can come to the exact same conclusion as the machine.
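To give a sense of the scale involved, here’s a toy sketch (arbitrary layer sizes, random weights, nothing to do with any real production system) showing that even a miniature neural network folds hundreds of thousands of parameters into every single output. Production models use millions or billions.

```python
# Sketch: counting the parameters behind one output of a toy network
# (arbitrary layer sizes and random weights, purely illustrative).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [1000, 512, 256, 1]  # input -> two hidden layers -> one output

weights, biases = [], []
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights.append(rng.normal(size=(n_in, n_out)))
    biases.append(np.zeros(n_out))

total = sum(w.size + b.size for w, b in zip(weights, biases))
print(f"Parameters in this toy model: {total:,}")  # 644,097 for these sizes

# A single prediction is a composition of every one of those values.
activation = rng.normal(size=(1, layer_sizes[0]))
for i, (w, b) in enumerate(zip(weights, biases)):
    activation = activation @ w + b
    if i < len(weights) - 1:
        activation = np.maximum(activation, 0)  # ReLU on hidden layers
print("Output:", float(activation[0, 0]))
```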
A solution: The EU needs a strict policy preventing the deployment of opaque or black-box artificial intelligence systems whose outputs could influence human outcomes, unless a designated human authority can be held fully responsible for unintended negative outcomes.
Diversity, non-discrimination and fairness
According to the document:
Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
Neural’s rating: poor.
In order for AI models to involve “relevant stakeholders throughout their life cycle”, they must be trained on data from diverse sources and developed by diverse teams of people. The reality is that STEM is dominated by white, straight, cis males, and there are countless peer-reviewed studies showing how that simple, provable fact makes it nearly impossible to produce many kinds of AI models without bias.
A solution: Unless the EU has a method for solving the underrepresentation of minorities in STEM, it should instead focus on creating policies that prevent companies and individuals from deploying AI models that deliver different outcomes for minorities.
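One way to operationalize “different outcomes” is to compare positive-outcome rates across groups. The sketch below uses toy data and borrows the “four-fifths rule” threshold from US hiring guidance as an assumed cut-off; it is not an EU standard.

```python
# Sketch of a disparate-outcome check (toy data; the 4/5 threshold is an
# assumption borrowed from the US "four-fifths rule", not EU law).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"{group}: rate {rate:.2f} is below 80% of the best-served group ({best:.2f})")
```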
Social and ecological well-being
According to the document:
AI systems should benefit all people, including future generations. Therefore, care must be taken to ensure that they are sustainable and environmentally friendly. In addition, they must take into account the environment, including other living things, and their social and societal impact must be carefully considered.
Neural’s rating: great. No notes!
Accountability
According to the document:
Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role in this, especially in critical applications. In addition, adequate and accessible redress should be ensured.
Neural’s rating: good, but could be better.
There is currently no political consensus on who is responsible when AI goes wrong. For example, if the EU’s facial recognition systems mistakenly flag a passenger and the resulting investigation causes them financial loss (they miss their flight and all the opportunities tied to their journey) or undue mental distress, no one can be held responsible for the mistake.
The employees who follow procedure when the AI signals a potential threat are simply doing their jobs. And the developers who trained the systems are typically considered blameless once their models go into production.
A solution: The EU must create a policy that specifically dictates that people should always be held accountable when an AI system causes an unintended or erroneous outcome for another human being. The EU’s current policies and strategy encourage a “blame the algorithm” approach that benefits corporate interests more than civil rights.
Strengthening a solid foundation
While the above commentary may be harsh, I believe the EU’s AI strategy is a bright spot showing the way forward. However, it is clear that the EU’s desire to compete with the Silicon Valley innovation market in the AI sector has nudged the bar for human-centric technology a bit further toward corporate interests than the union’s other technology policy initiatives.
The EU would never approve an airplane that was mathematically proven to crash more often when black people, women or gay people were passengers than when white men were on board. It shouldn’t allow AI developers to deploy models that work the same way.