AI, ethics and regulation by Lorelien Hoet
You do not have to be particularly interested in technology to appreciate that artificial intelligence (AI) is one of today's most widely discussed topics. And rightly so: it is an area of rapid technological development, and these advances will eventually change how we live and work. AI plays a central role in this transformation.
The rise of AI
AI is not new. It has existed for decades, and we all use it – probably every day, when searching on Google or using GPS navigation, Microsoft Translator or Siri. Nevertheless, AI remains an area where very few people fully understand its potential and possible consequences.
The rise of AI also brings new legal questions and ethical challenges. For example, in its current state of development, certain uses of facial recognition technology increase the risk of outcomes that are biased and possibly discriminatory. That is why we need not only to improve this technology's ability to recognize faces across a range of ages and skin tones, but also to recognize the role of regulation.
Regulations and ethical principles
The applicable regulatory framework, such as the GDPR, already applies to the deployment of AI. In addition, we believe that companies, countries and individuals working with AI must assess the ethical dimensions of their activities. In 2018, Microsoft identified six ethical principles – fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence.
We also strongly endorse the principles and values that the European Commission has formulated in its Ethics Guidelines for Trustworthy AI, prepared by the High-Level Expert Group on AI and published in April 2019. This marks an important milestone on the road towards the responsible development and deployment of human-centric AI in Europe. Beyond these overarching principles, some areas of AI – such as facial recognition technology – may require further, specific legislation.
We trust that stakeholders in Brussels, and in Belgium more generally, will embrace these AI-driven changes rapidly and effectively. This is important. But while we believe that AI can help solve big societal problems, we must continue to look at our future with a critical eye. There are challenges as well as opportunities. We must address the need for strong ethical principles, the evolution of laws, training for new skills, and even labor market reforms. Ongoing, close dialogue among data scientists, legal experts and other disciplines is required to develop new digital strategies and to evolve legal frameworks for society. That is why I am very happy to see so many local representatives actively taking part in this debate.
This article was written by Lorelien Hoet, Government Affairs Director EU at Microsoft.
#TheFutureLivesinBrussels by the MIC Brussels is a series of curated articles written by experts and partners from the ICT sector and the entrepreneurial community. It is a way for them to share their insights on specific topics and their ideas for a better future in Brussels. If you want to be part of it, get in touch with us!