Thought Article (UAH)


Although AI is not a new field by any means (its history dates back to the 1950s), it has only recently taken off, thanks to advances in technology, computation and electronics, among others. Its growing presence in all areas of our lives shows how what could once be dismissed as a hyped technology can now be considered a reality. AI can be used and adapted to execute myriad tasks in almost any domain or sector, generally improving the efficiency of the process. AI’s ability to generate business advantages, economic benefits and social good is hard to question; thus, virtually all countries worldwide have adopted plans and strategies to lead the AI revolution in the coming years. Europe is no exception, as shown by its 2018 “Communication on Artificial Intelligence for Europe”.

The development and widespread use of AI is not without risks, however. As AI becomes increasingly common and is used even for the most mundane tasks, it also becomes subject to greater public scrutiny; researchers, practitioners, journalists and civil organizations regularly bring AI-related scandals and controversies to light. To give a few examples, in 2018 the Cambridge Analytica scandal[1] was uncovered, demonstrating the risks of AI in social media. Amazon, which had been using an AI tool to assist in its recruiting process, realized that the tool didn’t like women[2]: it penalized resumes that included the word “women’s,” as in “women’s chess club captain”, because it had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period, most of which came from men. And only a few days ago, to give a more ordinary example, YouTube’s AI blocked a channel after allegedly mistaking discussions about chess pieces being black and white for racist slurs[3]. Evidently, a mistake that temporarily bans a user from YouTube may not have the same impact as not hiring women or giving a patient incorrect treatment, but what is clear is that all of these failures derive from asymmetries of information or power, or from not involving minorities and vulnerable groups. The victims are always the same: human rights and democratic values are at stake.

What is interesting, however, is that the blame is always put on the tool itself. We read statements such as “Amazon’s system taught itself that male candidates were preferable”, or we say that YouTube’s AI made a mistake. Little attention is paid to the human beings (yes, actual people) who devised, designed and wrote those systems. Algorithms don’t have values; they reflect the values and biases of their developers. And, so far, we have mostly been focusing on equipping young people with advanced programming skills, while the ethical and values-based components of their education have been inadvertently left out; the consequences of doing so are starting to surface in our daily lives.

It is in this context that Higher Education must play an important role in this area of strategic importance for the economic and social development of the European Union, contributing to cutting-edge, safe and ethical AI with European values at its core. It is indeed up to us to prepare young people with advanced programming skills, but also to prepare all students to understand the implications of AI and to ensure it is put to good use, always for the benefit of the community. Empowering our students to identify what we should do, rather than what we (currently) can do, with technology is thus the only way to ensure that fundamental rights and their underlying values are respected today and in the future.

[1] https://www.politico.eu/newsletter/ai-decoded/politico-ai-decoded-how-cambridge-analytica-used-ai-no-google-didnt-call-for-a-ban-on-face-recognition-restricting-ai-exports/

[2] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

[3] https://www.dailymail.co.uk/sciencetech/article-9279473/YouTube-algorithm-accidentally-blocked-chess-player-discussing-black-versus-white-strategy.html