Coexistence with Artificial Intelligence: A Vision of Society 5.0: Any development and implementation of AI must be based on ethical principles and public trust

Ljubljana, November 23, 2019 – This was the main conclusion of the participants in the International Workshop and Roundtable Coexistence with Artificial Intelligence: Vision of Society 5.0, organized by the Institute 14 and the European Liberal Forum (ELF). The event took place at the Polygon in Ljubljana on November 21.

Workshop discussions focused on the coexistence of human and artificial intelligence (AI) and on the regulation of AI. Both topics continued in a roundtable discussion, where speakers touched more specifically upon topics such as the future of algorithm development, AI, cybernetics, cognitive systems and cybersecurity, mainly from an ethical and social point of view.

According to the speakers, the fear of AI is unnecessary – if society is able to put in place mechanisms that will allow the optimal but secure use of future technologies.

Dr. Aleksander Aristovnik (Director of the Institute 14) compared AI with some of the greatest human inventions. “Like the steam engine, electricity or the Internet in the past, AI is changing our economy and society today. And the availability of data, combined with increasingly powerful computers and increasingly advanced algorithms, is turning AI into a strategically important technology of the 21st century.” That is why so much is at stake. “How we approach AI will have a significant impact on the world we will live in – especially when it comes to the impact of AI on human rights, democracy and the rule of law. That is why the EU needs very clear frameworks in this area, including legal ones, as AI does not come without serious risks. It must be ensured that the development and use of AI is in accordance with our values, human rights and ethical principles,” Aristovnik noted. “The EU should be a leader in AI development,” he added, advocating increased public and private investment in the sector.

Prof. Dr. Ivan Bratko (Faculty of Computer and Information Science in Ljubljana), one of the founders of artificial intelligence research in Slovenia, explained that the biggest breakthrough in AI development in the past decade was achieved in the field of deep learning: while the error rate was still around 25 percent in 2010, it had dropped to about 2 percent by 2016.

But progress does not come without challenges. Fewer errors means that neural networks are becoming more and more complex – to the point of being (almost) impossible to understand. They are thus becoming “black boxes”, where the user does not know how the system generated its results from the data. One of the goals of the European Artificial Intelligence Strategy, however, is to create trusted AI based on the “transparent box” principle.

Bratko also pointed out that algorithms still make completely incomprehensible errors, which raises the question of the security of such algorithms, since in most cases – such as autonomous driving – errors are not acceptable.

Psychologist Dr. Janek Musek noted that when developing AI, we must pay attention to ethical challenges: “Ethics is important in all aspects of our lives. All the great social, personal and professional risks are the result of the discrepancy between ethical principles on the one hand and our actual behavior on the other. Our behavior is often inconsistent with ethical principles. In the case of AI, however, ethical issues become even more important, as AI is the most powerful tool that humanity has developed in its history. The benefits of AI can be compared to those of other tools, but the detrimental effects are multiplied. Respect for ethical principles is therefore crucial.”

According to Paul Kuyer (Dublin City University), artificial intelligence challenges people’s perception of themselves, since we see ourselves as universal rational thinkers: “We are used to being the ones who make the decisions. But now we have AI that can also make decisions and also influence our decisions. The issue of ethical AI therefore comes to the fore. When AI makes important decisions, it is important that they are the right decisions.” He also emphasized the importance of the quality and impartiality of the data used by AI in decision making.

Dr. Dan Podjed (ZRC SAZU) provocatively suggested that we should replace heads of state and business managers with “unemotional” AI. He pointed out the social and political consequences of fake news, where algorithms are used as a tool: “We ‘buy’ all of this; we base our votes on such disinformation. This can lead to the end of democracy as we know it, since we obviously can no longer make rational decisions. This is idiocracy. It’s time to stop algorithms from targeting us – whether for the purpose of buying something or voting for someone.”

Dr. Aljaž Košmerlj (The Jozef Stefan Institute) emphasized that artificial intelligence systems are currently quite simple tools, designed to perform specific tasks. Furthermore, he highlighted the power that certain platforms have to amplify particular aspects of a message – yet none of those platforms is willing to be held responsible for the consequences of such amplification. “But I think regulators will soon recognize that this power is too big to remain unregulated.” He noted that people’s knowledge of how algorithms work needs to be strengthened, as such knowledge also provides at least some resistance to the less desired effects of the use of algorithms. “Society will adapt to AI as it has done many times before. However, many problems can be avoided through education.”

Dr. Jonas Valbjorn Andersen (IT University of Copenhagen) emphasized that we are still far from truly smart artificial intelligence; what we have today are systems capable of performing clearly and narrowly defined tasks. According to him, AI cannot replace humans in many sectors. He also touched on the issue of regulation: “When we talk about regulating algorithms, we are too little aware of how algorithms regulate us. There are already many AI systems that regulate our behavior. In Europe, we are very focused on the question of how to regulate technology and its use, while the Chinese model puts more emphasis on regulating society with technology.”

“Europe sees AI as a technology that comes to us from the outside, although there are some top AI experts working here in Europe. China, on the other hand, has been copying this technology from the US for years. However, it did so systematically and later upgraded it with its own technology. This area is very much unregulated there,” said Nina Pejic from the Faculty of Social Sciences. The speakers, however, unanimously called for a model of use and regulation of AI that will ensure the continuation of European democratic values.

ELF/Zavod 14 Policy paper on regulation of Artificial Intelligence: How to regulate AI? – Towards Trustworthy Artificial Intelligence (388 KB)

An event organised by the European Liberal Forum (ELF). Supported by Zavod 14 in collaboration with Friedrich-Naumann-Stiftung für die Freiheit and Institute Novum. Co-funded by the European Parliament.

Neither the European Parliament nor the European Liberal Forum are responsible for the content of the programme, or for any use that may be made of it. The views expressed herein are those of the speaker(s) alone. These views do not necessarily reflect those of the European Parliament and/or the European Liberal Forum asbl.