Transcript - Data Protection, Privacy and Responsible AI for a Digital Society



Title: Data Protection, Privacy and Responsible AI for a Digital Society
Date: February 8, 2023
Host: Marco Neumann
Speaker: Paul Nemitz
Type: Lotico Event
URL: http://www.lotico.com/index.php/Data_Protection,_Privacy_and_Responsible_AI_for_a_Digital_Society
Video Source: https://www.youtube.com/watch?v=cU4GuqOPkbc


Introduction

Marco Neumann:

Thank you for accepting my invitation to join us today! Paul is, well, was the principal advisor to the EU Commission's Directorate-General for Justice and Consumers. He has just informed me that, as of last week, he is now the principal advisor for the digital transition at the European Commission. Paul has been instrumental in the discussion and development of the General Data Protection Regulation, short GDPR, in the European Union. In 2011, at the Privacy Conference of the German Association for Data Protection and Data Security, Paul, back then Director for Fundamental Rights and Citizenship at the Commission, announced that the EU planned to implement a regulation directly applicable in all EU member states in an effort to harmonise the data protection laws in Europe.

Again, thank you for joining us, Paul. I was made aware of your work through an invitation from Debora Weber-Wulff, who is a professor at the HTW University in Berlin, to attend a discussion with you in Leipzig, Germany, last year about democracy and AI. And I was positively surprised by the sound case you made at the event. I later found out that it was based on ideas you developed in your book called "Prinzip Mensch", which you co-authored with Matthias Pfeffer, and which could be translated as "The Human Imperative". It's not primarily a book about technology but a book that takes you on a political and philosophical journey. One that reminds you of the importance of human intervention in the shaping of our democratic reality and of embedding new technology development in the same. It describes the world we live in and makes you wonder about the one we want to live in. The book is a rich collection of contributions that support their case. It encompasses philosophical thought from Aristotle to Descartes and German idealism à la Kant and Hegel, over to the more disruptive thoughts of Nietzsche, and finally via Hans Jonas to Jürgen Habermas and Herbert Marcuse and a more modern human focus. It builds its case on contributions by sociologists like Max Weber and Niklas Luhmann, and it identifies a new ideology, based on transhumanists like Hans Moravec, Ray Kurzweil and Jaron Lanier and cyber-libertarians like John Perry Barlow, that intellectually provides the foundation for the so-called Californian Ideology: a wild mix of cybernetics, free-market economics, and counter-culture libertarianism that can be found in Silicon Valley. I have to say I was quite surprised not to read about Ayn Rand, who has been frequently mentioned in technology circles in the US and is cited by many as an inspiration for this new technology-based utopia.
In the book they attempt to describe some of the key technological developments and their cultural and ethical context. One does not have to agree with every aspect of their reasoning and the conclusions they arrive at in the book, but I would say you will agree that it makes you think about the status quo, about the role that data and technology companies play, and about where we are heading as democratic societies. But they go further: they demand a human focus and a politically accountable technology industry, one that is firmly rooted in our democratic values. They stretch it and go all the way to recommend a path to a certified engineer of democracy: engineers who are institutionalised and bound by laws, not just self-imposed regulation. It's this tension that gives the book urgency and the reader a desire for resolution. And their goal is not to stifle innovation with additional bureaucracy and complex rules; rather, they see the discussion around regulation as an ongoing struggle and as part of a living democracy. Even if we can't change or reverse the course of technological development, it's important to understand the dynamics and events around us and what happens to our digital footprint. Only a well-informed society can act as a foundation for a strong democracy that is based on sound laws. So again, welcome, Paul. Please take the stage!

Paul Nemitz: Thank you, Marco, and happy to talk to all of you! So first of all, to explain a little bit what is happening in the European Commission right now: of course you will all have followed the many pieces of law which are passing through our legislative machinery, consisting of the European Parliament, which is elected by the people and which gives the law its legitimacy from the people, and the Council of Ministers, which represents the member states. So every piece of law which is adopted in Brussels has this double legitimacy of the governments of the member states and of the people of Europe, and since the GDPR a lot of important laws relating to digital matters have been passed.

Digital Services Act (DSA)

I would just like to mention the Digital Services Act, which relates to the behavior of the platforms as commercial markets, addressing issues like the self-preferencing of own products, for example on Amazon, but also to the platforms as places where people form their opinions; the DSA sets structural terms for what is happening on the platforms. The DSA, I think, has inspired discussions also in the United States, and we will very soon see in the Supreme Court of the United States, in the case Gonzalez v. Google LLC, a test of Section 230, namely the safe harbor rule which de-responsibilizes platforms in the US as to the content which is put on the platform by third parties, through what is called the notice and takedown mechanism. Namely, under Section 230 the platforms have no responsibility to check whether content is legal or not; they only have to act once somebody has told them that there is something illegal. That is at the basis of many of the problems on the Internet, and it is also, today at least, no longer in line with the capabilities these companies employ and with their business models. Because all these companies make money with targeted advertisement, and of course they have content recognition systems, upload filters, which immediately identify what you are writing or what pictures or videos you are uploading, in order to ascertain what type of advertisement would be best placed in this context, in this moment, to get you to fulfill a commercial transaction or, if this is about political advertisement, to influence your opinion for political purposes.

So this is one piece which is new since the GDPR and which will soon enter the enforcement stage, and it is very much in the daily debate; our Commissioner Breton is often saying, you know, we want to see, for example, Elon Musk and Twitter complying with the DSA. So this is something which is important in the context of the discussion of democracy and platforms. And then we move on to the DMA, the Digital Markets Act, which is, let's say, an add-on to traditional competition policy. It will make it easier to intervene earlier in the market, not only after dominant positions have been acquired and abused but already at an earlier stage, thus addressing the power concentration and the market concentration in a few companies in some sectors of the digital economy.

Then we have the new Data Governance Act, which is an invitation to create data intermediaries which are neutral in the sense that they do not themselves commercially use the data or process the data for their own purposes, but bring data together and then make it available in an aggregated form for economic purposes or for the public interest, for example to train AI.

The idea here is to help European companies, but also public interest developers, to get out of the grip of the data giants, which hold a lot of data on which developers right now are dependent, for example if you want to train your AI, and to have alternative sources which in addition are also better curated and, let's say, more specific to Europe. This piece of law gives, let's say, a framework for such data intermediaries, which I would say can be compared to banks. In the ideal case the bank manages the money well for you as an individual: you entrust your money to the bank, and the bank has a certain trusteeship position towards your money, but at the same time the bank assembles your savings and uses the aggregated savings as investment or credit in order to further economic goals through market mechanisms. These data intermediaries, I think, are very parallel to this type of aggregation and double-sided activity as one sees it in a bank. Then we have a piece of law ongoing with it, which is the Data Act, where the issue is: what are the rules of access to data held in particular by private parties? We already have on the books legal rules on the re-use of public data and access to public data. The issue now is: are there reasons of public interest for government access to data? And I'm not talking about the type of access like the NSA, namely for national security purposes or law enforcement purposes; I'm talking about access to data which is important to run government, where there is a certain public interest, let's say on a national, regional or European level, and where the data is necessary for government in order to be able to see the world at least as well as Google does, for example.
You know, we remember the title "seeing the world like Google", and the theme of this piece of law is that governments, in the public interest, should be able to see at least as well as Google. And the other issue is access of private parties to data held by other private parties. This is of course a thorny issue, so we will have to see how the compromise comes out of the legislative procedure.

Artificial Intelligence Act (AI Act)

And then I would like to mention the AI regulation, which has been called back to everybody's attention through the recent release of the language processor GPT-3 by OpenAI. The AI Act is an effort to introduce legal rules which do not really create new individual rights but which make sure that the rights which already exist for people, in terms of fundamental rights but also rights from secondary law such as consumer protection, continue to be available and enforced in the world of AI. The Artificial Intelligence Act sets out a broad scope of application in the proposal of the Commission. It is now in debate between the Council and the Parliament; in the Parliament there are thousands of amendments, and a very intense discussion is taking place right now, but my prediction is that it will be adopted this year. To go through the issues: first of all, there is an issue of definition. People are now asking, is GPT-3 actually covered? I think it's important that it is covered.

The Pyramid of Risk

And second, the subdivision of the types of AI into four groups of risk. There is, let's say, a theory or a picture: you can imagine a pyramid of risk, and depending on which risk group a certain type of AI and its use falls into, the degree of obligations under the AI Act increases.

So, let's quickly go through the risk groups to give you some ideas. There are some types of AI which are so dangerous or so risky that they are simply forbidden. That's the very small top of the pyramid, in red, so to say. This is, for example, social scoring by governments like we see it in China; this type of AI use will be forbidden in the AI Act. Then there is a second group of high-risk activity, and this high-risk activity is sourced from two subgroups. First, wherever AI is used in a sector where there is already regulation on the safety of products, ranging from aircraft right through to medicinal products, there the AI has to, as part of the regulated product, comply with the product regulation, the specific product regulation, plus it has to comply with the rules of the AI Act for the highest allowed risk group, namely orange; it's not red, it's orange. And the second source of identification of high risk is a description of sectors in which AI is used, ranging from use by the police or the judiciary right through to, for example, recruitment software. These are all sectors where the specific products may not be regulated, but where the legislator still considers that the risk, in terms of fundamental rights or the interests of the state being touched on, is so high that it is necessary to impose a high level of regulation and a high level of obligations on the AI.
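The four-tier pyramid described above can be sketched as a small data structure. This is purely an illustrative sketch of the talk's description, not the legal classification itself; the tier names, the example uses, and the exact duties attached to each tier are assumptions made for the example.

```python
from enum import Enum

# Illustrative sketch (an assumed mapping, not legal advice) of the four-tier
# "pyramid of risk": obligations grow as you move up the pyramid.

class RiskTier(Enum):
    MINIMAL = 1        # bottom of the pyramid: little or no extra obligation
    LIMITED = 2        # light transparency duties
    HIGH = 3           # "orange" group: heavy obligations
    UNACCEPTABLE = 4   # "red" top: the practice is simply forbidden

def obligations(tier: RiskTier) -> list[str]:
    """Return the duties attached to a tier; higher tiers include lower ones."""
    if tier is RiskTier.UNACCEPTABLE:
        # e.g. social scoring by governments: cannot be deployed at all
        return ["prohibited: may not be placed on the market"]
    duties = []
    if tier.value >= RiskTier.LIMITED.value:
        duties.append("transparency: disclose that users interact with a machine")
    if tier.value >= RiskTier.HIGH.value:
        # e.g. recruitment software, or AI inside a regulated product (aircraft,
        # medicines), which must also satisfy the sectoral product regulation
        duties += [
            "robustness: the system must do what it announces, correctly",
            "documentation: training sources and functioning made transparent",
            "post-market monitoring over the product's lifetime",
        ]
    return duties

for tier in RiskTier:
    print(tier.name, "->", obligations(tier))
```

The cumulative `if tier.value >= ...` checks capture the pyramid idea that each higher tier carries at least the duties of the tiers below it.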

Obligations

And then there are two lower groups of risk where the degree of obligations is lower. Now let's go through a number of obligations laid out in the AI Act, and I would say, for a start, very simply, that all the obligations proposed are obligations which a reasonable engineer would probably put on herself or himself anyway when producing an AI. I don't think there's anything which is outrageous. To give you an example: the software, the program, must be robust; it must actually do what it is announced to do, and it must do it correctly and properly. The procedures of how the software was trained, from which sources, and how it actually functions must be documented and made transparent, so that users have an understanding of the functioning of the machine. So all these obligations, I would say, are quite natural for someone who takes seriously the job of developing and also marketing a complex technological product. And of course in the high-risk group it is important to continue to follow the functioning of the AI as it unfolds in the market and is being used in the real world. Like it is important to follow a medicine, a pharmaceutical product, to see whether, after all the trials have been made, in the real world when people take the pill there are no side effects which have not been discovered before. Sometimes these side effects are only discovered years later, in statistics which show that a certain type of person with a certain predisposition may, for example, even die if they take this pill.
So this must be followed, and it's the same with the AI product in the market: in the high-risk group at least there is an obligation, there must be an obligation, to follow this product through its lifetime and to observe its functioning and, if necessary, to make the corrections necessary to ensure compliance with existing law: fundamental rights and the good functioning of democracy, consumer protection and so on.

So two general principles are basically being made concrete through the AI Act. First, the AI must comply with all rules and regulations with which a human would have to comply if carrying out the same activity, or, for that matter, with which a piece of technology would have to comply if it didn't contain AI. I think that's a very simple formulation for everybody who develops AI: just look at the field in which you're developing AI, at the laws which govern the behavior of people there and the laws which govern other pieces of technology there, and that's what the AI will also have to comply with. And the second, and this is now very topical in the discussion on GPT-3 but has been part of the Commission proposal from the outset: it is very important that people always know whether they are communicating with a machine, namely AI, or with a human. Why is this important? Well, first of all, this is a principle of human dignity. We humans should not be objects of machine manipulation, by being misled about who is talking to us. AI in speech technology today is so perfect that both in the spoken word and in the written word one can have the impression, you know, according to the Turing test, that on the other side it's really a human, and this is of course misleading, because it's not a human, it's a machine, and that is something we should know. So by all means, to have the Turing test as an academic test of technology, fair enough, but in daily life it is the dignity of mankind, and also our ability to determine our lives ourselves, which requires that we know: are we talking to a machine or are we talking to a human? So this must be made clear. Second, it must be made clear not only in written dialogues but also in oral dialogues where a machine is speaking. So this is key, and I'm pretty sure that the final compromise on the law will contain an obligation in this respect.
So much for all these pieces of law.

Other Issues

And I have now not mentioned the copyright issues which are being touched on and discussed in relation to generative AI, and I have not spoken about net neutrality, which is becoming very important again in the context of the discussion about the remuneration of the use of the internet for the telephony companies. Across all these pieces of law I think we are now facing a challenge of coherence and compliance, and I think what is important, in the future work first of all of those who have to apply the law, namely the companies and the developers who work with all these pieces of law in their business models and in their technology, is an attitude of goodwill, in the sense that people don't try in all cases to go to the outer edge of what the law allows and take high risks of possibly acting illegally, but rather take a middle-of-the-road attitude.

I think it is a rotten attitude to think that disruptive innovation, which is something we all welcome in principle because it can bring fascinating benefits to mankind, should include the disruption of the law, in terms of intentionally just not complying, or for that matter constantly testing the scope of the law, including through provocative action and constant litigation up to the last instance. I think that is doing damage to the rule of law and democracy, and if we are serious about democracy and the rule of law, I think we need a willingness of everyone, of citizens and of those who have to comply with the law, to stay, let's say, in the good center of legality.

And the second thing which we will need in this new complexity, the combined complexity of technology, complex global business models, and complex law in Europe, paired with a largely absent law in the United States, is an effort of coherent interpretation and application. And this is, let's say, an invitation to the academic community and to the legal community to focus on the unity of law, to focus on the coherence of the overall system of law as it applies to the digital space. I think that will be our new challenge: we need the great book of coherence which explains to everyone how the interaction between all these different pieces of law functions or does not function, and how this interaction does or should contribute to a good functioning of society, to well-being and a good functioning of democracy, but also to the protection of individual rights. I think these are the two big challenges before us.

Business Actions

And now I would like to come back to the importance of the actions of businesses in this context, and I'm thinking here in particular of the GAFAM: Google, Apple, Facebook, Microsoft, Amazon. Why is this important? Because these companies exercise a power, and together hold a concentrated power, which is unheard of and has rarely, maybe never, been seen in history before. All five are at the top of the US stock exchange; they are the richest companies in the world, and they have the means, so to say, to make or break many of the elements which are necessary to ensure that the internet, digitalization and new algorithms and systems like AI, and also generative AI, and maybe one day even general AI, are not disrupting democracy and fundamental rights and the rule of law, but rather contribute to and strengthen them. That is, if we want to live under such conditions and not just be ruled by the brute force of power, whether public power or private power: public power like we see it exercised in China, in a dictatorship, by the Communist Party, or as we see it in Russia, exercised through war, or private power based on money and technology. If we don't want to live under either of such dominations, we must all together maintain the primacy of democracy and the rule of law over technology and markets and over brute power and brute force. So we must insist that rules are made to be complied with; rules must have an effect. It is not useful to make laws just to have something on the books which does not then really govern what's happening in the world. If we are serious about democracy, we need both laws which make a difference and a willingness to comply with these laws, and on both points there are issues with these companies. We see this right now in the very heavy lobbying activities on the AI Act: they very much try to make sure that they are free of any obligatory effects of the laws. They do a lot of lobbying to shape the laws according to their business model, so that in an ideal case, for them, they have to change nothing, and that is of course not the purpose of the law in a democracy, and it is often not what best serves the public interest. So the first thing I would say is that it would be good if these companies exercised a little self-restraint in their campaigning in Washington and Brussels and became, let's say, good corporate citizens, in the sense that they accept that most other actors and most people actually cannot lobby like them and don't have the money and the means, and that they don't overdo it, given that they have the money and the means to overdo it.

And the second thing is that once the laws are in place, I would say we need a culture of ethics, an ethic, also brought forward among technologists and technology companies, of complying with the law. It should not be a grudging "oh God, the law", but an ethic of wanting to comply and to be in the center of compliance with the law. The practice of the companies right now is a different one: they just go for it, you know, Mark Zuckerberg's motto, "move fast and break things", and if they are found not to comply, then they regularly litigate up to the last instance in the courts. That is a practice which I think is maintained by these companies in order to intimidate the regulator and maybe even the lawmaker: to show the regulator, if you take a decision which we don't agree with, we will bog down at least 20 people of your staff for at least five years, because we will litigate through all the two or three instances which are available to us. And this has been done over the last years in a very regular way, and it has an effect on regulators, obviously, because no regulator in the world can afford to regularly have so many people bogged down only in this type of litigation. So I think there is, I would say, a certain element of bad faith in this systematic litigation of everything to the last instance. The aim here is to try to intimidate regulators to such an extent that they basically always discuss their individual decisions first with the company and then, after discussion, never take a decision which the company doesn't agree to. And that is of course a vision of the world which is simply not okay: it is not the purpose of a democratic law to always please the big corporates.

The Digital Space is about Power

So these, I think, are the two concrete challenges which we are now facing in the digital space. But let me say that I think we need to understand that both the engineering challenges and the discussions on ethics and politics and rules for the digital always have to look at the power element. Who gets allocated opportunities, chances and risks; whose power is strengthened and whose power is weakened by the technological system which I'm working on, or developing, or servicing, or using; and on the other hand, who gets chances, who gets opportunities, whose power is strengthened and whose power is weakened in the making of rules, and also in the discussion of whether something needs legally binding rules or whether ethics is enough. We need to look at this power aspect always, because the reality of the digital is that it is about power: it's about power over money, it's about power over people, and we have really seen such a concentration of power in the hands of the few.

The 8 Sources of Power

Let me go through the eight sources of power of the digital companies, just to make a little more concrete this plea that we always have to think about the power effect of what we're doing, whether it's technology development or legislation. So, first, I already mentioned that the GAFAM are among the highest capitalized companies of the world, and as we know, money makes the world go round; money means power, and this is not to be ignored. Second, these companies have the biggest collection of personal data about individuals in the world. They use these data to predict and influence, and some say to try to control, the behavior of individuals; that's a huge new source of power which adds to the money. Third, these companies are in control of the networks, the new networks of communication, the networks of technology: many elements of the internet, transatlantic cables, but also the cloud system, the whole storage system, the information distribution. All this is in the hands of the big digital companies to a large extent. So they do not only have the data; they also control the networks and their functioning. And with the networks, fourth, they also control the major platforms, the platforms which have a lock-in effect, namely that people, and often businesses, can't afford not to be part of the platform, not to be part of the system. This lock-in effect comes from the reality that if many are there, you have to be there too; if you're not part of the platform, if you're not part of the technological ecosystem, your chances to get your message through, if it's about politics, or your chances to make money, if you are a market operator, are declining. This, by the way, is the reason why the DMA, the Digital Markets Act, and the Digital Services Act are so important.
Fifth, these companies are probably best placed to integrate the many very complex elements of systems and of research which are necessary to make new systems of applied AI function. There's a lot of AI research taking place in the world now, and a lot of development in small and medium companies and startups, but when it comes to bringing all the elements together in new, worldwide scalable systems, it is these companies which, because they have the data, and because they have the platforms, and because they have the infrastructure, because they control the networks, are best placed to bring the new elements of technology together into new services. And we see this right now: Microsoft invested heavily in OpenAI, 10 billion dollars at least, and it now combines this new technology, this new capability from AI, with its existing platform service of search, Bing, and let's see what advantages it brings; but for me that's a prime example of what power today is about. The idea that there will be new market entrants who will challenge these giants, I think we have to maintain it; we need competition law to try to keep markets open. But the reality of power concentration is that because you already control all the pre-existing elements which everybody needs, the networks, the data, the platforms, you will be best placed to also draw the money, and that's another huge source of power:
namely this very strong position, this dominance in system integration when it comes to new developments such as AI, based on the prior power position in the platform economy and in the networks. And then of course, sixth, these companies do their own development, and the problem is we don't know how much they're spending and how many thousands of people work on this, but it is supposedly huge, supposedly at least 3,000 people in each of the big players working on AI products, with budgets going beyond the double-digit billions. But they don't only do very secretive in-house development; they also, and this is another source of problems, buy up startups wherever they can find them, either to integrate the knowledge or simply to stop competition. If you look at the hundreds of startups which these companies have bought up, it's really quite amazing how they basically ensure that they remain the relevant players on the market, and one has to ask questions about the theory that market entrants will challenge them and so on if one at the same time allows this type of systematic buying-up of competition. This is one of the areas where the Digital Markets Act hopefully will bring a degree of new discipline, and let's see how effective it will be.

So then we move on to the classic tool of power through money which is political influence the ability uh to shape the law and the political environment to your business model and this is something which these companies have become very astute in both in Washington and in Brussels they maintain huge Lobby offices they throw great parties you know many discussions which on the face of it look like you know public interest discussions of you know difficult academic issues but in reality of course are lobby activities public relations activities to which they try to position themselves as a you know valuable discussion partner for policy makers but actually try to push their business model so I think uh you know in addition to the factors I mentioned the political influence the direct political influence in all the capitals uh of the world where such rules are made uh must be monitored and I'm thankful for any journalist and NGO which comes up with a new report and a new study about these kinds of activities because I think these activities are genuinely problematic in society of course particularly in Washington where you know Lawrence Lessig, the great constitutional law professor from Harvard University, has demonstrated amply how money is used and how the systems of lobbying are used to avoid any adoption of reasonable laws you know necessary to shape the digital sphere money is at the source of the downfall of the American democracy in terms of delivering laws and let's not forget that the law is the most noble instrument of democracy speaks through law and if a democracy is not able to produce laws anymore there is a serious problem so political direct political influence and this is combined with a grip on the public discourse the public discourse today at least for 50 percent of people takes place entirely on the internet people these 50 percent whether in the United States on Europe built their political opinions through the internet and those who control 
what we see on the internet therefore to a large extent control the public discourse. So I'm not surprised that, in addition to the direct lobbying at the places of policy making and towards policy makers, these big companies are investing heavily, sometimes more, sometimes less, depending on the mood in the company headquarters and the strategic positioning, in making friends in the press through press-subsidizing programs and also in making friends in academia. It's very important today, when academics come up with studies on the societal impact of the internet or on any of the legislative products being discussed in democratic fora, to always ask: where have you got the money from? And it's not only about the money for the specific study; it's also about the institute where the scientist is working and employed. Did they get money from one of the GAFAM? The unfortunate reality is that it's getting increasingly difficult to find institutes which do research on internet and society issues and which have not received big money from Big Tech. If you look at Germany, the Humboldt Institute for Internet and Society in Berlin received millions of euros from Google.
The Munich University chair of Ethics in Technology received millions from Facebook. If you go to the Harvard Berkman Klein Center, if you go to the Oxford Internet Institute, and so on, the money of Big Tech is everywhere. Of course I'm not saying that professors are being corrupted or enrich themselves with this money, obviously not. But if you carry out research which is of interest to these companies, the money keeps on flowing and your ability to do research is enhanced. And if you go to Berlin and you compare the offices of the publicly financed Humboldt University with the Google-financed Humboldt Institute, you understand what's happening here. So if we take all this together, direct political influence, influence on journalism, direct shaping of public opinion through the network, through the algorithms which play up or not this or that type of news, plus influence in academia, we can say there is clearly a strategy of trying to influence all elements of public opinion, including those which in traditional theories of the state should stay independent and should only serve the truth. That is journalism: in a democracy the fourth estate, next to the legislator, the executive and the judiciary, is critical journalism, which checks on power, private power and public power, and which for that reason should not be financed by those whom it should be checking on, at least not to an extent which puts its independence into question. And also academia: traditionally there is the idea of the freedom of academia, freedom from pressure and influence through private money. Again here we see that with their money Big Tech tries to undermine and permeate these spaces of independence in such a way that we're all friends. We're all friends, of course the money comes without any obligation, you're completely free to do anything, but let's stay friends.
And I would be very surprised if that message didn't have an effect in the medium term. So I think we have to talk about this loudly and we have to be very clear, and I would say anybody who gets such money should be extremely transparent about it and mention it in all publications; if they don't do it, it doesn't give me a good feeling.

The Californian Ideology

Okay, and then with all this in place, of course there is a narrative, there is an ideology which accompanies the business model, which in our book "Prinzip Mensch" we call, as others have called it before, the Californian Ideology. It is the ideology of the total technical solvability of all problems of the world. It is an ideology that absolutizes technology to the detriment of democracy, the individual rights of humans and the rule of law. And it is an ideology which is so problematic because it is combined with a down-talking of democracy and the rule of law as an obstacle to technological innovation, as only a cost factor, as something which is not in the public interest, and which then leads in the extreme, for example, to decisions like those of President Trump to leave the climate agreements, the global international law agreements on climate change, with the argument: we will solve all the problems of climate change through technology. Now we have seen that this works neither with global climate change nor with Covid. The idea that vaccination alone, or the app, the famous Covid warning app, alone or even in combination, that natural science and technology alone will solve such issues, is of course ludicrous. It was key to agree on common behavioral rules, on how we behave as individuals in terms of keeping a distance and keeping personal hygiene and so on, but also to agree on how to distribute, how to price, how to have state investments in the further development of vaccines, and also on how to design the technology of the apps. I think the Covid example was a clear reminder that the claim that we only need the best technology in the world, and that basically also as individuals we can just solve all our problems if we just download the right program or, for that matter, develop it ourselves, is just misplaced.

Maintain a Critical Attitude on Technological Power

So I think this is another reason why it's important to maintain a critical attitude on technological power, because the impact for good of technology is often overstated, and technology can often not actually deliver the good it claims to deliver. Behind this, then, are of course the ideologies of post-humanism and transhumanism. I don't want to get into this now, because as far as the companies are concerned it is maybe more a matter of speculation who follows those theories and who does not, but up to this point I think there is sufficient empirical evidence, in what these companies do and what they say, to back up the presence of all these elements of power. So once we have understood this, I think it is also clear, and here I will come to a conclusion, that there is not one measure or another which will solve this problem of modern power concentration in the age of AI; what we need is a multi-faceted approach.

Targeted Advertising: the New Battleground

That's what we're trying in Europe with different pieces of law, and in this context data protection and privacy protection remain key. Why? Because AI too will largely work with personal data, and whenever AI works with personal data the data protection regulation will of course be fully applicable. The hunger for manipulating individuals through the collection and analysis of personal data is at the center of the economics of this new business, namely the industry of advertising, targeted advertising. And because the manipulation, and also the non-compliance with the law, has become so extreme, we now see a new discussion on whether the business model of targeted advertisement should not be banned completely, so that we get out of this devilish downward spiral of ethics and law, driven by the hunger for money, which makes people break data protection laws over and over again. My prediction is that over the next years this discussion on whether targeted advertisement should be banned will become one of the core battles, where we'll see the real face of the companies on the one hand, but also the willingness or not of certain political directions to act on this. This is, let's say, one of the central economic issues which we need to face, and it may well be the key to a return to the informational self-determination of people in the digital age, in the age of AI.

Conclusion

So let me say what the book sets out and what I'm trying to do also with this discourse. I'm trying to build a bridge between policy making, the people who work in academia, and the technologists in the companies, and also the companies themselves, for a new engagement in democracy. I'm particularly interested in the feedback from the technological intelligentsia, those who work on technology. I'm convinced that we need to re-engage them in democracy, because in the end democracy thrives only if people engage, and democracy needs the knowledge of the technologists to be able to make the rules, the democratic rules, the binding rules, the enforceable rules, which this century needs in order to come to grips with artificial intelligence and other technologies on the horizon, in such a way that innovation can thrive and move forward, but in a direction which is conducive to sustainability, to the good functioning of democracy, and to the protection of the individual rights of people! Thank you very much.

Discussion

Marco Neumann: Thank you Paul, well done! I'm sure we are all full of questions now. We have approximately another 15 minutes for the Q and A, which will be recorded, and after that we have a kind of open chat where people can decide how long they want to stay. Let me start off with this question: what does it mean for you now to be the principal advisor for digital transition at DG Justice and Consumers, and how does that change your role in this discussion?

Paul Nemitz: No, I think this is just a more precise job description, which really says what I'm doing. My job description is to work on the digital changes, making sure that the technological developments in this world are conducive to sustainability, the rule of law and the respect of fundamental rights, and of course also consumer protection. I think that will be the key challenge for policy making, and also for the application of the law in the future, and I would say that for a sustainable business model of continued profits, the best way for companies is to align with these goals and not to fight against them.

Marco Neumann: Very good. Now I'd like to invite all the participants that stayed on; we actually haven't lost any participants since you started the presentation, well done. Please come forward, unmute yourself and ask some questions; this is a great opportunity to engage with Paul on all aspects. Okay, Naurah Dwirengganis, please introduce yourself briefly and let us know.

Naurah Dwirengganis: Hi Marco, hi Paul, my name is Naurah. I'm an international law student at a university in Indonesia, and to be honest I'm now working on my undergraduate thesis, which is about parallel market behavior, which I look at from the scope of German and Indonesian law. So I have a really high enthusiasm for this topic, and what I want to ask is: how do countries, or the law itself, settle the disputes or even the infringements that happen regarding AI, in every aspect, according to the law? That's my question, thank you so much.

Paul Nemitz: Yes, okay, thank you very much for the question. AI-related disputes will be settled in absolutely the same way as all disputes in this society are settled. So let's go through the pyramid of diverging views and how things then converge. The first level of, let's say, dispute settlement, if one can call it that, is the making of the law, because in the making of the law you see the different interests hopefully being made transparent in a public debate, and not only one debate; it can take years. You can see it in the AI Act now: thousands of amendments, a very active debate. The law is a first effort to reconcile the different interests, and the majority decides; in the European Parliament, to have a majority, you need a very grand coalition, a coalition between conservatives, social democrats, liberals and greens, to get the law through.

So it will be a piece of compromise; that's the nature of law making in the European Union. It's always a compromise, and the reality is that at the end nobody is perfectly happy, because in a compromise everybody has to give something. So that is the first level of conciliation of divergent views. Then comes the level of application, and where there is a specific regulator, it is of course the job of this regulator to take decisions which are proportionate and even-handed, and to listen to all parties concerned. This obligation of the regulator to listen to all parties concerned is an element of trying to find a solution, a decision on the application of the law which is, I would say again, even-handed, and at least taken in the knowledge of all the arguments and interests concerned, in the best case maybe fully taking them into account. Sometimes that will not be possible; sometimes someone leaves the decision process unhappy, and that's the reality of life. And before the regulator acts, there is of course the whole body of legal advice, of lawyers and consultants, and I think in the reality of things they also play an important role, because in the way they act they can either incite conflict or calm down their clients and find a way of moving forward which is, I would say, at the center of the law rather than at the edge. Some lawyers may think they make more money when they drive their clients into conflict, but there are others who say: we advise our clients to do things which are certainly legal, and in this way we avoid costly conflict for our clients; that's what our clients like.

So what the legal, consulting and advisory business does with a piece of law is also very important. And then, if the conflict continues after the regulator has decided, it is of course up to the judiciary to check, and in Europe the system is the following: if a national court is confronted with a decision of the regulator, and the interpretation of European law is at issue, there is what is called a request for a preliminary ruling. The request for the preliminary ruling is made by a national judge who is confronted with, for example, the interpretation of the AI Act; the national judge presents the question and the case to the European Court of Justice. The European Court of Justice doesn't decide the case but interprets the piece of law, in this case the AI law, gives this interpretation back to the national judge, and then the national judge decides the case on the basis of this interpretation. In this way, because only the European Court of Justice is able to provide a binding interpretation of EU law, we ensure the coherence and unity of application of EU law across all of Europe. That is the core mechanism to ensure that conflicts on the interpretation of AI law are solved in such a way that the answers to the conflicting interests are applied coherently all over Europe.

Naurah Dwirengganis: Thank you so much for the information.

Marco Neumann: Very good. Do we have a few more questions in the queue? Okay, Tobias, go ahead.

Tobias Schweizer: Yeah, hello, I'm Tobias, I'm a software developer. I don't have any background in law, so thank you for this analysis; it's quite dystopian. You talked a lot about these formal processes of legislation and lobbyism. I would be interested in knowing what I can do as an individual. As an individual I'm a citizen, so I can vote for a party, but I'm also a consumer. So is there some kind of moral obligation not to use certain pieces of software, so that my data doesn't get into this system? Or what would you recommend in terms of behavior in daily life?

Paul Nemitz: Yeah, I mean, every one of us has of course different roles and different ways to engage in life. You mentioned the citizen: political engagement in the evening, going to the NGO meeting or the party meeting, and consistently participating in this while working during the day. Then you mentioned your ability as a consumer to make consumption decisions which favor products that are sustainable, or products that are data-protection friendly, or simply products which don't come from any of the dominant companies, to favor the challengers to the dominant companies. And then of course you are also a developer of technology. So I think your question shows that we all have many ways to engage; let's not overload the different roles you play in society, but engage a little bit on these issues. I have no specific recommendation, for example, on political engagement; I would just say that democracy lives through the individual engagement of people, and going to vote is already great, but one can do more than that. There are many ways: there is the digital rights scene, there are consumer associations which give individuals the opportunity to engage, there's a whole civil society space, there are unions, churches, parties, many ways in which people can engage in a consistent, long-term way. And I think that's what is important, that one sticks with it, because these fights over power in society, as I describe them, are fights which go on for years and decades, and they never end. So the idea that we change something with a flash mob and a short engagement here and there, I don't believe in it. I think it's better to engage consistently, long term, and at least in one forum where one gets together with others and gets organized: getting organized and practicing democracy in these groups, deciding together and also understanding how difficult it is to come to common agreements, and always practicing this, I think that's a good thing.

But I would say another possibility, which can be an addition or an alternative, is consumption decisions. In the same way that one can find advice on the quality of products through consumer associations, one can today find advice on the sustainability of products, on their ecological impact and on their impact on privacy, also for software, platform activities and so on. If that's one's choice, one needs to get engaged in this and research it. And the third thing, which is what I'm most interested in, is the direct engagement of programmers and the technical intelligentsia in their work at the workplace, in relation to the products they work on. In my book I have a whole chapter on this. In Germany in 1976 there was a sociological study by Eugen Kogon, a political scientist from the Frankfurt School who had many dialogues with Adorno and Horkheimer; the book is called "The Hour of the Engineers" ("Die Stunde der Ingenieure"). He wanted to find out, and the German engineering association supported him in this, whether engineers take political responsibility for their inventions. So he sent out a big questionnaire to 20,000 engineers, and he came to the conclusion, based on their answers, that yes, the engineers take responsibility, and they are deeply interested in the impact their innovations have on society; at the time, of course, one of the issues was atomic power. And I would say, in my dialogues with professional associations of the technical intelligentsia, as distinguished from the companies, I always find that the professional associations are much more responsible; for example, they develop ethics codes for their members. So I think there is a huge scope to engage in your day-to-day work as a programmer, and while I always say ethics is not enough, we need binding law, ethics can of course be a first point. Ethics is always a choice, for example when we develop a product which processes personal data. So I think there are many ways in which one can engage, and the nice thing about a free society is that you can decide which way you go. I personally always found it interesting to get engaged together with others, because it also has a social value: you meet people and it's fun to get things done together. But like many other things in life, it needs the tenacity of the programmer; one needs to stick with it. Okay, I see that was it, that's good.

Marco Neumann: So we have one more hand here, by Sabina Mollenhauer. Please unmute yourself.

Sabina Mollenhauer: Yes, hi, thank you so much for your presentation, it was super interesting, and also thank you for the questions that have been asked so far. My background is a little bit different, it's a mix: I have an academic background in media science and also in computer science, I've worked in tech and in non-governmental, societal tech associations, and now I'm going back to academia and getting a PhD in digital humanities. My question is this: you talked about the money that's influencing policy making in different ways, and you're interested in building bridges between policy makers, academia and tech, which I think is really wonderful. But my concern is that, while I agree that money is a big issue, academia especially is not always as democratic as we would want, because besides money there are also other structures of hierarchy and tradition that are not necessarily representative and democratic. I was wondering what your take is on how policy makers can influence a more democratic academic discourse?

Paul Nemitz: Yeah, that is not my field, but of course I have had a number of discussions on this issue of third-party financing with people from universities, professors and also deans, and on the pressures which push universities to seek third-party financing because public financing is scarce. In this context I have also learned that the degree of time-limited employment at universities, and therefore the power relations between lifetime staff and time-limited staff, is really becoming an issue. I see the same problem, by the way, with journalists: many of them today are so-called free, freelancers, not employed; it is a bit the uberization of everything, including in these spaces where one needs to have a certain academic freedom or journalistic freedom. And this freedom must be based on some economic stability, because you don't get any assignments anymore as a free journalist if you write things which someone powerful doesn't like. So there is this issue of maintaining spaces of freedom, and freedom requires a certain stability in the job. That's why you have lifetime civil servants, that's why you have tenured professors, and that's why, at least in the past, in the public services you had many people who had lifetime positions, because it makes them a little bit more independent. Changes in law which undermine these statutes of independence are therefore a problem; I can tell you that in the European Commission, for example, we now have much more time-limited employment, and as I said, the same is true in academia. When I was, for a short time, a teaching assistant at a university, the number of people who were on constantly repeated time-limited contracts was much smaller, and it was much more normal to eventually have a lifetime or long-term employment. This is something which policy shapes for the universities. I believe that democracy should shape more than just the market, in this case the labor market, and in the importance of independence, and therefore of having less hierarchy, because independence also means that from time to time you can do things which are not expected, that you are sometimes even obliged to say no to certain things which your hierarchy may be asking of you, and to ask whether the law allows this; and depending on the prevailing worldviews, the laws are changed one way or the other. So that's my answer to your question; sorry not to be more into this field. But generally speaking I am critical, for example, of the huge number of so-called free journalists in German public TV. It's too many, I think it's 18,000 in the first channel alone, and these people are supposed to do the critical political reporting; that doesn't look good to me. And also what I hear about the movements in Berlin and other university towns, the discontent among young scientists about the chaining of contracts and never getting a normal, labor-law-secured position, I think these are structural things which need change.

Sabina Mollenhauer: Thank you so much, that was good.

Marco Neumann: Very good, Paul, thank you very much. We're going off the record now.