This month we had the pleasure of discussing with Dr. Maria Moloney what 2024 may bring regarding AI and data protection innovation, as well as issues of European and global legislation on those matters. Maria is a Senior Data Protection, Security and AI Specialist at the Irish technology company PrivacyEngine. She holds a doctoral degree in Information Systems from Trinity College Dublin, and her mission as a consultant is to look at data protection, security and artificial intelligence from a compliance perspective. She works with large private companies in Europe and America, as well as with public administrations.
Progress and research on AI is going to be even more intense in 2024 than in 2023, in my opinion. I think we’re only seeing the beginning of AI systems and their uses. There has already been a lot of talk about generative AI, but I think more is to come in 2024 and 2025 in terms of understanding exactly how these models can be used in business, what their benefits are, but also what their drawbacks are. In 2023, the focus was on large language models; in 2024 we’re going to be looking more at multi-modal AI models. GPT-3.5 dealt with text interpretation, whereas GPT-4 combines various modalities such as text, audio and video. As a result, these AI models can give much more intuitive responses. To be honest, it’s an exciting area to be working in right now.
I do not think it is an overstatement to say that in the last twelve months, AI has changed the way people work. We now know that it is a massively disruptive technology. Clearly, there are huge benefits to it. We see self-driving cars, we see a massive increase in automation. A lot of work that has historically been seen as boring and challenging can now be done by artificial intelligence. This frees us up to spend our time in more creative ways and to pursue tasks that we enjoy, as opposed to having to do things we find repetitive and tedious.
The AI Act will put Europe on the map in global artificial intelligence regulation
I think the EU’s AI Act is going to put Europe on the map in terms of regulating AI: Europe is now the first jurisdiction globally to regulate AI and thus to set the scene for trustworthy AI. Since the GDPR came into effect in 2018, we as Europeans have learned a lot about risk-based legislation and its benefits. The EU has adopted the same risk-based approach for the AI Act, and it was agreed upon relatively quickly. This is largely due to the European elections coming up later this year. The Commission wanted to reach an agreement by the end of 2023, and an agreement was particularly important because, as with the GDPR, Europe wants to be at the forefront of legislation regarding AI.
Europe has always been seen as the place where protection of citizens comes first and foremost. I think the threat that artificial intelligence poses to the rights and freedoms of citizens was sufficient for Europe to realise it needed to move quickly to regulate it. The challenge, however, is that AI is still developing at a rapid rate. As a result, there is talk that provisions may be included within the AI Act to allow the European Commission to amend it in the future if necessary. I think they’ve done a really good job in trying to get something out there that sets a high bar for the level of trustworthiness expected of businesses wanting to access the European market. Like the GDPR, it will obviously take years for everything to be ironed out fully, but it’s a great start anyway.
In America, the UK and many other countries, there is no comprehensive AI regulation, and regulations are often industry specific. Europe has taken a horizontal approach, meaning that every industry has to comply with the same legislation, i.e. the EU AI Act. We also saw this in 2018 with the GDPR. It’s a good way for Europe to position itself as the leader in trustworthy AI across sectors.
On the global scale, the AI Act could potentially be regarded by other jurisdictions as a reference model when they come to regulate AI. It will definitely influence countries that want to sell, or have traditionally sold, into the European market. Most likely, any country with a strong tradition of selling into the EU market will seek adequacy with the AI Act when legislating for themselves. We have already seen adequacy with the GDPR across multiple countries. It makes sense for these jurisdictions to have equivalent or similar legislation for AI in place in order to reduce the legislative burden. This is what has come to be known as the Brussels effect, and I believe we should see it happening with the AI Act as well.
I consider myself very lucky to be part of Europe, because I know that there will always be an effort to protect my rights in the digital world, even though there has always been a tension between innovation and the protection of fundamental rights when it comes to new technologies. This often leads to the fear that other jurisdictions across the globe will gain a competitive advantage over Europe in innovation thanks to their more light-touch approach, but I think we’ve demonstrated with the GDPR that here in Europe we can actually be innovative and also protect the rights of citizens. The GDPR has increased the trust in and reputation of Europe, and something similar will probably happen with the AI Act. I’m quite confident about the route the AI Act will take in the future. It will probably take five years for us to see the true benefits of the act and how it plays out across various jurisdictions globally, but I believe Europe will benefit greatly.
There are already plenty of real-world cases of serious damage done to people because AI systems were not rolled out appropriately, with little consideration for the individuals on the receiving end of those systems. Granted, I accept that there is currently a global race between countries to see which can be the most innovative regarding AI, and that is good for mankind, but it is important to balance that with data protection and digital rights, because the alternative can have serious repercussions for people.
We have the recent example of the child benefit scandal in the Netherlands. An AI system there accused an awful lot of individuals of fraud, accusations that proved to be incorrect because the system was biased. The scandal brought down the government, and it happened literally because nobody picked up on the biases in the algorithms of the child benefit system. That system was also contrary to the GDPR, because you must have human oversight in any automated decision-making system. It is important to always remember that algorithms are only as good as the people who wrote them in the first place. At the end of the day, it’s a technology that’s open to human error.
Another example of AI systems shown to have built-in biases are the US systems that assess prisoner recidivism, in other words the likelihood of prisoners reoffending. There are legal implications for the individual based on the outcome of the automated recidivism assessment carried out by these systems, and such automated decisions have proven massively problematic for individuals trying to improve their lives. The decisions these systems make about individuals and their likelihood of reoffending can have a serious effect on somebody’s life. Thankfully, if such systems existed in Europe, they would be considered high-risk AI systems under the EU AI Act, given their potentially serious impact on individuals’ futures.
I currently work with many individuals who write algorithms for banks and financial technology companies. There are many business models in use now that do not allow for human oversight of AI decision-making systems. In fact, businesses often cut out humans on purpose to reduce costs; it is cheaper to automate everything. Ironically, Europe is trying to grow the fintech industry in an effort to be a global leader, and there is clear support for fintech business models with no human oversight, which is contrary to the GDPR. For instance, if somebody seeks a mortgage nowadays, their data is automatically put into a system that assesses whether they are a good candidate for a mortgage or a loan. This assessment is very often not overseen by a bank employee, and that can have serious consequences for the individual. A bank manager would be in a better position to assess whether the individual is trustworthy and likely to make the repayments. If these individuals cannot secure a mortgage, they may have to rent for the rest of their lives. To be honest, we do not even have to wait for the AI Act to change this situation; we can depend on the GDPR to ensure there is human oversight of all automated decision-making activities. As we know, the GDPR has been in place for years, and yet we still see these business models being implemented to this day. Hopefully, the AI Act will reinforce the need for human oversight of AI systems.
When people feel threatened, they move away from the technology
As already argued, the GDPR has demonstrated that we Europeans can be innovative while protecting the personal data of citizens. At the end of the day, if citizens aren’t protected then the benefits of using AI systems are hugely reduced. People will move away from the technology if they feel threatened by it. On the other hand, once you have people’s trust, they will embrace the technology and have fun with it.
I have noticed a marked change in the mindset of politicians and public administrators. Although this mindset change is slightly slower than that of CEOs and professionals in the private sector, I do see a change. Obviously, public servants have different motivations for taking up their positions in the public sector; nevertheless, with almost six years of the GDPR behind us, public administrations are now starting to see that there is a real need for compliance and for protecting people’s data. In my own experience in compliance maybe four years ago, people would come to you and ask “What is data protection? What is the GDPR?”, whereas now everybody knows, and the questions are more like “How do we set up our GDPR compliance plan?”.
Not only is there an increased level of awareness and knowledge around data protection in Europe, there is also a growing awareness globally. Traditionally, the US had a completely different ideology around privacy and freedom of speech. In my opinion, they are beginning to see the importance of data protection especially when considering cybersecurity and antifraud activities. Europe and America are slowly coming closer together in terms of their understanding of the importance of data protection. This was evidenced by the roll out of the EU-US Data Privacy Framework only a few months ago in the summer of 2023. I believe we are closer than we have ever been regarding data protection collaboration with the US.
Going forward in 2024, I believe AI trustworthiness and explainability really need to be looked at, because the fact that such large amounts of personal data can be scraped from the web and used so quickly by AI makes trustworthiness a top priority. Now that the AI Act is here, we have the strength of regulation to engage with companies that are developing AI and work with them to incorporate transparency and trustworthiness into their compliance solutions, to ensure that the clients and users of these systems are protected, and to reduce the risk of a security breach involving their AI systems. Breaches always affect a company’s image, which in turn affects its market price, and so on.
I accept that there are still quite significant challenges to make AI trustworthy and to be able to explain a lot of the black box algorithms out there, but I think Europe is really at the forefront of bringing that awareness and responsibility to companies to ensure that not only can they benefit and profit from AI, but they must also protect people from abuses of AI.
AI systems and the threat to democratic elections
We have seen for many years now the potential for AI systems to disrupt the democratic process of elections. The most high profile case was the Cambridge Analytica scandal which involved the unauthorised collection and use of personal data from millions of Facebook users for political purposes. Cambridge Analytica worked on political campaigns, including Donald Trump’s 2016 U.S. presidential campaign, and obtained and exploited the personal information of around 87 million Facebook users to manipulate the outcome of elections.
Likewise, more recently, AI-generated disinformation and deepfake technology also have the potential to disrupt the democratic process of elections. Sophisticated AI tools now make it easier to manipulate media, particularly with the emergence of realistic deepfakes. The coming year, 2024, will see a number of countries going to the polls, including the US, the UK and the EU, and malicious entities will more than likely try to manipulate results. Given the inadequacy of existing detection tools against such attacks, there is an increased need for real-time responses from politicians to prevent the spread of disinformation. It is also important to engage with social media companies, to ensure their continued investment in detection capabilities, and to review their evolving strategies for mitigating disinformation campaigns. Overall, I believe ensuring the integrity of democratic processes in the face of advanced AI manipulation will be a considerable and growing challenge for a number of regions across the globe this year.