Artificial intelligence: regulation for self-protection

By Andrea Deinert on Hidden Insights, 19 July 2021

https://blogs.sas.com/content/hiddeninsights/2021/07/19/danilo-mygarry-jacques-ludik-thomas-keil/

Taking responsibility is not easy; indeed, it can weigh heavily. It lends moral standing to those willing to bear it: politicians, for example. But no one carries it lightly, and the burden needs to be well thought out. It is understandable that some people shirk it from time to time.

Considering this ambivalent effect of responsibility, it is astonishing that such a human value has found its way into something technological, namely artificial intelligence. The field now talks about responsible AI. What exactly does this mean?

In discussions about AI responsibility at a SwissCognitive panel, terms like reliability come up: Do I stick with my AI through thick and thin? Is that a form of decency? Integrity and impeccability could also be ascribed to it. Or is AI even supposed to be loyal?

Could it be that AI is supposed to be exactly the opposite of its human inventors? Are we constructing an ideal world for ourselves with our AI? Humans do not always act responsibly, do not always behave righteously and are not always loyal. Nevertheless, our AI is supposed to be all of these things. Or do the experts ultimately mean that AI should be held responsible for our actions so that we are off the hook?

Let’s try to lure the camel through the eye of the needle

Jacques Ludik

Let’s start with a statement from Dr Jacques Ludik, AI expert, smart technology entrepreneur and author of Democratizing Artificial Intelligence to Benefit Everyone, who said that “although it is clear that ethical AI principles such as human autonomy, prevention of harm, fairness and explicability provide a foundation for responsible or trustworthy AI, it seems that ethical AI has many nuances, which leads to different interpretations.” He further states that “it is in the best interest of humanity’s future and beneficial outcomes that we should work hard to get everyone aligned on key ethical principles”.

Danilo McGarry, member of the EU AI Alliance, agrees, “because ethics means something different to everyone. What is ethical is determined by the social environment.” As a commissioner, he speaks from insider experience: “Every government interprets ethics differently.”

Danilo McGarry

He sums up: “That’s why it’s easier for us to agree on the term responsible AI than ethical AI.” He is hoping for a common value framework in the European Union, with all its country-specific societal differences. After two years, the first draft is in place.

Balancing profit and responsibility

McGarry also points to a very particular balancing act: the one between economic efficiency and sustainability in AI applications. To bring something genuinely new to market, decisions often have to be made quickly, and it is precisely here that the balance between responsible action and profit must be maintained. Dr Jacques Ludik agrees and has proposed in his book a massive transformative purpose for humanity, with associated goals that complement the United Nations’ SDGs, to help shape a beneficial human-centric future in which profit and responsibility are kept in balance.

Thomas Keil

What do the manufacturers say? “At the end of all considerations, our users always ask the question: what do I want with my AI, and what am I allowed to do?” says Dr Thomas Keil of SAS, analyzing the interaction with his customers. “Of course, ethical aspects are always taken into consideration, because without the lasting acceptance of the customers there will be no economic success for an application. And they would rather see one regulation too many than too many gray areas for black sheep.”

Regulation helps the leap to AI

Thomas touches on the financial sector when he ponders aloud whether all this hinders or promotes progress. By “all this” he means the humanistic discourse around AI, and he concludes that it is conducive. The financial sector is where his company is at home: in no other industry does it have as many customers. The company is familiar with regulation and aware of its importance for global markets. That is why it takes the concerns seriously and, above all, can react quickly and adequately. “At this stage, the automation potential holds the greatest economic benefit from AI.”

Arguably, companies are currently operating in a regulation-free AI space, humanistic discussions are taking place in an elite circle, and there is no exchange between the two.

There is still time to take a close look at how, and by whom, profit should be reconciled with morals. Most manufacturers are still dreaming their automation dream with AI, and market experts predict fantastic sales. But are we still stuck on theoretical questions, or has monetization arrived? Inertia seems to be the current state of affairs.

When danger needs to be averted

Danilo McGarry notes that Europe’s governments are taking a wait-and-see approach when they could really get going. The danger is just around the corner and needs to be averted. But there are other important issues to resolve, such as education and the climate, and governments first have to catch up on their own lack of AI knowledge.

Should the companies step in, then? Or even the AI itself? Some argue that AI should be able to regulate itself, a model that can at least be considered. But is the idea really so good? Algorithms can also be wrong, depending on who built them.

But they can also be designed responsibly; from a purely technical point of view, this is possible. AI decisions can be made explainable, and traceability can be programmed in, as the sketch below shows. Manufacturers would welcome any legal framework: they do not want to invent algorithms in a vacuum that cannot be justified in the end.
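What might programmed-in traceability look like in practice? Here is a minimal sketch in Python, assuming nothing about any particular product: every decision is appended to an audit log together with the model version and the exact inputs, so it can be re-examined later. The function name, the record fields and the credit example are all illustrative assumptions.

```python
# Minimal sketch of a decision audit trail; all names and fields are
# illustrative, not taken from any specific product.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_version: str, features: dict, score: float,
                 decision: str, audit_file: str = "decisions.log") -> str:
    """Append one decision record so it can be traced and re-examined later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the result
        "features": features,            # the exact inputs that were used
        "score": score,
        "decision": decision,
    }
    # A content hash makes later tampering with the record detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(audit_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]

# Example: record a single, entirely hypothetical credit decision.
log_decision("credit-model-1.3", {"income": 42000, "debt_ratio": 0.61}, 0.34, "reject")
```

Every record answers the questions a regulator or customer would later ask: which model, which inputs, which outcome, and when.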

They even endorse initiatives like “Explainable AI” from BWI. “A society should agree on a course of action, and then we act within that staked-out framework. Wonderful,” says Keil. “Clear recommendations for action make it easier for us to interact with our customers.”

Is that what is meant by responsible AI?

Now it becomes clear why we must also ask who sets the tone: manufacturers or politicians? It also becomes clear why the balance between morality and profit is important and, above all, who has to judge when that balance is disturbed.

The first bit of good news is that responsible manufacturers design their algorithms responsibly. Which factor influences a result, and to what extent, is already well understood today; where it is not, the algorithm is adjusted until it is traceable. The second bit of good news is that companies using AI can always explain to their customers what led, for example, to a credit rejection.
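How such an explanation can work is easiest to see with a simple linear scoring model: each feature’s contribution is its weight times its value, and the negative contributions can be reported as the reasons for a rejection. A minimal sketch follows, with feature names and weights invented purely for illustration; real credit models are more complex, but the principle carries over.

```python
# Minimal sketch: explaining a score as per-feature contributions.
# Weights and applicant values are invented for this example and assumed
# to be standardized; they come from no real model.
weights = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
applicant = {"income": 0.2, "debt_ratio": 0.7, "late_payments": 0.4}

# Contribution of each feature = weight * value; their sum drives the score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Report the factors that pushed the score down, strongest first.
print(f"score = {score:+.2f}")
for c, f in sorted((c, f) for f, c in contributions.items() if c < 0):
    print(f"  {f} lowered the score by {abs(c):.2f}")
```

For this hypothetical applicant, the output names the high debt ratio as the strongest reason for the low score, which is exactly the kind of answer a rejected customer can be given.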

Taking responsibility is not easy; we have been here already. It lends moral standing to those willing to bear it; we have been here already, too. This editorial excursion is meant to show that the whole of Europe is committed to humanizing AI applications, that is, to not losing sight of the humane and to setting a responsible framework. Dr Ludik also argues for such a beneficial human-centric future, but one that is more local and more decentralized, aimed at democratizing AI, its applications and its benefits for as many people as possible. Whether we are talking about refugees, medical applications, or applications in finance or insurance, what they all have in common is that humanity must remain above technicality. Is profit, for the first time, not at the top of the list? Not quite; that is also part of the truth. Does whoever wants fixed rules, in order to know how to act correctly, have to be protected from themselves?

SwissCognitive – The Global AI Hub, SAS #artificialintelligence

Danilo McGarry (EU AI Alliance), Thomas Keil (SAS), Andrea Deinert (SAS), Jacques Ludik (Cortex Logic, Cortex Group, Machine Intelligence Institute of Africa | MIIA, Vive Teens)

#ai #technology #intelligence #regulation #selfprotection
