Rules of Ethical AI

ljrk

2023-05-14

There’s currently [citation needed] a lot of talk about AI. And while the bulk(?) of it seems centered on how awesome it apparently is, at least a few people are discussing ethics and wider societal impact. While I feel that the latter is definitely underrepresented, I also think the ethics discussion currently gets quite lost when dealing with AI. This shiny new thing seems to defy our current moral guidelines, but I want to argue that this is not the case. Indeed, if we focus less on AI itself and more on transparent decision making, the progress of society, and sharing the benefit, we can come up with a much better model.

Non-Generative or Categorizing AI

I want to broadly categorize AI into two lumps: Generative AI, which you prompt and which generates text/pictures/video/audio, and non-generative AI, which can be used to categorize complex input and produces a more or less complex “decision”.

At first, let’s have a look at the latter category, its applications, and its ethics. As already alluded to, I base this broadly on widely accepted standards of ethics. This means, first, that decisions should be made responsibly by an involved individual who isn’t completely decoupled from the impact of their choice. And second, that the process behind the decision should be transparent.

AI doesn’t really tick either of those boxes, which is what led IBM, in 1979, to create the slide reading

A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION

Since then, a lot of time has passed. We do have AI now, and it will be used, for management decisions and non-management decisions alike. But in this article I want to talk more about ethics, so I’ll propose the following “Gretchenfrage” (a question that cuts to the heart of the matter). It can be used to determine quite clearly whether a given application of decision-making AI is ethical or not:

Is the result of the task uniquely identifiable or verifiable by a human onlooker?

Examples of such usage are using AI to automatically transcribe audio to text, or for voice recognition, optical character recognition, and translation (to a limited degree). The task at hand is a decidedly manual one which any person can do without much training, given more time. The result can be live-checked and rectified if erroneous.
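
To make the criterion concrete, here is a minimal sketch in Python of what “verifiable by a human onlooker” can look like in practice. The names are made up for illustration and ai_transcribe stands in for any speech-to-text model; the point is that the machine does the slow manual work while the human does the cheap check.

    # A minimal human-in-the-loop sketch; the names are invented for
    # illustration, not taken from any real transcription library.
    def ai_transcribe(audio_path: str) -> str:
        # Placeholder: a real system would run a model on the audio here.
        return "Please schedule the meating for Monday."

    def review(audio_path: str) -> str:
        draft = ai_transcribe(audio_path)
        print(f"Transcript of {audio_path}:\n  {draft}")
        # A human can verify in seconds what would take minutes to type:
        correction = input("Press Enter to accept, or type a correction: ").strip()
        return correction or draft

    final = review("call-0042.wav")
    print("Stored transcript:", final)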

Counter-examples are complex decisions that don’t have a black-and-white answer, the cliché question being “In the face of only those options, should the car run over the child, the elderly person, or even the group of people?”. If this decision only depends on the choice of data set, we completely rid ourselves of any responsibility. We notice: this is not about AI, it’s about how we make decisions! It would be just as bad if we simply deflected any future decision to previous decisions without reconsidering. This gets much worse when the data set reproduces stereotypes or racist ideology that is ingrained in our society. Humans can reflect and actually act against their learnt behavior, a feature AI crucially lacks. Deferring to AI (or any technology that merely replicates previous decisions) will conserve the societal status quo and stop progress.

Generative AI

The discussion around generative AI is a different one – but it, too, totally misses the point. The key issue is, again, not whether we should be using AI for art or not. Actually, artists already made that decision: they recognized the potential of AI before the hype and incorporated it into their artwork. But since then, the way AI is applied to generate art has changed!

So all of this is about data: data ownership, sharing, and protection. The discussion about who owns AI-generated pictures is absurd, which can be highlighted by a simple thought experiment: Suppose a company or individual wishes to generate pictures using AI. They commission, pay, or contract artists for a host of data in order to use it as their training set. This is not unlike stock photos! Any content generated from this data set is clearly licensed properly, and there wouldn’t be much discussion about it.

So the issue that artists rightfully take is not with the usage of AI. It’s with freeloading by big companies! Companies that generate revenue from nothing of their own: the data set is what makes up the AI, and all that data is effectively stolen. Note that artists usually don’t care about individuals or other artists “copying” their work if done respectfully, even for free; it’s basically a widely accepted standard to allow “inspiration” and “covering”. With big companies scraping the Internet, we have a totally different scenario, but this distinction is of course hard to put into law. We end up with a loophole: a societal rule which, through technology, can be exploited, and no law in place to rein that in, yet.

Regulation

We need laws that protect data, each individual’s data. We need to ensure that data we share with other humans under lax terms – because we trust them and society works through a system of mutual respect – isn’t exploited by technocratic AI companies. If we protect our data, the “AI issue” with generated art will simply disappear.

What’s more, if AI becomes a tool where we own the data we put into it, we control any kind of decision making as well! While we certainly don’t want AI to do any serious decision-making, some applications are more gray-area than others: automatic translation is certainly an interesting application of AI, but it has been shown many times that it can reproduce stereotypes in the translation that weren’t present in the source material. The “decision” to, e.g., use feminine pronouns when referring to the nurse and masculine pronouns for the architect is still problematic. But if we actually know the data set better, it’s more transparent and easier to fix.
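
A toy sketch of how such a “decision” falls straight out of the training set – the tiny “corpus” below is invented for illustration, not a real translation system. A gender-neutral source pronoun gets resolved by whatever co-occurrence counts dominate the corpus, and inspecting those counts is exactly what makes the bias visible and fixable.

    # Toy model of pronoun choice in translation. The "corpus" is
    # invented; a real system learns the same kind of statistics
    # from its training data.
    from collections import Counter

    corpus = [
        ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
        ("architect", "he"), ("architect", "he"), ("architect", "she"),
    ]

    def pick_pronoun(noun: str) -> str:
        # Resolve a gender-neutral source pronoun (e.g. Hungarian "ő")
        # by majority vote over the training data.
        counts = Counter(pronoun for n, pronoun in corpus if n == noun)
        return counts.most_common(1)[0][0]

    for noun in ("nurse", "architect"):
        print(noun, "->", pick_pronoun(noun))
    # nurse -> she, architect -> he: the stereotype is a direct readout
    # of the corpus, so knowing the corpus means knowing where to fix it.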

Regulation Effects: Mis-Applications of AI

This kind of regulation would restrict the spread of AI (otherwise, what’s the point?). I do argue, however, that virtually all the applications of AI that would become impossible or hard under those restrictions are either problems AI is unfit to solve, self-made problems, or no problems at all.

Be it self-driving cars (we have trains! And the remaining cars wouldn’t be too many for humans to handle…) or semi-automated form-filling and data screening (we can change the process behind that: replace PDF forms with websites, have direct data entry, reduce mail ping-pong, …). We have a lot of contrived issues stemming from hypercapitalism and neoliberal policies, half-assed digitalisation, and a focus on conserving destructive habits. If we go to the core of these issues, AI wouldn’t help; it would merely treat the symptoms, or even worse: shift the issue to a new area which we don’t have any experience in yet and don’t know how to solve problems in. Great for the economy and for creating fake jobs, for sure! But capitalism set out as a system proclaiming that if we can find the right rules, we can improve society simply through each individual striving for their personal benefit.

Though if we create new problems instead of solving the old ones, and then cheer on the new jobs created, we can observe the clearest sign that this system is failing us.