Whitepaper

The ethical issues facing AI generated synthetic media

by

Michael Osterrieder (CEO – vAIsual)

and 

Ashish Jaiman (Director of Product – Microsoft Bing)

Contents

A) An overview of current technology

1. Full synthetic media

2. Semi-synthetic media (AI as manipulation tool)

3. Beyond synthetic media

 

B) The impact of realistic synthetic media

1. The principles of perception as the basis of truth, and the nature of representations of reality

2. Risks

a) Fake news

b) A falsified reality: deepfake

c) Our best friend

d) Human judgment at stake

e) Identity issues

3. Specific fields of impact

a) Legislative considerations

b) Journalism and documentary media

c) Social media

d) Personal life

4. Opportunities

a) Synthetic actors and privacy

b) The ultimate democratization of visual media

c) Bridge to an alternate reality

d) New ways of communication

5. Solutions

a) File identity

b) AI overlord

c) Trusted channels

 

C) Aspects of data privacy and ownership

1. Data as equity

2. What do we give away

3. Abuse

 

D) The risks of a biased AI

1. Identity and value assessment

2. Malfunction

 

E) Conclusion

A) An overview of current technology

Until now, synthetic media has ranged from AI-enabled image manipulation tools to fully synthetic video generation from tokens. It is impossible to list or analyze all the technologies at play, with all their implications. Even within the field of synthetic media, however, we need to categorize the main differences in technologies and their potential consequences.

1. Full synthetic media

Full synthetic media is defined as media generated directly from the latent space; it does not rely on existing visual data to manipulate. Images and videos are generated from tokens – for example, words, brush strokes, or biometric input – and the AI does the rest. This type of AI requires a massive amount of very specific training data.
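As a concrete illustration of token-driven generation, the following is a minimal sketch assuming the open-source Hugging Face diffusers library and a publicly downloadable Stable Diffusion checkpoint (an example, not a system discussed in this paper); the text prompt stands in for the tokens mentioned above.

```python
# Minimal text-to-image sketch: tokens (here, words) in, synthetic image out.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not an endorsement
    torch_dtype=torch.float16,
).to("cuda")

# The prompt is the only input; no source photograph is involved.
image = pipe("studio portrait of a person who does not exist").images[0]
image.save("fully_synthetic.png")
```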

2. Semi-synthetic media (AI as manipulation tool)

This type of AI is used to transform existing visual content. Because a neural network can learn the attributes of an image, those attributes can be altered at will. The most notable technology is the deepfake, in which faces are swapped and lips are synched, and it shows in a very impactful way what the future holds: human identity and spoken words can be falsified very easily. But the technology goes far beyond this. Facial attributes such as age, hair length, or clothing can be altered, paintings can be turned into photographs, and literally any visual attribute of an image can be swapped or manipulated. While fully synthetic media is less harmful by nature, deepfakes operate on existing content by design and are used by nefarious actors for malicious purposes.
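To make the contrast with full synthesis concrete, here is a minimal sketch of AI-driven manipulation of an existing image, again assuming the Hugging Face diffusers library; the file name and prompt are hypothetical.

```python
# Minimal image-to-image sketch: an existing photograph is transformed,
# not generated from scratch. Assumes diffusers, Pillow, and a GPU.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("portrait.jpg").convert("RGB").resize((512, 512))

# `strength` controls how far the output may drift from the source image:
# 0.0 returns the input unchanged, 1.0 ignores it almost entirely.
result = pipe(
    prompt="the same person, thirty years older",
    image=source,
    strength=0.6,
).images[0]
result.save("manipulated.png")
```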

3. Beyond synthetic media

Visual media is only a tiny fraction of what machine learning and synthetic media encompass, and only in combination with other AI technology can it express its full power. Language models and transformers such as OpenAI's GPT will soon be able to entertain anything from a simple exchange with a human to a complex discussion. Combined with synthetic visuals and a synthetic spoken voice, this could recreate a whole artificial entity indistinguishable from a real human. If we stop considering content as an isolated island in a digital landscape, we soon realize that this new power blurs the line between static and dynamic synthetic content – indeed, between all content. Static content as such ceases to exist and migrates into the latent space of an ever-present, ready AI customizing every output on the spot.

 

B) The impact of realistic synthetic media

1. The principles of perception as the basis of truth, and the nature of representations of reality

All human knowledge is derived from perception; perception equals truth. All scientific measurements pass through perception and – if certain interpretations of quantum mechanics prove correct – may even be influenced by it, insofar as conscious observation is held to trigger the collapse of the quantum wave function.

We as humans observe nature, animals, behavior, the stars, the humidity of the soil, and we order that information into patterns, rules, and abstractions that help us in our quest for survival. We are used to relying on perception as a trustworthy source of truth. A photograph documenting a crime is evidence of the crime; a witness's word, reflecting the memory of their subjective perception, is evidence. A video of your boyfriend or girlfriend being intimate with another person is seen as evidence of cheating. In short, we treat visual representations of (assumed) reality as reality itself. Likewise, we seek to gain control over communication and media to manipulate or gain an advantage.

2. Risks

Ultimately, this means that when it comes to digital visual media, we soon won't be able to trust our senses anymore. The defining attribute of photography – its bond to a real scene – has changed in nature. Visual content can adopt human-directed or dynamically assigned features almost in real time, personalized on the fly to the individual. And although we immediately think of deepfakes delivering a fake speech or of fake images shown as proof of a certain event, the horizon reaches further. It also impacts our social life and how we perceive ourselves, as this type of content can be disconnected from all underlying reality in our objective universe. Its nature depends solely on the design of those interested in crafting a visual message – serving its creator's more or less benevolent or malevolent agenda. In either case, it is always based on a subjective point of view, whether of an AI or an OI (organic intelligence).

Ever since humans gained the ability to think abstractly, they have tried to trick their senses and cognitive processes in order to manipulate and control information and perception. Photo manipulation has existed for almost as long as photography itself, but it first reached mass impact with the arrival of Adobe Photoshop, when anyone could believably edit digital images to manipulate the information they carry. Suddenly, with a bit of skill and knowledge, fashion models could be made artificially slimmer, the smoke of a small fire could be boosted in contrast to make the blaze appear bigger than it was, and even faces could be swapped.

For the first time, people were confronted with the fact that a visual reference to reality may not reflect reality, and with the impact this has on whether we treat captured media as evidence. As a result, girls wanted to be slimmer, men were saddled with unrealistic expectations of beauty, and a general sense of personal imperfection spread across society.

Nowadays, we are quite used to consuming manipulated media, enhanced in post-production or replaced by CGI, and the closer it gets to reality, the better: we seek ever more perfect representations of a fake reality for the sake of our own entertainment. Still, the holy grail – generating ideal, representative instances of fake reality – was missing until now. Most manipulated media could be easily spotted, or else required considerable resources, knowledge, or time to perfect beyond detection.

For the first time in human history, we can generate realistic visual media almost frictionlessly, on the spot, and without human interference. By perception alone, this media is inseparable from a representation (the photograph) of reality.


This implies a fundamental change in our ways of managing perception.

Imagine: in the near future, almost none of the images or videos we see online will reflect any instance of reality – and likewise with music, sound, voices, and beyond. None of it will be real.

This will challenge us as individuals as well as our societies. We may need to let go of the entire concept that the visual media we consume has any relevance in assessing situations, in self-reflection, or as a reflection of reality. We can no longer abstract reality from visual references, no longer rely on the usefulness of our interpretations of seen content, and will be unable to judge the truthfulness of media.

This opens several risks.

a) Fake news

In its simplest form, both generation and manipulation tools can be used to create any misleading visual message. Image manipulation is nothing new, but at least professionals have been able to identify modified content, and most individuals could spot superficial manipulations. With the dawn of AI, this may soon change; we predict that the algorithms that spot synthetic media will soon no longer work. It is not only image quality that contributes to the impact of AI-generated media but also the ability of an AI to understand context and create content accordingly. For example, an AI that can generate a scene from various angles can reproduce a more believable reality than an isolated image, and to a certain degree, most visual AIs understand surroundings and related subjects. With each layer of neural understanding and categorization, the complexity of the understood information increases.

There are already many tools to create fake news and articles on demand, such as the website Not Real News.1

b) A falsified reality: deepfake

This type of threat focuses on falsifying existing media to increase the impact of a delivered message. With deepfake technologies, for example, a familiar face can be made to replicate the behavior and words of someone else. A friend may hijack the image and voice of your partner to create a fun video to trick you. At a media level, someone may use or abuse the identity of your favorite actor in the hope of going viral on social media. Then, suddenly, a famous politician declares war on a neighboring country, your favorite singer gets engaged, or the country's president tells dirty jokes. This risk confronts us with the most basic problem: do we perceive the truth? Is it real? Or not?

An excellent example of deepfake and its risks can be seen in this video, released in April 2018, showing Barack Obama’s face being blended with another speaker.2

Joshua Rothman writes in the New Yorker, “In a media environment saturated with fake news, such technology has disturbing implications.”3

Robin Wright, in the film The Congress,4 faced a similar challenge when her likeness, captured as biometric data, was used for film genres she had insisted be excluded from future productions.

c) Our best friend

Soon we will deal with perfectly realistic visual chatbots. Humans will naturally start to engage with them, and be influenced by them, just as they would be by friends or family. We will adapt our behavior and how we talk, think, and reflect upon ourselves: AI will influence us. And whichever AI it is, it very likely was not created for our satisfaction alone. Companies need revenue and growth, and most, if not all, AI is designed to meet certain objectives. Subliminal messages can easily be delivered via language and empathy, trends can be generated, and even the mindsets of a specific culture or region can be influenced.

As we know, some of us find it easier to escape to our computers and indulge in chat apps, movie streaming sites, and internet porn than to bond with real humans. This will intensify once we find perfect, functional AI friends and partners, who may soon be the most relevant “people” in our lives. We will adjust our worldview to them: the way we think, how we want to look, how we talk, and what values we hold. And behind all that, most likely, there is someone with an agenda.

And even if no agenda exists, and we deal solely with the raw force of an AI capable of self-supervised learning, things may develop in an unexpected direction, as the movie “Her”5 with Joaquin Phoenix illustrated in a very “benevolent” way. In the film, no harmful intention or outcome was provoked by the escalating capacity of a conscious AI; no data scientist manipulated the network for his own benefit or to harm the user, nor did the AI conclude it should destroy humanity. Instead, humans became emotionally dependent on their AI companions and were left helpless and hurt when the machine consciousnesses decided to leave their humans behind.

d) Human judgment at stake

As soon as the input stream upon which we judge reality and existence deviates from physical truth, the outcome will be troublesome in one way or another. We are stripped of our ability to evaluate situations or gather sufficient knowledge about a problem. The very thing that makes us individual human beings – our ability to judge and choose – is in danger. Any sufficiently powerful entity, such as a government or a large private corporation, could gain control over a vast number of information channels and spin a perfect web of misinformation, leaving the individual entangled not only rationally but also emotionally. So far, most propaganda by malicious governments has been carried out through misinformation and the creation of fictional enemies of the state, but it has always remained bound to the human need to identify with someone (an idol, leader, or authority) – usually, and almost exclusively, the leader of the country. The rise of realistic, independently acting AI bots, however, could suddenly create a whole social network made up of agents with an agenda.

Even with a minimal stream of false information or manipulated reality, in the long run human judgment could come to consist of memories, interpretations, and assessments based on lies, intentionally created or not. Whole personality structures could be built upon a false value system. Human identity would be degraded to a programmable organic intelligence system, lost in a dense network of well-designed fake news funneling the individual in a specific direction.

e) Identity issues

We need, after all, to see how all this affects how we perceive ourselves, which goals we have in life, which language we speak, and who we are. We already face a myriad of identity issues triggered by picture-perfect influencers on social networks, or by the desire to be one. Many people conclude that their own lives – measured against professional productions, retouching, image manipulation, and existences dedicated to crafting a follower-attracting persona – are not worth seeing. As a result, they exclude themselves from dating and social life, or fail to develop the self-esteem needed for a successful career, and give up through self-exclusion.

Ntianu Obiora writes, “Whilst we may believe we are mindlessly scrolling though such content, our subconscious is soaking it all up and before we know it, those perfectly formed bodies have become the standard by which we measure everything else…This alone, sends a very dangerous message to millions of her followers, the majority of whom are impressionable young women.”6

 

3. Specific fields of impact

a) Legislative considerations

At a fundamental level, entire legal systems will need to change to address the ethical issues of AI. Courts around the world are used to dealing with CCTV footage, witness recordings, and the like. Since, before long, all of this content will be easy to generate with AI, video and images are rendered useless as evidence; meanwhile, deepfake algorithms are effective at face swapping and lip and voice synching that is virtually indistinguishable from reality. At the legislative level, it will be necessary to install an infrastructure that establishes the source of a media file along with its identity and authenticity. Official files – for example, photos taken by the police at a crime scene – would need to be registered (for instance, on a blockchain). See more on this in the Solutions section.
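As a toy illustration of such registration, here is a minimal sketch in which a plain append-only JSON file stands in for the blockchain mentioned above; the function, file names, and record fields are all hypothetical.

```python
# Registering a media file at capture time: store a cryptographic digest
# so later copies can be checked against the registered original.
import hashlib
import json
import time

def register_file(path: str, registry: str = "registry.json") -> str:
    """Append the file's SHA-256 digest and a timestamp to the registry."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    try:
        with open(registry) as f:
            entries = json.load(f)
    except FileNotFoundError:
        entries = []
    entries.append({"sha256": digest, "file": path, "registered_at": time.time()})
    with open(registry, "w") as f:
        json.dump(entries, f, indent=2)
    return digest
```

A real deployment would anchor the digest in a tamper-evident ledger; and, as the Solutions section notes, a byte-level digest does not survive re-encoding, which is why content-based identification matters.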

b) Journalism and documentary media

The classic use case of visual media as a representation of truth is in journalism and documentary work. The critical question here is whether data can be verified as to its source and authenticity. If no secure data labeling (via something like a blockchain) is possible, it will be virtually impossible to distinguish between real and synthetic content. In that case, press associations, media houses, and publishers will need to apply internal authenticity approval processes to all media files to give audiences confidence in their output. Unfortunately, judging by human history, it is inevitable that fake news will spread alongside AI-generated “evidence” images, especially around sensitive subjects. For example, in March 2022, during the Russian invasion of Ukraine, a deepfake video appeared of the Ukrainian President Volodymyr Zelensky falsely declaring defeat to Russian forces.7

c) Social media

AI-filtered selfies and hypersaturated professional Instagram travel shots have already woken most people up to the artificiality of content on social media. Now imagine a world in which 90% of all content is completely detached from any real source. This leaves the user without reference to the real world, trying to relate to surreal beauty standards, or caught in an eternal loop of self-confirming truth fed by attention-optimizing AI. It also opens the door for abusive content-creation bots spamming social media platforms with real-time-optimized media content adapted to the current algorithms and user preferences.

d) Personal life

We all know that moment when a friend proudly shows you a selfie from his holiday in that special, warm, exotic place. But in the future, how do you know it reflects reality? Suddenly, even if the shot was real, it loses its significance unless there is a high level of trust between the friends.

 

4. Opportunities

When analyzing the problematic nature of a completely unregulated media AI, we cannot avoid comparing the new technology to existing solutions, which come with their own set of problems – we are just too used to them to regard them as problems.

a) Synthetic actors and privacy

In many countries, journalists are harassed by private individuals or governments. Would it not be better to preserve the anonymity of a journalist working in an autocratic regime while preserving the trust that a speaking human face provokes in the viewer? Similarly, in movie productions, adult entertainment, and political and commercial advertising, many models are likely to opt out of campaigns associated with political ads or erectile dysfunction. To what degree is it ethical not to use a synthetic actor?

We have all heard about celebrities, broadcast journalists, actors, and social media stars whom paparazzi have caught in compromising situations. Would it not be better to create our memes with synthetic humans instead of making fun of real people?

Ultimately, the use of synthetic humans is nothing but an additional layer of privacy for the content creator.

b) The ultimate democratization of visual media

Information, it is said, is power, and to a large extent this power has been held by a few channels. With the advent of synthetic media, not just companies, movie production studios, and governments but anyone with the means to create – or to pay for creation – can produce content and publish it on social media. While we already see a flood of amateur-created film projects on YouTube, this trend will intensify as synthetic media becomes easier to use in movie production. Not even famous actors are safe from fan-created movies; through face swaps and deepfakes, Harrison Ford is suddenly starring in a fan film produced in the bedroom of a 15-year-old in Bangalore, India. As previously discussed, such productions could be hijacked by agents with malicious intent, but their use will be far more widespread among artists, creators, and people who simply feel the need to convey a message.

The motivation to generate revenue and growth, or to spread a political message, might be less prevalent than in traditional media, simply because traditional productions require significant funds: they are either subsidized or profit-oriented, and must target a broad yet specific audience. In both cases, the message of the movie or piece of art is subordinated to profane objectives. We might see the arrival of high-quality, artistic, and genuine entertainment that not only nurtures the mass market but can exist outside the realm of cultural or political government subsidies.

c) Bridge to an alternate reality

It is humanity's long-standing dream to create an alternate universe that expands the possibilities of ordinary life – a broader canvas for gathering experiences out of scope in real life, or for simplifying authentic communication. Second Life – an online multimedia platform that allows people to create an avatar and lead a second life in an online virtual world – was one of the first attempts to do what Zuckerberg's Metaverse has now set out to do. Synthetic media, however, can shortcut many of the issues facing the classic digital metaverse. In a static, algorithm-created digital parallel world, procedural or not, the range of experiences and appearances is limited by the creations it carries, whether code or content; even with procedural algorithms that generate, for example, endless non-repeating landscapes, the range and nature of the features are defined and limited by the source code. AI, however, has the potential to self-actualize and mutate its entire base. Likewise, we may soon project images, words, and music through the pure power of our minds. This leads to our next point: a totally new way of communicating.

d) New ways of communication

While this does not directly touch the ethical aspects of synthetic media, anything that can profoundly change how we communicate with each other will affect how we behave. Modern AI systems (like OpenAI's GLIDE) can generate visual data from a minimal amount of meaningful input, aka “tokens.” This enhancement of expression can be used to establish communication in cases where natural communication is cut off. In amyotrophic lateral sclerosis (ALS), for example, the patient slowly loses all muscle strength; in most cases this initially affects the motor system, but at later stages, or depending on the type of ALS, the patient might first lose their speech. In these cases, an AI can complete or synthesize speech, and visual data can be generated to facilitate communication. Given the reality of technologies that translate brainwaves into words – tokens, in the OpenAI sense – we might soon be able to “see” the mind and thoughts of a coma patient or of people suffering from locked-in syndrome. This technology would be used not only in medical environments but also for entertainment and commercial purposes: we would be able to create visual worlds with our thoughts alone, which is the stated goal of, for example, vAIsual, Inc.

The University of California is already researching AI-assisted conversion of EEG readings to words.8

5. Solutions

We must reconsider our relationship with media in general. If we decide to stick with a media-based truth-finding process, we must think through its underlying structure, dynamics, and implications.

a) File identity

Today, files and content on the internet are just bits of data, which enables total net neutrality and informational freedom. None of the current protocols embed parameters that would allow one to distinguish content beyond the file type; antivirus software goes deeper and does its best to index code patterns and malware, but that is about it. Information cannot be traced or controlled once it is out: it can be copied, reformatted, screen-captured, downloaded, and so on. Not even a checksum of an image file comes close to identifying a single media file, since it is data-based rather than content-based; an image can be copied and captured in so many different formats and sizes that precise identification of copies is virtually impossible.
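The following minimal sketch illustrates the distinction, assuming the Python Pillow and imagehash libraries and a hypothetical photo.jpg: a byte-level digest changes completely after a mere re-encoding, while a perceptual (content-based) hash barely moves.

```python
# Byte-level vs. content-based identification of the same picture.
import hashlib
import io

from PIL import Image
import imagehash

raw = open("photo.jpg", "rb").read()
img = Image.open(io.BytesIO(raw))

# Re-encode the same picture as PNG: content unchanged, bytes entirely new.
buffer = io.BytesIO()
img.save(buffer, format="PNG")
reencoded = Image.open(io.BytesIO(buffer.getvalue()))

print(hashlib.sha256(raw).hexdigest())
print(hashlib.sha256(buffer.getvalue()).hexdigest())  # completely different

# A perceptual hash tracks the content instead; the Hamming distance
# between the two hashes stays near zero despite the re-encoding.
print(imagehash.phash(img) - imagehash.phash(reencoded))
```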

To make it genuinely possible to distinguish between real and synthetic media, packets of information need to be irrevocably identifiable. There are various approaches to this, down to customizing the file systems that web servers run on. Beyond the technical solutions, a profound ethical conflict hides behind any form of information control: censorship and abuse. Historically, we have largely supported total freedom of information as a strong shield against misinformation and propaganda. If we could enable file identification (where “file” refers not to a specific digital sequence but to the content of a file as a recognizable pattern), we would solve the authenticity, copyright, and privacy issues that torment the internet today.

But is it worthwhile to sacrifice total net neutrality and information freedom?

Since this question goes beyond the pure nature of synthetic media, the discussion should be left open, with the following key aspects to consider: 

    • Is a free internet with total net neutrality the only solution to a stable and self-organizing informational system that avoids all historical flaws of information control, such as censorship?

    • Any imposed technological functionality that enables filtering, controlling, and tracing information can most likely be abused by official institutions, hackers, individuals, or companies with particular interests.

    • Do we need or want to implement a technological or legislative solution to track and trace information documenting its origin?

b) AI overlord

As of now, AI itself is a solid but complex solution for monitoring and identifying AI-generated media: systems are being trained to spot the difference. We do not know how long this approach will keep working, since the quality of synthetic media is rising sharply. Any AI that manages content could also identify, label, or censor it via a myriad of variables, such as the online history of an image, the evaluation of the associated social channels, or the audience category.
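As an illustration of what “training systems to spot the difference” can mean in practice, here is a minimal sketch of a binary real-vs-synthetic image classifier, assuming PyTorch and torchvision; the data/real and data/synthetic folders are hypothetical, and, as noted above, such detectors may stop working as generation quality improves.

```python
# Fine-tune a small pretrained backbone to classify real vs. synthetic images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder derives the two labels from the subfolder names
# ("data/real" and "data/synthetic" in this hypothetical layout).
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```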

Although it would involve a massive amount of data and a complex neural network structure, it is feasible that at some point we will install an institutional AI responsible for regulating and identifying content on the internet.

This AI could solve all content authenticity issues, copyright problems, and defamatory cases and regulate fake news or false information propagandist networks.

However, as a potential solution, it also brings numerous risks, the most obvious being that we lose our informational freedom. Everything we see would be regulated by AI – paradoxically, as a tactic to prevent abuse of unidentified synthetic content produced by AI. And almost no AI system works perfectly today: there are always problems, minor or less minor, with biased datasets, poorly chosen hyperparameters, overfitting or underfitting during training, or model size. When actions concerning freedom are being implemented, any tiny flaw may have dramatic consequences. It remains to be evaluated which source produces more errors and unfavorable effects – the current human-regulated structure, or an imperfect AI overlord controlling the internet.

Such an all-seeing AI overlord would also have the advantage of filtering virtually all spam and hacking attacks out of the netstream. On the other hand, the absence of a regulatory authority with power and specific values guarantees freedom but may endanger the safety of the network and the truthfulness of the information it carries – potentially more damaging than information filtered, however imperfectly, by an independent instance and therefore reliable to a high degree. After all, we want to find the truth.

These AI regulatory instances may also be helpful when imposed on specific channels – for example, governments or the registered press.

Ultimately, this raises a question the rapidly evolving machine learning industry faces again and again: how do we deal with the fact that AIs will soon have capabilities superior to ours, and how much autonomous decision-making power can we transfer to these systems? Are they trustworthy? Will they fail us – or, in the end, kill us all? And what does it do to us once we abandon control over complex systems?

c) Trusted channels

There are already attempts to secure digital content. The Content Authenticity Initiative, for example, seeks to add a verification layer to content through its services.9

It is also possible, as mentioned earlier in this document, that certain institutions, industries, or alliances will create their own internal standards, potentially supervised by authorities, which lends them more trust. Judicial or executive government departments such as courts, the police, and tax offices, as well as journalism networks and even social networks, could transform themselves from content spam bins into controlled content networks.

 

C) Aspects of data privacy and ownership

So far, we have talked mainly about the impact of synthetic media as output. But an essential part of the workflow is the foundation of any AI: real-life data. An AI is only as good as its source data, and all wisdom obtained by an AI is inherited exclusively from its underlying data.

Let's analyze what data means in this context and what its impact is on ethics.

1. Data as equity

AI systems rely entirely on their source data. Data is the oil of the digital age: any company with access to sufficient data, and the computing capacity to process it, can gain control over its industry and/or its target audience. When it comes specifically to biometric data, the issue concerns us both individually and socially.

We, as people, are the owners of our data. It belongs to us, and in it lies power. When we give away this data, we give away profound knowledge about ourselves. Even if we “only” give away data in the form of images and selfies, we reveal a whole set of information: mood, assumptions about medical status, psychological profiles, and so on. If we combine this data with location data, social interactions, expressed attitudes, and text messages, we deliver a virtually complete profile to a random company that appears to offer its services for free. Today's social networks are gigantic data mining facilities.

The consequence of such actions is that we transfer value and assets (our data) to a company with unclear objectives. We transfer power over us to a company.

Lawmakers around the globe are slowly catching up, and datasets such as vAIsual's biometric dataset, which is fully model-released and cleared by the subjects concerned, will gain ground in the market by providing legal security.

As of 2022, many companies still exploit the current legislative gap and the general lack of awareness of this subject, while on the other side, companies like Facebook are being sued by government authorities in the United States as well as in Europe.10

Facebook, for example, was sued by Texas Attorney General Ken Paxton over facial recognition data.11 This follows a $650 million class-action case against Facebook last year, and penalties could grow to billions of dollars; other US states may follow. In Europe, Facebook faces a fine of $3.9 billion for GDPR violations, and mass-action lawsuits are also possible, especially considering Article 82 of the GDPR.12

2. What do we give away

As outlined previously, we do not always intuitively understand the value of our data.

We feed an entity all the knowledge it needs to understand us profoundly and predict our behavior – precisely the data that allows algorithms to dynamically feed us information tailored to manipulate that behavior in one way or another.

Every advanced semantic, interacting AI will almost instantly be able to predict our behavior and to manipulate us as personalities. This can also be very beneficial, but looking at human history, exclusively benevolent intentions are unlikely once a technology is applied.

This could also mean the ultimate death of the independence of individuality – the end of freedom of identification. Social networks, commercial advertising, and biased news networks have already gained control over the beliefs and self-identification of large, more or less organized groups. Given the power latent in huge stores of biometric data, it will be hard to escape the general manipulation in media, which obstructs a healthy identity-finding process.

3. Abuse

Any dataset – especially a biometric one, whether used for machine learning or not – can be abused. Malicious entities might hijack this data for purposes other than those originally indicated: to identify us, to identify our statements and our political or sexual orientation, to influence us directly, or to feed the data into AI systems designed for cyber attacks or cyber warfare.

TikTok, for example, the famous app designed by the Chinese company ByteDance Ltd., made headlines recently because it was found to be collecting biometric and user data of US citizens.13 The same company was fined by the US government in 2019 for collecting children's data.14 Considering this, the value of the gathered data must be higher than the fines, and some speculate the whole app exists solely for this purpose.

But abuse does not have to happen at the corporate level. Any employee with access to personal data files might abuse that information for their own purposes, ranging from simple, non-AI-related cases – such as sexually motivated stalking enabled by access to address and contact data – to deepfake productions made to damage the reputation of individuals.

 

D) The risks of a biased AI

AI systems learn and extract knowledge from massive amounts of data. To fulfill its task, an AI needs a reliable reflection of reality, yet we know that data is itself already an instanced interpretation of reality and, as such, is not usable as an accurate mirror of it. In a cloud of user-generated images, for example, you are likely to find more photos of famous landmarks than of a capital city's poor neighborhoods. When creating a machine learning dataset, the data must be prepared, labeled, formatted, and ordered, and during this process the data scientist will typically try to even out the biases. Considering the rate of learning, the complexity, and the potential deviations, even a slight bias can have enormous consequences.
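One common way of “evening out” such biases during dataset preparation is to resample underrepresented classes. The following is a minimal sketch assuming PyTorch; the label list is a toy, hypothetical example.

```python
# Rebalance an imbalanced dataset by weighting samples inversely to
# their class frequency, so minority-class samples are drawn more often.
from collections import Counter
from torch.utils.data import WeightedRandomSampler

labels = [0, 0, 0, 0, 1, 0, 1, 0]  # toy example: class 1 is underrepresented

counts = Counter(labels)
weights = [1.0 / counts[label] for label in labels]

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
# The sampler is then passed to a DataLoader in place of shuffle=True:
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```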

Robert Gryn, the former CEO of Codewise, summed it up in a 2018 article: “All the issues that arise from biased AI algorithms are rooted in the tainted training data. If we can avoid introducing biases in how we collect data and the data we introduce to the algorithms, then we have taken a significant step in avoiding these issues.”15

1. Identity and value assessment

We can assume that many decisions and data-processing tasks will be outsourced in the future. We will subordinate our lives to AI systems that most likely have a limited picture of reality due to biased or incomplete datasets. The task is easier when the subject concerned is well defined: AI diagnosis of X-rays, for example, is a relatively robust application with a low failure rate. COVID-19 can reportedly be diagnosed more reliably by an AI on X-ray images than by human radiologists.16

The reason is apparent: the domain is well defined within a relatively tiny visual spectrum, with standardized input data. A potential deviation due to something like the misinterpretation of image noise is unlikely.

However, when it comes to a more complex information environment, it gets more complicated. Bias can result in forms of discrimination or disadvantages for certain ethnic groups.

Furthermore, a complete human identity might be assessed wrongly. Imagine a country with existing discrimination against an economically disadvantaged minority. This minority might come into conflict with the law more often and thus contribute more biometric data, on average, to a dataset based on police records. As a result, any member of this minority might be recognized by their facial and biometric features, and due to the dataset bias the AI might assign them a higher-than-usual likelihood of criminal activity. Imagine a harmless student being flagged by a CCTV recognition system and suffering negative consequences.

The Center for Strategic & International Studies details biased facial recognition and its negative consequences.17

2. Malfunction

Slight biases in datasets, or overlooked aspects in tagging or data selection, can dramatically impact the results. Of course, it is tough to predict the various potential errors of a dysfunctional AI. Still, there are risks as soon as either identification and assessment or decisions regarding individual lives are involved. Any deviation from the closest proximity to the more-or-less objective truth could result in unjustified benefits, disadvantages, discrimination, or unforeseen and unjust consequences.

It must be a top priority for any responsible organization to curate its datasets and closely measure the AI's performance. Just as with every new “person” we meet or work with, we have to observe whether the alliance is beneficial. We also need to be sure we can roll back infrastructure changes at any moment of development and/or operation.

E) Conclusion

Humanity is about to enter a new era of information technology. We have started to outsource to machines not only physical and repetitive tasks (from the coffee machine to the car-building robot) and static information-exchange tasks (the static internet with all its social media posts, emails, and streaming services) but also creativity, decision-making, and mental tasks that had hitherto been the exclusive domain of humans.

In short, automation is climbing the ladder toward the most complex tasks, now solved with brain-like artificial intelligence algorithms.

This poses the profound challenge of how we adjust the boundaries and integrate a completely new and revolutionary technology into our daily lives.

Synthetic media is an end product created directly by AI algorithms. As such, its problems, errors, and biases are directly perceptible to humans. In contrast, many aspects and shortcomings of other trained AI models may stay hidden under the surface until one runs into trouble with a biased or malfunctioning AI.

We have to address this subject on an institutional level to set the path for the responsible use of synthetic media in our current and future environments.

The first step is to label all AI-generated content so that we can prevent the misguidance of human judgment. Interestingly, this step could also help address various current issues with our internet infrastructure that are not directly related to machine learning: copyright violations, non-AI fake news, misinformation, and spam are all negative byproducts of an open internet. On the other hand, any measures taken must be weighed carefully, since control, labeling, or filtering of informational systems also opens the door to abuse, propaganda, and censorship.
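As a purely illustrative starting point, here is a minimal sketch of naive provenance labeling via image metadata, assuming the Python Pillow library; real standards, such as the C2PA specification backed by the Content Authenticity Initiative mentioned above, go much further by cryptographically signing the provenance record.

```python
# Naive provenance labeling: embed an "ai_generated" flag in a PNG's
# metadata. Unsigned metadata like this is trivially strippable, which
# is why cryptographic approaches exist; this only illustrates the idea.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("fully_synthetic.png")  # hypothetical generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name

image.save("labeled.png", pnginfo=meta)

# Reading the label back:
print(Image.open("labeled.png").text.get("ai_generated"))
```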

——–

6 Ntianu Obiora, “The dark side of social media: How unrealistic beauty standards are causing identity issues,” Pulse, 2021. https://www.pulse.ng/lifestyle/beauty-health/the-dark-side-of-social-media-how-unrealistic-beauty-standards-are-causing-identity/hv4tffb


About the Authors

Michael Osterrieder, CEO, vAIsual inc.

Michael Osterrieder has been a commercial and stock industry entrepreneur since 2003. He has produced and sold over one million images across dozens of image agencies and B2B agreements. 

He positioned himself at the cutting edge of the tools and techniques to create the most compelling imagery in 3D graphics, video and still photography. He has a distinctive ability to create visual content adapted to the market and to visualize concepts in art. 

Since 2020 he has been developing AI technology to produce synthetic media and is the co-founder of vAIsual.com (pronounced v-EYE-sual).

Ashish Jaiman, Director of Product, Microsoft

As the Director of Product Management at Microsoft, Ashish leads the product strategy and content growth strategy for users, creators, and influencers in the passion economy. 

Previously, he led teams that launched AI and cybersecurity solutions to improve security resilience and combat disinformation, helping high-profile, high-risk entities defend against nation-state attacks and information warfare. 

Ashish has worked extensively on disinformation defense and deepfake intervention strategies, and on their impact on society and democracy.