MAY 20, 2022
Synthetic media needs to be “irrevocably identifiable”, according to a new whitepaper on ethics and AI
Now that synthetic media has become impossible to distinguish from reality, we have moved beyond the “uncanny valley” to the “chasm of truth”, where we face ethical and legal problems from the tech being too good.
In a newly released white paper by vAIsual CEO Michael Osterrieder and Microsoft’s Ashish Jaiman, the authors argue that the cascading issues of deepfakes, sense-making and AI bias mean that stricter measures are needed to ensure the authenticity of a file.
“It goes beyond ‘clickbait’ videos on the internet, right to the heart of what we perceive to be the truth. AI synthetic media output can already be indistinguishable from organic media. Even specialized AI algorithms trained to detect fake media cannot detect a synthetic image generated by vAIsual. We know because we’ve been testing our photos of people, and they are regarded as real by humans as well as by algorithms made to detect generated media”, says Osterrieder.
“In recent years, more people have been acknowledging the high level of manipulation of visual media, or ‘photoshopping’ of the original image with AI technology, and that reality is altered. Generative synthetic media AI takes this to the next level by removing all underlying source information and letting the AI dream up an image on its own after it has learned the subject”, says Osterrieder.
According to the white paper, among the various ways of guiding humanity through the jungle of made-up media, the clearest and most radical is to identify files by the information they contain, labeling them and tracking them back to their genuine source. This can be done through legislative and technological solutions; however, the authors warn against the potential for misuse through information control and censorship.
While the issues of justice and truth are the most critical to address, there are many more negative consequences when AI is trained on dirty or biased data.
“Bias creeps in during the process of training the AI. For example, as we were building our dataset we had fewer people showing their teeth when they smiled. Consequently, the teeth on the synthetic humans were the last element that it perfected.”
This underlines the importance of avoiding bias that creeps in when the source material is naturally skewed in certain directions. According to Osterrieder, it is critical that the AI is trained in a balanced way.
He cites the example of discrimination against a minority being reinforced when disproportionate data is supplied in relation to local crime, increasing the tracking of, and potential threats to, anyone who shares features with that minority group.
With the limited picture of reality that the AI is trained on, racial bias can creep in and become a self-perpetuating feedback loop.
While the white paper focuses on the various risks of AI-generated media, it also suggests there are opportunities to protect the privacy and safety of actors and journalists.
“We’ve been approached by large banks wanting to replace real-life models with synthetic ones to avoid the ethical issues of including people in advertising about debt and financial issues”, says Osterrieder, CEO and co-founder of vAIsual. “Similarly, journalists reporting in politically dangerous countries would benefit from using a synthetic model, rather than their own face, to report the news.”
According to Osterrieder, “now that we know AI can solve ethical considerations around the use of models for sensitive advertising (for example erectile dysfunction, dating profiles and financial debt), it becomes unethical not to use synthetic media.”