By Dr Mmaki Jantjies
AI has significantly strengthened technology's role as an enabler across sectors. Yet as the exciting potential for AI-driven innovation continues to grow, so does the perilous threat of misinformation, which is increasingly difficult to spot given the quality of new AI-generated content. Recent advances are exemplified by OpenAI's DALL-E 2, which can create realistic images from text instructions and stands to transform a whole range of sectors of the modern economy. This progress, however, is accompanied by the downsides of deepfake technology, which can manipulate the images and words of leaders and celebrities, as seen in the recent AI-generated explicit images of megastar Taylor Swift.
Deepfakes are particularly alarming. A deepfake is a type of synthetic media created using AI techniques, especially deep-learning algorithms, which analyse and manipulate existing images, videos, or audio recordings to generate highly realistic fake content, often showing individuals saying or doing things they never actually did. As witnessed in the case of Taylor Swift, the misuse of AI-generated content to propagate false narratives highlights the urgent need for robust mechanisms to combat misinformation across all areas of modern life. With crucial elections on the horizon this year, the convergence of AI capabilities and deepfake technology also raises serious challenges for the healthy exercise of democracy. There have further been several cases of social media misinformation undermining young people's confidence and contributing to mental well-being challenges among young users.
One of the most pressing concerns is the potential impact of deepfakes on democratic processes across the world, even as many countries ran successful elections this past year. Deepfake technology has introduced a new and troubling dimension to electoral processes, as evidenced by incidents in both Nigeria and Slovakia in 2023. In Slovakia, AI-generated audio recordings were used to fabricate statements attributed to a political candidate, suggesting an intention to manipulate markets and rig an election. Similarly, in Nigeria, an AI-faked audio clip falsely implicated a presidential candidate in ballot manipulation, potentially swaying voters' preferences.