A 21-year-old woman from Ukraine discovered that her likeness had been cloned using artificial intelligence (AI) to create an alter ego on Chinese social media networks.
Olga Loiek, a student at the University of Pennsylvania, launched a YouTube channel last November to share her lifestyle and experiences. She was trying to build an audience for her own content, but things took an unexpected turn.
Her image was lifted and processed through an AI program to create a set of personas, such as “Natasha”, who presents as a Russian woman fluent in Chinese, eager to express gratitude for China's support of Russia, and keen to sell products for money.
Loiek's own channel had fewer than 18,000 subscribers at the time of writing, yet, disappointingly, some of the fake Chinese accounts count their followers in the hundreds of thousands.
Things I'll Never Say
“It’s literally like my face speaking Chinese, with the Kremlin and Moscow visible in the background, and me talking about how great Russia and China are,” Loiek said.
“It was really creepy. These are things I will never say in my life,” she added.
There is a deeper context to the story of AI-enabled deception: geopolitics.
Accounts, images, and avatars like “Natasha” exploit the close political ties between Russia and China. The former remains at war with Ukraine following the February 2022 invasion.
Taken at face value, these accounts look like Russian women on Chinese social media, expressing warmth and respect for China in fluent Mandarin and supporting the war effort by selling products imported from their country. It is all a mirage.
This is AI at work, replicating real people's images and leveraging real-world circumstances to influence and mislead unsuspecting viewers. The videos and products are often targeted at single Chinese men to drive engagement and sales.
Some accounts appear to have sold tens of thousands of dollars worth of products, including candy, while others even include disclaimers saying AI may have been used to create the avatars.
This is another example of how AI is a powerful tool for spreading misinformation and disinformation, a problem that governments and tech companies must confront now, not in the future.
Featured image via Ideogram