On April 4, Microsoft warned in an official blog post that China is stepping up its use of AI-generated content and experimenting with ways to polarize voters worldwide in order to advance its geopolitical interests.
According to the Microsoft Threat Analysis Center (MTAC), China has been using fake social media accounts to poll voters on already divisive topics, potentially deepening those divisions and influencing the outcome of elections scheduled this year in the United States, South Korea, and India. MTAC's report, released the same day (PDF), notes that while such content has had little effect on audiences so far, China's experiments to improve it are increasing, and it could become effective in the future.
The MTAC had previously reported in September 2023 that, during the 2022 U.S. midterm elections, social media accounts linked to the Chinese Communist Party posed as American voters to influence the election. These accounts continued to operate after the midterms, posting on divisive topics like climate change, border policies, drug use, immigration, and racial tensions.
China has already been accused of running an AI-driven disinformation campaign in Taiwan's presidential election in January through a group Microsoft calls "Storm-1376." For example, a fake audio clip of Foxconn founder Terry Gou, who had withdrawn from the race in November, appearing to endorse another candidate was posted on YouTube (YouTube removed it immediately); it was highly likely generated by AI. AI was also used to spread baseless rumors that William Lai, the candidate opposed by the Chinese government, had embezzled funds (he went on to win).
Storm-1376 has also been accused of opportunistically seizing on events that could damage the reputation of the United States and Japan, spreading disinformation generated with AI.
For instance, after the Japanese government decided to discharge treated radioactive wastewater (ALPS-treated water) into the Pacific, the group launched a multilingual campaign accusing the United States of deliberately contaminating other countries' water supplies to maintain its "water hegemony."
The fake accounts post a mix of material copied from other prominent political accounts and AI-generated content. One post, for instance, claimed that the large wildfire in Hawaii in August 2023 was a test of "weather weapons" by the U.S. government; MTAC attributes the post to Storm-1376 and assesses that it was AI-generated.
AI-generated content was also used in posts criticizing the discharge of treated water from the Fukushima Daiichi nuclear power plant into the Pacific, targeting Japan, the U.S., and the International Atomic Energy Agency (IAEA).
Microsoft warns that in 2024, a global election year, China is likely to intensify its use of AI-generated content to advance its own interests.
Last November, the company also announced efforts to protect elections worldwide from attacks, including deepfakes.