Pro-Taliban outlet releases alleged AI-generated audio of former Afghan commander arranging mercenary deal


Afghan Witness


Clips alleged to show former commander Sadat arranging mercenaries for Ukraine spark controversy, with allegations of AI manipulation.

On 8 October 2024, pro-Taliban media outlet Niqab (“The Mask”) shared a 26-second audio clip on X (formerly Twitter), attributing it to former Afghan Special Operations Corps Commander, Lieutenant General Sami Sadat.

In the audio clip, the voice alleged to be Sadat's discusses a contract to send seven former Afghan special forces soldiers to fight in Ukraine for USD 5,000 (GBP 3,824) per month. The other party in the conversation is referred to as “David”, whom Niqab claims is a military contractor allegedly sending mercenaries to fight in Ukraine.

The outlet also alleges that Sadat uses his London-based company, Sadat Consultants Limited, to manage the operation. It is worth noting that Sadat Consultants Limited was officially dissolved in February 2024, although its website remains live.

Niqab shared two more clips: a 22-second response from “David” negotiating terms, and an 11-second clip of Sadat agreeing.

 

Responses to the alleged audio of Sami Sadat

Several pro-Taliban X accounts reshared the voice clips initially posted by Niqab, while some anti-Taliban accounts labelled them as fake, claiming the voices were generated using Artificial Intelligence (AI).

Afghan journalist Mirwais Afghan stated on X that the audio attributed to Sadat was fabricated using spliced clips. He added that Sadat had personally informed him the voice was faked using AI. The post was subsequently reshared on Sadat’s own X account.

 

Analysing claims of the use of AI

AW was unable to verify the authenticity of the audio clips, but there are several indicators that cast doubt on their legitimacy. Analysis of the audio released by Niqab (see Figure 1) shows limited variation in pitch and tone, in contrast to what is usual for a human voice. Furthermore, comparison of the waveform of the Niqab clip with audio from prior interviews with Sami Sadat shows different intonations, as well as a slower pace of speech.

Figure 1: Comparison of waveforms of the audio released by Niqab (top) with a historical audio recording of Sami Sadat (bottom).
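To illustrate the kind of pitch-variation check described above, the sketch below is a minimal, simplified example (not the actual method used in this analysis): it estimates per-frame fundamental frequency via autocorrelation and compares the spread of estimates between a flat, monotone signal and one with natural-sounding pitch movement. The sample rate, frame size, and synthetic signals are all illustrative assumptions.

```python
import numpy as np

SR = 16_000    # assumed sample rate (Hz)
FRAME = 1024   # analysis frame length in samples (~64 ms)

def frame_pitch(frame, sr=SR, fmin=75, fmax=300):
    """Estimate one frame's fundamental frequency via autocorrelation,
    searching only lags that correspond to plausible speech pitch."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def pitch_std(signal):
    """Standard deviation of per-frame pitch estimates;
    low values indicate flat, monotone intonation."""
    pitches = [frame_pitch(signal[i:i + FRAME])
               for i in range(0, len(signal) - FRAME, FRAME)]
    return float(np.std(pitches))

t = np.arange(SR * 2) / SR                        # two seconds of signal
flat = np.sin(2 * np.pi * 150 * t)                # constant 150 Hz tone

inst_f = 150 + 40 * np.sin(2 * np.pi * 0.7 * t)   # pitch wobbling 110-190 Hz
varied = np.sin(2 * np.pi * np.cumsum(inst_f) / SR)

print(pitch_std(flat), pitch_std(varied))         # flat signal varies far less
```

Real forensic tools are considerably more sophisticated (and real speech is far messier than sine tones), but the underlying intuition is the same: a suspiciously narrow spread of pitch estimates is one signal worth examining.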

Taliban propaganda and the use of AI

Niqab’s account on X was created in November 2023 and describes itself as a platform that “unmasks hidden truths and corruption” related to the former Afghan government, serving as “a complete archive of American corruption and betrayal” in Afghanistan. The account began posting in May 2024. In addition to promoting propaganda against the former republic government and against women’s rights activists and protesters, it has published two controversial audio clips which it attributed to the National Resistance Front (NRF) leader, Ahmad Massoud.

On 23 July 2024, Niqab shared a 17-second audio clip purportedly featuring Ahmad Massoud, in which he expresses concerns to an individual referred to as “Richard” about the harmful effects on the NRF of the Tajik government’s closure of its office in Tajikistan. Speaking in English, the alleged voice of Massoud reportedly seeks “Richard’s” advice and support on the issue.

Several anti-Taliban accounts responded to the audio clip, labelling it as propaganda and a product of AI. While Niqab did not provide more information on the identity of “Richard”, on 25 July 2024, a pro-Taliban X account claimed that the individual was Richard Bennett, UN Special Rapporteur on the situation of human rights in Afghanistan.

The post featured a supposed screen recording of a WhatsApp conversation in which the audio is played. However, the screen recording is clearly manufactured: the audio file is not rendered as an audio message normally appears in a WhatsApp conversation, and it finishes ‘playing’ – as represented by a dot moving along the visual representation of the recording – seven seconds before the audio itself ends, indicating the audio was overlaid on a mocked-up conversation.

While it is difficult to reach a firm technical conclusion on the audio itself, the screen recording is clearly fake and was produced to lend the audio credibility, making it highly likely the audio is falsified.

 

Figure 2: Comparison of the supposed WhatsApp screen recording of the Massoud-Bennett audio (top) with a generic voice message in WhatsApp (bottom). Note the Massoud-Bennett audio is represented as a line of dots, whereas a genuine WhatsApp audio message features a representation of the audio levels.

On 7 September 2024, Niqab released another audio clip purportedly featuring Ahmad Massoud, in which he allegedly expresses his frustration to German Foreign Minister Annalena Baerbock that the management of a German hotel had denied the NRF the use of its venue for an event commemorating the anniversary of Ahmad Shah Massoud’s death. An anti-Taliban account alleged that the clip was AI-generated: “Don’t be cheated. Don’t fall for these things in the age of artificial intelligence. Think about it: is it really rational for someone to be sending a WhatsApp voice message to a foreign minister?”

 

Remarks

Pro-Taliban propaganda channels, such as Al-Mersaad, Hindukush, and Niqab, consistently target anti-Taliban and opposition figures through their content. Although it is technically challenging to confirm the use of AI in the audio clips released by Niqab, such use would represent a new tactic in pro-Taliban propaganda, as well as a growing threat to the information space in Afghanistan. Nor has the use of AI to create fake audio clips by the Taliban or their supporters been limited to targeting domestic opposition groups, as evidenced by the apparent attempt to implicate Richard Bennett, noted above.
