In this sense, Afghanistan is a magnified example of how intentional targeting aimed at silencing women’s voices affects the women who experience it. These ‘levels of pressure’ (intentional targeting coupled with the pressure of social and cultural expectations) increase the risk of re-traumatisation. At the same time, this type of research is deeply needed: it enables those who experience this abuse to both be and feel seen, and it allows individuals, organisations and governments to pursue mechanisms for change.
Research of this kind can and should be conducted in a way that ensures those affected by this abuse aren’t re-traumatised, but this is particularly difficult to achieve online, where algorithms and notifications leave users with limited control over what they see and hear.
To limit re-traumatisation as much as possible, this research must be navigated thoughtfully and in consultation with the affected community. This is how Afghan Witness approached the Gender Hate Speech report, alongside anonymising interviewees and blurring images where necessary.
Who are the actors and why is this abuse happening?
It’s interesting — and was noted by the panel as unsurprising — that much of the abuse stems from pro-Taliban and ‘low-level-Taliban’ accounts. It is notable, too, that abuse often spiked in response to restrictions on women’s civic rights enacted by the Taliban: women who tried to speak out on issues such as the closure of their schools or meeting places were swiftly trolled online in an effort, in essence, to silence dissent.
Of course, this kind of abuse against women is a critical problem around the world and is by no means limited to Afghanistan. As noted, though, in the context of a country where women’s civic freedoms are so gravely limited by the group in power, and where the main perpetrators of this violence are members or supporters of that group, the online and offline effects on the women targeted are compounded.
The ‘whys’ behind this type of abuse against women are part of a broader conversation and body of research — critical work conducted by organisations such as Rawadari, MADRE, ICFJ, and others. Practically speaking, though, there are elements of the way platforms currently function that shape how women experience — and how perpetrators are enabled to inflict — this kind of abuse online.
Elon Musk’s 2022 purchase of Twitter, now X — the source of much of the research for the Gender Hate Speech report* — has resulted in decreased content moderation, stymied tools for accessing data on the platform, and the removal or stripping-back of teams responsible for monitoring human rights issues and mis- and disinformation. Musk famously champions ‘freedom of speech’; indeed, protecting it was one of his stated intentions in buying the platform. But it’s important to thread a needle with respect to how this plays out in practice, to ensure that those subjected to ‘free speech’ online aren’t silenced as a result of it. If an Afghan female activist is being silenced by online abuse, then that, too, is a free speech issue.
There is also a consideration here when it comes to platform users moderating or reporting abusive content. If someone being abused online has to manually remove or block their abusers – sometimes in their thousands – whilst potentially being traumatised by the content, this surely disincentivises them from participating in what has been touted as the virtual town square.
Additionally, with the majority of examples in the Gender Hate Speech report written in Dari/Farsi or Pashto, the platform’s ability to moderate content in languages other than English in part dictates the amount of abuse people might experience.
What are the possible solutions?
Many things can be done to address the challenges identified. Five recommendations, formed via consultation with a focus group of politically active Afghan women who have experienced online abuse, are listed alongside the report itself.
Shaharzad, Francesca and Nina also pointed to interventions such as regular updates to hate speech policies, expanded language capabilities for moderating hate speech on platforms, and increased platform accountability overall as good technical places to start.
In terms of policies and programs, they noted governments and international bodies could increase funding for research and resilience programs to build women’s networks and skill sets to manage the effects of online abuse, as well as to counter the abuse itself.
Overall, the discussion reinforced the importance of enabling women to maintain a voice and space for themselves online, through platform accountability and varied policies and programs.
Two final points were raised to conclude the discussion –