“Not Just Empty Threats: The Woman Who Faced Backlash for Challenging X”

In an era where the digital and real worlds intertwine in unprecedented ways, Australia’s eSafety Commissioner Julie Inman Grant has become the focal point in a controversial battle over online content regulation. Her recent ordeal underlines not only the personal risks faced by those who challenge powerful tech giants, but also the wider implications for internet governance and online safety.

A legal challenge against X

Earlier this year, Julie Inman Grant, in her role as Australia’s eSafety Commissioner, took the bold step of launching legal proceedings against X (formerly Twitter). The case centred on X’s refusal to remove a disturbing video of a stabbing incident linked to religious tensions in Sydney. The content, deemed to be in breach of Australian online safety laws, was the catalyst for Inman Grant’s legal action.

The aim was to force X to remove the video not just for Australian users, but globally, in a move intended to set a precedent for how international social media platforms should adhere to local rules. However, the case was eventually dropped after a Federal Court judge ruled that removing the content globally would be “unfair” and could potentially “be ignored or disparaged by other countries.”

The legal battle’s outcome

The legal dispute, although ultimately unsuccessful, sparked a severe and disturbing response from the online community. Following a tweet by X owner Elon Musk, in which he referred to Inman Grant as the “censorship commissar” to his massive audience of 196 million followers, she faced a barrage of abuse online. Musk’s comments escalated the situation, turning what had been a legal and regulatory issue into a highly personal and vicious campaign of harassment.

A Columbia University report on technology-facilitated gender-based violence, which used Inman Grant’s case as a key example, revealed that she had been mentioned in nearly 74,000 posts on X prior to the court proceedings. Despite being relatively unknown online before the incident, Inman Grant became the target of intense vitriol: most of these posts were negative, hateful or threatening. The analysis highlighted the use of demeaning language and gendered slurs, with terms such as “leftist Barbie” and “Captain Tampon” used to humiliate and attack her, reflecting a wider trend of misogynistic and harmful online behaviour.

The impact of online harassment

Inman Grant’s experience has highlighted the serious consequences of online harassment. The abuse she suffered was not limited to digital interactions, but extended into her real life. She received credible death threats, and her personal information was exposed online through doxing – a practice in which private details are made public to harass or intimidate individuals.

Inman Grant has spoken about the profound impact this harassment has had on her life. Australian authorities advised her not to travel to the US due to safety concerns, and members of her family have also been targeted. “There have been threats to my staff, my family, my safety – including credible death threats,” she explained. “I have had to involve federal and local police and change my activities.” The case highlights how online threats can translate into real-world risks. The intense harassment that Inman Grant faced demonstrates the dangers of digital abuse and the need for effective measures to protect individuals from such threats.

The role of social media platforms

The incident raises important questions about the responsibilities of social media platforms in moderating content and managing abuse. X’s approach of geoblocking the controversial video, rather than removing it entirely, was seen by many as inadequate. Although this complied with Australian regulations to some extent, it did not fully address the broader concerns of content moderation.

X’s global government affairs team viewed the outcome of the case as a victory for “freedom of expression”. This approach highlights an important debate about the balance between protecting users from harmful content and upholding the principles of free expression. The complexities of regulating global platforms, which operate across diverse legal and cultural landscapes, add another layer of difficulty to the discussion.

A wider context

Inman Grant’s travails are part of a larger pattern of conflict between regulatory authorities and tech companies. As social media platforms continue to grow their influence, the challenge of enforcing local rules while respecting global standards is becoming increasingly complex. The need for effective content moderation and user protections is more urgent than ever.

“Deepfake Porn Scandal Shakes Korean Schools”

On a seemingly normal Saturday, a university student we’ll call Ji-woo received an alarming message on her phone from an unknown number. “Your photos and personal information have been leaked. Let’s talk.” Panicked, Ji-woo opened the message to see what it was about. The anonymous sender had sent her a photo of herself from several years ago, an ordinary school photo that looked frighteningly familiar. Moments later, another photo arrived. This time, the image had been digitally altered to show her in a sexually explicit situation. The photo was fake, but the damage to Ji-woo’s peace of mind was very real.

Paralysed by fear, Ji-woo decided not to respond to the sender, but the manipulated photos kept coming. Each had her face superimposed on another person’s body, manipulated with alarming realism using advanced deepfake technology. “I felt humiliated and incredibly alone,” Ji-woo later told a reporter. However, her experience is becoming worryingly common in South Korea, where deepfake pornography is turning into a nationwide crisis.

The rise of deepfake technology

Deepfakes – digitally manipulated videos or images that superimpose one person’s face onto another body using artificial intelligence – are not new. Initially, the technology was seen as a novel and entertaining application of AI, but it has quickly evolved into a malicious tool used to create non-consensual pornographic content. In South Korea, the issue has reached unprecedented levels, especially among high school and university students.

Journalist Ko Narin was the first to highlight the severity of this crisis. A few weeks ago, Ko’s investigative report revealed that deepfake pornography rings were operating at two major universities in South Korea. But the problem ran much deeper than she had initially thought. Ko began searching across various social media platforms and discovered several Telegram groups where users were turning images of women they knew – friends, classmates, even strangers – into explicit deepfakes in seconds.

“Every minute, new photos of girls were being uploaded and requests were being made to turn them into deepfakes,” Ko said. The journalist’s findings not only exposed a horrific violation of privacy, but also a systematic network of exploitation.

A disturbing subculture

Ko found a deeply disturbing subculture on Telegram, where the groups were not limited to university students. Some chat rooms specifically targeted high school and even middle school students. In these rooms, often labeled “abuse rooms” or “friends of friends rooms,” a disturbing economy of victimization thrived. If enough explicit content was created using photos of a specific student, she could be given her own dedicated chatroom.

These groups had strict entry rules, such as requiring members to post multiple photos of a person along with personal information such as her name, age and place of residence. One chatroom Ko observed required members to share at least four photos of someone before joining. “The most horrifying thing I found was that one group was targeting minor students from a school, and it had more than 2,000 members,” Ko said.

A national emergency

Ko’s revelations have sparked outrage across South Korea. Women’s rights activists, who have long been vocal against digital sex crimes, immediately sprang into action. They began scouring Telegram channels for evidence and offering support to victims. In just a few days, more than 500 schools and universities were identified as targets, and that number is expected to rise. Shockingly, many of the victims are believed to be under the age of 16, South Korea’s age of consent.

Another victim, Heejin, spoke of her growing anxiety after realizing the scope of the crisis. “I kept thinking, did this happen because I uploaded my photos to social media? Should I have been more careful?” she confessed. Such feelings of guilt and paranoia are becoming widespread among young women in South Korea. Many have since deleted their photos from social media or deactivated their accounts altogether, fearing they could be targeted next.

University student Ah-eun, whose peers have also been victimized, expressed frustration at being forced to change her online behavior. “It’s so unfair that we have to censor ourselves when we haven’t done anything wrong,” she said.

Legal challenges and Telegram’s role

Central to the scandal is the messaging app Telegram, known for its encryption and anonymity. Unlike public websites, which authorities can monitor and from which they can request the removal of harmful content, Telegram operates through private, encrypted channels. This makes it a haven for people involved in illegal activities, including the distribution of deepfake pornography: even when such content is reported as harmful material, the platform remains a refuge for those who share it.