
“How Social Media Algorithms Expose Boys to Violence and Harmful Content”

In the age of social media, algorithms are the invisible hands that shape our online experiences, curating content based on our interactions and preferences. For most users, this means seeing content that matches their interests and hobbies. For some, however, particularly vulnerable teens, this algorithmic curation can take a nasty turn. Recent revelations have shed light on how social media platforms, despite their stated intentions, can inadvertently expose young users to violent and harmful content.

Cai's Experience

Cai, a 16-year-old from the UK, found himself at the centre of this disturbing phenomenon in 2022. Initially, his social media feeds were full of harmless content, such as videos of cute dogs.

His experience quickly turned worse, however, when disturbing content began to appear: videos of accidents, aggressive fights, and misogynistic comments. The shift from innocent to harmful content felt sudden, leading Cai to question why such videos were being recommended to him.

Inside TikTok: Andrew Kaung's Experience

Former TikTok user safety analyst Andrew Kaung offers insight into how these algorithms work. During his tenure from December 2020 to June 2022, Kaung grew concerned about the nature of the content being suggested to teenage users. His role involved examining the algorithmic recommendations provided to users, including 16-year-olds, and he found a disturbing pattern: teenage boys were being shown violent, pornographic and misogynistic content, while teenage girls were being recommended a very different kind of post.

Kaung's findings were a significant concern. He said TikTok uses AI tools to moderate content, but these systems are not infallible. Videos that are not immediately removed or flagged automatically often remain on the platform until they reach a certain threshold of views, which can be as high as 10,000. This delay in moderation meant that harmful content could reach young users before it was reviewed by human moderators.
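To make that gap concrete, here is a minimal sketch of how a threshold-based review policy of this kind could behave. The 10,000-view figure is the one reported above; everything else (the class, the function, the flag-score cutoff) is hypothetical and not TikTok's actual system.

```python
# Illustrative sketch only: a simplified moderation queue in which a video that
# automatic filters do not remove outright is escalated to human review only
# after crossing a view threshold. All names and numbers other than the
# 10,000-view figure reported in the article are hypothetical.

from dataclasses import dataclass

HUMAN_REVIEW_VIEW_THRESHOLD = 10_000  # threshold reported in the article

@dataclass
class Video:
    video_id: str
    auto_flag_score: float  # 0.0 (benign) to 1.0 (clearly violating)
    views: int

def route_video(video: Video) -> str:
    """Decide what happens to a video under this simplified policy."""
    if video.auto_flag_score >= 0.9:
        return "removed_automatically"
    if video.views >= HUMAN_REVIEW_VIEW_THRESHOLD:
        return "queued_for_human_review"
    # Below the threshold the video keeps circulating, which is the gap the
    # article describes: young users can see it before anyone reviews it.
    return "still_live_unreviewed"

print(route_video(Video("v1", auto_flag_score=0.6, views=4_200)))   # still_live_unreviewed
print(route_video(Video("v2", auto_flag_score=0.6, views=12_000)))  # queued_for_human_review
```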

Challenges at Meta

Kaung's previous role at Instagram's parent company, Meta, presented a different set of challenges. He observed that while Meta's AI tools flagged and removed a substantial amount of harmful content, the platform relied heavily on user reports to identify additional problematic videos. This system was not always effective, especially for younger users, who might encounter and view harmful content before it was reported. Kaung raised concerns about these issues within both companies but often faced resistance, owing to worries about the cost and workload of implementing sweeping changes. Although some improvements have been made since then, both TikTok's and Meta's platforms still have significant flaws in protecting vulnerable users.

The main problem lies in the way the algorithms are designed. Both TikTok and Instagram use engagement metrics to guide their recommendations: they prioritize content that generates high engagement, whether positive or negative. Videos with high levels of engagement (likes, comments, shares) are more likely to be suggested to other users. This can inadvertently promote harmful content if it attracts users' attention, even when that attention is negative or upsetting.

Cai's experience illustrates the problem. Despite his efforts to signal disinterest in violent or misogynistic content, the algorithm continued to push such videos to him. His attempts to steer the algorithm by reporting or disliking videos often had the opposite effect, increasing the visibility of similar content in his feed. This cycle of exposure can have serious psychological effects, contributing to a distorted perception of reality and reinforcing harmful beliefs. Kaung's analysis highlighted another troubling aspect: disparities in content recommendations based on gender.
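A toy ranking function illustrates the dynamic: if every interaction, including reports and dislikes, feeds the same engagement score, the most upsetting post can outrank the most benign one. The weights and field names below are hypothetical, not TikTok's or Instagram's actual model.

```python
# Illustrative sketch only: an engagement-weighted ranker in which any
# interaction counts as a signal of interest, so content a user reacted to
# negatively can still rise in the feed.

def engagement_score(post: dict) -> float:
    # Every interaction, positive or negative, adds to the score.
    return (
        1.0 * post["likes"]
        + 2.0 * post["comments"]
        + 3.0 * post["shares"]
        + 1.5 * post["dislikes_or_reports"]  # negative attention still counts as attention
        + 0.5 * post["watch_time_seconds"]
    )

posts = [
    {"id": "puppy_video", "likes": 300, "comments": 20, "shares": 10,
     "dislikes_or_reports": 0, "watch_time_seconds": 1_500},
    {"id": "violent_fight_clip", "likes": 120, "comments": 400, "shares": 90,
     "dislikes_or_reports": 250, "watch_time_seconds": 4_000},
]

# The upsetting clip outranks the benign one purely because it provoked more
# interaction, which is the cycle Cai ran into when reporting videos.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], round(engagement_score(post)))
```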

He found that teenage boys were more likely to be exposed to violent and misogynistic content, while teenage girls were generally recommended content related to music, makeup, and other non-violent topics. This disparity can be traced to the interests users express when signing up, which often inadvertently categorize them by gender. The algorithms use these declared interests, along with engagement metrics, to customize recommendations. For teenage boys, an interest in combat sports or controversial influencers can lead to a higher likelihood of being shown extreme or violent content; for teenage girls, an interest in beauty and pop culture leads to a very different feed.
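A simplified cold-start sketch shows how declared sign-up interests could seed two very different feeds before any engagement data exists. The interest-to-topic mapping below is entirely hypothetical and only illustrates the categorization effect described above.

```python
# Illustrative sketch only: declared sign-up interests seed the initial topic
# pool, so two brand-new accounts that differ only in stated interests start
# from very different places. The mapping is hypothetical.

INTEREST_TO_TOPICS = {
    "combat sports": ["mma_highlights", "street_fight_compilations", "controversial_influencers"],
    "gaming": ["game_clips", "esports", "streamer_drama"],
    "makeup": ["tutorials", "product_reviews", "pop_culture"],
    "music": ["new_releases", "concert_clips", "pop_culture"],
}

def seed_feed(declared_interests: list[str]) -> list[str]:
    """Return the initial topic pool a brand-new account is drawn from."""
    topics: list[str] = []
    for interest in declared_interests:
        topics.extend(INTEREST_TO_TOPICS.get(interest, []))
    return topics

print(seed_feed(["combat sports", "gaming"]))  # pool already adjacent to extreme content
print(seed_feed(["makeup", "music"]))          # pool centred on beauty and pop culture
```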

