Source: AlgorithmWatch
License: Creative Commons Attribution 4.0 International (CC BY 4.0).
by John Albert
Since personalized recommender systems have the power to influence what we think and do, we need to understand the risks they pose to our society. The DSA’s new transparency regime is a promising step forward, but we still need external, adversarial audits by independent research facilities.
If social media is the water we’re swimming in, there are more than a few reasons to be alarmed by its undercurrents. Hate speech is trending on Twitter following Elon Musk’s chaotic takeover of the company. TikTok—the most influential platform among teenage users—bombards vulnerable young people with content promoting eating disorders and self-harm. And despite early warnings, a range of platforms including Facebook and YouTube continued carrying extremist content that helped enable the January 8th antidemocratic insurrection in Brazil—and turned a profit in doing so.
These examples are the results of a growing body of public interest research that calls out social media platforms for their role in facilitating the spread of toxic content. Now, thanks to the Digital Services Act (DSA), there is expanded scope for researchers seeking to formally access and make sense of platforms’ internal data. This kind of research will be crucial to help identify risks emerging from online platforms.
While the DSA’s new transparency regime is a promising step forward, it will take some time before we know how effective it really is. Meanwhile, our collective ability to hold platforms accountable will continue to rely on the work of adversarial researchers—researchers who are capable of, and willing to, employ tools that shine a light on the inner workings of platforms’ opaque algorithmic systems.