On Wednesday, Senators John Curtis (R-UT) and Mark Kelly (D-AZ) introduced the Algorithm Accountability Act, targeting the recommendation algorithms used by social media platforms. The legislation seeks to amend Section 230 of the Communications Decency Act, which currently shields online platforms from legal liability for user-generated content and allows for good-faith content moderation.
The proposed act would require these platforms to take reasonable care in designing, training, testing, deploying, operating, and maintaining recommendation algorithms to prevent potential bodily harm or death. Under this amendment, platforms could lose the protection of Section 230 if they could reasonably predict that their content recommendations might result in physical harm.
Victims of such harm, or their representatives, could pursue legal action against tech platforms for damages if they believe the platforms failed to uphold this duty of care. The legislation would apply only to for-profit social media platforms with more than one million registered users.
The bill’s sponsors assert that it would not violate First Amendment rights. Similar to the Kids Online Safety Act, the Algorithm Accountability Act would not prevent platforms from offering information users directly seek or from organizing feeds chronologically. It would also prohibit enforcement based on the viewpoints of users.
Curtis has expressed concern about Section 230's role in fostering a detrimental social media landscape, linking it to incidents of violence, including the September slaying of conservative activist Charlie Kirk. Critics of similar legislation, including the Electronic Frontier Foundation, warn that the act could push platforms to over-remove content that might be construed as a violation, undermining the availability of resources meant to address harmful behavior.
This development follows a significant lawsuit against major platforms like YouTube and Meta, where recommendation algorithms were implicated in radicalizing individuals—a case dismissed due to Section 230 protections. The Algorithm Accountability Act could potentially alter the legal landscape for tech companies regarding various types of user-generated content.
Source: https://www.theverge.com/policy/824054/algorithm-accountability-act-section-230