
House Lawmakers Announce Bill Targeting Tech Algorithms

Bill Would Remove Some Third-Party Content 'Immunity' Held by Social Platforms
(Photo: Book Catalog via Flickr)

Democratic lawmakers on the House Committee on Energy and Commerce announced new legislation Thursday that would rein in tech algorithms on platforms with more than 5 million unique monthly visitors. The news comes amid a busy month for Facebook, in which a former product manager testified before the Senate about the platform's allegedly questionable data policies.


The bill - called the Justice Against Malicious Algorithms Act - would amend Section 230 of the Communications Decency Act, which shields websites and online platforms from liability for third-party content. It was announced by Committee Chairman Frank Pallone Jr., D-N.J., along with Reps. Mike Doyle, D-Pa.; Jan Schakowsky, D-Ill.; and Anna Eshoo, D-Calif. House Democrats will formally introduce the bill on Friday.

If passed, the law would amend Section 230 to remove "absolute immunity in certain instances" - including lifting a liability shield when an online platform knowingly or recklessly uses an algorithm to recommend content that can lead to physical or emotional injury, the lawmakers said.

In a statement, Pallone said, "Social media platforms like Facebook continue to actively amplify content that endangers our families, promotes conspiracy theories, and incites extremism to generate more clicks and ad dollars."

Reached on Thursday, a Facebook spokesperson declined to comment on the specific bill.

'Malicious Algorithms'

The lawmakers noted Thursday that prominent platforms leverage users' personal histories to recommend or prioritize content; the bill would strip the immunity of platforms that, they say, use such algorithms to promote harmful content.

The bill would target "malicious algorithms" that rely on personalization, they said, but would not apply to web-hosting services or small platforms below the visitor threshold.

Pallone added: "The time for self-regulation is over, and this bill holds them accountable. Designing personalized algorithms that promote extremism, disinformation, and harmful content is a conscious choice, and platforms should have to answer for it."

Rep. Doyle added: "We finally have proof that some social media platforms pursue profit at the expense of the public good, so it's time to change their incentives, and that's exactly what [this bill] would do. … Section 230 would no longer fully protect social media platforms from all responsibility for the harm they do to our society."

Commenting on the proposed bill, Neil Jones, a cybersecurity evangelist for the firm Egnyte, tells ISMG, "[This bill] is a solid first step to improve policing of online platforms. … Should [it get] passed, I would anticipate significant legal action to follow … [and] that social media platforms will push back strongly against increasing regulation."

Roger Grimes, a data-driven defense evangelist for the security firm KnowBe4, adds, "This is tricky because it touches on First Amendment freedoms and also Section 230, which is widely considered fundamental to why the internet became the internet. … Even if there becomes consensus on what is to be blocked, can it be accurately blocked at scale?"

Any level of censorship, Grimes says, "is a tough nut to crack."

(Photo: Jeremy Bezanger via Unsplash)

Whistleblower Case

The Justice Against Malicious Algorithms Act is not the first attempt to amend Section 230, but calls for reform gained new urgency this month when Facebook whistleblower Frances Haugen advocated similar changes to the law's liability protections while testifying before the Senate Commerce Committee's subcommittee on consumer protection.

Haugen also told senators that Facebook retains control of its algorithms and that "the company intentionally hides vital information from the public, from the U.S. government, and from governments around the world."

Responding in a Facebook post, the company's CEO, Mark Zuckerberg, said the platform's internal research had been misrepresented to create a false narrative. Stressing platform safety for younger users, he added: "It's very important to me that everything we build is safe and good for kids."

Facebook Publicly Responds

Facebook's Vice President of Global Affairs Nick Clegg, the former deputy prime minister of the U.K. in David Cameron's government, made the rounds on several talk shows last weekend following the congressional testimony. Clegg said the platform will introduce new tools to divert users from harmful content, limit political content and offer additional parental controls.

Clegg said some of the features would encourage users who spend extended time on the platform to "take a break" and would steer younger users away from harmful content.

Clegg told ABC that Facebook will undergo regular, independent audits of the data surrounding its content.

In a hearing before a House Science, Space, and Technology subcommittee last month, cybersecurity and computer science experts from Northeastern University, New York University and the University of Illinois Urbana-Champaign similarly questioned Facebook's business model. Its user engagement-based approach, they said, may perpetuate the flow of misinformation (see: Experts Slam Social Media Platforms' Data Policies).

Facebook's Clegg has also recently urged Congress to set rules for broader internet regulation. In an op-ed in USA Today this week, Clegg wrote, "We've proposed ways to reform Section 230 of the Communications Decency Act, including requiring platforms to be more transparent about how they remove harmful and illegal content.

"We support efforts to bring greater transparency to algorithmic systems, offer people more control over their experience and require audits of platforms’ content moderation systems.

"It’s long past time for Congress to set clear and fair rules," Clegg said.

TikTok Concerns

Facebook is not the only platform facing pressure from lawmakers. On Tuesday, the chairman of the Senate Homeland Security and Governmental Affairs Committee sent a letter to video-sharing platform TikTok about extremist content.

In it, Sen. Gary Peters, D-Mich., questioned the platform about users' roles in allegedly organizing, and communicating about, the Jan. 6 U.S. Capitol insurrection. Peters made eight requests of the company's CEO, Shou Zi Chew, seeking to understand how TikTok combats such content. The senator also sent a letter to the FBI and the Department of Homeland Security on their existing and planned tactics for countering the same type of content.

TikTok did not immediately respond to ISMG's request for comment.


About the Author

Dan Gunderman

Former News Desk Staff Writer

As a staff writer on the news desk at Information Security Media Group, Gunderman covered governmental and geopolitical cybersecurity news from across the globe. Previously, he was the editor of Cyber Security Hub, or CSHub.com, covering enterprise security news and strategy for CISOs, CIOs and other top decision-makers. Before that, he was a reporter for the New York Daily News, where he covered breaking news, politics, technology and more. Gunderman has also written and edited for news publications including NorthJersey.com, Patch.com and CheatSheet.com.



