Could an Open Source Policy Make Twitter Unsafe? Debate Rages Over Plan To Make Twitter’s Algorithm Visible to the Public

May 11, 2022

If and when Elon Musk takes charge of Twitter later this year, he may make the social media giant’s content promotion algorithm public. There is natural interest in what causes the platform to seemingly favor some accounts and tweets over others, and certainly a valid case to be made for transparency in the public interest. But some industry experts argue that an open source listing of Twitter’s algorithm on a site like GitHub would also be an invitation to threat actors.

Could the potential security problems created by open access to Twitter’s inner workings outweigh the public good? It could depend on how much data is ultimately shared, and in what way.

Greater visibility into Twitter’s algorithm could come with more risk to platform users

Proponents of the changes Musk is looking to make see it as a vital protection for free speech in online discourse, with Twitter having become something of the “virtual town square” for discussion about politics and policy. A major component of this, and one that directly led to Musk’s takeover bid, is a growing perception that the platform heavily favors certain political messaging (usually left-wing in orientation) to the point of censoring alternate views, or any information that might be unfavorable to certain political fortunes.

Posting Twitter’s algorithm to an open source platform such as GitHub could provide public evidence that the platform is not quietly favoring particular sides of a debate or playing defense for certain powerful interests. However, security experts hasten to point out that it could also make the platform a more dangerous place to be.

The ability to see the (currently heavily protected) internal code of Twitter would allow hackers to scan for vulnerabilities, and could also enable the promotion of malicious content without any real hacking required. At the very least, it could teach spammers how to ply their trade more effectively. Another element Musk has focused on is the removal of automated bots from the platform, and making Twitter’s algorithm open source could make that effort considerably harder.

Potential for safety risks, abuse on open source Twitter?

Greater general awareness of how Twitter’s algorithm works would nevertheless likely be very popular among users, despite some misgivings about security issues and Musk’s intentions for content moderation going forward. But Musk has not yet firmly committed to any specific course of action, and public visibility into Twitter’s algorithm might end up taking a different form than simply dumping its code to open source platforms.

Governments around the world, most notably in the EU, are using regulatory pressure to provide more insight into the workings of big tech platform algorithms without forcing them to publicly show all of the code and data that makes the system run. Something similar might end up happening here, with summary “privacy labels” describing how the algorithms work and giving users a greater ability to choose whether those algorithms factor into their personal content delivery; former Twitter head Jack Dorsey has endorsed such a plan.

On that point, Twitter engineers point out that there is not just one “algorithm” at work on the platform but a complicated collection of them that incorporate machine learning elements that are not easily displayed. Information about personalized Twitter recommendations might not be useful without also disclosing the sort of personal data that cannot be disclosed. Displaying the source code for Twitter’s algorithm(s) could thus create a situation in which there is still much the public does not know about its inner workings, but there is an increased opportunity for threat actors to abuse the platform.
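To illustrate the engineers’ point, consider a deliberately simplified sketch (hypothetical, not Twitter’s actual code): even a fully public ranking function reveals little about real-world behavior, because the output depends on learned model weights and private per-user data that would not be published alongside it.

```python
# Hypothetical toy ranking step, NOT Twitter's actual code.
# Even with this function public, the timeline it produces depends
# entirely on the learned `weights` and the private `user_features`
# that stay on the company's servers.

def score_tweet(features, weights):
    """Linear relevance score: dot product of feature values and learned weights."""
    return sum(f * w for f, w in zip(features, weights))

def rank_timeline(candidate_tweets, user_features, weights):
    """Order candidate tweets from highest to lowest score."""
    scored = [(score_tweet(user_features[t], weights), t) for t in candidate_tweets]
    return [t for _, t in sorted(scored, reverse=True)]
```

With one (hidden) weight vector this sketch ranks tweet “a” first; with another it ranks tweet “b” first, even though the visible code is identical in both cases; that is the sense in which source code alone would leave much of the system’s behavior opaque.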
