Government pressuring tech firms to join big brother state

The pressure on social media companies to limit or take down content in the name of national security has never been greater. Resolving any ambiguity about how much the Obama administration values the companies’ cooperation, the White House on Friday dispatched the highest echelon of its national security team – including the attorney general, the FBI director, the director of national intelligence, and the NSA director – to Silicon Valley for a meeting with technology executives chaired by the White House chief of staff himself. The agenda for the meeting tried to convey a locked-arms sense of camaraderie, asking, “How can we make it harder for terrorists to leveraging [sic] the internet to recruit, radicalize, and mobilize followers to violence?”

Congress, too, has been turning up the heat. On December 16, the House passed the Combat Terrorist Use of Social Media Act, which would require the president to submit a report on “United States strategy to combat terrorists’ and terrorist organizations’ use of social media.” The Senate is considering a far more aggressive measure, which would require providers of Internet communications services to report to government authorities when they have “actual knowledge” of “apparent” terrorist activity (a requirement that, because of its vagueness and breadth, would likely harm user privacy and lead to over-reporting).

The government is of course right that terrorists use social media, including to recruit others to their cause. Indeed, social media companies already have systems in place for catching genuine threats, incitement, and actual terrorist activity. But the notion that social media companies can or should scrub their platforms of all potentially terrorism-related content is both unrealistic and misguided. Mandating affirmative monitoring beyond existing practices would sweep in protected speech and turn the social media companies into a wing of the national security state.

The reasons not to take that route are both practical and principled. On a technical level, it would be extremely difficult, if not entirely infeasible, to screen for actual terrorism-related content in the 500 million tweets that are generated each day, or the more than 400 hours of video uploaded to YouTube each minute, or the 300 million daily photo uploads on Facebook. Nor is it clear what terms or keywords any automated screening tools would use – or how using such terms could possibly exclude beliefs and expressive activity that are perfectly legal and non-violent, but that would be deeply chilled if monitored for potential links to terrorism.
