Tuesday, December 20, 2016

The Social Media Nuclear Option

I'm concerned that the rules of engagement for social engineering in elections may be changing under our feet. 

There's an old axiom about Congress: don't watch for changes in the laws, watch for changes in the rules. That principle is behind the dramatic name of the so-called 'nuclear option,' which lets a simple majority override a filibuster.

The governing rule in social media has been political neutrality. The political leanings of the typical social media user have shifted from the left toward the center as the internet population stretched beyond the young and tech-savvy. Throughout this shift, platforms from Facebook to Reddit have been adamant about their neutrality and their deference to free speech. Incidents of alleged censorship have even led to minor insurrections on Reddit and attempts to establish new "pure" news aggregation sites (see Voat).

Should we trust that news aggregators and social media are politically neutral?

Censorship is just one form of political influence. In the pre-internet days it might have been a deft weapon, but now that every user has a plethora of side channels for communication, censorship is easier to identify. Mark Zuckerberg is adamant that Facebook is politically neutral and opposed to "fake" content. The validity of a news story is relatively easy to check - tools like Snopes make it straightforward for concerned users to call out blatant falsehoods.

What's harder to identify is content manipulation. Recently a Reddit administrator was publicly shamed, and apologized, for making effectively untraceable changes to the content of users' comments. News sources reported on WikiLeaks' release of Clinton campaign emails while acknowledging that they had no way to determine whether the emails had been manipulated.

Still, content manipulation can be combated with tools like digital signatures, which fail to verify if the content of a post is altered.
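As a minimal sketch of the idea (Python, using the third-party cryptography package and an Ed25519 key purely for illustration):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author signs the post once; anyone holding the public key can then
# detect any later edit, however small.
author_key = Ed25519PrivateKey.generate()
public_key = author_key.public_key()

post = b"My original comment, exactly as I wrote it."
signature = author_key.sign(post)

public_key.verify(signature, post)  # passes silently: content is intact

try:
    public_key.verify(signature, b"My original comment, quietly edited.")
except InvalidSignature:
    print("Signature check failed: this post was modified after signing.")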

The knot in my stomach comes from the mechanisms that govern Facebook's news feed, Twitter's tweet stream, Reddit's front page, and every other corner of social media where users see content filtered and sorted by proprietary algorithms designed to show us the most interesting and most popular content. These mechanisms are secret and hugely influential. From a technical perspective, it would be easy to tweak the algorithm to show five percent more pro-Trump posts or to decrease the rank of Bernie Sanders supporters' posts.
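To make that concrete, here's a toy ranking function (all names invented; this is not a claim about how any real feed works) showing how small such a thumb on the scale could be:

from dataclasses import dataclass

@dataclass
class Post:
    upvotes: int
    age_hours: float
    topic: str

def rank_score(post: Post) -> float:
    # A generic popularity-times-recency score, standing in for the
    # "most interesting and most popular" ranking described above.
    score = post.upvotes / (1.0 + post.age_hours)
    # Two buried lines are all a hidden bias needs:
    if post.topic == "pro_trump":
        score *= 1.05  # the hypothetical five percent boost
    return score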

As demonstrated by Reddit admin /u/spez, such subtle changes can be made with little oversight and zero transparency. Reddit even notes that its vote-counting system is necessarily private, to protect against external vote manipulation.

In the land of traditional media, bias in headlines is well understood. Fox News leans right, MSNBC leans left. Their editors curate the stories, and we know what's going on.

In social media, someone or something is curating the stories we read. We don't get to pick between left-leaning and right-leaning social media. There's just one Facebook, and we need to realize that Facebook may already be manipulating elections via subtle curation.

I am not claiming that Facebook is manipulating elections. I'm claiming that the capability exists for Facebook to do so in an unchecked and difficult-to-detect manner. Doing so would be an unprecedented change in the accepted rules and norms of the internet - Facebook's nuclear option.

As long as Facebook's news feed algorithm is proprietary and unverifiable, we need to treat it as potentially malicious.

P.S. The Guardian has a similar op-ed about this topic with additional references.



Sunday, December 4, 2016

The Internet's Next Revolution

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The problem is trust

I first wrote this in a letter to my brother, but I decided to share it more broadly.  I'll try to describe the problem without devolving into flowery language, but forgive me if I stray from pragmatism a little.

With this past election I've been pondering the problem of trust. The internet (particularly social media) rushed us into an age where everyone simultaneously has a megaphone and headphones. Stories are shared left and right without any clear confirmation.

Case in point: the Dakota Access Pipeline (DAPL). I constantly see chatter back and forth about whether the tribe attended the zoning meetings, whether they were just greedy, whether protesters were funded by train companies, etc. With the plethora of news sites starting up, shutting down, and starting up again, it's hard to tell who's legit.

In a search for a solution, I'm thinking about a few known problems and approaches.

Establishing Trust


In cybersecurity we use a chain of trust. Websites have certificates issued by a "root" certificate authority (e.g., Verisign). My PGP public key is on my Facebook profile, so I can sign my emails with a key that folks can verify (using Facebook as my root of trust). CNN uses a signed certificate (and the notoriety of the name “CNN”), so I can have a reasonable degree of confidence that when I’m reading a story on CNN.com, it’s actually from CNN and not from someone running a fake cnn.com.
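The web half of that chain is easy to poke at from code. A quick sketch (Python standard library; cnn.com as in the example above):

import socket
import ssl

# The default context validates the server's certificate against the
# system's trusted root authorities - it walks the chain of trust for us
# and raises an error if the chain doesn't check out.
ctx = ssl.create_default_context()
with socket.create_connection(("cnn.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="cnn.com") as tls:
        cert = tls.getpeercert()
        print("subject:  ", cert["subject"])
        print("issued by:", cert["issuer"])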

Motivation

As I’ve tried to market my book, I’ve learned that marketing is not as easy as I imagined it to be. It’s really hard to get an idea into people’s heads. Even folks that I’ve been friends with on Facebook for a long time have little to no idea that I wrote a book. Being able to expose an idea to a lot of people is immensely valuable, and I’m just starting to appreciate that. Getting people to do something (e.g., installing a browser plugin to verify the sources of documents they read) is even harder and close to impossible.

Crawling forward


Throughout the election there were tons of organizations (NPR, factcheck.org, etc.) vetting claims and accusations. Shortly after the election and the news of Facebook spreading false stories, some students came up with a fancy algorithm for verifying articles. Unfortunately none of this seems to have made any difference. Message boards are still filled with back-and-forth unverified garbage.


What to do?


I think this situation demands some kind of dramatic shift in how we approach news media and information in general, and I think the shift needs to be subtle and cultural. The answer isn't just to provide a technological solution; it needs to change how non-techy people look at information.

I'm far from a solution, but here are my approaches to this problem:

1. Make digital signing the norm, and make it obvious. People are starting to get used to seeing the little lock icon telling them their browser connection is secure. I want every news article to be signed by its author, and again by its editor. If possible I’d like each fact in the article to be signed and referenced with its source (see the sketch after this list).

2. Push conflicting views on people. Sometimes I’m a little worried that the news I read is so consistently and adamantly anti-Trump. Even NPR is pretty soundly critical. It makes me scared that I’ve unwittingly boxed myself into a liberal echo chamber.
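Here's the sketch promised in item 1: a layered signature in which the editor countersigns the author's already-signed article. It's Python again, with keys generated inline purely for illustration; a real system would need key distribution and a root of trust, as discussed above.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()
editor_key = Ed25519PrivateKey.generate()

article = b"Full text of the news article..."

# The author signs the article itself.
author_sig = author_key.sign(article)

# The editor signs the article *plus* the author's signature, vouching
# for both the text and its attribution.
editor_sig = editor_key.sign(article + author_sig)

# A reader (or a browser plugin) verifies both layers; either call raises
# InvalidSignature if anything was altered.
author_key.public_key().verify(author_sig, article)
editor_key.public_key().verify(editor_sig, article + author_sig)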

How to do it?


Finally we arrive at the golden ticket to solve this mess. Or rather, where the golden ticket would be if I had one. I’ve done my best to articulate the need, but that’s far from enough. I don’t have a clear idea of the how, but here are some off-the-wall ideas:

  • Create a new HTML equivalent supporting embedded references and/or citations. Perhaps a "VHTML" (Verifiable Hypertext Markup Language) extension of HTML that allows embedding references and proof of authorship inside text. Browsers could then display unverified or partially verified text in a style different from verifiable sources (see the sketch after this list).
  • Produce a new (Google News style) aggregator that aims to provide conflicting views on everything.
  • Create a web crawler to construct a “chain of trust” graph for news verification.
  • Create a “badge” or “widget” that can be installed on a web page (kinda like a captcha) that indicates the human author of the content signed off on it and links to sources.
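To make the VHTML idea concrete, here's roughly what such markup might look like. Every tag, attribute name, and signature value here is made up; this is a sketch of the concept, not a spec:

<p vhtml-author="tyler.asc" vhtml-sig="iQEcBAEB...">
  The pipeline was rerouted after the zoning meeting,
  <claim vhtml-source="https://example.gov/meeting-minutes"
         vhtml-sig="hQIMA9x3...">which the tribe attended</claim>.
</p>

A browser that understood the extension could render the signed claim normally and gray out anything it couldn't verify.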

If you read this far, thanks! I’m curious what you think.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJYRFb1AAoJEAY9AbSjDRw75GAH/04b+3Twq4ZF9DMz5LkXR1Hb
NcFUC7TrcVQZ7a/gouvmOZdbvDKr5Y0XU0IiGDI5OPqmh5zyohzevIRuRTHY+rHt
1nST8Dg+CATrEqObNXHQPE7mhYcAUxMasIkA3C3cU/T8yWMvSJACQtsWptFd3SfH
swNL+BRULg98elv//9ose2Reeh8m5OQE97q7rGR2j0b9/eStueAHq5KCk+xHLGiw
lGpe9xgJkGOou6aOh8ID/xrbDTOMO6twjzThD8/0je2zokhS9N0BaYAHbVX/EAL8
lM8Q7q1tWCD/CyWjDV1Eq4sUWHfhSaDvN8gRw0SCxoWvELYq4w7DQVJmtSevYoc=
=a6zB
-----END PGP SIGNATURE-----

How to verify this post

Unfortunately this is not nearly as easy as I'd like it to be. 
  • Install gpg tools or an equivalent gpg package.
  • Copy my public key from Facebook to a plain text file called tyler.asc
  • Import my public key
gpg --import tyler.asc
  • Copy the PGP portion of the post (from BEGIN PGP SIGNED MESSAGE through END PGP SIGNATURE) to a plain text file, e.g., testcopied.txt.
  • Verify the signature with GPG:
MacBook-Air:Documents tyler$ gpg --verify testcopied.txt 
gpg: Signature made Sun Dec  4 11:48:37 2016 CST using RSA key ID A30D1C3B
gpg: Good signature from "Tyler Smith (Personal GPG Key) <tylerhesthedude@gmail.com>"