[I wrote this a few years ago, but it is still relevant, perhaps even more so now -PD]
I’ve been a privacy advocate for a long time; back in the mid-90s I’d wear my pro-encryption T-shirt while walking around Boston Common, both to support Phil Zimmermann’s defense fund and to enact my own small protest against government restrictions on free speech.
I’m also a big fan of Cory Doctorow’s writing, and a few months ago I read both Little Brother and Homeland, his vision of a not-too-distant dystopian United States in which Homeland Security mounts an all-out offensive against freedom in the name of safety. The books are frightening because it’s easy to see a path from where we are right now to the world he depicts. I stocked up on tin foil after finishing them.
I resolved to do my part to help secure the basic human right of freedom of speech, even if in just a small way, by setting up a Tor relay on one of my servers. I run a small business and have ample bandwidth and compute cycles, and I felt that helping the Tor network grow was a great way to participate in the free-speech movement.
The Tor network architecture uses a three-hop circuit. A user connects to the network via an entry node (or bridge); the next hop is to a relay, and the last is to an exit node, which makes the final connection to the service the user wants to use. Bridges and relay nodes are equivalent in terms of how they are set up, and a bridge can be either public or hidden, the latter being used to help obscure the initial connection to the Tor network in regimes where network traffic is heavily scrutinized or suppressed. You can read full details of the architecture on the Tor Project’s website.
Exit nodes carry the most risk, since their operators appear to be the source of whatever traffic leaves the network, and so I decided to run a relay. It takes only a few minutes to set this up on a Linux distribution…a download and a few configuration file tweaks and you are up and running. I gave the node 1 MB/s of bandwidth so that it would have a good chance of being promoted to a published entry point.
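Those "few configuration file tweaks" amount to a handful of lines in torrc. A minimal sketch of a non-exit relay configuration (the nickname and contact address are placeholders, and paths may differ by distribution):

```
# /etc/tor/torrc -- minimal non-exit relay (illustrative values)
Nickname      MyFreeSpeechRelay     # placeholder name
ContactInfo   admin@example.com     # placeholder contact
ORPort        9001                  # port for incoming relay traffic
ExitPolicy    reject *:*            # never act as an exit node
RelayBandwidthRate  1 MBytes        # the 1 MB/s figure mentioned above
RelayBandwidthBurst 2 MBytes        # allow short bursts above the rate
```

Restart the tor service after editing, and the node will announce itself to the directory authorities.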
I set the node up on a Monday. The first sign of trouble came on Wednesday, when my wife asked why she couldn’t watch a show on Hulu. I took a look and saw an ominous message: “Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool…” ABC’s streaming site displayed a similar message. The new Tor relay was the obvious suspect, but I’d also recently been using a VPN to watch World Cup games that were blocked in the USA, and that could’ve been a trigger, too.
The next day I logged on to one of my banking sites. I was blocked. A second banking site had also blocked me. I needed to renew a domain at Network Solutions. Denied: “There’s something wrong with your credit card…”
What had happened?
A fundamental weakness of Tor is that in order to connect to the network, you need to know the IP address of the first node. Tor handles this in two ways: a small set of bridge nodes are kept secret and distributed only by email…these are used by dissidents in China, for example, where Tor traffic is heavily censored. The large majority of entry nodes, though, are published in public lists, and many companies scrape these lists and blacklist any IP found on them. I’d been blacklisted for supporting free speech.
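The blacklisting itself is mechanically trivial. A sketch of what the scrapers are presumably doing, with a hardcoded stand-in for the published relay list (in reality the list would be fetched from the public Tor consensus):

```python
# Sketch: check a set of IPs against a published Tor relay list.
# The relay list here is hypothetical sample data; a real blacklist
# vendor would scrape the public consensus on a schedule.

def blacklisted_ips(my_ips, published_relay_ips):
    """Return the subset of my_ips that appear in the published list."""
    published = set(published_relay_ips)
    return sorted(ip for ip in set(my_ips) if ip in published)

# Hypothetical data standing in for a scraped relay list.
relay_list = ["203.0.113.7", "198.51.100.22", "192.0.2.1"]
print(blacklisted_ips(["203.0.113.7", "203.0.113.8"], relay_list))
# -> ['203.0.113.7']
```

Any service that consumes such a feed will treat your relay's IP exactly like an anonymous proxy, no matter what else that IP is used for.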
Some of the blocks were easy to fix. I called Hulu and the support technician manually removed my IP from their blacklist. Others (my banks, for example) cleared themselves automatically a few days after I disabled my Tor relay.
Some were not so easy to fix. Network Solutions is still blocking me, and just yesterday I tried to do an online transaction on my state government’s web site: “There is something wrong with your credit card…”
My solution to this nagging problem is the same one that I used to watch the blocked World Cup games…a VPN to a server somewhere else in the world. Since my IP is blacklisted, I just come in with a different IP.
My advice to anyone who wants to support free speech by running a Tor relay on their home or small business network is simple:
Don’t do it.
The Tor Project downplays or ignores the risk of running a Tor relay, focusing its warnings instead on exit nodes. Their goal is to grow the network, so I can’t fault them. However, it’s clear that many organizations are casting a wide net around Tor traffic and putting all of it in the ‘evil-doer’ basket. Even if you are just trying to do your part as a citizen of the world to promote free speech, you will be slapped down. My IP is presumably now on watch lists that I don’t know about, both private and governmental. Is my traffic being collected? What tripwires did this trigger? What other ramifications are there? These are questions I can’t answer right now.
I still support Tor and what it stands for. The Tor Project is making a big push right now to encourage individuals to run nodes in the cloud, and I’m all for that as long as you keep in mind that Amazon is a third party and subject to subpoena and to national security orders. It might well be that the AWS Tor nodes are currently under heavy scrutiny…we just don’t know. If you don’t physically own the entry node, there’s no guarantee that your traffic is not being monitored. The Tor network can be useful in providing a layer of anonymity to your web browsing, but you should approach it with a dose of skepticism.
If your goal is anonymous network access, one approach would be to set up a private Tor entry point, one that you physically control, and route your traffic through it. This would keep your IP off the list of public relays, and presumably would help prevent traffic analysis at your ISP from identifying your IP as part of the Tor network. This approach doesn’t really help the Tor Project, but it will help anonymize your traffic. The Tor Project maintains a list of hidden entry nodes, but it’s trivial to build a list of them (they are distributed by email), so you should assume they have been compromised and use your own private bridge instead.
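Concretely, a private bridge is just a bridge that never publishes its descriptor. A sketch of both ends (addresses and ports are placeholders):

```
# torrc on the server you physically control (the private bridge)
BridgeRelay 1                # act as a bridge
PublishServerDescriptor 0    # never report to the public directories
ORPort 443                   # blend in with ordinary HTTPS traffic

# torrc on your client
UseBridges 1
Bridge 203.0.113.7:443       # your bridge's address (placeholder)
```

Because the descriptor is never published, there is no list for blacklist vendors to scrape your bridge from.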
I still want to promote free speech, but my focus has shifted away from Tor; instead I’m promoting everyday encryption. The idea is that if more people use encryption for everyday communication such as email and IM messages, encrypted traffic becomes the norm rather than sticking out like a big flag. Unfortunately, 20 years after Zimmermann posted his PGP code, it’s still not easy for the average user to implement strong encryption. That’s where I’ll spend my effort…in making things simpler.
The tl;dr: Assume that anything you do online is being recorded by the government.
I had a conversation this past week with one of my students who was interested in some of the operational aspects of anonymity; he wanted to know to what extent either Tor or a VPN or both would protect his identity against varying levels of potential adversaries, from coworkers to nation-states. I think that we here in the USA forget that in many parts of the world, speech, especially dissident speech, can be extremely dangerous.
A recurring theme of this conversation was the notion of trust. For example, when we talked about how VPNs work and how they might be used to secure communications like IM or email, it came down to the level of trust that you have in the VPN provider. What if that provider is logging everything that you do across the VPN? Is the VPN provider susceptible to a Five-Eyes warrant to turn over those logs, or being monitored covertly? How do you know that the VPN provider isn't really a government agency?
On January 21st 2017, literally millions of people united in marches across the country protesting against an administration that they see as a threat to their freedoms. Those protests were organized and promoted on sites and services such as Twitter, Facebook, and Google without, I'm guessing, much thought about who else might be collecting and collating this information. We willingly expose enormous amounts of information about ourselves, our thoughts, and our actions on these sites every single day. Can we trust them?
Edward Snowden showed us how deeply entrenched US intelligence agencies are in these sites, collecting, storing, and indexing nearly every message that flows through them. A body of secret law, interpreted by a court that meets in secret, ensures that these agencies can collect nearly anything that they ask for.
We have to assume that all of the email, texts, phone calls, and posts relating to today's protests have been collected.
Do we care? On some level I suppose we don't. We use these services, the Facebooks, the Twitters, the GMails, because they are convenient and efficient at reaching large numbers of people very quickly. For a large portion of our population, the 'internet' is Facebook. We post and tweet and like, not realizing that these posts and tweets and likes are used to create profiles of us, primarily for marketing purposes, but also for analysis by our government. I'm not saying that the NSA has a folder labeled 'Perry Donham' with all my posts and tweets collated in realtime, but I am saying that the data is there if an analyst wants to sort through it.
A photo from today's march in Washington really struck me. In it an elderly Japanese woman holds a sign that reads Locked Up by US Prez 1942-1946 Never Again! There are US citizens still alive who were put into detention camps by the US government during the second world war. George Takei, a US citizen who played Sulu on the iconic series Star Trek, was imprisoned by the US government from the age of five until the age of eight. The reason? He was of Japanese descent.
We are entering unknown political territory, with an administration guided by the far right that will wield enormous technical surveillance assets. We literally don't know what is going to happen next. It's time to think carefully about what we say and do, and who we associate with, online, in email, posts, tweets, texts, and phone calls. We know that this data is being collected now by our government. We don't know what the Trump administration will choose to do with it.
My advice is simply this: Every time you post or tweet or like, assume that it is being collected, analyzed, stored, and can be potentially used against you.
Worse, we've become dependent on 'the cloud' and how easy it is to store our information on services such as Dropbox, Google Docs, and Azure. Think about this. Do you know the management team at Dropbox? The developers? The people running the Dropbox data centers? Their network provider? You do not. The only reason that we trust Dropbox with our files is that 'they' said that 'they' could be trusted with them.
You might as well drive over to your local Greyhound terminal and hand an envelope with your personal files in it to a random person sitting on a bench. You know that person just as well as you do Dropbox.
I've been thinking a lot about trust, about how misplaced it is on the internet, and about how little we examine it. In the next few posts I'll look at how the idea of trust has broken down and at how we can leverage personal trust in securing our communications and information.
Last week The Guardian ran a story claiming that a backdoor was embedded in Facebook's WhatsApp messaging service. Bloggers went nuts, as we do when it looks like there's nefarious code lurking in a popular application, and of course Facebook is everybody's favorite target. I tweeted my disdain for WhatsApp moments after reading the article, pointing out that when it comes to secure communication, closed-source code just doesn't cut it.
Today Joseph Bonneau and Erica Portnoy over at EFF posted a very good analysis of what WhatsApp is actually doing in this case. It turns out that the purported back door is really a design decision by the WhatsApp team; they are choosing reliability over security. The quick explanation is that if a WhatsApp user changes his or her encryption key, the app will, behind the scenes, re-encrypt a pending message with the new key in order to make sure it is delivered. The intent is to not drop any messages.
Unfortunately, by choosing reliability (no dropped messages), WhatsApp has opened up a fairly large hole in which a malicious third party could spoof a key change and retrieve messages intended for someone else.
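A toy model makes the design difference concrete. There is no real cryptography here; "encrypting" a message is simulated by tagging it with the key it was encrypted under, where a real client would use the Signal Protocol:

```python
# Toy model of the reliability-vs-security choice described above.

class KeyChangeError(Exception):
    pass

def send(message, cached_key, current_key, resend_on_key_change):
    """Deliver a pending message when the recipient's key may have rotated.

    resend_on_key_change=True  -> WhatsApp-style: silently re-encrypt and
                                  deliver under the new key (reliability).
    resend_on_key_change=False -> Signal-style: refuse delivery until the
                                  new key is verified (security).
    """
    if current_key == cached_key:
        return (current_key, message)      # normal delivery
    if resend_on_key_change:
        return (current_key, message)      # silent re-encrypt and resend
    raise KeyChangeError("recipient key changed; verify before resending")

# The recipient's key changes -- a reinstall, or a spoofed change.
result = send("meet at 6", cached_key="keyA", current_key="keyB",
              resend_on_key_change=True)
print(result)  # delivered under keyB with no warning to the sender
```

The malicious case is exactly the spoofed key change: with the WhatsApp-style behavior the pending message is re-encrypted to whoever now controls "keyB", and the sender never finds out.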
EFF's article does a very good job of explaining the risk, but I think it fails to drive home the point that this behavior makes WhatsApp completely unusable for anyone who is depending on secrecy. You won't know that your communication has been compromised until it's already happened.
Signal, the app whose protocol WhatsApp's encryption is built on, makes the opposite, secure choice: it will drop the message if a key change is detected.
Casual users of WhatsApp won't care one way or another about this. However, Facebook is promoting the security of WhatsApp and implying that it is as strong as Signal when it in fact isn't. To me this is worse than having no security at all...in that case you at least know exactly what you are getting. It says to me that Facebook's management team doesn't really care about security in WhatsApp and are just using end-to-end encryption as a marketing tool.
Signal has its own problems, but it is the most trustworthy internet-connected messaging app in popular use right now. I only hope that Facebook's decision to choose convenience over security doesn't get someone hurt.
On the face of it, the situation is pretty straightforward: The FBI has an iPhone used by one of the San Bernardino shooters, it is currently locked with a passcode, and they want Apple to assist in unlocking the phone. Apple has stated that they don't have that capability, and that to comply with the order they would have to engineer a custom version of iOS that turns off certain security features, allowing the FBI to brute-force the passcode. It comes down to the federal government forcing a private company to create a product that they wouldn't normally have made.
We can reasonably expect the FBI and Department of Justice to push back on Apple, which has not only provided assistance to the bureau in similar cases in the past, but has also provided assistance in this case in the form of technical advice and data available from iCloud backups. What's interesting about recent events, though, is that they have taken the form of a court order under the authority of the All Writs Act of 1789, which gives federal courts the authority, in certain narrow circumstances, to issue an order that requires the recipient to do whatever it is the court deems necessary to prosecute a case.
Apple has spent months negotiating with the federal government in this matter and requested that the order be issued under seal, which would in effect have made it a secret order; the public would not have known about it. It's also possible that the order could have been issued by the Foreign Intelligence Surveillance Court (FISC), a secret court, with no representation for the accused, used by the government to carry out covert surveillance against both foreign and domestic targets. Such an order would have included a gag order precluding Apple from divulging that it had even received one.
Instead, the FBI and DoJ went public with the nuclear option...the All Writs Act. The only reasonable explanation is that they expect this matter to be appealed, and that a federal court will side with the government, setting a landmark precedent. The FBI administrators are not fools; they expect to prevail in this. They picked this specific case, out of all of the similar cases over the past few years, to move their agenda forward.
Apple's position isn't that they can't create a custom version of iOS to accomplish what the FBI wants. It is that to do so would be an invitation to any law enforcement agency to ask for similar orders in any case that came up involving an Apple product. Privacy would be permanently back-doored. And it wouldn't stop with American law enforcement; it isn't a far leap to see China demanding such a tool for Apple to continue to do business in the country.
The defense that Apple (and a growing consortium of supporters, including the EFF) is mounting is that both the first and fourth amendments prevent the federal government from compelling speech. In this context, there is legal precedent for treating computer software as speech, and so Apple cannot be compelled to write code that it doesn't want to create. If the FBI and DoJ were to prevail, they would be able to require any company to write whatever code the government felt necessary, including backdoors or malicious software.
An analogy would be if the government decided that it would be in the public's interest to promote a particular federal program, and so compelled the Boston Globe to write favorable articles about it.
This case has nothing to do with San Bernardino. It has everything to do with the federal government attempting to establish a legal foothold in which individual privacy is at the whim of the courts. And, as Apple has stated, a backdoor swings both ways; it would be only a matter of time before such a tool would be compromised and used by criminals or other governments against us. Privacy is a fundamental right as laid out in the first, third, fourth, fifth, ninth, and fourteenth amendments to the US constitution.
Edward Snowden said, "This is the most important tech case in a decade." The outcome of this case and its appeals will help determine whether our future is one of freedom and privacy or of constant surveillance and a government that can commandeer private companies to do their bidding.
I've been watching T-Mobile's new Binge-On (BO) offering for a few weeks now as it gains more and more headlines. Today TMO CEO John Legere went on a rant directed at the Electronic Frontier Foundation (of which I am a member) and their recent analysis of the service.
TMO and Legere say that Binge-On is a feature aimed at providing their customers with a better video experience, and saving them money by not charging data fees for video from Binge-On partners such as Netflix, Hulu, and Youtube. This sounds like a good thing, doesn't it? Free is good.
There are two issues with this. The first is that TMO is slowing down ALL video streams to mobile devices, not just streams from Binge-On partners. Every HTML5 video stream is slowed to 1.5 Mbps. Some sources are saying that HD video is being converted to 480p, but I haven't seen a definitive answer to that question. Frankly, reducing video bandwidth to mobile devices makes a lot of sense, because on those devices a reduced-resolution image looks just fine. If you are watching a video on a 5-inch screen, you really don't need a high-bandwidth, high-resolution stream. You can opt out of Binge-On, and that's really what has folks in a dither...it's turned on by default. TMO counters by pointing out that customers were inadvertently burning through their data plans by watching unnecessarily high-bandwidth video.
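TMO's data-plan argument is easy to quantify. The 1.5 Mbps figure is theirs; the 5 Mbps figure for an uncapped HD stream is my assumption for comparison:

```python
# Rough data consumed per hour of continuous video at a given stream rate.

def gb_per_hour(mbps):
    """Megabits per second -> gigabytes consumed per hour of streaming."""
    bytes_per_sec = mbps * 1_000_000 / 8
    return bytes_per_sec * 3600 / 1_000_000_000

print(gb_per_hour(1.5))  # Binge-On cap: 0.675 GB per hour
print(gb_per_hour(5.0))  # assumed HD stream: 2.25 GB per hour
```

At the capped rate, a 2 GB monthly plan buys roughly three hours of video instead of less than one, which is the consumer-friendly half of TMO's pitch.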
The second, and larger, issue is net neutrality. In 2015 the Federal Communications Commission (FCC) issued a ruling that basically said that data is data...it's illegal to single out email, text, video, or web browsing and treat them differently. TMO's Binge-On is in direct violation of this, treating video differently than other network traffic. Worse, the Electronic Frontier Foundation (EFF) showed that TMO slowed down video traffic even when the file was not explicitly identified as video (with a .mpg file extension, for example). That means that TMO is peering inside the data to see what it is...a technique called deep packet inspection. If TMO is inspecting packets, what else are they planning to do? And who are they sharing that information with?
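To see what "peering inside the data" means, here is a toy classifier that identifies a payload as video by its content rather than its filename or port. Real carrier DPI is far more sophisticated; this checks only one signature, the 'ftyp' box that opens MP4 (ISO base media) files:

```python
# Toy deep-packet-inspection check: classify a payload as MP4 video by
# its leading bytes instead of its file extension.

def looks_like_mp4(payload: bytes) -> bool:
    """True if the payload starts with an ISO-BMFF 'ftyp' box (MP4)."""
    return len(payload) >= 8 and payload[4:8] == b"ftyp"

mp4_start = b"\x00\x00\x00\x18ftypmp42"    # typical first bytes of an .mp4
print(looks_like_mp4(mp4_start))            # True
print(looks_like_mp4(b"GET / HTTP/1.1"))    # False
```

A carrier running checks like this sees your traffic's content, not just its destination, which is exactly why the practice raises privacy questions beyond net neutrality.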
I'm of two minds on this issue. I'm a proponent of net neutrality, and I find it offensive that TMO is treating different kinds of data traffic in different ways. Net neutrality was hard-fought and extremely important in protecting the free exchange of ideas on the internet. As a consumer, though (and I use TMO on a tablet for audio streaming), how can I argue against free? I specifically bought a TMO tablet so that I could stream music at no charge through their Music Freedom program.
I've asked the EFF whether Music Freedom uses the same technique as Binge-On (deep packet inspection). No one complained about Music Freedom when it launched a year ago. I have to think that TMO's competitors are lining up their lawyers to take this one to the mat. My position has to be with net neutrality...it is so much more important than a bunch of TMO subscribers getting free video.