Category: privacy
Promote free speech via Tor, earn a slapdown
[I wrote this a few years ago, but it is still relevant, perhaps even more so now -PD]
I’ve been a privacy advocate for a long time; back in the mid-90s I’d wear my PGP ‘munition’ T-shirt while walking around the Boston Common, both to support Phil Zimmermann’s defense fund and to enact my own small protest against government restrictions on free speech.
I’m also a big fan of Cory Doctorow’s writing, and a few months ago I read both Little Brother and Homeland, his vision of a not-too-distant, dystopian United States in which Homeland Security mounts an all-out offensive against freedom in the name of safety. The books are frightening in that it’s easy to see a path between where we are right now and the world he depicts. I stocked up on tin foil after finishing the books.
I resolved to do my part to help secure the basic human right of freedom of speech, even if in just a small way, by setting up a Tor relay on one of my servers. I run a small business and have ample bandwidth and compute cycles, and I felt that helping the Tor network grow was a great way to participate in the free-speech movement.
The Tor network architecture uses a three-hop graph. A user connects to the network via a bridge; the next hop is to a relay, and the final hop is to an exit node, which connects to the service the user wants to reach. Bridges and relay nodes are equivalent in terms of how they are set up, and a bridge can be either public or hidden, the latter being used to help obscure the initial connection to the Tor network in regimes where network traffic is heavily scrutinized or suppressed. You can read full details of the architecture at the Tor Project home page.
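If you want to see those hops for yourself, the Tor Project publishes a Python controller library called stem that can list a running client’s circuits. Here’s a minimal sketch, assuming a local Tor daemon with its ControlPort enabled in torrc:

```python
# Minimal sketch: print the three hops of each built Tor circuit.
# Assumes a local Tor daemon with "ControlPort 9051" (and cookie
# authentication) in torrc, and the stem library installed.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # uses the control auth cookie by default
    for circ in controller.get_circuits():
        if circ.status != 'BUILT':
            continue  # skip circuits still under construction
        hops = ' -> '.join(nickname for _, nickname in circ.path)
        print(f'Circuit {circ.id}: {hops}')
```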
Running an exit node carries potential legal issues, so I decided to run a relay instead. It takes only a few minutes to set one up on a Linux distribution…a download and a few configuration file tweaks and you are up and running. I gave the node 1 MB/s of bandwidth so that it would have a good chance of being promoted to a published entry point.
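For the curious, the configuration really is just a handful of torrc lines. A sketch of the non-exit relay stanza I mean (the directive names are standard Tor options; the nickname, rates, and contact address are placeholders):

```python
# Sketch: the handful of torrc directives behind a non-exit relay.
# Directive names are standard Tor options; the values here are
# placeholders to adapt.
torrc_fragment = """\
Nickname MyFreeSpeechRelay
ORPort 9001
# relay traffic for the network, but never act as an exit
ExitPolicy reject *:*
RelayBandwidthRate 1 MBytes
RelayBandwidthBurst 2 MBytes
ContactInfo tor-admin@example.com
"""

# Append to /etc/tor/torrc (as root), then restart the tor service.
with open('/etc/tor/torrc', 'a') as torrc:
    torrc.write(torrc_fragment)
```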
I set the node up on a Monday. The first sign of trouble was on Wednesday, when my wife asked why she couldn’t watch a show on Hulu. I took a look and saw an ominous message: “Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool…” The streaming ABC site displayed a similar message. The new Tor relay was an obvious source of the message, but I’d also recently been using a VPN to watch World Cup games that were blocked in the USA, and that could’ve been a trigger, too.
The next day I logged on to one of my banking sites. I was blocked. A second banking site had also blocked me. I needed to renew a domain at Network Solutions. Denied: “There’s something wrong with your credit card…”
What had happened?
A fundamental weakness of Tor is that in order to connect to the first node, you need to know the IP address of the first node. Tor handles this in two ways: a small set of bridge nodes are kept secret and distributed only by email…these are used by dissidents in China, for example, where Tor traffic is heavily censored. The large majority of entry points, though, are published in public lists, and many companies scrape those lists and blacklist any IP found on them. I’d been blacklisted for supporting free speech.
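You can check any address against those public lists yourself. The Tor Project runs a public API called Onionoo that serves details of published relays; a blacklist vendor just automates queries like this one at scale (a sketch using only the standard library, with a placeholder IP):

```python
# Sketch: ask the Tor Project's public Onionoo API whether an IP
# appears in the published relay directory. Blacklist vendors
# effectively automate this lookup at scale.
import json
import urllib.request

def is_listed_tor_relay(ip):
    url = 'https://onionoo.torproject.org/summary?search=' + ip
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return len(data.get('relays', [])) > 0

print(is_listed_tor_relay('203.0.113.7'))  # placeholder IP
```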
Some of the blocks were easy to fix. I called Hulu and the support technician manually removed my IP from their blacklist. Others (my banks, for example) cleared themselves automatically a few days after I disabled my Tor relay.
Some were not so easy to fix. Network Solutions is still blocking me, and just yesterday I tried to do an online transaction on my state government’s web site: “There is something wrong with your credit card…”
My solution to this nagging problem is the same one that I used to watch the blocked World Cup games…a VPN to a server somewhere else in the world. Since my IP is blacklisted, I just come in with a different IP.
My advice to anyone who wants to support free speech by running a Tor relay on their home or small business network is simple:
Don’t do it.
The Tor Project downplays or ignores the risk of running a Tor relay, focusing instead on exit nodes. Their goal is to grow the network, so I can’t fault them. However, it’s clear that many organizations are casting a wide net around Tor traffic and putting all of it in the ‘evil-doer’ basket. Even if you are just trying to do your part as a citizen of the world to promote free speech, you will be slapped down. My IP is presumably now on watch lists that I don’t know about, both private and governmental. Is my traffic being collected? What tripwires did this trigger? What other ramifications are there? These are questions I can’t answer right now.
I still support Tor and what it stands for. The Tor Project is making a big push right now to encourage individuals to create Tor nodes in the Amazon cloud, and I’m all for that as long as you keep in mind that Amazon is a third party and subject to subpoena and to national security orders. It might well be that the AWS Tor nodes are currently under heavy scrutiny…we just don’t know. If you don’t physically own the entry node, there’s no guarantee that your traffic is not being de-anonymized. The Tor Browser Bundle can be useful in providing a layer of anonymity to your web browsing, but you should approach it with a dose of skepticism.
If your goal is anonymous network access, one approach would be to set up a private Tor entry point, one that you physically control, and obfuscate the traffic coming out of it. This would prevent your IP from being scraped off the list of public relays, and presumably would help prevent traffic analysis at your ISP from identifying your IP as being part of the Tor network. This approach doesn’t help the Tor project, really, but it will help anonymize your traffic. The Tor Project maintains a list of hidden entry nodes, but it’s trivial to build a list of them (they are distributed by email) and so you should assume that they have been compromised and just use your private bridge.
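Concretely, the client side of that arrangement is only a few torrc lines pointing at your own unlisted bridge. A sketch, with placeholder address, fingerprint, and certificate (obfs4 is one of Tor’s standard pluggable transports for obfuscating traffic):

```python
# Sketch: client-side torrc directives for a private, unlisted
# bridge running the obfs4 pluggable transport. The address,
# fingerprint, and cert value are placeholders for your own
# bridge's actual values.
client_torrc = """\
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 203.0.113.7:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=REPLACE_WITH_CERT iat-mode=0
"""

with open('/etc/tor/torrc', 'a') as torrc:  # then restart Tor
    torrc.write(client_torrc)
```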
I still want to promote free speech. My focus has shifted away from Tor, and I’m instead promoting the ‘encrypt everything’ movement. The idea is that if more people use encryption for everyday communication such as email and IM messages, encrypted traffic becomes the norm rather than sticking out like a big flag. Unfortunately, 20 years after Zimmermann posted his PGP code, it’s still not easy for the average user to implement strong encryption. That’s where I’ll spend my effort…in making things simpler.
The Death of Trust
The tl;dr: Assume that anything you do online is being recorded by the government.
I had a conversation this past week with one of my students who was interested in some of the operational aspects of anonymity; he wanted to know to what extent either Tor or a VPN or both would protect his identity against varying levels of potential adversaries, from coworkers to nation-states. I think that we here in the USA forget that in many parts of the world, speech, especially dissident speech, can be extremely dangerous.
A recurring theme of this conversation was the notion of trust. For example, when we talked about how VPNs work and how they might be used to secure communications like IM or email, it came down to the level of trust that you have in the VPN provider. What if that provider is logging everything that you do across the VPN? Is the VPN provider susceptible to a Five-Eyes warrant to turn over those logs, or being monitored covertly? How do you know that the VPN provider isn't really a government agency?
You don't.
On January 21st 2017, literally millions of people united in marches across the country protesting against an administration that they see as a threat to their freedoms. Those protests were organized and promoted on sites and services such as Twitter, Facebook, and Google without, I'm guessing, much thought about who else might be collecting and collating this information. We willingly expose enormous amounts of information about ourselves, our thoughts, and our actions on these sites every single day. Can we trust them?
We can't.
Edward Snowden showed us how deeply entrenched US intelligence agencies are in these sites, collecting, storing, and indexing nearly every message that flows through them. A body of secret law, interpreted by a court that meets in secret, ensures that these agencies can collect nearly anything that they ask for.
We have to assume that all of the email, texts, phone calls, and posts relating to today's protests have been collected.
Do we care? On some level I suppose we don't. We use these services, the Facebooks, the Twitters, the GMails, because they are convenient and efficient at reaching large numbers of people very quickly. For a large portion of our population, the 'internet' is Facebook. We post and tweet and like, not realizing that these posts and tweets and likes are used to create profiles of us, primarily for marketing purposes, but also for analysis by our government. I'm not saying that the NSA has a folder labeled 'Perry Donham' with all my posts and tweets collated in realtime, but I am saying that the data is there if an analyst wants to sort through it.
A photo from today's march in Washington really struck me [photo: Japanese woman at Washington protest, 21 January 2017]. In it an elderly Japanese woman holds a sign that reads Locked Up by US Prez 1942-1946 Never Again! There are US citizens still alive who were put into detention camps by the US government during the second world war. George Takei, a US citizen who played Sulu on the iconic series Star Trek, was imprisoned by the US government from the age of five until the age of eight. The reason? He was of Japanese descent.
We are entering unknown political territory, with an administration guided by the far right that will wield enormous technical surveillance assets. We literally don't know what is going to happen next. It's time to think carefully about what we say and do, and who we associate with, online, in email, posts, tweets, texts, and phone calls. We know that this data is being collected now by our government. We don't know what the Trump administration will choose to do with it.
My advice is simply this: Every time you post or tweet or like, assume that it is being collected, analyzed, stored, and can be potentially used against you.
Worse, we've become dependent on 'the cloud' and how easy it is to store our information on services such as Dropbox, Google Docs, and Azure. Think about this. Do you know the management team at Dropbox? The developers? The people running the Dropbox data center? Their network provider? You do not. The only reason that we trust Dropbox with our files is that 'they' said that 'they' could be trusted with them.
You might as well drive over to your local Greyhound terminal and hand an envelope with your personal files in it to a random person sitting on a bench. You know that person just as well as you do Dropbox.
I've been thinking a lot about trust and how false it is on the internet, and about how little we think about trust. In the next few posts I'll look at how the idea of trust has broken down and at how we can leverage personal trust in securing our communications and information.
I still think WhatsApp has a security problem
Last week The Guardian ran a story that claimed a backdoor was embedded in Facebook's WhatsApp messaging service. Bloggers went nuts as we do when it looks like there's some nefarious code lurking in a popular application, and of course Facebook is a favorite target of everybody. I tweeted my disdain for WhatsApp moments after reading the article, pointing out that when it comes to secure communication, closed-source code just doesn't cut it.
Today Joseph Bonneau and Erica Portnoy over at EFF posted a very good analysis of what WhatsApp is actually doing in this case. It turns out that the purported back door is really a design decision by the WhatsApp team; they are choosing reliability over security. The quick explanation is that if a WhatsApp user changes his or her encryption key, the app will, behind the scenes, re-encrypt a pending message with the new key in order to make sure it is delivered. The intent is to not drop any messages.
Unfortunately, by choosing reliability (no dropped messages), WhatsApp has opened up a fairly large hole in which a malicious third party could spoof a key change and retrieve messages intended for someone else.
EFF's article does a very good job of explaining the risk, but I think it fails to drive home the point that this behavior makes WhatsApp completely unusable for anyone who is depending on secrecy. You won't know that your communication has been compromised until it's already happened.
Signal, whose underlying protocol WhatsApp's encryption is built on, uses a different, more secure behavior: it will drop the message if a key change is detected.
Casual users of WhatsApp won't care one way or another about this. However, Facebook is promoting the security of WhatsApp and implying that it is as strong as Signal when in fact it isn't. To me this is worse than having no security at all...in that case you at least know exactly what you are getting. It says to me that Facebook's management team doesn't really care about security in WhatsApp and is just using end-to-end encryption as a marketing tool.
Signal has its own problems, but it is the most trustworthy secure-messaging app in popular use right now. I only hope that Facebook's decision to choose convenience over security doesn't get someone hurt.
What I’m Using for Privacy: Cloud
This post is part of a series on technologies that I’m currently using for privacy, and my reasons for them. You can see the entire list in the first post.
tl;dr: I don't trust anyone with my data except myself, and neither should you.
If you aren't paying for it, you are the product
I think that trust is the single most important commodity on the internet, and the one that is least thought about. In the past four or five years the number of online file storage services (collectively 'the cloud') went from zero to more than I can name. All of them have the same business model: "Trust us with your data."
But of course that's not the pitch; the pitch is, "Wouldn't you like to have access to your files from any device?"
A large majority of my students use Google Docs for cloud storage. It's free, easy to use, and well integrated into a lot of third-party tools. Google is a household name and most people trust them implicitly. However, as I point out to my students, if they had bothered to read the terms of service when they signed up, they would know that they are giving Google permission to scan, index, compile, profile, and otherwise read through the documents that are stored on the Google cloud.
There's nothing nefarious about this; Google is basically an ad agency, and well over half of their revenue is made by selling access to their profiles of each user, which are built by combining search history, emails, and the contents of our documents on their cloud. You agreed to this when you signed up for the service. It's why you start seeing ads for vacations when you send your mom an email about an upcoming trip.
But isn't my data encrypted?
Yes and no. Most cloud services will encrypt the transmission of your file from your computer to theirs; however, when the file is at rest on their servers, it might or might not be encrypted, depending on the company. In most cases, if the file is encrypted, it is with the cloud service's key, not yours. That means that if the key is compromised, or a law-enforcement or spy agency wants to see what's in the file, the cloud service will decrypt your file for them and turn it over. Demands such as National Security Letters come with a gag order, so you will not be told when an agency has requested to see your files.
Some services are better than others about this; Apple says that files are encrypted in transit and at rest on their iCloud servers. However, it's my understanding that the files are currently encrypted with Apple's keys, which are subject to FISA warrants. I believe that Apple is working on a solution in which they have no knowledge of the encryption key.
You should assume that any file you store on someone else's server can be read by someone else.
Given that assumption, if you choose to use a commercial cloud service, the very least you should do is encrypt your files locally and only store the encrypted versions on the cloud.
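A minimal sketch of that workflow in Python, using the cryptography package's Fernet recipe (the file names are placeholders; the point is that the key never leaves your machine):

```python
# Sketch: encrypt a file locally before it ever touches a cloud
# sync folder, using the "cryptography" package's Fernet recipe
# (authenticated symmetric encryption). File names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
with open('cloud.key', 'wb') as f:   # keep this key out of the cloud
    f.write(key)

fernet = Fernet(key)
with open('taxes-2016.pdf', 'rb') as f:
    ciphertext = fernet.encrypt(f.read())

# Only the encrypted copy goes into the synced folder.
with open('Dropbox/taxes-2016.pdf.enc', 'wb') as f:
    f.write(ciphertext)
```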
And....they're gone
Another trust issue that isn't brought up much is whether or not the company you are using now to store your files will still be around in a few years. Odds are that Microsoft and Google and Apple will be in business (though we've seen large companies fail before), but what about Dropbox? Box? Evernote? When you store files on any company's servers, you are trusting that they will still be in business in the future.
My personal solution
I don't trust anyone with my data except myself. I do, though, want the convenience of cloud storage. My solution was to build my own personal cloud using Seafile, an open-source cloud server, running on my own Linux-based RAID storage system. My files are under my control, on a machine that I built, using software that I inspected, and encrypted with my own secure keys. The Seafile client runs on any platform, and so my files are always in sync no matter which device (desktop, phone, tablet) I pick up.
The network itself is as secure as I can manage, and I use several automated tools to monitor and manage security, especially around the cloud system.
I will admit that this isn't a system that your grandmother could put together; however, it isn't as difficult as you might think. The pieces you need (Linux server, firewall, RAID array) have become very easy for someone with just a little technical knowledge to set up. There's a Docker container for Seafile, and I expect to see a Bitnami kit for it soon; both are one-button deployments.
Using my own cloud service solves all of my trust issues. If I don't trust myself, I have bigger problems than someone reading through my files!
What about 'personal' clouds?
Several manufacturers sell personal cloud appliances, like this one from Western Digital. They all work pretty much the same way; your files are stored locally on the cloud appliance and available on your network to any device. My advice is to avoid appliances that have just one storage drive or use proprietary formats to store files...you are setting up a single point of failure with them.
If you want to access your files anywhere other than your house network, there's a problem: The internet address of your home network isn't readily available. The way that most home cloud appliances solve this is by having you set up an account on their server through which you can access your personal cloud. If you're on the road, you open up the Western Digital cloud app, log on to their server, and through that account gain access to your files.
Well, here's the trust problem again. You now are allowing a third party to keep track of your cloud server and possibly streaming your files through their network. Do you trust them? Worse, these appliances run closed-source, proprietary software and usually come out of the box with automatic updates enabled. If some three-letter agency wanted access to your files, they'd just push an update to your machine with a back door installed. And that's assuming there isn't one already installed...we don't get to see the source code, so there's no way to prove there isn't one.
I would store my non-critical files on this kind of personal server but would assume that anything stored on it was compromised.
Paranoia, big destroyah
The assumption that third parties have access to your files in the cloud, and that anything stored there is compromised, might seem like paranoia, but frankly this is how files should be treated. It's your data, and no one should by default have any access to it whatsoever. We certainly have the technical capability to set up private cloud storage, but there apparently isn't a huge market demand for it, or we'd see more companies step forward.
There are a few, though, offering this level of service. Sync, a Canadian firm, looks promising. They seem to embrace zero-knowledge storage, which means that you hold the encryption keys and they are not able to access your files in any way. They also seem not to store metadata about your files. Other services such as SpiderOak claim the same (in SpiderOak's case, only if you use the desktop client exclusively and do not share files with others).
I say 'seem to' and 'claim to' because the commercial providers of zero-knowledge storage are closed-source...the only real evidence we have to back up their claims is that they say it is so. I would not trust these companies with any sensitive files, but I might use them for trivial data. I trust Seafile because I've personally examined the source code and compiled it on my own machines.
Bottom line
I can't discount the convenience of storing data in the cloud. It's become such a significant part of my own habits that I don't even notice it any more...I take it for granted that I can walk up to any of my devices and everything I'm working on is just there, always. It would be a major adjustment for me to go back to pre-cloud work habits.
I have the advantage of having the technical skills and enough healthy skepticism to do all of this myself in a highly secure way. I understand that the average user doesn't, and that this shouldn't prevent them from embracing and using the cloud in their own lives.
To those I offer this advice: Be deliberate about what you store on commercial cloud services and appliances. Understand and act on the knowledge that once a file leaves your possession you lose control of it. Assume that it is being looked at. Use this knowledge to make an informed decision about what you will and will not store in the cloud.
What I’m Using for Privacy: Email
This post is part of a series on technologies that I'm currently using for privacy, and my reasons for them. You can see the entire list in the first post.
Email privacy is a tough nut to crack. To start, the protocol that's used to move email around the internet, SMTP, is extremely simple and text-based. Email messages themselves are typically moved and stored as plain text. You know those fancy To: and From: and Subject: fields that you see on every email message? They are just text...the email client you are using formats them based on the field name. It's trivial to forge emails to look like they are coming from someone else. Here's an Instructable on how to do it.
Note that there are parts of the email transaction that are more difficult to forge, but if the target is an average user, it probably isn't necessary to worry about those bits.
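To make the point concrete, here's roughly what a forged From: line looks like in code; a sketch using Python's standard smtplib, with placeholder addresses and server:

```python
# Sketch: SMTP headers are just text, so the From: line is whatever
# the sender types. Addresses and server are placeholders; the
# Received: headers added in transit would still betray the real
# origin to a careful reader.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'ceo@example.com'        # claims to be someone else
msg['To'] = 'target@example.com'
msg['Subject'] = 'Please wire the funds today'
msg.set_content('The From: line above is nothing but text.')

with smtplib.SMTP('mail.example.com') as smtp:  # any relay that accepts it
    smtp.send_message(msg)
```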
To provide some modicum of privacy for emails, many of us bolt on PGP encryption, which encrypts the email, or digitally signs it, or both. Note that the encryption covers just the body of the email message...the subject, to, from, and other headers are not encrypted, which means that a fair amount of metadata is being sent in the clear.
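You can see the leak directly. In this sketch (using the python-gnupg library, and assuming your keyring already holds a key for the placeholder recipient), the body becomes an armored blob while every header stays readable:

```python
# Sketch: PGP protects only the body; To:, From:, and Subject:
# travel in the clear. Assumes python-gnupg and a keyring that
# already contains a key for the (placeholder) recipient.
import gnupg
from email.message import EmailMessage

gpg = gnupg.GPG()
encrypted = gpg.encrypt('Meet me at the usual place.', 'alice@example.com')

msg = EmailMessage()
msg['From'] = 'bob@example.com'
msg['To'] = 'alice@example.com'
msg['Subject'] = 'Meeting'              # metadata, sent in the clear
msg.set_content(str(encrypted))         # only this part is protected

print(msg)  # headers readable; body is an ASCII-armored blob
```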
PGP is a strong solution for personal encryption. Unfortunately it is exceptionally difficult for the average user to set up and maintain. Even geeks have trouble with it. I've discussed my changing attitude toward PGP here in the blog, and many technologists who I respect highly are starting to turn away from it in favor of simpler, transactional, message-based systems like Signal.
The tl;dr of my own post is that I will continue to use PGP to digitally sign my outgoing email (as I have been doing for many years) but will move to Signal for secure conversations. The PGP signature provides nonrepudiation to me, which is to say that I can prove whether or not a message was sent by me and whether it was altered once it left my hands.
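Signing is cheap and easy to automate; a sketch with python-gnupg, assuming my key pair is already in the local keyring (the key ID and passphrase are placeholders):

```python
# Sketch: a PGP signature gives nonrepudiation -- anyone holding
# my public key can verify the message came from me, unaltered.
# Assumes python-gnupg; key ID and passphrase are placeholders.
import gnupg

gpg = gnupg.GPG()
signed = gpg.sign('I really did write this.',
                  keyid='0123456789ABCDEF',
                  passphrase='placeholder-passphrase')

verified = gpg.verify(str(signed))
print(verified.valid, verified.fingerprint)
```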
So, I'm sticking with PGP and email.
But here's the rub. I'm a Mac user, and MacOS Mail doesn't support PGP. Worse, there's no Apple-supported API for Mail. There's a project maintained by the folks at GPGTools that provides a plugin for Mail; however, the method they use is to reverse-engineer each release of Mail to try to wedge their code in. This worked for a while, but the Sierra release of MacOS completely broke the plugin, and it's not clear if it will ever work again.
Since I still want to use PGP to digitally sign my email, I've transitioned to Mozilla's Thunderbird client. It is slightly less friendly than Apple Mail, but it does fully support plugins that provide PGP tools for both encryption and signing. I'm actually finding it to be a little more flexible than Apple Mail with filters and rules. Enigmail is the plugin that I'm using and it seems pretty straightforward.
If you are a Windows user and have found a good solution, please send me a note and I'll update this post for our Windows readers.
Rethinking PGP encryption
Filippo Valsorda wrote an article recently on Ars Technica titled I'm Throwing in the Towel on PGP, and I Work in Security that really made me think. Filippo is the real deal when it comes to PGP; few have his bona fides in the security arena, and when he talks, people should listen.
The basic message of the article is the same one that we've been hearing for two decades: PGP is hard to use. I've been a proponent since 1994 or so, when I first downloaded PGP. I contributed to Phil Zimmermann's defense fund (and have the T-shirt somewhere in my attic to prove it). As an educator I've discussed PGP and how it works with nearly every class I've taught in the past 20 years. I push it really hard.
And yet, like Filippo, I receive two, maybe three encrypted emails each year, often because I initiated the encrypted conversation. Clearly there's an issue here.
Most stock email clients don't support PGP. Mail on MacOS doesn't. I'm pretty sure that Outlook doesn't. I use Thunderbird because it does support PGP via a plugin. I really don't get this...email should be encrypted by default in a simple, transparent way by every major email client. Key generation should be done behind the scenes so that the user doesn't have to even think about it.
We might not ever get there.
And so, after 20 years of trying to convince everyone I meet that they should be using encryption, I, like Filippo, might be done.
However, there is a use case that I think works, and that I will use myself and educate others about. I've digitally signed every email that I send using PGP for several years, and I think that it might be the right way to think about how we use PGP. Here's the approach, which is similar to what Filippo is thinking:
- I will continue to use PGP signatures on all of my email. This provides nonrepudiation to me. I will use my standard, well-known key pair to sign messages.
- When I need to move an email conversation into encryption, I'll generate a new key pair just for that conversation (see the sketch after this list). The key will be confirmed either via my well-known key pair or via a second channel (Signal IM or similar). The conversation-specific keys will be revoked once the conversation is done.
- I will start to include secure messaging à la Signal in my discussions of privacy.
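For the second item, a sketch of the throwaway key generation with python-gnupg (the name, address, and passphrase are placeholders; the key would be revoked when the conversation ends):

```python
# Sketch: generate a disposable key pair for a single encrypted
# conversation, to be revoked afterward. Name, email address, and
# passphrase are placeholders. Assumes python-gnupg.
import gnupg

gpg = gnupg.GPG()
params = gpg.gen_key_input(
    key_type='RSA',
    key_length=4096,
    name_real='Perry (conversation key)',
    name_email='perry+conversation@example.com',
    passphrase='placeholder-passphrase',
)
key = gpg.gen_key(params)   # can take a while; needs system entropy
print('Conversation key fingerprint:', key.fingerprint)
```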
Nonrepudiation is really a benefit to me rather than anyone receiving my messages and I don't see any reason not to use my published keys for this.
I think secure apps like Signal are more natural than bolting PGP onto email, and they are easier for non-technical users to understand. Further, PGP's lack of forward secrecy (and Signal's inclusion of it) should make us think twice about encrypting conversations over and over with the same keys rather than using a new set of keys for each conversation.
I think this approach will do for the time being.
[Update: Neal Walfield posted his response to Filippo's article; the comments are a good read on the problems we're facing with PGP.]
Evernote changes privacy policy: ‘We might look at your notes’
The intertubes are lit up today with righteous indignation after popular note-saving app Evernote announced a change in its privacy policy that basically says, "One of our employees might take a look through the notes you trusted us with." Here's the official statement.
I have several thoughts on this, especially since I've been using the service for many years.
First. It might be that Evernote has received one or more National Security Letters or warrants from the secret US FISA court, and this is their way of putting their users on notice that they are obligated to turn over user data. We've seen several high-profile companies (including Apple) kill off their warrant canaries in the past few years, and I think that at this point we can pretty much assume that these warrants and letters are being served across the board. But NSLs come with a gag order preventing the recipient from disclosing them, and there are simpler ways of telegraphing an NSL than changing a subscriber privacy policy, so it wouldn't make sense to change a policy in response to this kind of order.
Second. Evernote provides a free tier that, while not as generous in terms of storage as others, is adequate for the casual user. There is no such thing as 'free' on the internet; if you aren't paying for a service, your data is being mined out the wazoo. It is scanned, analyzed, stored, sold, and otherwise wrung out for every dime that the provider can make off of you. I would guess that even Evernote's paid tiers are subject to some kind of metadata analysis. As in the first case, this is assumed and wouldn't require a policy change.
Third. One of Evernote's marketing points is that it is a seamless way to store and reference your day-to-day information. They're doing some heavy lifting on the back end in terms of indexing, predicting, and optimizing the way that they present information to their users. The communication from Evernote around the policy change hints that this is the reason they want to allow employees to see stored notes: they want to refine those processes with human intuition. To be honest, it's the most likely reason that I can think of.
I'm giving Evernote the benefit of the doubt. I think that they are being as upfront as they can about this policy shift, with the understanding that there will be a period of indignation, and that they will lose a small number of customers. When I read the news, the first thing that I did was to spend half an hour looking for alternatives to Evernote. My conclusion is that Evernote is unique in its feature set and that there just isn't any service or software that is as convenient or comprehensive as Evernote.
There's a however, however.
Any file that you store in the cloud is no longer under your control.
You absolutely have to keep this in mind every time you sign up for a service like Evernote, or Google Docs, or OneDrive, or Snapfish, or any of the other thousands of sites that want to feast on your data. If you aren't paying for it (and sometimes even if you are), YOU are the product. Your information is being sold to third parties for a profit.
The bottom line is this: Do not use any cloud service to store private information.
Even services that allow you to 'password protect' information (Evernote, OneNote, etc.) should not be trusted. If the file is on someone else's server, you must assume that it has been compromised. There just isn't any way to prove that it hasn't been. If you don't want anyone else to see your data, it should be stored on your personal computer and encrypted.
So, will I continue to use Evernote after this shift in their privacy policy? I will. I didn't trust them with sensitive information before, and I don't now. But I don't think that the collection of risotto recipes that I've built up over the years is going to land me in jail. I maintain a very clear line between data that I know will be scanned and divulged and data that is private; the latter is never stored on a server or computer that I don't control.
Which VPN Should I use?
Recently one of my students asked for a recommendation on a VPN app for his Macbook. I thought my rather long-winded reply might be useful to others wondering the same thing, and it's appended below.
There are two primary use cases for a VPN:
- You are away from your home network, possibly on an unsecured network such as in a café or an airport, and want to encrypt all of the network traffic coming to and from your computer (even traffic that isn't normally encrypted)
- You want to appear to be somewhere else in the world. I ran into this when I wanted to watch World Cup soccer matches not shown in the US but available in the UK; I set up a VPN connection to a server in London so that it appeared I was in that city, and then watched the games on the BBC.
Here's my reply to my student's question:
The short answer is that I don’t trust the apps on the App Store for VPNs. The longer reason…all of them provide their own server to connect to, which means that my VPN internet traffic is going through an endpoint that I don’t control. The only assurance I have that my traffic isn’t being decrypted, stored, or otherwise manipulated is that the app seller tells me that they don’t. Also, the programs are not open source, so I can’t look through the code to assure myself that there is no back door or other security risk.
For that reason, I use Tunnelblick on the Mac (https://tunnelblick.net), which is an open-source VPN program. I have very high confidence that it hasn’t been compromised. I run my own VPN server (which I personally built and maintain) to connect Tunnelblick to when I’m away from the home network, so the encrypted tunnel goes from my Macbook, through the Tunnelblick VPN, into my own server, and from there out onto the internet. The use case is typically that I’m away from home, on an insecure network, and want to lock down / encrypt everything going over that network.
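For reference, the client side of that setup is just a short OpenVPN profile that Tunnelblick imports; a sketch with a placeholder hostname and certificate files (the directives themselves are standard OpenVPN options):

```python
# Sketch: a minimal OpenVPN client profile of the sort Tunnelblick
# imports, pointing at a server you own. The hostname and the
# certificate/key file names are placeholders.
client_profile = """\
client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert my-laptop.crt
key my-laptop.key
cipher AES-256-CBC
verb 3
"""

with open('my-server.ovpn', 'w') as ovpn:
    ovpn.write(client_profile)
```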
That being said, if my purpose is to connect to a VPN so that it appears I am somewhere else, such as if I want my internet address to be in the UK to watch soccer, I’m forced to use one of the commercial VPN providers, and for that I use TunnelBear, https://www.tunnelbear.com. Note that this is not open-source, and so your confidence in it in terms of privacy should be very low. They do get good reviews, and I’ve had a $5/month subscription with them for about three years now. I generally use TunnelBear for very specific purposes (such as location shifting) and take steps to make sure that no other traffic is going through their VPN endpoint (I use Little Snitch firewall rules to accomplish this).
On the iPhone/iPad side I use OpenVPN (https://openvpn.net), but again I’m connecting back to my own VPN server with it. It’s an open-source project that I have high confidence in.
OpenVPN offers PrivateTunnel, with a pay-as-you-go connection plan that is fairly inexpensive. It’s the same team that produces OpenVPN, so I would trust them a little more. The ‘tunnel’ is a VPN connection back to one of their servers, and so you run the same risk of interception as with something like TunnelBear, which means that you would NOT use this solution for highly sensitive traffic. Also, I don’t believe that they have all that many servers, so you’d be limited in your choice of where you appear to be. I’ve been meaning to give them a try to see what the service looks like.
[update 7/15/2016] I've installed Private Tunnel for testing. They offer endpoints in: NYC; Chicago; Miami; San Jose; Montreal; London; Amsterdam; Stockholm; Frankfurt; Tokyo; Zurich; and Hong Kong.
I know that’s a long answer! Bottom line is that if you are connecting to someone else’s VPN server, don’t trust it with anything other than mundane traffic. For location-shifting to do something trivial like watch soccer or get around a school’s firewall, commercial solutions like TunnelBear are fine.
Since we’re on the subject, I can’t recall if I mentioned it in class, but if you need secure IM and voice, you (currently) should be using Signal and nothing else. And of course PGP for email :^)
Why FBI vs Apple is so important
On the face of it, the situation is pretty straightforward: The FBI has an iPhone used by one of the San Bernardino shooters, it is currently locked with a passcode, and they want Apple to assist in unlocking the phone. Apple has stated that they don't have that capability, and that to comply with the order they would have to engineer a custom version of iOS that turns off certain security features, allowing the FBI to brute-force the passcode. It comes down to the federal government forcing a private company to create a product that they wouldn't normally have made.
We can reasonably expect the FBI and Department of Justice to push back on Apple, which has not only provided assistance to the bureau in similar cases in the past, but has also provided assistance in this case in the form of technical advice and data available from iCloud backups. What's interesting about recent events, though, is that they have taken the form of a court order under the authority of the All Writs Act of 1789, which gives federal courts the authority, in certain narrow circumstances, to issue an order that requires the recipient to do whatever it is the court deems necessary to prosecute a case.
Apple has spent months negotiating with the federal government in this matter and requested that the order be issued under seal, which means that it would have in effect been a secret order; the public would not have known about it. It's also a possibility that the order could have been issued by the Foreign Intelligence Surveillance Court (FISC), a secret court, with no representation for the accused, used by the government to carry out covert surveillance against both foreign and domestic targets. Such an order would have included a gag order precluding Apple from divulging that they had even received it.
Instead, the FBI and DoJ went public with the nuclear option...the All Writs Act. The only reasonable explanation is that they expect this matter to be appealed, and that a federal court will side with the government, setting a landmark precedent. The FBI administrators are not fools; they expect to prevail in this. They picked this specific case, out of all of the similar cases over the past few years, to move their agenda forward.
Apple's position isn't that they can't create a custom version of iOS to accomplish what the FBI wants. It is that to do so would be an invitation to any law enforcement agency to ask for similar orders in any case that came up involving an Apple product. Privacy would be permanently back-doored. And it wouldn't stop with American law enforcement; it isn't a far leap to see China demanding such a tool for Apple to continue to do business in the country.
The defense that Apple (and a growing consortium of supporters, including the EFF) is mounting is that the first and fifth amendments prevent the federal government from compelling speech. In this context, there is legal precedent that computer software is seen as speech, and so Apple cannot be compelled to write code that it doesn't want to create. If the FBI and DoJ were to prevail, they would be able to require any company to write whatever code the government felt necessary, including backdoors or malicious software.
An analogy would be if the government decided that it would be in the public's interest to promote a particular federal program, and so compelled the Boston Globe to write favorable articles about it.
This case has nothing to do with San Bernardino. It has everything to do with the federal government attempting to establish a legal foothold in which individual privacy is at the whim of the courts. And, as Apple has stated, a backdoor swings both ways; it would be only a matter of time before such a tool would be compromised and used by criminals or other governments against us. Privacy is a fundamental right as laid out in the first, third, fourth, fifth, ninth, and fourteenth amendments to the US constitution.
Edward Snowden said, "This is the most important tech case in a decade." The outcome of this case and its appeals will help determine whether our future is one of freedom and privacy or of constant surveillance and a government that can commandeer private companies to do their bidding.