Date: September 30th, 2013
Event: Congress on Privacy & Surveillance (COPS)
Venue: EPFL Rolex Learning Center Forum, Switzerland
Link: http://ic.epfl.ch/privacy-surveillance
Video: http://slideshot.epfl.ch/play/cops_appelbaum
Video (Youtube): https://www.youtube.com/watch?v=GW2OAhFkq3E
Jacob Appelbaum: ...A journalist in Germany once said something very fantastic. It was Tilo Jung. He said that "All journalists are activists for the truth." I really like that. Because there’s a lot of discussion about whether or not talking about the facts and having a position on them, somehow represents being an activist. And I think what an activist means in this context is a pejorative thing. This notion that somehow to be involved is a negative thing.
And this is a disempowering tactic that we see all over the place. Part of the reason that I thought it was important to have Bill here, and other people like Laura, who unfortunately cannot be here today, is that having these discussions take place with some facts is how we would actually be able to change the discussion in the non-journalism world, in the non-technical world, in the non-academic world, or in all of these worlds put together.
There is probably the chance that you guys are all really tired, as it’s the end of the day. I wondered if you guys would all join me by standing up for a moment and just kind of shaking your body.
[laughter]
Jacob: Literally, just stand up so that you’re not going to fall asleep because I know you all want to do it. Just take a moment.
[laughter]
Jacob: Just shake it out. Pop your back, relax. You can tell that I’m not from academia in this regard.
All right. Great, now we all feel better. That’s good.
I’ve been debating whether or not to reveal some new programs today. Basically, the way that I have been thinking about it is, do I want to go to prison or not? That’s part of what’s actually at stake here, which is that it is not the case that I can tell you everything that is in my head right now without fear of that. This is just a crazy situation to be in. This notion that I know a thing that impacts each of you—but in theory is done in my name. I can’t actually talk freely about those things. This is a really bad situation to be in.
When you hear people talking about so-called lawful interception, you hear them talking about spying. Really, in a lot of cases when they're talking about this, they talk about it in secrecy, and they invoke "the war on terrorism", "the war on some drugs", "the war on child pornography", "the war on money laundering", and various other thought crimes. I feel like those people are generally intellectually dishonest.
I feel like what they’re actually saying is that they’d like to help create a special class of people, and to have special privileges, whereby the architecture itself will allow everyone else’s rights to be violated as well.
Fundamentally, the reason that I think cryptography is so important is because it gives us one of the avenues of resistance, where it becomes basically not possible to break the law. The work that Lenstra does here, for example, is really important. You can’t subvert the mathematics when the mathematics are good. The implementation you might be able to subvert, the law you could break—the law is being broken right now. But to actually be able to have something that we individually can do, and societally can do, is extremely important.
So I want to try to describe to you today the new threat model.
Before I do that, I want to say something that happened to me today. I’m sorry to the person who sort of came to me and told me this, I didn’t tell you I was going to say this. But I met a guy here today who told me that he has a friend that was killed by a drone strike. There’s a guy in this room who told me that my country killed one of his friends with a flying robot.
Just let that sink in. Think about the graph analysis talk that we are having. You are all one hop away. This is a really serious problem.
When we talk about keeping the architecture of the Internet the way that it is right now, we have a serious problem, which is that fundamentally, this is about power, as Schneier says. These powers, maybe we should get rid of them, the same way that we have with land mines. That's technologically possible. Maybe what we need is to get rid of surveillance as much as is possible. Technically we can surveil people, but perhaps we can build counter-measures and legal instruments in order to actually resist that surveillance. I think that we can do that.
One of the threat models, though, is this guy who wanted to live a life, and I’m sure he was an unpleasant person to some degree.
President Obama himself spoke for 10 minutes on what an unpleasant person he was. But what Obama never actually said about this person that was killed by a drone, was that he had a trial by jury and was convicted, because he wasn’t.
He received no trial by jury, and I can tell you this because I have white privilege. I can stand here and say this. But the friends of this person, that knew this person, they won’t stand here and say that, because they would fear to be targeted. So we need to build systems so the bad person could stand here and tell us that story without fear of being targeted.
The only way we build a security system like that is if we use mathematics. There’s really no other way around it, because the law simply does not work. But in isolation, it will not work either. We have to have fundamental rights to privacy that are upheld in national and international courts. We have to actually have these things come together.
If you don’t have the right to resist giving up your pass phrase, for example. If you don’t have the right to remain silent. If you don’t have a right against self-incrimination. With the exception of some kinds of anonymity systems, eventually, the powers that be may come to rest on you and ask you for these things. They may demand it in many ways. Some of those ways are extremely unpleasant.
This is a threat model that exists for this person, and very few people would design a security system for that guy. In fact, if you were to try to design a security system for the guy that was killed by a drone strike, you’d probably go to prison for that—even if you were to think about it in that way.
So I’m not designing it for that guy, but I want to design it for everybody here, who is one hop away from that guy. You should too, because you’re one hop away. But you should also do it, because you’re probably that guy also, at some point in the future, if you’re unlucky.
We shouldn't just think about ourselves, we should actually think about what that kind of thinking does to ourselves. We should care about each other, I think.
If we want to think about the NSA’s spying programs, we could do it like so. I don’t have slides, I hate PowerPoint. PowerPoint is great if you want to leak NSA stuff, but other than that, I don’t know why anybody would use it.
[laughter]
Jacob: Try to imagine this. I think it's good to use your imagination instead of getting you to read slides. Imagine a picture of the Earth. You know the pale blue dot? Imagine a picture of the Earth. There are no bounds to the surveillance. All electronic communications on the planet are, in some way, under surveillance by the NSA.
There are so many programs. I actually have a slide where I made a visual... I don't know if we could put this up here. I made a visual of just program names I scraped from LinkedIn pages. Search for some classified NSA program names, and you'll find, on LinkedIn, that people put the program names on their CVs.
If you look at how they're grouped in the CV, you’ll actually see what the program names generally are. Like, “Oh, it turns out MAINWAY must be related to telephone metadata, because they say that they are an expert in telephone metadata surveillance.” You can see five other programs next to it, that gives you some idea about those two, right? You can make your graph of that.
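The kind of inference described above can be sketched in a few lines of Python. The CV contents and the groupings here are invented for illustration; entries that repeatedly co-occur with a known topic are probably related to it:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical CV skill lists. MAINWAY is a real program name from the
# reporting, but these groupings are made up for the example.
cvs = [
    ["MAINWAY", "telephone metadata analysis", "FASCIA"],
    ["MAINWAY", "FASCIA", "call detail records"],
    ["XKEYSCORE", "PINWALE", "content analysis"],
]

def cooccurrence_graph(cvs):
    """Count how often two CV entries appear together. Program names
    that cluster with a plain-language skill inherit its meaning."""
    edges = defaultdict(int)
    for cv in cvs:
        for a, b in combinations(sorted(set(cv)), 2):
            edges[(a, b)] += 1
    return dict(edges)

graph = cooccurrence_graph(cvs)
# MAINWAY and FASCIA co-occur on two CVs, so they are probably related.
print(graph[("FASCIA", "MAINWAY")])  # 2
```

That edge-weighted graph is exactly the "graph of that" he mentions next: heavy edges suggest which programs belong to the same problem area.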
Now, imagine that for all the program names you find there, each one of those is solving a discrete problem for whole-planet surveillance. You have satellites in space -- so you're imagining the pale blue dot. You have fiber cables all over the place. You have the interconnectedness of people using those cables. You have telephone systems. You have satellite modems. You have cell phones, all this stuff.
I was really pleased to find out that the New York Times published this piece, by Laura and Risen, Laura Poitras and James Risen, on Sunday, because they revealed the existence of a couple of key things.
One of them is the fact that the US Government has 15 years of data retention. Just think about that. In the EU, it’s clear that this data retention is violating some pretty serious national and international norms. There are still some battles that are going on.
Now, imagine the pale blue dot, and all the communications that are going on. The NSA is trying to grab a hundred percent of the metadata of all those communications. They’re doing a pretty good job. If you make a phone call, it’s going to go into a database, straight up. There’s no question about that, in my mind, from the things that I’ve seen from programs that are being revealed, that are being talked about.
The really, really scary part though is that, if we view the world in this way, and we view this pale blue dot with some electronic communications crossing it, we should also think about how the adversary views it.
When I say the adversary, I mean the people that are actively attacking our privacy. They even use slides where they talk about revolution--this is pretty incredible--inside of the documents, and they even name them according to American Civil War battles.
It's fascinating to me to see this, because they view themselves as an attacker and us, all of us, we are the victims. Of course, they are the good guys in this battle. It's probably not a good idea to engage too much on that, because I think that is a dialectical trap that leads into a bad cycle.
But it's important to consider that they are, in fact, the adversary. Basically, they are the adversary that is worth defending against.
What they’re doing when they surveil this, this being the whole planet, is that they build social graphs of everyone.
The story, in addition to the 15 years of metadata, what they were actually telling is that they take the telephone metadata and the email metadata, which, as Bill says and as the "New York Times" piece yesterday pointed out, includes content.
You send an email. Arjen sends me an email. Thank you, I appreciate that you still send me email.
That goes into this database. There’s no question about it. The way that it works is that whatever system he touches, no matter what, even if it’s a brand-new computer system and then he sends me that email, there’s enough plain text there to extract features from that plain text. We call these selectors.
These selectors are fed into a system, either by an analyst or by a social graph analysis. These actually allow you to automatically learn, or the surveillance machine itself to learn, and to pull this stuff out. It automatically starts to flag other people around.
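A toy illustration of what "extracting features from plain text" might look like: pull machine-readable selectors (email addresses, phone numbers) out of a message. The patterns here are deliberately simple and invented; real systems extract far richer features:

```python
import re

# Hypothetical, simplified selector patterns for the sake of example.
SELECTOR_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+\d{7,15}"),
}

def extract_selectors(text):
    """Return (kind, value) pairs found in plain text. Each value can
    then seed a query: 'whenever this pops up on the grid, pull
    everything tied to it'."""
    found = set()
    for kind, pattern in SELECTOR_PATTERNS.items():
        for match in pattern.findall(text):
            found.add((kind, match))
    return found

message = "From: arjen@example.org\nCall me at +41211234567 about the talk."
print(sorted(extract_selectors(message)))
```

Each extracted value becomes a node to pivot on, which is why one plaintext email is enough to start flagging the people around it.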
What that means is that when I’m in the room with all of you and I have a cell phone and you have a cell phone, you’re in the graph.
Now, imagine that that’s how it works for the entire planet. How long does it take for the entire planet to be really being collected on in some pretty nasty ways? It’s pretty clear that you’ll start to gather lots and lots of innocent people all the time.
It’s also quite clear that when you tie this to an actionable intelligence—let’s say a target like the drone strike fellow that I mentioned—then it starts to get really scary. It means that merely using a telephone and having the pattern of looking like you might be a bad guy is enough, from a surveillance perspective, to have you targeted for murder.
This is a really fascinating thing. When we as engineers design communication systems, shouldn't we be designing against our communication systems being misused in this way? I think so. Is that activism? I don't know. I don't think so. I think that what that is, actually, is recognizing from history what we have actually seen transpire before.
There are a couple of really fantastic books that I would encourage everyone here to read. One of them is Philip K. Dick’s "A Scanner Darkly." He was an optimist.
[laughter]
In Philip K. Dick’s reality, they had way better drugs.
[laughter]
I think. I mean, I don’t actually know.
[laughter]
Edwin Black wrote a really fantastic book. It’s called "IBM and the Holocaust". And I appreciate that the gentleman from the ITU earlier mentioned the Second World War.
For example, the Norwegians. As I understand it, they learned about metadata collection from the Nazi occupation. They actually changed the way that the telephone system did billing, such that individual houses had the equivalent of a gas meter with something spinning so that they could see that you’d called somewhere locally or far away, without centralizing the actual records keeping. So that an occupying power wouldn’t be able to do these metadata attacks. Along came cell phones--that was all out of the window. Now, we’re back.
"IBM and the Holocaust" is really fantastic because it talks about the threat models that we needed to think about 65 years ago. Using just census data, it was really possible for the Nazis to do incredible stuff. To go house to house, to have this seemingly impossible ability. They were omnipotent, essentially.
In fact, in Holland, 70 percent of the Jews were eradicated, whereas in France, it was only 30 percent. The difference was that the punch card technician and racial statistician in Holland was a rabid anti-Semite, and he was very good at his job. He was able to complete his task.
In France, the gentleman tasked with this, he was a part of the French Resistance and used the census to organize the Resistance. He died in a death camp. The Dutch guy succeeded, if you could call it a success.
The point is that this machinery has been built again, only it is much better this time. And it is the case that you no longer have to opt in to the census. This time when you pick up the telephone, in some countries, you get voice print analysis. You have an IMEI, that's the individual identifier for the phone. You have the IMSI, that's the identifier inside of the SIM card. These are unique identifiers that become selectors.
The graph published by the "New York Times" yesterday shows, in fact, the MAINWAY data as well as some of the so-called UTT, that's the Unified Targeting Tool. It's all tied together into a social graph that shows the social relationships of the people.
Then, they have surveillance systems throughout the entire planet, either through something like these so-called upstream programs, where they actually peek in and then copy data out. And when they don't have a view into a network, let's say that they don't have an AT&T... AT&Treason, is that what their nickname is, Bill? Yeah, something like that.
When they don’t have an AT&Treason, then instead what they do is they break into the system. Then, they re-task the system essentially with some of the things they would like—like, maybe they wanted to start to look for selectors to be able to target people and then they exfiltrate that data in some way.
Part of the threat model we need to think about is that, with just the census data, you can pretty much wipe out an entire population. You could do incredible stuff with that, and not in a good way. With this system, we actually have that in real time for the whole planet. Even if you change all your devices, you use one of the so-called burner cell phones or something like that, same problem.
The thing is that it's not just passive. The thing we really need to consider here is that there's a myth of passivity with the NSA. They have these watching programs, this upstream collection. The PRISM programs are moving a little bit towards the active attack.
We need to move towards building systems that have what we would call provider-independent security. That is to say, you store only ciphertext with them. They maybe store a message for you which is encrypted asymmetrically. But that is not the way that these systems are designed today. Instead, the systems are designed, in the case of Google for example, as surveillance systems where you are the product, as has been said several times today.
We see that power sort of congeals around the centralization points and that power can be very coercive.
Caspar [Bowden] has written quite a bit about this and I think that his analysis is spot on. I think the only thing he really needs to add to it is that he needs to just start throwing in the NSA slides, every couple of pages just to sort of drive home for effect how correct he is.
I think if we look at this, we look at the targeting system, we look at the surveillance system, it means we have to actually consider traffic analysis resistance to be one of the most fundamental problems of our generation. That is, we have to destroy the ability for graph analysis, for graph analysts actually, to work.
We need to ruin their field, and that’s a kind of strange thing to say because it’s actually a really hard problem.
How is it that we can freely communicate with each other without revealing content metadata? How is it that we can resist these kinds of selector-based surveillance systems?
Some people would suggest what we should do is we should build a lawful, necessary and proportionate surveillance system, and I think that’s a good step but it’s not the end goal. The reason is because these systems will be re-tasked by someone else.
For example, if we want to completely destroy the myth of the NSA’s passivity, we should look and consider what happened with Belgacom.
Belgacom is the case of a Belgian telecom where GCHQ engaged in criminal—but then I'm sure they would say that it was just for national security—activity. And what did they do in the Belgacom case? What we understand is they have a system that does a thing called quantum insertion.
Quantum insertion is part of an entire set of programs, and here's where I have to be very careful about what I say, so as to not cause myself more trouble. But let's just say that quantum insertion is the classic man-in-the-middle attack.
What this means is that you take the whole pale blue dot that is under surveillance, where there are selectors looking to automatically target a person. A person is then tasked. Whenever they pop up anywhere in the world, that person is being fully recorded 100 percent, and tasked for an attack.
In this case when they're tasked for an attack, there may be a live analyst doing it. There is a case that I know of, one specific case, which seems to me like it's largely bigotry, or political surveillance, where you will be attacked untasked. That is, you visit this website, and there is a likelihood that you will be attacked, just based on the fact that you visited this website. It would be flagged, and you'd become a new node in the graph of interest, so to speak.
In the case of Belgacom, instead of it being done for terrorism reasons or something like this, they decided that Belgacom is a stepping stone, so they went after people involved in Belgacom. From what I understand, the way that it works is that they had the selector system, they had in fact some of the people that are inside Belgacom, and they targeted those people.
When those people went about their lives, these systems attacked them. The way that they did the attack is that when those people connected to services, whatever services they were using on the Internet, those connections were man-in-the-middled, and what was exploitable was then exploited. Which sounds very generic, and I'm sorry for that.
When they exploited the protocol—let’s say TCP where you can easily inject something, let’s say that it’s web-related so they inject maybe a web-related payload—then they work on exploiting the actual computer on the other side.
Traditionally speaking, you can't easily inject into TCP unless you can see the whole conversation. They've got that. They solved that problem; there's no difficulty for them. Then they simply need to be able to fingerprint whatever client-side software is running. With Belgacom, my understanding is that's exactly what they did. They owned Belgacom this way.
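To make concrete why seeing the whole conversation matters: a TCP receiver accepts a segment whose sequence number falls inside its receive window, so an on-path observer who knows the expected number passes the check every time, while a blind attacker has to guess within a 32-bit space. This is a heavy simplification of TCP's real acceptance rules (it ignores wraparound, ACK validation, and timing):

```python
def in_window(seq, rcv_nxt, rcv_wnd):
    """Simplified TCP acceptance test: is this sequence number inside
    the receiver's current window? (Ignores 32-bit wraparound.)"""
    return rcv_nxt <= seq < rcv_nxt + rcv_wnd

# On-path attacker: observed rcv_nxt directly, so injection is trivial.
print(in_window(seq=1000, rcv_nxt=1000, rcv_wnd=65535))  # True

# Blind attacker: one random 32-bit guess lands in the window with
# probability wnd / 2**32, i.e. vanishingly small per attempt.
print(65535 / 2**32)
```

With the sequence number known, the remaining work is the race to answer before the real server does, which a well-placed tap wins easily.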
What we see is a shift in the way the GCHQ is performing their operations. But it really doesn’t seem so different than the classical stereotypical James Bond bullshit. This is key. This is the new cool.
This is something we really need to consider, which is that the social cost of this kind of criminal activity, this kind of spying, frankly, it’s too low.
If somebody has access to this kind of Quantum Insertion system, so that they can compromise people, they would probably be considered pretty cool in a lot of circles, especially in the computer security world.
We should change this, I think. Maybe this is the activism coming out, but when you see these kinds of things happening, we should ask about national sovereignty, about due process. We should ask about justice. We should ask about human rights.
We should also ask if what we really want is to change things such that every person has to have a kind of military security training, to be able to resist what is essentially military aggression, with extremely advanced electronic surveillance and injection systems.
Part of what we need to do is: make it so that things like quantum insertion are impossible, without quite a lot of work. That is: without finding really, really serious vulnerabilities in transport layer security, without having cryptographic breaks. We need to actually build and engineer communication systems that raise the cost so much that this can’t be done without being detected, and without a fundamental cryptographic breakthrough.
There's no reason that we can't have end-to-end encryption for all of the systems that we use on a regular basis. Except that it seems to be the case that people think it isn't necessary.
I should say that there are some things to correct here. One of them is XKeyscore.
XKeyscore is a system whereby someone inserts selectors into a query interface. Germany has had access to this, the UK has unfettered access, and it’s actually a distributed database query engine, from what I understand of it.
You enter a query, and it flows out to many databases and queries those subsequent databases. And then all the data flows back to the analyst.
This is particularly interesting in the LOVEINT scandal, which Bill mentioned, which is to say, you put in your ex-girlfriend—or ex-boyfriend, or current girlfriend, or current boyfriend or whatever—and you put in their email address. And it pops out whatever information this planetary surveillance system has.
I should reiterate: I’m using planetary surveillance system as a shorthand. Obviously, it’s literally dozens and dozens of programs, hundreds of programs put together. Whatever is collected into these systems will be delivered to the analyst.
From that, they have other analytical tools where they can either tie them together for your social graph—which they precompute, I might add, to speed up searches—or they’ll tie it in with other enriched data.
For example, tax records, voter registration information, information about what operating system you have recently downloaded, tools you have recently downloaded, IP addresses that you have recently used, who your father is, where you’ve worked -- these kinds of things. They all get tied together. XKeyscore is one of the interfaces for searching through this.
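A toy sketch of that kind of federated query: a selector fans out to several independent databases, and the hits come back joined with enrichment data. Every database name, field, and record below is invented for illustration:

```python
# Hypothetical data stores keyed by selector. Entirely made up.
databases = {
    "call_records": {"alice@example.org": ["called +41000000001"]},
    "web_activity": {"alice@example.org": ["downloaded tool X"]},
}

enrichment = {"alice@example.org": {"employer": "ExampleCorp"}}

def federated_query(selector):
    """Fan a selector out to each database, merge the hits, and join
    with precomputed enrichment data, then return it all to the analyst."""
    hits = {}
    for name, db in databases.items():
        if selector in db:
            hits[name] = db[selector]
    hits["enriched"] = enrichment.get(selector, {})
    return hits

print(federated_query("alice@example.org"))
```

The point of the sketch is the shape: one query interface, many back-end stores, one merged answer, which matches the distributed-database description above.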
[23:20]
One of the fascinating things about looking into XKeyscore is that we are starting to see a common picture. It's not just quid pro quo for data sharing with intelligence agencies. It's actually that all of the intelligence agencies of the world that are aligned -- for example, Five Eyes, that is the so-called alignment of the UK, USA, New Zealand, Canada and Australia -- as well as so-called third-party countries, Germany, for example.
They basically have a ratio. It sounds to me a little bit like a BitTorrent tracker, which is kind of funny. Only it's the data that they spy on you to get, and they feed it back into these systems.
But it appears that GCHQ has, for example, unfettered access into the NSA’s database, and vice versa.
What it looks like is these intelligence agencies have given up on national sovereignty, in order to work directly with each other. Which is madness, in a sense that I can almost not wrap my mind around. It’s really terrifying.
Now, imagine that those same people have access to something like Quantum. There’s a suite of things that are like this. Quantum Insertion is just one of many things.
There also exists, I mentioned, the way that they insert the attack payload. I’m being very careful about how I phrase this. There is a system that determines whether or not you’re vulnerable, and there is a different system that decides that since you are vulnerable, it will insert the right payload in order to exploit you.
[25:00]
Imagine the GCHQ, also, since we know they used it on Belgacom, imagine that they have that, and they’re able to use it against, for example, a Swiss company, an American company. They have this.
To me, this seems like a pretty serious problem. Not just politically, but technologically. It is especially scary when we consider that we cannot actually do forensics on our devices. I mean really do forensics. This ties into a lack of free software, verifiable software.
It's extremely difficult to know if your EFI firmware, your BIOS, has been replaced, if the firmware on your hard drive is what you think it is, if the embedded controller in your keyboard is what you think it is, if the operating system or the user space has been tampered with.
Fundamentally, when we think of this system, we have to think about the worst possible outcomes. If you were to design such a system as a good engineer, now make it worse: that’s what exists. I’m serious.
The good news is that, as widespread as this system seems to be, and as much coverage that it seems to have over the world, we haven’t yet lost. Here we are having this conversation.
We are seeing a paradigm shift towards a "collect everything" perspective. Collect whatever you can, do retroactive policing, automatically pick new targets, do full recording on those targets. That's really, really scary.
I wanted to spend the first half of my talk explaining why I think we’re totally doomed.
And the second half talking about, regardless of whether or not we’re totally doomed, what we should do to resist.
Just to slightly touch on this, one of the fundamental things we need to consider is the right to free association and how it is under threat.
When I met this gentleman today who told me that his friend had been killed by a drone, or an associate killed by a drone, I instinctively took a step backwards. Then I realized that he had the same feeling, and that I had done to him, what people regularly do to me.
I felt really bad, and I stepped forward and I apologized, not only for our country killing his friend, but also for having let that fear take me, even for a moment.
What I would like to say is that I don’t think that silence will protect us. Being silent about these issues, for example, just following orders, is not the way that we should continue with these systems. We should build systems of resistance.
To give you an idea of this, there's a system that's called Colocation. This is a separate NSA program. The Colocation system is a system that does graph analysis. It looks at cell site location data. For those of you that are writing this up for a newspaper: when General, or "Emperor", Alexander, as he is called in Washington, DC, no kidding, talks about how they're not collecting things under Section 215, he says it in such a funny way: "Well, not under this program." Thank you, brilliant.
What they’re doing is they’re pulling location data for everybody nearby. They look to see if Caspar’s cell phone is in an audience full of people, and if some of these people go to a cafe with him afterwards, they pull the data to automatically find targets by geographic proximity, even if they don’t call each other, because Caspar doesn’t have a cell phone.
Audience Member: I don’t either, for that reason.
Jacob: Exactly. I only have a cell phone just to mess with people at parties these days. It's informative and fun. It's a very motivating tool. Have you ever met someone that says, "I don't care about any of this stuff at all"? You say, "What's your phone number?" Then you dial it: "Hey, thanks for those documents," click.
[laughter and applause]
You’ve never seen someone so immediately care about the problem. They go through phases, the phases of grief. They get to anger pretty quickly. They also tend to start to recognize that the place they should place their anger is not with me. That’s important.
Colocation is really scary, when we consider the passive wire-tapping of the planet. It’s really scary when we consider the active attacking for economic espionage and for political things.
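The colocation analysis described above can be sketched very simply: two phones seen at the same cell tower in the same time window become linked, no call required. The pings, towers, and thresholds below are entirely made up:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (phone, tower, time window) sightings.
pings = [
    ("phone_A", "tower_1", "18:00"),
    ("phone_B", "tower_1", "18:00"),   # A and B at the talk together
    ("phone_A", "tower_2", "19:00"),
    ("phone_B", "tower_2", "19:00"),   # A and B at the cafe afterwards
    ("phone_C", "tower_9", "18:00"),
]

def colocated_pairs(pings, min_meetings=2):
    """Link phones that appear at the same (tower, window) at least
    min_meetings times, purely from geographic proximity."""
    cells = defaultdict(set)
    for phone, tower, window in pings:
        cells[(tower, window)].add(phone)
    meetings = defaultdict(int)
    for phones in cells.values():
        for pair in combinations(sorted(phones), 2):
            meetings[pair] += 1
    return {pair for pair, n in meetings.items() if n >= min_meetings}

print(colocated_pairs(pings))  # {('phone_A', 'phone_B')}
```

Notice that phone_A and phone_B never contact each other; repeated co-presence alone is enough to put an edge between them in the graph.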
For example, cyber security is a reason to spy. General, Emperor Alexander has said that, "They are allowed to monitor people for cyber security reasons." Imagine, for example, that there is a global dragnet looking for logins and passwords and storing it in a database. Imagine that for a second. For cyber security reasons.
Now, that is real. That does exist. It’s like dsniff by Dug Song, but for the whole planet, when they can find it. Every time you use a plain text login and password over the Internet, I’m fairly certain that if the NSA has coverage, and they do probably, they’ve got it in a database.
I think they also store a lot of data, cryptographic data, for example cryptographic handshakes. So every Diffie-Hellman performed in the wild. Imagine you could watch the entire Internet set of Diffie-Hellmans, what would you be able to do?
I will leave that as an open research question to Dan and Tanja and Arjen. That, I think, is also a realistic thing. We have to model for an adversary that, when they do selective recording of certain things, they may be able to do mathematical attacks that we would even have trouble simulating.
How do you simulate 50,000 different kinds of devices doing a Diffie-Hellman, some with bad entropy, some without, some talking to a server that has great entropy, some with back-doored entropy, some following the NIST standard, or NIST back-doored standards?
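To make the bad-entropy worry concrete, here is a toy sketch with a deliberately tiny, insecure group. The parameters and the seed space are invented for illustration; real DH uses groups of thousands of bits, and the point is only what predictable randomness does to an otherwise sound exchange:

```python
import random

P, G = 2087, 2  # toy group parameters, NOT secure

def dh_public(seed):
    """A device whose RNG starts from a predictable state: the 'secret'
    exponent is a deterministic function of the seed."""
    rng = random.Random(seed)          # bad entropy
    secret = rng.randrange(2, P - 1)
    return secret, pow(G, secret, P)

# The same broken device produces the same public value in every
# handshake, which is trivially linkable by a passive observer.
s1, pub1 = dh_public(seed=42)
s2, pub2 = dh_public(seed=42)
print(pub1 == pub2)  # True

# Worse: an observer who can enumerate the weak seed space recovers a
# working secret, and with it the session key shared with any peer.
b_secret = 777                       # the peer, with fine entropy
b_pub = pow(G, b_secret, P)
shared_real = pow(b_pub, s1, P)
guess_seed = next(seed for seed in range(1000) if dh_public(seed)[1] == pub1)
s_guess, _ = dh_public(guess_seed)
print(pow(b_pub, s_guess, P) == shared_real)  # True
```

The peer's good entropy doesn't help: one weak side of the handshake is enough to give up the shared key.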
[30:52]
I wanted to echo, also, what Bruce said about RC4. I really think there is some crypto out there that is broken. If an academic has broken it in some way, you should really stop using it.
Because the NSA is definitely ahead of some academics. And yet, the only academics that are really investing in attacking some of these systems are the academics who are breaking the cutting-edge stuff. So they are going to look at something like "Ron's Code 4" and they are going to say, "What's the point?"
And yet, if you look at some of the major websites of the world, what do they use, but RC4? So if you go to Google for example, right now, you will get a web page with RC4. That’s pretty crazy in my mind.
We’ve got to consider that theoretical academic attacks are only going to get better, and they probably are already better to the point of being practical.
I don't know if we will ever actually see that in public. I mean, Dan and Tanja, I think, have done some really good improvements on RC4 attacks. I think there's a lot more to be done there.
That includes horrible protocols which are known to be obviously broken, like PPTP. We should stop using anything like that. But something like IPsec, that has a NIST standard, is going to be just as weak, if not worse in ways that we don't know.
We need to consider that proprietary hardware and proprietary software are, in fact, a serious problem. For my computing environment, I try really hard to get rid of all of that.
I even drill the microphones out of my laptop.
You know you have to take these things seriously, depending on the stuff that you are working on. Unfortunately, I think, we are about as good at securing general purpose computers, at the moment, as we’ve ever been in the history of the world. That is really bad. It is awful.
It doesn’t get a lot better with the walled-garden solutions. Especially when some of those walled-garden solution providers, like Apple, are collaborating with the NSA, à la PRISM and other things.
Even in the best security centers, when it comes to malware, or when it comes to even some kinds of targeted attacks, it just doesn’t stand up against the stuff that the NSA can do -- either through coercion or worse.
So, how do we resist these things?
I think, basically, there are some tough prices to be paid, and we just have to confront this.
One of them is that there are entire classes of problems that we just don’t really solve right now.
For example, if I want to meet someone -- say that I’d never met a gentleman in the front row before and we want to meet -- we need a discovery protocol which is forward secret. I’ve been working on this with a fellow whose name I won’t mention at the moment -- he’s a great guy, and when he wants to say publicly that he’s been working on this, I’m sure he will -- but we call it phrase-automated nym discovery authentication, or something along those lines.
It’s called PANDA, because pandas are endangered and we wanted a cute name -- just like free association is in danger. The idea is that we want a way to have a shared secret and then to be able to meet in a forward-secret way to do a key exchange.
This is fundamentally important because when we look at selector based surveillance, what we see is that they look for phrases or they look for email addresses or they look for certain qualifiers, like a telephone number. They say, "OK, whenever these things pop up on the grid, we want to extract everything tied to that. Then all the features of that flow of data, we want to look for more features from that to find more things."
The moment you connect to the Internet, you immediately betray all your devices and every device attached to that network, probably, also, is tagged in the same way or it’s at least related, and that’s captured. So when you’re behind a NAT that makes for a really interesting surveillance problem I’m sure.
What we need is a way to meet again, when you fall out of your selector-based social graph, in a forward secret way, such that even if you go to a server and have this key exchange there, there is only a window of time in which it is useful.
We’re doing this with EKE2, which is an encrypted key exchange, and we basically take a passphrase that a user has and we run it through something which slows it down, such as scrypt or PBKDF2. We slow it down as much as is practically possible and then, we basically use that to generate a tag.
We post the encrypted part, basically of the Diffie-Hellman and the other side does the same. From that, we are able to derive some other information. Then, from that, we can then jump forward and have a key exchange.
That’s a pretty convoluted way to do it, but there are no fixed selectors, and the passphrase is only brute-forceable and useful for a very small window of time. We then do that over the Tor network to a Tor hidden service and voilà, we can now do pretty awesome key exchanges where we don’t give up any selectors at all.
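The passphrase-hardening and tag-derivation step he describes can be sketched roughly as follows, assuming PBKDF2 (the talk mentions scrypt or PBKDF2). The context strings, split of the key, and parameters here are illustrative assumptions, not Pond/PANDA's actual construction:

```python
# Sketch of passphrase -> rendezvous tag, in the spirit of the scheme
# described above. Illustrative only; not the real PANDA protocol.

import hashlib
import hmac

def derive_tag(passphrase: str, context: bytes = b"panda-sketch",
               iterations: int = 200_000) -> bytes:
    # Slow down brute force: every candidate passphrase costs the attacker
    # `iterations` HMAC-SHA256 computations as well.
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              context, iterations)
    # Derive the meeting tag from the stretched key; a second derivation
    # (not shown) could encrypt the posted Diffie-Hellman share.
    return hmac.new(key, b"meeting-tag", hashlib.sha256).digest()

# Both sides derive the same tag from the shared phrase and can rendezvous
# at the server without posting any long-lived selector.
assert derive_tag("correct horse battery staple") == \
       derive_tag("correct horse battery staple")
```

The key design point is that the tag is only useful during the agreed window, so even a recorded exchange is only brute-forceable against that window, not forever.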
We are using this in a system, one of the systems where it’s deployed right now, it’s called Pond. Pond is sort of a forward secret messaging system which is in an experimental phase and it’s a little bit like email and it’s a little bit like off the record messaging.
It gives you a delay tolerant system where we try to beat the traffic analysis adversary. We hope that they can’t beat Tor. I’m fairly confident that the way that they would beat Tor, if they could beat Tor, would be to try to target and exploit things like the web browser.
In fact, I’m certain of that, you should—I’m going to bite my tongue on that one. I’m pretty confident in Tor.
In fact, it’s one of the only things that I work on. Tor pays me, so I mean, they pay me to say that, I’m sure. But I actually work on it because I believe in it. I would quit if I didn’t believe in it.
This kind of system requires an anonymous communications channel, but when we combine these things, it means you can meet a person, have a communication with them, share a file -- it’s all forward secret -- and we can even, potentially, build cryptographic systems into it that are resistant to quantum computers.
That would be pretty incredible. Trying to save email, for example, I think, is a lost cause, mostly because it’s just a total disaster from every perspective that you can look at it, except in how it’s distributed.
That’s, actually, very nicely architected. But in terms of security, things like PGP I trust, but actually getting people to use it -- the user interface -- is just a nightmare.
We need to solve the meeting problem. Our first stab at that, PANDA, is probably garbage. Not garbage in the forward-secret sense, but garbage in the sense that it probably won’t be usable. It probably won’t be that great.
We need to start attacking those kinds of problems. When we have that, it means that we can start to organize. It means that we can meet again. We can do a key exchange securely. We can do it in person. We can do it remotely. We can do something else. I’m not sure what that is, but we, then, have the foundation to start to build more useful systems all in this way.
To that end, I think we need to just basically get rid of plain text on the Internet whenever possible. Internal networks for major corporations -- I hear that Google, for example, coincidentally, is starting to encrypt its own private backbone, all of its internal fibers.
So, if the NSA, for example, was inside or sniffing some of the backbones, they would get cipher text. So, big networks need to adopt that kind of stuff internally, even if the communications that are running on top of it maybe are TLS, or something like TLS.
We also need to add things like “off the record messaging” (OTR). It’s another project I work on, as a fair disclosure. Basically, I’m just a maintainer there, so I deserve almost none of the applause that comes from OTR. But, embedding OTR into other programs, so that when we send an SMS -- things like TextSecure, they do this -- it’s like off the record messaging, but for SMS.
Having these kinds of systems deployed is the first step to reducing, at least in some cases, the content of the messages. But when you have something like OTR and you have something like Pond, you can actually start to reduce the total amount of metadata, as well as the actual content, that a passive adversary can have.
When you have a secure key exchange, you can start to reduce even what an active attacker would have. To that end, I think that if you’re working on these things -- for example, I know a couple of people here are interested in the Syrian Revolution -- one of the things that you would probably want to consider is: have one machine, which is completely routed through Tor, using something like the Tails bootable live CD. Never use a Web browser on that computer, ever, no matter what. Have another machine for browsing the Web.
We need some way to be able to transfer information, like a URL, from a secure communication system -- probably visually with QR codes -- to that machine which is going to get compromised. And it will. A web browser is just a nightmare of code. Really it’s a disaster. It’s really quite, quite scary.
What we need is, I think, more money for research, specifically into decentralized, distributed, verifiable and reproducible builds of free software, as well as actually building the same sort of Internet that we have now -- except, instead of being centralized and compromised, it can be a little bit harder to use in some cases for the initial set-up.
And then, we can move to a decentralized, distributed naming system. Something like the GNU Name System, I think, is a good idea. Something like DNSCurve, I think, is a good idea.
They start to move us in that direction. They don’t solve the whole problem.
I think funding research in that is really critical, because part of the way that the NSA wins with this is that they do have the so-called "home field advantage." Part of the home field advantage that they have is all the major corporations that control even things like DNSSEC -- who are they beholden to, if not the NSA?
That, to me, is a very scary thing. As an anecdote to this, I met a person who works at a security company, and they told me about a legal instrument which, I believe, is the NSA’s business records request. Basically -- and I haven’t verified this yet, but it seems feasible that it’s true; I heard it from two separate sources -- they were told that whatever they do for their customers, they are required to turn over their entire work product to this requesting agency, even before they turn it over to their customers. And they’re not allowed to tell their shareholders, and they’re not allowed to do anything with this, except to hand it over.
They can, also, hand it over to the customer like they normally would. But this means that the company has these privacy agreements and disclosure agreements, you know, those really important papers that are sometimes viewed as more important than the constitution itself.
Then, basically, private property is destroyed, as the state has essentially taken over private armies of computer security researchers: their research, exploits, bugs they found, and interesting audits for private customers will get handed over to this requesting agency.
I’ve heard that from two separate sources, but I haven’t seen documents on it. If anyone in this audience works for a security company and has had something like that happen to you -- you’d have to be an American, I suspect there are a few of you here -- I’d love to learn more about that. Because what I see in the other things is that that seems to be what they would want to do in some cases.
For example, with the Bullrun cryptographic subversion stuff, it seems clear that one of the things that they’re going to want to do is get internal information from each of those corporations. They are really going to push hard on that.
Well, how do they do that? With these business record requests.
We’re missing one link there. I think that will probably come out in due time.
If we look beyond just the technology, and we look at policy, we need to really consider that free software is one of the only ways to move forward, but we need open and verifiable hardware as well.
This is a really, really, really hard problem. For example, I think even proprietary CPUs are probably something worth betting against. Like Intel’s random number generator. I wouldn’t -- I don’t know. It just seems like a bad idea.
I mean, get a lot of sources of entropy. Don’t just use one directly from hardware which is a black box.
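The advice about not trusting a single black-box hardware source can be sketched like this; a simplified illustration of entropy mixing, not a production RNG design. The idea is to hash several independent inputs together, so a single backdoored source cannot control the output without also predicting all the others:

```python
# Mix several independent entropy sources through a hash, so that no single
# source (e.g. a backdoored hardware RNG) determines the output on its own.

import hashlib
import os
import time

def mixed_entropy(n: int = 32) -> bytes:
    h = hashlib.sha256()
    h.update(os.urandom(n))                              # OS pool (itself mixed)
    h.update(time.perf_counter_ns().to_bytes(8, "big"))  # timing jitter
    h.update(os.getpid().to_bytes(4, "big"))             # weak, but independent
    # In practice you would also feed hardware RNG output (e.g. RDRAND) in
    # here -- as *one input* to the hash, never as the sole generator.
    return h.digest()[:n]
```

This mirrors how operating-system entropy pools already work internally: the hash acts as an extractor, so the output is at least as unpredictable as the strongest single input.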
Maybe we can try to build diverse systems, where we take parts from many different manufacturers, where it really requires a lot more subversion than currently, when you have only one vendor for CPUs, for example.
Writing it into law that we need to have verifiable technological solutions means that when industry builds a solution... We will, actually, be building it for something. We will be building it for verifiability. We will be building it so that when someone does subvert it, we will be able to detect it.
For example, making it actually a crime to back-door cryptographic standards. Sounds great. Put those NSA guys in prison, right?
They have made it so that they have a mission, right? Mission one: information assurance, right? "Protect."
On the other hand, they have these other guys whose job is to do the exact opposite of that. The problem is that, basically, one of those really harms the other one a lot more. And that harms the rest of us, as well.
It comes from a sort of vanguard prospective where what they are trying to do is say that as long as they can watch, they can interfere in a way that will universally be beneficial. That’s really only true as long as they have hegemony.
That may not always be true. When Greece, for example, had the Athens Affair in 2004, someone compromised the so-called lawful interception systems. When the Greek government and Vodafone agreed that this is the reasonable standard for lawful interception, so-called, they probably didn’t consider that someone would subvert it.
Now that attack, as I understand, looks very similar to the Belgacom Case. This tells us that maybe sometimes people have these systems built for one set of people controlling it, but actually there is just no way to build it in such a way that only those people will in fact control it.
As for adding purposeful back-doors into these systems -- this intentional weakening -- essentially, I think we should look at criminalizing it, because there is no way to control it. This kind of surveillance is like land mines.
I know that I’m sort of to the extreme of some of the people that have talked today on this, because those people are under the mistaken -- I respect them -- assumption that the law will be able to hold these systems in place. But they just won’t, right?
If we are worried about Chinese hackers -- which I think is largely just veiled racism -- if we are to look at Chinese hackers as a big problem, what I would say is they’re not the big problem, because they keep getting caught all the time, and the way we learned about the NSA was that someone inside had to tell us.
They are in everything and really, everything. Actually making sure that when these NSA people travel to Europe that they get arrested would be fantastic, right?
Because these people commit crimes. And yet, for example, The Guardian doesn’t want to release names. That’s good, generally, because you should be careful about releasing names.
It’s interesting because if, for example, it was the name of a person who had committed a violent crime, they would put it on the front page. But there is a lot of sympathy for these people who are so-called "just following orders."
I think we need to look at policies that recognize that people who commit mass human rights violations, even if they are seemingly just surveillance, that those people should be prosecuted. Especially, because the NSA has a tagline which is, "We track ’em, you whack ’em."
[laughter 46:50]
Not a joke. “CIA Drone Program, NSA surveillance data”, that is, in fact, their real slogan internally.
When they target people for exploitation, they use the term "whacked." They have internalized the war metaphor, and they are using these systems in a kind of cyber war, if you want to use that term.
I hate the word "cyber." I grew up in the ’90s. Anytime I hear the word "cyber," I think of AOL and something unrelated to war.
But nonetheless, it’s really critical that we look at these systems and see all the things that we are losing.
We are losing due process. We are losing fairness and justice. In fact, when we have thousands of people being killed by these drone programs, it starts to really weigh.
For example, do any of you know the number of people killed on the Berlin Wall during its operation -- just a ballpark figure? Raise your hand -- anybody that knows.
How many?
Audience Member: 200?
Jacob: It was more like 180 something. Not to belittle the exact number. I won’t try to pretend that I know it, but I went to a Stasi Museum recently in Berlin where I’m living, and they give you a number that is under 200.
The number of people killed by drone strikes in the last 10 years, is like, what? An order of magnitude more than that? I think we really need to consider how policy is lagging behind technology.
We should try to ensure that when we learn about parallel construction, as Binney talks about, for example, that anybody involved in that is criminally prosecuted to the maximum penalty permitted under law. And that the policy also ensures that we use cryptographic standards to protect our communication.
So the radical notion, instead of adding more spying, is that we make it impossible to spy, so that we are, actually, securing our communications. The only way, I think, we’re going to do that is if we really marry policy and technology together in these ways.
Meeting each other online is one of the first steps. There are lots of other things like that. We need good delayed voice messaging. We need good real time voice messaging that has really strong cryptographic protections.
We need ways to compartmentalize. Not just to compartmentalize the information that we have in our day-to-day lives, but actually to compartmentalize different groups of activities that we might have.
For example, I know a researcher, who I think was targeted by the NSA, who was doing interesting cryptographic research. They announced that they were going to do something, that they had a great breakthrough. And then their systems were compromised a couple of days before they were going to do the release. It seemed very targeted.
It would seem to suggest that even our personal lives need to be compartmentalized in different security containers. We need some way to make that usable. That’s, I think, very difficult.
But the threat model, at least for me, is very clear, which is that wherever there are unique identifiers, it will be recorded. The NSA wants to record it forever. At least 100 years, so that’s forever in my lifetime.
They currently have approximately 15 years of data retention for the metadata and for some content, which is currently an unquantified thing.
So, that said, in a bit I might be able to take some questions, which might be useful.
One thing we saw earlier about Facebook was the real name policy. I recently met someone who really blew my mind. He’s a really great guy. He works on a project, the Transnational Republics Project. He was telling me that a fascinating notion is that we have one name.
The really fascinating notion is that it, actually, belongs to a state. So your name is not actually your own, it’s the property of the state.
And the way that Facebook verifies it, for example, is that you show them a government issued ID or something like that. The only people that really get anonymity are people that can produce IDs, and those are states, and not us.
A fascinating related thing is that this is something that we can actually fix. The Transnational Republic is trying to actually change it, so that we can have all of our own names, or regenerate them, which sounds like a totally far out idea, because it is. But this idea is that you should be able to have a name, that it is your name, but not because the state says so.
That is to say, there should be a third party, independent, verifiable way, to declare a name or a pseudonym is yours.
This is a really hard problem, actually. But if we tie it up together with some of the other problems we have, we see it is actually a generic problem which we’ve punted on for a long time.
So, things like that, I suspect, are worth working on, and I hope that I motivated you at least a little bit. I have a cell phone on.
[laughter - 51:50]
Jacob: I wanted to draw one really important conclusion here, which is about Dehomag, the German subsidiary of IBM. During the Second World War, they knowingly built punch card machines for the Nazis and repaired them in Auschwitz. So, they understood what the machines were being used for.
I know that whenever you bring up the Holocaust and the Internet, that, in another discussion Godwin’s law rules and you’ve lost the conversation. But I think we really need to consider the fact that when these people are killing Muslims without a trial, with flying robots, things have gone too far. It is not the case that we need to wait until it gets significantly worse.
Guantanamo Bay exists, other black prisons exist, and these targeting systems exist. We should really try to stop the engineers that work on these systems, that are working on extra-judicial assassination programs.
But we should also work just as diligently, I think, on recognizing the parallels with history. So, Dehomag was aware of what they were doing. They understood it. Today, you have groups like Gamma, the Gamma Group, and Hacking Team, and some of these other groups, who are building so-called lawful interception systems.
In the case of Hacking Team, they actually sell weaponized exploits to an absolute monarchy in Morocco. They use that to attack people involved in Mamfakinch, which is a legitimate journalistic organization, to break into their computers. This has also happened in the UAE, it has happened in Egypt. The point is that it’s not just a problem in "Over there-istan," it’s a problem everywhere, because Gamma actually recently just had their software purchased by the German government.
What we’re seeing is the battlefield sort of coming home, you could say. We really need to work on solving these problems because of those capabilities. Between what Dehomag did -- and they understood what was happening -- and today, there are differences. But they are differences in scale and differences in progression, not really differences in the nature of these things. That is, non-consensual targeted surveillance for really strange power dynamics, which are a little difficult to understand, like targeting people for torture or assassination. That’s really a matter of scale. This is something we can actually resist; we can stop it.
I wanted to hopefully convey some of these things to you in a cohesive way. I know that we’re all sort of tired and I’m starving and tired, as well.
Hopefully, I didn’t just get myself into too much trouble, but I wanted to leave you with one last thought, which is, you know how everyone talks about minimization in surveillance? Imagine there’s a wiretapping machine and it goes into an Internet service point. I just wanted to clear this up because this is maybe useful to mention.
There exists a fiber that is tapped, and then there is a machine that comes and it grabs the fiber tap, and then it rebroadcasts that information. The NSA gets one and when it gets that one, it does the so-called 702 minimization procedure.
When the FBI gets a tap, what procedure do they follow? It turns out the CIA also gets one of those taps.
So, the CIA, the NSA, the FBI, they duplicate this data, and each of them has their own stuff that they do with this data. The CIA says they don’t want to throw anything away.
What we need to consider here is that the problem is not just the NSA. The problem is actually all of these agencies working together. They have effectively subverted our democracy. One of the last stands against total surveillance that exists is, in fact, crypto.
If you are an engineer and you are working here, if you are a cryptographer, you are one of the few people on the planet that can help to resist this type of stuff non-politically. And politically, I think that all of us have a role in this.
Hopefully, we can actually do that. I’d like to not be so doom and gloom. I don’t necessarily think that it’s going to be totally awful for everyone. But I don’t think we should wait until it’s totally awful for everyone to do something about it, because it’s very awful for a number of people right now. And especially if you happen to be an anti-drone activist in Yemen or in Pakistan, in the Waziristan region, or any of the other 80 countries where US drones are deployed and killing people without a trial.
Thank you very much. If you have any questions.
[applause 56:40]
Arjen Lenstra: Questions for Jake.
Audience Member: I have a question for you.
Jacob: Thank you.
Audience Member: I’m trying to understand who sponsors the activities against this surveillance. I’m no specialist and my knowledge on this stems from Wikipedia.
But as far as I know, Tor was sponsored by the US administration in the beginning. As far as I read, Tor was subverted two months ago, and of the 3,000 nodes on the Internet, approximately half were taken away. And this half was run by the porn industry.
Jacob: I can answer the first question. And the second question, I don’t quite understand, but I know the story behind it. Are you talking about the FBI...?
Audience Member: The story I read was that the FBI took out a child porn ring, and in this action took out half of the nodes of Tor.
Jacob: OK. I understand both questions.
So the first point is about financial transparency and I think it’s important. Julian Assange wrote a paper, which, if you guys don’t know about Julian Assange, you should check him out. He’s going to be a big deal someday.
I say that lovingly. In a really positive sense. He wrote a paper or an essay which is called "On the Take and Loving it." It tracks academic grants from the Department of Defense and the National Security Agency. He did this years ago.
It’s important to look at where the money comes from. It’s also important to understand the process, and the procedures, and the goals, and the outcomes as they actually are.
In the case of Tor, we put all of our financials up online. I’m not here speaking on behalf of Tor. I’m here because of my investigative journalism, which I do with “Der Spiegel” and with other groups. That is largely funded, well, basically, out of my own pocket. Because it is important to me, as an American and just as a human being, to work on that stuff.
But Tor’s funding is all published online. It includes the State Department and the Electronic Frontier Foundation.
We are one of the only free software projects that’s had the Department of Defense and EFF funding us, as well as the Swedish International Development Agency and a number of other groups. We would love to not take any money from any government to write any free software, but at the same time, I actually don’t think that all states and their funding of good, reasonable projects are a problem.
When it becomes a problem is if they try to interfere. Here’s what I know about states’ interference in Tor.
There is an article, right now, that is being suppressed by GCHQ and the White House. It is not coming out right now because they would like certain things in that article to change.
We want that article to come out and we want all of the documents related to that article to be published in full, so that people can see them. The NSA has been targeting Tor. They are failing, because we are better than them.
[laughter and applause - 59:40]
Jacob: But not every time, and not all the time. So we are really working to improve it.
Part of the reason that I live in Europe right now is because I don’t feel comfortable with what is happening in the United States.
What you mentioned, your second point about the FBI, is correct. The FBI did, in fact -- and it looks like it was the FBI/NSA -- write an exploit for the Tor browser -- an old version which we had already patched and fixed. And they targeted a specific set of people. They did not, however, take down any Tor nodes, nor did they run any Tor nodes, as far as we understand it.
They did, however, compromise someone’s website and then, put up an exploit payload on that and then, targeted everybody that went to that website. They were able to exploit the browser on some unpatched machines in a way where it would bypass Tor.
So this is not a problem with Tor; it’s a problem with Firefox -- our build of Firefox, though, so this is why we release security updates. They were able to connect back to an FBI server. This is a pretty amazing thing, that we actually caught an exploit in the wild. We work really hard to stop this kind of stuff.
I get that child pornography is a really bad problem. But when the states use techniques that are indistinguishable from criminal activity, it’s difficult for me to really have a lot of sympathy for the stuff that they’re doing.
I mean, I know that’s an untenable position because the topics are such extreme topics. But breaking into people’s computers, I don’t think that’s actually the solution.
We are vehemently against that, and we work very hard to make sure that that’s the best they can do. We work very hard to make sure that they cannot do better than that, and to stop them from doing exactly that.
When that article comes out in the near future, which I really hope it will be any minute and I’ve been waiting for weeks for it, I think you will see that this is the angle that these guys have taken, and that we’ve done everything in our power to not only to not let any money influence us, but that they’ve got to take these angles because we refuse to be complicit in the things that they do.
We give them a really hard target, more than almost anything else.
Arjen: More questions?
Audience Member: Thanks. Very, very interesting talks, today. Brilliant.
A quick question about Tor. Whenever I am using Tor, if I click on a network map, I find you get all the traceroutes of the Tor nodes. There’s always one going to the center of the USA, like Arkansas, and it’s really thick and green. It’s always there. I wonder what that means. I can’t figure it out.
Jacob: The Tor network is made... I mean I really came here to talk about NSA spying. I’m surprised that when you have an opportunity to ask about the world’s largest spying operation that you ask about how to use the Tor user interface.
[laughter]
Jacob: The short version is I’ll just ignore your question, and answer a question which is related to it, which is the same question, but it’s a little bit different, which is, does the NSA run Tor nodes?
As far as I can tell, the NSA, they don’t run Tor nodes. The things that I’ve seen -- and I’m happy to say that I’ve seen some things about this -- as far as I can tell, that’s not the angle they’re going for. There’s no evidence of them, that I have seen -- maybe it exists -- that they’re running nodes.
There are, however, tons of people with fast Internet connections, who care about free speech on the Internet, the right to read and the ability to fight against censorship, who have super fast Internet connections and will run a Tor relay.
When you have 3,000, 4,000, 5,000 Tor relays in the network, you’re going to rotate over those relays quite frequently, and if that one relay is your guard -- has the so-called guard flag -- then you’re going to see that one a lot.
I would really encourage you to not reconfigure your Tor to avoid countries where we know there is a lot of surveillance because this is actually a really fascinating thing. While you have the Tempora program, which buffers the Internet for three days, you also have some semblance of legal protection and some semblance of networks, which are not under complete and total surveillance, where the NSA will also not easily be able to break into that network and reposition and extract data from it.
If you behave like all other Tor users, which includes passing through some of these countries, I think you may be better off than if you change it to avoid them. Especially because the countries where there is allegedly less surveillance, there might in fact be more illicit surveillance there, and only routing through those countries might change the probability.
What Tor does is it actually changes it from a certainty -- which is that you’re at home, you browse the web and your machine is compromised, it runs auto-update from your house with a unique identifier, you’re compromised -- to a statistical probability, where maybe they’re watching a particular exit, or maybe they’re able to see that network, and then they’re able to target that. So I think it’s better.
And I think the situation you’re describing is that there’s a high-capacity node. So, one way to fix this problem is: run some high-capacity nodes to offset it in the network, instead of just being a leech. That will help a lot because other people have that problem, too.
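Jacob's suggestion of running a high-capacity relay rather than only consuming capacity corresponds to a torrc along these lines (the nickname, contact address, and bandwidth figures are placeholders, not recommendations):

```
# torrc for a non-exit Tor relay (example values only):
ORPort 9001
Nickname myrelay                # placeholder
ContactInfo you@example.com     # placeholder
ExitPolicy reject *:*           # relay traffic, but don't be an exit
RelayBandwidthRate 10 MBytes
RelayBandwidthBurst 20 MBytes
```

A non-exit relay adds guard and middle capacity to the network without the operator handling exit traffic.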
[laughter and applause]
Arjen: One more question up here.
Jacob: I mean that in a loving way. Thanks for using Tor. You’re giving me cover traffic that I need.
Audience Member: I can remark that here the NSA is presented like some omnipotent Darth Vader with unlimited capacity for databases, communication, whatever. Do you really believe that they will be technically able to keep the data for 15 years about everything, including the call to my wife that I will make in five minutes? Will they really be able to analyze all this stuff, all this rubbish?
Jacob: Yes. Next question.
Audience Member: Will they be overwhelmed with all this rubbish? Maybe it will be even worse for them. The really important message will be sent, let’s say, in some Moroccan Arabic with some code words, and they will not notice it at all.
Jacob: I understand your question. We should have another question, I think, if it’s possible because I can just say, analytics are better than we think. On top of that, data storage is cheaper and cheaper every single day. On top of that, retroactive policing is the thing that really matters to a number of these agencies.
Being able to use data as a “time machine” is really something that we are seeing more and more. Even if they aren’t able to stop, let’s say, the Boston bombing in advance, they want to be able to go back in time and target people for political surveillance and for other things, using this data and pulling through it. They definitely have the ability to store 15 years of data. They’ve already got 15 plus years of data online, as it is right now.
I’ve heard some reports of people that may have been involved in building those storage arrays. They’re using all of the tax dollars that they can to really build incredible storage arrays. And they’re building data centers, for example, like Bluffdale, Utah, which is huge. It’s a huge data center, which I’ve heard is called an MDR or a Massive Data Repository or a Mission Data Repository.
These centers are hooked up, they can be queried. This is where something like XKeyscore comes in. You’ve got MDR here, an MDR here, an MDR here. You have a query here, a query flows to these different systems, then the answers flow back. Then they can make sense of it because they can say, "This person was interesting." Then, they can do graph analysis on it. Then, they can do other things, extract features, and so on.
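The query pattern Jacob is describing -- one query fanned out to several repositories, with answers flowing back and being merged -- is the classic scatter-gather pattern. A sketch of that pattern (the repository names and record fields are invented for illustration; nothing here is based on the actual system):

```python
# Scatter-gather sketch of a federated query (entirely hypothetical
# names and data; only the fan-out/merge pattern matches the talk).
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for separate data repositories, each independently queryable.
repositories = {
    "site-a": [{"selector": "alice@example.com", "ts": 1}],
    "site-b": [{"selector": "bob@example.com", "ts": 2}],
    "site-c": [{"selector": "alice@example.com", "ts": 3}],
}

def query_site(site, selector):
    # In a real federated system this would be a network call.
    return [r for r in repositories[site] if r["selector"] == selector]

def federated_query(selector):
    # Scatter the query to every site in parallel, then gather results.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_site, s, selector) for s in repositories]
        results = [r for f in futures for r in f.result()]
    return sorted(results, key=lambda r: r["ts"])

hits = federated_query("alice@example.com")
print(len(hits))  # 2 -- one hit from site-a, one from site-c
```

Once the merged results are back, the downstream steps Jacob mentions (graph analysis, feature extraction) operate on that combined answer set.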
If someone wants to run an analysis on all the TLS handshakes of the world, they can maybe do that query. I think that that’s a possibility. Then, they can pull all of that from all this data for 15 years.
I mean, if the question is, "Do we really have to worry about it?" I think the answer is, "Yes, we have to worry about it. It’s a serious problem."
I’m sorry to be depressing about it, but I really think it’s important to not try to minimize that. Because it’s not their success that we need to be worried about. What we need to worry about is that they won’t do a very good job, but it will still result in someone getting killed by a flying robot.
Arjen: We’ll take one final question.
Audience Member: Jake, thanks for a great talk. Last question, it’s a bit of pressure after a fantastic day. Thanks, Arjen. I had one very short comment or question and then, a bit longer one, which I would like to challenge you with.
Jacob: You only get one question, he says.
Audience Member: OK, so the short one. PGP won’t protect you against the social graph, right? I mean the metadata is not protected. The second is, because you said PGP is...
[laughter]
Jacob: If you’re going to ask a second question, give the microphone to a woman in the audience, who has not had a voice, today. That would be what I would say.
Arjen: That was the last question, everybody is very thirsty and exhausted. Thank you very much for being here and for still being here and please join us in the apero [Swiss word for "drinks"].
Caspar Bowden: There’s one more thing I must say, which is we must thank Arjen, because none of this would have happened, this fantastic day...
[applause]
Jacob: Thank you.