Friday, February 23, 2018

Blockchain Export Control

Wanting to withdraw from the Wassenaar Arrangement is a totally sane policy position, and hopefully this blog post will help explain why.

Mara would be better off rewriting Wassenaar's regulatory language as a Solidity smart contract on top of Ethereum. The two share several key features (aside from the obtuseness of the language). In particular, both can be described as one-way transaction streams.
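To make the analogy concrete, here is a toy sketch (purely illustrative, not anything in the actual WA text or a real contract) of a rule store that behaves like a one-way transaction stream: entries can always be appended, but never revised or erased.

```python
class OneWayLedger:
    """Toy model of a Wassenaar-style (or blockchain-style) rule store:
    new entries can always be appended, but nothing can be changed or
    removed without total consensus."""

    def __init__(self):
        self._entries = []

    def append(self, rule):
        # Creating a new regulation is easy.
        self._entries.append(rule)

    def revise(self, index, new_rule):
        # Revising one is, in practice, impossible.
        raise PermissionError("revising an entry requires 41-nation consensus")

    def erase(self, index):
        raise PermissionError("erasing an entry requires 41-nation consensus")

    @property
    def entries(self):
        return tuple(self._entries)
```

That is the whole pathology in ten lines: the write path works, the update and delete paths throw.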

I know that supporters of the WA, which requires all 41 member nations to agree before any change happens, think that the current path of export control is hunky dory and well adjusted to technical realities. But even in areas that ARE NOT CYBER, you only have to sit through a couple of public ISTAC meetings before you see that while it is easy to CREATE regulations, it is nearly impossible to revise or erase them. This is why we have regulations on the books that appear to apply to technology from the 1950s, which is how people will one day look at all Ethereum programs.

For technologies that change slowly, this is less of an issue. But you cannot predict the rate of technological change before you decide to regulate something with export controls. Nor is any kind of return-on-investment function specified for your regulation, so unused and ill-planned regulatory captures just hang around on the Wassenaar blockchain forever.

As a concrete example, let's take a look at Joseph Cox's spreadsheets, in which he FOIA'd various UK government export-license filings.

The 5A1J ("internet surveillance system") spreadsheet, here, specifies two real exports: one of what appears to be ETI Group's EVIDENT system to the UAE, and another that appears to be BAE Detica to Singapore. Both were approved.

Now, I personally have spent maybe fifty hours this year trying to untangle the stunningly bad 5A1J language, which uses technically incorrect terminology, arrived vastly out of date (i.e., it applies to any next-gen firewall or breach-detection system), and has no clear performance characteristics. All of this for something that in the UK resulted in TWO SALES, which, had they been blocked, would just have resulted in the host governments putting something together from off-the-shelf components?!

Taking a look at his 4D4 "intrusion software" spreadsheet, here, you get similar results:

  • A sale to the United States
  • A sale of a blanket license for "Basically anything penetration testing related" to Jordan, Philippines, Indonesia, Kuwait, Egypt, Qatar, Oman, Saudi Arabia, Singapore and Dubai.
  • A sale to Bahrain
  • A sale to Dubai (but just for equipment "related"?)

Even if those are the four most important export control licenses ever issued, I think the time anyone has spent implementing or talking about these regulations is EXACTLY LIKE the rainforest fed daily into the blazing fire that is Ethereum's attempt to emulate the world's slowest Raspberry Pi running Java.

There's a weird conception among "civil society" experts that export control is useful whenever any technology can have negative uses. That's a misunderstanding of how Dual-Use works, one not shared even by the most optimistic of the specialists I've talked to in this area.

In addition, NOT issuing those licenses results in four possibilities, none of which is "Country does not get said capabilities":

  1. The country develops it internally by gluing off-the-shelf components together (because there is basically no barrier to entry in these markets - keep in mind HackingTeam was not...a big team)
  2. The country buys it from China 
  3. The country buys it from a Wassenaar country with a different and looser implementation of the regulation. (Unlike Ethereum, every WA implementation is different, which is super fun. For example, the US has this neat concept called "Deemed export" which means you need a license if you give the H1B employee next to you something that is controlled.)
  4. The country buys it from a reseller in a country with less baggage using a cover company and then emails it to themselves using the very complicated export control avoidance tool "Outlook Express".

But for FOUR LICENSES seriously who cares? This whole thing is like having a BBQ on the side of the space shuttle. With enough expended energy you can sure toast a few marshmallows, but it's not going to be the valuable memory building Boy Scout experience for your kids that maybe you were hoping for.

And I'll tell you why I personally care and it's because all the people who should be working on policies that "make sure we don't lose an AI war to China" are instead sitting in Commerce Dept rooms defending their companies from the deadly serious rear naked choke that is Wassenaar! And it's not just cyber, it's everything.

If you want to make a number for your controlled Frommy Widget in the WA go from 4MHz to 6MHz, then it's a simple three-year process of arguing about it with various agencies, and then it goes through the system, and by the time the language has changed it's already out of date, much like every valuation of your Bitcoin you've ever gotten. So now you're spending your precious cycles arguing for a change from 6MHz to 8MHz, in the very definition of a Sisyphean process.

The end result is that instead of exporting hardware around the world, we export jobs as companies set up overseas in the VERY INDUSTRIES WE CONSIDER MOST SENSITIVE AND IMPORTANT. This is a hugely real issue that should be part of the ROI discussion around any of these regulations but never is for some reason.

This could maybe be fixed by implementing a mandatory, nonrenewable five-year sunset on all Wassenaar regulations. But to do this, the US (and the international community) basically needs to hard-fork the whole idea of technological export control, which is something we should do for many reasons. A more realistic option may be to pull completely out of the WA and re-implement the parts that make sense as bilateral agreements.

Another issue is that the actual technical-understanding cycles spent on implementing new regulations are lower than they should be, for a process that is a one-way diode. That is, you need people full time on every one of the new and old issues, but by definition the technical experts on these issues work on them part time. Basically you want people doing a TDY looking at all the regulations from a technical perspective, and we don't have that as a community. We could solve that by giving grants to various companies to fund it, or by hiring for it within the Commerce Department (and the various international equivalents). Think the DARPA PM program, but for export control experts.

But that's hugely expensive, and as pointed out, it's questionable if any of this makes any more sense to invest in than a virtual blockchain cat!




Thursday, February 22, 2018

Spookware

Today I'm listening to Brandon Valeriano, Donald Bren Chair of Armed Politics, Marine Corps University. You can do that yourself here:


He makes some good points and raises some good questions, in particular that our US-focused understanding may be making it hard to see the real shape of the effects of cyber power projection, and likewise that as a community we focus too much on Megafauna operations such as Stuxnet.

In particular, though, it's funny to hear him talk about how limited the effects of cyber operations are while the entire front page of the NYT today, and every day, is about a successful Russian cyber operation.



This, in a nutshell, is where I thought Brandon's previous book ran into trouble, and it's evident in the current talk. Policy and law communities like to split the spookware set of disciplines into very clear buckets. This is espionage, this is sabotage, etc. But this is like trying to say what's Karate and what's kickboxing and what's Kung Fu while you're in the UFC cage and someone is currently punching you in the face!

When we forward deploy NSA people into war zones and provide total coverage across an entire populace's telecommunications for our Marine units, is that cyber power projection? In a way, the final part where you kick down the door and shoot someone is the boring part, right?

Again, he says with China that their policy of stealing technology (and M&A deals) through cyber "does not work" and that they've given up. Which frankly is exactly what they wanted us to think.

Maybe a more accurate description was that it DID work and they are now pivoting to protecting their lead? They have more AI research happening than we do now. Basic science research now happens in Shanghai and Beijing as the US draws back on funding it. Their Quantum detectors are amazing and revolutionary, if hard to understand. Why wouldn't they want a new norm against economic cyber espionage after fifteen years of running the table?

Also, let me point out that Brandon's usual comments about "cyber weapons being one-use tools" are just weird. Exploits can be reused, and are rarely caught, though you do run that risk; implants get caught eventually, but are often re-tooled and re-deployed. And methodologies, listening points, and all the other things that go into cyber power projection are not "one shot". I'm honestly not sure where he gets this, but he does keep saying it! Maybe after he reads this post he will write up why he thinks that. I know it's part of his logic regarding nation-states' desire to hold back on escalatory cyber attacks, but it's not strictly true in any important way. I feel like someone from TAO told him this at a dinner party over drinks and he really hung onto it.

Ok, so as you finish the talk: I know he's not going to be able to support his larger thesis in 20 minutes, but it's so hard to hear someone say that cyber power projection is NOT a revolution in nation-state conflict and that it cannot cause disruptive effects at mass scale. Also, it's clear that everyone is now focused on influence operations enabled by cyber, and is going to be completely surprised at cyber's next metamorphosis. :)










Tuesday, February 20, 2018

Meta changes in endpoint defense: Airframes vs Drones

As the video I stole this image from says, "The more autonomy and intelligence you put on these platforms, the more useful they become!" You know what's a lot more autonomous than an F-35? A drone! :)

One clear shift in defense occurred when CrowdStrike and Mandiant and Endgame (and now Microsoft, etc.) built platforms that let companies do detailed introspection of their computing fabric. For the first time ever, serious attackers were getting caught in the act.

This technology, despite the buzzword hype, is quite simple: a kernel inspector, streaming metadata to an aggregation system, optionally a network sniffer doing the same, and algorithms that run on the data to generate actionable results. The expensive part here is the kernel inspector, which is stupidly hard to make reliable, portable, and secure!
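That architecture is simple enough to caricature in a few lines of Python (a toy sketch of the pattern, not any vendor's actual product; the event shapes and the detection rule are invented for illustration): an event source streams endpoint metadata to an aggregator, and detection logic runs over the aggregated stream.

```python
from collections import defaultdict

# Toy endpoint telemetry, shaped like what a kernel inspector might
# stream up to the aggregation tier: (host, event_type, detail).
EVENTS = [
    ("host-1", "process_start", "winword.exe"),
    ("host-1", "process_start", "powershell.exe"),
    ("host-1", "network_connect", "203.0.113.7:443"),
    ("host-2", "process_start", "explorer.exe"),
]

def aggregate(events):
    """Aggregation layer: group the raw event stream by host."""
    by_host = defaultdict(list)
    for host, etype, detail in events:
        by_host[host].append((etype, detail))
    return by_host

def detect(by_host):
    """Detection layer: flag hosts where an Office process is followed by
    a script interpreter plus an outbound connection (a classic
    macro-dropper pattern, used here purely as an example rule)."""
    alerts = []
    for host, evts in by_host.items():
        procs = [d for t, d in evts if t == "process_start"]
        if ("winword.exe" in procs and "powershell.exe" in procs
                and any(t == "network_connect" for t, _ in evts)):
            alerts.append(host)
    return alerts
```

The hard part, as noted above, is not this logic; it's the kernel-level collector that feeds it without bluescreening anyone.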

This recent MITRE/CrowdStrike piece demonstrates clearly the effectiveness of this approach against a modeled nation-state adversary who has not themselves tested their implant against CrowdStrike Falcon. 

These mega-implants/"endpoint protection agents" are essentially as expensive to build as airframes. In addition, every vendor produces multiple airframes which escalate in complexity when they detect anything wrong on an endpoint. But what you don't see right now is a lot of ingestion of open-source-style telemetry for your pre-escalation defenses. 

For example, this blogpost details using ELK+OSQUERY+KOLIDE to build an off-the-shelf, scalable, and completely free suite that rivals the instrumentation abilities of some of the more complex market products for "threat hunting". This is essentially the drone analogy for the endpoint protection market. In many cases, these sorts of toolchains completely avoid the need for a kernel-level inspector, which avoids every bluescreen being "your fault". In many cases, operating system vendors have upgraded the built-in capabilities of their platforms so that a kernel inspector isn't necessary, and in other cases you just go without the deeper levels of data.
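For a flavor of what this drone-style hunting looks like, here is a minimal sketch. The SQL is a well-known osquery hunting query against its `processes` table (which really does have an `on_disk` column); the simulated rows and the Python filter around it are illustrative stand-ins for the osquery-to-ELK pipeline.

```python
# The kind of query a Kolide/ELK pipeline might schedule via osquery --
# processes whose executable has been deleted from disk, a classic hunt
# for memory-resident implants:
#
#   SELECT name, path, pid FROM processes WHERE on_disk = 0;
#
# Below we simulate the JSON rows osquery would return and apply the
# same filter in Python, so the pipeline logic is easy to see.

SIMULATED_ROWS = [
    {"name": "sshd", "path": "/usr/sbin/sshd", "pid": 812, "on_disk": 1},
    {"name": "kworker", "path": "", "pid": 4511, "on_disk": 0},
]

def hunt_memory_only(rows):
    """Return processes whose backing binary is no longer on disk."""
    return [r for r in rows if r["on_disk"] == 0]
```

No kernel driver required: the OS already knows the answer, and all the free tooling does is ship it somewhere you can query it at scale.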

Just as drones changed air war forever, I expect these sorts of widely deployed defensive toolkits to change cyberwar, if for no other reason than we can assume they will penetrate the mid- and low-end markets, as opposed to just the high end that the major endpoint protection players cover. Also like drones, these sorts of things didn't even exist a couple of years ago, and now they are fairly fully featured.

Of course, DARPA has a role to play here, as it did with the stealth technology behind the F-35. Much as the best part of Cyber Grand Challenge is less the attack tools and more the corpus of targets, we really really really need a massive "corpus" of behavioral/network/etc data from a real company, sanitized such that different detection algorithms can be trained and tested. 





Thursday, February 15, 2018

Indicators of Nation-State Compromises

Which team composition counters which is an extremely complex question, and it maps directly onto the level of complexity we see around cyber war decision making.

So while I enjoy talking about Overwatch, I'm not doing so on this blog for the fun of it. There is a fundamental difference worth pointing out between our "game theory of deterrence" and our evolving understanding of cyber war which is best illustrated by the complexity of modern gaming. I'm not going to point fingers at any particular paper, but most papers on the game theory of cyber war use ENTIRELY TOO SIMPLE game scenarios. Maybe political science departments need to play more Overwatch?

Here are three problems I have run into in the policy space:

  1. I found an implant on my nuclear energy plant and I'm not sure if it's just in the wrong place, or deliberately targeting this plant for espionage, or targeting this plant as a precursor for turning off the power to Miami-Dade.
  2. I found an implant on the Iranian president's network, which I also have an implant on, and I want to know if I should "remove it" or back off because I'm already getting all the take from this network via partner programs of some sort.
  3. I found an implant on an ISIS machine, and I need to know whether it is about to be used to do something destructive, because I should not install "next" to it for fear of getting detected when it goes off.

Instead of doing a program that is all about diplomats and lawyers meeting constantly to try to work out large global norms around these issues, which invariably will result in long (and completely useless) lists of "Places that should not be hacked" and "Effects your trojans may not cause!", I want to do something that works!

Let's go into this with eyes wide open in that we have to assume the following:

  • We hack our allies and vice versa
  • Our allies hack systems we also want to hack
  • Someone could in theory reuse our own technology against an ally
  • Allies are not going to want to let us know exactly which machine they caught us on

Obviously the first take on solving these sorts of problems is going to be a hotline. You would have someone from another country's State Dept equivalent call up the US State Dept and say, "Hey, we found this thing...is this something you think will do serious damage if we uninstall it?"

This has problems in that the State Dept is probably not aware of our programs and may not know who to call to find out. Likewise, any solution in this space needs to work at wire speed and be maintainable "in code space" as opposed to "in law space".

So here is my suggestion. I want a server that responds to a specialized request containing a sort-of-YARA rule, plus some additional information, and that lets you know whether an implant or exploit is "known" to you as being in that particular network or network type. The server, obviously, is going to federate any questions it gets, so while the request may have come into the US State Dept, it may be getting answered by a NATO partner. You would want to rate-limit requests to avoid the obvious abuses of a system like this by defenders.
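A minimal sketch of that deconfliction service might look like the following. Everything here is hypothetical: the rule fingerprints, the federation list, and the rate limit are illustrative stand-ins, not a real protocol.

```python
import time
from collections import defaultdict

class DeconflictionServer:
    """Toy federated 'is this implant known to you?' service.

    known_hashes: rule fingerprints this agency recognizes as its own.
    partners: other DeconflictionServer instances to federate queries to.
    max_per_hour: rate limit, to blunt abuse by defenders fishing for attribution.
    """

    def __init__(self, known_hashes, partners=(), max_per_hour=10):
        self.known_hashes = set(known_hashes)
        self.partners = list(partners)
        self.max_per_hour = max_per_hour
        self._requests = defaultdict(list)  # requester -> timestamps

    def _rate_limited(self, requester):
        now = time.time()
        recent = [t for t in self._requests[requester] if now - t < 3600]
        self._requests[requester] = recent
        return len(recent) >= self.max_per_hour

    def query(self, requester, rule_hash):
        """Return 'known', 'unknown', or 'rate-limited'. Deliberately never
        says WHOSE implant it is -- only that someone recognizes it."""
        if self._rate_limited(requester):
            return "rate-limited"
        self._requests[requester].append(time.time())
        if rule_hash in self.known_hashes:
            return "known"
        # Federate to partners (e.g., NATO members) without attribution.
        for partner in self.partners:
            if partner.query("federated-peer", rule_hash) == "known":
                return "known"
        return "unknown"
```

The key design point is in the return values: the answer is a single bit ("someone claims this, leave it alone" versus "nobody claims this, do what you want"), which is exactly the compromise the offensive teams will hate.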

The offensive teams hate any idea of hints of attribution, but life is about compromises, ya know, pun intended. :)

Saturday, February 10, 2018

2-2-2

Overwatch games have six players on a team. It's common to ask for "2-2-2" at the beginning of a game, meaning you want your team to organize into two healers, two tanks, and two DPS. In hacking terms, what this means is that you need to invest in exploits, in implants, and in a sustain/exfiltration crew.

"Ready for...?"
That sounds obvious, I can hear you say in your head. Who would invest only in exploits? Who would have only implants? How far can you get with only a sustain crew? Lots of idiots, lemme tell you. Everyone thinks DPS is the fun part, so why would anyone play the other roles? It is the same in hacking.

The truth is that any team comp can be a very viable strategy, but unbalanced comps tend to be the result of immature CNE efforts. Balance and coordination are the sign of mature - and successful - programs. You may find advanced teams using primitive toolchains and simple strategies to great success because they've built a program with the proper team composition.

People (including me, on this blog!) like to measure adversary programs by the sophistication of their tools. But what real teams have is rapid turnaround on exploits, completely unique implants, and massively creative sustain while inside. They take every small advantage - every tiny mistake the defenders make - and turn it into domain admin.

Friday, February 9, 2018

SUSTAIN

So if you watch Overwatch League you know that there are three major classes of characters who show up at the pro-level:

  • Healers (Providing SUSTAIN)
  • Damage Dealers (Penetrating into space)
  • Tanks (Holding space)

Heroes never die.

In our game-theory model, tanks are synonyms for Implants. Damage dealers are clearly your initial operator team or automated toolset, which penetrates into adversary networks. Healers are your sustain. But what is sustain when it comes to CNE?

I have a very particular definition of sustain which is best illustrated by a story I heard recently from Law Enforcement about a hacker who got caught after ten years of having his implants on a regional bank. Every day, for ten years, he had logged in and maintained his presence on that network. Think of the dedication that requires.

But he's not alone. Right now, all over the world, hackers are waking up and visiting thousands of networks, making sure logs are being deleted, gathering new passwords that have changed, moving from host to host to avoid detection, and checking that no one is investigating their boxes. There's a giant list of things you have to do - reading the admins' mail to see when upgrade cycles are scheduled, and then planning how to stay installed through that kind of activity, is not easy!

But just as in Overwatch, this game is won or lost not by how great your DPS is, and sometimes not by the sophistication of your implants, but purely on sustain.

Wednesday, February 7, 2018

Changing the Meta: The Evolution of Anti-Virus

Extremely accurate graphical timeline of AV changes...there has been a LOT of innovation here, yet everyone's mental picture is still signature-based systems!

So when we talk about the changing Meta of cyber war, I believe that many people have somehow ignored the massive disruptions happening in the defensive "Anti-Virus" market.

Looking at AV from the offensive side, there are many things you now have to take into account: VirusTotal, cloud reputation systems burning your executables, cloud reputation systems burning your C2/dropper web sites, malware heuristics catching you, VM-detonation systems catching you, anti-rootkit systems messing with you, other implants running their own private analysis against you, etc.

In other words, it's a rough world out there for implants ever since about 2010, and only getting rougher.

But the biggest change, the one that altered the Meta forever, in my opinion, was the switch from signatures and heuristics to reputation-based systems. Being able to see and predict this and engineer around it drove attacker innovation for some time. This affected policy as well, because targets that normally would be of no value suddenly became hugely valuable for their reputational quality. What are the policy implications of stealing certificates from random Hong Kong-based software providers in order to hack random other people?

In fact, there were many attacker responses to this meta-shift, all of which were predictable:

  • Attacking of cloud AV providers (for example, the Israeli team on Kaspersky's network)
  • Coopting of cloud-AV providers (which is what DHS claims it is worried about re: Kaspersky)
  • Full-scripting-language implants (aka PowerShell implants, Chinese webshells)
  • Implants which run only as DLLs inside other programs (and hence don't need reputation against earlier systems, which did not check DLLs)
  • Watering hole attacks (for both exploitation and C2)
  • Large scale automated web attacks (for gathering C2 Listening Posts)
  • Probably more that I'll think of as soon as I post this. :)
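To see why a stolen code-signing certificate suddenly became so valuable under this meta, consider a caricature of a reputation verdict (the features and weights below are invented purely for illustration; real products are far more elaborate): prevalence, signer trust, and file age each move the score, so a freshly built but validly signed binary sails right through.

```python
def reputation_verdict(prevalence, signed_by_trusted_vendor, days_old):
    """Toy reputation score. Features and weights are invented for
    illustration, not taken from any real AV product."""
    score = 0
    if prevalence > 1000:          # seen on many machines worldwide
        score += 2
    if signed_by_trusted_vendor:   # this is why stolen certs became gold
        score += 3
    if days_old > 30:              # old files are (naively) less suspicious
        score += 1
    return "allow" if score >= 3 else "flag"
```

In this toy model, a brand-new, rare binary gets flagged on sight, unless it carries a trusted signature, in which case the signature alone clears the threshold. That single branch explains the Hong Kong certificate-theft economics described above.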

The next meta-change is going to be about automated response (aka, Apoptosis - see MS Video here), as the Super-Next-Gen systems are about to demonstrate. So my question is: Have we predicted the obvious attacker responses?