Black Hat is a global information security event series that provides attendees with the latest in information security research, development, and trends in a vendor-neutral environment. Black Hat USA 2014 was held from 2 to 7 August 2014 in Las Vegas, where the keynote address was delivered by Dan Geer, a respected cyber-security analyst and risk management specialist who is currently the Chief Information Security Officer for In-Q-Tel, a not-for-profit venture capital firm that invests in technology to support the Central Intelligence Agency. Geer, in his keynote titled ‘Cybersecurity as Realpolitik’, made a few very interesting policy proposals on a host of pressing issues in cyber-security. Below are some excerpts from his speech:
[The complete transcript is available at: http://geer.tinho.net/geer.blackhat.6viii14.txt]
Cyber-security is now a riveting concern. When we speak about cyber-security policy, we are no longer engaging in some sort of parlor game. Once a topic area like cyber-security becomes interlaced with nearly every aspect of life for nearly everybody, the outcome differential between good policies and bad policies broadens, and the ease of finding answers falls.
The central dynamic internal to government is, and always has been, that the only way for either the Executive or the Legislature to control the many sub-units of government is by way of how much money they can hand out. Suppose, however, that surveillance becomes too cheap to limit through budgetary processes. Does that lessen the power of the Legislature more, or the power of the Executive more? I think that ever-cheaper surveillance substantially changes the balance of power in favor of the Executive and away from the Legislature. Is the ever-wider deployment of sensors in the name of cyber-security actually contributing to our safety? Or is it destroying our safety in order to save it?
There are parallels between cyber-security and the intelligence functions insofar as predicting the future has a strong role to play in preparing your defenses for probable attacks. The hardest part of crafting good attack tools is testing them before deployment. This, too, may grow in importance if the rigor of testing causes attackers to use some portion of the Internet at large as their test platform rather than whatever rig they can afford to set up in their own shop. If that is the case, then full-scale traffic logs become an indispensable intelligence tool. However, the fact that there is a lot of traffic that we don’t have a handle on continues to remind us that the end-to-end design of the Internet was not some failure of design intellect but a brilliant avoidance of having to pick between the pitiful toy a completely safe Internet would have to be and an Internet that was the ultimate tool of State control. In nothing else is it more apt to say that our choices are Freedom, Security, Convenience – Choose Two.
We now turn to some policy proposals on a suite of pressing current topics.
- Mandatory reporting
The force of law must require reporting of cyber-security failures that are above some severity threshold that we have yet to negotiate. Below that threshold, a data collection consortium could illuminate the character and magnitude of cyber attacks against the private sector, using the model of voluntary reporting of near-miss incidents.
- Net neutrality
ISPs can charge whatever they like based on the contents of what they are carrying, but they are responsible for that content if it is hurtful; inspecting brings with it a responsibility for what inspectors learn.
In the alternative, ISPs can enjoy common carrier protections at all times, but they can neither inspect nor act on the contents of what they are carrying and can only charge for carriage itself.
- Source code liability
For better or poorer, the only two products not covered by product liability today are religion and software, and software should not escape for much longer.
If developers deliver their software with complete and buildable source code and a license that allows disabling any functionality or code the licensee decides, their liability must be limited to a refund. In any other case, developers are liable for whatever damage their software causes when it is used normally.
- Fallbacks and resiliency
Resiliency is an area where no one policy can be sufficient. Embedded systems that have no remote management interface, and are thus out of reach, must be treated as a life form: because the purpose of life is to end, such a system must be designed so that it is certain to die no later than some fixed time. Conversely, an embedded system with a remote management interface must be sufficiently self-protecting that it is capable of refusing a command. Inevitable death and purposive resistance are two aspects of the human condition we need to replicate, rather than imagining that overcoming them would improve the future.
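The "designed to die" idea can be made concrete with a small sketch. The C fragment below is an illustration only, not code from the talk: the EOL_EPOCH constant is a hypothetical build-time value, and the standard time() call stands in for whatever tamper-resistant clock or monotonic counter a real device would actually consult.

```c
/*
 * Illustrative sketch only -- not from Geer's talk. One way an embedded
 * device with no remote management interface could enforce a fixed
 * end-of-life: refuse to operate once a build-time expiry date has passed.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical end-of-life baked in at build time: 2030-01-01 00:00:00 UTC. */
#define EOL_EPOCH ((time_t)1893456000)

static bool past_end_of_life(void)
{
    /* A real device would read a tamper-resistant clock, not the system time. */
    return time(NULL) >= EOL_EPOCH;
}

int main(void)
{
    if (past_end_of_life()) {
        /* The device "dies": it refuses to keep running unpatchable code. */
        fputs("end of life reached; refusing to start\n", stderr);
        return 1;
    }
    /* ... normal device operation would continue here ... */
    return 0;
}
```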
- Vulnerability finding
The (US) Government could openly corner the world vulnerability market, that is, buy them all and make them all public. Simply announce “Show us a competing bid, and we’ll give you 10x.” This strategy’s usefulness comes from two side effects:
- By overpaying, we enlarge the talent pool of vulnerability finders.
- By making public every single vulnerability the Government buys, we devalue them.
- Right to be forgotten
A unitary, unfakeable digital identity is no bargain. I want to choose whether to misrepresent myself. I may rarely use that right, but it is nevertheless my right to do so.
In that regard, the EU’s “Right to be Forgotten” is both appropriate and advantageous, though it does not go far enough. A right to be forgotten is the only check on the tidal wave of observability that a ubiquitous sensor fabric is birthing now, observability that changes the very quality of what “in public” means.
- Abandonment
If Company X abandons a code base, then that code base must be open sourced, so that, with respect to security, some constellation of I, we, you, or they who are willing and able can provide security patches or workarounds as time and evil require.
- Convergence
Several Government institutions have already moved to exclusive digital delivery of services. This means that they have created a critical dependence on an Internet swarming with men in the middle, and have doubtless given up their own ability to fall back to what worked for a century before. It is that giving up of alternative means that really defines what convergence is and does.
However, with convergence comes the risk of critical infrastructure failure. Accommodating old methods and Internet rejectionists preserves alternate, less complex, more durable means, and therefore bounds dependence. Bounding dependence is the core of rational risk management.
As a matter of policy, everything that is officially categorized as a critical infrastructure must conclusively show how it can operate in the absence of the Internet. The 2008 financial crisis proved that we can build systems more complex than we can operate; the best policy counter so far has been the system of “stress tests” subsequently administered to the banks. We need other kinds of stress tests even more.