Tuesday, May 29, 2012

All the world is made of faith, and trust, and pixie dust


Title quote: J.M. Barrie, Peter Pan

We are engaged in a quest for the desktop Holy Grail (not this), founded on the observation that our computer technology doesn’t serve our humanity. Every problem that arises from new work patterns, the consumerization of IT, and the “public vs. private cloud” debate is the result of an impedance mismatch between human nature and the computer systems we build: we users are gullible and easily fooled, and the software we write is buggy. The combination is a toxic cocktail that betrays both us and those who trust us.
We need a profound change in the trustworthiness of our systems: they must be inherently secure, by design. If we could achieve this, they would shrug off attacks, protect enterprise assets, and guarantee user privacy and confidentiality.
Stripped down, the “trust issue” is this: We humans are adept (but, crucially, not perfect) at navigating many different social contexts and domains of trust. Examples: my wife and me, our family, our family and friends, work colleagues, and so on. These groups are messy and dynamic, with many intersections, and it is only because we instinctively adopt the principle of least privilege (or “need to know”) for information sharing that we have been able to develop a society that is robust to conflicting interests.
When we use computers we expect to be able to seamlessly move between different trust domains (browse the web, edit a work document, send some tweets…), just as easily as when we walk down the corridor at work (updating the CEO on a business deal, then chatting to the friendly mail person).
But the operating systems we use today do not offer a granular, semantically nuanced, dynamic representation of trust. Most were invented before the Internet, before anyone imagined they would be continually attacked by persistent malware. All rely on various forms of software isolation (e.g., processes, JVMs, sandboxing) to separate applications, OS services, and data based on the idea of least privilege (a small sketch of what that looks like in practice follows this list), but they all fail to deal with two sure-fire ways for an attacker to actually escalate his privilege:
  • Some user will unwittingly raise the privilege of the attacker, perhaps by opening a poisoned email attachment “from a trusted colleague”.
  • Today’s OSes offer a vast attack surface (about 50 MLOC for Windows, and about 10 MLOC for Android). Complex specifications, buggy coding, and incomplete testing mean they ship with about 1 serious bug per KLOC – on the order of tens of thousands of latent vulnerabilities in Windows alone, and any one of them is all the attacker needs to further escalate his execution privileges.
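To make “software isolation based on least privilege” concrete, here is a minimal, hypothetical C sketch (not part of the original argument) of a Unix service permanently dropping root before it handles untrusted input. The numeric account IDs are placeholders; a real service would look up a dedicated account, and would combine this with sandboxing rather than rely on it alone.

    /* A sketch of process-level least privilege: drop root permanently
     * before touching untrusted input, so that a later compromise runs
     * with only the rights of an unprivileged account. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void)
    {
        /* Placeholder identity (65534 is often "nobody"); a real service
         * would look up its own dedicated, unprivileged account. */
        uid_t unprivileged_uid = 65534;
        gid_t unprivileged_gid = 65534;

        /* Drop the group first, then the user: setgid() needs the root
         * privilege that setuid() is about to give up. */
        if (setgid(unprivileged_gid) != 0 || setuid(unprivileged_uid) != 0) {
            perror("failed to drop privileges");
            exit(EXIT_FAILURE);
        }

        /* From here on, even if an attacker hijacks this process, the
         * kernel confines it to the unprivileged account's rights. */
        printf("now running as uid=%d, gid=%d\n", (int)getuid(), (int)getgid());
        return 0;
    }

The point of the sketch is not the particular system calls, but the architectural idea: privilege should be shed as early and as irrevocably as possible, and held only where it is strictly needed.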
Two architectural flaws common to every OS ensure that a determined attacker will gain access to everything, because the highest privilege (root) is only a zero-day away:
  • There is no unassailable root of trust that the system will defend to the last,
  • There is no granular isolation construct that guarantees that the principle of least privilege will be consistently applied to applications and data, no matter how severely compromised the system may be.
I’m good at navigating trust boundaries, but not perfect. I will make a mistake at some point. If an attacker masquerading as a trusted “Friend” can trick me into executing his code in good faith, thereby elevating his trust, he can easily (with a little pixie dust) gain access – to everything. My browser visiting Facebook must never be able to run with the same privileges as an enterprise ERP app accessing critical data, and neither should ever have access to all my files.
Interestingly, vastly more secure systems, such as the CAP computer (which stood in the hall at the Computer Lab during my PhD) and Multics, were built but never made it to the mainstream. Why? I think it comes down to the complexity of managing inherently dynamic and messy trust relationships. Today’s user is faced with a barrage of decisions about trust (any email could harbor an attack), but when the probability of an attack is low, systems that try to protect the user typically fail – again because they fail to accommodate our humanity:
  • They generate a stream of false positives, which trains the user to ignore them (e.g., Windows UAC), or
  • They impose such a significant burden on the user experience (e.g., repeated re-authentication) that the system is of no value to the user.
A trustworthy system must empower and delight users, who will nonetheless make mistakes while acting in good faith and let the bad guys in. At its heart, therefore, it requires an unassailable root of trust, and a dash of magic pixie dust to guarantee granular isolation according to the principle of least privilege.
