No More Secrets: Breaking Out of the Locked Door Mindset

This post is the first in a series I will be exchanging with Ohad Samet (ok, second; he’s a much quicker blogger than I am), one of my esteemed colleagues in PayPal Risk and the mastermind behind the Fraud Backstage blog. Read Ohad’s article here.

Despite best efforts to protect systems and assets using a defense-in-depth approach, many layers of controls are defeated simply by exploiting access granted to users. Thus the industry is trying to determine not only how we protect our platforms from external threats, but also how we keep user accounts from being attacked. With user credentials being the “keys” (haha) guarding valuable access to both user accounts and our platforms, a popular topic among the security-minded these days is alternatives to standard authentication methods. Typically, the discussion centers not on how an enterprise secures its own assets and users, but on arming consumers who come and go across ISPs, search sites, online banking, social networks…and are vulnerable to identity theft and privacy invasions wherever they roam.

How many information security professionals does it take to keep a secret?

While there are a number of alternatives out there, focusing on authentication as if it were a silver bullet misses the point. When we assume that keeping our users secure means protecting (only, or above all other things) the shared secret between us, we become over-reliant on simple access control (the fortress mentality), when as an industry we already know that coordinated layers of protection working together are a more effective model for managing risk. To clarify our exposure to this single point of failure, let’s consider:

1) How much exposed (public, or near-public) data is needed to carry out reserved (private) activities? In other words, how much genuinely private information does a masquerader need to know to approximate an identity?
– and –

2) How does our risk model change if we assume all credentials have been compromised?
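
To make question 2 concrete, here is a minimal sketch of a toy risk score. The signal names and weights are my own illustrative assumptions, not any real production model: the point is simply that if we assume every shared secret is already compromised, the credential check contributes nothing and the remaining signals have to carry the decision.

```python
# Toy risk score: a weighted sum of signals, purely illustrative.
# Signal names and weights are hypothetical, chosen only to show what
# happens when we assume the shared secret is already compromised.

SIGNALS = {
    "password_match": 0.50,      # the shared secret
    "device_recognized": 0.20,
    "behavior_consistent": 0.20,  # matches the account's usual patterns
    "network_reputation": 0.10,
}

def risk_score(observations, assume_credentials_compromised=False):
    """Return a risk score in [0, 1]; higher means riskier."""
    weights = dict(SIGNALS)
    if assume_credentials_compromised:
        # A correct password no longer tells us anything about identity,
        # so it contributes no assurance; renormalize the other signals.
        weights["password_match"] = 0.0
        total = sum(weights.values())
        weights = {k: w / total for k, w in weights.items()}
    assurance = sum(w for k, w in weights.items() if observations.get(k))
    return 1.0 - assurance

# Same observations, two different assumptions about the secret:
obs = {"password_match": True, "device_recognized": False,
       "behavior_consistent": False, "network_reputation": True}
print(risk_score(obs))                                       # looks low-risk
print(risk_score(obs, assume_credentials_compromised=True))  # suddenly risky
```

The particular numbers don’t matter; what matters is how much of the decision collapses when the shared secret is taken off the table.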

Shall We Play a Game…of Twenty Questions?
Really, all this nonsense began when we started teaching users to treat “items that identify us” as “items that authenticate us”. Two examples: SSNs and credit card numbers. The SSN, we know, has been used by employers, banks, credit reporting agencies…as well as for its original purpose, to identify participation in Social Security (this legislation being considered in Georgia may limit the use of SSN and DOB as *usernames* or *identifiers*, although it is silent on using SSN/DOB to verify/authenticate identity).

As far as credit cards go, two trends that played off each other explain where we are today. E-commerce grew based on the credit card networks’ model for mail order/telephone order services, which allowed cards to be accepted without exchanging the full data (held in the magnetic stripe or chip) with the issuer. Application security holes or bad configurations allowed attackers to get access to customer data, which was scary in and of itself, since with the Internet it felt like all kinds of data was just “out there” and easily abused. But wait for it…here comes trend #2: the growth of large, distributed acceptance actors (retailers, hotels, payment processors) created an even bigger attack surface when they began centralizing their customer data storage and management. If we just think for a moment, we could probably come up with a list of 10-20 card acceptors and assume that at least half of the households in the U.S. used a payment card at at least one of these businesses. How could an attacker resist such a motherlode? Taking data online brought them to the show; centralizing massive databases means they’re here to stay. Here’s a little chart (totally unscientific, by no means meant to be a full view) I cobbled together using data from press articles and the very interesting chronology of data breaches at privacyrights.org. (I might improve the graphic a bit later if folks find it useful.)

[Chart: No More Secrets]

At the same time we have another (third) trend emerging — every user account I’ve established on the internet has asked the same types of gating questions to “get to know me”. What is my background? What are my favorite things? What are my preferences? Who are my friends and contacts? What is my educational and work history? Signing up for an email account differs from signing up for a job recruiting site only in the depth of questioning on a few of these dimensions, but in the end, as a participant in these systems, I voluntarily profile myself to benefit both my network/friends/audience and the system owner/profiler. Call it voluntary pre-profiling: it is a lot of “incidental” information that can easily be correlated across sites. This is distressing on a number of levels, but we are probably a few years out from understanding the true severity of the exposure implied.

Authenticating the Unknown Identity

N-factor authentication is not the solution for trust-level issues. Layers upon layers of authentication do not make up for a shifting foundation. So now we have a situation where credentials that were authenticators have essentially degraded into second-level identity descriptors.

New definitions of identity are required: it is not what is asserted that can be depended upon in private interactions, but what is detected and confirmable. To a risk manager, an entity that a) has spent some time on the system, established a baseline identity, and is weakly authenticated but behaving consistently over time may be the same or lesser risk than a user that b) has an incomplete profile and behaves erratically, but is strongly authenticated. It is preferable to consider the user’s reputation and the context of an interaction rather than depending on basic authentication (password/phrase, security questions) and simple access control (once a user is authenticated, they get the full privileges of the account).
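
One way to picture this — and this is a hedged sketch with made-up signal names, weights, and thresholds, not a description of any real system — is a policy that grants privileges in tiers based on a blend of reputation, behavioral consistency, tenure, and authentication strength, instead of handing over the whole account after a single password check.

```python
# Sketch of tiered privileges driven by reputation and context rather than
# a single pass/fail authentication gate. All names, weights, and thresholds
# are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Session:
    auth_strength: float         # 0 = none, 1 = strong (e.g. hardware token)
    account_age_days: int
    behavior_consistency: float  # 0..1, similarity to the account's history
    reputation: float            # 0..1, standing in the network

def trust_level(s: Session) -> float:
    """Blend signals into a single 0..1 trust level (weights are arbitrary)."""
    tenure = min(s.account_age_days / 365.0, 1.0)
    return (0.25 * s.auth_strength +
            0.25 * tenure +
            0.30 * s.behavior_consistency +
            0.20 * s.reputation)

def allowed_actions(s: Session) -> list[str]:
    """Grant privileges in tiers as trust rises, not all-or-nothing."""
    t = trust_level(s)
    actions = ["view_account"]
    if t >= 0.4:
        actions.append("low_value_payment")
    if t >= 0.7:
        actions.append("high_value_payment")
    if t >= 0.85:
        actions.append("change_payout_details")
    return actions

# (a) long-tenured, weakly authenticated but consistent user
print(allowed_actions(Session(0.3, 900, 0.9, 0.8)))
# (b) strongly authenticated newcomer with an erratic, thin profile
print(allowed_actions(Session(1.0, 5, 0.2, 0.1)))
```

Case (a) mirrors the weakly authenticated but consistent user above; case (b) the strongly authenticated but otherwise unknown one — and it is (b) that ends up with the narrower set of privileges.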

Here’s how this might play out, assuming one must build a trust model and manage an initial interaction between two unknown actors. Generally, initial interactions are either low-value, create a small threat surface, or require the involvement of a third party who will confirm the validity of one or both parties as a way to establish a relationship by proxy. This is a perfectly reasonable system, although it seems to break down after a few levels of separation between the third-party validator and the actors, as direct links are more reliable than indirect links. Assuming a successful first interaction, a subsequent series of repeated interactions establishes a relationship wherein the actors are able to manage additional risk exposure. The risk of identity default between two parties is diminished by the shared history as well as by the implied damage to reputation within the network — branches of bad apples get pruned away.
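
As a rough illustration (again a hypothetical sketch, not a model anyone actually runs), trust by proxy can be thought of as the validator’s trust attenuated at each degree of separation, while repeated successful interactions build direct trust of their own:

```python
# Toy model of relationship-by-proxy: trust inherited through an introducer
# decays with each additional level of separation, while shared history
# builds direct trust. The attenuation factor and growth curve are assumptions.

ATTENUATION = 0.5  # fraction of trust retained per degree of separation

def proxied_trust(validator_trust: float, degrees_of_separation: int) -> float:
    """Trust inherited through a chain of introductions."""
    return validator_trust * (ATTENUATION ** degrees_of_separation)

def direct_trust(successful_interactions: int, cap: float = 0.95) -> float:
    """Trust built from shared history; grows quickly, then saturates."""
    return cap * (1 - 0.8 ** successful_interactions)

print(proxied_trust(0.9, 1))  # vouched for directly by a trusted third party
print(proxied_trust(0.9, 3))  # three hops out: indirect links are weaker
print(direct_trust(1))        # after one successful interaction
print(direct_trust(10))       # after a shared history of ten
```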

Leaving Secrets Behind
However identities are established on a system and granted privileges — defense-in-depth means starting with the user. Passwords are not the problem. Even credit cards are not the problem. The problem is that service providers must rely on credentials that are stored everywhere, with limited incentives (economic incentives — moral incentives are a different issue) to invest in protecting credentials, and even more limited ability to recognize when credentials have been compromised at one of the other locations where they are stored or through which they have passed. Secrets don’t keep information secure, after all: how many information security professionals does it take to keep a secret?

Under Creative Commons License: Attribution
