Hardly a week goes by without some new internet security snafu being reported. And with web usage exploding, expect to hear about a lot more. According to a new analysis from Forrester Research, the number of Internet users is forecast to grow 45% globally over the next four years, reaching 2.2 billion by 2013. More people online, more data to hack — it’s a cybercriminal’s paradise.
Many people don’t yet fully understand the magnitude of the threat — to individuals, their families and the companies they work for, warns Andrea M. Matwyshyn, professor of legal studies and business ethics at Wharton. A frequent public commentator on the topic, Matwyshyn is the editor of a forthcoming book titled Harboring Data: Information Security, Law and the Corporation.
In an interview with Knowledge@Wharton, Matwyshyn is joined by two of the book’s contributors, Diana Slaughter-Defoe, professor of urban education at the University of Pennsylvania, and Cem Paya, a data security expert at Google. They discuss the major risk management gaps that are leaving valuable data assets unprotected not only in the office, but also at home, and they share a number of measures that everyone — from parents to CEOs — can take to avoid Internet security disasters.
An edited transcript of the conversation follows.
Knowledge@Wharton: Your forthcoming book says that otherwise sophisticated business entities regularly fail to secure key information assets and that many companies are struggling with incorporating information security practices into their operations. Why is that the case?
Andrea M. Matwyshyn: It’s not apparent to me exactly why this is. But there seems to be a process-based failure under way. It’s in companies’ interests, internally and externally, to secure their information assets. Internally, when a company experiences a data breach, it is potentially compromising trade-secret protection on key intangible assets. Externally, it is going to get bad publicity and trust will diminish among customers, business partners and even its own employees. So securing information assets is a win/win.
[Our speculation] about what may be driving the failure to secure assets [is] partially based on historical … facts. Information security [has been] generally viewed as the province of IT departments, and at one point that may have made sense. But at this point, IT security needs to have a process approach, [coming] from the top layers of a company and a culture of security [should be] filtered through the company’s lower layers.
Security breaches can happen not only in a company’s servers, but also as a result of an employee inserting a CD, [as in] the case of the Sony rootkit problem that arose a few years ago [when its CDs automatically downloaded digital rights management tools on to computers]. [Similarly,] an employee can insert a CD into a PC at work to listen to some music, and the vulnerability that arises because of, for example, some digital rights management software on that CD can lead to an employer’s network being compromised. Employee education and [a] top-down [approach that makes] securing information assets an organizational priority are essential, and that is something that hasn’t necessarily permeated corporate culture.
Knowledge@Wharton: It sounds like a company pays a steep price when it fails to do all the things that you suggest. Could you give any examples of companies that have faced problems as a result of not having secured their information assets?
Matwyshyn: The recent example that comes to mind is The TJX Companies. As a retailer, TJX had an extensive database of consumer information. [In 2005] a hacker sitting in a car in the parking lot of a Minnesota store, using relatively primitive tools, accessed its network, compromised it and stole millions of records, subsequently forcing banks to reissue TJX customers’ credit cards. There may be incidents of identity theft associated with that activity as well. TJX paid a high price in the press, and the banks filed a class-action lawsuit against it.
The costs imposed on other entities because of security breaches [at a company] are starting to result in court cases, as entities that are forced to reissue cards and absorb the costs [are] finding it unacceptable to pay the price for other companies’ security practices.
Part of this stems from the nature of information assets. When a company possesses sensitive information, each subsequent sharing of that information creates another dependency, another point of risk. A compromise anywhere in the chain of possession … is the equivalent of a compromise along every point. So the banks in the TJX case were not pleased, the customers who had data compromised were not pleased and TJX had regulatory action [launched] against it because of that breach.
Knowledge@Wharton: Are there any causes at the macro or social level that have led to information security failures?
Matwyshyn: There are some technological causes, structural causes and legal deficiencies that exacerbate the problem. Information security has become more prominent in part because broadband access is so prevalent. People are using the Internet more, which is a good thing. But such information sharing is leading to additional points of vulnerability. Twenty years ago, there weren’t databases full of such rich consumer information as we have today. The ease of sharing information through the Internet generates targets for information criminals. At this point, the identity-theft economy is on par with or surpassing the [illegal] drug economy.
So when you have a financial incentive driving criminals, dissuading them [from perpetrating a breach] is very difficult and they’re going to innovate to stay one step ahead of information-security experts….
[As for those of us who] think about the legal issues, we haven’t resolved the fundamental holes in our legal structures, which might stop some of this from arising. For example, with extradition treaties, we might expect that if we [in the U.S.] prosecuted an individual cyber criminal somewhere in an Eastern European country who hacks into a U.S. database, we would simply work with the other country to execute the extradition. Alas, it’s not that straightforward, in part because to get the extradition, the act that was committed must also be illegal in the other country. In many countries where cyber criminals live, the acts they’re engaging in aren’t illegal, and their governments are not going to extradite these individuals…. On top of that is the lack of a reciprocal regime for recognizing judgments in other countries, which predates the Internet…. We just never resolved the convention on jurisdiction and judgments that would allow us to have judgments in our courts efficiently enforced in other countries.
Now with the rise of international information crime, these problems are highlighted yet again and we need to take a step back legally and work through some of the gaps….
Knowledge@Wharton: What’s the solution? Do we need more international coordination among legal entities?
Matwyshyn: Absolutely. We need some harmonization in cyber crime [laws] and a consensus in the international community as to what is acceptable computer conduct. In an economic downturn in particular, this problem reaches a new level: given the ease of information crime and the lack of … job opportunities, it is expected to get even worse.
Knowledge@Wharton: The fascinating thing about your book is the examples of the techniques that cybercriminals use, such as phishing and zombies. Could you describe some of those techniques?
Matwyshyn: Phishing takes the form of an email arriving in an unsuspecting user’s inbox. The user sees an email from what appears to be a trusted service provider. The email contains a link. The individual follows the link and is asked to provide information, perhaps a login, a password or the last four digits of a Social Security number. The information is used by the criminal, sometimes in connection with other information the criminal has purchased online on the black market or even from a legitimate source, which may not have been careful in vetting [who is buying] the information….
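The mechanics Matwyshyn describes rest on a simple deception: the visible text of a link names a trusted provider while the underlying target points somewhere else. A minimal sketch of that check (the function name and example domains are hypothetical, not drawn from the interview; real mail filters also use public-suffix lists, lookalike-character detection and reputation data):

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Return True when a link's visible text claims one host but the
    actual target points somewhere else -- a classic phishing tell."""
    def host(s: str) -> str:
        if "://" not in s:
            s = "https://" + s          # bare domains would parse as paths otherwise
        return (urlparse(s).hostname or "").lower()

    shown, actual = host(display_text), host(href)
    if not shown or not actual:
        return False                    # nothing comparable to check
    # Mismatch unless the target is the shown host or a subdomain of it.
    return shown != actual and not actual.endswith("." + shown)

# The email displays a bank's domain, but the link leads elsewhere:
assert link_mismatch("www.mybank.com", "http://login.mybank.example.ru/")
assert not link_mismatch("www.mybank.com", "https://www.mybank.com/login")
```

The suffix comparison is deliberately crude; it merely illustrates why "hover before you click" is standard anti-phishing advice.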
The other possibility in a phishing attack is that by following an unsafe link, a person’s computer becomes part of a zombie “botnet,” meaning that someone remotely takes control of the machine … [which is then] used to attack targets, generate spam or engage in other types of [unwanted] activities. We’ve had instances of power grids being threatened by zombie botnets. And there’s speculation that zombie botnets are being used by some countries as a form of cyber war against countries they don’t [want] to prosper.
Knowledge@Wharton: What happened at the job website Monster.com?
Matwyshyn: There was a particular incident at Monster that I mentioned in the book, but since then it has had at least two more. Although it says it has been trying to improve its security, the bad guys are still getting in.
What happened [in the case mentioned in the book] was that some individuals posed as employers and, using Monster resources, mined information about job seekers, then sent them emails containing malicious code, which [the job seekers] downloaded, compromising their security….
One of the controversies arising [from subsequent failures] involves data breach notification legislation [requiring companies to tell customers if their information has been put at risk]. Such legislation now exists in [45 U.S. states], the District of Columbia, Puerto Rico [and the Virgin Islands], and a few more [states will likely follow] by the end of the year. There’s talk of harmonization, but we’re uncertain when that’s coming. The argument goes that Monster, while not necessarily violating the timeframes stated in the legislation, didn’t notify its customers as promptly as it could have. If you looked at the website of its information security service provider, you [found out] that it had an information security problem sooner than you did from the Monster website itself. That led to some criticism, as you might imagine, that only an elite group of people knew about the compromise [first], rather than the individuals who may have been most affected, the users of Monster.
After Monster’s latest breach, [there was] a posting on its website informing users about [it]. It was posted, I believe, on January 20, without much fanfare. Monster is still working through its security problems. That case [involved] individuals with legitimate credentials; where they got those credentials we’re not sure. It may have been through a different attack before their interactions with Monster. A series of compromises at other firms may have led to the attacks [on] the Monster database of consumers who had posted their resumes [online]. Of course, with unemployment rates skyrocketing, targets such as Monster will only become more attractive to information thieves. And considering the amount of information that an individual puts on his or her resume, a lot of very sensitive, personally identifiable information can let someone pose as you very efficiently.
Knowledge@Wharton: People tend to disclose all kinds of things about themselves in [social] networks like Facebook and LinkedIn as well. Does that affect information security?
Matwyshyn: Very much so. First, as you mentioned … individuals voluntarily disclose a significant amount of information. But if you ask them, they’ll say they’re very concerned about their privacy. When [such] contradictory behavior [is combined] with difficulty in using privacy settings on websites, [people] sometimes don’t realize how much information is readily available to the public.
There was an incident last year [in which] consumers’ purchases on other websites were linked to their … profiles on Facebook, because Facebook had a piece of code, [called] Beacon, that would post information about consumers’ purchases on other websites to their profiles. Although this was within the [usage] terms [that consumers] had agreed to when they signed up for Facebook, there was an … instinctive reaction of shock on the part of many users that [such] information was pushed [out]…. That was perceived by many users to be too invasive…. Facebook recognized that the Beacon plan was a little [too] aggressive for consumer tastes … and it consequently made the privacy settings easier to maneuver. But there is a bit of a contradiction between users’ behavior and users’ stated preferences on privacy.
A corollary concern for information security, as a result of social networks such as Facebook, is that platforms on those networks enable developers to create interesting, fun new applications for users to interact with. There’s really no information security vetting of those applications by the central platform provider, Facebook in this case. An application requests information on a user’s entire portfolio of friends, and then the application provider possesses data on all of those people. What the application provider is using that data for, and the extent of secure storage that [it] uses, are unknown [to users], and Facebook or another social networking site is not going to [publicize] it. It’s not in their interest to do so, because they’d rather not be associated with that relationship. They just want to provide the platform. But most users don’t realize or don’t analyze the extent of information sharing that happens through, for example, the applications….
Knowledge@Wharton: When it comes to financial data, do security breaches have a different root cause than other kinds of information?
Paya: [All] security breaches are ultimately caused by a failure in a process or implementation of a security policy. But the damage [from financial information breaches] does have a unique, unusual root cause [in] that financial information cannot at once both be distributed to thousands of entities and be so valuable that mere knowledge or access to it is enough to cause monetary losses. We can’t have it both ways.
It’s not so much that the breaches are surprising. It’s that when a breach occurs, the fact that damage control and containment are impossible is a function of how we … use financial information.
Knowledge@Wharton: So financial information is unique in the sense that it is both confidential and widely disseminated?
Paya: Exactly, that’s the paradox.
Knowledge@Wharton: How can a balance be maintained to allow online commerce to proceed? Clearly online commerce is growing, but we need to figure out a way to balance those two things.
Paya: Since we are not going to put the genie back in the bottle, the only option is to reduce the secrecy requirement and ask, “What happens if my financial information is no longer that secret? What if my credit card number is known by other people? Is that a situation we can deal with?” And surprisingly, for that particular type of information, the answer [to the latter] turns out to be, “Yes.” The credit card networks realized that they can absorb the cost of fraud entirely. They can still say to customers, “Continue to shop freely; you can disclose your credit card number to anybody you like. Continue typing in that number. If there’s any fraud, the system will absorb the losses and you don’t have to worry about it.” And they found that that risk management approach actually works: the profits made by the credit card networks more than outweigh the fraud losses they absorb.
Unfortunately that’s not the case for other things. Social Security numbers, which have become essentially financial information … because of their use in credit reporting, aren’t at that stage yet. But for credit cards, we have [achieved a balance]….
Knowledge@Wharton: The paradox that you talk about also applies to financial information generally. For example, a company about to merge with another … [will keep] information about that event confidential [during negotiations]…. But once the announcement is made it is, of course, expected to be widely and publicly disclosed. Are there any lessons from the offline world about how you manage this paradox of confidentiality versus the public nature of financial information that can be applicable to this space?
Paya: In the example you’ve mentioned, the shelf life of the secret is limited. If the merger talks are going on for three months, all you have to do is keep it secret for three months…. Best practice is to make sure that your secrets have short shelf lives and can be frequently renewed. That’s not something generally followed with consumer data. Credit cards have multi-year expiration periods and Social Security numbers are indefinite [since] you have one for life.
The lesson from the offline world is … acknowledging the fact that the longer a secret exists, the greater the probability of a breach of confidentiality. So try to limit that window of time. That’s a lesson that hasn’t quite carried over to the consumer financial data, because much of [it] has a very long shelf life.
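Paya’s "short shelf life" principle is the idea behind expiring credentials. A minimal sketch of one (the key, time-to-live and encoding scheme here are illustrative assumptions, not anything Paya describes): a signed token that stops working after a few minutes, so a stolen copy quickly becomes worthless, unlike a card number valid for years.

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-signing-key"  # hypothetical key; never leaves the server

def issue_token(account_id: str, ttl_seconds: int = 300) -> str:
    """Issue a credential that expires after ttl_seconds."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{account_id}|{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> bool:
    """Reject forged, tampered-with, or expired tokens."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # signature does not match
    expires = int(payload.decode().split("|")[1])
    return time.time() < expires          # an expired secret is worthless to a thief

token = issue_token("acct-1", ttl_seconds=300)
assert verify_token(token)
assert not verify_token(issue_token("acct-1", ttl_seconds=-1))  # already stale
```

The design choice is exactly the one Paya names: even if this token leaks, the window in which it can be abused is bounded by its shelf life.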
Knowledge@Wharton: What is the legacy design problem you refer to in the book and how does it affect financial information security?
Paya: The legacy design problem … is the assumption built into many systems and processes we have today that the way transactions will be carried out is by a disclosure of secrets. In other words, to buy something on the Web, I must disclose my credit card number to the merchant. To obtain credit, I must disclose my Social Security number…. To sign up for a cell phone service, I have to disclose my Social Security number.
[What] if we were to say, “Let’s stop doing that and come up with a better way for consumers to, for example, authorize payment or run background checks?” And then say, “Here’s this brand new, far more secure, better designed system.” We’re still stuck with all the processes … that only understand credit card or Social Security numbers. Even if magically … we could deploy something better that gave consumers more control over their data and wouldn’t require them to disclose secrets as part of everyday transactions, there would be a huge and slow migration effort to make a dent in the problem. We’re not starting from scratch, but from the assumption that it’s okay to disclose secrets and that’s how many transactions work.
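One concrete shape the "better way" Paya gestures at could take is payment tokenization: the merchant never sees or stores the card number, only an opaque reference to it. The class and API below are an invented sketch under that assumption, not any network’s real interface.

```python
import secrets

class TokenVault:
    """Sketch of payment tokenization: the merchant keeps only an opaque
    token, while the real card number lives solely inside the vault
    (operated by a payment processor in practice)."""
    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = card_number
        return token              # safe to keep in a merchant database

    def detokenize(self, token: str) -> str:
        # Only the vault operator can map a token back to a card number,
        # so a breach of the merchant's database exposes no card numbers.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token.startswith("tok_")
assert vault.detokenize(token) == "4111111111111111"
```

This also illustrates the migration problem Paya raises: every downstream system that expects a 16-digit card number has to be taught to accept tokens instead.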
Knowledge@Wharton: Could you discuss some of the biggest mistakes companies make while trying to protect the privacy of their financial information?
Paya: The biggest mistake … is not having a clear handle on where the information lives. The design of large systems calls for a lot of redundancy. Data is copied, duplicated, backed up, sometimes sent to different partners and data warehouses, or shipped off site in case some catastrophic event destroys your data center. So data has a tendency to replicate itself. And one of the big challenges is that companies lose track of where the information is. It’s very hard to point to a particular computer or a particular rack and say, “This is where all the credit cards live.” …. The problem is that the more spread out the data is, the more points of failure you have to worry about…. The first challenge is not having an inventory of what you’re collecting and, even if you know where you collect it, not knowing exactly where you put it.
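The inventory Paya says companies lack can be made concrete with a toy registry (the class, categories and location names below are illustrative, not from the interview): every time a sensitive dataset is replicated, the copy is recorded, so "where do all the credit cards live?" has an answer.

```python
from collections import defaultdict

class DataInventory:
    """Toy data-asset inventory: track every location to which a
    sensitive data category has been replicated."""
    def __init__(self) -> None:
        self._locations: defaultdict[str, set[str]] = defaultdict(set)

    def record_copy(self, category: str, location: str) -> None:
        self._locations[category].add(location)

    def where_is(self, category: str) -> set[str]:
        # Each entry is another point of failure that must be secured.
        return set(self._locations[category])

inv = DataInventory()
inv.record_copy("credit_cards", "db-primary")
inv.record_copy("credit_cards", "offsite-backup")
inv.record_copy("credit_cards", "partner-warehouse")
assert inv.where_is("credit_cards") == {"db-primary", "offsite-backup", "partner-warehouse"}
```

In real systems this bookkeeping lives in data-governance tooling rather than application code; the point is only that replication must be recorded at the moment it happens.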
Matwyshyn: [Cem’s] commentary is borne out by PricewaterhouseCoopers, which did a survey of chief information officers, chief security officers, high-ranking … decision makers…. One of the startling [findings] is that a large number, approximately 30%, of the respondents could not provide information about where all of their information assets were stored and this is self-reported. A significant number, similarly, could not identify what the major threats were that the company faced in terms of information security. And many of the individuals stated their organizations did not have a comprehensive information security policy.
There’s a broader lack of planning in many enterprises. In their defense, this field is relatively new. However, the downside of not securing information assets is so severe that it’s important that companies start to focus on process-based, top-down initiatives to incorporate information security at every level of their enterprise. Really, the neglect is reaching the point that … an argument could be made that the lack of planning prevalent in U.S. companies may give rise to claims of breach of fiduciary duty. That’s serious. We’ve reached a turning point. This is when it really needs to be addressed aggressively, in a process-based approach, throughout enterprises.
Slaughter-Defoe: A lot of the people [running companies] are parents, and if this is how they’re functioning at their workplace, you can imagine what they must not be doing at home.
Knowledge@Wharton: If the CEO and the CIO of a company were in this room with us right now and, having heard everything you said, wanted each of you to give one piece of advice on how they can do a better job of protecting their information assets, what would that advice be?
Matwyshyn: The first piece would be to set up a top-down process and a culture of security. Have every employee go through mandatory information security training regularly. Have every employee know what to do in the case of an information security breach. One of the key mistakes that many companies make, and I talk about this a little bit in my chapter, is that people from the outside [of a company] will report a security breach and employees simply won’t know what to do with the information. They won’t know who to contact internally to stop the bleeding. Each individual in an organization needs to recognize the importance of the team effort in keeping information secure. And the tone really needs to come from the top.
Slaughter-Defoe: This problem, based on what I’ve heard today and at other conferences, has reached a point where it needs to be brought to the attention of the nation’s Department of Homeland Security. They need to get this book. They need to look this over…. They need to think about this in terms of the future directions of the nation. There was one comment [today at the conference] from a gentleman about how his state … [is] at least ensuring that there is appropriate communication among people engaged in rescue operations. In a manner of speaking, if you project out the next 20 years or the next generation, that’s what we’re talking about here: coordinating resources at the state level to protect families and the places where people work, now that the genie is out of the bottle. I don’t think anybody, say, 20 or 30 years ago thought this was a serious issue that they would have to address. But it’s very much with us. And it puts us in a new era.
Matwyshyn, Andrea. Interview with Knowledge@Wharton. Knowledge@Wharton, August 2009. Accessed 20 August 2009. <http://knowledge.wharton.upenn.edu/article.cfm?articleid=2317>.