
The Kevin Mitnicks of the world introduced the computer-illiterate mainstream to its first concept of a “hacker” in the sense of someone who gains unauthorized access to computer systems: a computer-savvy but socially awkward nerd with near-mythical coding skills. In fact, the original meaning of “hacker” was merely a person given to long programming binges in order to complete a project (“Hacker” 1). Today’s threats to computer security are no longer technophiles or savants; they are more salesman than programmer. Rather than pushing a keyboard’s buttons, they push people’s buttons. The favored method of breaching network security may have changed over the years, but the new one is as old as gullibility itself. A modern “hacker” conjures up images of a serpent in a garden, offering Eve an apple in exchange for her password. And therein lies the problem.

Even Kevin Mitnick relied more on social engineering than most people realize. One memento of his youthful days penetrating secure networks at firms like NEC, Novell, and Pacific Bell is a printed employee directory, replete with names, phone numbers, email addresses, and other information about the entire staff, that he reportedly obtained by dumpster diving. “Because people hate to say no even when they’re suspicious of a well-presented stranger[,] smooth talking has gotten many a hacker far closer to a target company’s network than days of brute-force technological attacks” (qtd. in Braue 1).

Not just businesses are at risk. Even important government agencies, though technologically locked down, fall prey to such social engineering. In a March 2005 report, Pamela J. Gardiner, Deputy Inspector General for Audit, wrote that government auditors posing as IT help desk personnel were able to convince 35 of 100 employees and managers to reveal their usernames and passwords over the phone, despite the IRS’s stringent password policies (1). She goes on to list five reasons for this breach in security, as offered by those 35 employees, which together form a concise but relatively complete overview of the dangers of social engineering.

First and most important, they were ignorant of social engineering (Gardiner 4). It is an underappreciated risk, lost in the spectre of Mitnick-like hackers. Part of this comes from news coverage that focuses on the sensational (savant hackers rather than lazy employees) and part from a lack of corporate and government education on the matter. The invisible bogeyman tapping on the firewall distracts them from the more commonsense responsibilities that lie with each employee rather than with the IT manager. Management, in this case, had also not done a good enough job communicating the IRS’s password policy to its employees.

Second, once agents posing as help desk personnel identified themselves as such, the employees at fault immediately assisted in any way possible. There is an implicit trust of technological assistance, which I think stems from the daunting challenge that information systems present to the nontechnical user. As soon as an employee “gives up” and lets a resident technical guru wax thaumaturgical in their machine, they are already part of the problem, at least if that guru is unknown or unvetted, as in this case. Understanding of technology systems will improve with each passing generation: many older employees working today predate the current market saturation of computerized information systems, while the younger generation has proven itself more tech-savvy (Kogan 2).

Third, the fake call was timed to coincide with the user’s problem with the network (Gardiner 4). Because the call “seemed” legitimate, employees saw no reason to distrust it, even without proper identification. The ability of social engineering hackers to garner information about employees and occurrences within the company means that they can time their assaults to seem innocuous.

Fourth, a lack of proper identification (the agents’ falsified names did not appear on caller ID or in any company directory) failed to stop some employees from divulging their information, even though they reported later that they were suspicious (Gardiner 4). This represents a clear ignorance of agency policy and of common sense. It also illustrates a hard truth of IT management: sometimes, users do things that are inexplicably nonsensical. This is incredibly hard to guard against, since the remedy lies at the restrictive end of the security spectrum: limiting users’ rights and access may be the only way to prevent such breaches from occurring.

Finally, some employees were hesitant until their managers instructed them to comply with the help desk personnel’s requests (Gardiner 4). Managers, even more so than employees, need to be aware of the agency’s or division’s security policies. The conclusion one may draw from this is similar to Gardiner’s: namely, that companies need to “[e]nhance security awareness efforts by periodically reminding managers and employees of social engineering risks and providing examples and scenarios that show how hackers can use social engineering tactics to gain access to IRS systems” (4).

Social engineering isn’t just about unauthorized access, like the divulging of a password. It is also responsible for virtually the whole spectrum of IT woes we face today, from viruses to spyware to spam to phishing, and, in a larger sense, other hacker tricks such as distributed denial-of-service attacks. Computer viruses, usually imagined in the abstract as invisible worms digging into the guts of the operating system, generally spread by clever ruses designed to tempt users into running them. No virus is so perfidious that it cannot be preemptively deleted or avoided, yet PC pandemics such as Sober are instigated by the careless clicking of a mouse. June’s Sober.Y variant came packaged either as a bogus “change of password” notification or as an archive ostensibly containing a photograph of a high school classmate (“Sober” 1). Users unaware of the risks opened the files, and IT managers across the globe felt the sting. Other users, either in search of entertainment or pornography, or just unwittingly (usually because the default permissions for their browser were set too high), infect their machines with spyware (Connolly 1).
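To make the ruse concrete, consider how such a lure might be caught mechanically. What follows is a minimal, hypothetical mail-filter sketch in Python that flags the double-extension trick (a file named like “classmate.jpg.exe”), a lure common among mass-mailing worms of Sober’s generation; the extension lists and filenames here are illustrative assumptions, not the behavior of any real mail gateway.

    # Hypothetical attachment filter: flag names that hide an executable
    # behind a harmless-looking extension, e.g. "classmate.jpg.exe".
    # Both extension sets are illustrative assumptions.
    EXECUTABLE_EXTENSIONS = {"exe", "scr", "pif", "bat", "com", "vbs"}
    DECOY_EXTENSIONS = {"jpg", "gif", "png", "txt", "doc", "pdf", "zip"}

    def looks_like_decoy(filename: str) -> bool:
        """Return True for double-extension names like 'photo.jpg.exe'."""
        parts = filename.lower().rsplit(".", 2)
        if len(parts) < 3:
            return False  # no double extension at all
        _, decoy, real = parts
        return real in EXECUTABLE_EXTENSIONS and decoy in DECOY_EXTENSIONS

    if __name__ == "__main__":
        for name in ["classmate.jpg.exe", "report.pdf", "password_change.txt.scr"]:
            print(name, "->", "suspicious" if looks_like_decoy(name) else "ok")

A filter like this only narrows the window in which a careless click can do damage, of course; it is no substitute for a wary user.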

Even phenomena we wouldn’t normally consider user-driven are in fact fueled by human error. Spam, the dreaded junk mail that now accounts for as much as 65% of email traffic, is commercially viable only because the response rate, even at a fraction of a percent, can turn teenagers mass-mailing from their parents’ basement into millionaires (McWilliams xi). In his new book, Spam Kings, Brian McWilliams asserts that because spam taps into the vulnerable psyche of the consumer, there will always be those waiting to bite the hook, and the ability to buy questionable items easily and anonymously is all the more incentive to click (298). Just as those 35 IRS employees took the easy way out of their dilemma by trusting a supposed authority, so the respondents to spam email encourage further abuses by clutching at the prospect of immediate wealth, a better libido, or a longer life.
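The arithmetic behind that claim is easy to sketch. The figures below are purely illustrative assumptions (none come from McWilliams); they only show why a response rate of a fraction of a percent can still be wildly profitable when the marginal cost of sending email is near zero.

    # Back-of-the-envelope spam economics, with assumed figures.
    messages_sent = 10_000_000   # one mass mailing
    response_rate = 0.0001       # 0.01%: roughly the rate cited for spam
    profit_per_sale = 25.00      # assumed margin on one dubious product
    cost_per_message = 0.00001   # near-zero marginal cost of email

    sales = messages_sent * response_rate      # 1,000 buyers
    revenue = sales * profit_per_sale          # $25,000
    cost = messages_sent * cost_per_message    # $100
    print(f"profit per mailing: ${revenue - cost:,.2f}")   # $24,900.00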

But greed isn’t the only impetus that drives unsafe network use: as we’ve seen, fear also proves an important motivator for foolishness. “Phishing,” the use of fraudulent emails requesting user information (in the same tradition as Gardiner’s test of the IRS), generally plays upon users’ inherent fear of identity theft or unauthorized credit card charges. Purportedly from this retailer or that bank or that payment service, phishing emails count on the urgency of the reaction they demand to overwhelm the recipient’s suspicion. The user, for instance, may not even remember that the fraudulent email from “PayPal” was sent to a different account than the one they use for the service; or they may not notice that the URL doesn’t look quite right. Like a car salesman, phishers use urgent tones to pressure their prey into acting early and without proper consideration. What’s more, phishing continues to grow in popularity, reaching over 15,000 reported cases per month by June 2005, because, like spam, it continues to be profitable (“Phishing” 3-4). More so than spam, in fact: the success rate of phishing has been reported as high as 5%, compared to about 0.01% for spam (Stephenson 1).
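Even the “doesn’t look quite right” test can be crudely automated. The sketch below, a simplified check in Python, compares a link’s host against the domain the message claims to come from; the lookalike URL is an invented example, and real phishing detection is of course far more involved (homoglyphs, open redirectors, and so on).

    # Simplified check: does a link's host actually belong to the claimed sender?
    from urllib.parse import urlparse

    def host_matches_sender(link: str, claimed_domain: str) -> bool:
        """True only if the link's host is the claimed domain or a subdomain."""
        host = urlparse(link).hostname or ""
        return host == claimed_domain or host.endswith("." + claimed_domain)

    print(host_matches_sender("https://www.paypal.com/login", "paypal.com"))
    # True: a genuine subdomain of paypal.com
    print(host_matches_sender("http://paypal.com.account-verify.example/", "paypal.com"))
    # False: "paypal.com" is merely a prefix of an unrelated host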

Terms like phishing and spam may be new to the lexicon, but they are little more than old-fashioned hucksterism in the trappings of technology. While the geometric progression of computing power gives ordinary people a heretofore unknown capacity for privacy and security, it also gives snake-oil salesmen unprecedented reach. For all the work being done on stateful packet inspection, strict firewalls, and encryption, the most insecure component of the workstation is, and always will be, the person sitting at the keyboard. “There is no technology in the world that can stop a social engineering attack,” says Kevin Mitnick (246). The only way to keep data and networks secure is a combination of good technology and properly implemented policies and education. Only when companies, and individuals, learn to say “no” will any secret be safe.

Works Cited

  • Braue, David. “Better Security Not About Tech: Mitnick.” ZDNet Australia. 4 Mar. 2005. CNet Networks. 25 Oct. 2005 <http://www.zdnet.com.au/news/security/0,2000061744,39183334,00.htm>.
  • Connolly, P.J. “A Problem Technology Can’t Fix.” InfoWorld. 5 Nov. 2004. IDG Network. 25 Oct. 2005 <http://www.infoworld.com/infoworld/article/04/11/05/45secadvise_1.html>.
  • Gardiner, Pamela J. “While Progress Has Been Made, Managers and Employees Are Still Susceptible to Social Engineering Techniques.” United States Department of the Treasury. 15 Mar. 2005. 25 Oct. 2005 <http://www.treas.gov/tigta/auditreports/2005reports/200520042fr.html>.
  • “Hacker.” Wikipedia. 25 Oct. 2005. 25 Oct. 2005 <http://en.wikipedia.org/wiki/Hacker>.
  • Kogan, Marcela. “Federal Managers Work to Bridge Workplace Generation Gap.” Government Executive 31 Aug. 2001. 25 Oct. 2005 <http://www.govexec.com/dailyfed/0801/083101mk1.htm>.
  • McWilliams, Brian. Spam Kings. Sebastopol, CA: O’Reilly, 2004.
  • Mitnick, Kevin D., and William L. Simon. The Art of Deception. Indianapolis: Wiley Publishing, 2002.
  • “Phishing.” Wikipedia. 26 Oct. 2005. 26 Oct. 2005 <http://en.wikipedia.org/wiki/Phishing>.
  • “Sober Returns Using Social Engineering Techniques.” Help Net Security. 10 June 2005. 25 Oct. 2005 <http://www.net-security.org/virus_news.php?id=582>.
  • Stephenson, Robert Louis B. “Plugging the ‘Phishing’ Hole: Legislation Versus Technology.” Duke Law & Technology Review 2005.0006 (2005). 26 Oct. 2005 <http://www.law.duke.edu/journals/dltr/articles/2005dltr0006.html>.