Ten Years After: Where Security Monitoring Still Falls Short

It really doesn’t seem that long ago. Ten years ago, the company I was with was running an information security program for a large healthcare corporation. As the program manager and de facto CISO, my job was to work with my team to protect our client’s data using every means at my disposal. Because our client understood the value and importance of risk-based information security, we had all of the right “security stuff” in place.

Information security policies were based on data risk assessments and compliance mandates. Security standards and procedures were well-documented and kept continuously up-to-date. I had a cadre of 12 deeply technical, “big-brained” security and risk analysts on my team. We had acquired and properly deployed the appropriate security technologies available at the time, including vulnerability scanners, anti-malware, host management agents, host- and network-based intrusion detection, Security Information and Event Management (SIEM), Network Behavior Analysis (NBA), and more. Unfortunately, in early 2003, the client was hit by the SQL Slammer worm, which ground business operations across the enterprise to a virtual halt.

The morning that the worm hit, my BlackBerry started going off and didn’t stop for hours. Alerts were being triggered on host management agents across the enterprise, indicating that traceroute times between subnets inside our own network were climbing rapidly. Some of our internal networks were simply unreachable. Our Microsoft SQL servers and DBA workstations were generating massive amounts of network traffic and crashing in a cascading fashion, like a tipping line of dominoes. Application support teams, business line managers, and eventually senior executives were demanding information in an increasing cacophony of confusion, concern and – soon after – anger. It was a dark day – one that every security professional fears, and one that leaves an indelible impression on the team.

Naturally, a legitimate question you probably have is: “If this was such a great information security program, then why was the organization so deeply affected by the SQL Slammer worm?” It’s a very fair question – and the answer, boiled down to its most basic components, is that my team was unable to answer the two basic questions that should be at the heart of responding to any worm outbreak or other unwanted attack:

  1. How did this thing get into my environment?
  2. How is it propagating to all of my systems?

Surprisingly, despite all of the security monitoring technology and the highly qualified, experienced personnel our team had, these questions could not easily be answered, because no single tool provided the visibility needed to identify the root cause of the problem. Instead, we only had individual pieces of information: our NBA tool and network consoles telling us that massive amounts of User Datagram Protocol (UDP) traffic on port 1434 were hitting seemingly random IP addresses; our host management consoles telling us that services and processes were crashing; and our SIEM telling us that our monitored applications were reporting database connection errors.

Each security tool had its own view into what was going on, but there was no way to pull all of this information together into a single view and correlate the data. Some tools provided a network-centric view of the incident, others a server OS-centric view, still others an application-centric view. Even SIEM, a relatively new technology at the time that was already being hailed by clever marketers, analysts and industry pundits (read: “people with little-to-no actual hands-on security experience”) as the “only security console you’ll ever need”, had no facility for capturing or analyzing non-event data such as state-based data (think Windows registry changes, or Access Control List (ACL) changes) or network traffic data (such as NetFlow records).
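To make that gap concrete, here is a minimal sketch of the kind of cross-source correlation none of our consoles offered at the time: normalize events from the different feeds into a common shape, group them by host, and flag any host that shows symptoms in more than one feed within a short window. Everything here (the feed names, field names, sample events and ten-minute window) is an illustrative assumption, not something those products actually exposed.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical, already-normalized event feeds. In reality, each tool had
    # its own console and format; the point is that nothing joined them.
    nba_events = [   # network behavior analysis: UDP/1434 traffic bursts
        {"time": datetime(2003, 1, 25, 5, 31), "host": "10.1.4.22", "detail": "udp/1434 fan-out"},
    ]
    host_events = [  # host management agents: crashing services
        {"time": datetime(2003, 1, 25, 5, 33), "host": "10.1.4.22", "detail": "sqlservr.exe crashed"},
    ]
    siem_events = [  # SIEM: application-level symptoms
        {"time": datetime(2003, 1, 25, 5, 35), "host": "10.1.4.22", "detail": "db connection errors"},
    ]

    WINDOW = timedelta(minutes=10)  # assumed correlation window

    def correlate(feeds):
        """Group events from all feeds by host, then flag hosts that show
        symptoms in more than one feed within the correlation window."""
        by_host = defaultdict(list)
        for name, feed in feeds.items():
            for event in feed:
                by_host[event["host"]].append((event["time"], name, event["detail"]))
        incidents = {}
        for host, events in by_host.items():
            events.sort()
            first_seen = events[0][0]
            sources = {name for when, name, _ in events if when - first_seen <= WINDOW}
            if len(sources) > 1:  # corroborated by more than one tool
                incidents[host] = events
        return incidents

    suspects = correlate({"nba": nba_events, "host": host_events, "siem": siem_events})
    for host, events in suspects.items():
        print(host, "->", [f"{name}: {detail}" for _, name, detail in events])

Even a naive join like this points at the affected hosts in seconds; the hard part in 2003 was that the data from those tools never landed in one place in a comparable form.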

My team’s triage for this incident was fast, furious and – eventually – effective. We had carte blanche to grab whomever we needed from across the enterprise, and sequester them for as long as needed: we unilaterally took network operations center (NOC) personnel, DBAs, members of the server build team, developers, and of course members of my own team by the hand, and unceremoniously dragged them into a big conference room with their laptops and consoles in tow. Within an hour, we had covered the walls with overlapping pieces of E-size sticky paper, set up a Medusa-like network switch with port blocking enabled for everyone to share, and provisioned a half-dozen pizzas (two pepperoni, two veggie and two “meat mania”) from a very grateful neighborhood restaurant.

White-boarding out the information we knew from different tools and consoles, and then manually correlating the data from these different sources, we took four hours to answer the second question: “How is this thing propagating to all my systems?” – and a further two days to answer the first. Of course, all the while, the attack continued to propagate. But eventually – and slowly – networks were quarantined and brought back online to the network core, SQL servers were patched and rebooted, and critical business applications were once again available.
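In hindsight, the answer to the propagation question reduces to a pattern that flow-level data can surface: a single host spraying small UDP datagrams at port 1434 toward many distinct destinations. The sketch below is a hypothetical illustration of that heuristic, not the analysis we actually ran that day; the sample flow records and the threshold are invented for the example.

    from collections import defaultdict

    # Hypothetical flow records: (source IP, destination IP, protocol, destination port).
    # Slammer's tell-tale behavior was an infected SQL Server spraying small UDP
    # datagrams at port 1434 toward pseudo-random addresses.
    flows = [
        ("10.1.4.22", "10.9.0.5",   "udp", 1434),
        ("10.1.4.22", "172.20.3.9", "udp", 1434),
        ("10.1.4.22", "192.0.2.77", "udp", 1434),
        ("10.2.8.14", "10.1.4.22",  "tcp", 1433),  # ordinary client-to-database traffic
    ]

    FANOUT_THRESHOLD = 2  # deliberately tiny for this toy data set

    def spreading_hosts(flow_records, threshold=FANOUT_THRESHOLD):
        """Return hosts whose UDP/1434 traffic fans out to an unusually large
        number of distinct destinations."""
        destinations = defaultdict(set)
        for src, dst, proto, port in flow_records:
            if proto == "udp" and port == 1434:
                destinations[src].add(dst)
        return {src: dsts for src, dsts in destinations.items() if len(dsts) >= threshold}

    print(spreading_hosts(flows))  # flags 10.1.4.22 as a likely spreader

Whether the port, the threshold, or the very notion of “fan-out” are the right signals is exactly the sort of judgment the tools of the day left to humans with whiteboards and sticky paper.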

Many of you will note that a patch for the vulnerability exploited by the SQL Slammer worm pre-dated the arrival of the worm itself. This is true; very few of our SQL Servers were patched against this known threat. However, SQL Slammer was only one of many worm outbreaks, many of which occurred in 2003 and 2004 – and for many of these, including MS-Blaster, Sobig, MyDoom, NetSky, Sasser and others, patching alone could not have mitigated their spread: some struck before patches could be deployed across an enterprise, while the mass-mailing worms exploited users rather than an unpatched flaw. And of course, Microsoft has not been the only direct target of worms; L10n (Linux), Samy (XSS), Caribe (SymbianOS) and Santy (phpBB) were among the worms that emerged in the early-to-mid 2000s, proving that no platform was completely immune.

Even today, worms affect popular technologies, with well-known attacks such as Koobface (Facebook) and Kenzero (P2P networks) wreaking havoc; and of course, as we see practically every week, distributed denial of service (DDoS) attacks, which produce much the same end result as a worm outbreak even without self-propagating code, continue unabated. The pain we went through with SQL Slammer is similar to what many other organizations have experienced over the past 10 years, often in the absence of patches, anti-malware “signatures”, or other mitigating tools.

Fast-forward 10 years, and the sad fact is, threat detection is in much the same state. Yes, it’s true that security products are much more advanced than they were 10 years ago. But at their heart, they’re still using the same discovery and analysis methods, and looking for the same types of patterns. They haven’t kept up with modern threats: not just worms, but Advanced Persistent Threats (APTs) and other signature-less threats, low-and-slow intrusions, and fraud and other malicious insider activity have all become the new hallmarks of the security threat landscape.

Unfortunately, yesteryear’s technology isn’t very helpful in discovering today’s threats. Here, 10 years after the first generation of SIEM, Intrusion Detection System (IDS), stateful firewalls, and other technologies, it’s clear that we’ve walked very far… but progressed very little.

In the next article in this two-part series, we’ll talk about what security professionals can do to start fighting back against modern threats… and will provide security technology vendors with recommendations to close the gap between what they can do, and what is needed by their customers.

John Linkous is a Security Research Fellow at eIQnetworks, Inc.