Ten Lessons For Incident Response

The author of this article requested to remain anonymous.

This article is from the eForensics Open: 2016 Compilation edition, which you can download for free if you have an account on our website.



It started with an email. There was an odd amount of traffic from our Primary Data Center to the IP of some nondescript website. I had a Blackberry at the time and felt safe checking out the site to see if it was obviously malicious. It was a Sunday, I was in the car with the family, and the website didn’t seem threatening at first blush. I replied that this could wait until Monday morning, when we could dive deeper and find out what was wrong.

Monday morning came and the various teams slowly sparked to life as new data was discovered. I began to investigate the website itself. According to whois, the site seemed to match the type of business of the small ecommerce company that owned the domain name. The IP matched the city and country of the published corporate address. The site, by itself, didn’t seem overly suspicious, but I still requested logs of all traffic to and from that IP for the past week from the Network Team. Much to my chagrin, I was informed that the SIEM system was slow. My query was expected to run for more than an hour before churning back my information.

Lesson 1

During Incident Response, you need fast query times. Test your SIEM beforehand to ensure that you can answer ad hoc queries such as “show me all traffic related to this IP between date X and date Y” or “show me all traffic that utilized TCP Port A between date X and date Y”. Your results need to be measured in minutes, at most. If results take more than five minutes, involve your SIEM vendor’s support engineers for recommendations on enhancing system performance. Also, verify that you are actually receiving the correct logs, and in the right format. You don’t want to learn during IR that you’re not collecting one of your firewalls’ logs or that there is a parse error ingesting your Active Directory logs. You will not regret spending time in advance ensuring that your SIEM is tuned and comprehensive. When in doubt, ingest the logs! Storage issues aside, no one has ever said “I wish I didn’t have this extra log source” during an IR event.
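As a concrete benchmark, the kind of ad hoc question your SIEM must answer quickly looks like the sketch below. This is a minimal, hypothetical example in Python run against a CSV export of firewall logs; the file name and column names (timestamp, src_ip, dst_ip) are placeholders for whatever your SIEM or firewall actually produces.

```python
# Minimal sketch of the kind of ad hoc question a healthy SIEM answers
# in minutes. Assumes a CSV export of firewall logs with hypothetical
# column names: timestamp, src_ip, dst_ip.
import csv
from datetime import datetime

def traffic_to_ip(log_path, suspect_ip, start, end):
    """Return all rows involving suspect_ip between start and end."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            if start <= ts <= end and suspect_ip in (row["src_ip"], row["dst_ip"]):
                hits.append(row)
    return hits

# Example: all traffic touching the suspect IP during the past week.
rows = traffic_to_ip("firewall_export.csv", "203.0.113.42",
                     datetime(2016, 5, 1), datetime(2016, 5, 8))
print(len(rows), "matching events")
```

If you cannot get an answer like this back in minutes, from every log source you think you're collecting, fix that before the incident, not during it.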

While I was waiting, the Information Security team discovered an old antivirus alert that seemed suspicious. It wasn’t yet determined whether this was directly relevant, but they began investigating. Temporary mayhem ensued as each disparate team began its own disparate searches. My own team members repeatedly came to me asking for ways that they could help, but there didn’t seem to be enough actionable information. I just told them that we had something strange but, since I couldn’t pinpoint what it was, they should stand by. I was unaware that our Event had already morphed into an Incident; at that point, no one team understood this.

Lesson 2

Communication, during an Incident, is paramount. Consider a shared conference bridge or a group Instant Messaging session.  You want some method for each engineer or analyst to chime in with what they discover in real time.   That new information may put puzzle pieces together in someone else’s mind and help expedite containing your threat.

With the wait for the laggard SIEM, the overall focus shifted to the anomalous antivirus Event. A horde of engineers gathered pitchforks and torches to raid an unsuspecting IT Manager, the owner of said laptop. There seemed to be a line out the door of people waiting their turn to inspect the suspect laptop and find “the” problem. Something didn’t feel right about this. It did not sit well with me to have so many people working on the machine. I figured out quickly that we were each stomping all over the data as we parsed files and logs. I asked one of my team to take an image of the laptop and restore it to a like model. I sat in my office, just like the angry mob, and dove through System Event Logs, file creation dates and times, and nearly every log file related to the AV event. I spent about two hours at this before deciding that there might be something there, but nothing that jumped off the screen at me.

Lesson 3

We overwrote potentially critical forensic metadata the moment the first user logged in and started poking around, and in doing so, we also unwittingly rendered any evidence that we might have found inadmissible in court. If a machine is suspect, take a “forensically sound” image of the machine and perform your analysis on that image, or at the very least only after you have that backup. “Forensically sound” means that no metadata is altered from the original. Every copy should be identical to the original, and can be taken using commercial products, such as Guidance Software’s EnCase, or open source tools, such as dd on *nix. And make certain that you document and never break Chain of Custody. Ever. If you do, an attorney simply has to say “the evidence may have been tampered with” and the evidence (and anything juicy you may have found on it) will likely be deemed inadmissible!
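“Forensically sound” is also verifiable: the cryptographic hash of the image must match the hash of the original media. Below is a minimal sketch of that verification step in Python; the device and image paths are hypothetical, and in practice your imaging tool records these hashes as part of the Chain of Custody documentation.

```python
# Verify that a disk image is a bit-for-bit copy of the original by
# comparing cryptographic hashes. Paths are hypothetical examples.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file/device in chunks so huge images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original = sha256_of("/dev/sdb")            # the suspect drive (opened read-only)
image    = sha256_of("/evidence/laptop.dd") # the acquired image
assert original == image, "Image is NOT forensically sound"
print("Hashes match:", image)  # record this value in the chain-of-custody log
```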

By this time, some SIEM query data began trickling out of the network group. The internal “source” server was a generic application server. It contained little or no sensitive information by itself, so talk of a simple misconfiguration in some random configuration file began to circulate. The business unit tested the application and things appeared to be functioning as expected. Still, I began to look deeper on the box without knowing what (or when) to look for. Again, with no forensically sound image, I began to read through the event log looking for suspicious events. I saw what would later be determined to be unusual login hours for a service account. But it was a service account…I assumed, at the time, that the application triggered a login event at semi-random intervals. Otherwise, the event logs looked clean. None had been erased and I saw no other “unusual” activity. I decided to search and sort all files by creation date, just to see if I could find anything interesting. I stumbled onto a Dameware log file showing a remote connection event from another server.
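Sorting a live filesystem by creation date is crude triage, but it is what surfaced the Dameware log for me. Here is a rough sketch of the idea in Python; the root path is a placeholder, and on Windows, st_ctime reports creation time.

```python
# Crude triage: walk a directory tree and list the most recently
# created files first. On Windows, st_ctime is the creation time.
import os
from datetime import datetime

def files_by_creation(root):
    """Walk a tree and return (creation_time, path), newest first."""
    entries = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.stat(path).st_ctime, path))
            except OSError:
                pass  # locked or vanished files are common on live systems
    return sorted(entries, reverse=True)

# Show the 50 most recently created files under the suspect folder.
for ctime, path in files_by_creation("C:\\")[:50]:
    print(datetime.fromtimestamp(ctime), path)
```

As the next lesson explains, though, this only sees what the Windows API chooses to show.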

Lesson 4

Malware can hide from the Windows GUI. I was lucky and found files that helped to solve significant pieces of the IR puzzle. But that was luck. A better method would have been to use a dedicated forensic platform. Most modern malware will hide itself if you only look via the Windows GUI, but it “can’t hide” from these applications. There are open source products available, and even some of the Gartner Magic Quadrant forensic vendors may surprise you with their affordable prices.

Did we own Dameware? Why wouldn’t we just use Remote Desktop Protocol (RDP) from the other server? And why would the Service Account use Dameware for remote control…or did Dameware provide some other feature aside from remote control? This should have been a huge red flag, but it drew only suspicion. Dameware is a legitimate company, right? I looked at the company website and saw the product for sale. I could think of some use cases for the product where RDP might not be the best solution, use cases where adding Dameware to the architecture could make solid business sense. But it was still strange to see the product at all. I asked a team member who handled our licenses, and there were no licenses in the database. But that licensing database was hopelessly inaccurate. All updates were manual, so software was more often missed than accounted for. I called in a favor from one of my team members who was well known for his scripting skills. I asked if he would run a script, en masse, to see where else the product was installed…maybe it was included with an enterprise application for remote support, or maybe we simply had not entered it into our database. I left him to devise his script while I followed the rabbit hole…the Dameware logs.
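I never saw his script, but the heart of such a sweep is simple. Here is a hedged sketch in Python that checks a single Windows host’s uninstall registry keys for the product; running it “en masse” would mean executing it remotely against every machine, and the product-name match is an assumption about how Dameware registers itself.

```python
# Sketch: check a Windows machine's uninstall registry entries for a
# given product name. Windows-only (winreg is in the standard library).
# Note: 32-bit apps on 64-bit Windows may register under WOW6432Node.
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_products(root=winreg.HKEY_LOCAL_MACHINE):
    with winreg.OpenKey(root, UNINSTALL_KEY) as key:
        subkey_count = winreg.QueryInfoKey(key)[0]
        for i in range(subkey_count):
            sub = winreg.EnumKey(key, i)
            with winreg.OpenKey(key, sub) as item:
                try:
                    yield winreg.QueryValueEx(item, "DisplayName")[0]
                except FileNotFoundError:
                    continue  # not every entry has a DisplayName

suspects = [p for p in installed_products() if "dameware" in p.lower()]
print(suspects or "No DameWare found on this host")
```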

Lesson 5

Hardware and software inventory tops the list of the CIS Top 20 Security Controls. Know what’s on your network and why. That is often easier said than done when you’ve inherited a legacy of “Who knows? It was like that when I got here.” But it is imperative to know what should and should not exist. Had I known at this point that no license was owned, much time would have been saved.

The Dameware logs pointed me to another server, and others after that. The logs included the login and logout dates and times, sometimes with multiple events. In each instance, I would review the Windows event logs and search the file system, hoping to find correlating information. Curiously, the log folder also contained a file of typed commands and URLs. Our attacker was keylogging himself?! One of the log files included a reference to http://winzip.com, which raised the question of querying the web traffic from those servers. A quick instant message request went out to the Information Security team for the web filter logs, but the server traffic wasn’t logged. Who would surf the web on a server? They’d “saved” licensing costs by not filtering that traffic.

Lesson 6

Yes, someone would. Once established, attackers will often download their toolset, and may do so directly from your servers. Make sure to license and secure appropriately. The safest method is to deny all Internet traffic except that which is explicitly allowed and authorized. Our attacker had been downloading his toolkit, then downloaded WinZip to unzip it (why he ignored Windows’ built-in unzip capabilities is still a mystery to me), and then uninstalled WinZip.

OK, so back to the snail’s pace of the SIEM. I dreaded it, but it is what it is. By this time in the day, I’d learned how to craft my own queries in the product’s non-intuitive interface. They didn’t run any faster, but I could queue up requests to run while I continued to follow the path of the curious Dameware logs, flitting from server to server like a butterfly. I queried all traffic from one of the server IPs using the approximate date and time from the Dameware log. The result set was staggering: raw Cisco firewall logs are not particularly light reading, though a tip on using Microsoft Excel’s “text to columns” feature helped a bit. I searched for “winzip.com” and saw the log entry referencing the downloaded copy of WinZip. They must have uninstalled the product, because shortly thereafter the logs included a reference to the common “why did you uninstall our product?” survey webpage.
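Excel’s “text to columns” worked, but the same triage is a few lines of script. Here is a sketch of the equivalent in Python, assuming a plain-text firewall log export; the file name mirrors what I was doing, and real Cisco log formats vary, so treat it as illustrative only.

```python
# Sketch: grep a raw firewall log export for a domain of interest.
# The export file name is a placeholder; log formats vary by vendor.
def lines_mentioning(log_path, needle):
    """Yield every log line containing the search string (case-insensitive)."""
    needle = needle.lower()
    with open(log_path, errors="replace") as f:
        for line in f:
            if needle in line.lower():
                yield line.rstrip()

for hit in lines_mentioning("fw_export.log", "winzip.com"):
    print(hit)
```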

Reports started to come in from my scripting friend about dozens of copies of Dameware. Only a few were on workstations; most were on servers. And as I combed through more and more, those curious logs began to scream “I’m a keylogger”. There was no doubt anymore. We had an intruder for sure. But for how long? What did they want? How did they get in? We did have firewalls. We had antivirus. The answers would have to wait.

Lesson 7

Attackers still scan your perimeter 24x7, but it’s that soft, juicy middle of your organization that modern threat actors are after. All it takes is a single successful phishing email to plant “packed and crypted” malware, undetected by any antivirus vendor, on one of your computers. From there, they pivot laterally until they can either find or escalate privileged credentials. Consider your users hostile and treat the center of your organization the same way that you do the perimeter…complete with internal firewalls, intrusion detection systems, network segregation, etc.

I put the firewall logs aside for the moment as I digested this new information, and rekindled my efforts around pursuing the Dameware logs. The date stamps on the Dameware files seemed to have stopped weeks ago, but some of my slowly generating SIEM queries showed evidence of recent logins in Windows event logs. Could the attacker still be active? Through some dumb luck and sheer obstinacy, I had traced the logs (including reading what felt like millions of lines of Event Logs) back to a single server.

Lesson 8

Segregate your network. I had just learned that our attacker had moved from Dameware to RDP, but there was nothing in my power to stop him beyond that. The network was virtually flat and the attacker could move to any server. There are many variations on the theme. Segregate servers that talk frequently to each other. Segregate by business function. But segregate them. Internal firewalled DMZs are best, but at least pre-stage ACLs that allow you to “wall up” once an intruder becomes known.

My boss and the (only two months new) VP of Information Security were standing behind me now. I’d told them that I’d found tangible evidence and was now able to follow a trail through the logs. I demonstrated what I’d seen on what I would later find was the attacker’s central hub server. I showed them the event in the security event logs that led me to believe the attacker had gained access to our Active Directory Domain Controllers (DCs). I felt utterly and completely helpless and “pwned”, paying homage to the hacker-speak definition. The attacker apparently owned our domain! I happened to have a DC from our DR site on screen at the time. I was still debriefing what I’d found so far, but I hit refresh out of sheer habit and there it was…a login event from just minutes ago. I quickly opened Remote Desktop Services Manager and there it was…our attacker in an active RDP session. I already had a Group Policy setting in place that allowed me to take control of an RDP session without permission from the user, so I right-clicked and selected “Remote Control”. There was a moment of stunned silence as we watched the attacker typing DNS commands into an open CMD window. We had caught our attacker red-handed, almost by accident.

The silence turned to brief panic as we tried to decide what to do. We feared that tipping him off that way might cause him to launch a logic bomb against us, but we decided to risk it since the attacker seemed so sloppy. We knew that he had been downloading Dameware for remote control when he could have been using RDP. We knew that he’d even keylogged himself. And that’s not to mention downloading WinZip rather than using Windows’ native zip extract feature.

By now, we knew the IP and port (HTTPS, 443) associated with the attacker. We could cut him off through a firewall Access Control List (ACL) entry. So we made the call to the Networking team to make the emergency ACL change. The typing stopped mid-stream. We’d won the battle and breathed a collective sigh of relief. We knew there was still damage control to do, but the worst was over. Or was it?

As we patted ourselves on the back for a job well done, we moved on to discussing what to do with the compromised servers. Should we power off all the compromised servers? Could we clean them? Did we know what to clean? Should we reimage them fresh? After all, we did need to get critical business processes running again ASAP. Again, out of habit, I continued to hit refresh on the event logs.

But then it happened again…a logon event. Our attacker was back? How? From my RDP session, I quickly ran netstat to look at the active connections. I saw another external IP, but this one was using port 8080 instead. Ugh, the attacker had a backup plan?! We confirmed the suspicion by taking over control of the RDP session. I attempted to communicate with the attacker via Notepad, but the connection was terminated immediately. Another call and another emergency ACL change, and we were tentatively sure that we’d stopped the attack.
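That manual netstat check is easy to turn into a repeatable sweep. Here is a sketch using the third-party psutil library (not something we had at the time) to flag established connections from a server to non-private addresses on a watch list of ports:

```python
# Sketch: flag established outbound connections to non-private addresses,
# the scripted equivalent of eyeballing netstat output. Requires the
# third-party psutil package (pip install psutil).
import ipaddress
import psutil

WATCH_PORTS = {443, 8080}  # ports our attacker was seen using

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    remote = ipaddress.ip_address(conn.raddr.ip)
    if not remote.is_private and conn.raddr.port in WATCH_PORTS:
        print(f"PID {conn.pid}: {conn.laddr.ip}:{conn.laddr.port} "
              f"-> {conn.raddr.ip}:{conn.raddr.port}")
```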

Lesson 9

We had unwittingly stumbled into the essence of incident response…stop the bleeding. But think of the hours wasted between the initial event discovery and this moment. It was tantalizing to spend the time trying to decipher the puzzle of the manager’s laptop, but those were precious hours that could have been spent more wisely. Keep note of important clues, but don’t start detailed forensics until the hemorrhaging has ceased. You can bet that management is going to demand to know “how”, but the first priority needs to be stopping the bleeding.

My bosses left my office and I went back to the tedious log tracking. They made the call to incident responders from a large and well-known security company. Two people arrived later that evening in full suits. The man who appeared to be in charge asked questions while the other sat quietly clicking away at a laptop. As a precursor to their arrival, we had exported three months of firewall activity logs. The leader asked us to describe the events of the day. He then went on to describe the potential threats facing us…crimeware rings, corporate espionage, hacktivists, etc. With a curt nod, the quiet one turned the laptop screen and the leader asked us if we knew about five different zip files. We all shrugged our shoulders as he explained that these files had been FTP’d out of the organization and potentially contained exfiltrated data. I felt a lump in my stomach.

We convened for a couple of hours as the CEO and the Board of Directors were woken and made the trip to our office building. In the interim, I asked for a copy of the log file export that contained the zip files. I could see that the quiet one had simply searched for “ftp” amongst the data. I reflected on the dates and times and vaguely suspected that they corresponded to the dates I had seen in the Dameware logs. I also noticed that all of the suspect files were sent at night and not during business hours. Our attacker knew our working hours?

The CEO and the Board of Directors arrived, obviously miffed. Actually, the CEO’s face was red and he seemed to be seething with anger. The CIO was notably absent. The Incident Responder described the new threat environment in terms of a castle. For years, IT had spent resources building strong walls and even metaphorical moats. But these protections were effectively worthless if a single user clicked a phishing link and loaded malware on their computer. That description seemed to fit our “patient zero” laptop. He described botnets and Remote Access Tools (RATs) and the features available to these botnets’ owners.

The meeting quickly boiled down to the specifics...those FTP’d zip files. They were small, each less than a megabyte in size. So the potential data exfiltration couldn’t have been too bad, right? The CEO began to call the meeting to a close when I sheepishly raised my hand and asked about one large file that the quiet Incident Responder had apparently overlooked. This file was nearly two terabytes! I could see everyone in the room pale as management made a spectacle of peering over my shoulder at the log entry in question. Maybe I had made a mistake? But we collectively confirmed our worst fear. The huge transfer had been attempted, then aborted by the user. Unlike the other files, we could identify the source and type here…this was a SQL backup of our entire Customer Data Warehouse!

In the following days, a press release was issued stating that “some” data “may” have been leaked, with few specifics provided. The news angered some customers and drew wild speculation from others that we’d lost all of their data. But it was the right thing to do. We hired a specialized call center to handle the Incident’s call volume. We learned as fact what most of us had known as hearsay…the Information Security budget and team had been a back-burner priority for years. The CIO never returned. The CEO had already read the tea leaves that first night and reportedly asked him not to attend the emergency meeting.

A quick two-hour nap later, and IT had turned from a development powerhouse to a forensics puzzle.  Nearly all productivity stopped as critical resources were diverted to ascertain how this could have happened.  The forensics took months. The remediation plan took more than a year and cost millions of dollars.

Lesson 10

Incident Response is seldom a practiced skill set, but it is difficult to forget under these conditions. Determine your contact list well in advance…include vendor escalation phone numbers and emails. Run Red Team/Blue Team exercises to let your team practice using the tools and hone their skills.

Parting Thought

Security is pay now or pay later. It may feel like just a recurring cost center until “the” breach happens. Security may not be as sexy as the development feature rollout, but if it is not integrated fully, from start to finish, then you’ve failed before you started…the dead man walking of the impending-incident world. Business decision makers, oftentimes including CTOs, need to be taught security, but in business language. Security metrics may prove fitting for this discussion. Show and describe the holistic, layered security architecture. Remember, a firewall and antivirus are not “secure”. There is no silver bullet. All security measures can be defeated, thus we layer. This costs, but so does the seemingly inevitable incident.

 
