Forensic Blogs

An aggregator for digital forensics blogs

January 31, 2017 by Harlan Carvey

Tools

Memory Analysis
When I've had the opportunity to conduct memory analysis, Volatility and bulk_extractor have been invaluable.

Back when I started in the industry, oh so many years ago, 'strings' was pretty much the tool for memory "analysis".  Thanks to Volatility's strings plugin, there's so much more you can do: run 'strings' (I use the one from SysInternals) with the "-o" switch so that each string is reported with its offset, parse out any strings of interest, and then use the Volatility strings plugin to see where those strings are located within the memory sample, which provides significant context.
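
Just to illustrate the first step of that workflow, here's a minimal Python sketch that emits printable ASCII strings with their decimal offsets, one "offset:string" pair per line (the SysInternals tool is faster and also handles Unicode strings, and you should check your Volatility version's documentation for the exact input format its strings plugin expects; the minimum length and naive read-the-whole-file approach are just choices for the sketch):

# strings_offsets_sketch.py - minimal sketch, not a replacement for SysInternals strings.
import re
import sys

def strings_with_offsets(path, min_len=4):
    # Naive: reads the whole sample into memory; fine for a sketch, not for huge images.
    with open(path, "rb") as f:
        data = f.read()
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    for match in pattern.finditer(data):
        yield match.start(), match.group().decode("ascii")

if __name__ == "__main__":
    for offset, s in strings_with_offsets(sys.argv[1]):
        print("%d:%s" % (offset, s))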

I've run bulk_extractor across memory samples, and been able to get pcap files that contained connections not present in Volatility's netscan plugin output.  That is not to say that one tool is "better" than the other...not at all.  Both tools do something different, and look for data in different ways, so using them in conjunction provides a more comprehensive view of the data.

If you do get a pcap file (from memory or any other data source), be sure to take a look at Lorna's ISC handler diary entry regarding packet analysis; there are some great tips available.  When conducting packet analysis, remember that besides Wireshark, you might also want to take a look at the free version of NetWitness Investigator.

Carving
Like most analysts, I've needed to carve unallocated space (or other data blobs) for various items, including (but not limited to) executable images.  Carving unallocated space, or any data blob (memory dump, pagefile, etc.), for individual records (web history, EVT records, etc.) is pretty straightforward, as in many cases these items fit within a single sector.

Most analysts who've been around for a while are familiar with foremost (possible Windows .exe here) and scalpel as carving solutions.  I did some looking around recently to see if there were any updates on the topic of carving executables, and found Brian Baskin's pe_carve.py tool.  I updated my Python 2.7 installation to version 2.7.13, because the pip package manager became part of the installation package as of version 2.7.9.  Updating the installation so that I could run pe_carve.py was as simple as "pip install bitstring" and "pip install pefile".  That was it.  From there, all I had to do was run Brian's script.  The result was a folder of files with valid PE headers that tools such as PEView could parse, although portions of the carved files were clearly not part of the original executables.  But then, such is the nature of carving files from unallocated space.
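
This isn't Brian's code, just a rough sketch of the general approach that kind of carver takes: scan the blob for 'MZ' signatures, let pefile validate the header, and write out a chunk sized from the header.  The carve cap, file names, and sizing logic here are my own arbitrary choices:

# carve_pe_sketch.py - rough illustration of PE carving, not pe_carve.py itself.
import sys
import pefile  # pip install pefile

CARVE_SIZE = 4 * 1024 * 1024  # arbitrary cap on how much data to consider per hit

def carve(blob_path, out_prefix="carved"):
    with open(blob_path, "rb") as f:
        data = f.read()
    hits = 0
    offset = data.find(b"MZ")
    while offset != -1:
        chunk = data[offset:offset + CARVE_SIZE]
        try:
            pe = pefile.PE(data=chunk, fast_load=True)
            # SizeOfImage gives a rough upper bound; carved output will still
            # contain slack from surrounding sectors, as noted above.
            size = min(pe.OPTIONAL_HEADER.SizeOfImage, len(chunk))
            with open("%s_%08x.bin" % (out_prefix, offset), "wb") as out:
                out.write(chunk[:size])
            hits += 1
        except pefile.PEFormatError:
            pass  # 'MZ' that isn't a valid PE header; keep scanning
        offset = data.find(b"MZ", offset + 2)
    return hits

if __name__ == "__main__":
    print("%d files carved" % carve(sys.argv[1]))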

Addendum, 1 Feb: One of the techniques I used to try to optimize analysis was to run 'strings' across the carved PE files, in hopes of locating .pdb strings or other indicators.  Unfortunately, in this case, I had nothing to go on other than file names.  I did find several references to the file names, but those strings were located in sectors that had been swept up into the carved files and likely had little to do with the original executables.

Also, someone on Twitter recommended FireEye's FLOSS tool, something you'd want to use in addition to 'strings'.

Hindsight
Hindsight, from Obsidian Forensics, is an awesome tool for parsing Chrome browser history.  If you haven't tried it, take a look.  I've used it quite successfully during engagements, most times to get a deeper understanding of a user's browsing activity during a particular time frame.  In one instance, however, I found the "smoking gun" in a ransomware case: the user specifically used Chrome (they were also using IE on a regular basis) to browse to a third-party email portal, download and activate a malicious document, and infect their system with ransomware.  Doing so bypassed the corporate email portal protections intended specifically to prevent systems from being infected with...well...ransomware.  ;-)

Hindsight has been particularly helpful, in that I've used it to get a small number of items to add to a timeline (via tln.pl/.exe) that provide a great deal of context.
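
Hindsight handles far more artifact types than this, but for a sense of what's under the hood, Chrome keeps its history in a SQLite database, and pulling visited URLs with timestamps for a timeline is straightforward.  A minimal sketch (the exported file path is hypothetical; the output is just pipe-delimited, not proper TLN format, and you should work on a copy of the History file rather than the live, possibly locked one):

# chrome_history_sketch.py - minimal sketch of pulling Chrome visits for a timeline.
import sqlite3
from datetime import datetime, timedelta

# Chrome/WebKit timestamps are microseconds since 1601-01-01 UTC.
WEBKIT_EPOCH = datetime(1601, 1, 1)

def webkit_to_utc(microseconds):
    return WEBKIT_EPOCH + timedelta(microseconds=microseconds)

def dump_history(history_path):
    conn = sqlite3.connect(history_path)
    cur = conn.execute(
        "SELECT urls.url, urls.title, visits.visit_time "
        "FROM visits JOIN urls ON visits.url = urls.id "
        "ORDER BY visits.visit_time"
    )
    for url, title, visit_time in cur:
        print("%s|%s|%s" % (webkit_to_utc(visit_time).isoformat(), url, title))
    conn.close()

if __name__ == "__main__":
    dump_history(r"C:\case\export\History")  # hypothetical exported copy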

Shadow Copies
Volume shadow copies (VSCs) and how DFIR analysts take advantage of them is something I've always found fascinating.  Something I saw recently on Twitter was a command line that can be used to access files within Volume Shadow Copies on live systems; the included comment was, "Random observation - if you browse c$ on a PC remotely and add @TIMEZONE-Snapshot-Time, you can browse VSS snapshots of a PC."

An image included within the tweet chain/thread appeared as follows:

[Image: screenshot from Twitter]
I can't be the only one that finds this fascinating...not so much that it can be done, but more along the lines of, "...is anyone doing this on systems within my infrastructure?"

Now, I haven't gotten this to work on my own system.  I am on a Windows 10 laptop, and can list the available shadow copies, but can't copy files using the above approach.  If anyone has had this work, could you share what you did?  I'd like to test this in a Win7 VM with Sysmon running, but I haven't been able to get it working there, either.
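
For testing, one quick way to see what's available on a box is to parse the output of 'vssadmin list shadows' and print both the device object path and a candidate @GMT-style token; a rough sketch (the parsing assumes the English-language vssadmin output, and the @GMT token format in the comment is just my reading of the tweet, not something I've confirmed works):

# vsc_enum_sketch.py - enumerate local shadow copies; run from an elevated prompt.
import re
import subprocess

def list_shadow_copies():
    output = subprocess.check_output(
        ["vssadmin", "list", "shadows"], universal_newlines=True
    )
    # Pair each creation time with its shadow copy volume device path.
    times = re.findall(r"creation time: (.+)", output)
    devices = re.findall(r"Shadow Copy Volume: (\S+)", output)
    return list(zip(times, devices))

if __name__ == "__main__":
    for created, device in list_shadow_copies():
        print("Created: %s" % created)
        print("  Device path: %s" % device)
        # The tweet's technique would build a UNC path along the lines of
        # \\host\c$\@GMT-YYYY.MM.DD-HH.MM.SS\... from the (UTC) creation time.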

Addendum, 1 Feb: Tun tweeted a link to Dan's blog post that might be helpful with this technique.  Another "Dan" said on Twitter that he wasn't able to get the above technique to work.

As a side note to this topic, remember this blog post?  Pretty sneaky technique for launching malware.  What does that look like, and how do you hunt for it on your network?

Windows Event Logs
I recently ran across a fascinating MSDN article entitled "Recreating Windows Event Log Files"; it kind of makes you wonder, how can this be used by a bad guy, and more importantly, has it been?

Maybe the real question is, are you instrumented to catch this happening on endpoints in your environment?  I did some testing recently, and was simply fascinated with what I saw in the data.
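
One narrow, adjacent check you can sweep for (a sketch, not a detection program, and an attacker who rebuilds a log file may never generate these records at all) is the "log was cleared" events, via wevtutil:

# log_clear_check.py - sketch: sweep for "log cleared" records as one narrow indicator.
# Security event ID 1102 and System event ID 104 record audit/event log clears.
import subprocess

CHECKS = [
    ("Security", "*[System[(EventID=1102)]]"),
    ("System", "*[System[(EventID=104)]]"),
]

def query(log, xpath, count=5):
    cmd = ["wevtutil", "qe", log, "/q:%s" % xpath, "/f:text",
           "/c:%d" % count, "/rd:true"]
    try:
        return subprocess.check_output(cmd, universal_newlines=True)
    except subprocess.CalledProcessError:
        return ""  # log not present or access denied

if __name__ == "__main__":
    for log, xpath in CHECKS:
        print("=== %s ===" % log)
        print(query(log, xpath) or "(no matching records)")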

Read the original at: Windows Incident Response.  Filed Under: Digital Forensics.  Tagged With: bulk_extractor, hindsight, pe_carve, Volatility, VSC, Windows Event Logs

January 27, 2016 by Harlan Carvey

The Need for Instrumentation

Almost everyone likes spies, right?  Jason Bourne, James Bond, that sort of thing?  One of the things you don't see in the movies is the training these super spies go through, but you have to imagine that it's pretty extensive, if they can pop up in a city they maybe haven't been to before and transition seamlessly into the environment.

The same thing is true of targeted adversaries...they're able to seamlessly blend into your environment.  Like special operations forces, they learn how to use tools native to the environment in order to get the information that they're after, whether it's initial reconnaissance of the host or the infrastructure, locating items of interest, moving laterally within the infrastructure, or exfiltrating data.

I caught this post from JPCERT/CC that discusses Windows commands abused by attackers.  The author takes a different approach from previous posts, sharing some of the command lines used but also focusing on the frequency of use for each tool.  There's also a section in the post that recommends using GPOs to restrict the use of unnecessary commands.  An alternative approach might be to track attempts to use the tools, by creating a trigger that writes a Windows Event Log record (discussed previously in this post).  When incorporated into an overall log management (SIEM, filtering, alerting, etc.) framework, this can be an extremely valuable detection mechanism.
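
One way to put the frequency angle to work on your own data is to tally process-creation events and see which of the commands from the JPCERT/CC list show up, and how often.  A rough sketch against Security event ID 4688 (this assumes process creation auditing is already enabled and that you're running elevated; the watchlist contents, the event cap, and the "New Process Name" field-name regex, which comes from the English text rendering, are my own choices, and Sysmon event ID 1 would work the same way):

# cmd_frequency_sketch.py - tally process names seen in process-creation events.
import collections
import re
import subprocess

# A few of the commands called out in the JPCERT/CC post; extend as needed.
WATCHLIST = {"tasklist.exe", "net.exe", "net1.exe", "ipconfig.exe", "whoami.exe",
             "nltest.exe", "dsquery.exe", "csvde.exe", "at.exe", "wmic.exe"}

def tally_4688(max_events=5000):
    cmd = ["wevtutil", "qe", "Security", "/q:*[System[(EventID=4688)]]",
           "/f:text", "/c:%d" % max_events, "/rd:true"]
    output = subprocess.check_output(cmd, universal_newlines=True)
    counts = collections.Counter()
    for match in re.finditer(r"New Process Name:\s+(\S+)", output):
        exe = match.group(1).rsplit("\\", 1)[-1].lower()
        counts[exe] += 1
    return counts

if __name__ == "__main__":
    for exe, count in tally_4688().most_common():
        flag = "  <-- on the JPCERT/CC list" if exe in WATCHLIST else ""
        print("%6d  %s%s" % (count, exe, flag))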

If you're not familiar with some of the tools that you see listed in the JPCERT/CC blog post, try running them, starting by typing the command followed by "/?".

TradeCraft Tuesday - Episode #6 discusses how PowerShell can be used and abused.  The presenters (one of whom is Kyle Hanslovan) strongly encourage interaction (wow, does that sound familiar at all?) with the presentation via Twitter.  During the presentation, the guys talk about PowerShell being used to push base64-encoded commands into the Registry for later use (often referred to as "fileless"), and it doesn't stop there.  Their discussion of the power of PowerShell for post-exploitation activities really highlights the need for a suitable level of instrumentation in order to achieve visibility.
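
To make the "base64 in the Registry" idea concrete, here's a minimal Python sketch that looks through a couple of the well-known Run keys for long base64 blobs and tries to decode them the way PowerShell's -EncodedCommand does (base64 over UTF-16LE).  The key list and length threshold are my own choices, and real "fileless" persistence can live in plenty of other places:

# encoded_run_keys_sketch.py - look for base64/encoded PowerShell in common Run keys.
import base64
import re
import winreg  # Windows-only

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{40,}")  # arbitrary length threshold

def scan():
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:
                break
            i += 1
            if not isinstance(value, str):
                continue
            for blob in B64_BLOB.findall(value):
                try:
                    # -EncodedCommand payloads are base64-encoded UTF-16LE text.
                    decoded = base64.b64decode(blob).decode("utf-16-le")
                    print("[%s\\%s] %s... -> %s" % (path, name, blob[:20], decoded))
                except (ValueError, UnicodeDecodeError):
                    pass  # not a decodable PowerShell payload

if __name__ == "__main__":
    scan()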

The use of native commands by an adversary or intruder is not new...it's been talked about before.  For example, the guys at SecureWorks talked about the same thing in the articles Linking Users to Systems and Living off the Land.  Rather than talking about what could be done, these articles show you data that illustrates what was actually done; not might or could, but did.

So, what do you do?  Well, I've posted previously about how you can go about monitoring for command line activity, which is usually manifest when access is achieved via RATs.

Not all abuse of native Windows commands and functionality is going to be as obvious as some of what's been discussed already.  Take this recent SecureWorks post for example...it illustrates how GPOs have been observed being abused by dedicated actors.  An intruder moving about your infrastructure via Terminal Services won't be as easy to detect using command line process creation monitoring, unless and until they resort to some form of non-GUI interaction.

Read the original at: Windows Incident Response.  Filed Under: Digital Forensics.  Tagged With: command line, powershell, Windows Event Logs

January 20, 2016 by Harlan Carvey

Resources, Link Mashup

Monitoring
MS's Sysmon was recently updated to version 3.2, with the addition of capturing opens for raw read access to disks and volumes.  If you're interested in monitoring your infrastructure and performing threat hunting at all, I'd highly recommend that you consider installing something like this on your systems.  While Sysmon is not nearly as fully-featured as something like Carbon Black, employing Sysmon along with centralized log collection and filtering will provide you with a level of visibility that you likely hadn't even imagined was possible previously.

This page talks about using Sysmon and NXLog.
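
If you want a quick look at what that new raw-access telemetry looks like on a test box, you can pull the events back with wevtutil.  A minimal sketch (this assumes Sysmon is installed and writing to its usual Microsoft-Windows-Sysmon/Operational channel, that event ID 9 is the RawAccessRead event added in 3.2, and that you're running from an elevated prompt):

# sysmon_rawread_sketch.py - pull recent Sysmon RawAccessRead (event ID 9) records.
import subprocess

def recent_raw_reads(count=10):
    cmd = ["wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
           "/q:*[System[(EventID=9)]]", "/f:text", "/c:%d" % count, "/rd:true"]
    return subprocess.check_output(cmd, universal_newlines=True)

if __name__ == "__main__":
    print(recent_raw_reads() or "(no RawAccessRead events found)")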

The fine analysts of the Dell SecureWorks CTU-SO recently had an article posted that describes what the bad guys like to do with Windows Event Logs; both of the case studies could be "caught" with the right instrumentation in place.  You can also use process creation monitoring (via Sysmon, or some other means) to detect when an intruder is living off the land within your environment.

The key to effective monitoring and subsequent threat hunting is visibility, which is achieved through telemetry and instrumentation.  How are bad guys able to persist within an infrastructure for a year or more without being detected?  It's not that they aren't doing stuff, it's that they're doing stuff that isn't detected due to a lack of visibility.

MS KB article 3004375 outlines how to improve Windows command-line auditing, and this post from LogRhythm discusses how to enable PowerShell command line logging (another post discussing the same thing is here).  The MS KB article gives you some basic information regarding process creation, and Sysmon provides much more insight.  Regardless of which option you choose, however, all of them are useless unless you're doing some sort of centralized log collection and filtering, so be sure to incorporate the necessary and appropriate logs into your SIEM, and get those filters written.
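
A quick way to check whether a given box already has these knobs turned on is to read the corresponding policy values out of the registry.  This is just a sketch; the paths below reflect my understanding of the KB article and of the GPO-backed PowerShell logging settings (script block logging applies to PowerShell 5.0 and later), so verify them against the documentation for your environment:

# audit_policy_check_sketch.py - check whether command-line and PowerShell logging
# policies appear to be enabled on the local system. Windows-only sketch.
import winreg

SETTINGS = [
    # (description, key path under HKLM, value name)
    ("4688 includes command line (KB3004375)",
     r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit",
     "ProcessCreationIncludeCmdLine_Enabled"),
    ("PowerShell script block logging (PSv5+)",
     r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging",
     "EnableScriptBlockLogging"),
    ("PowerShell module logging",
     r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging",
     "EnableModuleLogging"),
]

def enabled(path, value_name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, value_name)
            return bool(value)
    except OSError:
        return False  # key or value absent: treat as not enabled

if __name__ == "__main__":
    for desc, path, name in SETTINGS:
        print("%-45s %s" % (desc, "enabled" if enabled(path, name) else "NOT enabled"))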

Windows Event Logs
Speaking of Windows Event Logs, sometimes it can be very difficult to find information regarding various event source/ID pairs.  Microsoft has a great deal of information available regarding Windows Event Log records, and I very often can easily find the pages with a quick Google search.  For example, I recently found this page on Firewall Rule Processing events, based on a question I saw in an online forum.

From Deus Ex Machina, you can look up a wide range of Windows Event Log records here or here.  I've found both to be very useful.  I've used this site more than once to get information about *.evtx records that I couldn't find any place else.

Another source of information about Windows Event Log records and how they can be used is often one of the TechNet blogs.  For example, here's a really good blog post from Jessica Payne regarding tracking lateral movement...

With respect to the Windows Event Logs, I've been looking at ways to increase instrumentation on Windows systems, and something I would recommend is putting triggers in place for various activities and having them write a record to the Windows Event Log.  I found this blog post recently that discusses using PowerShell to write to the Windows Event Log, so whatever you trap or trigger on a system can launch the appropriate command or run a batch file that contains the command.  Of course, in a networked environment, I'd highly recommend that a SIEM be set up, as well.
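
The linked post does this with PowerShell's Write-EventLog; the built-in eventcreate.exe tool does the same basic job, and either can be wrapped in whatever your trigger runs.  A minimal sketch (the event ID and source name here are arbitrary, eventcreate limits IDs to 1-1000, and you'll likely need an elevated prompt, particularly the first time a new source is used):

# write_event_sketch.py - write a custom record to the Application log so that a
# trigger or trap leaves a mark your SIEM will pick up.
import subprocess

def log_trigger(message, event_id=777, source="DFIR-Trigger"):
    # event_id must be between 1 and 1000 for eventcreate.
    subprocess.check_call([
        "eventcreate", "/ID", str(event_id), "/L", "APPLICATION",
        "/T", "INFORMATION", "/SO", source, "/D", message,
    ])

if __name__ == "__main__":
    log_trigger("Monitored command executed: net.exe user (example)")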

One thought regarding filtering and analyzing Windows Event Log records sent to a SIEM: when looking at various Windows Event Log records, we have to view them in the context of the system, rather than in isolation, as what they actually refer to can be very different.  A suspicious record related to WMI, for example, when viewed in isolation may end up being part of known and documented activity when viewed in the context of the system.

Analysis
PoorBillionaire recently released a Windows Prefetch Parser, which is reportedly capable of handling *.pf files from XP systems all the way up through Windows 10 systems.  On 19 Jan, Eric Zimmerman did the same, making his own Prefetch parser available.
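
Neither of those tools needs any help from me, but as a quick illustration of why "XP through Windows 10" is worth calling out: the prefetch format version lives right in the file header, and Windows 10 changed things by compressing the files.  A minimal sketch that just peeks at the header (the version-to-OS mapping reflects my understanding of the format):

# prefetch_peek_sketch.py - peek at a .pf file's header and report its format version.
import struct
import sys

VERSIONS = {17: "Windows XP/2003", 23: "Windows Vista/7",
            26: "Windows 8/8.1", 30: "Windows 10"}

def peek(path):
    with open(path, "rb") as f:
        header = f.read(8)
    if header[:3] == b"MAM":
        # Windows 10 prefetch files are compressed; they have to be decompressed
        # before the SCCA header below is visible.
        return "Windows 10 (compressed, 'MAM' signature)"
    version, signature = struct.unpack("<I4s", header)
    if signature != b"SCCA":
        return "not a prefetch file (missing SCCA signature)"
    return VERSIONS.get(version, "unknown version %d" % version)

if __name__ == "__main__":
    print(peek(sys.argv[1]))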

Having tools available is great, but what we really need to do is talk about how those tools can be used most effectively as part of our analysis.  There's no single correct way to use these tools; the issue becomes, how do you correctly interpret the data once you have it?

I recently encountered a "tale of two analysts", where both had access to the same data.  One analyst did not parse the ShimCache data at all as part of their analysis, while the other did and misinterpreted the information that the tool (whichever one that was) displayed for them.

So, my point is that having tools to parse data is great, but if the focus is on tools and parsing data rather than on analyzing and correctly interpreting that data, what have the tools really gotten us?

Creating a Timeline
I was browsing around recently and ran across an older blog post (yeah, I know it's like 18 months old...), and in the very beginning of that post, something caught my eye.  Specifically, a couple of quotes from the blog post:

...my reasons for carrying this out after the filesystem timeline is purely down to the time it takes to process.

...and...

The problem with it though is the sheer amount of information it can contain! It is very important when working with a super timeline to have a pivot point to allow you to narrow down the time frame you are interested in.

The post also states that timeline analysis is an extremely powerful tool, and I agree, 100%.  What I would offer to analysts is a more deliberate approach to timeline analysis, based on what Chris Pogue coined as Sniper Forensics.

Speaking of analysis, the folks at RSA released a really good look at analyzing carrier files used during a phish.  The post provides a pretty thorough walk-through of the tool and techniques used to parse through an old (or should I say, "OLE") style MS Word document to identify and analyze embedded macros.
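
The RSA post walks through its own tooling, but as a rough sketch of the first triage step on an "OLE"-style document, you can list the storage streams and flag anything that looks like VBA macro storage with the olefile module; tools like oledump.py or olevba will actually extract and decode the macros (the stream-name matching here is just a heuristic of mine):

# ole_triage_sketch.py - quick triage of an OLE-format Office document.
import sys
import olefile  # pip install olefile

def triage(path):
    if not olefile.isOleFile(path):
        print("%s is not an OLE file" % path)
        return
    ole = olefile.OleFileIO(path)
    for entry in ole.listdir():
        stream = "/".join(entry)
        suspicious = "macros" in stream.lower() or "vba" in stream.lower()
        print("%s%s" % (stream, "  <-- possible VBA macros" if suspicious else ""))
    ole.close()

if __name__ == "__main__":
    triage(sys.argv[1])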

PowerShell
Not long ago, I ran across an interesting artifact...a folder with the following name:

C:\Users\user\AppData\Local\Microsoft\Windows\PowerShell\CommandAnalysis

The folder contained an index file, and a bunch of files with names that follow the format "PowerShell_AnalysisCacheEntry_GUID".  Doing some research into this, I ran across this BoyWonder blog post, which seems to indicate that this is a cache (yeah, okay, that's in the name, I get it...), possibly used for functionality similar to auto-complete.  It doesn't appear to illustrate what was run, though.  For that, you might want to see the LogRhythm link earlier in this post.

As it turned out, the folder path I listed above was part of legitimate activity performed by an administrator.


Read the original at: Windows Incident Response.  Filed Under: Digital Forensics.  Tagged With: Analysis, resources, Windows Event Logs

About

This site aggregates posts from various digital forensics blogs. Feel free to take a look around, and make sure to visit the original sites.


All content is copyright the respective author(s)