Thursday, October 24, 2013

PowerShell and Nessus


Wouldn't it be nice if every PT tool spat out its results in the exact same format? I'd be happy if Nessus, nmap, MetaSploit, and Nikto all used the exact same format for output, whether that means they all stick to CSVs with the same order for the "header" row or use the same elements and tree structure for XML output. However, reality is slightly skewed...I mean different. :-)  Recently I worked on a PT mission and it was my responsibility to compile the Nessus results and then parse them for certain items. That was the easy part. The fun part came when I needed an easy way to convert the CVE numbers in the CSV output to the numbering the Army uses: IAVMs/IAVAs. Easy, right?

I thought so, figuring it to be just a "simple" find & replace operation. But, and this is where reality came into play, it did take a little work to get some PowerShell scripts working: stripping out what I wanted into multiple output files (still CSVs, intentionally) and then converting the CVE numbers in ALL of those output files.

You may be asking, "Dave, why didn't you just convert the CVEs first and then strip out the results you wanted?" Good question. The answer is, to me, very simple: I would rather my find and replace operation only process the lines/words it absolutely must, as opposed to processing the entirety of ALL of my output files. In this case I had five separate output files, totaling only 105MB. That isn't a big amount when one considers the computing power available today. The parsed output files numbered six and weighed in at a glaringly heavy 18.8MB...you might call this a feather-weight contender.

Since the find and replace operation runs line by line, at least the way I wrote it, this 18.8MB is a much better number to run through. I should also mention that this particular pentest was very small in scope and size, consisting of five Class 'C' blocks with a grand total of fewer than 500 live hosts between them. That said, I think any PT'er with even only a few PTs under their belt would recognize how small the original Nessus output files (105MB) are.

Below is what I wrote to work for my needs on this particular mission. I am in the process of turning this into a dynamic module that lets the user select the same items I hard-coded, or some combination of them, as well as the proper directories for the original Nessus output CSVs and the final output of the script (a rough sketch of what that might look like follows the script below).
#Parse-Nessus.ps1
#
# Ingredients:
#   1) Directory of Nessus output files
#      - In CSV format
#   2) User has ability to read and write to appropriate directories
#   3) User is able to read/modify the original and output files
#   4) IF you want to convert CVE's to IAVA/IAVM numbers, the mapping can be found at:
#       - http://iase.disa.mil/stigs/downloads/xls/iavm-to-cve(u).xls
#          - This file has a lot of other information in it. I found it easier to strip out just the CVE and IAVA columns and store them in a new CSV file labeled:
#          - ReplacementList.csv
#   5) I also found a great script to start with...had to modify it for me, but the link is:
#       - http://tangodude.wordpress.com/2013/04/15/powershell-multiple-find-replace-in-files-with-lookup-list/
#
# Comments: The big problem here is that the list from DISA doesn't seem to have all of the CVE numbers, so there is still a little manual work that has to be done

#In order to strip out just the CVE and IAVA numbers from the xls spreadsheet, we need to convert it from an XLS document to a CSV document:
$xlCSV=6
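#(6 is the value of Excel's XlFileFormat enumeration for a plain CSV file, xlCSV)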
$Excelfilename = "C:\users\UserOne\Desktop\iavm-to-cve(u).xls"
$CSVfilename = "C:\users\UserOne\Desktop\TempListing.csv"
$OutCSVfilename = "C:\users\UserOne\Desktop\ReplacementList.csv"
#create an Excel object
$TempExcel = New-Object -comobject Excel.Application
#we don't need to actually open Excel
$TempExcel.Visible = $False
#we don't need macro or other alerts
$TempExcel.displayalerts=$False
#Open the downloaded XLS file with the Excel object
$TempWorkbook = $TempExcel.Workbooks.Open($ExcelFileName)
#Now save the opened file with the new filename and as a CSV file
$TempWorkbook.SaveAs($CSVfilename, $xlCSV)
#Close the Excel object
$TempExcel.Quit()
#Just in case, Really close the Excel object :-)
if(ps excel -ErrorAction SilentlyContinue){kill -name excel}
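#The DISA spreadsheet has a lot of columns; I stripped it down to just the CVE and IAVM columns by hand,
#but something along these lines could do it here instead (the column header names are guesses on my part,
#so check them against the actual spreadsheet first, and the header row would still need to be removed, as noted further down):
#Import-Csv $CSVfilename | Select-Object CVE, IAVM | Export-Csv $OutCSVfilename -NoTypeInformation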

#we don't need to keep the converted CSV file.
del $CSVfilename  

#we will be reading this CSV file into a hash table but first, let's parse out the info we really want
#Where the Nessus CSV files are located
$CSVSourceDir = "C:\Users\UserOne\Desktop\nessus"
#Get ONLY the CSV files you want. In this example, the files have names like scan1.csv, scan2.csv, so the -like "sc*.csv" will select only our scan output files
$DataFiles = Get-ChildItem $CSVSourceDir -force | Where { $_.Name -like "sc*.csv" } | Foreach-Object -process { $_.FullName }
#If you want to know how many files were stored in the DataFiles object
[int] $DataFilesCount = $DataFiles.Count
#Again, this is just for verifying the number of CSV files
Write-Output "Discovered $DataFilesCount CSV Data files in $CSVSourceDir "

#Now that we have the needed files in the DataFiles object, it's time to strip out the data that we want.
#First, declare some vars for the output files. Here you can see the items I was most interested in.
#One small note of caution: create the directory structure that you want to use first (or see the commented-out New-Item line just after these variables).
$outFileHighs = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Highs.csv"
$outFileAdobeReader = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Adobe_Reader.csv"
$outfileShockwave = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Adobe_Shockwave.csv"
$outFileFlash = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Adobe_Flash.csv"
$outFileRCE = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Remote_Code_Execution.csv"
$outFileOracleJava = "C:\Users\UserOne\Desktop\nessus\Parsed\NessusResults_ALL_Oracle_Java.csv"
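#If the Parsed directory doesn't exist yet, something like this (my addition here, untested) would create it instead of doing it by hand:
#New-Item -ItemType Directory -Path "C:\Users\UserOne\Desktop\nessus\Parsed" -Force | Out-Null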

#Now let's parse through the DataFiles. The ForEach loop will load each file, search for each wanted item, and append the matches to the appropriate output file.
ForEach ($DataFile in $DataFiles)
{
        #Again, some verbosity here for info/debugging purposes
        $FileInfo = Get-Item $DataFile
        $LogDate = $FileInfo.LastWriteTime
        Write-Output "Reading data from $DataFile ($LogDate ) "
       
        #Let the parsing begin. Each line below appends the selected rows to the specified CSV file. The -NoTypeInformation switch is personal preference...but I'd recommend using it.
        #Find all Adobe Reader vulns that have an actual Risk value and the Risk value isn't "None"
        Import-Csv $DataFile | where {$_.Description -Match "Adobe Reader" -and $_.Risk -ne "None" } | select "Plugin ID", CVE, Risk, Host, Protocol, Port, Name, Description | Export-Csv $outFileAdobeReader -NoTypeInformation -Append

        #Find all high vulns
        Import-Csv $DataFile | where {$_.Risk -eq "High"} | select "Plugin ID", CVE, Risk, Host, Protocol, Port, Name, Description | Export-Csv $outFileHighs -NoTypeInformation -Append

        #Find all Shockwave vulns that have an actual Risk value and the Risk value isn't "None"
        Import-Csv $DataFile | where {$_.Description -Match "Shockwave" -and $_.Risk -ne "None" } | select "Plugin ID", CVE, Risk, Host, Protocol, Port, Name, Description | Export-Csv $outfileShockwave -NoTypeInformation -Append

        #Find all Flash vulns that have an actual Risk value and the Risk value isn't "None"
        Import-Csv $DataFile | where {$_.Description -Match "Flash" -and $_.Risk -ne "None" } | select "Plugin ID", CVE, Risk, Host, Protocol, Port, Name, Description | Export-Csv $outFileFlash -NoTypeInformation -Append

        #Find all "Remote Code Execution" vulns that have an actual Risk value and the Risk value isn't "None"
        Import-Csv $DataFile | where {$_.Description -Match "Remote Code Execution" -and $_.Risk -ne "None"} | select "Plugin ID", CVE, Risk, Host, Protocol, Port, Name, Description | Export-Csv $outFileRCE -NoTypeInformation -Append

        #Find all Oracle Java vulns (but not JavaScript) that have an actual Risk value and the Risk value isn't "None"
        Import-Csv $DataFile | where {$_.Description -Match "Java" -and $_.Name -Match "Java" -and $_.Name -notmatch "JavaScript" -and $_.Risk -ne "None" } | select "Plugin ID", CVE, Risk, Host, Name, Description | Export-Csv $outFileOracleJava -NoTypeInformation -Append

        #If you want to track how many rows go into each file, capture the filtered rows in a variable before exporting, e.g.:
        #$JavaRows = Import-Csv $DataFile | where {$_.Name -Match "Java" -and $_.Risk -ne "None"} | select "Plugin ID", CVE, Risk, Host, Name, Description
        #$JavaRows | Export-Csv $outFileOracleJava -NoTypeInformation -Append
        #Write-Output "Imported $($JavaRows.Count) Java records"
}

#OK, so now that we have all of our files with the output we want, it's time to create the Hash table
#I manually removed the header row of the ReplacementList.csv file. You can do the same or modify the code that creates the file above
$HashTable  = @(get-content C:\Users\UserOne\Desktop\nessus\ReplacementList.csv ) -replace ",","=" | convertfrom-stringdata
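#Each line of ReplacementList.csv looks something like "CVE-2012-0158,2012-A-0158" (values made up for illustration);
#the -replace turns that into "CVE-2012-0158=2012-A-0158", which ConvertFrom-StringData then reads as a key=value pair.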

#The location of the parsed nessus values. By using a standard naming convention, I can again use a wildcard mask to load only the files I want
$ParsedFiles = "C:\users\UserOne\Desktop\nessus\Parsed\Nessus*.csv"

#Let's get the nessus csv files and run them through a ForEach loop.
#We load each file, using the FullName value (path and filename) into a var and then use
#a HashTable enumerator to make the changes.
gci $ParsedFiles |
ForEach-Object {
    $Content = gc -Path $_.FullName;
    foreach ($h in $HashTable.GetEnumerator()) {
        $old = $($h.Keys)
        $new = $($h.Values)
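        #Note: -Replace treats $old as a regular expression; CVE IDs are safe as-is, but [regex]::Escape($old) would be safer for arbitrary keys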
        $Content = $Content -Replace "$old", "$new"
    }
    Set-Content -Path $_.FullName -Value $Content
}
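
For the dynamic module version mentioned above, the front end will probably end up being little more than a param block feeding the same logic. A very rough sketch of what I'm thinking (the parameter names here are placeholders, not the final interface):

function Parse-Nessus {
    param (
        #Directory holding the original Nessus CSV output files
        [Parameter(Mandatory=$true)]
        [string]$CSVSourceDir,
        #Directory for the parsed output CSVs
        [Parameter(Mandatory=$true)]
        [string]$OutputDir,
        #Which of the hard-coded categories to pull (e.g. Highs, AdobeReader, Shockwave, Flash, RCE, Java)
        [string[]]$Categories = @("Highs"),
        #Optional path to the CVE-to-IAVM ReplacementList.csv
        [string]$ReplacementList
    )
    #...the same parsing and find/replace logic as above, driven by these parameters instead of the hard-coded values...
}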


Wednesday, October 9, 2013

Powershell for Hashes and Timestamps

I have recently been loving the functionality that PowerShell provides. I think it's Microsoft's best attempt at a *nix-like shell system. There is so much that I have been able to do just playing around with it that today, when needed, I was able to bang out two quick scripts. I figured I would post one here now (and maybe the second one after I clean it up...and maybe "module-ize" it).

I recently have been put through the wringer, first by Dell (in regards to a failed hard drive on my primary laptop) and then by Microsoft (in regards to my primary Live account being hacked and misused). Because of these issues, I have had to rebuild and set up my system from scratch. What I found while doing this was that I had somehow made a good number of "backup" copies of the source directory of a big project I have been working on. As I use BitBucket, this normally wouldn't be a problem...except for the fact that there were a good number of changes made while the old hard drive was failing, and several pushes failed when the laptop froze up. This left me less than sure which folder held the main, most current copy of the project. What to do?

I decided that the easiest thing would be to have a spreadsheet of the filepath, filename, the MD5 of each file, and the Last Write Time of each file. So, I moved all of the folders under one temporary one on my desktop. Now I just had to gather the metadata I needed. PowerShell and the PowerShell Community Extensions to the rescue!

The PowerShell Community Extensions (PSCX, http://pscx.codeplex.com/) provides a useful Get-Hash function. This function can produce a number of different types of hashes depending on the switches applied by the user. Even better, it accepts pipeline input and its own output can be piped onward, which comes in very handy.

To get to some code, using the PSCX Get-Hash function is as easy as:

Get-Hash MyFile.txt

The above will by default produce the MD5 hash of MyFile.txt and will output four data values:
- Path: the full path and name of the file
- Algorithm: the algorithm used. In the example above the output would be 'MD5'
- HashString: the hash string based on the algorithm used
- Hash: the raw hash value itself, as a byte array (the HashString above is just its hex-string form)
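
For example, to get a SHA1 hash instead of MD5 and keep only the fields I care about, something like this should do it (I'm going from memory on the -Algorithm parameter name, so double-check it against your PSCX version):

Get-Hash MyFile.txt -Algorithm SHA1 | Select-Object Path, HashString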

For my purposes, I only care about the 1st and 3rd columns (Path, HashString). However, this is still not enough information. The below script is the solution that works for me. I think I am going to convert this to get rid of the hard coded values at some point in the very near future.



####################################################
# FileName: Get-HashesAndTimeStamps.ps1            #
# Author: Dave Werden                              #
# Date:   9 Oct 2013                               #
# NOTES:                                           #
# The four columns produced by the PSCX Get-Hash   #
# module are: Path, Algorithm, HashString, Hash    #
# Dependencies: The PSCX pack must be installed and#
#  imported in order to make use of the Get-Hash   #
#  module.                                         #
####################################################



#Hardcoded csv filepath and name
$outCSVFileTemp = "C:\users\dwerd_000\Desktop\SB_File_Hashes_Full.csv"

#Hardcoded location of files
$sbpath ="C:\users\dwerd_000\Desktop\ScoutNB_Collections\"
#create the collection of files
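#(PSIsContainer is $true for folders, so the filter below keeps only actual files)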
$sbfiles = gci $sbpath -Recurse | ? { !$_.PSIsContainer }


#process each file, getting the file hash and last write time for each
#output goes to the file defined in outCSVFileTemp above
foreach ($sbfile in $sbfiles ) {
   
    #smarter to grab the file's LastWriteTime value first in order to append it to the Get-Hash output
    $sbfileTime = $sbfile.LastWriteTime.ToString("dd/MM/yyyy HH:mm:ss")
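    #Select-Object keeps the Path and HashString columns and adds a calculated 'LastModified' column populated from $sbfileTime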
    Get-Hash $sbfile | Select-Object Path,HashString,@{Name='LastModified';Expression={$sbfileTime}} | Export-Csv $outCSVFileTemp -Append

 }


To quickly explain what is going on here exactly:
$sbfiles is set to contain all of the files in the given path. This is done recursively and excludes folders themselves.
Next, a foreach loop is used to process each file by:
   - First grabbing the file's LastWriteTime property, using the given format and saving to $sbfileTime
   - Next (and this was the FUN part) the file object is cut up using the Select-Object function, where only the Path and HashString 'columns' are retained and a third column (LastModified) is added and set to the value of $sbfileTime
   - The "new" object, consisting of the Path, HashString, and LastModified columns/values, is then exported to $outCSVFileTemp.

By running this script, I am able to use one spreadsheet to identify the newest version of each file, as well as whether multiple copies of the same file are identical or different. While there is probably a way to automate this in PowerShell, I still prefer to do these kinds of tasks semi-manually by using Excel's ability to filter/sort as well as its ability to highlight duplicates (of HashString, in this case). The only other action that I currently do manually, but may add to this script, is splitting the Path value into the full path to the lowest folder in one column and the filename by itself in a second column (not sure which way to go on this; a rough sketch of the idea is below).
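If I do add it, it would probably just be a couple of calculated properties built on Split-Path, something like this (untested, and the column names are only what I'd lean toward, not settled):

Get-Hash $sbfile | Select-Object @{Name='Folder';Expression={Split-Path $_.Path -Parent}}, @{Name='FileName';Expression={Split-Path $_.Path -Leaf}}, HashString, @{Name='LastModified';Expression={$sbfileTime}} | Export-Csv $outCSVFileTemp -Append

And if I ever do drop Excel for the duplicate check, something like Import-Csv $outCSVFileTemp | Group-Object HashString | Where { $_.Count -gt 1 } should flag the files that hash the same.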

Anyway, it was a lot of fun to bang this out and to see that I ended up with a CSV file of exactly the data I needed and nothing else. The other PowerShell script I knocked out today was a (for now) hardcoded parser for finding specific items from one or more Nessus results file and creating an appropriately named CSV file for these found subsets. Maybe later this week or next I will post that up as well....actually, I am certain I will as I have not found a good PS or other tool to find and compile the subsets I need from Nessus in order to provide valid data for PT reports.