Script for Windows Firewall Baseline

Another baseline an auditor or system administrator might want to consider when assessing a system is one for the general configuration of the Microsoft Windows Firewall. Many organizations are making greater use of the built-in Windows Firewall to protect even their internal systems, and the use of a host-based firewall should definitely be on the list of items to consider when evaluating the security of a system.

If the system you are auditing uses a third-party firewall product, you will need to contact that vendor to see how best to automate collecting data about the firewall's configuration. If you are using the built-in Windows Firewall, however, the following script will gather a baseline of the firewall's general configuration settings:

# Capture the global firewall settings and the per-profile settings
$Global = (netsh advfirewall show global)
$Profiles = (netsh advfirewall show allprofiles)

# Write both result sets to a single baseline file
$Global | Out-File firewall_config.txt
$Profiles | Out-File -Append firewall_config.txt

It should be noted that this script will only work on systems running PowerShell, but you have probably already noticed that theme in our posts. Also remember that it will only work on systems running the Windows Advanced Firewall that shipped with Windows Vista and later; it will not work with the original firewall built into Windows XP.
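
If you want the script to fail gracefully on older systems, a quick applicability check is easy to add. The following is a minimal sketch that assumes the Advanced Firewall's service name (MpsSvc, present on Windows Vista and later) as the test:

# A minimal sketch: only gather the baseline if the Advanced Firewall service exists
if (Get-Service -Name MpsSvc -ErrorAction SilentlyContinue) {
    netsh advfirewall show global | Out-File firewall_config.txt
    netsh advfirewall show allprofiles | Out-File -Append firewall_config.txt
}
else {
    Write-Warning "Windows Advanced Firewall not found; skipping baseline."
}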

Parsing Active Directory Groups

In a previous post we shared a PowerShell script that allows an auditor to parse a list of groups and group members on a Microsoft Windows system as part of a security assessment or baselining process. The question has come up, though: what if someone wants to follow the same process but parse a list of Active Directory groups and their members instead?

It turns out that this task is much easier to perform against Active Directory than against a local Microsoft Windows Security Accounts Manager (SAM) database. The native capabilities of the Active Directory PowerShell cmdlets make it much simpler to accomplish.

The output requirement for this script is the same as for the previous one: the output should be in a CSV format that can be easily parsed by a spreadsheet program such as Microsoft Excel. Also, as before, empty groups should still be listed, just without any members listed along with them.

The code we used to parse a list of AD groups and their members is:

# The Active Directory module ships with RSAT; PowerShell 2.0 requires loading it explicitly
Import-Module ActiveDirectory

Get-ADGroup -Filter * | ForEach-Object {
    $GroupName = $_.Name
    $Members = Get-ADGroupMember $_.SamAccountName -Recursive | ForEach-Object { $_.Name }
    If ($Members.Count -ge 1) {
        # Write the group name followed by its members as one CSV row
        $Out = $GroupName + "," + [String]::Join(",", $Members)
    }
    Else {
        # Empty groups are still reported, just without a member list
        $Out = $GroupName
    }
    $Out | Out-File -Append ad_group_members.txt
}

Ideally an auditor would combine this script with our previous example and, using PowerShell remoting, run one script that parses local and Active Directory groups at the same time and outputs the data as an easily readable file. Between the two examples, though, we hope you have enough inspiration to make this usable in your own audit efforts.
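
As a rough starting point for that combination, here is a minimal sketch of wrapping a collection script block in PowerShell remoting. The computer names and output file are placeholders, and the script block would hold the group-parsing logic from our previous post:

# A sketch only: run a collection script block on remote hosts via remoting
# (assumes PowerShell remoting has been enabled on the targets)
$Computers = "server01", "server02"       # placeholder computer names
Invoke-Command -ComputerName $Computers -ScriptBlock {
    Get-WmiObject Win32_Group | Select-Object Name
} | Out-File -Append combined_group_report.txt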

Parsing Local Windows Groups

One step that has become a staple of any audit of Microsoft Windows systems is a listing of all the local groups on the system. Listing every group on a system along with its members can help establish a baseline for the security configuration of that system. Groups like the local Administrators group are certainly a concern during a security audit, but ideally auditors would be able to look at the members of each group on a system and validate that the membership list is appropriate for that system.

As with many of the scripts we have recommended, we decided to write a PowerShell script to pull the information from a local system. We also wanted to make sure that once we pulled the information we could parse it easily. Therefore we wrote the script to pull all the local computer groups from a system, along with the members of each group, and then save the results as a CSV file for easier parsing with Excel.

Here is the code we used for the script:

# Enumerate the names of all the local groups via WMI
$GroupList = Get-WmiObject Win32_Group | ForEach-Object { $_.Name }

# "." refers to the local computer in the WinNT ADSI provider
$ComputerName = "."
$Computer = [ADSI]("WinNT://" + $ComputerName + ",computer")

ForEach ($GL in $GroupList)
{
    # Bind to the group and enumerate its members via ADSI
    $Group = $Computer.psbase.Children.Find($GL)
    $Members = $Group.psbase.Invoke("Members") | ForEach-Object { $_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null) }

    If ($Members.Count -ge 1) {
        # Write the group name followed by its members as one CSV row
        $Out = $GL + "," + [String]::Join(",", $Members)
    }
    Else {
        # Empty groups are still reported, just without a member list
        $Out = $GL
    }
    $Out | Out-File -Append local_group_members.csv
}

We ran into a couple of issues when writing the script. One was formatting the output as a CSV file. Normally we would use the Export-Csv cmdlet for that, but in this case, since we had multiple data sources, we had to do some custom formatting. Also, since some of the groups were empty, we had to add the If clause to make sure that even blank groups were reported in our output.

Once the concept of the script starts to make sense to you, it should be an easy next step to add PowerShell remoting capabilities or a computer name list to the script so it can parse multiple systems at once; a sketch of the computer list approach follows below. You might also consider saving the output from the script in a format (such as HTML) that is easier to read and include in your audit reports.
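
For example, here is a minimal sketch of the computer name list idea, assuming a hypothetical computers.txt file with one hostname per line. The ADSI binding line in the script above is the only part that needs the remote name:

# A sketch only: repeat the same ADSI enumeration against a list of hosts
ForEach ($ComputerName in (Get-Content computers.txt)) {
    $Computer = [ADSI]("WinNT://" + $ComputerName + ",computer")
    # ... repeat the group enumeration logic from the script above ...
}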

We hope the script is useful as you audit Microsoft Windows servers or establish security baselines for those systems.

Continuous Monitoring with PowerShell

Welcome back from the holidays! I imagine many of you are just returning and are ready to get started on those New Year's resolutions. If one of them was to implement continuous monitoring or to learn more about scripting, do I have a treat for you! Now that I'm back from some holiday travel myself, it's time to continue our series on automating continuous monitoring and the 20 Critical Controls.

I don't want these blog posts to become an introduction to PowerShell; there are plenty of fine references on that available to you. In talking with Jason Fossen (our resident Windows guru), I have to agree with him that one of the best starter books on the topic is Windows PowerShell in Action by Bruce Payette. So if you're looking to get started learning PowerShell, start there, or try some of the resources available at the Microsoft Scripting Center.

But let's say you've already made a bit of an investment in coding and you already know what tasks you'd like to perform. For example, maybe you want to know who is a member of your Domain Admins group, so you use Quest's ActiveRoles AD Management snap-in to run the following command:

(Get-QADGroup 'CN=Domain Admins,CN=Users,DC=auditscripts,DC=com').members

Or maybe you are concerned about generating a list of Active Directory user accounts whose passwords are set to never expire, in which case you'd likely use code such as:

Get-QADUser -passwordneverexpires

Or maybe you want to run an external binary, like nmap, to scan your machines, with a command such as:

nmap -sS -sV -O -p1-65535 10.1.1.0/24

In any case, coming up with the code you want to automate is step one.

Next, you don't just want to run the code; you want the results emailed to you on a regular basis, say once a day or once a week. The next step, then, is to use a mailer to send you the results of your script. You have a few choices here. One option is to use a third-party tool like blat to generate the email, but since we're using PowerShell, let's stick with that. Version 2.0 of PowerShell has some built-in mailing capabilities in this regard.

The easiest way to get started is to save the output of the commands you want to run to a temporary text file, mail that file's contents as the body of an email message, and then delete the temporary file. The following commands will do exactly that:

# Placeholder name for the text file that already holds your script output
$filename = "sometextfilewithoutputresultsinit.txt"

# Build the SMTP client and the message details
$smtp = New-Object Net.Mail.SmtpClient("mymailserver.auditscripts.com")
$subject = "SANS Automated Report - $((Get-Date).ToShortDateString())"
$from = "reports@auditscripts.com"        # placeholder sender address

$msg = New-Object System.Net.Mail.MailMessage
$msg.From = $from
$msg.To.Add("auditor@auditscripts.com")   # placeholder recipient address
$msg.Subject = $subject
$msg.Body = [string]::Join("`r`n", (Get-Content $filename))

# Send the message, then delete the temporary file
$smtp.Send($msg)
Remove-Item $filename
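
As a side note, PowerShell 2.0's built-in Send-MailMessage cmdlet can compress most of this into a single command. A minimal sketch, reusing the same placeholder server and addresses:

# A minimal sketch using the built-in Send-MailMessage cmdlet (PowerShell 2.0+)
Send-MailMessage -SmtpServer "mymailserver.auditscripts.com" `
    -From "reports@auditscripts.com" -To "auditor@auditscripts.com" `
    -Subject "SANS Automated Report - $((Get-Date).ToShortDateString())" `
    -Body ([string]::Join("`r`n", (Get-Content $filename)))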

Save your code as an appropriate PS1 file, schedule the script to run periodically using Task Scheduler, and you're off to the races!
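
For the scheduling piece, the schtasks.exe command line is one quick way to register the job. A sketch only, assuming a hypothetical script location of C:\Scripts\report.ps1:

# A sketch only: register a daily 6:00 AM task that runs the report script
schtasks /Create /TN "DailyAuditReport" /SC DAILY /ST 06:00 /TR "powershell.exe -File C:\Scripts\report.ps1"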

We certainly have more to discuss, but hopefully this inspires some thinking on the matter. I'll post again soon with some other steps to consider before we move on to Bash. There's a lot we can talk about here. Until next time…

Scripting Automation for Continuous Auditing

One of the topics we have been discussing with organizations a great deal lately is the idea of automation as it relates to continuous auditing. Said a different way: in the standard audit model, auditors make a list of audit scopes they want to cover in the course of a year. Then, one at a time, the auditor interviews people, examines the output from technical tools, and in general manually collects audit evidence.

What if the act of evidence collection could be automated? Auditors could then spend less time collecting data and more time analyzing evidence for risk. An auditor might even be able to consider additional scopes in the course of a year if less time were spent manually collecting evidence.

In the next few blog posts I'd like to consider tools that could be used to assist us in automating the collection of audit evidence. Ideally we would use tools suited to this purpose, but realistically I know many of us will be left on our own to develop tools that can accomplish it. Therefore we will focus on scripts and built-in operating system tools to achieve this end.

To begin this discussion, let's identify some tools that we can use for these purposes. Next time we can start to delve into a couple of these at a time. In the meantime, check these out as a way to automate data collection. Consider each of these a piece of a bigger recipe for continuous auditing.

Windows Automation Tools:

- PowerShell (scripting language)
- Task Scheduler (scheduling tasks)
- Advanced Task Scheduler (scheduling tasks)
- Blat (command line mailer)
- Bmail (command line mailer)

Unix Automation Tools:

- Bash Scripting (scripting language)
- Cron (scheduling tasks)
- Anacron (scheduling tasks)
- Mail (command line mailer)
- Mailsend (command line mailer)

As you can see, this is just a start. We will need other tools to actually collect our data sets, but these will form the building blocks of each of our scripts. If you have other tools you would like to suggest, don't be shy. We'd all love to hear which tools have helped you to be a more successful auditor.

Parsing Lynis Audit Reports

Last week we passed along some information on a Unix audit tool called Lynis, maintained by Michael Boelen (https://cisofy.com/lynis/). The value of this tool is that it is an open script that auditors can give to system administrators to run on their Unix servers in order to assess specific technical security controls on the system.

If an auditor chose to use this tool to gather data, likely the process would look something like this:

  1. The auditor gives a copy of the script to the data custodian (system administrator).
  2. The system administrator runs the script on the target machine (note, it does not work across the network).
  3. Lynis produces an output file: /var/log/lynis-report.dat (location can be customized).
  4. The system administrator gives the auditor a copy of this output file.
  5. The auditor parses the file for relevant information to include in their findings report.

Unfortunately the output report isn't quite as pretty as you might hope. The auditor will need to plan on spending some time parsing the report file in order to extract useful information for the audit report. The data can certainly be copied and pasted and manually edited, but why not use a little command line scripting to make our lives easier?

At a minimum, an auditor will want to parse out the warnings and suggestions automatically generated by the tool. Let's use built-in commands like cat, grep, and sed. A simple command that will parse out only the warnings in the report file is the following:

cat /var/log/lynis-report.dat | grep warning | sed -e 's/warning\[\]\=//g'

The same can be done with the suggestions as well using the following command line syntax:

cat /var/log/lynis-report.dat | grep suggestion | sed -e 's/suggestion\[\]\=//g'

A couple of other commands are worth playing with as well. The following gathers a list of all installed software packages on the system:

cat /var/log/lynis-report.dat | grep installed_package | sed -e 's/installed_package\[\]\=//g'

The following command gathers a list of all the installed shells on the system from the report:

cat /var/log/lynis-report.dat | grep available_shell | sed -e 's/available_shell\[\]\=//g'

You get the idea. The system administrator provides the auditor one output file, and the auditor can easily write a parsing script based on this syntax to turn the file into a manageable output that is a little friendlier to read.
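
To illustrate, here is a minimal sketch of such a script that loops over a few of the keys shown above. The report path and the key list are assumptions you would adjust for your own environment:

#!/bin/bash
# A sketch only: pull selected keys out of a Lynis report file
REPORT=/var/log/lynis-report.dat
for key in warning suggestion installed_package available_shell; do
    echo "=== $key ==="
    grep "^${key}\[\]" "$REPORT" | sed -e "s/${key}\[\]=//g"
done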

We hope this inspires you to continue writing your own scripts to automate the audit process. The more we can automate, the faster our audit analysis. The faster our audit analysis, the more technical audits we can perform and the more likely our systems will be properly secured. Until next time…

Unix Auditing with Lynis

One of the questions I often get asked in our audit classes is how to automate data collection from systems in a way that system administrators will trust. The problem is that while a number of data collection tools are available, those tools are often compiled binaries with no easy way to review the code. And rightfully so, sysadmins are normally cautious about installing software on their systems that they have not had time to verify.

One solution is to write your own scripts for automated data collection. We tend to recommend keeping things simple with languages like PowerShell and Bash. One of the nice things about using these languages is that the sysadmin can review the code before running the tools, which typically gives them a little more comfort. But we don't always have the time to write our own. So what then?

One option, at least for Unix, is to run a tool called Lynis (https://cisofy.com/lynis/). Michael Boelen is the lead developer on this project, and the project site describes the tool this way:

“Lynis is an auditing tool for Unix (specialists). It scans the system and available software, to detect security issues. Beside security related information it will also scan for general system information, installed packages and configuration mistakes.

This software aims in assisting automated auditing, software patch management, vulnerability and malware scanning of Unix based systems. It can be run without prior installation, so inclusion on read only storage is no problem (USB stick, cd/dvd).

Lynis assists auditors in performing Basel II, GLBA, HIPAA, PCI DSS and SOX (Sarbanes-Oxley) compliance audits.

Intended audience:
Security specialists, penetration testers, system auditors, system/network managers.

Examples of audit tests:
– Available authentication methods
– Expired SSL certificates
– Outdated software
– User accounts without password
– Incorrect file permissions
– Firewall auditing”

In addition it has been found to run on the following Unix operating system platforms:

Arch Linux; CentOS; Debian; Fedora Core 4 and higher; FreeBSD; Gentoo; Knoppix; Mac OS X; Mandriva 2007; OpenBSD 4.x; OpenSolaris; OpenSuSE; PcBSD; PCLinuxOS; Red Hat, RHEL 5.x; Slackware 12.1; Solaris 10; Ubuntu

The nice thing about this tool is that even junior Unix auditors, or senior auditors without a lot of Unix experience, can easily gather information and potential findings on a system. Like similar tools on Microsoft Windows (think MBSA), Lynis gives auditors the ability to identify findings simply by running the tool. There is no install necessary; simply unpack the tool and run the script.
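
Getting a first scan is about as simple as it sounds. A minimal sketch, assuming you have already unpacked a Lynis archive into the current directory (the exact invocation varies slightly by version):

# A sketch only: run Lynis directly from the unpacked directory, no install needed
cd lynis
./lynis audit system     # newer releases
# ./lynis -c             # older releases used the check-all flag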

The tool is free and the code is easy enough to review if you enjoy that kind of thing. Hopefully you will find that this becomes a staple of your audit toolkit. Enjoy!

Script for Network Adapter Configuration Baselines

So far in this series of blog articles we have identified a number of different baseline scripts written in PowerShell. We hope that auditors and others will be able to take these scripts, modify them for their own purposes, and use them to baseline the systems they are evaluating.

This week we found ourselves having to gather information about both the MAC addresses and the logical (IP) addresses of the adapters on a set of machines. That meant we had to change our strategy from simple WMI calls against the Win32 namespace. Since we're dealing with Microsoft Windows machines, we thought, why not go to the grand-daddy of all network configuration utilities, NETSH? So this week we will primarily use NETSH to gather the information.

We didn't do much parsing this week (we do have day jobs too), but to query the information on a system's adapters, we would use the following script:

echo "The following is the list of adapters and MAC Addresses as supplied by WMI calls:"
Get-WmiObject win32_networkadapter | Select-Object Name,MACAddress

echo "The following is the IPv4 network configuration as supplied by the netsh command:"
netsh interface ip show config

echo "The following is the IPv6 network configuration as supplied by the netsh command:"
netsh interface ipv6 show interfaces
netsh interface ipv6 show subinterfaces
netsh interface ipv6 show dnsservers
netsh interface ipv6 show addresses
netsh interface ipv6 show global
netsh interface ipv6 show teredo
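
As with the firewall baseline earlier in this series, you will probably want to capture all of this output in a file for later comparison. One minimal way to do that, assuming you saved the script above under a hypothetical name like adapter_baseline.ps1, is simply:

# A sketch only: redirect the adapter baseline into a dated text file
.\adapter_baseline.ps1 | Out-File ("adapter_config_" + (Get-Date).ToString("yyyyMMdd") + ".txt")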

There will be more NETSH scripts to come, but we figured we would show you the basics first. Enjoy!

Script for Network Share Baselines

Today we’re going to continue blogging about scripts that we can use to create system baselines. (For a primer on why you might want to consider performing a system baseline or for a process for performing system baselines, check out our previous blog entries here.)

As we discussed earlier, we are going to rely primarily on PowerShell to pull this information, so all of the scripts you will see in this series are written as PowerShell scripts. Make your life easy and install PowerShell version 2.0; that will give you the latest and greatest functionality and the same development environment we are primarily using to write these scripts.

Today's script will give you a list of the shares on an individual machine and each share's path on the local file system. In other words, this script gives us a snapshot of the local directories on a machine that are shared on the network. Again for this script we are going to access WMI via PowerShell and query the Win32 namespace. Eventually we'll explore other namespaces, but while this one is so productive, why make it harder than we need to? Here is the script you would run to query this share data:

Get-WmiObject win32_share | Select-Object Name,Path
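
If you would rather have this snapshot in the same CSV style as our group baselines, one minimal variation (the output file name is just an example) is:

# A sketch only: save the share baseline as a CSV file for Excel
Get-WmiObject Win32_Share | Select-Object Name,Path | Export-Csv share_baseline.csv -NoTypeInformation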

Enjoy! We look forward to providing you more fully featured scripts as the year progresses. If you do have any requests, don’t be shy, and feel free to make requests too.

Script for Locally Installed Software Baselines

Today we’re going to continue blogging about scripts that we can use to create system baselines. (For a primer on why you might want to consider performing a system baseline or for a process for performing system baselines, check out our previous blog entries here.)

As we discussed earlier, we are going to rely primarily on PowerShell to pull this information, so all of the scripts you will see in this series are written as PowerShell scripts. Make your life easy and install PowerShell version 2.0; that will give you the latest and greatest functionality and the same development environment we are primarily using to write these scripts.

Today's script is useful if you want to discover what software is installed on a given machine. It will not detect stand-alone binaries copied to a computer (for that we would need to reference a file system object and look for application binaries), but if you're just looking for a basic listing of all the installed applications on a machine, then this is the script for you:

Get-WmiObject win32_product | Select-Object Name,Vendor,Version
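
One caution worth noting: the Win32_Product class can be slow to query and, on many systems, causes Windows Installer to reconfigure each package it enumerates. If that becomes a problem, a common alternative is to read the uninstall keys from the registry instead. A minimal sketch, using the standard uninstall key path:

# A sketch only: list installed applications from the registry uninstall keys
Get-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, Publisher, DisplayVersion |
    Where-Object { $_.DisplayName }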

Enjoy! We look forward to providing you more fully featured scripts as the year progresses. If you do have any requests, don’t be shy, and feel free to make requests too.