AuditScripts.com Policies Update

Hello Folks,

It’s update time. The biggest update, and perhaps the one we are most excited about, is the update for the Center for Internet Security’s (CIS) Critical Security Controls Version 7. As many of you know, James and I, along with Philippe Langlois at CIS, served as technical editors of the Controls, and we reviewed terrific feedback from students at the SANS Institute, from community forums at CIS, and from the RSA Conference. Version 7 of the Controls was released this past spring, and this policy update, version 2.4, mirrors the new technical controls and simplified language introduced in Version 7. To help subscribers visualize what was changed in this policy update, we have provided an Excel spreadsheet that highlights the changes to policy statements and marks in pink those statements that were removed. This can be found in the zip file for Complete AuditScripts Policies v2.4.

Other major highlights:

  • We have updated the policy library to reflect the update of the NIST Cybersecurity Framework from version 1.0 to 1.1.
  • We have updated our policy language to emphasize the ever-growing importance of multi-factor authentication and revised our references to password use accordingly.
  • The policies have been updated to reflect NERC CIP Version 7 updates.

We will be releasing the updates for the AuditScripts questionnaires and checklists in the upcoming week.

Thank you for the great feedback on the documents so far, and we hope you find this policy update helpful.

Limiting Windows Local Administrator Rights

One of the common issues we run into during security assessments and incident response cases is users being assigned too many permissions on their local computer. For the sake of convenience and expediency, end users often demand local administrator rights. These users, often in an agitated and exaggerated manner, explain to their bosses that they simply cannot do their job unless they are given these rights on their work computer. This raises the question: do end users normally need these rights, or is there a better, more secure approach to take? Or, using an analogy, should we stop locking the doors to our homes or vehicles because one day we might forget our keys and be inconvenienced in the process?

First of all, let’s start with the basic business principle – end users should be assigned the rights and permissions they need on their computer in order to do their job. They should not be assigned any more rights or permissions than this, and they should not be given any fewer. Of course it can be tough to strike that balance, but it’s certainly the ideal we should be striving for.

But what’s the big deal? Why not just assign everyone local administrator rights on their workstations and call it a day? We even saw one company take it so far as to add the “Everyone” group to the local Administrators group on all their machines to limit incoming helpdesk requests. Is this really so bad? The answer is YES!

If end users are assigned local administrator rights they don’t need, it opens the door to a number of abuses. Malicious actors can run arbitrary code with full permissions on a user’s system if they can convince the user to click a malicious link or open malicious email content. End users can turn off the security controls we use to protect our systems, such as whitelisting software, password controls, anti-malware software, and similar tools. Unapproved software can be installed on an end user’s machine, breaking business-critical applications and requiring troubleshooting from desktop support. And the list goes on…

Basically, if you give end users local administrator rights to their workstations, the organization has in effect disabled all the locks put in place to protect the end user and the organization’s data.

But the common complaint still exists: will users be able to do their work if they are not local administrators? The answer, of course, is yes. But we still need to assign the correct rights and permissions on the system to enable the user to work. One of the most common complaints we hear is that certain pieces of software simply will not function unless the end user is granted local administrator rights. How do we address that?

Whenever we encounter a software application on a Microsoft Windows system that claims to need local administrator rights in order to function, there is a series of checks we try first. In almost every case, when a developer states that they need local administrator rights, it is because they haven’t taken the time to figure out specifically what rights are necessary for the program to function properly, or they are trying to access an operating system object they should not be accessing. So what are those checks? What do we do when troubleshooting why a program will not run unless the user is an administrator?

Generally, when an application developer says their software requires local administrator rights, it is because the software is trying to access a protected operating system object. So the trick when troubleshooting is to determine which object it is trying to access. When we’re troubleshooting this issue, it is always one of the following:

  • File system object permissions
  • Registry hive object permissions
  • User rights assignments granted to the user
  • Mandatory Integrity Control (MIC) levels on an object

Nine times out of ten the issue is simply a file system permissions issue. But don’t forget the other possibilities; they can all play a part.
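As an illustration of the file system case, here is a minimal PowerShell sketch: instead of granting the user local administrator rights, grant the Users group Modify rights on the one directory a stubborn application insists on writing to. The application path here is an illustrative assumption, not a real product’s requirement:

    # Hypothetical sketch: grant Users Modify rights on a legacy app's data
    # folder so the app can run without local administrator rights.
    # The path below is an assumed example; run from an elevated prompt.
    $path = 'C:\Program Files\LegacyApp\Data'
    $acl  = Get-Acl -Path $path
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        'BUILTIN\Users', 'Modify', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
    $acl.AddAccessRule($rule)
    Set-Acl -Path $path -AclObject $acl

A tool like Process Monitor can help you spot which path or registry key the application is failing on before you loosen anything.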

If you don’t want to take the time to figure it all out on your own, there are of course software vendors that will make this process easier for you. Companies like Cyber-Ark, Computer Associates, BeyondTrust, and Cloakware all sell software designed for exactly this purpose. But as with anything else, trying the simple steps first will often solve the issue for you.

In future posts we will look at each of the operating system objects above to better understand how to examine the necessary rights for each. But for now, by examining those objects you should be able to determine the specific rights your organization’s applications need to function properly – and grant only those.

DARPA & MIT Partnership – Example of “Leap-Ahead” Technology?

Yesterday, DARPA and MIT announced the results of a project, in development for some time, that would allow an organization’s network to keep functioning even while under an active distributed denial of service or similar attack. Overly simplified, it’s a network-based whitelisting solution with the ability to baseline normal traffic patterns and automatically block traffic when it detects that it’s under attack. Think of it as an advanced IPS on steroids.

TheNewNewInternet.com reported on it Thursday and stated:

“Previously, when a system was under cyber attack, the only solution to mitigate the threat was to take the server offline. However, there may now be another option. MIT researchers have developed a system that allows servers and computers to continue to operate even while under cyber attack.

The research, predominately funded by the U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA), has stood up to outside testing. DARPA hired outside security experts to attempt to bring down the system. According to Martin Rinard, an electrical engineering and computer science professor who led the project, the system exceeded DARPA’s performance criteria in each test.

During normal operations, the system developed by the MIT team monitors any programs running on computers connected to the Internet. This allows the system to determine each computer’s normal behavior range. When an attack occurs, the system does not allow the computers to operate outside of the previously determined range.

“The idea is that you’ve got hundreds of machines out there,” Rinard says. “We’re saying, ‘Okay, fine, you can take out six or 10 of my 200 machines.’” But, he adds, “by observing what happens with the executions of those six or 10 machines, we’ll be able to deploy patches out to protect the rest of the machines (http://tr.im/Sosj).”

So why is this all so interesting and worth repeating? First of all, I think this is a great example of a public/private partnership in the realm of cybersecurity defense. We simply don’t see enough of this kind of activity. Secondly, I have to appreciate their focus on an automated response to cyber attacks. This has been one of the major premises of the 20 Critical Controls / Consensus Audit Guidelines for quite some time, and it’s great to see these groups creating solutions in that same spirit.

Finally I think it’s interesting in light of the mission of DARPA’s National Cyber Range project, which is:

“The National Cyber Range (NCR) is DARPA’s contribution to the new federal Comprehensive National Cyber Initiative (CNCI), providing a “test bed” to produce qualitative and quantitative assessments of the Nation’s cyber research and development technologies. Leveraging DARPA’s history of cutting-edge research, the NCR will revolutionize the state of the art for large-scale cyber testing. Ultimately, the NCR will provide a revolutionary, safe, fully automated and instrumented environment for our national cyber security research organizations to evaluate leap-ahead research, accelerate technology transition, and enable a place for experimentation of iterative and new research directions (http://www.darpa.mil/sto/ia/ncr.html – Link Removed from DARPA’s site).”

So is this an example of a “leap-ahead” research project? We might all have different opinions. But the bottom line is that the DARPA initiatives appear to be moving forward. Let’s all hope this is just one of many game-changing technologies we see from these teams in the near future.

Searching for Hashes of Malicious Files (APT – Aurora)

A couple of weeks ago I posted a blog article with some sample file hashes and domain names associated with the recent Google hacks (think APT or Aurora).

Since then I’ve had quite a few people ask me: if you have a system that you suspect might have been compromised, how do you search it for malicious files when you have a list of hashes that you know are malicious? In other words, you have a list of hashes and you want to know whether any files on your file system have the same hash values.

Disclaimer – before we continue, you should know that hashes of malicious files are just one way of attempting to discover whether your system has been compromised. Especially when dealing with a threat like APT, which is highly intelligent and adaptable, you have to assume that if the threat knows you’re on to them and looking for a specific set of hashes, they’re smart enough to adapt. What will they do? They’ll change their malicious files so the hashes change as well. There’s no doubt this is a limitation. But by utilizing the technique we’re about to describe, you can at least start to eliminate some of the low-hanging fruit. You may also want to investigate the projects involved with fuzzy hashing, which may be an alternative to some of the standard techniques described here.

Ok, now that you’re ready to start examining your systems for malicious files, here is a process to consider:

Step One: Assemble a Text File of Known Malicious Hashes. The first step is to gather a list of hashes of known malicious files. This will be the list of hashes you’re scanning your system for. Remember, your scan will only be as good as the list of hashes you have. A starter list of MD5 hashes is currently being hosted at Enclavesecurity.com and can be found here if you’re looking for a list to get you started. It certainly is not comprehensive, but it is at least a place to start building your first list from.

Step Two: Decide Which Hashing Tool to Use. There are a number of good tools you can use to scan your system and generate hashes of all the files on your file system. Many of these tools are commercial, and there are open source options as well. On the commercial side, tools like Tripwire, Lumension, and Bit9 are quite effective. There are certainly others, but many of you are already using these tools, so you might as well take advantage of them. Unfortunately, many of you simply cannot afford these tools. If you’re looking for a good open source tool to start scanning your systems with, let me recommend MD5Deep. It is a public domain tool that is especially useful for this purpose. While there’s not enough room in this post to cover how to use the tool, we’ll post more on it later. You could also roll your own scripts, using PowerShell or shell scripting, to generate these hashes (but I still recommend MD5Deep – it’s cross-platform, supports recursive file scans of directories, and natively interfaces with a number of hash databases).
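If you do decide to roll your own, here is a minimal PowerShell sketch of the idea, assuming a known-bad list with one MD5 hash per line and PowerShell 4.0 or later for Get-FileHash; the paths are illustrative assumptions:

    # Hypothetical sketch: flag files whose MD5 matches a known-bad list.
    # Paths are assumed examples; adjust for your environment.
    $known = Get-Content 'C:\ir\known-bad-md5.txt' |
        ForEach-Object { $_.Trim().ToUpper() }
    Get-ChildItem -Path 'C:\Users' -Recurse -File -ErrorAction SilentlyContinue |
        ForEach-Object {
            $md5 = (Get-FileHash -Path $_.FullName -Algorithm MD5 `
                -ErrorAction SilentlyContinue).Hash
            if ($md5 -and ($known -contains $md5)) {
                "MATCH: $($_.FullName) ($md5)"
            }
        }

MD5Deep offers a similar matching mode (its -m flag) against a file of known hashes and will likely be faster on large file sets.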

Step Three: Scan Your System. Now that you have your list of hashes for malicious files and your scanning tool, it’s time to scan your system to see whether any files with these hashes exist. This is the basic part of the exercise – do you have malicious files on your system or not? Depending on the tool you’re using, the process will differ slightly, but in the end you’re trying to determine whether you have a compromised host. Auditors – you should be asking companies this control question: if law enforcement approached you with a list of hashes like the ones described here and said you needed to check particular hosts in your environment for these files, how would you look for the hashes? Ask to see their process in action (we want more than tabletop reviews here).

Step Four: Automate System Scans. Finally, once you have your tool working in manual mode, automate the scans. This is one of the major principles of the 20 Critical Controls / Consensus Audit Guidelines that we talk so much about. Manual scans are fine when you need them – but how much better would it be to implement a tool that constantly scans your systems and notifies you if one of the hashes is discovered? Automation is key.
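As a simple starting point, you could schedule the sketch from Step Two with the built-in Windows task scheduler. A minimal example, assuming the script was saved to an illustrative path:

    # Hypothetical sketch: run the hash scan nightly at 2:00 AM.
    # Task name and script path are assumed examples.
    schtasks /Create /TN "NightlyHashScan" `
        /TR "powershell.exe -File C:\ir\scan-hashes.ps1" `
        /SC DAILY /ST 02:00

From there, have the script log or e-mail its MATCH lines so a discovery becomes a notification rather than something you have to go looking for.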

While there are certainly other ways to look for malicious files or indicators of compromise on a system, examining file hashes has to be part of your arsenal. If you’re auditing a system, knowing that a control is in place to scan for signatures of known bad files has to be part of your toolkit. Traditionally we’ve done this with anti-malware tools, but unfortunately many of the large anti-malware vendors still don’t tell you which hashes they’re scanning for, nor do they give you the ability to add hashes you’d like to scan for. Thus we’re left to our own devices to discover whether files with these signatures are on our systems.

Hopefully putting this tool in your toolkit gives you one more angle to consider when looking for indicators of a compromise on your systems.

20 Critical Controls, “Aurora”, APT, and the Google Hack

Obviously there has been a lot of discussion in the news, on blogs, and even in tweets about the Aurora attacks and what they mean. This is certainly not a new threat. Evidence of it can be seen back to at least 2008, if not earlier (if you consider Titan Rain and other operations), but until now no one wanted to talk about it publicly. In the background, however, work has been in progress to discover techniques to stop the threat.

Enter the 20 Critical Controls…

In 2009 the Consensus Audit Guidelines / 20 Critical Controls were released to prioritize the information security controls that need to be implemented to combat known attacks (i.e., think Aurora or APT). US federal government and commercial systems were being compromised by this threat and others, and something had to change. But what was the tipping point? Why were these controls introduced in 2009? The tipping point was these advanced, directed attacks against US federal systems by foreign entities. That’s what tipped the scales and precipitated the release of these controls.

So let me say what a lot of us have been dancing around for the last two years – there are dedicated, focused, well-funded attackers who are successfully breaking into government and commercial network systems, and the 20 Critical Controls were introduced to stop this threat. It’s real, many of us have seen it firsthand, and it’s hard to get out of your systems. Call it APT, Aurora, whatever – the 20 Critical Controls were put in place to stop these hacks.

Sales pitch time – so why should you care about the 20 Critical Controls? Why should you learn more? Because this is a real threat, and it seems to be getting worse. The controls are meant to prioritize your resources and encourage you to automate an effective response. They’re more than just a list of good things to do; the purpose behind the controls is to change the way we think about protecting our systems. One great place to start the education is here:

http://www.sans.org/security-training/20-critical-security-controls-in-depth-1362-mid

There have been a lot of good people commenting and posting information on the topic as well. If you aren’t following this information already, here are a couple of other sources you might look into as you’re learning more about these attacks:

Mandiant M-Trends & Blog (http://blog.mandiant.com/)
Enclave Security Blogs (http://enclavesecurity.com/blogs/)
TaoSecurity Blogs (http://taosecurity.blogspot.com/)

My biggest complaint, however – and I’m sure I’ll rant more about this later – is that we are simply not sharing enough information as a community on this subject. We have to share more. We all have reasons why we’re not sharing the attack signatures we’ve seen – some reasons are commercial, some stem from fear of retribution, some are due to contractual constraints. I get it. But if we’re going to be successful at combating this threat, we have to share signatures and methodologies. I’ll leave the rest of this rant for another day…

Some people are already sharing; here are two of the few postings I’ve found publicly on the subject. Take advantage of these when you find them – there aren’t many people sharing. And if you are sharing signatures or indicators of compromise, drop me a note at james.tarala (a) enclavesecurity.com and I’d be happy to link to you as well. Here are a couple:

Mandiant Blogs (http://blog.mandiant.com/archives/730)
McAfee (http://www.mcafee.com/us/local_content/reports/how_can_u_tell_v5.pdf) (Old Link Removed)

More to come…

Aurora Malware Hashes and Domains

McAfee has recently released specific details from their analysis of the Aurora malware that was used to compromise 30+ companies over the past few months. This malware is consistent with the types of files that Enclave and other organizations responding to APT-based attacks have discovered; it appears to utilize many of the same mechanisms, and even the same file names, in many cases. A link to one of their reports on the topic can be found at:

www.mcafee.com/us/local_content/reports/how_can_u_tell_v5.pdf

Specifically, the hashes for the Aurora malware are (a sketch for putting these to use follows the list):

securmon.dll: E3798C71D25816611A4CAB031AE3C27A
Rasmon.dll: 0F9C5408335833E72FE73E6166B5A01B
a.exe: CD36A3071A315C3BE6AC3366D80BB59C
b.exe: 9F880AC607CBD7CDFFFA609C5883C708
AppMgmt.dll: 6A89FBE7B0D526E3D97B0DA8418BF851
A0029670.dll: 3A33013A47C5DD8D1B92A4CFDCDA3765
msconfig32.sys: 7A62295F70642FEDF0D5A5637FEB7986
VedioDriver.dll: 467EEF090DEB3517F05A48310FCFD4EE
acelpvc.dll: 4A47404FC21FFF4A1BC492F9CD23139C
wuauclt.exe: 69BAF3C6D3A8D41B789526BA72C79C2D
jucheck.exe: 79ABBA920201031147566F5418E45F34
AdobeUpdateManager.exe: 9A7FCEE7FF6035B141390204613209DA
zf32.dll: EB4ECA9943DA94E09D22134EA20DC602
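If you want to fold these into the known-bad list we discussed in the earlier hash-searching post, something as simple as this PowerShell sketch will do; the list path is an illustrative assumption, and only two hashes are shown:

    # Hypothetical sketch: append Aurora MD5 hashes to a known-bad list.
    # Extend the array with the rest of the hashes above.
    $aurora = @(
        'E3798C71D25816611A4CAB031AE3C27A',  # securmon.dll
        '0F9C5408335833E72FE73E6166B5A01B'   # Rasmon.dll
    )
    Add-Content -Path 'C:\ir\known-bad-md5.txt' -Value $aurora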

In addition, they have identified a list of domains, used as part of this malware, that you should be blocking. The following domains have been detected as containing malicious code associated with the Aurora malware (a quick blocking sketch follows the list):

ftpaccess[dot]cc
google[dot]homeunix[dot]com
tyuqwer[dot]dyndns[dot]org
blogspot[dot]blogsite[dot]org
voanews[dot]ath[dot]cx
360[dot]homeunix[dot]com
ymail[dot]ath[dot]cx
yahoo[dot]8866[dot]org
sl1[dot]homelinux[dot]org
members[dot]linode[dot]com
ftp2[dot]homeunix[dot]com
update[dot]ourhobby[dot]com
filoups[dot]info
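If you need a quick stopgap while DNS or proxy blocklists are updated, here is a minimal PowerShell sketch that sinkholes a couple of these domains via the local hosts file. Note the list above is defanged with [dot], so convert entries back to real dots first, and run this from an elevated prompt:

    # Hypothetical sketch: sinkhole known-bad domains via the hosts file.
    # Only two re-fanged examples are shown; extend the list as needed.
    $domains = @('ftpaccess.cc', 'google.homeunix.com')
    $hosts   = "$env:SystemRoot\System32\drivers\etc\hosts"
    foreach ($d in $domains) {
        Add-Content -Path $hosts -Value "0.0.0.0 $d"
    }

Blocking at your DNS servers or proxies is the better long-term control; the hosts file only helps the machines you touch.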

Thanks again to the teams at McAfee / Foundstone for releasing this data. These are the types of datasets we need to be better about sharing if we are going to be effective at stopping these directed attacks!

Automating Audit Tests with Eventtriggers.exe (20 Critical Control Scripting Tip)

One of the issues we have been dealing with extensively lately is auditing and automation. It has most often been raised when we’ve been discussing how to automate control assessments in conjunction with implementing the 20 Critical Controls. One of the core principles of the 20 Critical Controls is that organizations need the ability to automate security assessments in order to reduce risk detection times and allow a more prompt response to detected threats.

One way to assist with automating any given assessment is to script your assessments and automate the scripts you write. This way your tests can work for you and can automatically respond in some way should a particular event be discovered. Rather than creating a detection and alerting mechanism from scratch, why not use one that’s already built into most Microsoft Windows versions you’re running? The Windows Event Log is a great place to start.

First, you can use a command such as EventCreate to generate new event log entries as a result of a particular action in your scripts. For example, if you use nmap with PBNJ to look for new hosts on your network (think Critical Control #1), you could use EventCreate to generate an event log entry every time a new device is discovered. Or say you use WMIC to list startup items on a machine (think Critical Control #2); you could use EventCreate to generate an event log entry every time a new startup entry is added. Get the idea? Use built-in Windows tools to support your automation efforts – all it costs is a little sweat equity and some trial and error!
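For instance, here is a minimal sketch of how a discovery script might write such an entry; the event source, ID, and discovered address are illustrative assumptions (writing with a new event source requires an elevated prompt):

    # Hypothetical sketch (PowerShell): log a custom event when a host
    # sweep turns up a new device on the network.
    $newHost = '10.1.2.3'
    eventcreate /T INFORMATION /ID 100 /L APPLICATION /SO AssetSweep `
        /D "New device discovered on the network: $newHost"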

For more details on how to use EventCreate, check out these resources to get started:

Microsoft TechNet Reference on EventCreate:

http://technet.microsoft.com/en-us/library/bb490899.aspx

Microsoft Support article for creating custom event log entries:

http://support.microsoft.com/kb/324145

For details on how to use EventTriggers.exe in more depth, here are a couple of resources to get you started (a short example follows the links):

Microsoft TechNet Reference on EventTriggers.exe:

http://technet.microsoft.com/en-us/library/bb490901.aspx

Petri.co.il Article on EventTriggers.exe:

http://www.petri.co.il/how-to-use-eventtriggersexe-to-send-e-mail-based-on-event-ids.htm
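Putting the two together, a trigger can launch a responder script whenever the event above is logged. A minimal sketch, with an assumed trigger name and script path (eventtriggers.exe ships with Windows XP / Server 2003 era systems):

    # Hypothetical sketch (PowerShell): run a responder script whenever
    # event ID 100 from the example above appears in the Application log.
    eventtriggers /Create /TR "AlertOnNewDevice" /L APPLICATION /EID 100 `
        /TK "C:\Scripts\notify.cmd"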

In addition to automating tasks with the eventtriggers.exe command, you may also want to consider command-line e-mail tools, which can generate an e-mail as a result of an action in your command-line tool. Two such free tools to consider are listed below, followed by a short example:

Blat (http://www.blat.net/)

Bmail (http://www.beyondlogic.org/solutions/cmdlinemail/cmdlinemail.htm)  (Old Link Removed)
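As an example, here is a minimal sketch of firing an alert e-mail with Blat; the recipient, sender, and SMTP server names are illustrative assumptions, and the flags reflect common Blat usage:

    # Hypothetical sketch (PowerShell): send an alert e-mail with Blat.
    blat - -body "New startup item detected on WKS042" `
        -subject "Audit alert" -to 'soc@example.com' `
        -f 'audit@example.com' -server 'smtp.example.com'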

To run either of these tools you will need access to an active mail server, although thankfully it does not need to listen only on tcp/25 – you do, however, have to use the SMTP protocol. While security by obscurity is certainly not a sufficient protection on its own, running administrative mail servers like this on alternate ports can never hurt!