How to Prioritize Log Review

PCI Requirement 10.6.1 requires daily review of logs of system components that store, process, or transmit cardholder data, logs of all critical system components, and logs of all servers and system components that perform security functions. But what about all other system components? PCI Requirement 10.6.2 addresses this and requires that organizations review logs of all other system components periodically based on the organization’s policies and risk management strategy. PCI Requirement 10.6.2 allows you to prioritize your log review program and apply log review in an appropriate way.

How do you determine what “all other” system components are? Based on your organization’s annual risk assessment, you’ll be able to determine which logs should be reviewed on a periodic basis. For example, if you have a receptionist’s workstation that is in scope because of connectivity and a database that holds millions of credit card numbers, you’re going to want to prioritize reviewing the database logs. Going through an annual risk assessment will help determine which components should or should not be prioritized for log review.

PCI Requirement 10.6.2 is another one of these requirements that is new to the standard. What it says is that you can review all other logs on a periodic basis. If we go back to PCI Requirement 10.6.1, the requirements there talk about reviewing the logs for all systems that store, process, or transmit cardholder data and that provide security services. One of the things PCI Requirement 10.6.2 allows you to do is prioritize your log review program. For example, if you have an administrator that has access to a database, you’re going to want to look at those database logs pretty much every day. But if you have a receptionist whose workstation is in scope just because of connectivity, are you going to want to spend the same level of diligence on the receptionist’s desktop as you do on the database that might contain a million credit card numbers? Probably not. So, PCI Requirement 10.6.2 recognizes that and allows you to apply your log review resources in the most appropriate way. If you as an organization are going to be looking at these other logs on a periodic basis, you’ll need to do a risk analysis to define when and how often you’re going to be reviewing them and, once again, make sure that where items are identified, you’re following up on them as appropriate.
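
As a hedged illustration of this risk-based prioritization, the sketch below maps a component’s risk rating from an annual risk assessment to a log-review interval. All component names, ratings, and frequencies here are hypothetical examples, not values prescribed by the standard:

```python
# Hypothetical sketch: deriving a log-review schedule from risk ratings
# produced by an annual risk assessment (PCI Requirement 10.6.2).
REVIEW_FREQUENCY_DAYS = {
    "high": 1,      # e.g. cardholder-data databases: review daily (10.6.1)
    "medium": 7,    # e.g. supporting servers: review weekly
    "low": 30,      # e.g. in-scope-by-connectivity workstations: monthly
}

def review_schedule(components):
    """Return {component name: review interval in days} from risk ratings."""
    return {name: REVIEW_FREQUENCY_DAYS[risk] for name, risk in components.items()}

# Illustrative inventory with assessed risk ratings.
inventory = {
    "cardholder-db": "high",
    "reception-workstation": "low",
    "jump-server": "medium",
}

print(review_schedule(inventory))
```

In practice the ratings would come out of your documented risk assessment, and the resulting intervals would be written into your log review policy.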

Daily Review

By reviewing logs daily, organizations can maximize their security efforts and minimize the exposure to potential breaches. PCI Requirement 10.6.1 requires that organizations review the following at least daily:

  • All security events
  • Logs of all system components that store, process, or transmit cardholder data
  • Logs of all critical system components
  • Logs of all servers and system components that perform security functions

From many breaches that have recently occurred, we see “log fatigue.” The information to identify a breach was there, but staff didn’t react appropriately. This may be because their job is to sit and review logs for hours and hours, and they eventually suffer from “log fatigue.” Employees are your first line of defense, so give them the tools they need to identify anomalies.

Other elements of PCI Requirement 10.6.1 to consider:

  • The definition of a “security event” will vary from organization to organization based on types of technology, location, scope, etc.
  • Organizations should establish a baseline of “normal” traffic to help better identify anomalies or suspicious behavior.
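
To illustrate the baseline idea in the second bullet, here is a minimal sketch that flags a day whose event volume deviates sharply from historical norms. The threshold and sample counts are assumptions for the example, not values from the standard:

```python
# Illustrative sketch: a statistical baseline of "normal" daily event volume,
# used to surface anomalous days for human review.
from statistics import mean, stdev

def build_baseline(daily_counts):
    """Baseline = (mean, standard deviation) of historical daily event counts."""
    return mean(daily_counts), stdev(daily_counts)

def is_anomalous(count, baseline, sigmas=3):
    """Flag a day whose event count deviates more than `sigmas` from normal."""
    avg, sd = baseline
    return abs(count - avg) > sigmas * sd

history = [980, 1010, 995, 1005, 990, 1002, 998]  # a week of failed-login counts
baseline = build_baseline(history)

print(is_anomalous(1000, baseline))  # typical day -> False
print(is_anomalous(5000, baseline))  # suspicious spike -> True
```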

During an assessment, an assessor should examine policies and procedures and observe personnel to ensure that all security events, logs of all system components that store, process, or transmit cardholder data, logs of all critical system components, and logs of all servers and system components that perform security functions are under review at least daily.

To comply with PCI Requirement 10.6.1, we’re going to be reviewing all our logs at least daily, and this would include the logs from anything that’s in scope for your PCI assessment. We will be looking at any security log or any log from any security-related device. When individuals are looking at these logs, be cognizant of what I call “log fatigue.” These people are going to be sitting there for eight or nine hours a day watching the matrix waterfall, so help them, train them, and give them tools to identify, out of the haystack, the needle they need to be looking for. In a lot of the breaches that have occurred as of late, the information was there; it was available for staff to have identified that they had been breached, but for whatever reason, they didn’t react appropriately. This is one of the first lines of defense that you have: monitoring these logs for anomalies and acting and reacting appropriately.

Log Review

Many breaches occur over a period of time before being detected. That’s why it’s not enough to just create logs; you also have to create a process for reviewing them. How could you ever spot a pattern of suspicious activity if you don’t review your logs? PCI Requirement 10.6 requires that organizations review logs and security events for all system components to identify anomalies or suspicious activity. You could meet PCI Requirement 10.6 through manual methods, such as training your staff on how to review logs and identify suspicious activity, or through automated methods such as log harvesting, parsing, and alerting tools.
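
As a rough sketch of the automated parsing-and-alerting approach mentioned above, the snippet below filters raw log lines down to the ones that warrant human review. The log format and the suspicious patterns are illustrative assumptions:

```python
# Hypothetical sketch: triaging raw log lines so reviewers only see
# entries matching known-suspicious patterns.
import re

SUSPICIOUS = [
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"permission denied", re.IGNORECASE),
    re.compile(r"segfault"),
]

def triage(lines):
    """Return only the log lines matching a suspicious pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in SUSPICIOUS)]

sample = [
    "Jan 10 03:12:01 db01 sshd[412]: Authentication failure for root",
    "Jan 10 03:12:05 db01 cron[9]: job completed",
    "Jan 10 03:13:44 db01 sudo: user : Permission denied",
]
for hit in triage(sample):
    print(hit)
```

A real deployment would use a log harvesting and alerting platform rather than a hand-rolled script, but the pattern-matching idea is the same.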

Meeting PCI Requirement 10.6 is a proactive way to protect the cardholder data environment and should tie into your overall risk management program.

It’s not enough within your environment just to create these logs. It’s necessary that you spend the time to train your staff, ensure you have competent staff, or outsource your log review program to competent individuals. One of the things that I’ve found interesting about the breaches that have occurred as of late is that the central logging servers and solutions were all logging, and the organizations were hacked, yet the logs were ignored or dismissed as false positives. I never quite got my head around why they were ignored to the extent that they were. But effectively, PCI Requirement 10.6 says that we have to have a program in place for monitoring these logs. Your assessor is going to be working with you and looking at your log monitoring solution and your log monitoring methodologies, making sure that you’re monitoring these logs for any unauthorized events. Assessors will look for evidence that where there’s an unauthorized event or an anomaly, you’re following up on it. But who’s to define what an anomaly is? What’s interesting about this is that it should really be a cue into your risk management program: determining, from a technology perspective, what we consider an anomaly and what we do from an incident response perspective in the event that one of these things shows up in our logs.

File-Integrity Monitoring

PCI Requirement 10.5.5 requires organizations to use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert). The PCI DSS guidance explains that file-integrity monitoring or change-detection systems check for changes to critical files and provide notification when such changes are noted. Organizations usually monitor files that don’t regularly change, but when they do change, indicate a possible compromise.

You should expect an assessor to spend time examining how your organization logs, where you log, and what your logging methodologies are.

There are a couple of places within the PCI Requirements that call out the need for using file-integrity monitoring, and PCI Requirement 10.5.5 is one of them. PCI Requirement 10.5.5 says that where you have all of these logs being generated, we need to make sure that we are reviewing these logs with a file-integrity monitoring solution or something that has the ability to determine whether or not these logs have been modified. What we look for from an assessor perspective is that you have a file-integrity monitoring system, or other systems available, and that all logs that are being generated are being monitored. If you have a central logging server, it gets pretty hard to put a file-integrity monitoring solution on the logging database, but a lot of times you might have a Linux system and you’re writing your logs out to /var/log. The assessor should expect to spend some time with you talking about your logging methodologies, how you are logging, and where you are writing the logs to. Assessors will then spend some time with the individuals that are managing your file-integrity monitoring solution, making sure that those particular logs are being monitored such that if those logs have been modified, somebody gets notified of that event at least weekly.
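
One way such change detection on an append-only log could work, sketched under the assumption that legitimate writes only ever append: record the file’s size and a hash of its contents, then on each check verify that the previously recorded prefix is untouched. File paths and return values here are hypothetical:

```python
# Hedged sketch of change detection on an append-only log, in the spirit of
# PCI Requirement 10.5.5: new entries being added is normal, but any change
# to data already written should raise an alert.
import hashlib

def snapshot(path):
    """Record (size, SHA-256 of current contents) for later comparison."""
    with open(path, "rb") as f:
        data = f.read()
    return len(data), hashlib.sha256(data).hexdigest()

def check(path, size, digest):
    """Return 'ok' if unchanged, 'appended' if the file only grew,
    'ALERT' if previously written bytes were modified or truncated."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < size:
        return "ALERT"  # log was truncated
    if hashlib.sha256(data[:size]).hexdigest() != digest:
        return "ALERT"  # existing entries were altered
    return "appended" if len(data) > size else "ok"
```

A monitoring job would run `check` on a schedule, re-snapshot after each clean pass, and notify someone on any alert, consistent with the requirement’s expectation that someone is notified at least weekly.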

What is PCI Requirement 10.5.4?

Another element to PCI Requirement 10 is PCI Requirement 10.5.4, which requires organizations to write logs for external-facing technologies onto a secure, centralized, internal log server or media device. The PCI DSS explains the purpose of PCI Requirement 10.5.4 when it states, “By writing logs from external-facing technologies such as wireless, firewalls, DNS, and mail servers, the risk of those logs being lost or altered is lowered, as they are more secure within the internal network.”

During an assessment, an assessor will examine logs of external-facing technologies and ensure they are written onto a secure, centralized, internal log server or media.

Back in PCI Requirement 1, we talked about establishing a DMZ. You’re going to have firewalls, web servers, email servers, SFTP servers, or a plethora of other devices out there. What we require from the PCI perspective is that you pull the logs being generated off of those devices back into your internal environment. Your assessor is going to be pulling the configurations from those devices and looking at where you’re writing those logs to, making sure that those particular logs are pulled out of the DMZ and stored within the secured portion of your network.
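
A minimal sketch of this forwarding, assuming a hypothetical internal syslog collector reachable from the DMZ host (the hostname, logger name, and port are placeholders, and a production setup would typically use the operating system’s syslog daemon rather than application code):

```python
# Hedged sketch: shipping logs from a DMZ host to a secure, centralized
# internal syslog server (PCI Requirement 10.5.4).
import logging
import logging.handlers

def make_dmz_logger(collector_host, collector_port=514):
    """Logger that sends each record to the internal syslog collector."""
    logger = logging.getLogger("dmz-web01")  # hypothetical DMZ host name
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=(collector_host, collector_port))  # UDP syslog by default
    handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger

# Usage (collector hostname is a hypothetical internal log server):
# logger = make_dmz_logger("logs.internal.example.com")
# logger.info("GET /checkout 200")
```

The key point, regardless of mechanism, is that the DMZ firewall permits only this outbound logging flow, so the logs land inside the internal network where they are harder to alter or destroy.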