With 2017 still in its infancy, plenty of time remains for InfoSec practitioners to make concrete strides toward better security and compliance in their organizations. To help you start the year on the right foot, we’ve been sharing best practices, ideas and recommendations in our Qualys Top 10 Tips for a Secure & Compliant 2017 blog series.
As we continue the series, we zoom in on the all-important area of compliance and risk monitoring, a key element of any comprehensive security program.
IT compliance and risk managers don’t have it easy. You face an increasingly complex regulatory landscape, constantly evolving industry standards and a technology environment that’s changing at a dizzying pace. It falls on your shoulders to make sure your organizations follow rules, regulations, laws, standards and practices in areas of IT across all business functions.
In this post, we’ll offer tips 5–7 on our list, to help you:
- Ensure internal and external IT compliance
- Assess procedural and technical controls among vendors to reduce the risk of doing business with them
- Comply with the Payment Card Industry Data Security Standard (PCI DSS)
The out-of-band release of Qualys PCI Compliance that adds support for PCI DSS 3.1 is now available. The primary intent of this release is to address the SSL and TLS encryption issues that have come to light recently. Effective immediately, merchants are prohibited from implementing new technologies that rely on SSL or early TLS, and after June 30, 2016, SSL and early TLS cannot be used in any way as standalone security controls. That gives merchants about 14 months to remove SSL and early TLS from their environments. ‘Early TLS’ means TLS version 1.0 and, in some cases, 1.1, depending on where it’s used and how it’s implemented.
Comply with PCI DSS 3.0 using Mandate-Based Reporting in Qualys Policy Compliance
We are excited to announce an out-of-the-box, ready-to-use mandate-based policy for PCI DSS 3.0, consisting of security checks that automate assessment of in-scope PCI assets. This policy greatly simplifies the process merchants go through to validate PCI compliance for a key set of technical controls that must be validated across a wide range of technologies. Qualys Policy Compliance can now automatically scan for all of these PCI controls and provide a detailed report that you can use to demonstrate ongoing compliance.
Your PCI 11.2 Checklist and Toolbox
Merchants are getting ready for the upcoming changes to the internal scanning requirements for PCI compliance. This blog post provides a checklist of what you should have ready and reviews some of the tools Qualys provides for these requirements.
There are four core areas to focus on in preparation for your compliance to PCI 11.2, taking into account the changes from PCI 6.2 regarding risk ranking of vulnerabilities.
- Your documented PCI scope (cardholder data environment)
- Your documented risk ranking process
- Your scanning tools
- Your scan reports
Merchants will need to complete each of these elements to be prepared to pass PCI compliance.
1. Your documented PCI scope (cardholder data environment)
All PCI requirements revolve around the cross-section of assets in your IT infrastructure that is directly involved in storing, processing, or transmitting payment card information. These IT assets are known as the cardholder data environment (CDE), and they are the focus of the PCI DSS requirements.
These assets can exist in internal or external (public) networks and may be subject to different requirements based on what role they play in payment processing. These assets can be servers, routers, switches, workstations, databases, virtual machines or web applications; PCI refers to these assets as system components.
QualysGuard provides a capability to tag assets under management. The screenshot below shows an example of PCI scope being defined within the QualysGuard Asset Tagging module. It provides the ability to group internal assets (for 11.2.1), external assets (for 11.2.2), and both internal and external assets together (for 11.2.3).
This allows you to maintain documentation of your CDE directly, and to drive your scanning directly from your scope definition.
2. Your documented risk ranking process
This is the primary requirement associated with the June 30th deadline: the documented process that should allow someone to reproduce your risk rankings for specific vulnerabilities.
The requirement references industry best practices, among other details, to consider in developing your risk ranking. It may help to quickly adopt a common industry best practice and adapt it to your own environment. Two examples are the Qualys severity rating system, the default rating maintained by the Qualys security research team, and the PCI ASV Program Guide, which includes the rating system scanning vendors use to complete external scanning. QualysGuard is used by 50 of the Forbes Global 100 and spans all market verticals, so it qualifies as an industry best practice. Additionally, the QualysGuard platform is used by the majority of PCI Approved Scanning Vendors and already delivers rankings in line with the PCI ASV Program Guide.
The core rules of your risk rankings should take into account CVSS Base Scores, available from nearly all security intelligence feeds. These scores are also the base system used within the PCI ASV Program Guide. Your process should also account for system components in your cardholder data environment and vendor-provided criticality rankings, such as the Microsoft patch ranking system if your CDE includes Windows-based system components.
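The ranking rules described above reduce to a simple mapping from scores to categories. Below is a minimal sketch assuming CVSS v2 base scores; the 4.0 threshold mirrors the ASV pass/fail criterion, while the 7.0 cutoff for "High" is a common industry choice rather than a PCI-mandated value.

```python
# Hypothetical risk-ranking step: map a CVSS base score to a ranking.
# Thresholds are illustrative; adapt them to your documented process.

def risk_rank(cvss_base: float) -> str:
    """Map a CVSS base score to a High/Medium/Low ranking."""
    if cvss_base >= 7.0:
        return "High"
    if cvss_base >= 4.0:
        return "Medium"
    return "Low"

# Example: rank a batch of findings from a vulnerability feed.
# The field names here are illustrative, not a real feed format.
findings = [
    {"id": "example-1", "cvss": 9.3},
    {"id": "example-2", "cvss": 5.0},
    {"id": "example-3", "cvss": 2.1},
]
for f in findings:
    f["ranking"] = risk_rank(f["cvss"])  # High, Medium, Low respectively
```

However you implement it, the point is repeatability: anyone applying the same documented rules to the same scores should reproduce your rankings.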
The process should include documentation that details the sources of security information you follow, how frequently you review the feeds, and how you respond to new information in the feeds. QualysGuard provides daily updates to the vulnerability knowledgebase and now offers a Zero-Day Analyzer service, which leverages data from the iDefense security intelligence feed.
3. Your scanning tools
After you have your scope clearly defined and your process for ranking vulnerabilities documented, you will need to be able to run vulnerability scans. This includes internal VM scans, external VM scans, PCI ASV scans (external), internal web application scans and external web application scans. It is the findings from these scans that will map against your risk ranking process and allow you to produce the necessary scan reports.
You will need to be able to configure your scanning tools to check for “high” vulnerabilities, which will allow you to allocate resources to fix and resolve these issues as part of the normal vulnerability management program and workflow within your environment.
QualysGuard VM, QualysGuard WAS and QualysGuard PCI work together seamlessly to provide each of these scan capabilities against the same group of assets that represents your PCI scope or CDE.
4. Your scan reports
You will want to produce reports for your internal PCI scope, as defined in #1 of this checklist, both quarterly and after any significant changes. If you have regular releases or updates to your IT infrastructure, you will want scan reports from those updates and upgrades. Quarterly scan reports need to be spaced 90 days apart. In all cases, these reports need to show that your scanning tools detected no “high” vulnerabilities.
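The 90-day spacing requirement above is easy to verify mechanically. This is a minimal sketch, reading "spaced apart by 90 days" literally as a minimum gap between consecutive scan dates; the function name and dates are illustrative.

```python
# Check that consecutive quarterly scan dates are at least 90 days apart.
# This reads the spacing requirement as a minimum gap; confirm the exact
# interpretation with your QSA or the PCI DSS text for your version.
from datetime import date

def quarterly_gaps_ok(scan_dates: list, min_days: int = 90) -> bool:
    """True if every pair of consecutive scans is >= min_days apart."""
    ordered = sorted(scan_dates)
    return all((later - earlier).days >= min_days
               for earlier, later in zip(ordered, ordered[1:]))

# Example: three scans, 95 and 96 days apart, satisfy the spacing.
scans = [date(2011, 1, 15), date(2011, 4, 20), date(2011, 7, 25)]
print(quarterly_gaps_ok(scans))  # True
```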
Each report for the significant change events will also need to include external PCI scope. QualysGuard VM makes it easy to include both internal and external assets in the same report. QualysGuard VM also provides a direct link to your QualysGuard PCI merchant account for automation of your PCI ASV scan requirements.
QualysGuard WAS allows you to quickly meet your production web application scanning requirement (PCI 6.6) as well as internal web application scanning as part of your software development lifecycle (SDLC), by scanning your applications in development and in test.
If you follow these guidelines you will be well prepared to perform and maintain the required controls for PCI 11.2.
Merchants subject to Payment Card Industry Data Security Standard (PCI DSS) rules are often blindsided by DSS changes, the arrival of new payment technologies, and newly emerging business context. In addition, many organizations still focus narrowly on the annual PCI assessment instead of running an ongoing compliance program. This article provides insight on the updated PCI DSS requirement for internal vulnerability scanning ("perform quarterly internal vulnerability scans"), which was less visible in previous versions.
Whether you are facing PCI compliance for the first time or have been PCI compliant in the past, you may already know what it means to have a passing external scan; it means that a PCI Approved Scanning Vendor (ASV) will perform a vulnerability assessment of your public IP address space according to the guidelines issued by the PCI Security Standards Council (SSC) in the ASV Program Guide. Typically, it also means that your public IP address space does not contain any vulnerabilities with a CVSS score of 4.0 or higher, or that you have compensating controls in place to mitigate any vulnerabilities in your public IP address space.
Internal Vulnerability Assessment
Beginning June 30th of this year, the PCI SSC is going to require that you also show proof of passing an internal vulnerability assessment. This requirement is detailed in PCI DSS Requirements 11.2.1 and 11.2.3, which describe the testing procedures for internal vulnerability assessments. The key aspects of these assessments are that they must be completed quarterly and after any significant change, and they must be performed by qualified internal or external resources. Lastly, the assessments must document a “passing result.”
To obtain passing results, the PCI DSS references that “all ‘High’ vulnerabilities defined in PCI DSS Requirement #6.2 are resolved.” The basic requirements are that you are able to perform a vulnerability assessment of your internal IP address space and that you are able to show that your environment does not have any “High” vulnerabilities, which is the subtle change from prior standards.
The purpose of PCI DSS Requirement #6.2 is to define the process by which you identify vulnerabilities that are to be considered “High,” “Medium,” and “Low.” Specifically, PCI DSS Requirement #6.2 states: “Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities.” The requirement also includes notes describing how risk rankings should take into consideration industry best practices and other criteria unique to your own environment; this can include CVSS base scores, vendor-supplied patch rankings, and the criticality of the underlying system components themselves.
The key aspect of PCI Requirement #6.2 is that you have a list of vulnerabilities that you (and your organization) have ranked according to your own process. Then you need to leverage these risk rankings in your vulnerability assessment against your internal IP address space. This will allow you to produce a report that shows a passing scan against your internal scope based on the risk rankings of vulnerabilities you have specified.
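Once findings carry your own rankings, the passing-scan logic described above reduces to a single check: no "High"-ranked finding remains in scope. This sketch uses illustrative field names, not any real QualysGuard report format.

```python
# Hedged sketch of the internal-scan pass criterion: a scan passes when
# no finding in the assessed scope carries a "High" risk ranking.
# The dict fields ("host", "ranking") are hypothetical.

def internal_scan_passes(findings: list) -> bool:
    """True when no finding is ranked 'High' (PCI 11.2.1-style check)."""
    return not any(f["ranking"] == "High" for f in findings)

report = [
    {"host": "10.0.0.5", "ranking": "Medium"},
    {"host": "10.0.0.9", "ranking": "Low"},
]
print(internal_scan_passes(report))  # True: no High findings remain
```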
Quarterly Internal Scans
This brings us back to the requirement for internal scanning. It is important to remember that you need to perform these scans quarterly and after any significant change to your environment. This will mean that you will want to make sure that however you are assigning risk rankings and using risk rankings in concert with your vulnerability assessment tool, it is simple and repeatable. The ability to automatically produce an internal assessment report quarterly and after any change is a critical component of maintaining your PCI compliance.
It is also critical to review your PCI scope, which defines what IP addresses (both internal and external) are involved in the delivery of your payment card infrastructure. You will want to make sure that you can represent this scope in your vulnerability assessment tools to reduce the manual work that can be involved in managing scope changes and reporting.
In conclusion, having a structured approach for dealing with PCI DSS changes, involving relevant stakeholders, evaluating their impact, and planning controls to close the gaps, should be adopted by security teams. This will help make any security program resilient to environmental and regulatory changes and ensure that the organization can maintain PCI compliance.
I was surrounded by numbers, more numbers than I could ever remember or justify. Every time I tried to add them up they would find a new combination – one I hadn’t seen before – and mock me with a sum that was just a few dollars above or below where it was supposed to be. I spent nearly three days doing calculations before I finally swallowed my pride and put in a "calculation error" entry to finish the process.
Reconciling my family’s checkbook had defeated me…this time.
Over the years I got better at doing the reconciliations, and eventually Microsoft Money made everything easier by automating the process, downloading transactions from my bank and helping me categorize and track all expenses. Today I can happily say that balancing my account takes just a few minutes each month.
In many ways the PCI DSS section 1.1.5 requirement is a lot like reconciling a bank statement. It states the following:
Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.
Simply keeping track of the assets in a cardholder data environment (CDE) can be a challenge, and this requirement adds on the responsibility for administrators to keep track of all ports and protocols that are in use in the CDE. Additionally, the business justification for each port and protocol must be included; for most enterprises this requires involving multiple people and keeping notes about what the justification is and who provided it.
I’m pleased to announce that QualysGuard PCI version 5.3 now provides the Open Services Report. In the same way that Microsoft Money helped me keep track of my spending, the Open Services Report can help you comply with PCI 1.1.5 by automating the workflow for discovering, authorizing, and reporting the ports and protocols in your CDE.
Once you have performed a scan of your CDE you can access the Open Services Report via Network -> Open Services Report.
You’ll immediately see a few key capabilities:
- The Summary section shows you how many services have been identified during the most recent scans and tracks how many have been categorized. As you perform the workflow to approve/reject services these numbers will be updated.
- A dynamic listing of all open ports and protocols detected in your CDE is shown in the grid. You can change the grouping by host IP or by service, and can filter the list to show only the items you are interested in (such as description containing "NetBIOS" or service marked as "Unauthorized").
- All of the services and their status can be downloaded as a CSV for distribution outside of the PCI application.
The Open Services Report includes the ability to classify services as authorized or unauthorized. To do so, simply select all the services you wish to mark and click on "Classify". You’ll be prompted to enter a business justification for that decision:
A complete history of all activity – who classified a service, when, and the reasons why – will be maintained and viewable in the report. You can then proceed to use the report to demonstrate your compliance with the PCI 1.1.5 requirement.
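The workflow above boils down to keeping an auditable record per service. Below is a hypothetical data model, not the QualysGuard schema, illustrating the fields PCI 1.1.5 effectively asks you to track for each open port and protocol.

```python
# Illustrative record for the PCI 1.1.5 workflow: each open service
# carries a status, a business justification, and who classified it
# and when. Field names are hypothetical, not a real product schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ServiceClassification:
    host: str
    port: int
    protocol: str
    service: str
    status: str           # "Authorized" or "Unauthorized"
    justification: str    # the business justification PCI 1.1.5 requires
    classified_by: str    # who made the call
    classified_at: datetime

entry = ServiceClassification(
    host="192.0.2.10", port=443, protocol="tcp", service="https",
    status="Authorized", justification="Payment gateway front end",
    classified_by="jdoe", classified_at=datetime(2011, 3, 1, 9, 30),
)
```

Keeping the justification and the classifier together with the service itself is what turns a raw port list into the documentation the requirement asks for.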
We hope you find these new capabilities helpful in tracking and justifying the business needs for services in your CDE, and look forward to hearing your feedback.
Verifying that files have not undergone unauthorized changes has become an area of focus due to the Payment Card Industry (PCI) Data Security Standard (DSS). Traditionally, the task of verifying file integrity has been reserved for agent-based solutions that run locally on devices. However, with the introduction of QualysGuard 6.10, Policy Compliance supports agent-less verification of file integrity.
The Need for File Integrity
Verifying the integrity of critical files has always been a concern of security professionals. However, the introduction of the Payment Card Industry (PCI) Data Security Standard (DSS) has made this a focal point for compliance. The original standard suggested software like Tripwire to meet this requirement, but over the years, this requirement has been updated to remove specific software recommendations. The new standard, Version 2.0, states the following:
11.5 Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.
Notice the change in language from the previous version: file-integrity monitoring software was replaced with file-integrity monitoring tools to clarify that software is not the sole means of meeting this requirement.
Although PCI has been a primary driver for file integrity, other regulations also require file integrity monitoring, including NIST SP 800-53 and the SANS Consensus Audit Guidelines. File integrity is a key requirement for IT policy compliance.
Traditional Methods for File Integrity
When PCI DSS Version 1.0 was introduced in 2004, the primary mechanism for meeting file integrity requirements was an agent-based solution such as Tripwire. Other agent-based solutions quickly added file integrity capabilities to their agents to capitalize on the new PCI market. The challenge with agent-based file-integrity monitoring software is that it can be costly to implement and maintain: agents need to be deployed, maintained, and updated. Some estimate that organizations can easily spend a quarter or more of their security budgets on high-cost file-integrity monitoring products. Organizations should consider more cost-effective investments, such as leveraging existing technologies.
Agent-less File Integrity
Agent-less file integrity checking uses authenticated scans, not agents, to verify the integrity of files on a device. During an authenticated scan, the scanner calculates an MD5, SHA-1 or SHA-256 hash of each file. From scan to scan, the hash values are compared to determine whether a file has changed. This approach eliminates the need for costly agents and minimizes the performance impacts typically experienced with agents.
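The scan-to-scan comparison described above can be sketched in a few lines. This illustrates the general hash-compare technique, not Qualys's implementation; paths and snapshot formats are illustrative.

```python
# Sketch of hash-based file integrity checking: compute a SHA-256
# digest per file on each scan, then flag any file whose digest
# differs from the previous scan's snapshot.
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(previous: dict, current: dict) -> list:
    """Compare two {path: digest} snapshots from successive scans."""
    return [path for path, digest in current.items()
            if previous.get(path) != digest]

# Example with two illustrative snapshots:
scan1 = {"/etc/passwd": "abc123", "/etc/hosts": "def456"}
scan2 = {"/etc/passwd": "abc123", "/etc/hosts": "0ff1ce"}
print(changed_files(scan1, scan2))  # ['/etc/hosts']
```

A changed digest only tells you *that* a file changed, not whether the change was authorized; the weekly comparison and review cycle required by PCI 11.5 is what turns it into a control.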
Using QualysGuard Policy Compliance, organizations can leverage their existing investment in QualysGuard to not only verify the integrity of files, but also collect additional configuration settings needed for compliance. This approach has a compounding effect on the Total Cost of Ownership for several reasons:
- The cost of Policy Compliance is a fraction of the cost for agent-based solutions, typically equal to the annual maintenance fees charged for the agent.
- Policy Compliance eliminates the cost of deploying, updating, and maintaining agents.
- File integrity checking is included in Policy Compliance without additional licensing.
Configuring QualysGuard Policy Compliance
To meet the requirements of file integrity monitoring, configure QualysGuard Policy Compliance as follows:
- Define critical Windows and/or Unix files as User Defined Controls.
- Add the User Defined Controls to a Policy.
- Update the Compliance Profile to enable File Integrity Monitoring.
- Scan files weekly.
- Report weekly.
Demo and Technical Paper
To see a demo of configuring file integrity within Policy Compliance, please view the File Integrity Check Demo.
For additional technical details on file integrity, please download the QualysGuard Tips and Techniques, File Integrity Check Document.
On November 18th the PCI Security Standards Council published version 2.0 of the Self Assessment Questionnaires (SAQs). These updated documents now align with the new version 2.0 of the PCI Data Security Standard.
The changes to the SAQs mostly involve minor refinements and clarifications, but one major change is the addition of a new type of SAQ: C-VT. This SAQ is a simplified version of SAQ C targeted at merchants who use virtual terminals to process payments. The SAQ defines a virtual terminal as:
a web-browser based access to an acquirer, processor or third party service provider website to authorize payment card transactions, where the merchant manually enters payment card data via a securely connected web browser. Unlike physical terminals, virtual terminals do not read data directly from a payment card. Because payment card transactions are entered manually, virtual terminals are typically used instead of physical terminals in merchant environments with low transaction volumes.
Note that data is not read directly from the card, so no card readers or other swipe devices are involved. The most accurate picture of a qualifying merchant is someone at a personal computer typing in card numbers and getting authorization codes from a provider like Authorize.Net or PayPal.
Version 2.0 of the SAQs becomes available in January 2011, but merchants can still choose to use version 1.2 throughout 2011 (you may not mix SAQ versions and DSS versions, however; everything must be either 1.2 or 2.0). Version 1.2 of both the DSS and the SAQs expires on December 31, 2011.
In order to provide the most flexibility for merchants, QualysGuard PCI has added support for all version 2.0 SAQs, including wizards to help choose the proper SAQ type (A, B, C, C-VT, D), help text to provide guidance when completing the questionnaire, and full support for the milestone-based prioritized approach to the SAQs. Version 1.2 of the SAQs is also supported throughout 2011 for merchants choosing to use that version.
We hope you find the new capabilities helpful in achieving PCI compliance, and look forward to hearing your feedback.
More technical resources are available at QualysGuard PCI.
By now, most of you have either read or are aware that Verizon released its 2010 Data Breach Investigations Report. It details the specifics of data breaches and how effective (or ineffective) the controls used as countermeasures are at organizations that have suffered a breach. This report slices and dices the data just about any way you can think of, and consequently it’s very enlightening. It calls into question many of the traditional controls that security administrators have used for years and makes one wonder whether they actually help at all. If you have a few minutes, I encourage you to read it. It can be found here:
However, that is not why I wrote this article, as that has been well documented already. What people might not be aware of is that Verizon also recently released a companion report, the Verizon 2010 Payment Card Industry Compliance Report. In this report, Verizon has compiled data from its PCI clients and broken the research down into subcategories and the specifics of the PCI DSS. As far as I know, this is the first large-scale effort at compiling PCI data detailing what customers are doing effectively and what they are not, and then correlating that against actual breaches.
For those of you who have worked with PCI for any period of time, you are well aware of the debate that rages on as to whether or not the DSS actually makes organizations more secure. Rather than delve into that argument, I’d prefer to highlight what the data from the report suggests. According to the report, organizations that suffered a breach were 50% less likely to be compliant than the normal population of PCI clients (clients that are moving toward compliance but have not yet been validated as compliant). So while the data suggests that organizations validated as PCI compliant are less likely to suffer a breach, some still do get breached. So, does this mean that PCI works, or doesn’t? I’d be interested in hearing your opinions on that. Feel free to leave comments with any thoughts you may have.
If you are interested in reading the report in its entirety, it can be downloaded here: