The new QualysGuard Vulnerability Notification feature allows you to configure QualysGuard to send email notifications to users about new and updated vulnerabilities in the QualysGuard KnowledgeBase. An update to QualysGuard 7.11 will be released in production in the coming weeks to introduce this feature.
Qualys today announced that for the sixth time, readers of SC Magazine have named QualysGuard Vulnerability Management “Best Vulnerability Management Tool.” The award was presented on February 26, 2013 at the SC Awards Gala in San Francisco.
“Our readers are on the front lines of information security, and they have recognized QualysGuard Vulnerability Management as a key tool for securing their organizations,” said Illena Armstrong, VP of editorial, SC Magazine. “Without leaders in innovation like Qualys, we would not be able to plan for the future of enterprise security.”
The SC Awards program, now in its 16th year, is the premier recognition for IT security professionals and products that fend off the myriad security threats in today’s corporate world. The annual awards showcase the best solutions, services and professionals while recognizing achievement and technical excellence. QualysGuard Vulnerability Management was selected by a panel representing a cross-section of SC Magazine readership, composed of large, medium and small enterprises from all major vertical markets, including financial services, health care, government, retail, education and other sectors. Read the full announcement.
Merchants are getting ready for the upcoming changes to the internal scanning requirements for PCI compliance. This blog post provides a checklist on what you should have ready and will review some of the tools Qualys provides for these requirements.
There are four core areas to focus on in preparation for your compliance to PCI 11.2, taking into account the changes from PCI 6.2 regarding risk ranking of vulnerabilities.
Your documented PCI scope (cardholder data environment)
Your documented risk ranking process
Your scanning tools
Your scan reports
Merchants will need to complete each of these elements to be prepared to pass PCI compliance.
1. Your documented PCI scope (cardholder data environment)
All PCI requirements revolve around a cross-section of assets in your IT infrastructure that is directly involved in storing, processing, or transmitting payment card information. These IT assets are known as the cardholder data environment (CDE), and they are the focus of the PCI DSS requirements.
These assets can exist in internal or external (public) networks and may be subject to different requirements based on what role they play in payment processing. These assets can be servers, routers, switches, workstations, databases, virtual machines or web applications; PCI refers to these assets as system components.
QualysGuard provides a capability to tag assets under management. The screenshot below shows an example of PCI scope being defined within the QualysGuard Asset Tagging module. It provides the ability to group internal assets (for 11.2.1), external assets (for 11.2.2), and both internal and external assets together (for 11.2.3).
This allows you to maintain documentation of your CDE directly, and to drive your scanning directly from your scope definition.
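To make the grouping concrete, here is a minimal sketch of the idea behind scope tagging. This is illustrative Python, not the QualysGuard Asset Tagging API; the asset names and tag labels are invented for the example.

```python
# Illustrative sketch of PCI scope tagging (hypothetical assets and tags,
# not the QualysGuard API). Each asset carries a set of tags; the scan
# targets for each requirement are derived from the tags.

CDE_ASSETS = {
    "db01.internal":   {"tags": {"PCI", "Internal"}},
    "app01.internal":  {"tags": {"PCI", "Internal"}},
    "www.example.com": {"tags": {"PCI", "External"}},
}

def scope(tag):
    """Return the sorted list of assets carrying a given tag."""
    return sorted(a for a, meta in CDE_ASSETS.items() if tag in meta["tags"])

internal_scope = scope("Internal")  # targets for 11.2.1 (internal scans)
external_scope = scope("External")  # targets for 11.2.2 (external scans)
full_scope     = scope("PCI")       # targets for 11.2.3 (after changes)
```

The point of driving scans from tags rather than static IP lists is that the scope definition and the scan targets can never drift apart.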
2. Your documented risk ranking process
This is the primary requirement associated with the June 30th deadline; the documented process should allow someone to reproduce your risk rankings for specific vulnerabilities.
The requirement references industry best practices, among other details, to consider in developing your risk ranking. A quick way to start is to adopt a common industry best practice and adapt it to your own environment. Two examples are the Qualys severity rating system, the default rating assigned by the security research team at Qualys, and the PCI ASV Program Guide, which includes the rating system scanning vendors use to complete external scanning. QualysGuard is used by 50 of the Forbes Global 100 and spans all market verticals, so it qualifies as an industry best practice. Additionally, the QualysGuard platform is used by the majority of PCI Approved Scanning Vendors and already delivers rankings per the PCI ASV Program Guide practices.
The core rules of your risk rankings should take into account CVSS Base Scores, available from nearly all security intelligence feeds. These scores are also the base system used within the PCI ASV Program Guide. Your process should also account for system components in your cardholder data environment and vendor-provided criticality rankings, such as the Microsoft patch ranking system if your CDE includes Windows-based system components.
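As a sketch of what a documented ranking rule might look like, the function below maps a CVSS base score plus context to a rank. The thresholds are illustrative assumptions only; PCI DSS 6.2 leaves the exact rankings to your documented process, so substitute your own cut-offs and escalation rules.

```python
# Hedged sketch of a documented risk-ranking rule. The thresholds are
# illustrative assumptions, not a PCI mandate: here, CVSS >= 7.0 or a
# vendor "critical" rating maps to "high", and findings on CDE assets
# are escalated one level.

def risk_rank(cvss_base, vendor_critical=False, in_cde=True):
    """Map a CVSS base score plus context to a risk ranking."""
    if vendor_critical or cvss_base >= 7.0:
        rank = "high"
    elif cvss_base >= 4.0:
        rank = "medium"
    else:
        rank = "low"
    # Example escalation rule: medium findings inside the cardholder
    # data environment are treated as high.
    if in_cde and rank == "medium":
        rank = "high"
    return rank
```

Whatever rules you choose, the key is that they are written down and repeatable, so a QSA (or a colleague) arrives at the same ranking for the same vulnerability.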
The process should include documentation that details the sources of security information you follow, how frequently you review the feeds, and how you respond to new information in the feeds. QualysGuard provides daily updates to the vulnerability KnowledgeBase and now offers a Zero-Day Analyzer service, which leverages data from the iDefense security intelligence feed.
3. Your scanning tools
After you have your scope clearly defined and your process for ranking vulnerabilities documented, you will need to be able to run vulnerability scans. This includes internal VM scans, external VM scans, PCI ASV scans (external), internal web application scans and external web application scans. It is the findings in these scans that will map against your risk ranking process and allow you to produce the necessary scan reports.
You will need to be able to configure your scanning tools to check for “high” vulnerabilities, which will allow you to allocate resources to fix and resolve these issues as part of the normal vulnerability management program and workflow within your environment.
QualysGuard VM, QualysGuard WAS and QualysGuard PCI all work together seamlessly to provide each of these scanning capabilities against the same group of assets that represents your PCI scope or CDE.
4. Your scan reports
You will want to produce reports for your internal PCI scope, as defined in #1 of this checklist, both quarterly and after any significant changes. If you have regular releases or updates to your IT infrastructure, you will want to have scan reports from those updates and upgrades. Quarterly scan reports need to be spaced apart by 90 days. In all cases, these reports need to show that there are no “high” vulnerabilities detected by your scanning tools.
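The cadence and pass criteria above can be sketched as a small check. This is an illustrative helper only, assuming the "no more than 90 days between scans" reading of the quarterly requirement and a report structure of your own making:

```python
# Illustrative check of the quarterly cadence and "no high findings"
# pass criteria. The report structure is a made-up example, not a
# QualysGuard report format.
from datetime import date

def quarterly_ok(scan_dates, max_gap_days=90):
    """True if consecutive scan dates are no more than max_gap_days apart."""
    ds = sorted(scan_dates)
    return all((b - a).days <= max_gap_days for a, b in zip(ds, ds[1:]))

def report_passes(report):
    """True if the report contains no 'high' findings per your risk ranking."""
    return not any(f["rank"] == "high" for f in report["findings"])
```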
Each report for the significant change events will also need to include external PCI scope. QualysGuard VM makes it easy to include both internal and external assets in the same report. QualysGuard VM also provides a direct link to your QualysGuard PCI merchant account for automation of your PCI ASV scan requirements.
QualysGuard WAS allows you to quickly meet your production web application scanning requirement (PCI 6.6) as well as internal web application scanning as part of your software development lifecycle (SDLC), by scanning your applications in development and in test.
If you follow these guidelines you will be well prepared to perform and maintain the required controls for PCI 11.2.
“You know only insofar as you can measure.” – Lord Kelvin
“If you want it, measure it. If you can’t measure it, forget it.” – Peter Drucker
Measurement is critical in achieving objectives. But a more subtle factor drives your success: what you measure and how you measure it. These are what guide your actions. The measurement of vulnerabilities is no exception, and with vulnerabilities, the difference between automatic and manual data and its implications are the key factors.
So, what is the difference?
Manual data is a point-in-time snapshot of vulnerability data that is tied to a single scan and shows the vulnerability posture of the hosts at the time the scan was run.
Automatic data is data from multiple scans normalized into a database. It is the asset-centric history of vulnerability data, built out of the results of previous scans.
Simple enough, right? Let’s examine the implications.
Assessment vs. Management
Manual data lets you assess vulnerabilities, but you need automatic data for vulnerability management.
Manual data shows you where you’re vulnerable at the time of the scan. You can think of manual data as a file folder on the left side of your desk with a folder corresponding to each scan. Inside each folder is a piece of paper containing the forensic record of the raw results from that point-in-time scan. The biggest limitation of this data model is that it lacks context and trending since it is a snapshot of a point in time. For example, if you scanned on January 1 and found 500 vulnerabilities, then scanned the same assets on February 1 and found 300 vulnerabilities, what does that mean? Did you fix all 500 vulnerabilities from January and have 300 new vulnerabilities for February? Did you fix 200 vulnerabilities from January and have 300 left, but no new vulnerabilities in February? There are several other potential scenarios that would also need to be considered, and determining the answer with any degree of certainty is problematic at best.
If you only have access to manual data, you have to perform a manual monthly process with a custom spreadsheet to attempt to reconcile and normalize the results from scan to scan to show month-over-month trending.
Another big problem with this data model is that it is difficult to track the lifecycle of a vulnerability on a particular host. For example, you should be careful not to assume that a vulnerability not found in a subsequent scan has been fixed. This is a poor assumption, as there is a huge difference between "fixed" and "not found". If you first scan with authentication, then scan without authentication, many vulnerabilities won’t be detected in the second scan, simply because authentication wasn’t used. This does not mean that the vulnerabilities are actually fixed, and it can lead to a false sense of security.
Lifecycle of a Vulnerability
Automatic data addresses these limitations by introducing the concept of a vulnerability’s state and providing additional context that is valuable when managing the lifecycle. Automatic data can be thought of as a large relational database on the right side of your desk that normalizes the results of every scan over time for each asset. A vulnerability can have one of four states:
NEW: Detected for the first time
ACTIVE: Detected more than once
FIXED: Detected, then confirmed to be resolved by scanning in the *same* manner as originally detected – e.g. with authentication
REOPENED: Detected, confirmed to be remediated, then detected again. This may be the result of a machine being re-imaged without all relevant patches being applied.
Automatic data also lets users mark vulnerabilities as IGNORED and creates an audit trail of all the transitions. The IGNORED flag is complementary to the state: a vulnerability can be NEW/IGNORED or ACTIVE/IGNORED, for instance. It is a way to manage exceptions.
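The lifecycle above can be sketched as a small state-transition function. This is a simplified model of the behavior described in this post, not Qualys's actual implementation; it also captures the rule, discussed below, that a finding first seen with authentication is only closed by an authenticated rescan.

```python
# Simplified sketch of the NEW/ACTIVE/FIXED/REOPENED lifecycle for one
# (host, vulnerability) pair. Not the actual QualysGuard implementation.

def next_state(prev, detected, scan_authenticated, first_seen_authenticated):
    """Return the new state after a scan.

    prev is None for a never-seen finding, else one of
    'NEW', 'ACTIVE', 'FIXED', 'REOPENED'.
    """
    if detected:
        if prev is None:
            return "NEW"          # detected for the first time
        if prev == "FIXED":
            return "REOPENED"     # confirmed fixed, then seen again
        return "ACTIVE"           # detected more than once
    # Not detected: only close the finding if this scan could have seen it.
    if prev in ("NEW", "ACTIVE", "REOPENED"):
        if first_seen_authenticated and not scan_authenticated:
            return prev           # unauthenticated scan proves nothing
        return "FIXED"
    return prev
```

Note how an unauthenticated rescan leaves an authenticated finding untouched rather than marking it fixed; that single rule is what prevents the "not found = fixed" fallacy.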
Trending and Reporting
In addition to a vulnerability’s state, automatic data allows us to report on when a vulnerability was first detected, last detected, and the number of times it has been detected. Also, vulnerability status is tracked intelligently to account for different option profiles being used. For example, if a vulnerability is first detected using authentication, it will not be considered closed until a rescan *with authentication* confirms that the vulnerability has been resolved. This addresses the limitation of the assumption that not found = fixed. And it prevents "sawtooth" trend results that can happen when scans are conducted with varying configurations (e.g. with / without authentication) over time.
This type of accurate trending information is valuable to be able to correctly report the postures of organizations and the progress (or lack thereof) over time in remediating vulnerabilities in their environments. Using the QualysGuard Detection API, this concept of vulnerability state/trend information can be included in data integrated with third party platforms (e.g. SIEM, GRC, etc). Without automatic data, organizations are left to extremely manual, time-consuming, and error-prone approaches to attempt to measure and track the effectiveness of their vulnerability management programs over time.
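To give a feel for what consuming this data looks like, here is a sketch that parses per-host detection records in the shape returned by the Host List Detection API (VM API v2, `/api/2.0/fo/asset/host/vm/detection/?action=list`). The XML below is a hand-made miniature of that response; treat the exact element names as assumptions and check them against your own API output before relying on them.

```python
# Hedged sketch: extracting open detections from a (hand-made, miniature)
# Host List Detection API response. Element names are assumptions based on
# the VM API v2 format; verify against real output from your platform.
import xml.etree.ElementTree as ET

SAMPLE = """\
<HOST_LIST_VM_DETECTION_OUTPUT>
  <RESPONSE><HOST_LIST>
    <HOST>
      <IP>10.0.0.5</IP>
      <DETECTION_LIST>
        <DETECTION><QID>90001</QID><STATUS>Active</STATUS>
          <FIRST_FOUND_DATETIME>2013-01-01T00:00:00Z</FIRST_FOUND_DATETIME>
          <LAST_FOUND_DATETIME>2013-02-01T00:00:00Z</LAST_FOUND_DATETIME>
        </DETECTION>
        <DETECTION><QID>90002</QID><STATUS>Fixed</STATUS>
          <FIRST_FOUND_DATETIME>2013-01-01T00:00:00Z</FIRST_FOUND_DATETIME>
          <LAST_FOUND_DATETIME>2013-01-15T00:00:00Z</LAST_FOUND_DATETIME>
        </DETECTION>
      </DETECTION_LIST>
    </HOST>
  </HOST_LIST></RESPONSE>
</HOST_LIST_VM_DETECTION_OUTPUT>"""

def open_detections(xml_text):
    """Return (ip, qid, status) tuples for detections that are not Fixed."""
    root = ET.fromstring(xml_text)
    out = []
    for host in root.iter("HOST"):
        ip = host.findtext("IP")
        for det in host.iter("DETECTION"):
            status = det.findtext("STATUS")
            if status != "Fixed":
                out.append((ip, det.findtext("QID"), status))
    return out
```

Because each record carries its state and first/last-found timestamps, a SIEM or GRC platform ingesting this feed gets trend-ready data rather than raw point-in-time scan output.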
Decoupling Reporting / Remediation from Scanning
One other main benefit of automatic data is that it allows the scanning and reporting/remediation efforts to be decoupled since all the data is tracked and normalized. Scanning can be conducted according to location and reporting can be performed according to those responsible for remediation.
The most obvious place in the QualysGuard user interface where the difference between manual and automatic data appears is when editing a scan report template and choosing the Scan Results Selection:
Automatic data is also used in “Status” and “Status with Trend” scan reports and Scorecard reports, as well as throughout the user interface including your dashboard, asset search results, remediation tickets and host information.
Automatic is the Way to Go
The difference between manual and automatic data is the difference between a vulnerability assessment program that identifies only current vulnerabilities and a vulnerability management program that drives the remediation of vulnerabilities over time. Automatic data makes QualysGuard the only vulnerability management solution that can differentiate between vulnerabilities that are actually fixed, versus those that simply weren’t detected.
Understanding overall security and compliance risk is an integral part of a risk management program. The integration of security and compliance solutions has provided some insight into this risk, but it still falls short of conveying true security risk, as organizations are challenged with hundreds or even thousands of vulnerability detections every day.
Integrating QualysGuard and Modulo
Modulo provides a simple mechanism for importing asset and vulnerability data into Modulo Risk Manager. With Modulo Risk Manager, QualysGuard Vulnerability Management data is tightly integrated into the risk management program, allowing vulnerability risk to be correlated with other risks, controls, and assets, providing a holistic management perspective of the most important risks.
Asset Synchronization and Correlation
Schedule imports of assets from QualysGuard Vulnerability Management to keep your asset management module constantly updated with new assets and vulnerabilities. In addition, correlate these assets with other business assets in Modulo to understand business risk.
Holistic IT Risk Approach
QualysGuard Vulnerability Management data is automatically collected and integrated into the risk management program, allowing vulnerability risk to be correlated with other risks, controls, and assets, providing a holistic management perspective of the most important risks. In addition, the formula used to calculate the risk score for vulnerabilities can be customized using several variables.
This integration allows customers to prioritize not only compliance risks, but also security risks to manage remediation efforts across the organization, prioritize large amounts of vulnerability data using a mature and reliable approach, produce compliance documentation and make more accurate decisions.
Sometimes it’s the little things that make your day run more smoothly. The release of QualysGuard 6.19 includes highly-focused new features that add functionality for Windows systems. Also, an update to the Qualys KnowledgeBase identifies vulnerabilities that can be attacked via exploit kits, helping organizations better prioritize patching efforts and protect against those vulnerabilities.
Windows Share Enumeration: Find Windows shares that are readable by everyone, and report details like the number of files in the share and whether the files are writable. This is good for identifying groups of files that may need tighter access control.
Detailed Audit Settings: Verify auditing subcategory settings introduced in Windows Vista, Windows 7, and Windows Server 2008. You can now check all of the audit logging settings within Windows.
ExploitKit Mapping: If a vulnerability can be attacked via an exploit kit, it should be considered higher priority simply because of the larger number of people who can easily attempt to attack it via the exploit kit. The new ExploitKit Mapping in the KnowledgeBase makes it easier to identify these vulnerabilities and prioritize their remediation.