
Automatic vs. Manual Data

“You know only insofar as you can measure.”
        – Lord Kelvin
          
“If you want it, measure it. If you can’t measure it, forget it.”
        – Peter Drucker

Measurement is critical to achieving objectives. But a more subtle factor drives your success: what you measure and how you measure it, because these guide your actions. Measuring vulnerabilities is no exception, and here the key consideration is the difference between automatic and manual data and its implications.

So, what is the difference?

Manual data is a point-in-time snapshot of vulnerability data that is tied to a single scan and shows the vulnerability posture of the hosts at the time the scan was run.

Automatic data is data from multiple scans normalized into a database. It is the asset-centric history of vulnerability data, built out of the results of previous scans.

Simple enough, right?  Let’s examine the implications.

Assessment vs. Management

Manual data lets you assess vulnerabilities, but you need automatic data for vulnerability management.

Manual data shows you where you were vulnerable at the time of the scan.  You can think of manual data as a file drawer on the left side of your desk, with a folder for each scan.  Inside each folder is a piece of paper containing the forensic record of the raw results from that point-in-time scan.  The biggest limitation of this data model is that it lacks context and trending, since each scan is a snapshot of a single point in time.

For example, if you scanned on January 1 and found 500 vulnerabilities, then scanned the same assets on February 1 and found 300 vulnerabilities, what does that mean?  Did you fix all 500 vulnerabilities from January, making all 300 found in February new?  Did you fix 200 vulnerabilities from January, leaving 300, with no new vulnerabilities in February?  Several other scenarios would also need to be considered, and determining the answer with any degree of certainty is problematic at best.

If you only have access to manual data, you are left performing a manual monthly process with a custom spreadsheet, attempting to reconcile and normalize the results from scan to scan to show month-over-month trending.
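
To make the pain concrete, here is a minimal sketch of the kind of reconciliation logic teams end up maintaining by hand. The hosts and QIDs are made up; the point is that naive set arithmetic cannot distinguish “fixed” from “not detected this time”.

    # Hypothetical manual reconciliation of two point-in-time scans.
    # Each scan result is modeled as a set of (host, QID) findings.
    january = {("10.0.0.5", 90780), ("10.0.0.5", 38628), ("10.0.0.9", 90780)}
    february = {("10.0.0.5", 38628), ("10.0.0.9", 11827)}

    gone = january - february      # fixed? or just not detected this time?
    new = february - january       # newly introduced? or newly detected?
    persisting = january & february

    print(f"{len(gone)} findings disappeared, {len(new)} appeared, "
          f"{len(persisting)} persisted")
    # The set arithmetic is easy; the interpretation is not. Unless both
    # scans used the same option profile (e.g. authentication), "disappeared"
    # cannot be read as "remediated".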

Another big problem with this data model is that it is difficult to track the lifecycle of a vulnerability on a particular host.  In particular, you should not assume that a vulnerability has been fixed just because a subsequent scan doesn’t find it; there is a huge difference between "fixed" and "not found".  For example, if you first scan with authentication and then scan without it, many vulnerabilities won’t be detected in the second scan, simply because authentication wasn’t used.  The vulnerabilities are not actually fixed, and the assumption that they are can lead to a false sense of security.

Lifecycle of a Vulnerability

Automatic data addresses these limitations by introducing the concept of a vulnerability’s state and providing additional context that is valuable when managing the lifecycle.  Automatic data can be thought of as a large relational database on the right side of your desk that normalizes the results of every scan over time for each asset.  A vulnerability can have one of four states:

  • NEW: Detected for the first time
  • ACTIVE: Detected more than once
  • FIXED: Detected, then confirmed to be resolved by scanning in the *same* manner as originally detected – e.g. with authentication
  • REOPENED: Detected, confirmed to be remediated, then detected again.  This may be the result of a machine being re-imaged without all relevant patches being applied.

[Figure: Vulnerability state transitions]

Automatic data also enables users to mark vulnerabilities as IGNORED and creates an audit trail of all the transitions. The IGNORED flag is complementary to the state: a vulnerability can be NEW/IGNORED or ACTIVE/IGNORED, for instance. It is a way to manage exceptions.
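
As an illustration, the state model can be sketched as a tiny transition function. This is a conceptual sketch with invented names, not QualysGuard’s internal logic:

    # Hypothetical sketch of the four-state lifecycle plus the IGNORED flag.
    from dataclasses import dataclass

    @dataclass
    class VulnRecord:
        state: str = "NEW"       # NEW, ACTIVE, FIXED, or REOPENED
        ignored: bool = False    # complementary flag, e.g. ACTIVE/IGNORED

        def on_detected(self):
            if self.state == "NEW":
                self.state = "ACTIVE"      # detected more than once
            elif self.state == "FIXED":
                self.state = "REOPENED"    # e.g. host re-imaged unpatched
            # ACTIVE and REOPENED remain unchanged on re-detection

        def on_confirmed_fixed(self):
            # Only call this when a comparable rescan no longer detects it
            self.state = "FIXED"

    rec = VulnRecord()          # first detection -> NEW
    rec.on_detected()           # second detection -> ACTIVE
    rec.on_confirmed_fixed()    # comparable rescan comes back clean -> FIXED
    rec.on_detected()           # detected again -> REOPENED
    print(rec.state, "ignored:", rec.ignored)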

Trending and Reporting

In addition to a vulnerability’s state, automatic data lets us report when a vulnerability was first detected, when it was last detected, and how many times it has been detected.  Vulnerability status is also tracked intelligently to account for different option profiles being used.  For example, if a vulnerability is first detected using authentication, it will not be considered closed until a rescan *with authentication* confirms that it has been resolved.  This addresses the faulty assumption that not found = fixed, and it prevents the "sawtooth" trend results that can occur when scans are conducted with varying configurations (e.g. with and without authentication) over time.
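
The authentication-matching rule can likewise be sketched as a small update function. The record fields and function are ours, chosen to mirror the behavior described above, not the product’s actual code:

    from datetime import date

    # Hypothetical detection record carrying the fields described above.
    record = {
        "first_found": date(2012, 1, 1),
        "last_found": date(2012, 1, 1),
        "times_found": 1,
        "found_with_auth": True,   # originally detected with authentication
        "status": "NEW",
    }

    def apply_scan(record, detected, scan_used_auth, scan_date):
        """Update one detection record with the outcome of one scan."""
        if detected:
            record["last_found"] = scan_date
            record["times_found"] += 1
            record["status"] = ("REOPENED" if record["status"] == "FIXED"
                                else "ACTIVE")
        elif record["found_with_auth"] and not scan_used_auth:
            pass   # an unauthenticated scan can't see it; don't assume fixed
        else:
            record["status"] = "FIXED"

    # An unauthenticated rescan that misses the finding changes nothing:
    apply_scan(record, detected=False, scan_used_auth=False,
               scan_date=date(2012, 2, 1))
    print(record["status"])   # still NEW, not falsely FIXED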

This kind of accurate trending information is essential for correctly reporting an organization’s posture and its progress (or lack thereof) in remediating vulnerabilities over time.  Using the QualysGuard Detection API, this vulnerability state/trend information can be included in the data integrated with third-party platforms (e.g. SIEM, GRC, etc.).  Without automatic data, organizations are left with extremely manual, time-consuming, and error-prone approaches to measuring and tracking the effectiveness of their vulnerability management programs over time.
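
For instance, detection data with its state can be pulled for such integrations via the v2 host list detection endpoint of the QualysGuard API. This is a minimal sketch; the API server URL varies by platform, and the credentials are placeholders:

    # Sketch: pull per-host vulnerability detections, including their status,
    # from the QualysGuard Host List Detection API (v2).
    import requests

    resp = requests.get(
        "https://qualysapi.qualys.com/api/2.0/fo/asset/host/vm/detection/",
        params={"action": "list", "status": "New,Active,Re-Opened,Fixed"},
        headers={"X-Requested-With": "integration-script"},  # required header
        auth=("USERNAME", "PASSWORD"),   # placeholder credentials
    )
    resp.raise_for_status()
    print(resp.text[:500])   # XML: hosts with detections and their status

The XML output can then be mapped into whatever schema the downstream SIEM or GRC platform expects.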

Decoupling Reporting / Remediation from Scanning

Another major benefit of automatic data is that it decouples scanning from reporting and remediation, since all the data is tracked and normalized.  Scanning can be conducted according to location, while reporting can be organized around those responsible for remediation.

User Interface

The most obvious place the difference between manual and automatic data surfaces in the QualysGuard user interface is when editing a scan report template and choosing the Scan Results Selection:

[Figure: Editing a scan report template (Scan Results Selection)]

Automatic data is also used in “Status” and “Status with Trend” scan reports and Scorecard reports, as well as throughout the user interface including your dashboard, asset search results, remediation tickets and host information.

Automatic is the Way to Go

The difference between manual and automatic data is the difference between a vulnerability assessment program that identifies only current vulnerabilities and a vulnerability management program that drives the remediation of vulnerabilities over time. Automatic data makes QualysGuard the only vulnerability management solution that can differentiate between vulnerabilities that are actually fixed and those that simply weren’t detected.

Contributors to this article: Jason Falciola, Steve Ouzman, Karl G. Schrade, and Leif Kremkow.

Integrated Vulnerability Risk Management and IT Risk and Compliance with QualysGuard and Modulo

Understanding overall security and compliance risk is an integral part of a risk management program. Integrations of security and compliance solutions have provided some insight into this risk, but they often lack a true picture of security risk, as organizations are challenged with hundreds or even thousands of vulnerability detections every day.

Integrating QualysGuard and Modulo

Modulo provides a simple mechanism for importing asset and vulnerability data into Modulo Risk Manager. With Modulo Risk Manager, QualysGuard Vulnerability Management data is tightly integrated into the risk management program, allowing vulnerability risk to be correlated with other risks, controls, and assets, providing a holistic management perspective of the most important risks.

Integration Benefits

Asset Synchronization and Correlation

Scheduled imports of assets from QualysGuard Vulnerability Management keep your asset management module constantly updated with new assets and vulnerabilities.  In addition, these assets can be correlated with other business assets in Modulo to understand business risk.

[Figure: Displaying Qualys vulnerabilities at the asset level]

Holistic IT Risk Approach

QualysGuard Vulnerability Management data is automatically collected and integrated into the risk management program, allowing vulnerability risk to be correlated with other risks, controls, and assets, providing a holistic management perspective of the most important risks.  In addition, the formula used to calculate the Risk Score for vulnerabilities can be customized using the following variables (a hypothetical combination is sketched after the figure below):

    • Asset Criticality
    • Asset Relevance
    • CVSS Score
    • Vulnerability Level
    • Vulnerability Type

[Figure: Evaluation of Qualys vulnerabilities in Risk Manager]
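
To illustrate how these variables might combine, here is a hedged sketch of a weighted risk formula. The weights, scaling, and combining function are hypothetical, not Modulo’s actual formula, which is configurable per deployment:

    # Hypothetical weighted risk score over the variables listed above.
    def risk_score(asset_criticality, asset_relevance, cvss,
                   vuln_level, vuln_type_weight):
        """criticality/relevance/level on 1-5 scales, cvss on 0-10,
        vuln_type_weight on 0-1; returns a 0-100 score."""
        exposure = (cvss / 10) * (vuln_level / 5) * vuln_type_weight
        business = (asset_criticality / 5) * (asset_relevance / 5)
        return round(100 * exposure * business, 1)

    # Same CVSS 9.3, level-5 vulnerability on two different assets:
    print(risk_score(5, 5, 9.3, 5, 1.0))   # 93.0 on a critical asset
    print(risk_score(2, 3, 9.3, 5, 1.0))   # 22.3 on a less critical one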

Prioritized Remediation

This integration allows customers to prioritize not only compliance risks but also security risks: they can manage remediation efforts across the organization, prioritize large amounts of vulnerability data using a mature and reliable approach, produce compliance documentation, and make more accurate decisions.

[Figure: Treatment of Qualys vulnerabilities in Risk Manager]

For more information regarding this integration, please see the press release, Qualys and Modulo Showcase Integrated Vulnerability Management with IT GRC.

Windows Share Enumeration, Detailed Audit Settings, and ExploitKit Mapping

Sometimes it’s the little things that make your day run more smoothly.  The release of QualysGuard 6.19 includes highly focused new features that add functionality for Windows systems. Also, an update to the Qualys KnowledgeBase identifies vulnerabilities that can be attacked via exploit kits, helping organizations prioritize their patching efforts and protect against these high-exposure vulnerabilities.

Windows Share Enumeration: Find Windows shares that are readable by everyone, and report details like the number of files in the share and whether the files are writable. This is good for identifying groups of files that may need tighter access control.
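
Conceptually, the check works like the following sketch, which counts the files under a share and tests their writability. The share path is a placeholder, and the real feature runs via the dissolvable agent rather than a script:

    # Sketch: given a reachable share path, count files and check writability.
    import os

    share = r"\\HOST\public"   # hypothetical world-readable share
    files = []
    for root, _dirs, names in os.walk(share):
        files.extend(os.path.join(root, n) for n in names)
    writable = sum(os.access(f, os.W_OK) for f in files)
    print(f"{len(files)} files in {share}; {writable} writable")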

Detailed Audit Settings: Verify auditing subcategory settings introduced in Windows Vista, Windows 7, and Windows Server 2008. You can now check all of the audit logging settings within Windows.

Both of the above features require the new dissolvable agent, which is configured via a new workflow for easier activation.  Details in the 6.19 Notification.

ExploitKit Mapping: If a vulnerability can be attacked via an exploit kit, it should be considered higher priority, simply because of the larger number of people who can easily attempt to exploit it. The new ExploitKit Mapping in the KnowledgeBase makes it easier to identify these vulnerabilities and prioritize their remediation.
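
As a sketch of how this mapping might be consumed programmatically, the following pulls the KnowledgeBase and flags QIDs carrying exploit-source correlations, among which exploit kit mappings appear. The element names (VULN, EXPLT_SRC, SRC_NAME) are assumptions to verify against your API version’s KnowledgeBase DTD, and the credentials are placeholders:

    # Sketch: flag KnowledgeBase QIDs that have exploit-source correlations.
    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get(
        "https://qualysapi.qualys.com/api/2.0/fo/knowledge_base/vuln/",
        params={"action": "list", "details": "All"},
        headers={"X-Requested-With": "integration-script"},
        auth=("USERNAME", "PASSWORD"),   # placeholder credentials
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for vuln in root.iter("VULN"):
        # Assumed structure: CORRELATION/EXPLOITS/EXPLT_SRC/SRC_NAME
        sources = [s.findtext("SRC_NAME") for s in vuln.iter("EXPLT_SRC")]
        if sources:
            print(vuln.findtext("QID"), "exploitable via:", ", ".join(sources))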

Integrating Qualys Data with RSA Archer

Is your organization using RSA Archer to manage your governance, risk and compliance program? Would you like to integrate vulnerability and configuration data from Qualys? RSA Archer integrates with Qualys Vulnerability Management (VM) through the Qualys XML APIs.

Why RSA Archer?

RSA Archer is the leading enterprise governance, risk and compliance (GRC) solution. Qualys, Inc. is the leading provider of on-demand IT security risk and compliance management solutions, delivered as a service. Since Qualys and RSA Archer have a large number of joint customers, it was logical to integrate our solutions, allowing customers to maximize their investment in both.

Vulnerability Management

Using the Qualys VM scanning infrastructure, vulnerability data can be collected for all enterprise assets in an automated and accurate manner. This integration automatically updates RSA Archer with asset vulnerability data to be used in remediation efforts.

RSA Archer’s integration leverages the Qualys XML API frameworks.

Integration Details

For full integration details with RSA Archer, please see Qualys / RSA Archer integration.