Qualys Blog

www.qualys.com
10 posts

How to Avoid Account Lockouts When Scanning Web Applications

Organizations that use automated scanners to test the security of their web apps must watch out for cases where these tools inadvertently trigger user account lockouts. Here we explain why this occurs and offer tips for preventing it with Qualys Web Application Scanning (WAS).

Continue reading …

Interview: Michael Calvi, author of “Multiple Scanner Appliance Selector”

Multiple Scanner Appliance Selector is an open source tool written by Michael Calvi that automates the dynamic assignment of scanners to QualysGuard target hosts. The tool helps increase scanning efficiency across large networks. Given the niche problem Michael chose to solve, I wanted to learn more about it.

Continue reading …

Automate the Delivery of Security Intelligence for New Assets

As 2013 comes to a close, enterprise partnerships and mergers and acquisitions in the tech sector have continued to occur at billion-dollar levels. One can infer there is much to gain from acquiring the confidential intellectual property of others. The true puzzle is understanding whether that intellectual property is, in fact, truly confidential. After all, what is the value in acquiring trade secrets if they are not secret?

Continue reading …

QualysGuard WAS Wins Readers’ Choice Award for Application Security

Qualys today announced that QualysGuard WAS won the 2013 Information Security™ Magazine (ISM) and SearchSecurity.com™ Readers’ Choice Award for Application Security. The eighth annual Readers’ Choice Awards surveyed Information Security readers, asking for their votes and ratings on top security products in 19 categories, with participants asked to vote only on the products in use in their environments. The QualysGuard WAS cloud service was rated the highest in the Application Security category for helping enterprises identify and remediate issues before attackers can find and exploit them.

Continue reading …

DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money

LAS VEGAS — When John Streufert was CISO at the U.S. State Department he saw that the agency was losing a lot of money and wasting a lot of employee time trying to defend against cyber attacks. And despite all the audits and reports, the defense wasn’t working – the bad guys were getting in and stealing data.

Video: John Streufert Keynote at QSC

So, he oversaw a move to continuous monitoring of the network that reduced security risk by as much as 90%, he said in a keynote at the Qualys Security Conference 2013 today. Specifically, they were able to identify the worst problems in minutes rather than years, fix the worst problems in days as opposed to months, and bring costs down from about $600 million to about $200 million per year.

Now, Streufert is bringing that same game plan to the Department of Homeland Security, where he is Director of Federal Network Resilience. “We are in the process of making a shift in the federal government as to how we handle our security challenges,” he said. “Continuous Diagnostics and Mitigation can stop 85% of cyber related attacks” and report on attacks in near real time, as well as enable system administrators to respond to exploits much faster.

The system can help the agency avoid being low-hanging fruit.  According to CSIS and Verizon reports: 75% of attacks use known vulnerabilities that could be patched; more than 90% of successful attacks require only the most basic techniques; and 96% of them could be avoided if there had been simple or intermediate controls in place.

At the State Department the statistics of the environment before the changes made for a strong economic case, Streufert said:

  • Every three days there were trillions of security events; millions of attempted attacks; thousands of new flaws introduced; and hundreds of successful attacks.
  • Every three months there were over 10,000 successful attacks; terabytes of data stolen; 7,200 reports written; and hundreds of labor hours wasted.
  • Every three years there were thousands of assessments and other reports written, each requiring 3-9 months to prepare and out of date the moment it was printed; and the data provided only a snapshot in time rather than real-time identification and mitigation of problems.

These manual processes, reports and audits cost between $600 million and $1.9 billion a year, or $1,400 per page, and result in the equivalent of 438 feet of paperwork. They also consume as much as 65% of the overall IT security effort in the agencies involved, according to Streufert.

He was asked to go to DHS to work on moving the agency from a cybersecurity defense strategy modeled on process and compliance to one focused on continuous diagnostics and mitigation. The first phase will be completed this year, the second phase next year and the final phase in 2015. The cost will be about $600 million over three years.

Update: See the attachments for a data sheet describing the US Department of Homeland Security Continuous Diagnostics and Mitigation program.

Attachments

Continuous Diagnostics and Mitigation Program 175.4 K

Automate Host Discovery with Asset Tagging

Let’s assume you know where every host in your environment is. Wasn’t that a nice thought? The reality is that your environment is probably changing constantly. Knowing is half the battle, and this kind of network reconnaissance is essential to defending your environment.

Tag, you’re mapped!

A common use case for performing host discovery is to focus scans against certain operating systems. This can be done a number of ways in QualysGuard, historically via maps or light scans followed by a manual workflow. Today, QualysGuard’s asset tagging can be leveraged to automate this very process. By dynamically tagging hosts by their operating system, one can split up scanning into the following:

  1. Frequent light scans that update QualysGuard with the current mapping of your network via dynamic asset tags.
  2. Targeted complete scans against tags which represent hosts of interest.
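
As a rough illustration of this two-step workflow, the sketch below (in Python) launches a light discovery scan and then a targeted scan against a dynamic tag using the QualysGuard VM API. The endpoint and parameter names (scan launch via /api/2.0/fo/scan/ with target_from=tags and tag_set_include), the option profile titles, and the tag names are assumptions for illustration; verify them against your own API documentation and subscription.

    # Illustrative sketch only -- endpoint and parameter names are assumptions,
    # not taken from this article; verify against your Qualys API documentation.
    import requests

    API_BASE = "https://qualysapi.qualys.com"      # varies by platform
    AUTH = ("api_username", "api_password")        # hypothetical credentials
    HEADERS = {"X-Requested-With": "tag-scan-example"}

    def launch_scan(title, option_title, tag_names):
        """Launch a VM scan against hosts carrying the given asset tags."""
        params = {
            "action": "launch",
            "scan_title": title,
            "option_title": option_title,          # light-discovery or full profile
            "target_from": "tags",
            "tag_include_selector": "any",
            "tag_set_by": "name",
            "tag_set_include": ",".join(tag_names),
        }
        resp = requests.post(API_BASE + "/api/2.0/fo/scan/",
                             params=params, auth=AUTH, headers=HEADERS)
        resp.raise_for_status()
        return resp.text                           # XML containing the scan reference

    # 1. Frequent light scan that refreshes the dynamic OS tags across the network.
    launch_scan("Weekly discovery", "Light Discovery Profile", ["All Internal Hosts"])

    # 2. Targeted complete scan against the tagged hosts of interest.
    launch_scan("Monthly full scan - Windows", "Full Scan Profile", ["Windows Servers"])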

Below, we step through how to set up QualysGuard to do exactly this.

Continue reading …

How to Scan Across an Air Gap with QualysGuard

An air gap network, sometimes called an isolated network, is a set of systems that are intentionally isolated from the Internet or other networks for increased security. If there is an air gap, i.e. no physical connection between your systems and unsecured networks or the Internet, then you have better protection against data leakage or intrusion.

Air gap networks are most common in production or manufacturing environments, such as nuclear power plants or where SCADA-type systems are installed; in military or government organizations; and in sensitive financial applications like stock exchanges. Despite the air gap, these environments can still require security audits to ensure that other defense-in-depth controls are in place and working properly. For example, it is believed that the well-known Stuxnet virus entered its target systems via a USB stick, showing that an air gap is not a foolproof security measure. In these cases, Qualys sees demand from our customers for a QualysGuard scan across the air gap.

Continue reading …

Meeting PCI Requirement 11.2 with QualysGuard

Your PCI 11.2 Checklist and Toolbox

Merchants are getting ready for the upcoming changes to the internal scanning requirements for PCI compliance. This blog post provides a checklist of what you should have ready and reviews some of the tools Qualys provides for meeting these requirements.

There are four core areas to focus on in preparing for compliance with PCI 11.2, taking into account the changes from PCI 6.2 regarding risk ranking of vulnerabilities.

  1. Your documented PCI scope (cardholder data environment)
  2. Your documented risk ranking process
  3. Your scanning tools
  4. Your scan reports

Merchants will need to complete each of these elements to be prepared to pass PCI compliance.

1. Your documented PCI scope (cardholder data environment)

All PCI requirements revolve around the cross-section of assets in your IT infrastructure that is directly involved in storing, processing, or transmitting payment card information. These IT assets are known as the cardholder data environment (CDE), and they are the focus of the PCI DSS requirements.

These assets can exist in internal or external (public) networks and may be subject to different requirements based on what role they play in payment processing. These assets can be servers, routers, switches, workstations, databases, virtual machines or web applications; PCI refers to these assets as system components.

QualysGuard provides a capability to tag assets under management.  The screenshot below shows an example of PCI scope being defined within the QualysGuard Asset Tagging module.  It provides the ability to group internal assets (for 11.2.1), external assets (for 11.2.2), and both internal and external assets together (for 11.2.3).

[Screenshot: PCI scope defined with asset tags in the QualysGuard Asset Tagging module]

This allows you to maintain documentation of your CDE directly, and to drive your scanning directly from your scope definition.

2. Your documented risk ranking process

This is the primary requirement associated with the June 30th deadline; the documented process is the reference that should allow someone to reproduce your risk rankings for specific vulnerabilities.

The requirement references industry best practices, among other considerations, for developing your risk ranking. It may help to quickly adopt a common industry best practice and adapt it to your own environment. Two examples are the Qualys severity rating system, the default rating assigned by the Qualys security research team, and the PCI ASV Program Guide, which includes a rating system used by scanning vendors for external scanning. QualysGuard is used by 50 of the Forbes Global 100 and spans all market verticals, so it qualifies as an industry best practice. Additionally, the QualysGuard platform is used by the majority of PCI Approved Scanning Vendors and already delivers rankings consistent with the PCI ASV Program Guide.

The core rules of your risk rankings should take into account CVSS Base Scores, which are available from nearly all security intelligence feeds and are also the basis of the rating system used in the PCI ASV Program Guide. Your process should also account for the system components in your cardholder data environment and for vendor-provided criticality rankings, such as the Microsoft patch ranking system if your CDE includes Windows-based system components.
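
As a minimal sketch of what a documented, reproducible ranking rule might look like, the snippet below keys the ranking off the CVSS base score plus an optional vendor criticality flag. The thresholds are illustrative assumptions, not values taken from the PCI ASV Program Guide; your documented process defines the real cut-offs.

    def risk_rank(cvss_base, vendor_critical=False):
        """Assign a PCI 6.2-style risk ranking to a newly discovered vulnerability."""
        # Illustrative thresholds only -- document your own in the ranking process.
        if vendor_critical or cvss_base >= 7.0:
            return "High"          # must be resolved for a passing internal scan
        if cvss_base >= 4.0:
            return "Medium"
        return "Low"

    # Example: CVSS 6.8 but flagged critical by the vendor's patch ranking -> High.
    print(risk_rank(6.8, vendor_critical=True))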

The process should include documentation that details the sources of security information you follow, how frequently you review the feeds, and how you respond to new information in the feeds.  QualysGuard provides daily updates to the vulnerability knowledgebase and now offers a Zero-Day Analyzer service, which leverages data from the iDefense security intelligence feed.

[Screenshot: Zero-Day Analyzer service]

3. Your scanning tools

After you have your scope clearly defined and your process for ranking vulnerabilities documented, you will need to be able to run vulnerability scans. This includes internal VM scans, external VM scans, PCI ASV scans (external), internal web application scans and external web application scans. It is the findings from these scans that will map against your risk ranking process and allow you to produce the necessary scan reports.

You will need to be able to configure your scanning tools to check for “high” vulnerabilities, which will allow you to allocate resources to fix and resolve these issues as part of the normal vulnerability management program and workflow within your environment.

QualysGuard VM, QualysGuard WAS and QualysGuard PCI all work together seamlessly to provide each of these scan capabilities against the same group of assets that represents your PCI scope or CDE.

[Screenshot: launching a scan by asset tag]

4. Your scan reports

You will want to produce reports for your internal PCI scope, as defined in #1 of this checklist, both quarterly and after any significant changes.  If you have regular releases or updates to your IT infrastructure, you will want to have scan reports from those updates and upgrades. Quarterly scan reports need to be spaced apart by 90 days.  In all cases, these reports need to show that there are no “high” vulnerabilities detected by your scanning tools.

Each report for the significant change events will also need to include external PCI scope. QualysGuard VM makes it easy to include both internal and external assets in the same report.  QualysGuard VM also provides a direct link to your QualysGuard PCI merchant account for automation of your PCI ASV scan requirements.

[Screenshot: linking QualysGuard VM to a QualysGuard PCI merchant account]

QualysGuard WAS allows you to quickly meet your production web application scanning requirement (PCI 6.6) as well as internal web application scanning as part of your software development lifecycle (SDLC), by scanning your applications in development and in test. 

If you follow these guidelines you will be well prepared to perform and maintain the required controls for PCI 11.2.

Passing the Internal Scan for PCI DSS 2.0

Merchants subject to Payment Card Industry Data Security Standard (PCI DSS) rules are often blindsided by DSS changes, the arrival of new payment technologies, and newly emerging business contexts. In addition, many organizations still narrowly focus on the annual PCI assessment instead of on running an ongoing compliance program. This article provides insight into the updated PCI DSS requirement, highlighting the need for internal vulnerability scanning ("perform quarterly internal vulnerability scans"), which was less visible in previous versions.

Whether you are facing PCI compliance for the first time or have been PCI compliant in the past, you may already know what it means to have a passing external scan: a PCI Approved Scanning Vendor (ASV) performs a vulnerability assessment of your public IP address space according to the guidelines issued by the PCI Security Standards Council (SSC) in the ASV Program Guide. Typically, it also means that your public IP address space does not contain any vulnerabilities with a CVSS score of 4.0 or higher, or that you have compensating controls in place to mitigate any vulnerabilities in your public IP address space.

Internal Vulnerability Assessment

Beginning June 30th of this year, the PCI SSC will require that you also show proof of passing an internal vulnerability assessment. This requirement is detailed in PCI DSS Requirements 11.2.1 and 11.2.3, which describe the testing procedures for internal vulnerability assessments. The key aspects of these assessments are that they must be completed quarterly and after any significant change, and they must be performed by qualified internal or external resources. Lastly, the assessments must document a “passing result.”

[Screenshot: PCI DSS Requirement 11.2 testing procedures]

To obtain passing results, the PCI DSS references that “all ‘High’ vulnerabilities defined in PCI DSS Requirement #6.2 are resolved.”  The basic requirements are that you are able to perform a vulnerability assessment of your internal IP address space and that you are able to show that your environment does not have any “High” vulnerabilities, which is the subtle change from prior standards.

The purpose of PCI DSS Requirement #6.2 is to define the process by which you identify vulnerabilities that are to be considered “High,” “Medium,” and “Low.” Specifically, PCI DSS Requirement #6.2 states: “Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities.” The requirement also includes notes describing how risk rankings should take into consideration industry best practices and other criteria unique to your own environment; this can include CVSS base scores, vendor-supplied patch rankings, and the criticality of the underlying system components themselves.

The key aspect of PCI Requirement #6.2 is that you have a list of vulnerabilities that you (and your organization) have ranked according to your own process. Then you need to leverage these risk rankings in your vulnerability assessment against your internal IP address space. This will allow you to produce a report that shows a passing scan against your internal scope based on the risk rankings of vulnerabilities you have specified.
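
A minimal sketch of that last step, assuming your findings have already been reduced to (host, title, CVSS base score) tuples and that you apply a ranking rule like the one sketched earlier, would be:

    # Illustrative data and thresholds only.
    findings = [
        ("10.0.1.5", "Outdated OpenSSL", 7.5),
        ("10.0.1.9", "Self-signed certificate", 4.3),
    ]

    def rank(cvss_base):
        return "High" if cvss_base >= 7.0 else "Medium" if cvss_base >= 4.0 else "Low"

    highs = [f for f in findings if rank(f[2]) == "High"]
    print("PASS" if not highs else "FAIL: %d High finding(s) remain" % len(highs))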

Quarterly Internal Scans

This brings us back to the requirement for internal scanning. It is important to remember that you need to perform these scans quarterly and after any significant change to your environment. This means you will want to make sure that however you assign risk rankings and use them in concert with your vulnerability assessment tool, the process is simple and repeatable. The ability to automatically produce an internal assessment report quarterly and after any change is a critical component of maintaining your PCI compliance.

It is also critical to review your PCI scope, which defines which IP addresses (both internal and external) are involved in the delivery of your payment card infrastructure. You will want to make sure that you can represent this scope in your vulnerability assessment tools to reduce the manual work that can be involved in managing scope changes and reporting.

Structured Approach

In conclusion, security teams should adopt a structured approach to dealing with PCI DSS changes: involve the relevant stakeholders, evaluate the impact of each change, and plan controls to close the gaps. This will help make any security program resilient to environmental and regulatory changes and ensure that the organization can maintain PCI compliance.

Automatic vs. Manual Data

“You know only insofar as you can measure.”
        – Lord Kelvin
          
“If you want it, measure it. If you can’t measure it, forget it.”
        – Peter Drucker

Measurement is critical in achieving objectives. But a more subtle factor drives your success: what you measure and how you measure it. These are what guide your actions. The measurement of vulnerabilities is no exception, and here the difference between automatic and manual data, and its implications, is the key factor.

So, what is the difference?

Manual data is a point-in-time snapshot of vulnerability data that is tied to a single scan and shows the vulnerability posture of the hosts at the time the scan was run.

Automatic data is data from multiple scans normalized into a database. It is the asset-centric history of vulnerability data, built out of the results of previous scans.

Simple enough, right?  Let’s examine the implications.

Assessment vs. Management

Manual data lets you assess vulnerabilities, but you need automatic data for vulnerability management.

Manual data shows you where you’re vulnerable at the time of the scan. You can think of manual data as a stack of file folders on the left side of your desk, with one folder corresponding to each scan. Inside each folder is a piece of paper containing the forensic record of the raw results from that point-in-time scan. The biggest limitation of this data model is that it lacks context and trending, since it is a snapshot of a point in time. For example, if you scanned on January 1 and found 500 vulnerabilities, then scanned the same assets on February 1 and found 300 vulnerabilities, what does that mean? Did you fix all 500 vulnerabilities from January and have 300 new vulnerabilities in February? Did you fix 200 vulnerabilities from January and have 300 left, but no new vulnerabilities in February? There are several other potential scenarios that would also need to be considered, and determining the answer with any degree of certainty is problematic at best.

If you only have access to manual data, you have to perform a manual monthly process with a custom spreadsheet to attempt to reconcile and normalize the results from scan to scan to show month-over-month trending.
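
As a tiny sketch of why that reconciliation stays ambiguous, assume each scan has been reduced to a set of (host, QID) pairs:

    # Illustrative only: naive month-over-month diff of two raw scan snapshots.
    january  = {("10.0.1.5", 90001), ("10.0.1.5", 91234), ("10.0.1.9", 90001)}
    february = {("10.0.1.5", 90001), ("10.0.1.9", 95555)}

    disappeared = january - february   # really fixed, or just not detected this time?
    appeared    = february - january   # really new, or simply missed in January?

    print(len(disappeared), "disappeared;", len(appeared), "appeared")
    # The raw snapshots alone cannot tell you which disappearances are genuine fixes.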

Another big problem with this data model is that it is difficult to track the lifecycle of a vulnerability on a particular host. For example, you should be careful not to assume that a vulnerability has been fixed just because you don’t find it in a subsequent scan. This is a poor assumption, as there is a huge difference between "fixed" and "not found". For example, if you first scan with authentication, then scan without authentication, many vulnerabilities won’t be detected in the second scan, simply because authentication wasn’t used. This does not mean that the vulnerabilities are actually fixed, and it can lead to a false sense of security.

Lifecycle of a Vulnerability

Automatic data addresses these limitations by introducing the concept of a vulnerability’s state and providing additional context that is valuable when managing the lifecycle.  Automatic data can be thought of as a large relational database on the right side of your desk that normalizes the results of every scan over time for each asset.  A vulnerability can have one of four states:

  • NEW: Detected for the first time
  • ACTIVE: Detected more than once
  • FIXED: Detected, then confirmed to be resolved by scanning in the *same* manner as originally detected – e.g. with authentication
  • REOPENED: Detected, confirmed to be remediated, then detected again.  This may be the result of a machine being re-imaged without all relevant patches being applied.

[Diagram: vulnerability state transitions]

Automatic data also gives users the ability to mark vulnerabilities as IGNORED and creates an audit trail of all the transitions. The IGNORED flag is complementary to the status: a vulnerability can be NEW/IGNORED or ACTIVE/IGNORED, for instance. It is a way to manage exceptions.
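
A rough sketch of this state model, assuming one simplified record per (host, vulnerability) pair, might look like the following; the transition rules mirror the NEW/ACTIVE/FIXED/REOPENED states above, including the rule that a fix only counts when the confirming rescan used the same method (e.g. authentication) as the original detection.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """Asset-centric history for one (host, vulnerability) pair -- illustrative only."""
        state: str = "NEW"             # NEW, ACTIVE, FIXED, or REOPENED
        detected_with_auth: bool = False
        ignored: bool = False          # IGNORED layers on top of the state (e.g. NEW/IGNORED)

        def record_scan(self, found: bool, with_auth: bool) -> None:
            """Update the state from one subsequent scan of this host."""
            if found:
                self.detected_with_auth = with_auth
                # Detected again after a confirmed fix -> REOPENED, otherwise ACTIVE.
                self.state = "REOPENED" if self.state == "FIXED" else "ACTIVE"
            elif with_auth or not self.detected_with_auth:
                # Only confirm a fix when the rescan used the same (or stronger)
                # method as the original detection: "not found" alone is not "fixed".
                self.state = "FIXED"
            # Otherwise (unauthenticated rescan of an authenticated detection):
            # leave the state unchanged rather than assume the issue is gone.

    d = Detection(detected_with_auth=True)           # first detection: NEW
    d.record_scan(found=True,  with_auth=True)       # detected again -> ACTIVE
    d.record_scan(found=False, with_auth=False)      # unauthenticated miss -> still ACTIVE
    d.record_scan(found=False, with_auth=True)       # authenticated rescan -> FIXED
    d.record_scan(found=True,  with_auth=True)       # detected after a fix -> REOPENED
    print(d.state)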

Trending and Reporting

In addition to a vulnerability’s state, automatic data allows us to report on when a vulnerability was first detected, when it was last detected, and the number of times it has been detected. Also, vulnerability status is tracked intelligently to account for different option profiles being used. For example, if a vulnerability is first detected using authentication, it will not be considered closed until a rescan *with authentication* confirms that the vulnerability has been resolved. This addresses the faulty assumption that not found = fixed, and it prevents the "saw tooth" trend results that can occur when scans are conducted with varying configurations (e.g. with / without authentication) over time.

This type of accurate trending information is valuable for correctly reporting an organization’s security posture and its progress (or lack thereof) in remediating vulnerabilities over time. Using the QualysGuard Detection API, this vulnerability state/trend information can be included in data integrated with third-party platforms (e.g. SIEM, GRC, etc.). Without automatic data, organizations are left with extremely manual, time-consuming, and error-prone approaches to measuring and tracking the effectiveness of their vulnerability management programs over time.
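
As a hedged example of such an integration pull, the snippet below uses the QualysGuard Host List Detection API to export per-host detections with their lifecycle status; the endpoint and parameter names are assumptions based on common usage and should be verified against the current API documentation.

    # Illustrative sketch -- verify endpoint and parameters against your API docs.
    import requests

    resp = requests.post(
        "https://qualysapi.qualys.com/api/2.0/fo/asset/host/vm/detection/",
        params={
            "action": "list",
            "status": "New,Active,Re-Opened,Fixed",  # full lifecycle, not just current
            "show_igs": 0,                           # skip information-gathered items
        },
        auth=("api_username", "api_password"),       # hypothetical credentials
        headers={"X-Requested-With": "detection-export-example"},
    )
    resp.raise_for_status()
    print(resp.text)   # XML: hosts with per-QID status, first/last detected timestamps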

Decoupling Reporting / Remediation from Scanning

One other main benefit of automatic data is that it allows the scanning and reporting/remediation efforts to be decoupled since all the data is tracked and normalized.  Scanning can be conducted according to location and reporting can be performed according to those responsible for remediation.

User Interface

The most obvious place the difference between manual and automatic data appears in the QualysGuard user interface is when editing a scan report template and choosing the Scan Results Selection:

[Screenshot: editing a scan report template – Scan Results Selection]

Automatic data is also used in “Status” and “Status with Trend” scan reports and Scorecard reports, as well as throughout the user interface including your dashboard, asset search results, remediation tickets and host information.

Automatic is the Way to Go

The difference between manual and automatic data is the difference between a vulnerability assessment program that identifies only current vulnerabilities and a vulnerability management program that drives the remediation of vulnerabilities over time. Automatic data makes QualysGuard the only vulnerability management solution that can differentiate between vulnerabilities that are actually fixed and those that simply weren’t detected.

Contributors to this article: Jason Falciola, Steve Ouzman, Karl G. Schrade, and Leif Kremkow.