Qualys Blog


Making Asset Inventory Actionable With a Cloud-Based System

As we’ve discussed in this blog series on automated IT asset inventory, having — or regaining — unobstructed visibility of your IT environment is key for a strong security and compliance posture.

We met Max, the CISO of a large manufacturer whose organization progressively lost this visibility as it adopted cloud computing, mobility, virtualization, IoT and other digital transformation technologies.

With the company’s IT environment upended and its network perimeter blurred, Max and the InfoSec team recovered control with a cloud-based, automated IT asset inventory system. This successful solution featured five key elements. In the previous posts, we addressed the first three.

This means that you need a complete and continuously updated list of IT assets, as well as granular security, compliance and system details on each one.

In this final post, we’ll explain the last two requirements for an effective cloud-based IT asset inventorying system:

  • Asset criticality rankings
  • Dashboarding and reporting


Automate On-Demand Reporting with Selenium

The Terminator exposed it, the DARPA Grand Challenge rewards it, and Selenium puts its future in your hands. "It" is man versus machine… well, sort of. With Selenium, you are in control. Why link Selenium to the Terminator? Because it is that powerful.

What is Selenium? Its homepage says it best: "Selenium automates browsers." For more information on how Selenium relates to Qualys, check out the excellent blog post by our WAS guru, Jason.

As explained in that blog post, Selenium scripts are often used to automate complex web app interactions, such as authentication, when scanning applications with QualysGuard Web Application Scanning. Here we introduce a different use case: automating a QualysGuard subscriber’s interaction with the QualysGuard Vulnerability Management user interface, so that a best practice can be demonstrated and adopted simply by running a Selenium script.
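
The test cases in this post use the Selenium IDE, but the same idea can be expressed programmatically. Below is a minimal sketch using Selenium’s Python WebDriver bindings; the login URL, form field names, and element IDs are illustrative placeholders, not the actual QualysGuard selectors.

```python
# Minimal sketch: drive a browser through a login form with Selenium WebDriver.
# The URL and element names are placeholders, not real QualysGuard selectors;
# adapt them to the page you are automating.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("https://qualysguard.example.com/login")  # placeholder URL

    # Fill in the credentials and submit the login form.
    driver.find_element(By.NAME, "username").send_keys("my_user")
    driver.find_element(By.NAME, "password").send_keys("my_password")
    driver.find_element(By.NAME, "login").click()

    # Wait until a post-login element appears before driving further steps.
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "dashboard"))  # placeholder ID
    )
    # Further UI automation (search lists, report templates) would go here.
finally:
    driver.quit()
```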

Make it happen

In a previous blog post about customizing Scorecard Reports, a fellow community member came up with a pretty good list of criteria for vulnerabilities to watch out for. Let’s take a closer look at creating a dynamic search list tailored to externally facing hosts. For such hosts, a good starting point for discovering their “worst of the worst” vulnerabilities is those that meet the following criteria (sketched as a simple filter after the list):

  • Remote (no authentication necessary) vulnerability.
  • Associated with a penetration testing toolkit (such as CORE or Exploit-DB).
  • Confirmed, severity 5 (easy segue to "game over").
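
As a rough illustration only (this is not QualysGuard search-list syntax), those three criteria could be written as a simple filter over vulnerability records; the field names below are hypothetical:

```python
# Hypothetical sketch: the three criteria above expressed as a filter.
# The field names (severity, confirmed, requires_auth, exploit_sources) are
# invented for illustration; they are not QualysGuard fields.

def worst_of_the_worst(vuln):
    """Return True when a vulnerability record matches all three criteria."""
    remotely_exploitable = not vuln["requires_auth"]
    has_exploit = bool(set(vuln["exploit_sources"]) & {"CORE", "Exploit-DB"})
    confirmed_sev5 = vuln["confirmed"] and vuln["severity"] == 5
    return remotely_exploitable and has_exploit and confirmed_sev5

sample = {
    "qid": 90001,                      # hypothetical QID
    "severity": 5,
    "confirmed": True,
    "requires_auth": False,
    "exploit_sources": ["Exploit-DB"],
}
print(worst_of_the_worst(sample))      # True
```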

Now, I could show you screenshots of how to create this, but it’s much easier to just create a Selenium script. By the way, it’s easier for you to run the script, too!

  1. Install Firefox and the Selenium IDE extension.
  2. Download the Selenium scripts from the bottom of the script document.
  3. Open Firefox, log in to QualysGuard. While in QualysGuard, make sure you do not already have a search list with the name, "Remote exploit-available confirmed sev5 (Selenium)", or the script will error out.
  4. From Firefox, Tools menu –> Selenium IDE
  5. From Selenium IDE, File –> Open –> Open test case –> “create/Selenium test case, QualysGuard, create dynamic search list – remote exploit-available confirmed sev5.html”. Check out the source; it is commented so you can see how it breaks down building the search list.
  6. Optional step. The script works at any speed, but if you would like to actually watch it work, I recommend slowing the execution down. Drag the speed bar from Fast to Slow so it’s easier to follow the script.
    [Screenshot: Selenium IDE speed slider, Fast to Slow]
  7. We are ready to run the script, also known as a test case in Selenium. Click on the "Play current test case" button. (Note this will play the test case that is shown in the right column under "Table | Source".)
    [Screenshot: Selenium IDE “Play current test case” button]
  8. Congratulations! You now have the dynamic search list we architected from our example. It’s called "Remote exploit-available confirmed sev5 (Selenium)".

You should see the dynamic search list under the "Search Lists" subtab (blue bar).

[Screenshot: search list name]

Note that the criteria are optimized for externally facing hosts, as described above:

[Screenshot: search list criteria]

Make it actionable

Pretty neat stuff, huh? But a search list on its own is of limited use. Let’s create a report template that builds on it. Scratch that: let’s automate the creation of a report template that uses it.

  1. Make sure you are still logged into QualysGuard inside of Firefox.
  2. The test case creates a report template named "Remote exploit-available confirmed sev5 (Selenium)". So you want to make sure you do not already have a report template with the same name.
  3. Load up the test case (“create/Selenium test case, QualysGuard, create report – remote exploit-available confirmed sev5.html”) in the Selenium IDE. Check out the source; it is commented to explain how it builds the report.
  4. Run it! It works at any speed.

After the test case completes, you will have a report template that uses the dynamic search list we just created.

[Screenshot: report template name]

Note the template filters against the search list we just created:

[Screenshot: report template filters]

Make it easy

Now that we have built individual test cases for creating a search list and a report template, let’s merge them into one step. Rather than copying and pasting rows from one script into another, we can use a much cleaner mechanism the Selenium IDE offers: test suites.

A test suite groups test cases together: each test case stays separate, which makes troubleshooting easier, while the suite runs end to end as a single unit. To run a test suite, we click a different button in the Selenium IDE:

[Screenshot: Selenium IDE play test suite button]
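
For readers who prefer scripting to the IDE, the same grouping idea can be mimicked with Python’s standard unittest module, which is a common way to organize Selenium WebDriver tests. The test bodies below are placeholders rather than the actual QualysGuard steps:

```python
# Sketch: mimic a Selenium IDE test suite with Python's unittest.
# The test bodies are placeholders; in practice each would drive the browser
# through the corresponding QualysGuard workflow with Selenium WebDriver.
import unittest

class CreateSearchList(unittest.TestCase):
    def test_create_dynamic_search_list(self):
        # WebDriver steps to create the dynamic search list would go here.
        pass

class CreateReportTemplate(unittest.TestCase):
    def test_create_report_template(self):
        # WebDriver steps to create the report template would go here.
        pass

def suite():
    """Run both test cases in order, like playing an entire suite in the IDE."""
    s = unittest.TestSuite()
    s.addTest(CreateSearchList("test_create_dynamic_search_list"))
    s.addTest(CreateReportTemplate("test_create_report_template"))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())
```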

Let’s get started on running one together.

  1. Confirm you are still logged into QualysGuard.
  2. Before we recreate the report template and search list, we will need to delete the existing ones, as QualysGuard requires unique names for each. You can either delete them manually or run the test suite “Selenium test suite, QualysGuard, delete remote exploit-available confirmed sev5 report.html”, which does it for you (try it, it’s not just for the lazy!). Remember to click the play button with multiple lines to its right; it will delete both the report template and the search list.
  3. Open the test suite, "Selenium test suite, QualysGuard, create remote exploit-available confirmed sev5 report" that combines the above test cases.
  4. Run the entire suite. It will start to create the search list from the first test case, and then automatically progress to create the report template from the second test case.
  5. Congrats! You now have both a search list and report template. Easy, huh?

Make yours the next big hit

Now that we can create search lists and report templates associated with those search lists in one simple step, what’s next? Collaborate! Share your favorite search list, report template, or both. Feel free to comment with the criteria or QIDs (creation of static search lists can be automated as well).

If you are feeling ambitious and want to create Selenium scripts like these for your own reports, feel free to modify the ones we demoed. I am here to help if you have questions; just comment on this post. To get things started, I shared one more test suite that you may find useful (great for creating policies in Remediation):

Dynamic search lists: Confirmed severity 5, 4, and 3

You can find the above script (and more to come!) by searching for the tag, selenium_script. Help us grow QualysGuard automation by contributing! If you need help or have a request, just comment on this post… I’ll be back.

Automatic vs. Manual Data

“You know only insofar as you can measure.”
        – Lord Kelvin
          
“If you want it, measure it. If you can’t measure it, forget it.”
        – Peter Drucker

Measurement is critical to achieving objectives. But a more subtle factor drives your success: what you measure and how you measure it. These are what guide your actions.  The measurement of vulnerabilities is no exception, and here the key factor is the difference between automatic and manual data and what that difference implies.

So, what is the difference?

Manual data is a point-in-time snapshot of vulnerability data that is tied to a single scan and shows the vulnerability posture of the hosts at the time the scan was run.

Automatic data is data from multiple scans normalized into a database. It is the asset-centric history of vulnerability data, built out of the results of previous scans.

Simple enough, right?  Let’s examine the implications.

Assessment vs. Management

Manual data lets you assess vulnerabilities, but you need automatic data for vulnerability management.

Manual data shows you where you’re vulnerable at the time of the scan.  You can think of manual data as a file folder on the left side of your desk with a folder corresponding to each scan.  Inside each folder is a piece of paper containing the forensic record of the raw results from that point-in-time scan.  The biggest limitation of this data model is that it lacks context and trending since it is a snapshot of a point in time.  For example, if you scanned on January 1 and found 500 vulnerabilities, then scanned the same assets on February 1 and found 300 vulnerabilities, what does that mean?  Did you fix all 500 vulnerabilities from January and have 300 new vulnerabilities for February?  Did you fix 200 vulnerabilities from January and have 300 left, but no new vulnerabilities in February?  There are several other potential scenarios that would also need to be considered, and determining the answer with any degree of certainty is problematic at best.
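
To make the ambiguity concrete, here is a small sketch (with made-up detection sets) showing that two very different remediation outcomes produce identical raw counts, and that only comparing the underlying detections from scan to scan can tell them apart:

```python
# Made-up example: two scenarios with identical snapshot counts (500, then 300).
# Counts alone cannot distinguish them; comparing detections per host/QID can.

january = set(range(1, 501))         # 500 detections (host/QID pairs)

# Scenario A: all 500 fixed, but 300 brand-new vulnerabilities appear.
february_a = set(range(501, 801))    # 300 detections, none carried over

# Scenario B: 200 fixed, 300 of the original detections remain, none new.
february_b = set(range(1, 301))      # 300 detections, all carried over

for name, february in (("A", february_a), ("B", february_b)):
    print(f"Scenario {name}: count={len(february)}, "
          f"fixed={len(january - february)}, new={len(february - january)}")
# Scenario A: count=300, fixed=500, new=300
# Scenario B: count=300, fixed=200, new=0
```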

If you only have access to manual data, you have to perform a manual monthly process with a custom spreadsheet to attempt to reconcile and normalize the results from scan to scan to show month-over-month trending.

Another big problem with this data model is that it is difficult to track the lifecycle of a vulnerability on a particular host.  In particular, you should be careful not to assume that a vulnerability has been fixed just because it is not found in a subsequent scan.  This is a poor assumption, as there is a huge difference between "fixed" and "not found".  For example, if you first scan with authentication and then scan without authentication, many vulnerabilities won’t be detected in the second scan, simply because authentication wasn’t used.  That does not mean the vulnerabilities are actually fixed, and the assumption can lead to a false sense of security.

Lifecycle of a Vulnerability

Automatic data addresses these limitations by introducing the concept of a vulnerability’s state and providing additional context that is valuable when managing the lifecycle.  Automatic data can be thought of as a large relational database on the right side of your desk that normalizes the results of every scan over time for each asset.  A vulnerability can have one of four states (a minimal sketch of this state logic follows the list):

  • NEW: Detected for the first time
  • ACTIVE: Detected more than once
  • FIXED: Detected, then confirmed to be resolved by scanning in the *same* manner as originally detected – e.g. with authentication
  • REOPENED: Detected, confirmed to be remediated, then detected again.  This may be the result of a machine being re-imaged without all relevant patches being applied.
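
To make the lifecycle concrete, here is a minimal sketch (not Qualys code) of how these four states could be derived for a single host/QID pair from a chronological list of scan observations. It assumes each observation already accounts for the "same manner" rule (an absence only counts if the rescan used the same method, e.g. authentication, as the original detection); the real QualysGuard behavior may differ in its details:

```python
# Minimal sketch (not Qualys code): derive NEW / ACTIVE / FIXED / REOPENED for
# one host/QID pair from a chronological sequence of scan observations.
# True = detected in that scan; False = confirmed absent by an equivalent scan
# (same manner as the original detection, e.g. authenticated).

def vulnerability_state(observations):
    state = None
    for found in observations:
        if found:
            if state is None:
                state = "NEW"        # detected for the first time
            elif state == "FIXED":
                state = "REOPENED"   # came back after being confirmed fixed
            else:
                state = "ACTIVE"     # detected more than once
        elif state in ("NEW", "ACTIVE", "REOPENED"):
            state = "FIXED"          # confirmed resolved by an equivalent scan
    return state

print(vulnerability_state([True]))               # NEW
print(vulnerability_state([True, True]))         # ACTIVE
print(vulnerability_state([True, True, False]))  # FIXED
print(vulnerability_state([True, False, True]))  # REOPENED
```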

[Diagram: vulnerability lifecycle states]

Automatic data also enables users to mark vulnerabilities as IGNORED, and it creates an audit trail of all the transitions. The IGNORED flag is complementary to the state: a vulnerability can be NEW/IGNORED or ACTIVE/IGNORED, for instance. It is a way to manage exceptions.

Trending and Reporting

In addition to a vulnerability’s state, automatic data allows us to report on when a vulnerability was first detected, last detected, and the number of times it has been detected.  Also, vulnerability status is tracked intelligently to account for different option profiles being used.  For example, if a vulnerability is first detected using authentication, it will not be considered closed until a rescan *with authentication* confirms that the vulnerability has been resolved.  This addresses the limitation of the assumption that not found = fixed. And it prevents "sawtooth" trend results that can happen when scans are conducted with varying configurations (e.g. with / without authentication) over time.

This type of accurate trending information is valuable for correctly reporting an organization’s posture and its progress (or lack thereof) in remediating vulnerabilities over time.  Using the QualysGuard Detection API, this vulnerability state/trend information can also be included in data integrated with third-party platforms (e.g. SIEM, GRC, etc.).  Without automatic data, organizations are left with extremely manual, time-consuming, and error-prone approaches to measuring and tracking the effectiveness of their vulnerability management programs over time.
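
As a rough sketch of such an integration, the snippet below pulls per-host detection data, including each detection’s state, over the Qualys API v2 Host List Detection endpoint. The endpoint path, parameter names, and the X-Requested-With header reflect the API v2 conventions as best I recall them; verify them against the API user guide for your platform (the API base URL also varies by platform):

```python
# Rough sketch: pull per-host detection data (with vulnerability state) from
# the QualysGuard API v2 so it can be fed into a SIEM/GRC platform.
# Endpoint path, parameter names, and the X-Requested-With header are taken
# from the Qualys API v2 documentation as recalled here; verify before use.
import requests

API_BASE = "https://qualysapi.qualys.com"   # varies by Qualys platform

response = requests.post(
    f"{API_BASE}/api/2.0/fo/asset/host/vm/detection/",
    auth=("my_qualys_user", "my_qualys_password"),
    headers={"X-Requested-With": "python requests"},
    data={
        "action": "list",
        "status": "New,Active,Re-Opened,Fixed",  # include lifecycle state
        "severities": "5",                       # focus on severity 5 first
    },
    timeout=300,
)
response.raise_for_status()
print(response.text[:2000])  # XML with per-QID status, first/last found dates
```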

Decoupling Reporting / Remediation from Scanning

One other main benefit of automatic data is that it allows the scanning and reporting/remediation efforts to be decoupled, since all the data is tracked and normalized.  Scanning can be organized by location, while reporting can be organized around the teams responsible for remediation.

User Interface

The most obvious place the difference between manual and automatic data appears in the QualysGuard user interface is when editing a scan report template and choosing the Scan Results Selection:

[Screenshot: Edit Scan Report Template, Scan Results Selection]

Automatic data is also used in “Status” and “Status with Trend” scan reports and Scorecard reports, as well as throughout the user interface including your dashboard, asset search results, remediation tickets and host information.

Automatic is the Way to Go

The difference between manual and automatic data is the difference between a vulnerability assessment program that identifies only current vulnerabilities and a vulnerability management program that drives the remediation of vulnerabilities over time. Automatic data makes QualysGuard the only vulnerability management solution that can differentiate between vulnerabilities that are actually fixed, versus those that simply weren’t detected.

Contributors to this article: Jason Falciola, Steve Ouzman, Karl G. Schrade, and Leif Kremkow.