
QSC 2013: The Future is Continuous Security

It was a great time all around at the 2013 Qualys Security Conference. The event drew plenty of bright, energetic security professionals who are deeply engaged in the work of protecting their organizations against advanced threats. The opportunity to take part in so many quality conversations with such professionals is something that just isn’t possible at the mega-conferences.

At the show, attendees enjoyed a preview of features that are upcoming in the QualysGuard Cloud Platform, as well as insight on QualysGuard’s continuous monitoring capabilities.

As Elinor Mills covered in her post, Qualys CEO Courtot in QSC Keynote Says Security Should Be Felt, But Not Seen, details on product enhancements were shared, including an increased focus on web application security and an expanded notion of continuous monitoring of the network perimeter.

The challenges associated with continuous monitoring – vetting systems for weaknesses and policy posture frequently enough to mitigate attack risk – were a significant focus of the conference. One of the highlights was the keynote by John Streufert, Director of Federal Network Resilience (FNR) at the U.S. Department of Homeland Security, in which he comprehensively detailed DHS’s efforts to boost the security, resilience, and reliability of the nation’s IT and communications infrastructure. That included the continuous-monitoring-as-a-service contracts the FNR has put into place for federal, state, and local governments.

Their continuous monitoring efforts also include security dashboards designed to inform and prioritize cyber risk assessments across the government.

Mills provided a great overview of Streufert’s talk in her post, DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money.

Of course, one doesn’t need to be the size of DHS to benefit from continuous monitoring. Securosis analyst and president Mike Rothman helped put continuous monitoring in perspective for the rest of us, in organizations both large and small. In his keynote, he offered attendees pragmatic advice on what continuous monitoring entails and strategies for putting it into practice.

I provided more details on Rothman’s talk in my post: Focus Continuous Monitoring Efforts Where Breach Will Cause “Blood to Flow in the Streets,” Analyst Says.

Securosis also just published their paper on continuous monitoring.

In the final keynote of the show, journalist and author Steven Levy reminded everyone what it is we truly owe to hacker culture. In short: just about everything we do digitally today. His talk reached back to the hacking culture of MIT in the late 50s and early 60s and up through modern times, including the Internet, and how hacking culture remains a crucial part of the fabric at such companies as Google and Facebook. You can find coverage of his keynote in the post Author Steven Levy: What We Owe to the Hackers.

For security professionals based in Europe, be sure to attend Qualys Security Conference 2013 in Paris (14 November), Munich (19 November) and London (22 November).

Author Steven Levy: What We Owe to the Hackers

LAS VEGAS – In an engaging keynote to close the 2013 Qualys Security Conference, technology journalist Steven Levy – author of the 1984 classic Hackers: Heroes of the Computer Revolution – shared his vision of what modern computing – and society – owes to hackers.

“In the three decades that I’ve been reporting about technology, I’ve found one subject that I keep coming back to again and again and again. One North star in all of the reporting. And that is hackers. I think that hackers are the key thinkers of our age,” Levy said. Levy acknowledged that the word “hacker” does come with unwelcome baggage today, and some of it deservedly so. But, “the spirit of hackers drive our technology, even today,” he said.

And he explained how his technology reporting has led him to believe that the best in the technology industry have adopted from hackers the drive to strive for the impossible. Levy cited Google’s efforts in such areas as space exploration, self-driving cars, and work to postpone aging as examples of hacker vision. “They [hackers] are telling us, all of us, that if you aren’t making use of this technological bonanza and make something new, you’re missing the great opportunity of our times,” he said.

He then challenged the room full of attendees to strive for the (seemingly) impossible when it came to making the advancements necessary to build securable technology and keep systems safe.

Levy also recalled how his initial reporting on hackers, for the book, revealed to him a much different culture than the near-universal stereotype portrayed. In their first presentation to the general public – a Psychology Today article written by Philip Zimbardo – hackers debuted as anti-social nerds, losers with unhealthy curiosities. Zimbardo wrote that a hacker’s "fascination with computer becomes an addiction, and with most addictions, the substance that gets abused is human relationships."

At the time of his assignment, Levy thought he would go to Stanford University in California and “talk to these weirdos,” and then write the story about these solitary outcasts.

What he actually found, when interviewing hackers, “blew away the stereotype,” he said.

“The people I met from academia, at Stanford, and the growing PC world were not anti-social nerds. They were explorers, adventurers. They were flushed with joy, really thrilled about what they were doing on the computer. I found so many amazing things by looking at the way they view the world. And they were a real joy because they felt they were empowering themselves by programming,” he said.

And from those humble hacker beginnings sprung the PC industry. At the time he wrote the book, Levy interviewed a programmer who hacked the very first word processor. Another student, who did his math homework on the computer, handed in his work to a professor who flunked him because “you can’t do this on a computer.” That application was a precursor to the modern spreadsheet. Levy also recalled how the first interactive computer game was a collaborative effort undertaken by hackers on a minicomputer donated to MIT by Digital Equipment Corporation.

Also, when the Defense Department sought to build what would become the Internet, “they turned to people with the hacker mentality to do it,” he said. “People realized that if they based the Internet on open principles, hacker principles, you would get something that would be more reliable and robust than forced cooperation. So the Internet itself is a hacker kind of creation,” he said.

“My point is that nearly all the things that masses of people do with computers today were things invented by hackers,” Levy said.

Obviously, with the considerable attacks upon organizations and websites over the past 20 years and the use of the term “hacking” within the media to describe many of those incidents, the term hacker now has negative connotations. “But as I present to you today, hackers are so much more than that,” he said.

And the word hacker is experiencing a kind of renaissance in Silicon Valley, thanks in part to the hacker spirit being summoned for creative efforts at organizations such as Google and Facebook. One Facebook example Levy cited was Mark Zuckerberg’s letter to shareholders, which stated that he runs his company by what he calls “the hacker way.”

And that vision, drive, and even those hacker ideals are all things those working within the security industry should heed, Levy said. “And I really believe, and that goes for everyone in this room, in this field,” he said. “We live in this unbelievable unique time where technological advances make impossible things, actually possible. Which may be the biggest lesson of all from the hackers,” he said.

FireMon – Traffic Engineer for the Corporate Network

LAS VEGAS – In the 1950s, British transportation expert John Glen Wardrop popularized several theories related to network equilibrium models that are useful for predicting traffic patterns and congestion. To prove the theories, he used a tool to help collect data – cables that registered when cars drove over them.

Fast forward 60 years and the same principles are being applied to solving traffic jams on a different type of network – corporate networks. In the area of network security, QualysGuard Vulnerability Management is the default solution to show corporations the risk posture of their networks, Ward Holloway, vice president of business development at FireMon, said in a session at Qualys Security Conference 2013 on Friday. “It’s effective at its job, the backbone of you being able to assess risk in your environment, like a data cable,” he said.

“But as network security practitioners, you may be working with a complex (environment), multiple data centers around the world, different connections to different partners, tens of thousands of remote connections. When you run a scanner it’s not unusual to be told there are 85,000-90,000 vulnerabilities in an entire infrastructure,” Holloway said. “Which one do you focus on and fix first?”

Similar to traffic engineering, it helps to see the full context of the environment. Upstream, there may be a firewall that blocks all SQL coming into the segment, for example. Or there could be a low-value asset, say a Windows machine running a legacy, outdated billing application, which doesn’t need immediate patching. However, due to a firewall misconfiguration, the server is reachable from the outside, potentially putting valuable financial data at risk.

To address this, FireMon offers an automated traffic management system for risk in corporate environments that allows customers to take the Qualys data and overlay it with the knowledge of the network layout, the switches and routers and intrusion prevention systems to see “exactly how data travels through the network,” according to Holloway. For instance, a corporation could find out that “out of 5,000 vulnerabilities, there’s only 400 you need to worry about that are actually reachable right now,” and of those, only 30 are vulnerable to remote code execution, so fix them first.
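To make that prioritization concrete, here is a minimal sketch – with all QIDs, hostnames, and fields invented, and not FireMon’s or Qualys’ actual product logic – of filtering scanner findings first by reachability and then by remote-code-execution risk:

```python
# Hypothetical sketch of reachability-based prioritization. Each record
# combines scanner output with (assumed) network-path analysis results.
vulns = [
    {"id": "QID-1001", "host": "db-01",       "reachable": True,  "rce": True},
    {"id": "QID-1002", "host": "bill-legacy", "reachable": False, "rce": True},
    {"id": "QID-1003", "host": "web-01",      "reachable": True,  "rce": False},
]

# First cut: drop anything no attack path can actually reach right now.
reachable = [v for v in vulns if v["reachable"]]

# Second cut: of what remains, fix remote-code-execution flaws first.
fix_first = [v for v in reachable if v["rce"]]

print([v["id"] for v in fix_first])  # ['QID-1001']
```

The point of the two-stage filter is the same as Holloway’s 5,000-to-400-to-30 example: each pass uses network context, not scanner severity alone, to shrink the work queue.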

Now that’s one way to keep bad guys out of the network.

RSA, Oracle Tools Help Businesses Manage Qualys Vulnerability Data

LAS VEGAS — At a reception late last week at Qualys Security Conference 2013, I talked to a Qualys customer who said Qualys does a great job at vulnerability scanning, in fact, too great of a job in the opinion of some of his IT staff. As QualysGuard identifies vulnerabilities, you must triage the problems to fix them.

We all know that what you don’t know can definitely hurt you when it comes to computer security. With QualysGuard data in hand, it is important to determine: Which issues are the most important? What can be done to remediate them effectively and efficiently? The answers to these questions depend on the customer’s specific networks and operations, which only the customer can truly understand.

QualysGuard is integrated with tools that can help customers prioritize their remediation steps. Corey Bodzin, solution manager for RSA, gave an overview of RSA’s Archer Risk Management solution, which helps organizations assess and resolve risks identified by Qualys. Marlene Veum, director of security for product development IT at Oracle, talked about how organizations can find the “actionable needle in the compliance haystack” by using Oracle Application Express.

With Archer, IT admins can pull the technical data into one place, set up a workflow and rules, prioritize issues and measure outcomes to make the best business decisions possible. Maybe a proof-of-concept exploit that has been ignored should now get attention because it’s being used in active watering hole attacks targeting the customer’s industry. “Something has changed that makes me want to respond differently,” Bodzin said of this scenario. “Archer sees that it’s flagged and that it’s part of the PCI data world… Now I’ve got to go in and ask people what are you going to do and address this change.” Archer can also help admins measure the results – the average remediation time, for instance. “If an issue is 45 days old but it took 28 days to make a decision, then we need to fix it,” Bodzin said. The outcomes can be published in Archer dashboards and viewed by executives as a part of the company’s overall IT, operational and financial risk. “Qualys grabs the technical bits and Archer helps grab the human bits,… and make good business decisions in a timely fashion,” he said.

Meanwhile, Oracle’s system helps companies pull data from other sources within the company to put the Qualys data into context. Qualys “is so good at collecting information that that’s the challenge — how do you deal with it?” Veum said. By pulling in asset, system and network information, and establishing a baseline, an organization can better understand its environment. It’s important to “have the ability to see we have a problem and to share the information with people who can act on it,” she said. Oracle Application Express, a free HTML-based tool that works with Oracle Database, has a dashboard for executives to see consolidated scans broken down by line of business and viewable by project status, scan summary and categories like vulnerability type.

Having data on vulnerabilities is just one part of managing risk; you need to know enough about your network to decide how to act on the information. These tools in the Qualys ecosystem can help organizations get the most out of their vulnerability data.

Focus Continuous Monitoring Efforts Where Breach Will Cause “Blood to Flow in the Streets,” Analyst Says.

The security industry is one that’s never short of buzzwords. And recently there’s certainly been a lot of buzz around the “continuous monitoring” of business-technology systems and how continuous monitoring can improve security, compliance, and even operations. But what does “continuous” really mean, and what really needs to be monitored so closely?

Last week, at the Qualys Security Conference 2013, Securosis president and analyst Mike Rothman tried to provide some answers, and offer attendees pragmatic advice on how they can incorporate continuous monitoring into their IT management and security efforts.

SEE ALSO: DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money

“It’s one of these things that when you drive down and really examine all the stuff that you could possibly monitor it quickly becomes overwhelming,” said Rothman. “Especially when people look at everything that they’re supposed to try to do,” he said.

To help make certain that security managers focus on the right things when they vet which events and data they should continuously monitor, Rothman went through a number of specific use cases where continuous monitoring and continuous security monitoring really make sense.

Rothman broke down the following use cases:

The attack use case: In the attack use case, teams are monitoring assets for potential attack paths. This includes logs, network topology and traffic for anomalous behavior, as well as asset vulnerability and configuration posture.

The change control use case: Continuous monitoring is used to help better manage operational change – understanding who made changes, whether the changes were made within policy, and whether they introduced any faults into the infrastructure. In this use case, configuration and vulnerability levels are also tracked.

The compliance use case: In many ways, the regulatory compliance use case is the easiest one to sell and obtain budget for, said Rothman. This means monitoring assets, configuration and vulnerability management posture, and event logs with an eye toward staying within policy for any number of industry and government regulations, depending on the nature of the organization’s industry.

The first thing enterprises must do is identify their core goals – which continuous monitoring use cases they’re going to focus upon. “It’s about identifying the problems you are going to try to solve. Is it a compliance problem? Are you trying to get a better operational handle on your environment? Are you trying to deal with attacks? Obviously those aren’t mutually exclusive, but they do tend to build toward the broadest, deepest, and ultimate goal of monitoring and that’s to deal with advanced attacks,” Rothman said.

Of course organizations can’t start by monitoring every bit of data and every asset in their organization. The key is to focus on monitoring and protecting the data that would cause “blood to flow in the streets” should there be a breach and that data be compromised. “Go through a period of asset and data classification,” advised Rothman. “Identify what assets you are worried about, then determine how important each of those assets are,” he said.

The most critical are the ones where the continuous monitoring efforts should start.
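Rothman’s classification advice can be sketched roughly like this – the asset names and scoring weights below are invented purely for illustration:

```python
# Toy asset-classification pass: rank assets so that continuous monitoring
# effort starts where a breach would hurt most. "impact" and "exposure"
# scores are hypothetical and would come from a real classification exercise.
assets = [
    {"name": "cardholder-db", "impact": 10, "exposure": 8},
    {"name": "hr-portal",     "impact": 6,  "exposure": 4},
    {"name": "test-server",   "impact": 1,  "exposure": 9},
]

# Impact dominates the sort: a high-exposure but low-value test box still
# ranks below the systems whose breach makes "blood flow in the streets".
ranked = sorted(assets, key=lambda a: (a["impact"], a["exposure"]), reverse=True)

print([a["name"] for a in ranked])  # ['cardholder-db', 'hr-portal', 'test-server']
```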

Once assets are being monitored at a steady frequency, the alerts are going to start rolling in. That’s both good news, and bad news, Rothman explained. “There’s always going to be an aspect of false positives. Ultimately you want to minimize those. And as you’re monitoring and you mature the program, you get better at isolating the root cause of what happened around attacks. But I would say, even more important than that, is that you’re able to narrow down focus on a select set of alerts that are more likely to be problematic,” he said.

One way to help minimize those false positives is to get your security monitoring technologies – such as Security Information and Event Management (SIEM) and Vulnerability Management applications – working together properly, he said. Doing so will help tune the proper signal out of all the day-to-day noise.
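As a rough illustration of that kind of correlation – hostnames, CVE numbers, and data shapes here are all invented – a SIEM alert can be deprioritized when vulnerability-management data shows the targeted host is not actually exposed to the flaw being exploited:

```python
# Hypothetical SIEM alerts: exploit attempts observed on the network.
siem_alerts = [
    {"host": "web-01", "cve": "CVE-2013-0422"},  # Java exploit attempt
    {"host": "db-01",  "cve": "CVE-2012-1823"},  # PHP-CGI exploit attempt
]

# Hypothetical VM scanner findings: which CVEs each host is exposed to.
vm_findings = {
    "web-01": {"CVE-2013-0422"},
    "db-01":  set(),  # db-01 is patched, so the PHP alert is likely noise
}

# Keep only alerts whose target is actually vulnerable to the attempted exploit.
actionable = [a for a in siem_alerts
              if a["cve"] in vm_findings.get(a["host"], set())]

print(actionable)  # [{'host': 'web-01', 'cve': 'CVE-2013-0422'}]
```

The join is trivial, but it captures Rothman’s point: two data sources together narrow the alert stream to the subset most likely to be a real problem.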

Finally, how can enterprises gauge the success of their continuous monitoring programs? Rothman said in a chat outside of his talk that success comes when “you’re a lot more responsive and focusing on incidents and situations that actually create risk and can result in breaches. That’s when you know you’re making progress: when you’re not spending a lot of time in dark holes chasing down stuff that ultimately becomes inconsequential.”

It’s Time to Put Security First, Says Expert Panel

LAS VEGAS — If there was one message a team of panelists conveyed here at the Qualys Security Conference 2013 it was this: When it comes to keeping their business-technology systems secure most organizations are concentrating on the wrong things.

Video: John Pescatore Keynote and Customer Panel

“Many security professionals have lost focus on the fact that they are getting paid to protect the business first,” said John Pescatore, director of emerging security trends at SANS.

Instead of building resilient systems, Pescatore said, enterprises are focusing their efforts on being compliant and proving to auditors that they’re meeting regulatory mandates. Or they’re spending too much time building hypothetical risk equations, multiplying the estimated costs of incidents (typically a large imaginary number) by the likelihood of breaches (typically a small imaginary number). In the meantime, with their heads down in speculative risk models, real-world attackers are infiltrating their systems.

It’s time to flip the perspective. “Compliance must follow security,” Pescatore said. “Compliance reports that say we are compliant are great, yet most credit card exposures occur on PCI compliant systems.”

It’s this misguided focus on compliance first that makes it too easy for most any large security consultancy to find evidence of live, ongoing compromises on most of the systems they evaluate for clients, he explained.

Surprisingly, he added, many of the persistently weak areas in these programs are the basics. Organizations continue to be lax in vulnerability and configuration management, operate with no real visibility into advanced threats, and fail to put forth much in the way of a mature application security program.

Considering that, how do security managers prioritize their spending and efforts so that security is the first priority?

Essentially, panelists argued, organizations need to combine the expertise of the security professional with automation of best-practice security processes and controls along with accurate and timely threat and vulnerability information. The panel, consisting of Pescatore and panelists Jonathan Trull, CISO, State of Colorado, Doug Dexter, audit lead, Cisco Systems, and Wolfgang Kandek, CTO, Qualys, all pointed to the value of the SANS 20 Critical Security Controls.

The SANS 20 Critical Controls, they explained, help enterprises change their focus from compliance to continuous security monitoring.

Cisco’s Doug Dexter told how Cisco started its journey on implementing the critical controls – such as inventorying unauthorized and authorized applications and devices or continuous security monitoring – before the list had been formalized.

One example Dexter shared: rather than scan for new vulnerabilities as they are publicly announced, Cisco continuously scans its systems and maintains a database of host information for when new software flaws surface. This way, when there was an issue, “all we had to do was query the database to gain an understanding of the situation rather than have to conduct a full scan,” he said.
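A toy model of that query-instead-of-rescan idea might look like the following – the schema, package names, and version data are invented, and Cisco’s actual system is surely far richer:

```python
import sqlite3

# Hypothetical host-inventory database: installed software per host,
# populated by routine continuous scans rather than on-demand rescans.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE host_software (host TEXT, package TEXT, version TEXT)")
db.executemany("INSERT INTO host_software VALUES (?, ?, ?)", [
    ("web-01", "openssl", "1.0.1c"),
    ("db-01",  "openssl", "1.0.1g"),
    ("app-01", "nginx",   "1.4.0"),
])

# A new advisory lands for openssl versions below 1.0.1g: answer "who is
# affected?" with a query against the inventory instead of a full scan.
affected = db.execute(
    "SELECT host FROM host_software "
    "WHERE package = 'openssl' AND version < '1.0.1g'"
).fetchall()

print([host for (host,) in affected])  # ['web-01']
```

(The naive string comparison on versions works for this contrived data; a real system would need proper version parsing.)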

Colorado’s Jonathan Trull added that the SANS 20 Critical Controls are the primary security framework for the state’s IT security program. Trull said that the initial goal was to obtain quick wins and build momentum around the program’s implementation. “We focused on things that we could achieve in 120 days, and by August of this year we will have the first five controls completed,” he said.

One of the main objectives for the state is to move away from a compliance mindset to a security mindset. “We were investing 80 percent of our time on compliance and 20 percent on security,” he said. But those efforts didn’t create the level of security Trull thought adequate. “That’s why we’re flipping the equation to 20 percent compliance and 80 percent security.”

Qualys CEO Courtot in QSC Keynote Says Security Should Be Felt, But Not Seen

LAS VEGAS – Qualys founder and CEO Philippe Courtot has a vision for the future, and if he succeeds you might not even see it.

Basically, he’s working to make security a part of the fabric of the IT infrastructure on which companies run their businesses. And to make things easier on people using the systems, the security should be painless and out of sight, and just do its work in the background.

“The challenge is to bring security into this new infrastructure and make it invisible,” he said in his keynote at the Qualys Security Conference 2013 today. Security should not be a burden for anyone, otherwise it’s a failure.

Qualys’ aim is to “build continuous security into the fabric of the cloud. In the mainframe world, it took 10-15 years to build security into their infrastructure,” he said. “In the cloud security world we had to do everything again, and the challenge is to bring security into that infrastructure.”

About five years ago, Qualys adopted a philosophy from an unlikely source — Goldman Sachs. The financial services firm had a security model in which it used different enterprise security tools to ensure the security of the entire infrastructure and treated vulnerabilities as a part of compliance. It might seem counter-intuitive, but that approach is more dynamic and effective, Courtot said. If a device is misconfigured, in violation of an internal policy or external regulations, it can be addressed more quickly than by juggling disparate enterprise software solutions that have different update cycles, for example.

“At the core, you define your assets, then provide them attributes, fingerprint them…” he said. “Now you can have two different views, from compliance and security. You want to have the ability to report, to look at trending, have alerts and integrate with another solution, and deliver all of it on a global scale. That’s what Qualys is all about. It’s our fundamental belief that this is the right model — to build security into the fabric of the cloud.”

Courtot also teased some new features that are coming up from Qualys. The company already is using its cloud protection architecture to bring security to a range of platforms, such as Amazon Web Services, Azure and vCloud, and it is increasing scalability, working to expand the capabilities of protecting against network threats, focusing on Web application security and expanding the notion of continuous monitoring of the perimeter.

DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money

LAS VEGAS — When John Streufert was CISO at the U.S. State Department he saw that the agency was losing a lot of money and wasting a lot of employee time trying to defend against cyber attacks. And despite all the audits and reports, the defense wasn’t working – the bad guys were getting in and stealing data.

Video: John Streufert Keynote at QSC

So, he oversaw a move to continuous monitoring of the network that was able to reduce security risk by as much as 90%, he said in a keynote at the Qualys Security Conference 2013 today. Specifically, they were able to identify the worst problems in minutes rather than years, to fix the worst problems in days as opposed to months, and to get costs down to about $200 million compared to $600 million per year.

Now, Streufert is bringing that same game plan to the Department of Homeland Security where he is director of federal network resilience.  “We are in the process of making a shift in the federal government as to how we handle our security challenges,” he said. “Continuous Diagnostics and Mitigation can stop 85% of cyber related attacks” and report on attacks in near real time, as well as enable system administrators to respond to exploits much faster.

The system can help the agency avoid being low-hanging fruit.  According to CSIS and Verizon reports: 75% of attacks use known vulnerabilities that could be patched; more than 90% of successful attacks require only the most basic techniques; and 96% of them could be avoided if there had been simple or intermediate controls in place.

At the State Department the statistics of the environment before the changes made for a strong economic case, Streufert said:

  • Every three days there were trillions of security events; millions of attempted attacks; thousands of new flaws introduced; and hundreds of successful attacks.
  • Every three months there were over 10,000 successful attacks; terabytes of data stolen; 7,200 reports written; and hundreds of labor hours wasted.
  • Every three years there are thousands of assessments and other reports written, each requiring 3-9 months to prepare and out of date the moment it is printed; and the data provides only a snapshot in time versus real-time identification and mitigation of problems.

These manual processes, reports and audits cost between $600 million and $1.9 billion a year, or $1,400 per page, and result in the equivalent of 438 feet of paperwork. They also consume as much as 65% of the overall IT security effort in the agencies involved, according to Streufert.

He was asked to go to DHS to work on moving the agency from a cybersecurity defense strategy modeled on process and compliance to one focused on continuous diagnostics and mitigation. The first phase will be completed this year, the second phase next year and the final phase in 2015. The cost will be about $600 million over three years.

Update: See attachments for a data sheet describing the US Department of Homeland Security Continuous Diagnostics and Mitigation Program.

Attachments

Continuous Diagnostics and Mitigation Program 175.4 K