Few would argue that cloud computing, DevOps, and mobile computing aren’t having a profound impact on IT and on how enterprises secure their environments and data. But just wait for what’s in store in the decade ahead. In his Friday morning keynote, Securosis president and principal analyst Mike Rothman will discuss how these technologies and movements will change information security — and how security professionals do their jobs — for years to come.
Web application vulnerabilities, when not patched or mitigated, continue to pose significant risk to enterprise apps and data. Just prior to the Qualys Security Conference 2014 in Las Vegas, I reached out to Will Bechtel, director of product management for Qualys Web Application Scanning (WAS) to discuss web application security trends and get a few highlights on Qualys’ web application security strategies that will be discussed during the show.
There’s a connected device tsunami coming. Everything from light bulbs to refrigerators to cars to industrial control systems is increasingly becoming Internet connected. Many are under the impression that this “Internet of Things” is primarily a consumer security and privacy issue. Turns out, it’s not. But what precisely will the Internet of Things mean to enterprise security managers and CISOs and how they protect their organization’s infrastructure and information?
It was a great time all around at the 2013 Qualys Security Conference. There were plenty of bright, energetic security professionals who are deeply engaged in their work to best protect their organizations against advanced threats. The opportunity to take part in so many quality conversations with such security professionals is something that just isn’t possible at the mega cons.
At the show, attendees enjoyed a preview of features that are upcoming in the QualysGuard Cloud Platform, as well as insight on QualysGuard’s continuous monitoring capabilities.
As Elinor Mills covered in her post, Qualys CEO Courtot in QSC Keynote Says Security Should Be Felt, But Not Seen, details on product enhancements were covered, including the increased focus on web application security and expanding the notion of continuous monitoring of the network perimeter.
The challenges associated with continuous monitoring – vetting systems for weaknesses and policy posture frequently enough to mitigate attack risk – were a significant focus of the conference. One highlight was the keynote from John Streufert, director of Federal Network Resilience (FNR) at the U.S. Department of Homeland Security, in which he comprehensively detailed DHS’s efforts to boost the security, resilience, and reliability of the nation’s IT and communications infrastructure. That included the continuous-monitoring-as-a-service contracts the FNR has put into place for federal, state, and local governments.
Their continuous monitoring efforts also include security dashboards designed to inform and prioritize cyber risk assessments across the government.
Mills provided a great overview of Streufert’s talk in her post, DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money.
Of course, one doesn’t need to be the size of DHS to benefit from the implementation of continuous monitoring. Securosis analyst and president Mike Rothman helped put continuous monitoring in perspective for the rest of organizations, both large and small. In his keynote, he offered attendees pragmatic advice on incorporating continuous monitoring: what it entails, and strategies for putting it into practice.
I provided more details on Rothman’s talk in my post: Focus Continuous Monitoring Efforts Where Breach Will Cause “Blood to Flow in the Streets,” Analyst Says.
Securosis also just published their paper on continuous monitoring.
In the final keynote of the show, journalist and author Steven Levy reminded everyone what it is we truly owe to hacker culture. In short: just about everything we do today digitally. His talk reached back to the hacking culture of MIT in the late 50s and early 60s and up through modern times, including the Internet, and how hacking culture remains a crucial part of the fabric of such companies as Google and Facebook. You can find coverage of his keynote in the post Author Steven Levy: What We Owe to the Hackers.
LAS VEGAS – In an engaging keynote to close the 2013 Qualys Security Conference, technology journalist Steven Levy – author of the 1984 classic Hackers: Heroes of the Computer Revolution – shared his vision of what modern computing – and society – owes to hackers.
“In the three decades that I’ve been reporting about technology, I’ve found one subject that I keep coming back to again and again and again. One North Star in all of the reporting. And that is hackers. I think that hackers are the key thinkers of our age,” Levy said. Levy acknowledged that the word “hacker” does come with unwelcome baggage today, some of it deservedly so. But, “the spirit of hackers drives our technology, even today,” he said.
And he explained how his technology reporting has led him to believe that the best in the technology industry have adopted from hackers the drive to do the impossible. Levy cited Google’s efforts in such areas as space exploration, self-driving cars, and work to postpone aging as examples of hacker vision. “They [hackers] are telling us, all of us, that if you aren’t making use of this technological bonanza and make something new, you’re missing the great opportunity of our times,” he said.
He then challenged the room full of attendees to strive for the (seemingly) impossible when it came to making the advancements necessary to build securable technology and keep systems safe.
Levy also recalled how his initial reporting on hackers, for the book, revealed to him a much different culture than the near-universal stereotype portrayed. In their first presentation to the general public – a Psychology Today article written by Philip Zimbardo – hackers debuted as anti-social nerds, losers with unhealthy curiosities. Zimbardo said that a hacker’s "fascination with computer becomes an addiction, and with most addictions, the substance that gets abused is human relationships."
At the time of his assignment, Levy thought he would go to Stanford University in California and “talk to these weirdos,” and then write the story about these solitary outcasts.
What he actually found, when interviewing hackers, “blew away the stereotype,” he said.
“The people I met from academia, at Stanford, and the growing PC world were not anti-social nerds. They were explorers, adventurers. They were flushed with joy, really thrilled about what they were doing on the computer. I found so many amazing things by looking at the way they view the world. And they were a real joy because they felt they were empowering themselves by programming,” he said.
And from those humble, hacker beginnings sprung the PC industry. At the time he wrote the book, he interviewed a programmer who hacked the very first word processor. Another early hacker did his math homework on the computer and handed in the work, only to be flunked by a professor who said “you can’t do this on a computer.” That application was a precursor to the modern spreadsheet. Levy also recalled how the first interactive computer game was a collaborative effort undertaken by hackers on a minicomputer donated to MIT by Digital Equipment Corporation.
Also, when the Defense Department sought to build what would become the Internet, “they turned to people with the hacker mentality to do it,” he said. “People realized that if they based the Internet on open principles, hacker principles, you would get something that would be more reliable and robust than forced cooperation. So the Internet itself is a hacker kind of creation,” he said.
“My point is that nearly all the things that masses of people do with computers today were things invented by hackers,” Levy said.
Obviously, with the considerable attacks upon organizations and websites over the past 20 years and the use of the term “hacking” within the media to describe many of those incidents, the term hacker now has negative connotations. “But as I present to you today, hackers are so much more than that,” he said.
And the word hacker is experiencing a kind of renaissance in Silicon Valley, thanks in part to the hacker spirit being summoned for creative efforts at organizations such as Google and Facebook. One Facebook example Levy cited was Mark Zuckerberg’s letter to shareholders, in which Zuckerberg said he runs his company by what he calls “the hacker way.”
And that vision, drive, and even those hacker ideals are all things those working within the security industry should heed, Levy said. “And I really believe, and that goes for everyone in this room, in this field,” he said. “We live in this unbelievable unique time where technological advances make impossible things, actually possible. Which may be the biggest lesson of all from the hackers,” he said.
Focus Continuous Monitoring Efforts Where Breach Will Cause “Blood to Flow in the Streets,” Analyst Says.
The security industry is one that’s never short of buzzwords. And recently there’s certainly been a lot of buzz around the “continuous monitoring” of business-technology systems and how continuous monitoring can improve security, compliance, and even operations. But what does “continuous” really mean, and what really needs to be monitored so closely?
Last week, at the Qualys Security Conference 2013, Securosis president and analyst Mike Rothman tried to provide some answers, and offer attendees pragmatic advice on how they can incorporate continuous monitoring into their IT management and security efforts.
“It’s one of these things that when you drive down and really examine all the stuff that you could possibly monitor it quickly becomes overwhelming,” said Rothman. “Especially when people look at everything that they’re supposed to try to do,” he said.
To help make certain that security managers focus on the right things when vetting which events and data they should continuously monitor, Rothman went through a number of specific use cases where continuous monitoring and continuous security monitoring really make sense.
Rothman broke down the following use cases:
The attack use case: In the attack use case, teams are monitoring assets for potential attack paths. This includes logs, network topology and traffic for anomalous behavior, as well as asset vulnerability and configuration posture.
The change control use case: Continuous monitoring is used to better manage operational change: understanding who made changes, whether the changes were made within policy, and whether they introduced any faults into the infrastructure. In this use case, configuration and vulnerability levels are also tracked.
The compliance use case: In many ways, the regulatory compliance use case is the easiest to sell and obtain budget for, said Rothman. Here, teams monitor assets, configuration and vulnerability management posture, and event logs with an eye toward staying within policy for whatever industry and government regulations apply, depending on the nature of the organization’s industry.
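The three use cases above draw on overlapping data sources; a minimal sketch of that overlap, with illustrative source names that are not taken from any specific product, might look like this:

```python
# Hypothetical mapping of monitored data sources to the three use cases
# Rothman describes. Source names here are illustrative examples only.
MONITORED_SOURCES = {
    "attack":         {"event_logs", "network_traffic", "topology",
                       "vuln_posture", "config_posture"},
    "change_control": {"change_records", "config_posture", "vuln_posture"},
    "compliance":     {"asset_inventory", "config_posture",
                       "vuln_posture", "event_logs"},
}

def use_cases_for(source: str) -> list[str]:
    """Return which use cases a given data source feeds."""
    return sorted(uc for uc, sources in MONITORED_SOURCES.items()
                  if source in sources)

# Vulnerability posture feeds all three use cases, so collecting it once
# can serve attack detection, change control, and compliance reporting.
print(use_cases_for("vuln_posture"))
```

This is one reason the use cases "build toward" each other, as Rothman notes: much of the telemetry gathered for compliance is the same telemetry needed for attack detection.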
The first thing enterprises must do is identify their core goals, or which continuous monitoring use cases they’re going to focus upon. “It’s about identifying the problems you are going to try to solve. Is it a compliance problem? Are you trying to get a better operational handle on your environment? Are you trying to deal with attacks? Obviously those aren’t mutually exclusive, but they do tend to build toward the broadest, deepest, and ultimate goal of monitoring and that’s to deal with advanced attacks,” Rothman said.
Of course, organizations can’t start by monitoring every bit of data and every asset in their organization. The key is to focus on monitoring and protecting the data that would cause “blood to flow in the streets” should there be a breach and that data be compromised. “Go through a period of asset and data classification,” advised Rothman. “Identify what assets you are worried about, then determine how important each of those assets are,” he said.
The most critical are the ones where the continuous monitoring efforts should start.
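The classification step Rothman describes can be sketched as a simple scoring exercise: rate each asset by business criticality and exposure, then start monitoring at the top of the list. The assets and weights below are made up for illustration; real classification would draw on an actual inventory.

```python
# Illustrative asset-classification sketch (hypothetical assets and scores).
# Criticality: how much a breach would hurt; exposure: how reachable it is.
assets = [
    {"name": "cardholder-db",  "criticality": 5, "exposure": 4},
    {"name": "hr-portal",      "criticality": 4, "exposure": 2},
    {"name": "marketing-wiki", "criticality": 1, "exposure": 3},
]

def risk_score(asset: dict) -> int:
    # A simple product of business criticality and attack exposure.
    return asset["criticality"] * asset["exposure"]

# Begin continuous monitoring with the highest-scoring assets first.
monitoring_order = sorted(assets, key=risk_score, reverse=True)
for a in monitoring_order:
    print(a["name"], risk_score(a))
```

Any scoring formula this simple is only a starting point, but it forces the conversation Rothman calls for: deciding, explicitly, which assets would cause "blood to flow in the streets" if compromised.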
Once assets are being monitored at a steady frequency, the alerts are going to start rolling in. That’s both good news, and bad news, Rothman explained. “There’s always going to be an aspect of false positives. Ultimately you want to minimize those. And as you’re monitoring and you mature the program, you get better at isolating the root cause of what happened around attacks. But I would say, even more important than that, is that you’re able to narrow down focus on a select set of alerts that are more likely to be problematic,” he said.
One way to help minimize those false positives is to get your security monitoring technologies working together properly, such as Security Information and Event Management (SIEM) and vulnerability management applications, he said. Doing so helps tune the proper signal out of all the day-to-day noise.
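As a minimal sketch of that kind of correlation — with made-up hosts, field names, and CVE data, not the output of any real SIEM or scanner — an alert can be escalated only when the targeted host actually carries a matching open vulnerability:

```python
# Hypothetical SIEM/vulnerability-management correlation sketch.
# open_vulns would come from the vulnerability scanner's latest results.
open_vulns = {                     # host -> set of open CVE IDs
    "10.0.0.5": {"CVE-2013-2251"},
    "10.0.0.9": set(),             # same service, but patched
}

siem_alerts = [
    {"host": "10.0.0.5", "cve": "CVE-2013-2251"},  # exploit attempt
    {"host": "10.0.0.9", "cve": "CVE-2013-2251"},  # same attempt, patched host
]

def escalate(alert: dict) -> bool:
    """True when the attack matches a vulnerability the host actually has."""
    return alert["cve"] in open_vulns.get(alert["host"], set())

# Only the alert against the still-vulnerable host survives the filter.
actionable = [a for a in siem_alerts if escalate(a)]
print(len(actionable))
```

The point of the sketch is the design choice, not the code: exploit attempts against patched hosts are exactly the "day-to-day noise" Rothman says a maturing program learns to set aside.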
Finally, how can enterprises gauge the success of their continuous monitoring programs? Rothman said in a chat outside of his talk that success comes when “you’re a lot more responsive and focusing on incidents and situations that actually create risk and can result in breaches. That’s when you know you’re making progress: when you’re not spending a lot of time in dark holes chasing down stuff that ultimately becomes inconsequential.”
LAS VEGAS — If there was one message a team of panelists conveyed here at the Qualys Security Conference 2013 it was this: When it comes to keeping their business-technology systems secure most organizations are concentrating on the wrong things.
Instead of building resilient systems, panelist John Pescatore said, enterprises are focusing their efforts on being compliant, and on proving to auditors that they’re meeting regulatory mandates. Or they’re spending too much time creating hypothetical risk equations, multiplying the estimated cost of incidents (typically a large imaginary number) by the likelihood of a breach (typically a small imaginary number). In the meantime, with their heads down in speculative risk models, real-world attackers are infiltrating their systems.
It’s time to flip the perspective. “Compliance must follow security,” Pescatore said. “Compliance reports that say we are compliant are great, yet most credit card exposures occur on PCI compliant systems.”
It’s this misguided focus on compliance first that makes it too easy for most any large security consultancy to find evidence of live, ongoing compromises on most of the systems they evaluate for clients, he explained.
Surprisingly, he added, many of the persistently weak areas in these programs are the basics. Organizations continue to be lax in areas of vulnerability and configuration management, operate with no real visibility into advanced threats, or fail to put forth much in the way of a mature application security program.
Considering that, how do security managers prioritize their spending and efforts so that security is the first priority?
Essentially, panelists argued, organizations need to combine the expertise of the security professional with automation of best-practice security processes and controls along with accurate and timely threat and vulnerability information. The panel, consisting of Pescatore and panelists Jonathan Trull, CISO, State of Colorado, Doug Dexter, audit lead, Cisco Systems, and Wolfgang Kandek, CTO, Qualys, all pointed to the value of the SANS 20 Critical Security Controls.
The SANS 20 Critical Controls, they explained, help enterprises change their focus from compliance to continuous security monitoring.
Cisco’s Doug Dexter told how Cisco began implementing the critical controls – such as inventorying authorized and unauthorized applications and devices, and continuous security monitoring – before the list had even been formalized.
One example Dexter shared: rather than scan for new vulnerabilities as they are publicly announced, Cisco continuously scans its systems and maintains a database of host information for when new software flaws surface. This way, when there was an issue, “all we had to do was query the database to gain an understanding of the situation rather than have to conduct a full scan,” he said.
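The approach Dexter describes can be sketched as a continuously refreshed software inventory that answers "who is affected?" with a database query instead of a fresh network-wide scan. The schema, hosts, and package versions below are illustrative assumptions, not Cisco's actual implementation:

```python
# Hypothetical host-inventory sketch of the query-instead-of-rescan idea.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (host TEXT, package TEXT, version TEXT)")
# In practice, rows would be refreshed by the continuous scanning process.
db.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?)",
    [
        ("web01", "openssl", "1.0.1e"),
        ("web02", "openssl", "1.0.1g"),
        ("db01",  "bash",    "4.2"),
    ],
)

def affected_hosts(package: str, bad_version: str) -> list[str]:
    """Hosts running the flawed version of a newly announced package."""
    rows = db.execute(
        "SELECT host FROM inventory WHERE package = ? AND version = ?",
        (package, bad_version),
    )
    return [r[0] for r in rows]

# When a flaw in openssl 1.0.1e is announced, the answer is a lookup,
# not a full scan of the network.
print(affected_hosts("openssl", "1.0.1e"))
```

The trade-off is freshness: the query is only as good as the last inventory refresh, which is why the scanning itself has to be continuous.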
Colorado’s Jonathan Trull added that the SANS 20 Critical Controls is the primary security framework for the state’s IT security program. Trull said that the initial goal was to obtain quick wins and build momentum around the program’s implementation. “We focused on things that we could achieve in 120 days, and by August of this year we will have the first five controls completed,” he said.
One of the main objectives for the state is to move away from a compliance mindset to a security mindset. “We were investing 80 percent of our time on compliance and 20 percent on security,” he said. But those efforts didn’t create the level of security Trull thought adequate. “That’s why we’re flipping the equation to 20 percent compliance and 80 percent security.”