Webcast Q&A: DevSecOps – Building Continuous Security Into IT and App Infrastructures
Last updated on: September 6, 2020
As organizations adopt DevOps to create and deliver software quickly and continuously — a key step for supporting their digital transformation initiatives — they must not overlook security. In DevOps, development and operations teams add agility and efficiency to software lifecycles with automation tools and constant collaboration, but the added speed and flexibility can backfire if security is left out.
Instead, organizations should bake security personnel, tools and processes into DevOps, ending up with DevSecOps, a topic whose business and technology aspects were explored in depth during a recent webcast by Qualys Product Management VP Chris Carlson and SANS Institute Analyst John Pescatore.
In this blog post, we’re providing an edited transcript of the question-and-answer portion of the webcast, during which participants asked Carlson and Pescatore about a variety of issues, including the dangers of using Java, the right tools for DevSecOps, and the best way to embed security into the process. We hope you find their explanations insightful and useful.
In addition, if you didn’t catch the live broadcast of the webcast — titled “DevSecOps – Building Continuous Security Into IT & App Infrastructures” — we invite you to listen to its recording, which we’re sure will provide you with a lot of practical tips, useful best practices and valuable insights about DevSecOps and digital transformation.
An organization is looking at an externally-hosted HR application, and the provider uses Java in their own environment. Is it safe to use Java?
John: There are a lot of different parts to Java: the runtime environment, the virtual machine, the browser plug-ins and so on. It can be done safely. It’s a very effective programming language, and there’s lots of secure Java development. Unfortunately, there are lots of patches that come out from Oracle for the runtime environment, and Oracle keeps trying to trick people into installing toolbars and various third-party software with the Java updates.
So, ideally, if it were a perfect world, we could say, “Yes. Abandon it.” That’d be great. The reality is it’s pretty usual for providers to write applications that run across heterogeneous environments, which, increasingly in the mobile world, means it’s not going to be just Windows, just Apple, or just Android. It’s going to be a heterogeneous world. So, typically, the answer is to ask the right questions about the secure development life cycle and admin processes of the provider. Are they doing Java securely? Are they updating things? Are they not using the browser plug-in? Lots of things like that. Chris, you want to weigh in?
Chris: That’s right. In fact, it’s not just about Java. It is really about a third-party provider, and what the security of their offering is. That may mean that you engage with them with a security assessment questionnaire for them to fill out and respond to. Maybe they have to present the results of their vulnerability assessment, or their compliance programs, to you so you can evaluate and weigh whether that vendor is operating correctly. So really it’s more about third-party vendor assessment, not necessarily Java, but they are linked together. It’s worth really engaging with third-party vendors on that aspect.
While DevSecOps implies an integrated team, would you recommend abstracting security from development (i.e., keeping app sec expertise in the security organization) or embedding security into the development team?
Chris: The answer is [that] it varies. And it varies depending on the organization, where the skillsets are, where that cultural transformation is happening. In that last example I gave you about the financial services provider, they split the app sec task in half, where the easily-mitigated vulnerabilities are handled by the normal development teams. So a web developer can fix the input validation, can remediate SQL injection security issues. But the higher-order app sec issues, like penetration testing or supporting a third-party Bugcrowd-type engagement, may live in the app sec role that’s within security. So at the end of the day, it’s really about driving efficiencies and optimizations, and it may not matter where it lives, but the hardcore security work and policy definition should live within security.
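To make that split concrete, here is a minimal, hypothetical Java sketch of the kind of fix a web developer can own (the class, table and column names are invented for illustration): a string-concatenated query rewritten as a parameterized one, which is the standard remediation for SQL injection.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {

        // Vulnerable: user input is concatenated straight into the SQL text,
        // so input like "x' OR '1'='1" changes the meaning of the query.
        public ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
            String sql = "SELECT id, email FROM users WHERE username = '" + username + "'";
            return conn.createStatement().executeQuery(sql);
        }

        // Fixed: a PreparedStatement sends the input as a bound parameter,
        // so the database never interprets it as SQL.
        public ResultSet findUserSafe(Connection conn, String username) throws SQLException {
            PreparedStatement stmt =
                    conn.prepareStatement("SELECT id, email FROM users WHERE username = ?");
            stmt.setString(1, username);
            return stmt.executeQuery();
        }
    }

It is a small, testable change, which is exactly why it can sit with the development team rather than with a dedicated security role.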
But how can you enable and empower developers and operations people who don’t know about security, don’t have a security background? How can you translate a security issue into a software defect, which they can fix like they fix other functional software defects? That’s where successful DevSecOps integrations come in — to transparently and automatically build security into the DevOps pipeline for developers and operations people to use and get benefit.
John: Whenever I see successful examples of companies that have turned that corner in reducing vulnerabilities, inevitably they’ve invested in educating developers about what bad coding practices are from a security point of view, and in integrating security capabilities into their development and test environments. So when projects are run through test tools to determine whether they should be allowed to check in or advance to the next stage, common security vulnerabilities are looked for, and so on. So there’s definitely a level of integration. Just [like] a person who loses weight doesn’t have a dietitian with them all the time; they learn the basic rules of avoiding certain things. But the threats change pretty rapidly, and the security team is the only place we can put the expertise to keep up with that. So I’m all for embedding as much security knowledge as possible into the standard development practices, tools, and environments.
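As a hedged illustration of the kind of gate John describes, here is a hypothetical Java sketch (the report file name and its line format are invented): a small check a build pipeline could run after a scanner writes its findings, failing the stage when any high-severity issue is present.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Hypothetical CI gate: assumes a prior pipeline step wrote one finding per
    // line to scan-report.txt, prefixed with its severity (e.g. "HIGH|CWE-89|...").
    public class SecurityGate {
        public static void main(String[] args) throws IOException {
            List<String> findings = Files.readAllLines(Path.of("scan-report.txt"));
            long high = findings.stream()
                    .filter(line -> line.startsWith("HIGH|"))
                    .count();
            if (high > 0) {
                System.err.println(high + " high-severity findings; failing the build.");
                System.exit(1); // a non-zero exit code marks the pipeline stage as failed
            }
            System.out.println("Security gate passed.");
        }
    }

The point is not the parsing; it’s that the same mechanism QA already uses to reject functionally broken builds can reject insecure ones.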
How can we imbue, or further imbue, a mindset so that security defects are treated as functional defects (i.e., security is given the same priority as a feature when software is being tested, and so on)?
John: To me, that’s back to this embedded part. A lot of the success stories in the past are where the security group found a friend in QA, or whatever you call the last step, when someone finally says: “This application is okay to go on production systems.” Rather than the security team running tools after that happens and saying: “Hey, you’ve got all these vulnerabilities,” that step prior to blessing it for production is quite often the starting point for getting security into the same conversation as functional defects or availability defects, and then working your way upstream from that final step as the QA group starts to reject code and say: “We cannot approve this for production because of these security vulnerabilities.”
[In] the example I used [of the] financial organization that did the email authentication, [the] CISO focused on application security made friends with the VP of app dev, and was able to work their way up, all the way to the beginning of the software development life cycle. Not only did they increase software productivity overall (i.e., more lines of code per hour once you counted in rework), [but] they were also able to shorten time to market. That addresses the two big myths: that secure software takes too long and costs too much. Nope. They actually showed increases in productivity, and decreases in time to market, because they had gotten the QA organization to be the gatekeeper to say: “Nope. That’s not getting to market with all of these security vulnerabilities,” just like we wouldn’t let it get to market if it was missing functions it claims to have. Chris, any expansion?
Chris: That makes perfect sense, and that’s part of the cultural transformation, right? Security is not the department of “no”; they’re not an adversary to the IT group. To expand beyond that: eventually, security defects have to be fixed anyway. Maybe not all of them, but they’ve got to be fixed in production, and that is a cost and a time that is borne by the IT and app dev teams.
So if you can help, and educate, and get people to understand that you are going to have to fix this cross-site scripting [flaw] anyway, why not fix it earlier, when it’s two lines of code, takes you five minutes, and QA can validate it? As opposed to: “Oh my gosh, I’m going to have to completely re-architect that component,” and then make sure that doesn’t break some other functionality, because it’s already in production. So when you start to compare the time it takes to fix vulnerabilities in production with how long it takes to fix them in the early part of the DevOps process, it just becomes a win across the board for all groups.
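As a rough illustration of the “two lines of code” scale Chris mentions, here is a hypothetical Java sketch (the helper and the greeting markup are invented; a real project would typically use a vetted library such as the OWASP Java Encoder): reflecting user input verbatim versus HTML-encoding it before output.

    public class HtmlEscaper {

        // Minimal HTML-escaping helper, for illustration only.
        static String escapeHtml(String s) {
            return s.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;")
                    .replace("'", "&#x27;");
        }

        public static void main(String[] args) {
            String userInput = "<script>alert('xss')</script>"; // attacker-controlled value
            // Vulnerable: echoing input verbatim lets the browser execute it as script.
            String unsafe = "<p>Hello, " + userInput + "</p>";
            // Fixed: encoding the value makes the browser render it as inert text.
            String safe = "<p>Hello, " + escapeHtml(userInput) + "</p>";
            System.out.println(unsafe);
            System.out.println(safe);
        }
    }

Caught in development, the remediation really is on the order of one changed line; caught in production, the same flaw can mean re-architecting and regression-testing a shipped component.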
I’m in QA and I was recently moved into the security testing team. What are the key points I should look at when deciding on what security tools we should use?
John: First off, if you are a Gartner or a Forrester customer, they have great research notes out on the capabilities you should look for. Obviously, Gartner has the Magic Quadrant, but they also have these critical capability notes that give you great guidelines.
See how well the tools integrate with however your development process works. A lot of tools are great for security geeks to use, but when they give out information about where the vulnerability is or what the recommended fix is, it’s like a different language to developers. So have developers look at the output of the tools that you’re considering.
I’d love there to be the sort of standard crappy piece of software that we could all run these tools against, and then sort of do a false positive, false negative check. A good way is to involve the right people from your development team, run it against some of your typical code, and try to do a sanity check on the false positive side of things. That is the most common complaint from the developers’ side: “Too many of these trouble tickets that you generated here, we look into it and there’s nothing there.”
Chris, you obviously have a conflict of interest here, but what are the key things you see people looking for in choosing tools?
Chris: Yes, certainly. Well, John, when you said “some crappy typical software,” unfortunately that list is a mile long, so there’s plenty of opportunity to find some of that. And Qualys is sponsoring this SANS webinar, so I certainly see what we’re doing as a company in this area, and how our offerings are improving what our customers are doing.
But for you, I think in that case it’s fantastic. It is good to see developers move over to security. It is good to see QA people move into security, or networking people go into security, because you understand how the application functions. You know what the positive test cases are, what the negative test cases are, what the happy paths are for these types of things. As you learn about security and apply that security lens, it becomes: “Well, was this feature implemented [for its] ease of use, but now it opens up a non-authenticated API? That’s not good.” So that domain knowledge is very powerful.
Sometimes it’s hard to take a pure-play security person and move them into an IT group, because [sometimes those folks strongly believe that] it is only about security. But it’s good to see you are widening your career there, that you are moving into security. Some of the things we talked about during this webinar would be good early places to start.
Throughout this webinar, you’ve largely been talking about running automated vulnerability-scanning-type tools. That’s the only thing you’ve talked about. Isn’t there a lot of risk of getting a false sense of security by relying solely upon automated vulnerability tools to determine whether applications are safe or not?
Chris: Yeah, so it’s not solely. You know, this is just the example that we used here. These customer examples were not only using Qualys; they were using different tools. But it really comes down to how you can apply a multiplier, an automated capability, to reduce that attack surface. How can you take the known and obvious, 100 percent confirmed vulnerabilities off the table? How can you reduce that scope so that the things a human ultimately needs to look at and analyze (maybe a pen test, their own manual fuzzing, or their own input validation testing) represent less workload for the manual process? So it’s about using tools to automate and extend the human capacity that you can bring to the job, in order to support more of the business goals that are coming at you from every corner. John?
John: What I’ll say is, first off, a secure development lifecycle is not the old development lifecycle with vulnerability scanning tools jammed in at a couple of places. That would still be an improvement over many development lifecycles, but it’s not a secure development lifecycle.
The example that I use is that when my kids were little, they would cry at night. My wife would sleep right through it, but I’d wake up and go see why they were crying. Well, that’s because we had a pretty quiet house. Imagine if I had no doors or windows on my house, and there were dogs barking and all kinds of noise. I’d never be able to get to my kids crying. It’s the same thing in most environments. Running these tools and working off the vulnerabilities they find is the only way to make room for the human effort: peer review teams with security knowledge, real people, or developers who at least know insecure coding patterns, so that when they do peer review, they can point out problems to lesser-skilled developers.
If you can’t get the easy ones out of the way first — and if you look at the breach reports, 99 percent of the time it’s pretty easy vulnerabilities that are getting exploited — you can’t get to the next stage of higher-value human analysis. Unfortunately, too often that’s exactly what happens.
Watch the on-demand webcast, “DevSecOps – Building Continuous Security Into IT & App Infrastructures”.