It's no surprise that security and application development teams often find themselves locking horns. One wants applications and new features to roll out -- and swiftly -- while the other is more concerned with keeping systems and data secure. At organizations embracing agile development and continuous integration/delivery methods, the tension runs even higher.
In continuous integration and deployment environments, teams integrate their development work continuously, and automated tests identify errors as each piece of work is completed. These tests often include code analysis and functional testing -- all running on a deployment pipeline.
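The pipeline described here is, at its core, an ordered series of automated checks that halts on the first failure so broken work never reaches later stages. The following is a minimal sketch of that idea in Python; the stage names and checks are hypothetical stand-ins, not the API of any real CI tool:

```python
# Minimal sketch of a deployment-pipeline stage runner.
# Stage names and check functions are illustrative placeholders.

def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failure.

    Returns (passed, results) where results maps each executed
    stage name to True/False.
    """
    results = {}
    for name, check in stages:
        ok = bool(check())
        results[name] = ok
        if not ok:
            return False, results  # fail fast: later stages never run
    return True, results

# Example: code analysis passes, functional tests fail,
# so the deploy stage is never reached.
stages = [
    ("code-analysis", lambda: True),
    ("functional-tests", lambda: False),
    ("deploy", lambda: True),
]
passed, results = run_pipeline(stages)
```

The fail-fast design is what gives developers the tight feedback loop discussed below: a failing check reports immediately rather than after every remaining stage has run.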
The problem is that these teams move rapidly, and if their processes are not well established and proven to work, they end up automating bad processes. That, in turn, creates more mistakes, racks up more technical debt, and can even introduce security vulnerabilities.
The challenge for organizations grows as they move to swifter, more agile development programs, and the demands on the security, engineering, and quality assurance teams increase. "Developers expect much more self-service [when it comes to testing], and they expect to be able to operate with a much tighter feedback loop. They don't want to have to commit code and wait till the next day until a 12-hour set of tests has finished running before they find out whether it's any good or not," says Nigel Kersten, the CIO at Puppet Labs.
The rise of shadow QA
What has happened in recent years is that developers began to create their own "rogue" testing environments, what Kersten calls "shadow QA." This, in turn, has made it possible for software testing to move more swiftly. "In the last year or two, we're seeing a trend where quality assurance and quality engineering teams are being forced to provide faster feedback and more self-service to the development organization," he says.
Fortunately, as enterprises gain more experience with continuous integration and delivery processes, their software development and automated testing also improve. "People are getting smarter when it comes to launching software projects," says Chris Cera, CTO at product design and development firm Arcweb. "If you looked at how many software projects failed 15 years ago versus how many fail now, I'm positive that, as an industry, we're learning how to increase the success and minimize the risk of projects."
Kevin Behr, chief science officer at the IT consultancy Praxis Flow, agrees, saying that continuous testing, done right, helps teams build inherently more secure software. "There are a lot of opportunities for security to plug into the existing continuous delivery frameworks right now, when it comes to software code testing," he says. "You not only can inspect the software more often, but will also find issues faster."
Large enterprises continuously slow
While these concepts are not new, they are certainly new to many larger organizations. "Many of the big enterprises still have not adopted CI/CD. If you have a large multinational workforce, trying to get them to do one- or two-week iterations is nearly impossible. So, there are just a lot of challenges with it," Cera says.
Some of those challenges include the complexity associated with composite applications. An enterprise's automated software testing capabilities need to be very robust and able to validate the continuous releases of code.
"From a security perspective, that also means that the security people who might previously have been invoked once a quarter to do reviews and testing now need to be involved much more frequently," says Cera.
But integrating security into continuous integration and delivery, Cera and others contend, requires the right level of investment and a business that understands the time and energy quality software development demands. "Assume your traditional release cycles are quarterly; then you have compliance and risk people come in once a quarter to look at things. Now, if you're doing one- or two-week iterations, you have to have them come in on the same cycles, or get them to agree to and approve an automated set of tests," Cera says.
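An "agreed and approved automated set of tests" of the kind Cera describes can be as simple as encoding the risk team's rules as checks that run on every iteration instead of once a quarter. Below is a hedged Python sketch of such a compliance gate; the configuration keys and rules are hypothetical examples, not an actual compliance standard:

```python
# Sketch of an automated compliance gate: rules the risk team has
# pre-approved, run on every release instead of quarterly by hand.
# The configuration keys and thresholds here are hypothetical.

COMPLIANCE_RULES = {
    "debug_mode": lambda v: v is False,       # no debug builds in prod
    "tls_min_version": lambda v: v >= 1.2,    # require modern TLS
    "admin_accounts": lambda v: len(v) <= 5,  # limit privileged users
}

def compliance_check(config):
    """Return the list of rule names the given config violates."""
    violations = []
    for rule, passes in COMPLIANCE_RULES.items():
        if rule not in config or not passes(config[rule]):
            violations.append(rule)
    return violations

release_config = {
    "debug_mode": False,
    "tls_min_version": 1.0,
    "admin_accounts": ["alice", "bob"],
}
violations = compliance_check(release_config)  # TLS version too old
```

Because the rules live in code, the compliance team reviews and approves the rule set once, and every subsequent iteration is checked against it automatically.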
One key to success, especially when the goal is improving code quality and resiliency, is to enable developers to test the code in their own virtual environments that are identical to their production environment.
"Testing is often on a shared infrastructure that, using something like Puppet, can be configured to be exactly the same as the continuous integration production system. And that's good enough for the developers. They're getting their tight feedback loop. They're testing code in pretty much the same way as it's going to be tested in the official continuous integration pipeline, and yet you can still manage a tight change control process in the merge from test to production," Kersten says.
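One way to verify the parity Kersten describes is to diff the test environment's definition against production before a merge is allowed. The sketch below models each environment as a simple package-to-version map, a stand-in for what a configuration tool like Puppet derives from its manifests; it is not Puppet's actual API, and the package versions are made up:

```python
# Sketch of a test-vs-production parity check. Environments are
# modeled as package -> version maps (hypothetical data; a stand-in
# for configuration-management state, not any real tool's API).

def environment_diff(test_env, prod_env):
    """Return {package: (test_version, prod_version)} for mismatches."""
    diff = {}
    for pkg in set(test_env) | set(prod_env):
        t, p = test_env.get(pkg), prod_env.get(pkg)
        if t != p:
            diff[pkg] = (t, p)
    return diff

prod = {"nginx": "1.24.0", "openssl": "3.0.13", "python": "3.11.9"}
test = {"nginx": "1.24.0", "openssl": "3.0.12", "python": "3.11.9"}

drift = environment_diff(test, prod)  # flags the openssl mismatch
```

An empty diff means the developer's environment genuinely mirrors production, which is what makes the local test results trustworthy.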
None of this is to say that the move to continuous integration and deployment and maintaining, or even improving upon, security is a given -- or even easy. Bad processes and shortcuts will come back to haunt the organization as bugs and security holes are identified while the applications are live in production -- when it tends to be more expensive to remedy serious flaws. "I think continuous integration and deployment fall down when you don't have a very tightly disciplined development organization, and it's just open season on the production infrastructure. This actually becomes counterproductive because the feedback loops become longer and longer," Kersten says.
That certainly creates a "debt" of work that very likely will need to be done at some point because mistakes need to be corrected.
Still, most experts agree, when done right, continuous integration and deployment environments can improve the security and resiliency of the software they build. "I think you're going to start to see more secure deployments and more secure code going out. The quality will improve dramatically," says Behr.