For companies looking to differentiate themselves in a digital world, developing new applications that can be launched to market quickly is critical.
The ability to innovate fast is often the competitive edge that allows an organization to scale rapidly, particularly in a technology-driven market such as banking or healthcare. And as consumers generate more data than ever before, the opportunity for smart companies is to feed that information into application development, improving the customer experience and generating new revenue streams.
And yet no company should develop applications that risk losing the trust of the customer. Every application that relies on customer data must use it in a responsible way, and security must be baked in from the start.
Speed-to-market, innovation and security are the most important considerations when developing applications, but speed and security are often seen as mutually exclusive, with one coming at the cost of the other. Typically, when a choice has to be made between them, the desire for speed and innovation wins out.
The false belief that security hampers innovation is exposing organizations to severe security vulnerabilities, and ultimately damaging their reputations. Figures from HPE and Gartner indicate that 80% of attacks happen at the application layer, and many recent high-profile breaches that hit the headlines were carried out by exploiting applications.
The reason applications are such a target is that security tends to be 'brushed onto' an application as it nears completion, rather than 'baked in' from the first line of code. To extend the cooking analogy, organizations must consider the recipe and ingredients - the processes and tools - needed to make security an integral part of development from day one, in order to minimize risk and earn the trust of the user. And contrary to popular belief, a security-by-design approach does not slow development down.
Baking in Security from Day One
When embarking on application development there are two main vulnerability pain points to look out for. The first is spotting flaws in the application design. A poor design, in itself, causes security vulnerabilities.
The second is coding flaws. For an application to be secure, coding and design must be tightly aligned. A good example is Apple's iPhone: the close alignment between the coding and the design of the phone tightens its defenses against outside threats. Many of the problems encountered in the software industry stem from a mismatch between the design and the final product. Software is written by developers, and if they are not writing good code, the risk of vulnerabilities emerging is higher.
Design flaws are difficult to address, but there are tools already on the market that detect coding flaws with up to 99% accuracy. Organizations need to work with their developers to instill an increased awareness of cybersecurity and provide the tools needed to work it into their current processes - for example, a tool that analyzes a developer's code in near-real time and alerts them to security flaws as they write. Not only does this catch vulnerabilities early, streamlining the process and enhancing security, it helps stimulate a culture shift towards a more 'baked-in' approach to secure coding.
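To make the idea concrete, here is a minimal sketch of such a checker: a hypothetical function that scans source lines as they are written and flags known-risky patterns. The pattern list and function names are illustrative only - a real analysis tool goes far deeper than pattern matching and is not any specific commercial product.

```python
import re

# Illustrative risky patterns a near-real-time checker might flag.
# These rules and names are hypothetical, not from any real tool.
RISKY_PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval() can allow code injection"),
    (re.compile(r"\bpickle\.loads\s*\("), "unpickling untrusted data is unsafe"),
    (re.compile(r"password\s*=\s*[\"'].+[\"']"), "hardcoded credential"),
]


def check_line(line: str, lineno: int) -> list[str]:
    """Return warnings for one line of code, as an editor plugin might."""
    return [
        f"line {lineno}: {message}"
        for pattern, message in RISKY_PATTERNS
        if pattern.search(line)
    ]


def check_source(source: str) -> list[str]:
    """Scan a whole source snippet and collect every warning."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        warnings.extend(check_line(line, lineno))
    return warnings
```

Running such a check on every keystroke or save is what turns security review from an end-of-project audit into part of the everyday coding loop.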
Considering all Third Party Ingredients
Identifying vulnerabilities within your own application is only half of the battle, as no application runs in a vacuum. Applications send data to and receive data from other components, such as web service interfaces, REST APIs or Open Source libraries. For instance, if you want to include a login, there are several Open Source login frameworks available, but just like custom code, Open Source isn't immune to vulnerabilities.
For example, the Shellshock bug of a few years ago was a direct result of a vulnerability in the code of an Open Source component, one that is still affecting servers around the world. There are countless Open Source components available, but developers need to be cautious with any code they haven't developed and tested themselves, and such components need to be properly integrated into the development process.
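One lightweight way to build that caution into a development process is to audit declared dependencies against a list of known-vulnerable versions before a build is accepted. The sketch below assumes the advisory data is a simple in-memory mapping; the component names and versions are invented for illustration, and a real project would pull this data from a vulnerability database feed.

```python
# Hypothetical advisory list: component name -> versions with known flaws.
# In practice this data would come from a vulnerability database feed.
KNOWN_VULNERABLE = {
    "example-login-framework": {"1.0.0", "1.0.1"},
    "example-shell-wrapper": {"2.3.0"},
}


def audit_dependencies(dependencies: dict[str, str]) -> list[str]:
    """Return a report line for each dependency pinned to a vulnerable version."""
    return [
        f"{name}=={version} has a known vulnerability; upgrade before shipping"
        for name, version in sorted(dependencies.items())
        if version in KNOWN_VULNERABLE.get(name, set())
    ]
```

Wiring a check like this into the build means a Shellshock-class flaw in a borrowed component is caught before deployment, not after a breach.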
Molding a Cultural Shift
While these points give organizations a flavor of the kind of processes and tools developers need, the real challenge comes when they try to make security an integrated part of their culture. Many organizations are creating sound security policies, but they mean nothing unless there’s a process in place to enforce them.
One model is a zero-tolerance approach: if a vulnerable application is developed, the company refuses to deploy it. Over time, this drives a behavioral change and sends a strong message that speed must not come at the cost of security.
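Zero tolerance can be enforced mechanically rather than by decree: a deployment gate that refuses to release a build while any security finding remains open. A minimal sketch, assuming scan results arrive as a list of findings with severities (the type and function names here are illustrative):

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One open security finding from a scan (illustrative shape)."""
    rule: str
    severity: str  # e.g. "low", "medium", "high"


def may_deploy(findings: list[Finding]) -> bool:
    """Zero tolerance: block the release if any finding is open at all."""
    return len(findings) == 0


def deploy_or_block(findings: list[Finding]) -> str:
    """Describe the gate's decision for a build."""
    if may_deploy(findings):
        return "deploying build"
    return f"deployment blocked: {len(findings)} unresolved security finding(s)"
```

Because the gate admits no exceptions, teams quickly learn that the only route to production is fixing findings as they appear, which is exactly the behavioral change the policy is meant to drive.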
A Recipe for Security Success
Application security is clearly important, and it is becoming even more so as the digital world begins to bleed into the physical one. A hack leading to leaked customer information or stolen credit card details is bad enough, but we only need to look at the automotive industry for a glimpse at the threats that lie ahead. As cars have become more connected, they have grown susceptible to remote attacks that can do anything from changing the radio station to controlling the brakes and steering. If applications aren't secure, they will soon lead to physical threats, not just digital or reputational ones.
By encouraging teams to place cybersecurity at the heart of development at every stage of the life cycle, organizations can deploy applications that generate benefits without requiring costly and embarrassing patches.
By Mike Turner, VP Cybersecurity at Capgemini
Read the July EURO 2016 issue of Business Review Europe magazine.