
Creating Trust in the Shifting Digital Landscape: Best Practices in Application Development

October 6, 2015

Featured article by John Grimm, Senior Director, Thales e-Security

The gains in efficiency from mobile computing and cloud-based services have come at the price of increased risk that is sometimes hard to see. More and more business logic resides and executes on insecure devices. Given this landscape, anyone developing code that will run in distributed locations needs to help ensure the integrity of their software as it runs in environments over which they have minimal control.

Enterprises are going to have to figure out how to navigate these choppy waters. Facebook, for instance, has announced that as of October 1, 2015, it will require application developers to move to a more secure type of hashing algorithm in support of digital signatures for their apps – using SHA-2 rather than SHA-1. SHA-2 is a newer, stronger hash algorithm, and far less prone to collision attacks than the 20-year-old SHA-1. Facebook production engineer Adam Gross describes this change as “part of a broader shift in how browsers and websites encrypt traffic to protect the contents of online communications.”
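To make the change concrete, here is a minimal Python sketch comparing a SHA-1 digest with a SHA-256 digest (a member of the SHA-2 family) over the same payload; the payload itself is illustrative.

    import hashlib

    payload = b"example application package bytes"  # illustrative payload

    # 20-byte SHA-1 digest: a 20-year-old design, now considered weak against collision attacks
    sha1_digest = hashlib.sha1(payload).hexdigest()

    # 32-byte SHA-256 digest: part of the SHA-2 family now being required instead
    sha256_digest = hashlib.sha256(payload).hexdigest()

    print("SHA-1  :", sha1_digest)
    print("SHA-256:", sha256_digest)

The wider digest is only part of the benefit; SHA-2’s stronger internal design is what makes collisions so much harder to construct than for SHA-1.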

The importance and value of signing keys must not be overlooked as organizations weigh their options. Although signing keys don’t encrypt data the way encryption keys do, their security is the backbone of code signing technology, an essential tool for verifying the source of software, proving it has not been tampered with since it was published, and confirming the identity of the publisher. This last point is particularly important: today’s major operating systems all present warning dialogs to users before installing software, highlighting the lack of information about the publisher if the software is unsigned. Over time, user awareness of the risks of installing software from unknown or untrusted publishers has increased significantly, making it more likely that users will abandon the installation on those grounds.

Signing Code

Digital signatures employ cryptographic techniques to dramatically increase security and transparency, both of which are critical in establishing trust and legal validity. This makes them superior to electronic versions of traditional signatures. However, merely requiring code to be signed does not ensure security.
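As a minimal sketch of signing and verification, assuming the Python cryptography package and an Ed25519 key (the article does not prescribe a particular algorithm), the following shows the two halves of the process; in practice the private key would live in a protected key store rather than in process memory.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Generate a signing key pair; in production the private key never leaves a protected store
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    code_artifact = b"contents of the release binary"  # illustrative artifact

    # The publisher signs the artifact with the private key
    signature = private_key.sign(code_artifact)

    # Anyone holding the public key (e.g. from the publisher's certificate) can verify it
    try:
        public_key.verify(signature, code_artifact)
        print("signature valid: artifact unmodified and signed by the key holder")
    except InvalidSignature:
        print("signature invalid: artifact altered or signed with a different key")

Note that verification only proves possession of the private key, which is why so much of what follows is concerned with keeping that key out of the wrong hands.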

A crucial underlying aspect of improving the assurance level of a code signing process is strong protection of the private signing key. If a code signing key is lost, publishing further software upgrades for existing smart devices can be hampered until the key is recovered or replaced. If a key is stolen, or the signature is performed using a weak algorithm, an attacker may be able to sign a malicious upgrade that either steals sensitive data or renders potentially millions of devices inoperable.

If the private key becomes known to anyone besides the authorized entity, then—just as with any PKI (Public Key Infrastructure)-based technology—that individual can create digital signatures that will be seen as “valid” when verified using the associated public key and will appear to come from the organization identified in the associated digital certificate. Private key compromise was one of the cornerstones of the infamous Stuxnet attack five years ago.

Advances in Attacks

A rise in malware in recent years has heralded a shift in the threat landscape. Business applications running on host servers are increasingly vulnerable to advanced persistent threats (APTs), introduced through malware, as well as insider attacks and hacking.

APTs are particularly tricky because malicious actors can change application code or device firmware (that’s what makes them “advanced”) without being noticed (that’s what makes them “persistent”). The threats are significant and don’t necessarily involve just corporate data theft but extend to malware on critical national infrastructure such as a flight computer in a plane, smart grids or even traffic lights. This becomes an even greater concern in light of the rising number of Internet-connected devices that are now routinely updated over the Internet. From smartphones to TVs, game machines to routers and industrial control equipment, upgrades can be anything from a new operating system to a new application or application plug-in. The rise of the “app store” has further increased the range and number of applications that are downloaded over the Internet, with end users giving little thought to the author’s credentials. Against this backdrop, the potential impact of losing control of a code signing key could be catastrophic.

APTs use stolen private keys connected to valid digital certificates. This threat is putting many software-producing organizations, online service providers and enterprise IT organizations under pressure to increase the security assurance level of their code signing process as well as expand the scope of software being signed to include scripts, plug-ins, libraries and other tools. These requirements can be driven by multiple factors, but all tie back to reducing the risk of malicious software alteration, and the potential for associated reputation damage and revenue loss.

The lure of application code is that it provides targeted access to high-value data. Even if your data is encrypted in your storage environment, it will eventually be used by—and potentially exposed by—an application, at the point of use. What’s more, high-value applications are easy to identify – it is not hard for an attacker to work out that the billing system accesses account information for current, active users and could provide laser-like access to this valuable data.

Attacks at the application level run in stealth mode and are often very difficult to detect. They are often capable of covering their own tracks, turning off detection mechanisms and faking audit log entries. From an organization’s perspective, inability to quickly detect attacks can lead to long-term breaches and high volumes of data theft.

Hidden Key Vulnerabilities

Although it’s well known that lost or stolen signing keys can pose a real threat, there are a number of factors that can make them challenging to protect. The first is that signing keys are typically held on developer workstations. Most developers are much more focused on writing code than on system security, and attackers are wise to this.

Centralized code signing approval processes are the answer, but they can be challenging for medium to large software organizations, where the volume and distribution of software build stations warrants shared services and resources (and therefore a shared signing resource that can accommodate signing requests from multiple platforms).
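As a purely hypothetical sketch of such a shared signing service, the fragment below accepts requests from multiple build stations, applies a simple approval policy, and only then signs; the station names, approver names and the in-memory key are stand-ins, not part of any specific product.

    from dataclasses import dataclass
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    @dataclass
    class SigningRequest:
        build_station: str    # which build system is asking
        artifact_hash: bytes  # digest of the artifact to be signed
        approver: str         # who approved the release

    # Illustrative policy: only known build stations and named approvers may trigger a signature
    ALLOWED_STATIONS = {"ci-linux", "ci-windows", "ci-mobile"}
    ALLOWED_APPROVERS = {"release-manager-1", "release-manager-2"}

    _signing_key = Ed25519PrivateKey.generate()  # stand-in for a key held in an HSM

    def sign_if_approved(request: SigningRequest) -> bytes:
        """Central choke point: every signature is policy-checked and auditable."""
        if request.build_station not in ALLOWED_STATIONS:
            raise PermissionError("unknown build station")
        if request.approver not in ALLOWED_APPROVERS:
            raise PermissionError("release not approved")
        # In a real deployment this call would be delegated to an HSM-backed signer
        return _signing_key.sign(request.artifact_hash)

The value of the choke point is less the code itself than the fact that every signature now passes through one policy-enforced, logged path.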

As a final note on this topic, most research on application security focuses on the early stages of the development lifecycle: making sure that developers create secure code with no inherent flaws, followed by code analysis to confirm that the design has remained secure. In the wake of the increase in malware-based attacks, we should be looking beyond code creation to guarantee secure code execution. How can we ensure that the app is not at risk of corruption, or vulnerable to “eavesdropping” or modification by rogue applications?

The Case for Hardware

A best-in-class solution for managing keys is to protect them in a dedicated key management device called a Hardware Security Module (HSM). HSMs provide a dedicated, certified environment for protecting private digital signing keys and performing the code signing operations. They offer three important strands of protection to keep the process effective: first, simplification of key backup and archival so that keys can never be lost; second, independently certified lifecycle protection against accidental or malicious key theft, from generation all the way through to destruction; and finally, enforcement of customizable controls over code signing procedures, including dual control, multi-factor authentication and other methods that protect against unauthorized use of the code signing keys.
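A minimal sketch of the third strand, dual control, might look like the following; the quorum size is illustrative, and in a real deployment both the check and the signing would be enforced inside the HSM rather than in application code.

    def dual_control_sign(artifact_hash: bytes, approvals: set, hsm_sign) -> bytes:
        """Require two distinct approvers before invoking the HSM-backed signer.

        `hsm_sign` is a placeholder for whatever signing call the deployed HSM exposes.
        """
        REQUIRED_APPROVALS = 2  # illustrative quorum
        if len(approvals) < REQUIRED_APPROVALS:
            raise PermissionError(
                f"need {REQUIRED_APPROVALS} distinct approvers, got {len(approvals)}"
            )
        return hsm_sign(artifact_hash)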

Hardware contributes significantly to an overall application security strategy because it is a tested anchor of trust in the vast sea of untrusted processes. Additionally, some HSMs even offer the ability to execute security-sensitive application code within the safe, certified confines of the HSM – allowing users to move that code off of traditional application servers and construct a new and stronger layer of defence for it.

It may seem strange, even somewhat anachronistic, to use security based on hardware as a solution to software and cloud-based vulnerabilities. However, it’s important to remember that all virtualized workloads are deployed on a hardware platform, in a physical location, at one point in time. It’s all very well that the content of the HSMs is safe and sound, but the applications that “talk” to the HSMs via APIs are clearly under increasing threat.

As an illustration, let’s look at Bitcoin. The protocol for signing bitcoin transactions involves multiple stages. Even if the signatures are performed in an HSM, the temporary and transitory “secrets” that make up the signature can be exposed to attackers if they are processed on host servers before being passed on to the HSM.
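To see why those transitory values matter, consider ECDSA, the scheme used to sign Bitcoin transactions: if the per-signature nonce k ever leaks alongside a signature (r, s) over message hash z, the private key d can be computed directly. The sketch below shows the arithmetic; n is the curve group order, and all inputs are assumed values rather than real transaction data.

    def recover_ecdsa_private_key(r: int, s: int, z: int, k: int, n: int) -> int:
        """Recover the ECDSA private key d from a single signature (r, s) over hash z
        once the per-signature nonce k is known; n is the order of the curve group.

        Follows from s = k^-1 * (z + r*d) mod n, i.e. d = (s*k - z) * r^-1 mod n.
        """
        return ((s * k - z) * pow(r, -1, n)) % n

This is exactly why the nonce, and not just the long-term private key, must stay inside the protected boundary.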

Automation and Time Stamping

To further augment the security of a code signing system, digital or electronic time stamping technology can be used to provide an additional means to validate precisely when code was signed via an embedded, trusted time stamp – creating an auditable pathway to a trusted source of time. This is integral to an organization’s ability to enforce non-repudiation for electronic signing, to verify data and application integrity, and to ensure long-term auditability of electronic records. While software-only time-stamping solutions are vulnerable to threats such as computer clock tampering, high-assurance hardware-based time stamping appliances increase the trustworthiness of the solution, thereby increasing business confidence in its integrity.
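A hedged sketch of the record such a process might produce is shown below: it bundles the artifact digest, the signature and a time value obtained from a trusted source, so a verifier can later establish both what was signed and when. The fetch_trusted_time callable is a placeholder for an RFC 3161-style time stamping authority request, not a real API.

    import hashlib
    import json

    def build_signing_record(artifact: bytes, signature: bytes, fetch_trusted_time) -> str:
        """Bundle what a verifier needs for long-term auditability: the artifact digest,
        the signature, and when the signing happened according to a trusted time source."""
        record = {
            "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
            "signature": signature.hex(),
            "signed_at": fetch_trusted_time(),  # placeholder for a trusted (e.g. RFC 3161) response
        }
        return json.dumps(record, indent=2)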

As organizations gain more trust in digital records and electronic time stamps, they can also increase their level of automation, reducing the cost of processes that today rely on paper-based signatures and dates. But with that increased process automation, it is vital that we are able to trust the infrastructure that sits underneath those processes – and that’s a challenge as mobility and interconnectivity multiply at a remarkable rate.

The threat landscape has expanded as technology has evolved. Cyber criminals constantly seek to exploit new “gaps” and vulnerabilities in new service and delivery models. As the types of possible threats grow, they become increasingly difficult to discover and manage, and their economic impact keeps rising. Just recently, we have seen a fresh wave of headlines regarding an attack, dubbed Duqu 2.0, against Russian security firm Kaspersky Lab using digital credentials stolen from Foxconn, one of the world’s top electronics makers.

Facebook made its announcement about changing hashing algorithm requirements in an effort to stay ahead of attackers and protect their digital signatures. But we must not forget that the private code signing keys themselves and digital certificates are critically important as well. Signing keys in the wrong hands can spell disaster for an organization, so the utmost care must be taken to ensure that both the systems that regulate keys and credentials and the people who oversee them are secure.


About the Author

John Grimm has over 25 years of experience in the information security field, starting as a systems and firmware engineer building secure cryptographic key distribution systems for government applications, and progressing through product management, solution development, and marketing leadership roles. He received his bachelor’s degree in electrical engineering from Worcester Polytechnic Institute in Worcester, Mass., and is a member of Tau Beta Pi, the engineering honor society.
