What the Open Source Community Learned from Heartbleed
May 5, 2014
Featured Article By Eren Niazi, CEO of Open Source Storage
Until April 7, at least two thirds of the digital world was unwittingly exposed to a security vulnerability in OpenSSL, the popular Open Source toolkit for secure sockets layer (SSL) and transport layer security (TLS). For roughly two years, billions of web users were sending passwords, payment card data and other sensitive information over connections that hackers could have compromised. Overall, the Open Source community proved it’s great at cleanup but can improve at prevention. The public response to Heartbleed was impressive, but we have much to learn from this near crisis.
In this article, I’ll take the “compliment sandwich” approach. First, we’ll examine what characteristics of Open Source facilitated a speedy and effective response, because we want to keep them around. Second, I’ll review where the open source community could have performed better. And finally, I’ll look at the good and bad to draw some takeaways that will help prevent similar security crises in the future.
What Went Right
Google security expert Neel Mehta discovered Heartbleed on March 21, 2014, and prepared a fix that same day. Word of Heartbleed went public on the morning of April 7, and by mid-morning, a new version of OpenSSL was available on OpenSSL’s web server. As of April 17, just 2% of websites in the Alexa top 1,000,000 were still vulnerable. Google and Facebook had in fact fixed their infrastructure so quickly that a password change wasn’t required.
The nearest comparison we have to Heartbleed is Apple’s SSL vulnerability, which the company revealed on February 21, 2014. The vulnerability had probably appeared in September 2012 when iOS 6.0 went live. However, Apple didn’t discover the bug until January 8th and did not fix the vulnerability until the February 21 and 25 patches. It’s a testament to Open Source security procedures that the response to Heartbleed was many times faster and more thorough.
What Went Wrong
The degree of behind-the-scenes info-swapping between March 21 and April 7 was alarming. The open source community currently has no standard process for disseminating vulnerability information, so I do not point my finger at anyone. However, I do think the risk of leaking this information to cybercriminals outweighed the benefits of tipping off friends over at the big tech companies. In the future, I would encourage those in the know to hold back chatter until a fix is publicly available. Had news of Heartbleed leaked to hackers, the digital world would have been a sitting duck.
Stepping back even further from the revelation of Heartbleed, clearly the biggest problem was the flaw in the code. According to The Sydney Morning Herald, Dr. Robin Seggelmann, a cybersecurity researcher from Münster University of Applied Sciences in Germany and a contributor to OpenSSL, made a simple programming error while he was a PhD student. The reviewer, Dr. Stephen Henson, a UK-based OpenSSL contributor, also missed the mistake. This isn’t a rare occurrence, but for widely used open source code, we clearly need a more reliable review process.
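For readers curious what a “simple programming error” of this magnitude looks like: Heartbleed (CVE-2014-0160) was a missing bounds check in OpenSSL’s TLS heartbeat handler. The code trusted the payload length claimed in the request and echoed back that many bytes, even when the record actually received was shorter, leaking whatever sat in adjacent memory. The following is a minimal sketch of that logic in Python, not the actual OpenSSL C code; the function names, record layout, and “adjacent memory” contents are illustrative only:

```python
def parse_heartbeat(record: bytes) -> bytes:
    """Vulnerable responder: trusts the attacker-supplied length field."""
    # Simplified record layout: 1-byte type, 2-byte claimed payload
    # length, then the payload itself.
    claimed_len = int.from_bytes(record[1:3], "big")
    payload = record[3:]

    # Simulate process memory sitting next to the record buffer,
    # e.g. leftover secrets from earlier requests (illustrative).
    adjacent_memory = b"SECRET-PRIVATE-KEY-MATERIAL"
    buffer = payload + adjacent_memory

    # VULNERABLE: no check that claimed_len <= len(payload),
    # so an oversized claim reads past the real payload.
    return buffer[:claimed_len]


def parse_heartbeat_fixed(record: bytes) -> bytes:
    """Patched responder: drops records whose claimed length lies."""
    claimed_len = int.from_bytes(record[1:3], "big")
    payload = record[3:]
    # The fix, in spirit: discard any heartbeat whose claimed
    # length exceeds the bytes actually received.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]


# An honest request claims 4 bytes and sends 4 bytes.
honest = b"\x01" + (4).to_bytes(2, "big") + b"ping"
# A malicious request claims 64 bytes but sends only 4.
evil = b"\x01" + (64).to_bytes(2, "big") + b"ping"

print(parse_heartbeat(honest))      # b'ping'
print(parse_heartbeat(evil))        # echoes 'ping' plus leaked secret bytes
print(parse_heartbeat_fixed(evil))  # b'' -- malformed record dropped
```

The point of the sketch is how small the difference is: one comparison between a claimed length and an actual length. A second reviewer, or one more round of review, is exactly the kind of process that catches it.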
The Takeaways From Heartbleed
Dr. Seggelmann and Dr. Henson do not deserve chastising – in fact, we should all feel grateful that researchers with their skills are contributing to Open Source code for free. If we have more Seggelmanns and Hensons in the world, we will have fewer Heartbleed scenarios. In fact, it is people who have never screwed up that I would fear more. Real security experts know how fallible they are.
And this is the basis of my primary takeaway: we need participation in open source development that is commensurate with its use. The ratio of contributors to users is too low. Particularly among those corporate developers who take a free ride on open source, we need higher participation. If you use it, you’re also responsible for its quality. If we have more code reviewers and more rounds of review, we will increase the probability of catching errors.
Second takeaway: let’s invest more in automation. Whether we use CFEngine, Puppet, Chef or another automation platform, the ability to roll out fixes to every machine quickly is what closes the gap between discovery and patching. Ten days to reach 98% coverage of the Alexa top 1,000,000 websites is good, but we can do even better.
Third, security audits are important and they should be performed on a regular basis. We discover vulnerabilities by battle testing everything. Complete security is impossible, and the moment we presume it is the moment we become vulnerable. Once we discover vulnerabilities, let’s not chit chat until a fix is publicly available.
Fourth, openness helped. The people who independently found this bug were researchers. With proprietary code, they could not have discovered the flaw and sounded the alarm.
We know Open Source can react quickly. We know there’s a dedicated community that has cleaned up Heartbleed. Let’s show our gratitude for those who contribute, rather than our disdain for those who made mistakes. Let’s turn our efforts to prevention, knowing full well this will not be the last Open Source security vulnerability we discover – but we can handle cleanup if we reach this point again.
Eren Niazi is the founder and CEO of Open Source Storage, which strives to lead the global marketplace in Open Source systems and solutions. Niazi is a pioneer of open source and has worked with some of the biggest names in technology to date. He has driven companies to unheard-of levels of growth and has generated more than $200 million in revenue.