
IT Briefcase Interview: State of Secret Sprawl Q&A

April 18, 2022

Thomas Segura – Content Writer @ GitGuardian

Could you give us the key findings of the report?

In 2021, GitGuardian Public Monitoring detected more than 6 million credentials, API keys, private encryption keys, and other sensitive data, collectively referred to as “secrets”. This is twice the number found in 2020. Three out of every 1,000 commits to GitHub leaked a secret, a frequency 50% higher than in 2020.

When it comes to enterprises’ internal repositories, this means that one AppSec engineer had to handle 3.4K occurrences of secrets on average. A typical company with 400 developers discovered 1,050 unique secrets left behind in developers’ code, with 13 occurrences per secret on average. The remediation effort required is beyond the capabilities of current AppSec teams (1 AppSec engineer for 100 developers).

Finally, the report also shows that all open-source platforms are affected: scanning Docker Hub public images, GitGuardian found an average of 6 secrets per 100 layers, with 4.62% of images exposing at least one secret. For attackers, it is yet another opportunity to find an access vector, just as we saw in the Codecov breach last year.
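To make the idea of secret detection concrete, here is a minimal sketch of the kind of pattern matching a secret scanner performs over a checked-out repository. The regexes are illustrative assumptions only, not GitGuardian’s actual detectors, which combine hundreds of specific matchers with entropy and validity checks:

```python
import re
from pathlib import Path

# Illustrative patterns only; a production scanner uses far more detectors
# plus entropy filters and validity checks to cut false positives.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "Generic assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_tree(root: str) -> None:
    """Walk a directory and report lines matching secret-like patterns."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

if __name__ == "__main__":
    scan_tree(".")
```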

Did you change your approach compared to last year?

Yes. In our 2021 report, our main point was to raise awareness about the sheer number of secrets pushed to GitHub every day, the associated risks, and how they are challenging traditional security practices.

I think the numerous large-scale attacks of the past year, especially supply-chain attacks, have unfortunately continued this work for us, as have the sensitive information leaks of all kinds we see announced week after week.

So this time we wanted to look at the problem from a solution angle: given the current labor shortage in application security, how can code security be improved? How can we help developers produce safer code without disrupting their workflow?

We started looking at what is happening on the internal corporate side to give a faithful view from the field, and I hope this will encourage companies to involve developers more closely in application security and create a shared responsibility model.

Can you tell us about some breaches related to leaked credentials?

Yes, as I was mentioning, Codecov is a well-known code coverage tool widely used in CI pipelines. It was breached because of a forgotten credential in its official Docker image. This allowed attackers to tamper with a downstream CI script and extract environment variables from the development pipelines of hundreds of companies.
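To illustrate why a credential shipped inside a public image is effectively a public leak, here is a small sketch that inspects an image’s configuration for secret-looking environment variables. It assumes the Docker CLI is installed, the image name is a placeholder, and it only checks the image config: a real review would also scan the file layers, build args, and history (the Codecov credential was not necessarily exposed this exact way):

```python
import json
import re
import subprocess
import sys

# Placeholder image name: pass any image you can pull as the first argument.
image = sys.argv[1] if len(sys.argv) > 1 else "alpine:latest"

# `docker image inspect` prints a JSON array describing the image,
# including the environment variables baked into its config.
raw = subprocess.run(
    ["docker", "image", "inspect", image],
    capture_output=True, text=True, check=True,
).stdout
config = json.loads(raw)[0]["Config"]

# Flag env vars whose names look secret-related; anyone able to pull the
# image can run this same check, which is what makes the leak so dangerous.
suspicious = re.compile(r"(?i)(key|token|secret|password|credential)")
for entry in config.get("Env") or []:
    name, _, value = entry.partition("=")
    if suspicious.search(name) and value:
        print(f"possible secret baked into {image}: {name}")
```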

Secrets in git repositories are also regularly mentioned in vulnerability disclosure programs; for example, Sakura Samurai (a white-hat group) found nearly forty credential pairs last year giving access to many Indian government IT systems.

You have to keep in mind that these are only the disclosed cases, but in almost all attacks, secrets are used in one way or another: not always as the initial access vector, but very often to escalate attackers’ privileges and move laterally into different systems.

According to the report, developers are more likely to push credentials to GitHub on weekends and public holidays. What can explain this?

I think it mostly comes down to developers working on personal projects with fewer security checks and being a bit less careful. You have to remember that GitHub is quite unique in this respect: if you have a personal account on GitHub.com and your organization also uses GitHub, you can use the same account for both!

The data (cf. the 2021 report) show that this blurs the line between work and personal development: corporate keys are very often leaked in personal git repositories, outside of corporate control and visibility.

Educating developers is paramount to stopping this, and that is exactly why we encourage companies to involve developers more closely in application security and create a shared responsibility model. Our data show that involving a developer results in closing 72% more incidents and remediating twice as fast as when AppSec professionals have to go it alone.

In the report, you make an explicit link between secret occurrences and AppSec teams’ workload. Why?

Looking more closely at the secrets found inside internal repositories and how they are used, one realizes that they are key to making the various pieces of an application work together. Microservices, managed cloud services, platforms, object storage, and so on are, most of the time, developed or owned by different teams.

When a security engineer revokes and rotates a secret, they first have to understand who was using it and then redistribute the new credential everywhere the old one appeared in the source code.

Because this is very time-consuming and tiring work, it is a good indicator of the remediation workload AppSec teams face. The best way to avoid overloading teams’ capabilities is, of course, to tackle the root cause of the problem, namely secrets leaking in the first place.
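As one illustration of the scoping step in that workflow, the sketch below uses git’s pickaxe search to list every commit, across all branches, whose diff added or removed a given leaked value, so the engineer knows which files and history referenced it before redistributing the rotated secret. The repository path and the example value (AWS’s documented sample access key ID) are placeholders:

```python
import subprocess

def occurrences_in_history(repo_path: str, leaked_value: str) -> str:
    """List commits (and the files they touch) whose diffs add or remove
    the leaked value, using git's pickaxe search across all branches."""
    return subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "--name-only",
         f"-S{leaked_value}", "--pretty=format:commit %H"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    # Placeholder repository path and secret value.
    print(occurrences_in_history(".", "AKIAIOSFODNN7EXAMPLE"))
```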

The end of the report is focused on recommendations. Do you feel this problem is sometimes underprioritized?

Yes. Although most would agree that controlling an organization’s secrets and making sure they don’t end up in source code is a security basic, the reality is that many treat it as a lost cause. And that is understandable, because anyone would be overwhelmed by such a volume of leaks: you have historical incidents, found when scanning the git history, plus real-time ones coming from new commits, and you need to investigate all of them.

Even if you know perfectly well that they represent a serious threat to the security of the development process, it can be hard to get started.

But the situation is not hopeless. The key is to start thinking of the work to be done as technical debt and to bring DevOps engineers, developers, and security analysts together to build their own layered approach with whatever fits their workflow: shield commits with a CLI or a native integration, and manage incidents with an API, a SIEM, or a web dashboard (see the sketch below). We know that with a progressive approach and the right tool, it is possible to move toward a “zero secrets-in-code” policy!
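As one example of the “shield yourself with a CLI” layer, below is a toy pre-commit hook that scans the staged diff for secret-like patterns and aborts the commit on a match. It is only a sketch: the patterns are illustrative, and a dedicated scanner such as GitGuardian’s ggshield covers far more detectors and validates its findings. To try it, save it as .git/hooks/pre-commit and make it executable:

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block a commit if the staged diff adds a line
matching a secret-like pattern. Patterns are illustrative only."""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                   # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),  # private key header
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S{16,}"),
]

# Only look at lines being added in the staged changes.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

findings = [
    line for line in diff.splitlines()
    if line.startswith("+") and not line.startswith("+++")
    and any(p.search(line) for p in PATTERNS)
]

if findings:
    print("Possible secrets in staged changes, aborting commit:")
    for line in findings:
        print(f"  {line}")
    sys.exit(1)  # a non-zero exit code makes git abort the commit
```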

Author Bio

Thomas has worked both as an analyst and as a software engineering consultant for various large French companies. His passion for tech and open source led him to join GitGuardian as a technical content writer. He now focuses on clarifying the transformative changes that cybersecurity and software are going through.

 
