
Why is Jenkins the de facto Continuous Delivery engine?

May 22, 2015

Featured article by Sacha Labourey, the Founder and CEO of CloudBees

Integration is king!

The first part of the answer lies in Jenkins' core architecture. Jenkins is built around the notion of plugins. Jenkins by itself, also known as Jenkins core, is relatively small; a huge part of the added value lies in the numerous plugins available to extend its core features. Whatever your requirements, you are very likely to find the plugin you need: in early 2015, Jenkins had more than 1,000 plugins available, able to serve the most exotic setups. As you engage in a continuous integration and/or continuous delivery strategy, you won't be starting from a blank slate. You'll have to coordinate and automate work across a multitude of operating systems, build tools, testing tools, authentication systems, deployment targets, etc. If the CI/CD tool you choose can't plug into those, you won't be able to get the job done, plain and simple. Furthermore, as more vendors and solutions come to market with a desire to fit into a CD ecosystem, they are likely to make a Jenkins integration their first objective, as this is where the critical mass of CD resides. From that standpoint, Jenkins acts as the "integration hub" that will not only integrate but also coordinate the work between all of those various systems.

More than just integrating those systems, Jenkins is used to implement the actual workflow logic that sustains your continuous delivery pipeline. And this logic can get pretty complex, with parallel branches, fallback logic, retry logic, etc. Jenkins Workflow is the most sophisticated CI workflow tool you'll find on the market, and the one with the most integrations, period. At any point in time, more than 100,000 servers are running Jenkins jobs around the world, 24×7.
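To make this concrete, here is a minimal sketch of what such workflow logic could look like in the Jenkins Workflow (Groovy) DSL of that era. The stage names, Maven profiles, and deploy script are hypothetical placeholders, not part of any real project:

```groovy
// Scripted Jenkins Workflow: parallel branches plus retry logic.
// All shell commands below are illustrative placeholders.
node {
    stage 'Checkout'
    checkout scm

    stage 'Build & Test'
    // Run unit and integration tests as parallel branches.
    parallel(
        unit:        { sh 'mvn -B test' },
        integration: { sh 'mvn -B verify -Pintegration' }
    )

    stage 'Deploy to staging'
    // Retry logic: tolerate transient deployment failures.
    retry(3) {
        sh './deploy.sh staging'
    }
}
```

The `parallel` and `retry` steps shown here are exactly the kind of pipeline primitives the article refers to: they encode branching and fallback behavior directly in the workflow definition rather than in ad hoc scripts.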

Pipeline vs. production

In many organizations, the adoption of Jenkins starts within development teams as a way to achieve team-specific continuous integration. Those Jenkins setups, in many cases, are not considered business critical and, for the most part, remain totally invisible to IT management, let alone the business.

So when moving to a full-fledged continuous delivery process, the temptation is great to simply extend the existing systems and build from there. While this is technically fine, it actually introduces a threat to your CD strategy.

Nobody would challenge the assertion that a production environment is to be considered critical. If some part of your production systems goes down, chances are high you'll have a problem. As you move towards continuous delivery, you'll be building an automated "pipeline" that starts from your software code changes and flows down to fully tested, integrated binaries that get pushed to production. The knowledge that used to sit with specific individuals – developers, DevOps, IT Ops, etc. – will now be formalized as part of a continuous delivery workflow definition that encompasses all of your systems and checks. This creates a much more robust and faster environment, but it also means any change you want to see in production has to go through this pipeline, whether new features, bug fixes, or security patches.

Now, imagine that a security vulnerability is discovered in the code or systems running in production, but that the Jenkins environment is still the one your development team once set up on a spare machine (possibly even an old PC running under a desk). If that Jenkins environment is down, your continuous delivery pipeline can't execute, which in turn means you won't be in a position to push any changes to production, including security fixes.

CloudBees has been working extensively on those issues as we built the Jenkins-based CloudBees Continuous Delivery Platform. We offer a high-availability solution for Jenkins, as well as the ability to maintain farms of Jenkins instances from a single point of control, apply updates and changes en masse, white-/black-list plugins, etc. This is all part of making Jenkins a strategic part of your IT strategy.

Conclusion

Jenkins represents the de facto leader for continuous integration and continuous delivery thanks to its extreme stability and extensibility, which make it possible to fully customize it to your requirements and integrate it with all of the systems you operate. Furthermore, it now features the leading and most flexible workflow implementation on the market, an essential requirement for any meaningful continuous delivery implementation. As you adopt it, be prepared to treat your Jenkins environment as a first-tier production system that must remain up at all times: it sustains your entire path to production, making it one of the key systems to protect in your IT environment.

 
