Maven archetypes

We live in a microservices world lately. It does not matter where you go: big, medium or small companies, or start-ups, everyone is trying to implement microservices or migrating to them.

Maybe not initially, but when companies achieve a certain level of maturity, they start having a set of common practices, libraries or dependencies they apply or use in all the microservices they build. Let’s say, for example, authentication or authorization libraries, metrics libraries, … or any other component they use.

When this level of maturity is achieved, usually, to start a project we basically take the “How-To” article in our wiki and start copying and pasting common code and configurations, and creating a concrete structure in the new project. After that, everything is set to start implementing business logic.

This copy-and-paste process does not usually take a long time but it is a bit tedious and prone to human error. To make our lives easier and to try to avoid unnecessary mistakes, we can use Maven archetypes.

Taken from the Maven website, an archetype is:

In short, Archetype is a Maven project templating toolkit. An archetype is defined as an original pattern or model from which all other things of the same kind are made. The name fits as we are trying to provide a system that provides a consistent means of generating Maven projects. Archetype will help authors create Maven project templates for users, and provides users with the means to generate parameterized versions of those project templates.

In the next two sections, we are going to learn how to build some basic archetypes and how to build a more complex one.

Creating a basic archetype

Following the Maven documentation page, we can see there are a few ways to create our archetype:

From scratch

I am not going to go into details here because the Maven documentation is good enough and because it is the method we are going to use in the “Creating a complex archetype” section below. You just need to follow the four steps the documentation shows:

  1. Create a new project and the pom.xml for the archetype artifact.
  2. Create the archetype descriptor.
  3. Create the prototype files and the prototype pom.xml.
  4. Install the archetype and run the archetype plugin.
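
For reference, a minimal sketch of this last step, assuming we run it from the archetype root folder:

mvn install
mvn archetype:generate -DarchetypeCatalog=local

The first command installs the archetype in our local repository; the second one offers the locally installed archetypes so we can pick ours and generate a new project from it.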

Generating our archetype

This is a very simple one, also described in the Maven documentation. Basically, you use Maven to generate the archetype structure for you:

mvn archetype:generate \
    -DgroupId=[your project's group id] \
    -DartifactId=[your project's artifact id] \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-archetype

As simple as that. After executing the command, we can add our personalisations to the project and proceed to install it as seen before.

From an existing project

This option allows us to create a project and, when we are happy with how it looks, transform it into an archetype. Basically, we need to follow these steps:

  1. Build the project layout from scratch and add files as needed.
  2. Run the Maven archetype plugin on an existing project and configure it from there:

mvn archetype:create-from-project

This will generate an “archetype” folder inside the “target” folder:

target/generated-sources/archetype

We just need to copy this folder structure to the desired location and we will have our archetype ready to go. It needs to be installed as usual before we can use it.
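
A minimal sketch of those two steps, assuming a hypothetical destination folder:

cp -r target/generated-sources/archetype ~/workspace/my-archetype
cd ~/workspace/my-archetype
mvn install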

Using our archetype

Once we have installed our archetype, we can start using it:

mvn archetype:generate \
    -DarchetypeGroupId=dev.binarycoders \
    -DarchetypeArtifactId=simple-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DgroupId=org.example \
    -DartifactId=project1

This will create a new project using the archetype. The information we need to modify in the previous command is:

  • archetypeGroupId: the group id we defined when we created the archetype.
  • archetypeArtifactId: basically, the name of our archetype.
  • archetypeVersion: the version of the archetype we want to use, in case the archetype has evolved over time and we have different versions.
  • groupId: the group id our new project is going to have.
  • artifactId: the name of our new project.

Deleting our archetype

Right now, after installing our archetype, it is only available in our local repository. This allows us to delete the archetype in a very simple way: we just need to take a look at the archetype catalogue in our repository and manually remove the archetype entry. We can find this file at:

~/.m2/repository/archetype-catalog.xml
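
Inside, every installed archetype has its own entry. A rough sketch of what our entry would look like (the exact content depends on the archetype’s pom.xml):

<archetype-catalog>
    <archetypes>
        <archetype>
            <groupId>dev.binarycoders</groupId>
            <artifactId>simple-archetype</artifactId>
            <version>1.0-SNAPSHOT</version>
        </archetype>
    </archetypes>
</archetype-catalog>

Removing the matching <archetype> node is enough to make it disappear from the catalogue.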

Creating a complex archetype

For most cases, the ways to create archetypes we have already reviewed should be enough but, not for all of them. What happens if we have some modules whose names we want to define when creating the project? Or classes? Or some other customisations?

Luckily, Maven gives us some flexibility, allowing us to define some variables and use some concrete patterns to name folders and files in our archetypes in a way that they will be replaced when the projects using the archetype are created.

As a general rule, we will be using two kinds of notation for our dynamic elements:

  • Defined in files: ${varName}
  • Defined in file system: __varName__ (two underscores)

This will help us to achieve our goals.

As an example, I am going to create a small complex archetype to see all this in action. The projects created with the archetype are going to have:

  • A parent project with <artifactId> as its name.
  • Two modules called <artifactId>-one and <artifactId>-two.
  • A main class called <classPrefix>OneApp and <classPrefix>TwoApp respectively.
  • The classes will be located in the packages <package>.one and <package>.two respectively.
  • The module One will have a properties file stored in the resources folder.

The code of the archetype can be found at the GitHub repository.

The first file we can check is archetype-metadata.xml, located in META-INF/maven.

We can see here the definition of the variables classPrefix and groupId, the latter with a default value assigned:

<requiredProperties>
    <requiredProperty key="classPrefix" />
    <requiredProperty key="groupId">
        <defaultValue>dev.binarycoders</defaultValue>
    </requiredProperty>
</requiredProperties>

After that, we can see the definition of the project structure we want to achieve. In this case, we have the fileSets node with the files of the parent project and, after that, the definition of the modules we want to include. Here, we should pay special attention to the way the module attributes are defined:

<module id="${rootArtifactId}-one"
        dir="__rootArtifactId__-one"
        name="${rootArtifactId}-one">

As we can see, they use the notation described before: the “${}” notation for variables in files and the “__” notation (two underscores) for file system elements. The rest of the file is pretty simple.

If we explore the folder structure, we can see a few elements defined with this two-underscores notation, like the module names and the class names. These are dynamic elements that will take their names from the variables defined when the project is created.
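
As an orientation, a sketch of how the resources folder of the archetype could look for the project described above (the exact layout lives in the GitHub repository):

src/main/resources/archetype-resources
├── pom.xml
├── __rootArtifactId__-one
│   └── src/main/java/one/__classPrefix__OneApp.java
└── __rootArtifactId__-two
    └── src/main/java/two/__classPrefix__TwoApp.java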

We can define different fileSets for the files we want to be copied to our generated project. For example, we can copy all the .java files found inside the path src/main/java:

<fileSet filtered="true" packaged="true" encoding="UTF-8">
    <directory>src/main/java</directory>
    <includes>
        <include>**/*.java</include>
    </includes>
</fileSet>

Finally, if we explore one of the classes, we can see the following content:

#set( $symbol_pound = '#' )
#set( $symbol_dollar = '$' )
#set( $symbol_escape = '\' )
package ${package}.one;

public class ${classPrefix}OneApp {
}

The first three lines are just aliases that allow us to use symbols that have a specific meaning in the templates as plain literals.

After that, we can see the package definition, which is going to be built with one part dynamically added and one part statically defined. We can see the class name follows the same pattern.

It deserves special attention that, although we are defining packages in the classes, we are not replicating this structure in the project layout; Maven will take care of that for us. This is because, when we defined the fileSet, we set the attribute packaged to true. If this attribute is set to false, we will be in charge of defining the desired structure.

It is worth mentioning that, because the files in the Maven archetype act as Velocity templates, we can introduce some logic and some dynamic content in our files. For example, to print something or not in a given file:

<requiredProperty key="greeting">
    <defaultValue>y</defaultValue>
</requiredProperty>

#if (${greeting} == 'y')
    // Hello, welcome here!
#end

This variable can be set using the command line when we generate our new project:

-Dgreeting=n

Finally, there is one more interesting thing we can do. We can use a post-generation script written in Groovy to execute some actions after the project has been generated. One interesting use is to remove undesired files based on some of the variables defined when generating the project. This script must be located in the folder src/main/resources/META-INF with the name archetype-post-generate.groovy.

import groovy.io.FileType

// Root folder of the generated project
def rootDir = new File(request.getOutputDirectory() + "/"
    + request.getArtifactId())
// Folder of the generated "-one" module
def oneBundle = new File(rootDir, request.getArtifactId()
    + "-one")

// Package selected by the user when generating the project
def projectPackage = request.getProperties().get("package")

// Delete the unwanted file, turning the package into a path
assert new File(oneBundle, "src/main/java/"
    + projectPackage.split("\\.").join('/')
    + "/toDelete.txt").delete()

With this, every time we use the archetype to create a new project, we will obtain the desired results.

We can use our recently created archetype with:

mvn archetype:generate \
    -DarchetypeGroupId=dev.binarycoders \
    -DarchetypeArtifactId=simple-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DgroupId=org.example \
    -DartifactId=project

The result is the generated project structure with the two modules and, inside them, the classes named after the classPrefix we provided (screenshots omitted).

This is all. I hope it is useful.

See you.


Starting a project

There are multiple ways to learn how to code. Some people do it with some kind of formal education like high school, university, a master’s degree… Other people do it through bootcamps or the more modern initiatives we have been seeing lately. And, finally, there are people who learn by self-studying. No matter which one is your case, at the end of the day, the best way to learn and acquire coding skills is to code.

As developers, we code (we do other things too, not just code). Usually, if we do it professionally, enterprises have their tools and procedures. That is not the scope of this article. This article is going to focus on small projects we start outside these corporate environments, just for fun, for learning purposes or, because why not? And, I am talking about projects, not just code snippets or small demos trying something we have read in an article or blog, or testing this crazy idea we have had in mind for the last few days.

The purpose of the article is to offer some guidance on possible free tools we can use to work on a project following more or less a methodology and using some tools similar to, if not the same as, the ones you can find in a corporate environment.

The focus of the article is people learning how to code, to allow them to see a bigger picture, people starting a long-term open-source project, or just anyone curious. It is going to focus not on the coding part but on the areas around the project.

Every project, when it starts, needs a way to manage the code and a way to manage the effort. I am certain all of you agree on the first one but, I can hear from here people questioning the second one. Well, initially, and especially if we are the only developer, we can think it is not necessary but, in the long run, even more if we expect contributions in the future, it is going to be a very useful thing to have. It will keep our focus, it will make us think in advance and do some planning and, it will give us a history of the project and of why we took a certain decision at a certain point or why we added a concrete functionality. And, if you are learning how to code, it will give you the bigger picture I was talking about before.

To manage our code, we need some kind of version control system for tracking changes. There are a few of them out there, like Git, Subversion, Perforce, Team Foundation Version Control or Mercurial. If you stay long enough in the industry, you will see all of them but, in this case, my preference is Git. There are some cloud platforms that offer you an account to use it (GitHub, GitLab, Bitbucket). All of them are similar and, at this level, there is no big difference; I invite you to test all of them but, in this case, I am going to recommend GitHub. I like it, I am used to it, it is hugely extended among the open-source community and it integrates easily and smoothly with the other tools we are going to see in this article.

To manage our efforts, we need some kind of project management software for tracking tasks and their progress. As in the previous case, and here even more, there are a lot of them out there. One that is very simple to use and very extended is Trello. Trello offers customisable boards we can use to track efforts and progress and to plan in advance. In addition, there are a lot of useful plugins to improve and highly customise the boards and the cards (tasks) in them. Here, just a quick mention of the ‘Projects’ tab in GitHub, which allows you to create some automated Kanban boards; it is interesting to play with it but I have never seen it in a corporate environment, where I have seen Trello multiple times. The first place here goes to JIRA.

Once we start coding, creating pull requests and merging code in our repository, it is nice to have a CI/CD environment in place. There are multiple advantages to this but, even if we are just learning, it will keep our code healthy, making sure that any change made still compiles and passes all our tests. Again, in this category, we can find some cloud platforms and on-premises solutions but, for the article, I have chosen Travis CI (the dot org). It is simple to register, it has great integration with GitHub, and it is powerful enough and well documented.

One thing developers should be worried about is the quality and maintainability of the code they write. And, I am not talking just about whether our code passes all the tests; I am talking about bugs, vulnerabilities, test coverage, code duplication and format (we should be using our IDE auto-format or save actions for the last one). To cover all this list, we can find the tool SonarQube and its cloud solution, SonarCloud. These tools report all the problems found every time a build is done, allowing us to correct them as soon as possible and not let them pile up, only to be found when there is a code audit or similar. Again, it is an easy tool to manage and to integrate with GitHub and Travis CI.

Are these tools the best ones? The most useful ones? Yes, no, maybe. I am a strong believer that there are no perfect tools, only tools perfect for a job and, this is what sometimes we as developers need to decide: which tool fits the job best. The tools in the article are just examples and, they were perfect for the article.


The Sec in SecDevOps

More or less everyone that works in the software industry should be aware by now of the term DevOps and the life cycle of any application. Something similar to:

  • Plan: Gathering requirements.
  • Code: Write the code.
  • Test: Executing the tests.
  • Package: Package our application for the release.
  • Release: When the application artifacts are generated.
  • Deploy: We make it available to the users (QA, final users, product, …).
  • Operate: Take care of the application's needs.
  • Monitor: Keep an eye on the application.

All these bullet points are a rough description of the main stages of an application's life, no matter what methodologies we are using.

The thing is that, nowadays, we should not just be aware of the term DevOps; we should be aware of, and using or introducing, the term SecDevOps in our environments.

The ‘Sec’ stands for ‘Security’ and it means, basically, adding the security component, and the help of security professionals, to all the stages of the cycle, not just after the deployment as usually happens.

In addition to the common tasks we have in the different stages, if we add the ‘Sec’ approach, we should be adding:

  • Plan: We should be analyzing the Threat Model. For example, user authentication, external servers, encryption, … Here, we will be generating security-focused stories.
  • Code: In this stage, the security professional should be training and helping the team with security concepts: tools, terminology, attacks. This will help developers be aware of possible problems and write defensive measures integrated into their code, or follow best practices and recommendations.
  • Test: In this stage, specific tests to check security capabilities will be created, allowing us to detect possible problems early.
  • Package: Here, we will check for vulnerabilities in external libraries and in our containers, or run static code analyzers (see the sketch after this list).
  • Release: This stage can be combined with the previous one from a security point of view. Some platforms where containers are stored, for example, integrate vulnerability scanners.
  • Deploy: Here, we can run dynamic security tests with typical ethical hacking tools.
  • Operate: Here is when the Red Team steps up in addition to resilience checks.
  • Monitor: We can analyze all our logs (we hope previously centralized) trying to discover security problems.
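
As an illustration of the ‘Package’ stage, a minimal sketch of a dependency vulnerability check wired into a Maven build, assuming the OWASP dependency-check plugin (the version is just an example):

<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>6.5.3</version>
    <executions>
        <execution>
            <!-- Scan the project dependencies for known vulnerabilities -->
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>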

I hope this gives us a simplified view of how we can integrate security awareness and processes in our application cycles.


CD: Continuous Deployment

At this point, we are an organization that uses Continuous Integration and Continuous Delivery but, we can go one step further and start using Continuous Deployment.

Now we work in a continuous delivery environment but we still need to push a button to deploy things into our production environments; not a big deal but, still, we are not moving as fast as we want. Deploying to production can be a risky and costly exercise that sometimes requires putting all development on hold. And, we are still dependent on a deployment cycle: the more commits we fit into a release, the riskier it is and the more possibilities we have of introducing an error.

Continuous deployment tries to solve all these drawbacks by automatically shipping every change pushed to the main repository to production. We can obtain benefits like:

  • No one is required to drop their work to make a deployment because everything is automated.
  • Releases become smaller and easier to understand.
  • The feedback loop with our customers is faster: new features and improvements go straight to production when they are ready.

Steps we need to take

We are already applying continuous integration, which means we have plenty of tests, they cover our functionality and they guarantee that we are not introducing breaking changes but, are they good enough?

One of the things we need to keep in mind is that the quality of our test suite will determine the level of risk of our releases, and our team will need to make automated testing a priority during development. This was important before but now it takes on a new level of importance. This means implementing tests for every new feature, as well as adding tests for any regressions discovered after release.

Another thing we need is that fixing a broken build for the main branch should also be first on the list. Why? Because, if we do not do that, changes are going to accumulate and we are going to end up killing the benefits of our new continuous deployment environment.

We need to start using some kind of coverage tool. We can do a rough estimation of our coverage but this does not work; just try it. If we do the rough estimation and after that use a coverage tool, we will be surprised. It is said that a good goal is to aim for 80% coverage but, there is a big but: coverage must be meaningful. It is not enough to write tests that go through every line of code; the tests need to challenge the code. The tests need to cover business cases, edge conditions and any possible problem we can think of. Code reviews help with this, in addition to helping to transfer knowledge. Our test suite is now the keeper of our production environment; we want to have the best and strongest keeper we can achieve.
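
As an example, a minimal sketch of how a coverage tool could be wired into a Maven build, assuming JaCoCo (the version is just an example):

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.7</version>
    <executions>
        <!-- Instrument the code while the tests run -->
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <!-- Generate the coverage report during the verify phase -->
        <execution>
            <id>report</id>
            <phase>verify</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>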

Once our code has been deployed to production, we need to monitor the situation; we need to check that everything is working as expected and that we are serving our customers appropriately. And this monitoring needs to be real-time monitoring. There are some tools out there that can help us with this step. We do not just need to monitor that our system is up and running; we need to monitor CPU, memory consumption, average request time… All the parameters we decide we need to keep our systems healthy and to receive alerts if something is off.

No matter how confident we are in our pre-deployment process and our monitoring capabilities, after every deployment we should have some kind of tests running. These tests are smoke tests and now they earn a lot of importance. We do not need to run anything big, just a few simple tests loading some static pages and a few more that require using all the production services, microservices, 3rd parties, databases… Our smoke tests should be able to guarantee that everything is working properly.
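
A minimal sketch of such a suite, assuming hypothetical example.com endpoints:

#!/bin/sh
# Post-deployment smoke tests: fail fast if anything is off.
set -e

# A static page should answer with a successful status code
curl --fail --silent --output /dev/null https://example.com/

# An endpoint that exercises services, database and 3rd parties
curl --fail --silent --output /dev/null https://example.com/api/status

echo "Smoke tests passed"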

At this point, I am sure some of you are thinking (I was doing it): what happens to my QA team now? Fair question. Now, the QA team should be working closely with the product manager and the development team to define the risks associated with new improvements. They can help to define cases that need to be tested, not just increasing coverage but increasing the quality of the coverage. They can work on automation and, they can respond to and properly define, together with the development team, possible bugs found in production.

If any of you readers is a project manager, delivery manager or similar, I am sure by now you are thinking: what about the release notes? There is no space for release notes unless we want to spam our clients. In a big continuous deployment environment, we can release code hundreds or thousands of times per day. The advice here is to focus on key announcements for features or enhancements. If you fix a bug that is particularly affecting one of your customers, just inform that customer. For everything else, use one of the infinite number of management tools out there to handle changes, like JIRA or Trello.

It can be a hard path to move from continuous delivery to continuous deployment. Whether the effort is worth it or not is a decision that companies and teams need to take. The only advice here is to start small and build up your continuous deployment knowledge and experience. If you have a new project, try to apply everything we have mentioned here; you can even start building your production infrastructure before you write any code and push your changes after that. Once you have managed to use continuous deployment in one project, try to apply all the lessons learned to the rest of your projects, the existing ones or even the legacy ones if it is worth it.


CD: Continuous Delivery

Nowadays, our development teams use Agile methodologies, which means we have accelerated our processes, trying to deliver small chunks of functionality to receive early and real feedback from users and continue iterating on our ideas.

Now, we are an organization that follows and uses Continuous Integration practices but, we are still missing something. We are not able to receive this feedback if it takes ages between our developers finishing their tasks and the code being deployed into production where our users can use it. In addition, when a lot of changes or features are delivered at the same time, it becomes more difficult to debug and solve possible errors. For a long time, the deployment process has been seen as a risky process that requires a lot of preparation but, this needs to change if we want to be truly Agile.

Here is where Continuous Delivery (CD) comes in. Continuous Delivery is a practice that tries to make tracking and deploying software trivial. The goal is to ship changes to our users early and often, multiple times a day if possible, to help us minimize the risk of releasing and to give our developers the opportunity to get feedback as soon as possible.

As I have said before, we should already have a Continuous Integration environment to ensure that all changes pushed to the main repository are tested and ready to be deployed. Can it be done without a CI environment? Yes, probably but, more probably, we are going to create a machine that is just going to push our bugs to production faster, increasing our risk.

Steps we need to take

Create a continuous delivery pipeline

The continuous delivery pipeline is the list of steps that happen every time our code changes until it finds its way to production. It includes building and testing the application as part of the CI process and extends it with the ability to deploy to, and test, staging and production environments.

With this we will achieve two things:

  • Our code will be always ready to be deployed to production.
  • Releasing changes will be as simple as clicking a button.

Create a staging environment

All of us, generally, have written code at some point in our lives; some of us still do it, some of us have evolved our careers but, something I am sure all of us remember are those cases of “it worked on my machine…”. Configurations, differences in environments, networks… There are thousands of things that can be different and can go wrong and, probably, they will. No matter how many tests we run locally or how many precautions we take, our local conditions are not going to be the same as our production conditions. For this reason, we need an intermediate environment: the “staging” environment.

The “staging” environment does not need to support the same scale as our production environment but it should be as close as possible to the production configuration to ensure that the software is tested in similar conditions.

The “staging” environment should allow us to see things before they happen. If something is going to break, it should break in this environment. Using this environment, our release workflow should be similar to:

  1. Developers build and test locally a feature.
  2. Developers push their changes to the main repository where CI tests run automatically against their commits.
  3. If the build is green, the changes are released to the staging environment.
  4. Acceptance tests are run against the staging environment to make sure nothing is broken.
  5. Changes are now ready to be deployed to production.

An additional advantage is that it allows our QA team and product owners to verify the software works as intended before releasing to our users, and without requiring a special deployment or access to a local developer machine.

Automate our deployment

Or, in other words, we need to create a “green button” that, once it has been pressed, without any other human intervention, is going to deploy our code to staging or production.

We can start by building some scripts that we run from our development machines and, after that, we can add them to any CI platform.

There are many ways to deploy software, but there are common rules that maybe we can use as guidance (a small sketch follows the list):

  • Version the deployment scripts with our code. That way we will be able to audit changes to our deployment configuration easily if necessary.
  • Do not store passwords in our script. Instead, use environment variables that can be set before launching the deployment script.
  • Use SSH keys when possible to access the deployment servers. They will allow us to connect to our servers without providing a password and will resist a brute force attack.
  • Make sure that any build tools involved in the pipeline do not prompt for user input. Use a non-interactive mode or provide an option to automatically assume yes when installing dependencies.
  • Test it, test it and test it again. Make sure everything deploys as expected, that nothing is missing no matter what kind of changes you are doing.
  • Maybe, it is a good moment to write some smoke tests, if you do not have them, to check that your machines are up and running.
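
A minimal sketch of a deployment script following these rules, assuming hypothetical server names and a Maven-built artifact:

#!/bin/sh
# Versioned with the code; no passwords stored; SSH key access.
set -e

# Credentials and targets come from the environment, not the script
: "${DEPLOY_USER:?DEPLOY_USER must be set}"
: "${DEPLOY_HOST:?DEPLOY_HOST must be set}"

# Build without prompting for user input
mvn --batch-mode clean package

# Ship the artifact and restart the service over SSH
scp target/app.jar "$DEPLOY_USER@$DEPLOY_HOST:/opt/app/app.jar"
ssh "$DEPLOY_USER@$DEPLOY_HOST" "sudo systemctl restart app"

Everything specific here (app.jar, /opt/app, the systemd unit) is just an illustration; the point is the shape of the script.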

Include data structure changes

Let’s face it, when our application changes, the code is not the only thing changing; the data structures change too. And, we are not going to have a CD environment if these changes are not added to our automated deployments.

First, create backups. No matter how good we are or how little the change is, something is going to fail at some point and we want to be able to restore the previous state of the application.

Second, there are multiple tools that can help us manage data structure changes as code. Some frameworks bring their own to the table but, if not, we can find tools that fit our technologies and ideas. Just learn them and use them.
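
As one possible sketch, assuming Flyway as the migration tool and its Maven plugin configured in the pom.xml: migrations are versioned SQL files living with the code (for example, src/main/resources/db/migration/V1__create_users.sql) and they run as one more automated step of the deployment:

# Apply any pending migrations; credentials come from the environment
mvn -Dflyway.url="$DB_URL" -Dflyway.user="$DB_USER" -Dflyway.password="$DB_PASSWORD" flyway:migrate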

All together and the last detail, the CI server

Arriving here, we have set up a CI environment with a CI server and we have a “button” to run our deployments so, why not put everything together?

To do this, we just need to add a manual step in our pipeline to release (press the button) the code to our different environments.

Now, every time our code is merged and ready for production, following business needs, we just need to push the button and release. This gives us great control over our deployments. The only precaution we need to take is not to leave too many commits accumulated.


CI: Continuous Integration

Continuous integration (CI) is a practice where team members integrate their code early and often into the main branch or code repository. The objective is to reduce the risk, and sometimes the pain, generated when we wait till the end of the sprint or project to do it.

One of the biggest benefits of CI practices is that they allow us to identify and address possible conflicts as soon as possible, with the obvious benefit of saving time during our development. In addition, it reduces the amount of time spent in regression testing and fixing bugs because it encourages us to have a good set of tests. Plus, it gives a better understanding of the features we are developing and of the codebase, due to the continuous integration of features into our codebase.

What do we need?

Tests, tests, tests… Automatic tests

To get the full benefits of CI, we will need to automate our tests to be able to run them for every change made to the repository. And when I say repository, I mean every branch and not just the main branch. Every branch should run the tests and it should not be merged till they are green, all of them. In this way, we will be able to capture issues early and minimise disruption to our team.
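
A minimal sketch of what this looks like with Travis CI, assuming a Maven project (by default, Travis runs this for every push on every branch):

# .travis.yml
language: java
jdk:
  - openjdk11
script:
  - mvn --batch-mode clean verify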

Types of tests

There are many types of tests that can be implemented. We can start small and progressively grow our coverage. The more meaningful tests we have, the better but, we are running a project; we should find a balance between releasing features and increasing our coverage.

How many should I implement?

To decide about that, we just need to remember two things. The first one is that we want meaningful tests; we should not care about the number of tests, we should care about how useful they are. We should write enough tests to be confident that, if we introduce a bug (technical or business), we are going to detect it. And second, we should take a look at “The Testing Pyramid”; here you can find a link to an article by Martin Fowler. Basically, it explains, from a cost-effectiveness point of view, the amount and type of tests we should write.

Running your tests automatically

One of the things we have discussed we need is to run our tests on every change that gets pushed. To do so, we will need a service that can monitor our repository and listen to new pushes to the codebase. There are multiple solutions, both on-premise and in the Cloud.

There are a few considerations we need to think about when we are trying to evaluate a solution, like: platform, resources, access to our repositories, … Some examples are Travis CI, Bamboo or Jenkins.

Immersion in CI

This is not just a technical change; we need to keep in mind, when we are trying to adopt CI, that it is a cultural change too.

We need to start integrating code more often, creating shorter stories or breaking them into short deliverables; we need to keep the build always green; we need to add tests to every story; we can even use refactoring tasks to add tests and increase our code coverage; we should write tests when we fix bugs, and so on.

One group of the team that is going to be affected directly by this change is our QA group. They no longer need to manually test trivial capabilities of our application and they can now dedicate more time to providing tools to support developers, as well as to helping them adopt the right testing strategies. Our QA engineers will be able to focus on facilitating testing with better tooling and datasets, as well as helping developers grow in their ability to write better code. They will still need to test some complex things manually but it will not be their main task anymore.

Quick summary

Just a quick summary of the roadmap to adopt CI; we can list the following points:

  1. Start writing tests for the critical parts of your system.
  2. Get a CI system to run our tests after every push.
  3. Pay attention to the culture change. Help our team to understand it and to achieve it.
  4. Keep the build green.
  5. Write tests as part of every story, every bug and every refactor.
  6. Keep doing 3, 4 and 5.

At the beginning, cultural changes are scary and they feel impossible but the rewards sometimes deserve the effort. A new project, if we have one, is maybe a good option to start changing our minds and taking a CI approach in the development life cycle. If we start with an existing project, start slow, step by step, but always moving forward. And, we should always remember that this is not just a technological change, it is a cultural change too: explain, share and be patient.


CI, CD and CD

When we talk about modern development practices, we often hear some acronyms, among which we can find CI and CD, referring to the way we build and release software. CI is pretty straightforward and stands for continuous integration. But CD can mean either continuous delivery or continuous deployment. All these practices have things in common but they also have some significant differences. We are going to explain these similarities and differences.

Continuous integration

In environments where continuous integration is used, developers merge their changes into the main branch as often as they can. These changes are validated by creating a build and running automated tests against it. Doing this, we avoid the painful releases of the old times, when everything was merged at the last minute.

The continuous integration practice puts a big emphasis on automated testing to keep the build healthy each time commits are merged into the main branch, warning quickly about possible problems.

Continuous delivery

Continuous delivery is the next step towards the release of your changes. This practice makes sure you can release to your customers as often and as quickly as you want. This means that, on top of having automated your testing, you have also automated your release process and you can deploy your application at any point in time by clicking a button.

With continuous delivery, you can decide to release daily, weekly, fortnightly, or whatever suits your business requirements. However, if you truly want to get the benefits of continuous delivery, you should deploy to production as soon as possible to make sure that you release small batches, that are easy to troubleshoot in case of problems.

Continuous deployment

But, we can go another step further, and this step is continuous deployment. With this practice, every change that passes all the stages of your production pipeline is released to your customers. There is no human intervention (no clicking a button to deploy) and only a failing test will prevent a new change from being deployed to production.

Continuous deployment is an excellent way to accelerate the feedback loop with your customers and take pressure off the team, as there is no ‘release day’ anymore. Developers can focus on building software and they see their work go live minutes after they have finished working on it. Basically, when a developer merges a commit into the main branch, the branch is built, tested and, if everything goes well, deployed to the production environment.

Can I use all of them together?

Of course you can; as I have said, each one of them is just a step closer to the production environment. You can set up your continuous integration environment; after that, once the team is comfortable, you can add continuous delivery and, finally, continuous deployment can be added to the picture.

(Image: example of a CI, CD and CD pipeline.)

Is it worth it?

Continuous integration

What it needs from you:

  • Your team will need to write automated tests for each new feature, improvement or bug fix.
  • You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed.
  • Developers need to merge their changes as often as possible, at least once a day.

What it gives to you:

  • Fewer bugs get shipped to production, as regressions are captured early by the automated tests.
  • Building the release is easy as all integration issues have been solved early.
  • Less context switching as developers are alerted as soon as they break the build and can work on fixing it before they move to another task.
  • Testing costs are reduced drastically: your CI server can run hundreds of tests in a matter of seconds.
  • Your QA team spends less time testing and can focus on significant improvements to the quality culture.

Continuous delivery

What it needs from you:

  • You need a strong foundation in continuous integration and your test suite needs to cover enough of your codebase.
  • Deployments need to be automated. The trigger is still manual but once a deployment is started there should not be a need for human intervention.
  • Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.

What it gives to you:

  • The complexity of deploying software has been taken away. Your team does not have to spend days preparing for a release anymore.
  • You can release more often, thus accelerating the feedback loop with your customers.
  • There is much less pressure on decisions for small changes, hence encouraging iterating faster.

Continuous deployment

What it needs from you:

  • Your testing culture needs to be at its best. The quality of your test suite will determine the quality of your releases.
  • Your documentation process will need to keep up with the pace of deployments.
  • Feature flags become an inherent part of the process of releasing significant changes to make sure you can coordinate with other departments (Support, Marketing, PR…).

What it gives to you:

  • You can develop faster as there is no need to pause development for releases. Deployment pipelines are triggered automatically for every change.
  • Releases are less risky and easier to fix in case of problems, as you deploy small batches of changes.
  • Customers see a continuous stream of improvements, and quality increases every day, instead of every month, quarter or year.

As said before, you can adopt continuous integration, continuous delivery and continuous deployment. How you do it depends on your needs and your situation. If you are just starting a project and you do not have customers yet, you can go for it, implement the three of them and just iterate on them as you iterate on your project and your needs grow. If you already have a project in production, you can go step by step, adopting the practices first in your staging environments.
