Running X Apps on Mac with Docker

Today, just a quick article about how to run X Apps on Mac using Docker. Why? While it cannot be considered a sandbox, it is a quick middle ground between running something directly on our system and running it inside a virtual machine. Again, why? Let’s say we want to open a PDF: we have checked it with VirusTotal and, while it looks like an innocuous file, it has some YARA rule warnings and we do not want to take any risk. We can run a PDF viewer inside a container.

To do this we are going to need a few things:

  • Docker installed on our Mac (e.g., Docker Desktop).
  • XQuartz, the X server implementation for macOS.

That is all the pre-requirements we need. Once we have them installed, we can proceed to create our container.

In this case, I am going to use an Ubuntu container and I am going to install the Evince Document Viewer, but the steps should work for any container flavour you like, any document viewer you prefer and, in fact, any X App we want.

First, we need to create our Dockerfile:

# Use a Linux base image
FROM ubuntu:latest

# Install necessary packages
RUN apt-get update && apt-get install -y \
    evince \
    x11-apps

# Set the display environment variable
ENV DISPLAY=:0

# Create a non-root user
RUN groupadd -g 1000 myuser && useradd -u 1000 -g myuser -s /bin/bash -m myuser

# Copy the entrypoint script
COPY entrypoint.sh /

# Set the entrypoint script as executable
RUN chmod +x /entrypoint.sh

# Switch to the non-root user
USER myuser

# Set the entrypoint
ENTRYPOINT ["/entrypoint.sh"]

Second, we need the entry point file to execute our app and pass the argument, in this case, the file we want to open:

#!/bin/bash
evince "$1"

Now, we can build our container by executing the proper docker command in the folder that contains both files:

docker build -t pdf-viewer .

Now, we just need to run it:

  1. Ensure Docker is running on our system.
  2. Start the XQuartz application.
  3. Execute xhost +localhost; this command grants the Docker container permission to access the X server display.
  4. Run our container with the following command:
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /Users/username/pdf-location:/data \
    --entrypoint /entrypoint.sh pdf-viewer /data/file.pdf

Let’s elaborate on the command a bit:

  • docker run: This command is used to run a Docker container.
  • --rm: This flag specifies that the container should be automatically removed once it exits. This helps keep the system clean by removing the container after it finishes execution.
  • -it: These options allocate a pseudo-TTY and enable interactive mode, allowing you to interact with the container.
  • -e DISPLAY=$DISPLAY: This option sets the DISPLAY environment variable in the container to the value of the DISPLAY variable on your local machine. It enables the container to connect to the X server display.
  • -v /tmp/.X11-unix:/tmp/.X11-unix: This option mounts the X server socket directory (/tmp/.X11-unix) from your local machine to the same path (/tmp/.X11-unix) within the container. It allows the container to access the X server for display forwarding.
  • -v /Users/username/pdf-location:/data: This option mounts the /Users/username/pdf-location directory on your local machine to the /data directory within the container. It provides access to the PDF file located at /Users/username/pdf-location inside the container.
  • --entrypoint /entrypoint.sh: This option sets the entry point for the container to /entrypoint.sh, which is a script that will be executed when the container starts.
  • pdf-viewer: This is the name of the Docker image you want to run the container from.
  • /data/file.pdf: This specifies the path to the PDF file inside the container that you want to open with the PDF viewer.

With this, we are already done. Executing this command, the PDF Document Viewer will open with our PDF. Happy reading!

There is one extra, optional step we can take: creating an alias to facilitate the execution:

alias run-pdf-viewer='function _pdf_viewer() { xhost +localhost >/dev/null 2>&1; docker run -it --rm --env="DISPLAY=host.docker.internal:0" -v "$1":/data --entrypoint /entrypoint.sh pdf-viewer "/data/$2"; }; _pdf_viewer'

The above alias allows us to open a PDF file inside the container without remembering the more complex command; we only need to ensure Docker is running. As an example, we can open a file with:

run-pdf-viewer ~/Downloads example.pdf


Diagrams as Code

I imagine that everyone reading this blog should be, by now, familiar with the term Infrastructure as Code (IaC). If not because we have written a few articles on this blog, probably because it is a widely used term nowadays.

While I assume familiarity with the IaC term, I have not heard a lot of people talking about a similar concept called Diagram as Code (DaC), focused on diagrams. I am someone who, when arriving at a new environment, finds diagrams very useful to get a general view of how a system works, or when giving explanations to a new joiner. But, at the same time, I must admit that, sometimes, I am not diligent enough to update them or, even worse, I have arrived at projects where either the diagrams are too old to make sense or there are none at all.

The reasons for that can be various: it can be hard for developers to maintain diagrams, lack of time, lack of knowledge of the system, the editable file being in a non-obvious location, only obsolete diagrams being available and so on.

I have been playing lately with the DaC concept and, with a little bit of effort (all new practices require some until they are part of the workflow), it can fill this gap and help developers and, other profiles in general, keep diagrams up to date.

In addition, there are some extra benefits of using these tools, such as easy version control, needing only a text editor to modify the diagram and the ability to generate the diagrams anywhere, even as part of our pipelines.

This article is going to cover some of the tools I have found and played with lately. The goal is to have a brief introduction to the tools, become familiar with their capabilities and restrictions and, try to figure out if they can be introduced into our production environments as a long-term practice. And, why not, compare which tool offers the most eye-catching diagrams; in the end, people are going to pay more attention to those.

Just a quick note before we start: all the tools we are going to see are open-source and freely available. I am not going to explore any paid tool and, I am not involved in any way with the tools we are going to be using.

Graphviz

The first tool, actually a library, we are going to see is Graphviz. I will say upfront that we are not exploring this option in depth because it is too low level for my taste or for the purpose we are trying to achieve. It is a great solution if you want to generate diagrams as part of your applications. In fact, some of the higher-level solutions we are going to explore use this library to generate the diagrams. With that said, they define themselves as:

Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains.

The Graphviz layout programs take descriptions of graphs in a simple text language, and make diagrams in useful formats, such as images and SVG for web pages; PDF or Postscript for inclusion in other documents; or display in an interactive graph browser. Graphviz has many useful features for concrete diagrams, such as options for colors, fonts, tabular node layouts, line styles, hyperlinks, and custom shapes.

https://graphviz.org

The installation process in an Ubuntu machine is quite simple using the package manager:

sudo apt install graphviz
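Even though we are not going to use it directly, to get a taste of the text language it consumes, a minimal sketch (file name and contents invented) could be a file example.dot containing:

digraph G {
  start -> process;
  process -> end;
}

Which can be rendered to a PNG with one of the command line tools it ships:

dot -Tpng example.dot -o example.png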

As I have said before, we are not going to explore this library more deeply. We can check the documentation for some examples built in C and, the use of the command line tools it offers but, for my taste, it is too low-level a solution to use directly when implementing DaC practices.

PlantUML

The next tool is PlantUML. Regardless of what the name seems to imply, the tool is able to generate multiple types of diagrams, all of them listed on their web page. Some examples are sequence diagrams, use case diagrams, class diagrams, state diagrams and Gantt charts.

The diagrams are defined using a simple and intuitive language and, images can be generated in PNG, SVG or LaTeX format.

They provide an online tool that can be used for evaluation and testing purposes but, in this case, we are going to install it locally, especially to test how hard that is and how well it integrates into our CI/CD pipelines as part of our evaluation. By the way, this is one of the tools that uses Graphviz behind the scenes.

There is no installation process; the tool just needs the corresponding JAR file downloaded from their download page.

Now, let’s use it. We are going to generate a State Diagram, just one of the examples we can find on the tool’s page. The code used to generate the example diagram is the one that follows and is going to be stored in a TXT file:

@startuml
scale 600 width

[*] -> State1
State1 --> State2 : Succeeded
State1 --> [*] : Aborted
State2 --> State3 : Succeeded
State2 --> [*] : Aborted
state State3 {
  state "Accumulate Enough Data\nLong State Name" as long1
  long1 : Just a test
  [*] --> long1
  long1 --> long1 : New Data
  long1 --> ProcessData : Enough Data
}
State3 --> State3 : Failed
State3 --> [*] : Succeeded / Save Result
State3 --> [*] : Aborted

@enduml

Now, let’s generate the diagram:

java -jar ~/tools/plantuml.jar plantuml-state.txt
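By default, a PNG file is produced; other formats can be requested with a flag, for example, SVG:

java -jar ~/tools/plantuml.jar -tsvg plantuml-state.txt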

The result is going to be something like:

State Diagram generated with PlantUML

As we can see, the result is pretty good, especially if we consider that it took us around five minutes to write the code without knowing the syntax, and we have not had to deal with positioning the elements on a drag and drop screen.

Taking a look at the examples, the diagrams are not particularly beautiful but, the ease of use of the tool and the variety of diagrams supported make this tool a good candidate for further exploration.

WebSequenceDiagrams

WebSequenceDiagrams is just a web page that allows us to create Sequence Diagrams in a quick and simple way. It has some advantages, such as offering multiple colours and not needing to install anything; having only one purpose, it covers it quite well in a simple way.

We are not going to explore this option further because it does not cover our needs: we want more variety of diagrams and, it does not seem to integrate into our daily routines and CI/CD pipelines.

Asciidoctor Diagram

I assume everyone is more or less aware of the existence of the Asciidoctor project. The project is a fast, open-source text processor and publishing toolchain for converting AsciiDoc content to HTML5, DocBook, PDF, and other formats.

Asciidoctor Diagram is a set of Asciidoctor extensions that enable you to add diagrams, which you describe using plain text, to your AsciiDoc document.

The installation of the extension is quite simple; it is just a basic RubyGem that can be installed in the standard way.

gem install asciidoctor-diagram

There are other usage options but, we are going to do an example using the terminal and the PlantUML syntax we have already seen.

[plantuml, Asciidoctor-classes, png]     
....
class BlockProcessor
class DiagramBlock
class DitaaBlock
class PlantUmlBlock

BlockProcessor <|-- DiagramBlock
DiagramBlock <|-- DitaaBlock
DiagramBlock <|-- PlantUmlBlock
....
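Assuming the snippet is saved in an AsciiDoc file such as diagrams.adoc (name invented), it can be rendered from the terminal by requiring the extension:

asciidoctor -r asciidoctor-diagram diagrams.adoc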

The result is something like:

Generated with Asciidoctor Diagrams extension

One of the advantages of this tool is that it supports multiple diagram types. As we can see, we have used the PlantUML syntax but, there are many more available. Check the documentation.

Another advantage is that it is based on Asciidoctor, which is a very well-known tool and, in addition to the image, it generates an HTML page with extra content if desired. It seems worth further exploration.

Structurizr

I was going to skip this one because, despite having a free option, it requires a subscription for certain features and, besides, it does not seem as easy to integrate and use as the other tools we are seeing.

Despite all of this, I thought it was worth mentioning due to the demo page they offer where, with just some clicking, you can see the same diagram expressed in different syntaxes such as PlantUML or WebSequenceDiagrams.

Diagrams

Diagrams is a tool that seems to have been implemented explicitly to follow the Diagram as Code practice, focused on infrastructure. It allows you to write diagrams using Python and, in addition to supporting the main cloud providers with nice images, it allows you to fetch images that are not available by default and use them in your diagrams.

Installation can be done using any of the common mechanisms available in Python; in our case, pip3.

pip3 install diagrams

This is another one of the tools that, behind the scenes, uses Graphviz to do its job.

Let’s create our diagram now:

from diagrams import Cluster, Diagram
from diagrams.onprem.analytics import Spark
from diagrams.onprem.compute import Server
from diagrams.onprem.database import PostgreSQL
from diagrams.onprem.inmemory import Redis
from diagrams.onprem.aggregator import Fluentd
from diagrams.onprem.monitoring import Grafana, Prometheus
from diagrams.onprem.network import Nginx
from diagrams.onprem.queue import Kafka

with Diagram("Advanced Web Service with On-Premise", show=False):
    ingress = Nginx("ingress")

    metrics = Prometheus("metric")
    metrics << Grafana("monitoring")

    with Cluster("Service Cluster"):
        grpcsvc = [
            Server("grpc1"),
            Server("grpc2"),
            Server("grpc3")]

    with Cluster("Sessions HA"):
        master = Redis("session")
        master - Redis("replica") << metrics
        grpcsvc >> master

    with Cluster("Database HA"):
        master = PostgreSQL("users")
        master - PostgreSQL("slave") << metrics
        grpcsvc >> master

    aggregator = Fluentd("logging")
    aggregator >> Kafka("stream") >> Spark("analytics")

    ingress >> grpcsvc >> aggregator

And, let’s generate the diagram:

python3 diagrams-web-service.py

With that, the result is something like:

Diagram generated with Diagrams

As we can see, it is easy to understand and, the best part, it is quite eye-catching. And, everything looks in place without the need to mess with a drag and drop tool to position our elements.

Conclusion

As always, we need to evaluate which tool best fits our use case but, after seeing a few of them, my conclusions are:

  • If I need to generate infrastructure diagrams I will go with the Diagrams tool. It seems very easy to use, being based on Python and, the results are very visually appealing.
  • For any other type of diagram, I will be inclined to use PlantUML. It seems to support a great deal of diagram types and, despite the results not being the most beautiful ones, they can be clear and useful enough.

Asciidoctor Diagram seems a good option if your team or organisation is already using Asciidoctor, or if we want something more than just a generated diagram.


Ubuntu Multipass

The world of IT, software development, operations and similar fields tends to be full of people who like to try new trends or tools, related directly to their day-to-day tasks or just out of curiosity. One quick way of doing this is to install all the tools and libraries on our machines and, after we have finished, try to clean everything or, at least, revert all the mistakes or not-very-good practices we applied while learning. Despite this being a valid way, over time, our machines get polluted with lost dependencies, configuration files or libraries.

To avoid that, it seems better to try all the new stuff in an isolated environment and, if we like it and we decide to use it in our daily environment, to install it from scratch again, probably correcting some initial mistakes or avoiding some bad practices.

There are plenty of solutions out there to achieve this and to have an easy-to-set-up, throw-away environment, most of them based on virtual machines or some kind of virtualisation: more traditional ones such as VirtualBox or VMWare or, some based on management solutions for virtual machines such as Vagrant.

Today, I just want to bring to the table a different one I have been playing with lately and, did not know a few months ago. I do not know how popular or widespread it is but, I think that knowing different options is always a plus. The tool is called Multipass. And, as Ubuntu describes it, it is “Ubuntu VMs on demand for any workstation. Multipass can launch and run virtual machines and configure them with cloud-init like a public cloud. Prototype your cloud launches locally for free.”

I have found it very easy to use and, for the purpose of having throw-away isolated environments lying around, quite useful.

We are going to see the install process and, the basic execution of a few commands related to an instance.

Installation

Before we start applying the steps to install Multipass on our machines, there are a couple of requirements we need to consider. They are related to the platform that is going to be used to virtualise the images. On newer operating systems, no extra requirements are needed but, some older ones have them. Check the official documentation.

For Linux:

sudo snap install multipass

For Windows:

Just download the installer from the web page and proceed with the suggested steps.

For MacOS:

MacOS offers us two different alternatives: one based on an installation file, similar to Windows and, one based on a package manager solution, Homebrew. If installing using the installation file, just execute it and follow the suggested steps and, if installing using Homebrew, just execute the appropriate command:

brew install --cask multipass

Once the installation is done, any other command executed should be the same in all three environments.

Just as a side note, there is the possibility of using VirtualBox as the virtualisation platform if we desire but, this is completely optional and depends only on our preferences. I am not using it. The command to select it can be found below but, I encourage you to check the official documentation on this specific point.
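# documented for recent Multipass versions; verify it in the official docs
sudo multipass set local.driver=virtualbox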

Now that we have finished the installation, let’s create our first instance.

Creating and using an instance

Let’s check what images are available:
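multipass find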

‘find’ execution – List of available images

We can see there are multiple images available but, in this case, we are going to create an instance using the latest version (20.10). By default, if no image is specified, multipass uses the latest LTS version.

It is worth mentioning that, by default, multipass assigns some values to our instance in terms of CPUs, disk, memory and, others.

default values – multipass launch -h
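In our case, the launch command looks something like this (instance name invented):

multipass launch 20.10 --name test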
‘launch’ execution – Creates a new instance

As we can see it is quite fast and, if we create a second instance, it will be even faster because the image is already downloaded.

We can execute a command inside the instance:
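For example, with our test instance:

multipass exec test -- ls -la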

‘exec’ execution – Executes a command inside the instance

Or, we can just login into the instance:
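multipass shell test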

‘shell’ execution – Login into the instance

From now on, we can just work with this instance as we please.

There are a few other commands we can use to manage our instances, such as listing existing instances or getting information about running ones. All these commands are available in the help menu.
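For example, with our test instance:

multipass list          # list existing instances and their state
multipass info test     # detailed information about an instance
multipass stop test     # stop a running instance
multipass start test    # start it again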

‘help’ execution – List available commands

Mounting a shared folder

Multipass offers the possibility of mounting folders to share information between the host and the instance using the mount command.
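For example, to share a local folder with the instance (paths invented):

multipass mount ~/shared test:/mnt/shared
multipass umount test    # unmount it when finished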

Sharing a folder between host and instance

Cleaning (Deleting an instance)

Finally, as we do not want to leave throw-away instances lying around, after we have finished working, we can remove them.
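For example:

multipass delete test
multipass purge    # permanently removes all deleted instances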

‘delete’ execution – Removes an instance

This is just a brief introduction to multipass. More complex scenarios can be found on the official documentation.


Git branch on terminal prompt

There are a lot of GUI tools to interact with git these days but, a lot of us still use the terminal for that. One thing I have found very useful is, once you are in a folder with a git repository, being able to see the branch in use on the terminal prompt. We can achieve this with a few steps.

Step 1

Let’s check the current definition of our prompt. Mine looks something like:

echo $PS1
\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$

Step 2

Open the file ‘~/.bashrc’ with our favourite editor. In my case, ‘Vim’.

Step 3

Let’s create a small function to figure out the branch we are on once we are in a folder with a git repository.

git_branch() {
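  # Print the current branch wrapped in parentheses; print nothing outside a git repository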
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}

Step 4

Let’s redefine the variable ‘PS1’ to include the function we have just defined. In addition, let’s add some colour to the branch name to be able to see it easily. Taking my initial values and adding the function call, the result should be something like:

export PS1="\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \[\033[00;32m\]\$(git_branch)\[\033[00m\]\$ "
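For the change to take effect in the current terminal, reload the file:

source ~/.bashrc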

And, from now on, the prompt will show the current branch in parentheses whenever we are inside a git repository.


Maven archetypes

We live in a microservices world. Lately, it does not matter where you go, big, medium or small companies or start-ups, everyone is trying to implement microservices or migrating to them.

Maybe not initially, but when companies achieve a certain level of maturity, they start having a set of common practices, libraries or dependencies they apply or use in all the micro-services they build. Let’s say, for example, authentication or authorization libraries, metrics libraries, … or any other component they use.

When this level of maturity is achieved, usually, to start a project we basically take the “How-To” article in our wiki and start copying and pasting common code and configurations and creating a concrete structure in the new project. After that, it is all set to start implementing business logic.

This copy-and-paste process is not something that usually takes a long time but, it is a bit tedious and prone to human error. To make our lives easier and to try to avoid unnecessary mistakes we can use maven archetypes.

Taken from the maven website, an archetype is:

In short, Archetype is a Maven project templating toolkit. An archetype is defined as an original pattern or model from which all other things of the same kind are made. The name fits as we are trying to provide a system that provides a consistent means of generating Maven projects. Archetype will help authors create Maven project templates for users, and provides users with the means to generate parameterized versions of those project templates.

In the next two sections, we are going to learn how to build some basic archetypes and how to build a more complex one.

Creating a basic archetype

Following the maven documentation page we can see there are a few ways to create our archetype:

From scratch

I am not going to go into details here because the maven documentation is good enough and because it is the method we are going to use in the “Creating a complex archetype” section below. You just need to follow the four steps the documentation is showing:

  1. Create a new project and pom.xml for the archetype artefact.
  2. Create the archetype descriptor.
  3. Create the prototype files and the prototype pom.xml.
  4. Install the archetype and run the archetype plugin.

Generating our archetype

This is a very simple one, also described in the maven documentation. Basically, you use maven to generate the archetype structure for you:

mvn archetype:generate \
    -DgroupId=[your project's group id] \
    -DartifactId=[your project's artifact id] \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-archetype

As simple as that. After executing the command, we can add our personalisations to the project and proceed to install it as seen before.
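The install step mentioned is the standard Maven one, executed from the archetype’s root folder:

mvn install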

From an existing project

This option allows us to create a project and, when we are happy with how it looks, to transform it into an archetype. Basically, we need to follow the next steps:

  1. Build the project layout from scratch and add files as needed.
  2. Run the Maven archetype plugin on an existing project and configure from there.
mvn archetype:create-from-project

This will generate an “archetype” folder inside the “target” folder:

target/generated-sources/archetype

We just need to copy this folder structure to the desired location and we will have our archetype ready to go. It needs to be installed as usual to be able to use it.
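For example, something like:

cd target/generated-sources/archetype
mvn install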

Using our archetype

Once we have installed our archetype, we can start using it:

mvn archetype:generate \
    -DarchetypeGroupId=dev.binarycoders \
    -DarchetypeArtifactId=simple-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DgroupId=org.example \
    -DartifactId=project1

This will create a new project using the archetype. The information we need to modify in the previous command is:

  • archetypeGroupId: It is the archetype group id we have defined when we created the archetype.
  • archetypeArtifactId: It is basically the name of our archetype.
  • archetypeVersion: It is the version of the archetype we want to use in case the archetype has been evolving over time and we have different versions.
  • groupId: It is the group id our new project is going to have.
  • artifactId: It is the name of our new project.

Deleting our archetype

Right now, after installing it, our archetype is only available in our local repository. This fact allows us to delete it in a very simple way: we just need to take a look at the archetype catalogue in our repository and manually remove the archetype. We can find this file at:

~/.m2/repository/archetype-catalog.xml

Creating a complex archetype

For most cases, the ways to create archetypes we have already reviewed should be enough but, not for all of them. What happens if we need to define some modules whose names are chosen when creating the project? Or classes? Or some other customisations?

Luckily, Maven gives us some level of flexibility allowing us to define some variables and use some concrete patterns to define folders and files in our archetypes in a way they will be replaced when the projects using the archetype are created.

As a general rule, we will be using two kinds of notation for our dynamic elements:

  • Defined in files: ${varName}
  • Defined in file system: __varName__ (two underscores)

This will help us to achieve our goals.

As an example, I am going to create a small complex archetype to be able to see this in action. The projects created with the archetype are going to have:

  • A parent project with <artifactId> name.
  • Two modules called <artifactId>-one and <artifactId>-two.
  • A main class called <classPrefix>OneApp and <classPrefix>TwoApp respectively.
  • The classes will be located in the packages <package>.one and <package>.two respectively.
  • The module One will have a properties class stored in the resources folder.

The code of the archetype can be found at the GitHub repository.

The first file we can check is archetype-metadata.xml, located in META-INF/maven.

We can see here the definition of the variables classPrefix and groupId, the latter with a default value assigned.

<requiredProperties>
    <requiredProperty key="classPrefix" />
    <requiredProperty key="groupId">
        <defaultValue>dev.binarycoders</defaultValue>
    </requiredProperty>
</requiredProperties>

After that, we can see the definition of the project structure we want to achieve. In this case, we have the fileSets node with the files of the parent project and, after that, the definition of the modules we want to include. Here we should pay special attention to the way the module attributes are defined:

<module id="${rootArtifactId}-one"
         dir="__rootArtifactId__-one"
         name="${rootArtifactId}-one">

As we can see, they use the notation described before: the “${}” notation for variables in files and the “__” (two underscores) notation for file system elements. The rest of the file is pretty simple.

If we explore the folder structure, we can see a few elements defined with this two-underscores notation, like the module names and the class names. These are dynamic elements that will take their names from the variables defined when the project is created.

We can define different filesets for the files we want to be copied to our generated project. For example, we can copy all the .java files we can find inside the path src/main/java:

<fileSet filtered="true" packaged="true" encoding="UTF-8">
    <directory>src/main/java</directory>
        <includes>
            <include>**/*.java</include>
        </includes>
</fileSet>

Finally, if we explore one of the classes, we can see the next content:

#set( $symbol_pound = '#' )
#set( $symbol_dollar = '$' )
#set( $symbol_escape = '\' )
package ${package}.one;

public class ${classPrefix}OneApp {
}

The first three lines are just aliases to be able to use symbols that have a specific meaning as plain literals.

After that, we can see the package definition, which is going to be built with one part dynamically added and one part statically defined. We can see the class name follows the same pattern.

It deserves special attention that, despite defining packages in the classes, we are not replicating this structure in the project layout; Maven will take care of that for us. This is because, when we defined the fileset, we set the attribute packaged to true. If this attribute is set to false, we will be in charge of defining the desired structure.

It is worth mentioning that, because the files in the maven archetype act as Velocity templates, we can introduce some logic and some dynamic content in our files. For example, printing something or not in a given file:

<requiredProperty key="greeting">
    <defaultValue>y</defaultValue>
</requiredProperty>
#if ( ${greeting} == 'y' )
    // Hello, welcome here!
#end

This variable can be set using the command line when we generate our new project:

-Dgreeting=n

Finally, there is one more interesting thing we can do. We can use a post-generation script written in Groovy to execute some actions after the project has been generated. One interesting use is to remove undesired files based on some variables defined when generating the project. This script will be located in the folder src/main/resources/META-INF with the name archetype-post-generate.groovy.

import groovy.io.FileType

def rootDir = new File(request.getOutputDirectory() + "/"
    + request.getArtifactId())
def oneBundle = new File(rootDir, request.getArtifactId()
    + "-one")

def projectPackage = request.getProperties().get("package")

assert new File(oneBundle, "src/main/java/" 
    + projectPackage.split("\\.").join('/')
    + "/toDelete.txt").delete()

With this, every time we use the archetype to create a new project we will obtain the desired results.

We can use our recently created archetype with:

mvn archetype:generate \
    -DarchetypeGroupId=dev.binarycoders \
    -DarchetypeArtifactId=simple-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DgroupId=org.example \
    -DartifactId=project
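Note that, because classPrefix is defined as a required property without a default value, Maven will prompt for it during generation; it can also be provided directly on the command line (value invented):

-DclassPrefix=Sample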

The result is a parent project containing the two modules, with the classes and packages named after the properties we provided.

This is all. I hope it is useful.

See you.


Starting a project

There are multiple ways to learn how to code. Some people do it with some kind of formal education like high school, university, master… Other people through bootcamps or the more modern initiatives we have seen lately. And, finally, there are people who learn by self-studying. No matter which one is your case, at the end of the day, the best way to learn and acquire some coding skills is to code.

As developers, we code (we do other things too, not just code). Usually, if we do it professionally, enterprises have their tools and procedures. That is not the scope of this article. This article is going to focus on small projects we start outside these corporate environments, just for fun, for learning purposes or, because why not? And, I am talking about projects, not just code snippets, small demos trying something we have read in an article or blog, or tests of that crazy idea we have had in mind the last few days.

The purpose of the article is to offer some guidance on possible free tools we can use to work on a project, following more or less a methodology and using tools similar to, if not the same as, the ones you can find in a corporate environment.

The focus of the article is people learning how to code, to allow them to have a bigger picture, people starting a long-term open-source project or, just anyone curious. It is going to focus not on the coding part but on the areas around the project.

Every project, when it starts, needs a way to manage the code and a way to manage the efforts. I am certain all of you agree about the first one but, I can hear from here people questioning the second one. Well, initially, and especially if we are the only developer, we can think it is not necessary but, in the long run, even more if we expect contributions in the future, it is going to be a very useful thing to have. It will keep our focus, it will make us think in advance and do some planning and, it will give us a history of the project and why we took a certain decision at a certain point or why we added a concrete functionality. And, if you are learning how to code, it will give you the bigger picture I was talking about before.

To manage our code we need some kind of distributed version control system for tracking changes. There are a few of them out there like Git, Subversion, Perforce, Team Foundation Version Control or Mercurial. If you stay long enough in the industry, you will see all of them but, in this case, my preference is Git. There are some cloud platforms that offer you an account to use it (GitHub, GitLab, Bitbucket). All of them are similar and, at this level, there is no big difference; I invite you to test all of them but, in this case, I am going to recommend GitHub. I like it, I am used to it, it is hugely extended among the open-source community and, it integrates easily and smoothly with the other tools we are going to see in this article.

To manage our efforts we need some kind of project management software for tracking tasks and their progress. As in the previous case, even more here, there are a lot of them out there. One very simple to use and very extended is Trello. Trello offers customizable boards we can use to track efforts and progress and to plan in advance. In addition, there are a lot of useful plugins to improve and highly customize the boards and the cards (tasks) there. Here, just a quick mention of the ‘Projects’ tab in GitHub that allows you to create some automated Kanban boards; it is interesting to play with but, I have never seen it in a corporate environment, where I have seen Trello multiple times. The first place here goes to JIRA.

Once we start coding, creating pull requests and merging code in our repository, it is nice to have a CI/CD environment in place. There are multiple advantages to this but, even if we are just learning, it will keep our code healthy, making sure that any change made still compiles and passes all our tests. Again, in this category, we can find some cloud platforms and on-premises solutions but, for the article, I have chosen Travis CI (the dot org). It is simple to register, has great integration with GitHub, is powerful enough and well documented.

One thing developers should be worried about is the quality and maintainability of the code they write. And, I am not talking just about whether our code passes all the tests; I am talking about bugs, vulnerabilities, test coverage, code duplication and format (we should be using our IDE auto-format or save actions for the last one). To cover all this list we can find the tool SonarQube and its cloud solution, SonarCloud. This tool will report all the problems found every time a build is done, allowing us to correct them as soon as possible and not let them pile up, only to be found when there is a code audit or similar. Again, it is an easy tool to manage and to integrate with GitHub and Travis CI.

Are these tools the best ones? The most useful ones? Yes, no, maybe. I am a strong believer that there are no perfect tools, there are tools perfect for a job and, this is what sometimes we as developers need to decide: which tool fits the job best. The tools in the article are just examples and, they were perfect for the article.


VirtualBox: Increase space

No one can deny that virtual machines are a very important tool. Maybe, nowadays, with all the container solutions, they are a bit less important but they are still very useful.

When we are using a virtual machine, one of the possible problems we can find at some point is that our hard drive reaches its maximum capacity. Luckily, this is not the end of the world and we can expand our HDs.

Important note: we are going to lose the snapshots we have (but, it is a small price to pay to avoid starting a new machine from scratch).

This quick manual is going to be based on VirtualBox, I guess that for other virtualization tools it must be similar using the appropriate tools.

The first thing we need to do is to stop our virtual machine.

After that, we have some command line tools that are going to make this process “simple”.

The first command we are going to execute is going to clone our HD in “vmdk” format to a new one with “vdi” format:

VBoxManage clonehd <virtual_machine_path>/<hd_name>.vmdk <new_name>.vdi --format vdi

This process will take some time depending on the HD size.

Once the command has finished its execution, we are going to increase the size of the cloned HD. Let’s imagine the initial size of the HD was 80GB and we want to double it:

VBoxManage modifyhd <new_name>.vdi --resize 163840

Again, once the operation is finished, we need to clone the new HD from a “vdi” format to “vmdk” format:

VBoxManage clonehd <new_name>.vdi <hd_name_new>.vmdk --format vmdk

After waiting for the operation to finish, we will have our new HD ready to plug into our virtual machine. This is going to be the next step: go to the VirtualBox user interface and replace the old HD device with the new one.

Now, if we start our virtual machine, we will still see the old size and we will not be able to find the newly added 80GB. This is because we are missing one step. Turn off your virtual machine again if you have turned it on before reading these lines and follow the next step.

We need a tool to edit our HD partitions. In this case, I am going to use a live iso called GParted (wikipedia).

In a similar way as we replaced the old HD with the new one, we are going to load the GParted live iso in the CD-ROM unit.

Now, we should run our virtual machine again but, instead of leaving it to boot as usual, we will press F12 on startup to be able to choose the unit we want to use to boot the virtual machine. CD-ROM will be one of the offered options. After this, and a few options selected during GParted start, GParted will boot. Now we just need to expand the current partition to cover the newly added space.

And, that is all. We can shut down the virtual machine, remove the live iso from the devices attached to the virtual machine and boot it again. Now, we will be able to see the 160GB HD.


Checking certificate dates

Sometimes, when we are working or doing some investigation in our spare time, we need to check the dates a web certificate has, especially the expiration date. Obviously, we can go to our browser, introduce the desired url and, with a few clicks, check the issued and expiration dates.

But, there is a simpler and easier way that, in case we need it, we can script.

echo | openssl s_client -servername www.google.co.uk -connect www.google.co.uk:443 2>/dev/null | openssl x509 -noout -dates

This simple command gives us the information we want.
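And, if we need it often, we can wrap it in a small function (name invented) ready to be scripted:

cert_dates() {
  # $1: hostname, e.g. www.google.co.uk
  echo | openssl s_client -servername "$1" -connect "$1":443 2>/dev/null \
    | openssl x509 -noout -dates
}

cert_dates www.google.co.uk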

I hope it is useful.


Change file date and time

Sometimes, if we need to perform some tests, we need some files to have a modification day and time matching our criteria. If we are on a Unix/Linux based system, we can easily use the command line tool “touch”.

An example command would be:

touch -t 01010001 file.txt

Extracted from “man touch”: the option “-t” changes the access and modification times to the specified time instead of the current time of day. The argument is of the form: [[CC]YY]MMDDhhmm[.SS]
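For example, including the century and the year (illustrative values), the next command sets the timestamp to January 1st 2023 at 12:30, and we can verify the result afterwards:

touch -t 202301011230 file.txt
ls -l file.txt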


Headless browser

A lot of people do not know it but, some browsers have a “headless” option we can use to execute operations from the console or terminal. This is useful for scripts, invocations from our applications or anything else we can think of.

We just need to execute the next instruction:

firefox --headless --screenshot ~/image.png www.google.com
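Chromium-based browsers offer a similar mode; a rough equivalent (the binary name varies between systems) would be the following, which saves a screenshot.png file in the current directory:

google-chrome --headless --disable-gpu --screenshot https://www.google.com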

The result is going to be a screenshot of the given page saved at the provided path, ~/image.png in this case.
