Running X Apps on Mac with Docker

Today, just a quick article about how to run X apps on Mac using Docker. Why? While it cannot be considered a sandbox, it is a quick middle ground between running something directly on our system and running it inside a virtual machine. Again, why? Let’s say we want to open a PDF: we check it with VirusTotal and, while it looks like an innocuous file, it triggers some YARA rule warnings and we do not want to take any risk. We can run a PDF viewer inside a container.

To do this we are going to need a few things:

  • Docker installed and running on our Mac.
  • XQuartz, an open-source X server for macOS.

Those are all the prerequisites we need. Once we have them installed, we can proceed to create our container.

In this case, I am going to use an Ubuntu container and install the Evince document viewer, but the steps should work for any container flavour you like and any document viewer you prefer. And, in general, for any X app we want to run.

First, we need to create our Dockerfile:

# Use a Linux base image
FROM ubuntu:latest

# Install necessary packages
RUN apt-get update && apt-get install -y \
    evince \
    x11-apps

# Set the display environment variable
ENV DISPLAY=:0

# Create a non-root user
RUN groupadd -g 1000 myuser && useradd -u 1000 -g myuser -s /bin/bash -m myuser

# Copy the entrypoint script
COPY entrypoint.sh /

# Set the entrypoint script as executable
RUN chmod +x /entrypoint.sh

# Switch to the non-root user
USER myuser

# Set the entrypoint
ENTRYPOINT ["/entrypoint.sh"]

Second, we need the entry point script that executes our app and passes it the argument, in this case, the file we want to open:

#!/bin/bash
evince "$1"

Now, we can build our image by executing the docker build command in the folder that contains both files:

docker build -t pdf-viewer .

Now, we just need to run it:

  1. Ensure Docker is running on our system.
  2. Start the XQuartz application.
  3. Execute xhost +localhost; this command grants the Docker container permission to access the X server display.
  4. Run our container with the following command:
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /Users/username/pdf-location:/data \
    --entrypoint /entrypoint.sh pdf-viewer /data/file.pdf

Let’s elaborate on the command a bit:

  • docker run: This command is used to run a Docker container.
  • --rm: This flag specifies that the container should be automatically removed once it exits. This helps keep the system clean by removing the container after it finishes execution.
  • -it: These options allocate a pseudo-TTY and enable interactive mode, allowing you to interact with the container.
  • -e DISPLAY=$DISPLAY: This option sets the DISPLAY environment variable in the container to the value of the DISPLAY variable on your local machine. It enables the container to connect to the X server display.
  • -v /tmp/.X11-unix:/tmp/.X11-unix: This option mounts the X server socket directory (/tmp/.X11-unix) from your local machine to the same path (/tmp/.X11-unix) within the container. It allows the container to access the X server for display forwarding.
  • -v /Users/username/pdf-location:/data: This option mounts the /Users/username/pdf-location directory on your local machine to the /data directory within the container. It provides access to the PDF file located at /Users/username/pdf-location inside the container.
  • --entrypoint /entrypoint.sh: This option sets the entry point for the container to /entrypoint.sh, which is a script that will be executed when the container starts.
  • pdf-viewer: This is the name of the Docker image you want to run the container from.
  • /data/file.pdf: This specifies the path to the PDF file inside the container that you want to open with the PDF viewer.

With this, we are done. Executing this command, the document viewer will open with our PDF. Happy reading!

There is one extra, optional step we can take: creating an alias to simplify the execution:

alias run-pdf-viewer='function _pdf_viewer() { xhost +localhost >/dev/null 2>&1; docker run -it --rm --env="DISPLAY=host.docker.internal:0" -v "$1":/data --entrypoint /entrypoint.sh pdf-viewer "/data/$2"; }; _pdf_viewer'

The above alias allows us to open a PDF file inside the container without remembering the more complex command; we only need to ensure Docker is running. As an example, we can open a file with:

run-pdf-viewer ~/Downloads example.pdf


Docker: Parent context

Sometimes, when building Docker images, if we have custom files or resources that multiple images are going to use, it is interesting to have them just once in a shared folder and to have all Dockerfile commands point to that shared folder.

The problem with this is that, when we run a build, Docker uses the directory where the docker build command runs as the build context and this, initially, can cause some failures.

Let’s see an example of this. Let’s imagine we have a few projects with a structure like this:

projectA -> DockerfileA
         -> fileA
         -> resources -> script.sh
         -> resources -> certs -> cert
projectB -> DockerfileB
         -> fileB
         -> resources -> script.sh
         -> resources -> certs -> cert
projectC -> DockerfileC
         -> fileC
         -> resources -> script.sh
         -> resources -> certs -> cert

In the above folder structure, we have three different projects and we can see some duplicated resources, which makes them more difficult to maintain and error-prone. For this reason, we decide to restructure the folders and share the resources, ending with something like:

projects -> projectA -> DockerfileA
         -> projectA -> fileA
         -> projectB -> DockerfileB
         -> projectB -> fileB
         -> projectC -> DockerfileC
         -> projectC -> fileC
         -> resources -> script.sh
         -> resources -> certs -> cert

As we can see, the above structure seems easier to maintain and less error-prone. The only consideration we need to have now is the docker build context.

Let’s say our Dockerfile files contain, among other instructions, ADD or COPY commands. In this case, something like:

...
COPY ./resources/script.sh /opt/
COPY ./resources/certs/cert /opt/cert
...

Building docker images under the first folder structure is something like:

(pwd -> ~/projectA)
$ docker build -t projecta .

(pwd -> ~/projectB)
$ docker build -t projectb .

(pwd -> ~/projectC)
$ docker build -t projectc .

But, if we try to do the same using the second folder structure, we are going to get an error:

COPY failed: Forbidden path outside the build context.

This is because the build context, the directory where we execute the docker command, does not include the shared resources located in the parent folder.

To avoid this problem, the only thing we need to do is execute the build command from the parent folder and add an extra flag (-f) to point to the Dockerfile we want to use when building the image. Something like:

(pwd -> projects)
$ docker build -t projecta -f projectA/DockerfileA .
$ docker build -t projectb -f projectB/DockerfileB .
$ docker build -t projectc -f projectC/DockerfileC .

This solves the problem because now the context of the docker build command is the parent folder, which contains the shared resources.


Container Security: Anchore Engine

Nowadays, containers are taking over the world. We still have big systems and legacy systems and, obviously, not every company out there can move fast enough to migrate to containerized solutions but, wherever you look, people are talking about containers.

And, if you look in the opposite direction, people are talking about security: breaches, vulnerabilities, systems not properly patched, all kinds of problems that put enterprise security and user data at risk.

With all of this, and it is not new, projects involving both topics have been growing and growing. The ecosystem is huge, and the number of options is starting to be overwhelming.

We have projects like:

  • Docker Bench for Security: The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated and are inspired by the CIS Docker Benchmark v1.2.0.
  • Clair: Clair is an open-source project for the static analysis of vulnerabilities in application containers (currently including appc and docker).
  • Cilium: Cilium is open source software for providing and transparently securing network connectivity and load-balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services as well as Layer 7 to protect and secure use of modern application protocols such as HTTP, gRPC and Kafka. Cilium is integrated into common orchestration frameworks such as Kubernetes and Mesos.
  • Anchore Engine: The Anchore Engine is an open-source project that provides a centralized service for inspection, analysis and certification of container images. The Anchore Engine is provided as a Docker container image that can be run standalone or within an orchestration platform such as Kubernetes, Docker Swarm, Rancher, Amazon ECS, and other container orchestration platforms.
  • OpenSCAP: The OpenSCAP ecosystem provides multiple tools to assist administrators and auditors with assessment, measurement, and enforcement of security baselines. We maintain great flexibility and interoperability, reducing the costs of performing security audits.
  • Dagda: Dagda is a tool to perform static analysis of known vulnerabilities, trojans, viruses, malware & other malicious threats in docker images/containers and to monitor the docker daemon and running docker containers for detecting anomalous activities.
  • Notary: The Notary project comprises a server and a client for running and interacting with trusted collections. See the service architecture documentation for more information.
  • Grafeas: An open artefact metadata API to audit and govern your software supply chain.
  • Sysdig Falco: Falco is a behavioural activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, Falco lets you continuously monitor and detect container, application, host, and network activity – all in one place – from one source of data, with one set of rules.
  • Banyan Collector: Banyan Collector is a light-weight, easy to use, and modular system that allows you to launch containers from a registry, run arbitrary scripts inside them, and gather useful information.

As we can see, there are multiple tools within the container security scope. These are just some examples.

In this article, we are going to explore Anchore Engine a bit more. We are going to create a basic Jenkins pipeline to scan one container. For this, we are going to need:

  • A repository in GitHub with a simple dockerized project. In my case, I will be using this one. It’s a simple Spring Boot app with a hello endpoint and a very simple ‘Dockerfile’.
  • We are going to need a Docker Hub repository to store our image. I will be using this one.
  • Docker and docker-compose.

And, that’s all. Let’s go.

The pipeline we are going to implement clones the project from GitHub, builds the image, scans it with Anchore Engine, pushes it to Docker Hub and cleans up.

Install Anchore Engine

We just need to execute a few commands to have Anchore Engine up and running.

mkdir -p ~/aevolume/config 
mkdir -p ~/aevolume/db/
cd ~/aevolume/config && curl -O https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/config.yaml && cd - 
cd ~/aevolume
curl -O https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/docker-compose.yaml

After that, we should see a folder ‘aevolume’ containing the ‘config/config.yaml’ and ‘docker-compose.yaml’ files we have just downloaded.

Running Anchore Engine

As we can see, the previous step has provided us with a docker-compose file to run Anchore Engine in an easy way. We just need to execute the command:

docker-compose up -d

When docker-compose finishes, we should be able to see the two Anchore Engine containers running: one for the application itself and one for the database.

Install the Anchore CLI

It is not necessary but, it is going to be very useful to debug integration problems if we have any (I had a few the first time). For this, we just need to execute a simple command that will make the executable ‘anchore-cli’ available in our system.

pip install anchorecli

Install the Jenkins plugin

Now, we start working on the integration with Jenkins. The first step is to install the Anchore integration in Jenkins. We just need to go to the Jenkins plugin management area and install the one called ‘Anchore Container Image Scanner Plugin’.

Configure Anchore in Jenkins

There is one more step we need to take to configure the Anchore plugin in Jenkins. We need to provide the engine URL and the access credentials. These credentials can be found in the file ‘~/aevolume/config/config.yaml’.

Configure Docker Hub repository

The last thing we need to configure is the access credentials for our Docker Hub repository. I recommend generating an access token instead of using our real credentials. Once we have the credentials, we just need to add them to Jenkins.

Create a Jenkins pipeline

To be able to run our builds and to analyze our containers, we need to create a Jenkins pipeline. We are going to use the script feature for this. The script will look like this:

pipeline {
    environment {
        registry = "fjavierm/anchore_demo"
        registryCredential = 'DOCKER_HUB'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/fjavierm/demo.git'
            }
        }

        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }

        stage('Container Security Scan') {
            steps {
                sh 'echo "docker.io/fjavierm/anchore_demo:latest `pwd`/Dockerfile" > anchore_images'
                anchore name: 'anchore_images'
            }
        }

        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }

        stage('Cleanup') {
            steps {
                sh '''
                    for i in `cat anchore_images | awk '{print $1}'`; do docker rmi $i; done
                '''
            }
        }
    }
}

This will create a pipeline with the five stages defined above.

Execute the build

Now, we just need to execute the build and check the results.

Conclusion

With this, we finish the demo. We have installed Anchore Engine, integrated it with Jenkins, run a build and checked the analysis results.

I hope it is useful.


Docker Java Client

There is no question about containers being one of the latest big things. They are everywhere, everyone uses them or wants to use them and, the truth is, they are very useful and they have changed the way we develop.

We have Docker, for example, which provides us with the docker and docker-compose command-line tools to run one or multiple containers with just a few lines or a configuration file. But we, developers, tend to be lazy and we like to automate things. For example, what if, when we run our application in our favourite IDE, the application started the necessary containers for us? That would be nice.

The point of this article is to play a little bit with a Docker Java client library; the case exposed above is just an example. Probably you, readers, can think about better cases but, for now, this is enough for my learning purposes.

Searching, I have found two different Docker Java client libraries. I have not analyzed or compared them; I just found the docker-java one first and it looks mature and usable enough. For this reason, it is the one I am going to use for the article.

Let’s start.

First, we need to add the dependency to our pom.xml file:

<dependency>
    <groupId>com.github.docker-java</groupId>
    <artifactId>docker-java</artifactId>
    <version>3.1.1</version>
</dependency>

The main class we are going to use to execute the different instructions is the DockerClient class. This is the class that establishes the communication between our application and the docker engine/daemon in our machine. The library offers us a very intuitive builder to generate the object:

DockerClient dockerClient = DockerClientBuilder.getInstance().build();

There are some options we can configure but, for a simple example, they are not necessary. I am just going to mention there is a class called DefaultDockerClientConfig where a lot of different properties can be set. After that, we just need to call the getInstance method with our configuration object.
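As an illustration, here is a minimal sketch of how that configuration could look, assuming the docker-java 3.1.1 API from the dependency above; the class name, the Docker host URI and the registry credentials are just placeholder values:

import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.core.DefaultDockerClientConfig;
import com.github.dockerjava.core.DockerClientBuilder;

public class DockerClientConfigExample {

    public static void main(String[] args) {
        // Placeholder values; adjust them to your environment
        DefaultDockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
                .withDockerHost("unix:///var/run/docker.sock")
                .withRegistryUsername("myuser")
                .withRegistryPassword("mypassword")
                .build();

        DockerClient dockerClient = DockerClientBuilder.getInstance(config).build();

        // Quick connectivity check against the Docker daemon
        dockerClient.pingCmd().exec();
    }
}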

Image management

Listing images

List<Image> images = dockerClient.listImagesCmd().exec();

Pulling images

dockerClient.pullImageCmd("postgres")
                .withTag("11.2")
                .exec(new PullImageResultCallback())
                .awaitCompletion(30, TimeUnit.SECONDS);

Container management

Listing containers

// List running containers
dockerClient.listContainersCmd().exec();

// List existing containers
dockerClient.listContainersCmd().withShowAll(true).exec();

Creating containers

CreateContainerResponse container = dockerClient.createContainerCmd("postgres:11.2")
                .withName("postgres-11.2-db")
                .withExposedPorts(new ExposedPort(5432, InternetProtocol.TCP))
                .exec();

Starting containers

dockerClient.startContainerCmd(container.getId()).exec();

Other

There are multiple operations we can do with images and containers. The above is just a short list of examples but we can extend it with the ones below; a combined sketch putting several of them together follows the list:

  • Image management
    • Listing images: listImagesCmd()
    • Building images: buildImageCmd()
    • Inspecting images: inspectImageCmd("id")
    • Tagging images: tagImageCmd("id", "repository", "tag")
    • Pushing images: pushImageCmd("repository")
    • Pulling images: pullImageCmd("repository")
    • Removing images: removeImageCmd("id")
    • Search in registry: searchImagesCmd("text")
  • Container management
    • Listing containers: listContainersCmd()
    • Creating containers: createContainerCmd("repository:tag")
    • Starting containers: startContainerCmd("id")
    • Stopping containers: stopContainerCmd("id")
    • Killing containers: killContainerCmd("id")
    • Inspecting containers: inspectContainerCmd("id")
    • Creating a snapshot: commitCmd("id")
  • Volume management
    • Listing volumes: listVolumesCmd()
    • Inspecting volumes: inspectVolumeCmd("id")
    • Creating volumes: createVolumeCmd()
    • Removing volumes: removeVolumeCmd("id")
  • Network management
    • Listing networks: listNetworksCmd()
    • Creating networks: createNetworkCmd()
    • Inspecting networks: inspectNetworkCmd().withNetworkId("id")
    • Removing networks: removeNetworkCmd("id")

And, that is all. There is a project using some of the operations listed here. It is called country, it is one of my learning projects, and you can find it here.

Concretely, you can find the code using the Docker Java client library here, and the code using this library here, specifically in the class PopulationDevelopmentConfig.

I hope you find it useful.


Docker for Java developers

Today has been one of those days to learn something new. If you have seen any of my blogs, you should know that I like to write about what I am learning and I like to write it myself; I am not a fan of just pasting resources or other people’s materials or tutorials but, sometimes, I need to make exceptions.

Today, I was learning Docker, you know, the container platform that allows us to deploy applications in containers in an easy way. Unless you have been living under a rock for a long time, you probably know what I am talking about.

Looking for resources, I found a video of one of the presentations at the Devnexus conference in 2015 that has helped me a lot. Considering it a great resource, I am going to link the video here instead of writing an article myself, because it is very well explained and the tricks and solutions he shows are very interesting.

The speaker is Burr Sutter, an employee of Red Hat (at the time this article is written). He has a blog, a GitHub account and a Twitter account you can check. The video is quite interesting and not boring at all; it is more than one hour long but it feels much shorter than that.

In addition, I strongly recommend going to his GitHub account (linked before) and taking a look at the tutorials he uses during the video:

  • Docker tutorial: A simple deployment of a Java EE application.
  • Docker MySQL tutorial: The same application as the previous one but with two different containers, one with the application server and the application, and another one with the MySQL database.

Just a tiny addition to the excellent job he has done: right now, we can download from the Docker page the native versions of Docker for Mac and Windows. They are still in beta but I have not had any problems with them and everything looks a bit simpler.

Good luck with your learning and enjoy it.

See you.
