A few days ago I was talking with one of my acquaintances and he told me he had recently been the victim of a phishing attack. Luckily, he realised something was not right just after entering his data into a bogus form, so he was able to contact his service provider company and fix the problem without too many headaches.
During the conversation he showed me the message he had received with the link he clicked on, and I was quick to notice it was fake. It was just an SMS message, without enough text to examine as one can with an email, but the link was obviously (to me) a fake one.
From other conversations I have had with him in the past, I know he has some basic knowledge and security awareness, especially because his company makes employees do some basic training about it, which is very good. But while discussing with him, I realised he had never heard the term domain impersonation and, other than a basic comment about checking links before you click them, he was never given any examples of what to look for.
Trying to do my bit to raise awareness, we are going to quickly review a few of the most common techniques and try to learn a bit more by example.
Let’s say we have a service provider that offers its services through a web page hosted on telecomexample.org. This will be our starting address, the real and original one. Now, let’s see what kind of techniques we can apply to mislead and trick unfortunate users.
This technique, usually known as omission, consists of skipping one character of the original address. For example, telecmexample.org. As we can see, the example omits an “o”. The longer the address is, the easier it is to miss that.
This technique, known as substitution, consists of replacing a character with a similar one. For example, telecomexemple.org. As we can see, we are replacing the “a” with an “e”. Other common replacements are “i -> 1” or “i -> l”.
This technique, known as a homoglyph (or homograph) attack, consists of replacing a character with a similar-looking character from a different alphabet, usually the Latin, Greek or Cyrillic ones. For example, teʟecomexample.org. In this case, we have replaced the “l”.
This technique, known as insertion or addition, consists of adding an extra character to the address. For example, tellecomexample.org. Reading it carefully, we can see an extra “l” has been added.
This technique, known as transposition, just alters the order of one or more characters in the address. For example, telecomxeample.org. In this case, we have swapped the “e” and the “x”.
This technique, known as homophones, uses similar-sounding words, such as “narrows” and “narroughs”. For example, telecomsample.org, where the word “example” has been replaced by the similar-sounding word “sample”. Note: there are probably better examples but, given the domain and not being a native speaker, it is hard; feel free to comment with better suggestions.
In this technique, the service provider domain is used as a subdomain of a domain owned by the attackers, such as telecomexample.accounts.org, where the attacker owns the domain “accounts.org”.
This technique uses the service provider domain but with a different top-level domain. For example, telecomexample.com, where the top-level domain COM is used instead of the real one, ORG.
This technique adds an intermediate hyphen to the domain, as in telecom-example.org, where a hyphen has been added between the words “telecom” and “example”.
In this case, an extra word is added to the original domain to mislead users. For example, telecomexamplelogin.org, where the word “login” has been added to the original domain.
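Most of the character-level tricks above leave the bogus address only one or two edits away from the real one, which is something a program can check. Here is a minimal sketch (the helper names and threshold are mine; real detection tools are far more thorough):

```python
# Minimal lookalike-domain check against the article's example domain.
LEGIT = "telecomexample.org"

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_suspicious(domain: str, legit: str = LEGIT) -> bool:
    if domain == legit:
        return False                       # exact match: the real site
    name, _, tld = domain.lower().rpartition(".")
    lname, _, ltld = legit.rpartition(".")
    if name == lname and tld != ltld:
        return True                        # same name, swapped TLD
    # Omission, substitution, addition and transposition all stay within
    # a couple of edits of the real name. Homoglyphs need extra handling
    # (Unicode confusables), which this sketch does not attempt.
    return edit_distance(name, lname) <= 2

assert not looks_suspicious("telecomexample.org")    # the real one
assert looks_suspicious("telecmexample.org")         # omission
assert looks_suspicious("tellecomexample.org")       # addition
assert looks_suspicious("telecomxeample.org")        # transposition
assert looks_suspicious("telecomexample.com")        # TLD swap
assert not looks_suspicious("completely-unrelated.net")
```

Note that this catches only spelling-level tricks; subdomain abuse, homoglyphs and extra-word domains would need their own checks.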
Today, just a short article, but I hope it helps to raise some awareness about very common domain impersonation techniques used by attackers to deceive users.
The index of this series of articles can be found here.
Confidentiality, integrity and availability are the three basic components around which we should build and maintain our security model. Encryption is one of the tools we have available to achieve this and it can help us to make communications safer and ensure that only the sender and receiver can read clear text data.
Cryptography is the study of secure communications techniques that allow only the sender and intended recipient of a message to view its contents. The term is derived from the Greek word kryptos, which means hidden. It is closely associated with encryption, which is the act of scrambling ordinary text into what is known as ciphertext and then back again upon arrival. In addition, cryptography also covers the obfuscation of information in images using techniques such as microdots or merging. Ancient Egyptians were known to use these methods in complex hieroglyphics, and Roman Emperor Julius Caesar is credited with using one of the first modern cyphers.
The objective of cryptography is not only confidentiality, but it also includes integrity, authentication and non-repudiation.
Types of Cryptography
Symmetric Key Cryptography

Symmetric key algorithms are those which use a single key for both encryption and decryption of data. This key is generally a shared secret between the parties who want to encrypt or decrypt the data.

The most widely used symmetric cyphers are AES and DES.
Asymmetric Cryptography / Public Key Cryptography
Asymmetric cryptography, also known as public-key cryptography, is a process that uses a pair of related keys, one public key and one private key, to encrypt and decrypt a message and protect it from unauthorized access or use. A public key is a cryptographic key that can be used by any person to encrypt a message so that it can only be deciphered by the intended recipient with their private key. A private key, also known as a secret key, is known only to the key’s initiator.
Many protocols rely on asymmetric cryptography, including the transport layer security (TLS) and secure sockets layer (SSL) protocols, which make HTTPS possible. The encryption process is also used in software programs such as browsers that need to establish a secure connection over an insecure network like the Internet or need to validate a digital signature.
RSA, DSA and the Diffie-Hellman algorithm are popular examples of asymmetric cyphers.
Usually, private keys are known only by the owner and public keys are issued by using a Public Key Infrastructure (PKI) where a trusted Certification Authority certifies the ownership of the key pairs.
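As a toy illustration of how two parties can agree on a shared secret over a public channel, here is a Diffie-Hellman sketch. The prime and generator below are deliberately small and purely illustrative; real deployments use standardized groups of 2048 bits or more.

```python
import secrets

# Toy Diffie-Hellman key agreement (illustration only).
p = 2147483647   # the Mersenne prime 2^31 - 1; far too small for real use
g = 5

# Each party picks a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side combines its own private value with the other's public value.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared   # both ends derive the same shared secret
```

An eavesdropper sees p, g, a_pub and b_pub, but recovering the private exponents requires solving the discrete logarithm problem.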
Government Access to Keys
Under the Government Access to Keys (GAK) scheme, software companies give copies of all keys to the government, and the government promises to hold the keys securely and to use them only when a court issues a warrant to do so.
A cypher is a set of rules by which we implement encryption. Thousands of cypher algorithms are available on the Internet. Some of them are proprietary while others are open source. Common methods by which cyphers replace original data with encrypted data are:
The simple substitution cypher is a cypher that has been in use for many hundreds of years (an excellent history is given in Simon Singh’s ‘The Code Book’). It basically consists of substituting every plaintext character for a different ciphertext character. It differs from the Caesar cypher in that the cypher alphabet is not simply the alphabet shifted, it is completely jumbled.
The simple substitution cypher offers very little communication security, and it will be shown that it can be easily broken even by hand, especially as the messages become longer (more than several hundred ciphertext characters).
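A quick sketch of such a cypher (the helper names and the fixed seed are mine, used only to make the demo reproducible), using a jumbled alphabet rather than a simple shift:

```python
import random
import string

# Simple (monoalphabetic) substitution cypher: every plaintext letter
# maps to one fixed letter of a jumbled alphabet.
def make_key(seed: int = 42) -> str:
    letters = list(string.ascii_uppercase)
    random.Random(seed).shuffle(letters)   # deterministic jumble for the demo
    return "".join(letters)

def encrypt(plaintext: str, key: str) -> str:
    table = str.maketrans(string.ascii_uppercase, key)
    return plaintext.upper().translate(table)

def decrypt(ciphertext: str, key: str) -> str:
    table = str.maketrans(key, string.ascii_uppercase)
    return ciphertext.translate(table)

key = make_key()
ct = encrypt("ATTACK AT DAWN", key)
assert decrypt(ct, key) == "ATTACK AT DAWN"
```

Because each plaintext letter always becomes the same ciphertext letter, letter frequencies survive encryption, which is exactly what frequency analysis exploits.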
The development of Polyalphabetic Substitution Ciphers was the cryptographers’ answer to Frequency Analysis. The first known polyalphabetic cypher was the Alberti Cipher invented by Leon Battista Alberti in around 1467. He used a mixed alphabet to encrypt the plaintext, but at random points he would change to a different mixed alphabet, indicating the change with an uppercase letter in the ciphertext. In order to utilise this cypher, Alberti used a cypher disc to show how plaintext letters are related to ciphertext letters.
Stream Cyphers

A stream cypher is an encryption algorithm that encrypts 1 bit or byte of plaintext at a time. It uses an infinite stream of pseudorandom bits as the key. For a stream cypher implementation to remain secure, its pseudorandom generator should be unpredictable and the key should never be reused. Stream cyphers are designed to approximate an idealized cypher, known as the One-Time Pad.
The One-Time Pad, which is supposed to employ a purely random key, can potentially achieve “perfect secrecy”; that is, it is supposed to be fully immune to brute-force attacks. The problem with the one-time pad is that, in order to create such a cypher, its key should be at least as long as the plaintext.
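A one-time pad is simple to sketch: XOR the message with a random key of equal length (the function names are mine; losing, leaking or reusing the key destroys the guarantee):

```python
import secrets

# One-time pad: XOR the plaintext with a truly random key of the same
# length. Perfect secrecy holds only if the key is random, kept secret,
# and never reused.
def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))   # key as long as the message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```

The key-distribution burden this creates is precisely why practical stream cyphers expand a short key into a long pseudorandom keystream instead.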
Popular Stream Cyphers
RC4: Rivest Cipher 4 (RC4) is the most widely used of all stream cyphers, particularly in software. It is also known as ARCFOUR or ARC4. RC4 has been used in various protocols like WEP and WPA (both security protocols for wireless networks) as well as in TLS. Unfortunately, studies have revealed vulnerabilities in RC4, prompting Mozilla and Microsoft to recommend that it be disabled where possible. In fact, RFC 7465 prohibits the use of RC4 in all versions of TLS. Its successors, RC5 and RC6, are block cyphers.
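RC4 is compact enough to fit in a few lines, so here is a study-only sketch of its key scheduling and output generation. As noted above, RC4 is broken; never use it for real traffic.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 applied to data (the same call encrypts and decrypts).
    Broken and banned from TLS by RFC 7465; for study only."""
    # Key-scheduling algorithm (KSA): build a key-dependent permutation.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Well-known test vector: key "Key", plaintext "Plaintext".
ct = rc4(b"Key", b"Plaintext")
assert ct.hex() == "bbf316e8d940af0ad3"
assert rc4(b"Key", ct) == b"Plaintext"   # XOR again to decrypt
```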
Block Cyphers

A block cypher is an encryption algorithm that encrypts a fixed size of n bits of data, known as a block, at one time. The usual sizes of each block are 64 bits, 128 bits, and 256 bits. So, for example, a 64-bit block cypher will take in 64 bits of plaintext and encrypt it into 64 bits of ciphertext. In cases where the plaintext is shorter than the block size, padding schemes are called into play. The majority of the symmetric cyphers used today are actually block cyphers. DES, Triple DES, AES, IDEA, and Blowfish are some of the commonly used encryption algorithms that fall under this group.
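The padding step mentioned above can be sketched with a PKCS#7-style scheme, a common choice (the function names are mine):

```python
# PKCS#7-style padding: extend the plaintext to a multiple of the block
# size; every padding byte holds the number of bytes that were added.
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(data) % block_size)   # always adds 1..block_size
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    n = padded[-1]
    if not 1 <= n <= len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]

padded = pkcs7_pad(b"YELLOW SUBMARINE!", 16)   # 17 bytes -> padded to 32
assert len(padded) % 16 == 0
assert pkcs7_unpad(padded) == b"YELLOW SUBMARINE!"
```

Note that even an exact multiple of the block size gets a full extra block of padding, so unpadding is always unambiguous.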
Popular Block Cyphers
DES: Data Encryption Standard (DES) used to be the most popular block cypher in the world and was used in several industries. It is still popular today, but only because it is usually included in historical discussions of encryption algorithms. The DES algorithm became a standard in the US in 1977. However, it has since been proven vulnerable to brute-force attacks and other cryptanalytic methods. DES is a 64-bit cypher that works with a 64-bit key. However, 8 of the 64 bits in the key are parity bits, so the effective key size is 56 bits.
3DES: As its name implies, 3DES is a cypher based on DES. It is practically DES that is run three times. Each DES operation can use a different key, with each key being 56 bits long. Like DES, 3DES has a block size of 64 bits. Although 3DES is many times stronger than DES, it is also much slower (about 3x slower). Because many organizations found 3DES to be too slow for many applications, it never became the ultimate successor of DES.
AES: A US Federal Government standard since 2002, AES or Advanced Encryption Standard is arguably the most widely used block cypher in the world. It has a block size of 128 bits and supports three possible key sizes: 128, 192, and 256 bits. The longer the key size, the stronger the encryption. However, longer keys also result in longer processes of encryption.
Blowfish: This is another popular block cypher (although not as widely used as AES). It has a block size of 64 bits and supports a variable-length key that can range from 32 to 448 bits. One thing that makes Blowfish so appealing is that it is unpatented and royalty-free.
Twofish: This cypher is related to Blowfish but it is not as popular. It is a 128-bit block cypher that supports key sizes up to 256 bits long.
DSA and Related Signature Schemes
The DSA algorithm works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer’s corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message).
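Python's standard library has no DSA implementation, but the sign-with-the-private-key / verify-with-the-public-key flow described above can be sketched with textbook RSA and toy numbers (p = 61, q = 53). This is an illustration only; real signatures use large keys and padding schemes such as PSS.

```python
import hashlib

# Textbook sign/verify flow with tiny RSA numbers (illustration only).
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def digest(message: bytes) -> int:
    # Reduce the hash mod n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)                 # private key signs

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)    # public key verifies

sig = sign(b"pay Bob 10 euros")
assert verify(b"pay Bob 10 euros", sig)
# Changing even one character of the message invalidates the signature,
# which is what gives integrity and non-repudiation.
```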
A digital certificate contains various items, including:
Subject: Certificate’s holder name.
Serial Number: Unique number to identify the certificate.
Public key: A public copy of the public key of the certificate holder.
Issuer: Certificate issuing authority’s digital signature to verify that the certificate is real.
Signature algorithm: Algorithm used to digitally sign a certificate by the Certification Authority (CA).
Validity: Validity of a certificate mark by expiration date and time.
RSA

RSA is an encryption algorithm used to securely transmit messages over the internet. It is based on the principle that it is easy to multiply large numbers, but factoring large numbers is very difficult. For example, it is easy to check that 31 and 37 multiply to 1147, but trying to find the factors of 1147 is a much longer process.
RSA is an example of public-key cryptography, which is illustrated by the following example: suppose Alice wishes to send Bob a valuable diamond, but the jewel will be stolen if sent unsecured. Both Alice and Bob have a variety of padlocks, but they don’t own the same ones, meaning that their keys cannot open the other’s locks. Bob can, however, send Alice one of his padlocks, left open; Alice locks the diamond away with it, and only Bob, holding the matching key, can open the box. The open padlock plays the role of the public key, and Bob’s key that of the private key.
In RSA, the public key is generated by multiplying two large prime numbers p and q together, and the private key is generated through a different process involving p and q. A user can then distribute his public key pq, and anyone wishing to send the user a message would encrypt their message using the public key. For all practical purposes, even computers cannot factor sufficiently large numbers into the product of two primes, in the same way that factoring a number like 414863 by hand is virtually impossible.
The implementation of RSA makes heavy use of modular arithmetic, Euler’s theorem, and Euler’s totient function. Notice that each step of the algorithm only involves multiplication, so it is easy for a computer to perform:
First, the receiver chooses two large prime numbers p and q. Their product, n = pq, will be half of the public key.
The receiver calculates ϕ(pq) = (p−1)(q−1) and chooses a number e relatively prime to ϕ(pq). In practice, e is often chosen to be (2^16) + 1 = 65537, though it can be as small as 3 in some cases. e will be the other half of the public key.
The receiver calculates the modular inverse d of e modulo ϕ(n); in other words, de ≡ 1 (mod ϕ(n)). d is the private key.
The receiver distributes both parts of the public key: n and e. d is kept secret.
Now that the public and private keys have been generated, they can be reused as often as wanted. To transmit a message, follow these steps:
First, the sender converts his message into a number m. One common conversion process uses the ASCII alphabet:
For example, the message “HELLO” would be encoded as 7269767679. It is important that m < n, as otherwise the message will be lost when taken modulo n; so if n is smaller than the message, it will be sent in pieces.
The sender then calculates c ≡ m^e (mod n). c is the ciphertext or the encrypted message. Besides the public key, this is the only information an attacker will be able to steal.
The receiver computes c^d ≡ m (mod n), thus retrieving the original number m.
The receiver translates m back into letters, retrieving the original message.
Note that step 3 makes use of Euler’s theorem.
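The steps above can be reproduced end to end with the classic small-prime example (p = 61, q = 53); real keys use primes hundreds of digits long.

```python
# RSA with toy primes (p = 61, q = 53); illustration only.
p, q = 61, 53
n = p * q                      # 3233, half of the public key
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, relatively prime to phi
d = pow(e, -1, phi)            # modular inverse (Python 3.8+): 2753
assert (d * e) % phi == 1

m = 65                         # the message as a number (ASCII "A"), m < n
c = pow(m, e, n)               # sender encrypts: c = m^e mod n
assert c == 2790
assert pow(c, d, n) == m       # receiver decrypts: c^d mod n recovers m
```

Note how `pow` with three arguments performs modular exponentiation efficiently; with 2048-bit n, the same two lines still run instantly, while factoring n does not.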
Message Digest (One-Way Hash) Functions
A message digest is a fixed-size string of digits created by applying a one-way cryptographic hash function to a message.
Message digests are designed to protect the integrity of a piece of data or media to detect changes and alterations to any part of a message. They are a type of cryptography utilizing hash values that can warn the copyright owner of any modifications applied to their work.
Message digest hash numbers represent specific files containing the protected works. One message digest is assigned to particular data content, and any change, made deliberately or accidentally, alters the digest, prompting the owner to identify the modification.
This term is also known as a hash value and sometimes as a checksum.
The message digest is a unique fixed-size bit string calculated in such a way that if a single bit is modified, around fifty per cent of the bits of the digest value will change.
Message Digest 5 (MD5)
The MD5 function is a cryptographic algorithm that takes an input of arbitrary length and produces a message digest that is 128 bits long. The digest is sometimes also called the “hash” or “fingerprint” of the input. MD5 is used in many situations where a potentially long message needs to be processed and/or compared quickly. The most common application is the creation and verification of digital signatures.
MD5 was designed by well-known cryptographer Ronald Rivest in 1991. In 2004, some serious flaws were found in MD5. The complete implications of these flaws have yet to be determined.
Secure Hashing Algorithm (SHA)
Secure Hash Algorithms (SHA) are a family of cryptographic functions designed to keep data secured. It works by transforming the data using a hash function: an algorithm that consists of bitwise operations, modular additions, and compression functions. The hash function then produces a fixed-size string that looks nothing like the original. These algorithms are designed to be one-way functions, meaning that once they are transformed into their respective hash values, it is virtually impossible to transform them back into the original data. A few algorithms of interest are SHA-1, SHA-2, and SHA-3, each of which was successively designed with increasingly stronger encryption in response to hacker attacks. SHA-0, for instance, is now obsolete due to the widely exposed vulnerabilities.
SHA-1 produces 160-bit hash values. SHA-2 is a family of different hash functions, including SHA-256, SHA-384 and SHA-512.
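The fixed output sizes and the avalanche effect described above are easy to observe with Python's hashlib (the inputs are arbitrary):

```python
import hashlib

# Fixed-size digests: the output length depends only on the algorithm.
assert len(hashlib.md5(b"HELLO").digest()) == 16     # 128 bits
assert len(hashlib.sha1(b"HELLO").digest()) == 20    # 160 bits
assert len(hashlib.sha256(b"HELLO").digest()) == 32  # 256 bits

# Avalanche effect: changing one character flips roughly half the
# output bits, so the two digests look completely unrelated.
d1 = hashlib.sha256(b"HELLO").digest()
d2 = hashlib.sha256(b"HELLP").digest()
diff_bits = bin(int.from_bytes(d1, "big") ^ int.from_bytes(d2, "big")).count("1")
print(f"{diff_bits} of 256 bits differ")   # typically near 128
```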
Hashed Message Authentication Code (HMAC)
A hashed message authentication code (HMAC) is a message authentication code that makes use of a cryptographic key along with a hash function. The actual algorithm behind a hashed message authentication code is complicated, with hashing being performed twice. This helps in resisting some forms of cryptographic analysis. A hashed message authentication code is considered to be more secure than other similar message authentication codes, as the data transmitted and key used in the process are hashed separately.
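A short sketch with Python's hmac module (the key and message are placeholders; in practice the key would be random and shared out of band):

```python
import hashlib
import hmac

# HMAC: a keyed hash. Without the key, an attacker who sees messages
# and tags in transit still cannot forge a valid tag.
key = b"shared-secret"          # illustrative; use a random key in practice
message = b"amount=10&to=bob"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

assert verify(key, message, tag)
assert not verify(key, b"amount=9999&to=eve", tag)   # tampering detected
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.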
Secure Shell (SSH)
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.
SSH provides a secure channel over an unsecured network by using client-server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22.
The Secure Shell protocol consists of three major components:
The Transport Layer Protocol (SSH-TRANS) provides server authentication, confidentiality and integrity. It may optionally also provide compression. The transport layer will typically run over a TCP/IP connection, but might also run over any other reliable data stream.
The User Authentication Protocol (SSH-USERAUTH) authenticates the client-side user to the server. It runs over the transport layer protocol.
The Connection Protocol (SSH-CONNECT) multiplexes the encrypted tunnel into several logical channels. It runs over the user authentication protocol.
Public Key Infrastructure
Public Key Infrastructure (PKI) is a combination of policies, procedures, hardware, software and people that are required to create, manage and revoke digital certificates.
Public and Private Key Pair
Public and private keys work as a pair to enforce the encryption and decryption process. The public key can be provided to anyone, while the private key must be kept secret.
Both directions are valid: using the public key to encrypt and the private key to decrypt, or the opposite, where the private key is used for encryption and the public key for decryption. The two ways have different applications.
A Certification Authority (CA) is a computer or entity that creates and issues digital certificates. Information like the IP address, fully qualified domain name and public key is present in these certificates. The CA also assigns a serial number to each digital certificate and signs the certificate with its digital signature.
Root certificates provide the public key and other details of CAs. Different operating systems store root certificates in different ways.
The purpose of identity certificates is similar to root certificates, but they cover client computers or devices; for example, a router or a web server that wants to make SSL connections with other peers.
Signed Certificate Vs. Self-Signed Certificate
A self-signed certificate is a public key certificate that is signed and validated by its own creator. The certificate is signed with its own private key, so it provides no third-party assurance of the identity of the organization or person that performed the signing.
A signed certificate is backed by a reputable third-party certificate authority (CA). The issuance of a signed certificate requires verification of domain ownership, legal business documents, and other essential technical aspects. To establish a certificate chain, the certificate authority also issues a certificate to itself, known as a root certificate.
Digital Signatures

A digital signature is used to validate the authenticity of digital documents. Digital signatures identify the author of the document and the date and time of signing, and they authenticate the content of the message.
There are two categories of digital signatures:
Direct digital signature: A direct digital signature involves only two parties, one sending the message and the other receiving it. It assumes that both parties trust each other and know each other’s public keys. Messages are prone to corruption, and the sender can later deny having sent a message.
Arbitrated digital signature: An arbitrated digital signature involves three parties: the sender, the receiver, and an arbiter who becomes the medium for sending and receiving messages between them. Messages are less prone to corruption because a timestamp is included by default.
Secure Sockets Layer
Secure Sockets Layer (SSL) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser, or a mail server and a mail client (e.g., Outlook).
SSL allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. Normally, data sent between browsers and web servers is sent in plain text – leaving you vulnerable to eavesdropping. If an attacker is able to intercept all data being sent between a browser and a web server, they can see and use that information.
More specifically, SSL is a security protocol. Protocols describe how algorithms should be used. In this case, the SSL protocol determines variables of the encryption for both the link and the data being transmitted.
All browsers have the capability to interact with secured web servers using the SSL protocol. However, the browser and the server need what is called an SSL Certificate to be able to establish a secure connection.
SSL and TLS for Secure Communication
A popular implementation of public-key encryption is the Secure Sockets Layer (SSL). Originally developed by Netscape, SSL is an Internet security protocol used by Internet browsers and Web servers to transmit sensitive information. SSL has become part of an overall security protocol known as Transport Layer Security (TLS).
TLS and its predecessor SSL make significant use of certificate authorities. Once your browser requests a secure page and adds the “s” onto “http”, the server sends its public key and certificate, and the browser checks three things:
The certificate comes from a trusted party.
The certificate is currently valid.
The certificate has a relationship with the site from which it is coming.
The following are some important functionalities SSL/TLS has been designed for:
Server authentication to client and vice versa.
Select a common cryptographic algorithm.
Generate shared secrets between peers.
Protection of a normal TCP connection.
How SSL/TLS works
These are the essential principles to grasp for understanding how SSL/TLS works:
Secure communication begins with a TLS handshake, in which the two communicating parties open a secure connection and exchange the public key.
During the TLS handshake, the two parties generate session keys, and the session keys encrypt and decrypt all communications after the TLS handshake.
Different session keys are used to encrypt communications in each new session.
TLS ensures that the party on the server-side, or the website the user is interacting with, is actually who they claim to be.
TLS also ensures that data has not been altered, since a message authentication code (MAC) is included with transmissions.
With TLS, both HTTP data that users send to a website (by clicking, filling out forms, etc.) and the HTTP data that websites send to users is encrypted. Encrypted data has to be decrypted by the recipient using a key.
TLS communication sessions begin with a TLS handshake. A TLS handshake uses something called asymmetric encryption, meaning that two different keys are used on the two ends of the conversation. This is possible because of a technique called public-key cryptography.
In public-key cryptography, two keys are used: a public key, which the server makes available publicly, and a private key, which is kept secret and only used on the server-side. Data encrypted with the public key can only be decrypted with the private key and vice versa.
During the TLS handshake, the client and server use the public and private keys to exchange randomly generated data, and this random data is used to create new keys for encryption, called the session keys.
Pretty Good Privacy
Pretty Good Privacy (PGP) is a type of encryption program for online communication channels. The method was introduced in 1991 by Phil Zimmermann, a computer scientist and cryptographer. PGP offers authentication and privacy protection in files, emails, disk partitions and digital signatures and has been dubbed the closest thing to military-grade encryption. PGP encrypts the contents of e-mail messages using a combination of different methods: hashing, data compression, symmetric encryption, and asymmetric encryption. In addition to e-mail encryption, PGP also supports the use of a digital signature to verify the sender of an e-mail.
OpenPGP is the most widely applied standard when it comes to modern PGP practices. OpenPGP programs allow users to encrypt private and confidential messages before uploading or downloading content from a remote server. This prevents cybersecurity threats from the open channels of the Internet.
Disk Encryption

Disk encryption secures files and directories by converting them into an encrypted format. It encrypts every bit on a disk to prevent unauthorised access to data storage.
The standard process for booting up an operating system is that the first section of the disk, called the master boot record, instructs the system where to read the first file that begins the instructions for loading the operating system.
When disk encryption is installed, the contents of the disk, except the master boot record and a small system that it loads, are encrypted using any suitable modern symmetric cypher by a secret key. The master boot record is modified to first load this small system, which can validate authentication information from the user.
Cryptographic Attacks

Cryptographic attacks aim to recover encryption keys. The process of finding vulnerabilities in code, encryption algorithms or key management schemes is called cryptanalysis.
There are different attacks that can be applied in order to recover an encryption key:
Known-plaintext attacks: These are applied when the cryptanalyst has access to a plaintext message and its corresponding ciphertext and seeks to discover a correlation between them.
Ciphertext-only attacks: Cryptanalysts have access only to the ciphertexts and try to extract the key or the plaintext by analysing them. Frequency analysis, for example, is a great tool for this.
Chosen-plaintext attacks: A chosen-plaintext attack (CPA) is a model for cryptanalysis which assumes that the attacker can choose random plaintexts to be encrypted and obtain the corresponding ciphertexts. The goal of the attack is to gain some further information which reduces the security of the encryption scheme. In the worst case, a chosen-plaintext attack could expose secret information after calculating the secret key. Two forms of chosen-plaintext attack can be distinguished:
Batch chosen-plaintext attack, where the cryptanalyst chooses all plaintexts before any of them are encrypted. This is often the meaning of an unqualified use of “chosen-plaintext attack”.
Adaptive chosen-plaintext attack, where the cryptanalyst makes a series of interactive queries, choosing subsequent plaintexts based on the information from the previous encryptions.
Chosen-ciphertext attacks: A cryptanalyst can analyse any chosen ciphertexts together with their corresponding plaintexts. The goal is to acquire the secret key or to get as much information about the attacked system as possible.
Adaptive-chosen-ciphertext attacks: The adaptive-chosen-ciphertext attack is a kind of chosen-ciphertext attack in which an attacker can make the attacked system decrypt many different ciphertexts, with new ciphertexts created based on the responses (plaintexts) received previously.
Adaptive-chosen-plaintext attacks: An adaptive-chosen-plaintext attack is a chosen-plaintext attack scenario in which the attacker has the ability to make his or her choice of the inputs to the encryption function based on the previous chosen-plaintext queries and their corresponding ciphertexts.
Rubber-hose attacks: The rubber-hose attack extracts secrets from people by means of torture or coercion. Other means include governmental and corporate influence over sub-entities.
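As an illustration of the frequency analysis mentioned above for ciphertext-only attacks, here is a sketch that breaks a Caesar-shifted ciphertext using nothing but the ciphertext and typical English letter frequencies (the frequency table is approximate, and the helper names are mine):

```python
from collections import Counter

# Approximate relative frequencies (%) of letters in English text.
ENGLISH = {'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2,
           'g': 2.0, 'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0,
           'm': 2.4, 'n': 6.7, 'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0,
           's': 6.3, 't': 9.1, 'u': 2.8, 'v': 0.98, 'w': 2.4, 'x': 0.15,
           'y': 2.0, 'z': 0.074}

def shift_text(text: str, shift: int) -> str:
    """Caesar-shift the lowercase letters of text, leaving the rest alone."""
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
                   for c in text)

def chi_squared(text: str) -> float:
    """How far the letter distribution of text is from typical English."""
    letters = [c for c in text if c.islower()]
    counts, total = Counter(letters), len(letters)
    return sum((counts.get(ch, 0) - total * f / 100) ** 2 / (total * f / 100)
               for ch, f in ENGLISH.items())

def crack_caesar(ciphertext: str) -> str:
    """Try all 26 shifts; keep the candidate that looks most like English."""
    return min((shift_text(ciphertext, s) for s in range(26)), key=chi_squared)

plain = ("frequency analysis exploits the fact that some letters of the "
         "alphabet appear much more often than others in ordinary english text")
ct = shift_text(plain, 7)          # attacker sees only this
assert crack_caesar(ct) == plain   # recovered without the key
```

The same idea, with more bookkeeping, is what breaks simple substitution cyphers by hand.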
Code Breaking Methodologies
Some examples of methodologies that can help to break encryptions are:
The index of this series of articles can be found here.
Cloud computing has two meanings. The most common refers to running workloads remotely over the internet in a commercial provider’s data centre, also known as the “public cloud” model. Popular public cloud offerings such as Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure, all exemplify this familiar notion of cloud computing. Today, most businesses take a multi-cloud approach, which simply means they use more than one public cloud service.
The second meaning of cloud computing describes how it works: a virtualized pool of resources from raw compute power to application functionality, available on demand. When customers procure cloud services, the provider fulfils those requests using advanced automation rather than manual provisioning. The key advantage is agility: the ability to apply abstracted compute, storage, and network resources to workloads as needed and tap into an abundance of pre-built services. Major characteristics of cloud computing include:
Types of Cloud Computing Services
The array of available cloud computing services is vast, but most fall into one of the following categories:
SaaS (Software as a Service)
This type of public cloud computing delivers applications over the internet through the browser. The most popular SaaS applications for business can be found in Google’s G Suite and Microsoft’s Office 365; among enterprise applications, Salesforce leads the pack. But virtually all enterprise applications, including ERP suites from Oracle and SAP, have adopted the SaaS model. Typically, SaaS applications offer extensive configuration options as well as development environments that enable customers to code their own modifications and additions.
IaaS (Infrastructure as a Service)
At a basic level, IaaS public cloud providers offer storage and compute services on a pay-per-use basis. But the full array of services offered by all major public cloud providers is staggering: highly scalable databases, virtual private networks, big data analytics, developer tools, machine learning, application monitoring, and so on. Amazon Web Services was the first IaaS provider and remains the leader, followed by Microsoft Azure, Google Cloud Platform, and IBM Cloud.
PaaS (Platform as a Service)
PaaS provides sets of services and workflows that specifically target developers, who can use shared tools, processes, and APIs to accelerate the development, testing, and deployment of applications. Salesforce’s Heroku and Force.com are popular public cloud PaaS offerings; Pivotal’s Cloud Foundry and Red Hat’s OpenShift can be deployed on-premises or accessed through the major public clouds. For enterprises, PaaS can ensure that developers have ready access to resources, follow certain processes, and use only a specific array of services, while operators maintain the underlying infrastructure.
FaaS (Functions as a Service)
FaaS, the cloud version of serverless computing, adds another layer of abstraction to PaaS so that developers are completely insulated from everything in the stack below their code. Instead of futzing with virtual servers, containers, and application runtimes, they upload narrowly functional blocks of code and set them to be triggered by a certain event (such as a form submission or uploaded file). All the major clouds offer FaaS on top of IaaS: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM OpenWhisk. A special benefit of FaaS applications is that they consume no IaaS resources until an event occurs, reducing pay-per-use fees.
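As an illustration of the FaaS model, a Lambda-style Python handler might look like the following minimal sketch. The event shape and greeting logic are invented for the example; real triggers (form submissions, file uploads) deliver provider-specific event structures.

```python
# Minimal AWS-Lambda-style function handler (Python runtime).
# The developer writes only this function; servers, containers and
# runtimes underneath are the provider's problem.

import json

def handler(event, context=None):
    """Triggered by an event, e.g. a hypothetical form submission."""
    name = event.get("name", "anonymous")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can invoke it directly, just as the FaaS runtime would:
print(handler({"name": "Ada"}))
```

Until an event fires, nothing runs and nothing is billed, which is the pay-per-use benefit described above.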
Cloud Deployment Models
In a public cloud, individual businesses share access to basic computing infrastructure (servers, storage, networks, development platforms, etc.) provided by a CSP. Each company shares the CSP’s infrastructure with the other companies that have subscribed to the cloud. Payment is usually pay-as-you-go with no minimum time requirements. Some CSPs derive revenue from advertising and offer free public clouds.
Public clouds are usually based on massive hardware installations distributed in locations throughout the country or across the globe. Their size enables economies of scale that permit maximum scalability to meet requirements as a company’s needs expand or contract, maximum flexibility to meet surges in demand in real-time, and maximum reliability in case of hardware failures.
Public clouds are highly cost-effective because the business only pays for the computer resources it uses. In addition, the business has access to state-of-the-art computer infrastructure without having to purchase it and hire IT staff to install and maintain it.
The main disadvantage of public clouds is that advanced security and privacy provisions are beyond their capabilities. For example, public clouds cannot meet many regulatory compliance requirements because their tenants share the same computer infrastructure. In addition, large CSPs often implement their public clouds on hardware installations located outside the United States, which may be a concern for some businesses.
Public clouds are well suited for hosting development platforms or web servers, for big data processing that places heavy demands on computer resources, and for companies that do not have advanced security concerns.
In a private cloud, a business has access to infrastructure in the cloud that is not shared with anyone else. The business typically deploys its own platforms and software applications on the cloud infrastructure. The business’s infrastructure usually lies behind a firewall that is accessed through the company intranet over encrypted connections. Payment is often based on a fee-per-unit-time model.
Private clouds have the significant advantage of being able to provide enhanced levels of security and privacy because the computer infrastructure is dedicated to a single client. Sarbanes-Oxley, PCI and HIPAA compliance are all possible in a private cloud. In addition, private cloud CSPs are more likely to customize the cloud to meet a company’s needs.
An important disadvantage of private clouds for some companies is that the company is responsible for managing their own development platforms and software applications on the CSP’s infrastructure. While this gives the business substantial control on the software side, it comes at the cost of having to employ IT staff that can handle the company’s cloud deployment. Recognizing this disadvantage, some CSPs provide software applications and a virtual desktop within a company’s private cloud.
Private clouds have the additional disadvantages that they tend to be more expensive and that the company is limited to using the infrastructure specified in its contract with the CSP.
In a hybrid cloud, a company’s cloud deployment is split between public and private cloud infrastructure. Sensitive data remains within the private cloud where high-security standards can be maintained. Operations that do not make use of sensitive data are carried out in the public cloud where infrastructure can scale to meet demands and costs are reduced.
Hybrid clouds are well suited to carrying out big data operations on non-sensitive data in the public cloud while keeping sensitive data protected in the private cloud. Hybrid clouds also give companies the option of running their public-facing applications or their capacity intensive development platforms in the public portion of the cloud while their sensitive data remains protected.
Community clouds are a recent variation on the private cloud model that provide a complete cloud solution for specific business communities. Businesses share infrastructure provided by the CSP for software and development tools that are designed to meet community needs. In addition, each business has its own private cloud space that is built to meet the security, privacy and compliance needs that are common in the community.
Community clouds are an attractive option for companies in the health, financial or legal spheres that are subject to strict regulatory compliance. They are also well-suited to managing joint projects that benefit from sharing community-specific software applications or development platforms.
The recent development of community clouds illustrates how cloud computing is evolving. CSPs can combine different types of clouds with different service models to provide businesses with attractive cloud solutions that meet a company’s needs.
NIST Cloud Computing Reference Architecture
The NIST cloud computing reference architecture defines five major actors: cloud consumer, cloud provider, cloud carrier, cloud auditor and cloud broker. Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing. This reference architecture is based on recommendations of the National Institute of Standards and Technology.
Cloud Consumer: A person or organization that maintains a business relationship with, and uses services from, Cloud Providers.
Cloud Provider: A person, organization or entity responsible for making a service available to interested parties.
Cloud Auditor: A party that can conduct an independent assessment of cloud services, information system operations, performance and security of the cloud implementation.
Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers.
Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from Cloud Providers to Cloud Consumers.
There are multiple benefits offered by cloud computing, some of them are:
Cost Savings: Cost saving is the biggest benefit of cloud computing. It helps organisations to save substantial capital cost as it does not need any physical hardware investments. Also, they do not need trained personnel to maintain the hardware. The buying and managing of equipment are done by the cloud service provider.
Security: A cloud host’s full-time job is to carefully monitor security, which is significantly more efficient than a conventional in-house system, where an organization must divide its efforts between a myriad of IT concerns, with security being only one of them.
Flexibility: By relying on an outside organization to take care of all IT hosting and infrastructure, organisations will have more time to devote toward the aspects of their business that directly affect their bottom line.
Mobility: Cloud computing allows mobile access to corporate data via smartphones and devices, which, considering over 2.6 billion smartphones are being used globally today, is a great way to ensure that no one is ever left out of the loop.
Quality Control: In a cloud-based system, all documents are stored in one place and in a single format. With everyone accessing the same information, you can maintain consistency in data, avoid human error, and have a clear record of any revisions or updates.
Disaster Recovery: Cloud-based services provide quick data recovery for all kinds of emergency scenarios, from natural disasters to power outages.
Loss Prevention: With a cloud-based server, all the information employees have uploaded to the cloud remains safe and easily accessible from any computer with an internet connection, even if the computer users regularly use is not working.
Automatic Software Updates: Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual organization-wide update.
Virtualisation creates a simulated or virtual, computing environment as opposed to a physical environment. Virtualisation often includes computer-generated versions of hardware, operating systems, storage devices and more. This allows organisations to partition a single physical computer or server into several virtual machines. Each virtual machine can then interact independently and run different operating systems or applications while sharing the resources of a single host machine.
By creating multiple resources from a single computer or server, virtualisation improves scalability and workloads while resulting in the use of fewer overall servers, less energy consumption and fewer infrastructure costs and maintenance. There are four main categories that virtualisation falls into. The first is desktop virtualisation, which allows one centralised server to deliver and manage individualised desktops. The second is network virtualisation, designed to split network bandwidth into independent channels to then be assigned to specific servers or devices. The third category is software virtualisation, which separates applications from the hardware and operating system. And the fourth is storage virtualisation, which combines multiple network storage resources into a single storage device which multiple users can access.
Key Properties of Virtual Machines
VMs have the following characteristics, which offer several benefits.
Partitioning: Allows multiple operating systems to run on one physical machine, dividing system resources between virtual machines.
Isolation: Provides fault and security isolation at the hardware level and preserves performance with advanced resource controls.
Encapsulation: Saves the entire state of a virtual machine to files, so virtual machines can be moved and copied as easily as files.
Hardware Independence: Allows provisioning or migrating any virtual machine to any physical server.
Types of Virtualization
Server Virtualization: Server virtualization enables multiple operating systems to run on a single physical server as highly efficient virtual machines.
Network Virtualization: By completely reproducing a physical network, network virtualization allows applications to run on a virtual network as if they were running on a physical network — but with greater operational benefits and all the hardware independencies of virtualization.
Desktop Virtualization: Deploying desktops as a managed service enables IT organizations to respond faster to changing workplace needs and emerging opportunities.
Virtualization vs. Cloud Computing
Although equally buzz-worthy technologies, virtualization and cloud computing are not interchangeable. Virtualization is software that makes computing environments independent of physical infrastructure, while cloud computing is a service that delivers shared computing resources (software and/or data) on-demand via the Internet. As complementary solutions, organizations can begin by virtualizing their servers and then moving to cloud computing for even greater agility and self-service.
Cloud Computing Threats
Weak identity, credential and access management
Insecure interfaces and APIs
System and application vulnerability
Advanced persistent threats
Insufficient due diligence
Abuse and nefarious use of cloud services
Denial of service
Shared technology issues
Cloud Computing Attacks
Cloud malware injection attacks
Malware injection attacks are done to take control of a user’s information in the cloud. For this purpose, hackers add an infected service implementation module to a SaaS or PaaS solution or a virtual machine instance to an IaaS solution. If the cloud system is successfully deceived, it will redirect the cloud user’s requests to the hacker’s module or instance, initiating the execution of malicious code. Then the attacker can begin their malicious activity such as manipulating or stealing data or eavesdropping.
Abuse of cloud services
Hackers can use cheap cloud services to arrange DoS and brute force attacks on target users, companies, and even other cloud providers. For instance, security experts Bryan and Anderson arranged a DoS attack by exploiting capacities of Amazon’s EC2 cloud infrastructure in 2010. As a result, they managed to make their client unavailable on the internet by spending only $6 to rent virtual servers.
An example of a brute force attack was demonstrated by Thomas Roth at the 2011 Black Hat Technical Security Conference. By renting servers from cloud providers, hackers can use powerful cloud capacities to send thousands of possible passwords to a target user’s account.
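Some rough back-of-the-envelope arithmetic shows why rented cloud capacity changes the economics of brute force. All rates below are illustrative assumptions, not benchmarks:

```python
# Illustrative brute-force arithmetic: keyspace size vs. guessing rate.
# The guess rates are made-up round numbers, only to show the scaling.

keyspace = 62 ** 8          # 8-char password over [a-zA-Z0-9]
laptop_rate = 1e6           # guesses/second on one machine (assumption)
cloud_rate = 1e6 * 1000     # 1,000 rented instances (assumption)

def years_to_exhaust(rate: float) -> float:
    """Worst-case time to try the whole keyspace, in years."""
    return keyspace / rate / (3600 * 24 * 365)

print(f"single machine: {years_to_exhaust(laptop_rate):.2f} years")
print(f"1,000 cloud instances: {years_to_exhaust(cloud_rate):.4f} years")
```

Renting more instances divides the time linearly, which is exactly what makes pay-per-hour cloud capacity attractive to attackers.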
Denial of service attacks
DoS attacks are designed to overload a system and make services unavailable to its users. These attacks are especially dangerous for cloud computing systems, as many users may suffer as the result of flooding even a single cloud server. In the case of high workload, cloud systems begin to provide more computational power by involving more virtual machines and service instances. While trying to prevent a cyberattack, the cloud system actually makes it more devastating. Finally, the cloud system slows down and legitimate users lose access to their cloud services. In the cloud environment, DDoS attacks may be even more dangerous if hackers use more zombie machines to attack a large number of systems.
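The auto-scaling amplification described above can be sketched with a toy cost model (the capacity and price figures are invented assumptions):

```python
# Toy sketch of auto-scaling amplification: a request flood makes the
# victim pay for more instances instead of simply going down.
# All numbers are illustrative assumptions.

CAPACITY_PER_INSTANCE = 1000   # requests/sec one instance can serve
COST_PER_INSTANCE_HOUR = 0.10  # assumed pay-per-use price

def instances_needed(request_rate: int) -> int:
    """Ceiling division: instances the autoscaler spins up for a rate."""
    return -(-request_rate // CAPACITY_PER_INSTANCE)

for rate in (1_000, 50_000, 500_000):   # legitimate vs. flood traffic
    n = instances_needed(rate)
    print(f"{rate:>7} req/s -> {n:>3} instances, "
          f"${n * COST_PER_INSTANCE_HOUR:.2f}/hour")
```

Even when the service stays up, the bill scales with the attacker's traffic, a so-called economic denial of sustainability.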
Side-channel attacks
A side-channel attack is arranged by hackers when they place a malicious virtual machine on the same host as the target virtual machine. During a side-channel attack, hackers target system implementations of cryptographic algorithms. However, this type of threat can be avoided with secure system design.
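A classic software example of a side channel is a byte-by-byte secret comparison whose running time leaks how many leading bytes are correct. The sketch below contrasts it with a constant-time comparison from the standard library:

```python
# Timing side channel demo: a naive comparison returns early on the
# first mismatching byte, so its running time depends on how much of
# the guess is correct. hmac.compare_digest avoids this leak.

import hmac
import timeit

SECRET = b"s3cr3t-token-value"

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early return: time leaks the matching prefix
            return False
    return True

wrong_early = b"X" + SECRET[1:]   # mismatch at the first byte
wrong_late = SECRET[:-1] + b"X"   # mismatch only at the last byte

t_early = timeit.timeit(lambda: naive_equal(SECRET, wrong_early), number=200_000)
t_late = timeit.timeit(lambda: naive_equal(SECRET, wrong_late), number=200_000)
print(f"early mismatch: {t_early:.3f}s, late mismatch: {t_late:.3f}s")

# Secure-by-design alternative: runtime independent of where bytes differ.
assert not hmac.compare_digest(SECRET, wrong_late)
```

This is the "secure system design" point from the paragraph above: the fix is not hiding the channel but removing the data-dependent behaviour.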
Wrapping attacks
A wrapping attack is an example of a man-in-the-middle attack in the cloud environment. Cloud computing is vulnerable to wrapping attacks because cloud users typically connect to services via a web browser. An XML signature is used to protect users’ credentials from unauthorized access, but this signature does not secure the positions in the document. Thus, XML signature element wrapping allows attackers to manipulate an XML document.
For example, a vulnerability was found in the SOAP interface of Amazon Elastic Cloud Computing (EC2) in 2009. This weakness allowed attackers to modify an eavesdropped message as a result of a successful signature wrapping attack.
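The wrapping idea can be illustrated with a deliberately simplified sketch. This is not real XML Signature (which involves canonicalisation and asymmetric signatures); it only mimics the key property that the signature references the signed element by Id rather than by position:

```python
# Simplified XML signature wrapping demo. The "signature" records an Id
# and a hash of the signed element's text; verification looks the element
# up by Id, while the vulnerable application acts on the FIRST Body.

import hashlib
import xml.etree.ElementTree as ET

def sign(body_text: str) -> str:
    digest = hashlib.sha256(body_text.encode()).hexdigest()
    return (f"<Envelope><Signature ref='body1' digest='{digest}'/>"
            f"<Body Id='body1'>{body_text}</Body></Envelope>")

def verify(xml_doc: str) -> bool:
    root = ET.fromstring(xml_doc)
    sig = root.find("Signature")
    signed = root.find(".//Body[@Id='%s']" % sig.get("ref"))  # lookup by Id
    return hashlib.sha256(signed.text.encode()).hexdigest() == sig.get("digest")

def process(xml_doc: str) -> str:
    # The vulnerable application naively acts on the first Body element.
    return ET.fromstring(xml_doc).find("Body").text

message = sign("transfer $10 to alice")

# Wrapping: move the signed Body into a wrapper and prepend a forged Body.
forged = message.replace(
    "<Body Id='body1'>",
    "<Body>transfer $9999 to mallory</Body><Wrapper><Body Id='body1'>"
).replace("</Body></Envelope>", "</Body></Wrapper></Envelope>")

assert verify(forged)      # the signature check still passes...
print(process(forged))     # ...but the attacker's element is executed
```

The signature still verifies because the signed element is untouched; only its position changed, which is exactly the gap the EC2 SOAP vulnerability exploited.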
Man-in-the-cloud attacks
During this type of attack, hackers intercept and reconfigure cloud services by exploiting vulnerabilities in the synchronization token system so that during the next synchronization with the cloud, the synchronization token will be replaced with a new one that provides access to the attackers. Users may never know that their accounts have been hacked, as an attacker can put back the original synchronization tokens at any time. Moreover, there’s a risk that compromised accounts will never be recovered.
Insider attacks
An insider attack is initiated by a legitimate user who is purposefully violating the security policy. In a cloud environment, an attacker can be a cloud provider administrator or an employee of a client company with extensive privileges. To prevent malicious activity of this type, cloud developers should design secure architectures with different levels of access to cloud services.
Account or service hijacking
Account or service hijacking is achieved after gaining access to a user’s credentials. There are various techniques for achieving this, from phishing to spyware to cookie poisoning. Once a cloud account has been hacked, attackers can obtain a user’s personal information or corporate data and compromise cloud computing services. For instance, an employee of Salesforce, a SaaS vendor, became the victim of a phishing scam which led to the exposure of all of the company’s client accounts in 2007.
Advanced persistent threats (APTs)
APTs are attacks that let hackers continuously steal sensitive data stored in the cloud or exploit cloud services without being noticed by legitimate users. The duration of these attacks allows hackers to adapt to security measures against them. Once unauthorized access is established, hackers can move through data centre networks and use network traffic for their malicious activity.
New attacks: Spectre and Meltdown
Cloud computing security refers to the security implementations, deployments and preventive measures used to defend against security threats.
Cloud Security Control Layers
Application layer: All the controls that can be added at application level including those that can be deployed together with an application like web applications firewalls and those included in the system development life cycle like code analysis, online secure transactions, script analysis, etc.
Information layer: At this layer, mechanisms to provide confidentiality and integrity are implemented together with different policies to monitor any data loss and content management. Prevention of data leakages and enforcement of compliance with rules and regulations.
Management layer: Governance, risk management, compliance, identity and access management, and patch and configuration management help to control secure access to the organisation’s resources and to manage them.
Network layer: Anything that can be applied to the network level like IDS/IPs, firewalls and other tools already discussed in previous chapters to secure networks.
Trusted computing: The Root of Trust (RoT) is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility.
Compute and storage: Integrity checks, file system monitoring, log file analysis, connection analysis, encryption, etc. are solutions normally deployed for the protection of resources.
Physical security: Prevention of and protection against physical damage, theft, unauthorised physical access and environmental disasters are things to consider when securing resources.
Responsibilities in Cloud Security
Cloud Service Provider
Responsibilities of a cloud service provider include:
Web applications firewalls (WAF)
Real traffic grabber (RTG)
Intrusion prevention systems
Secure web gateway (SWG)
Application security (App Sec)
Virtual private networks (VPN)
Trusted platform module
Netflow and others
Cloud Service Consumer
Responsibilities of a cloud service consumer include:
The index of this series of articles can be found here.
IoT is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other). This includes everything from cellphones, coffee makers, washing machines, headphones, lamps, wearable devices and almost anything else you can think of. This also applies to components of machines, for example, a jet engine of an aeroplane or the drill of an oil rig. As I mentioned, if it has an on and off switch then chances are it can be a part of the IoT.
On a broader scale, the IoT can be applied to things like transportation networks: “smart cities” can help us reduce waste and improve efficiency for things such as energy use, helping us understand and improve how we work and live. Take a look at the visual below to see what something like that can look like.
The architecture of IoT consists of five layers:
Application layer: Layer responsible for delivering the data to the users at the application layer. This is the user interface to control, manage and command IoT devices.
Middleware layer: It is for device and information management.
Internet layer: It is responsible for end-points connectivity.
Access gateway layer: It is responsible for protocol transmission and messaging.
Edge technology layer: It covers IoT capable devices.
IoT Communication Models
There are several ways in which IoT devices can communicate. The following are some of these models:
Device-to-Device Model: It is a basic model where two devices talk to each other without involving any other device. Communication is established using some kind of wireless connection. Wi-Fi, Bluetooth, NFC or RFID can be examples of this model.
Device-to-Cloud Model: In this model, IoT devices communicate through an application server. For example, in manufacturing environments, a usually big number of sensors send information to a server. Application servers process the data and perform automated actions based on that analysis.
Device-to-Gateway Model: Similar to the Device-to-Cloud model, but an IoT gateway is added. The function of this gateway is to collect the data from the devices and send it to a remote application server. In addition, it offers a consolidated point to check that the data is flowing. Plus, it can provide security and protocol translation functionalities.
Back-End Data-sharing Model: This model extends the Device-to-Cloud model in a scalable scenario where multiple parties can access and control IoT devices and sensors. In this model, IoT devices communicate with an application server too.
Understanding IoT Attacks
In addition to traditional attacks, other major challenges can be found in IoT environments:
Weak, Guessable, or Hardcoded Password: Use of easily brute-forced, publicly available, or unchangeable credentials, including backdoors in firmware or client software that grants unauthorized access to deployed systems.
Insecure Network Services: Unneeded or insecure network services running on the device itself, especially those exposed to the internet, that compromise the confidentiality, integrity/authenticity, or availability of information or allow unauthorized remote control.
Insecure Ecosystem Interfaces: Insecure web, backend API, cloud, or mobile interfaces in the ecosystem outside of the device that allow compromise of the device or its related components. Common issues include a lack of authentication/authorization, lacking or weak encryption, and a lack of input and output filtering.
Lack of Secure Update Mechanism: Lack of ability to securely update the device. This includes lack of firmware validation on devices, lack of secure delivery (un-encrypted in transit), lack of anti-rollback mechanisms, and lack of notifications of security changes due to updates.
Use of Insecure or Outdated Components: Use of deprecated or insecure software components/libraries that could allow the device to be compromised. This includes insecure customization of operating system platforms and the use of third-party software or hardware components from a compromised supply chain.
Insufficient Privacy Protection: User’s personal information stored on the device or in the ecosystem that is used insecurely, improperly, or without permission.
Insecure Data Transfer and Storage: Lack of encryption or access control of sensitive data anywhere within the ecosystem, including at rest, in transit, or during processing.
Lack of Device Management: Lack of security support on devices deployed in production, including asset management, update management, secure decommissioning, systems monitoring, and response capabilities.
Insecure Default Settings: Devices or systems shipped with insecure default settings, or lacking the ability to make the system more secure by restricting operators from modifying configurations.
Lack of Physical Hardening: Lack of physical hardening measures, allowing potential attackers to gain sensitive information that can help in a future remote attack or take local control of the device.
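As a small, hypothetical illustration of the first item (weak or default credentials), an audit script might check a device inventory against a list of known factory defaults. The device data and default list below are invented examples:

```python
# Hypothetical audit: flag IoT devices still using factory-default
# credentials. Both the default list and the inventory are made up.

KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "1234"), ("root", "root"),
                  ("user", "user"), ("admin", "password")}

devices = [
    {"host": "10.0.0.12", "user": "admin", "password": "admin"},
    {"host": "10.0.0.47", "user": "camera", "password": "Xk9!vQ2#pL"},
    {"host": "10.0.0.81", "user": "root", "password": "root"},
]

def flag_default_credentials(inventory):
    """Return the hosts whose credentials match a known default pair."""
    return [d["host"] for d in inventory
            if (d["user"], d["password"]) in KNOWN_DEFAULTS]

print(flag_default_credentials(devices))  # ['10.0.0.12', '10.0.0.81']
```

Attackers run essentially the same check from the outside, which is why unchangeable or guessable credentials top the list.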
IoT Attack Areas
The following are the most common attack areas for IoT networks:
Device memory containing credentials
Resetting to an insecure state
Removal of storage media
Network services attacks
Unencrypted local data storage
Confidentiality and integrity issues
Cloud computing attacks
Mobile application threats
DDoS attacks: Using this technique, all the services associated with an IoT network can be targeted: devices, gateways and application servers.
Rolling code attacks: Rolling code, or code hopping, is a technique where attackers capture the code, sequence or signal coming from a transmitter device while simultaneously jamming the receiver. The captured code is used later to gain unauthorised access. For example, the unlock signal of a car can be recorded and replayed later.
BlueBorne attacks: It is the use of different techniques to exploit Bluetooth vulnerabilities to gain unauthorised access.
Jamming attack: Jamming a signal to prevent devices from communicating.
Backdoor: Deploying a backdoor on the computer of an employee or victim to gain access to the IoT network. Attacks do not always need to target the IoT devices themselves.
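To see why rolling codes defeat straightforward replay (and why the capture-and-jam trick above still works exactly once), here is a toy scheme; real systems such as KeeLoq differ in the details:

```python
# Toy rolling-code scheme: transmitter and receiver share a seed; each
# button press emits the next code in a hash chain, and the receiver
# accepts codes only within a small look-ahead window. Illustrative only.

import hashlib

def code(seed: bytes, counter: int) -> str:
    return hashlib.sha256(seed + counter.to_bytes(4, "big")).hexdigest()[:8]

class Receiver:
    def __init__(self, seed: bytes, window: int = 16):
        self.seed, self.counter, self.window = seed, 0, window

    def accept(self, c: str) -> bool:
        for i in range(self.counter, self.counter + self.window):
            if code(self.seed, i) == c:
                self.counter = i + 1   # resynchronise past the used code
                return True
        return False

seed = b"shared-secret"
rx = Receiver(seed)

captured = code(seed, 0)          # attacker jams and records press #0
assert rx.accept(captured)        # replaying it first still works once...
assert not rx.accept(captured)    # ...but never again: the window moved on
assert rx.accept(code(seed, 1))   # the legitimate next press still works
```

The attack in the list exploits exactly this one-shot property: by jamming the receiver, the captured code is still "fresh" when the attacker replays it later.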
Other general attacks are:
Forged malicious devices
IoT Hacking Methodology
The methodology applied to IoT platforms is the same as the one applied to other platforms.
Information gathering: IP addresses, running protocols, open ports, type of devices, vendor’s information, etc. Shodan, Censys and Thingful are search engines to find information about IoT devices. Shodan is a great tool for discovering and gathering information from IoT devices deployed around the world.
Vulnerability scanning: Scanning networks and devices looking for vulnerabilities, weak passwords, software and firmware bugs, default configurations, etc. Nmap and others are very helpful tools.
Launch attack: Exploiting the vulnerabilities using different attacks like DDoS, Rolling code, jamming, etc. RFCrack, Attify Zigbee and HackRF One are popular tools for hacking.
Gain access: Taking control over an IoT environment. Gaining access, escalating privileges, and backdoor installation are included in this phase among others.
Maintain attack: Includes logging out without being detected, clearing logs and covering tracks.
The index of this series of articles can be found here.
Mobile phones are nowadays everywhere. They are used for entertainment, work, personal finances and services, almost anything we can imagine. In addition, there is a big variety of systems in the market running on these mobile devices, such as iOS, Blackberry OS, Android, Symbian, Windows, etc.
For all these reasons, these mobile devices must have strong security, not just a feeling of being secure, to protect their users and all the private information they store. Plus, with the Bring Your Own Device philosophy, devices can cause multiple problems in corporate environments and networks.
Mobile Platform Attack Vectors
The OWASP project publishes an unbiased and practical list of the top 10 most common attacks on mobile platforms:
There are several threats and attacks on mobile devices. Some of the most basic examples are malware, data loss, integrity attacks, social engineering attacks, etc. Mobile attack vectors include:
Vulnerabilities and Risks on Mobile Platforms
Some of the risks for mobile platforms are:
Malicious third-party applications
Malicious applications on Store
Malware and rootkits
Operating system update issues
Application updates issues
Jailbreak and rooting
Application Sandbox Issue
Application sandboxing, also called application containerization, is an approach to software development and mobile application management (MAM) that limits the environments in which certain code can execute.
The goal of sandboxing is to improve security by isolating an application to prevent outside malware, intruders, system resources or other applications from interacting with the protected app. The term sandboxing comes from the idea of a child’s sandbox, in which the sand and toys are kept inside a small container or walled area.
Application sandboxing is controversial because its complexity can cause more security problems than the sandbox was originally designed to prevent. The sandbox has to contain all the files the application needs to execute, which can also create problems between applications that need to interact with one another. Still, it is one of the best security methods to be used when developing for mobile devices.
However, advanced malicious applications can be designed to bypass sandbox technology. Fragmented code and sleep timers are common techniques adopted by attackers to bypass the inspection process.
Mobile Spam and Phishing
Mobile devices and technologies are just another path attackers can use to send emails or messages, spamming users or trying to convince them to click on malicious links in an attempt to harvest credentials or information.
Open Wi-Fi and Bluetooth Networks
Public or unencrypted Wi-Fi or Bluetooth networks are another easy way for attackers to intercept communications and reveal information.
Hacking Android OS
Android is an operating system developed by Google for smartphones, but it is not only present in smartphones; it can also be found in other devices like gaming consoles, PCs and IoT devices. Android OS brings flexible features with an open-source platform.
Android OS has very wide support for, and integration with, different hardware and services, which is one of its major features, and it receives periodic updates.
One of its most successful features is also one of the major security flaws of Android devices: the flexibility to install third-party apps, not just from trusted stores but also as application packages (APKs) from other sources on the Internet.
Device Administration API
In version 2.2 of the Android OS, the Device Administration API was introduced to enable administration of the device at the system level, offering control over Android devices within a corporate network. Using this security-aware API, administrators can perform several actions, including wiping the device remotely or managing installed applications.
Root Access / Android Rooting
Rooting is basically the process of gaining privileged control over a device, commonly known as root access. As in any other Linux kernel-based system, root access grants superuser permissions. These permissions allow modification of system settings and configurations, overcoming limitations and restrictions. The rooting process can also be used for malicious purposes such as installing malicious applications, analysing custom firmware or granting unnecessary permissions to applications.
Android Phone Security Tools
There are multiple Android security tools that can be found in the stores but, before installing them, users need to be sure of their authenticity and that the companies or developers behind them are legitimate.
iOS is the operating system developed by Apple for the iPhone and, nowadays, it can be found in other devices of the company like the iPad and iPod. Together with Android, they are the two most popular operating systems for mobile devices.
Major versions of the operating system tend to be released yearly. Two of the major security improvements iOS brings to the table are hardware-accelerated encryption and application isolation, where one application cannot access another application’s data.
A jailbreak is a form of rooting resulting in privilege escalation. Jailbreaking is usually done to remove or bypass the factory default restrictions by using kernel patches or device customisation. Jailbreaking allows root access to the device, which allows users to install unofficial applications.
Types of Jailbreak
Userland exploits: This jailbreak allows user-level access without escalating to boot-level access.
iBoot exploits: This jailbreak allows user-level and boot-level access.
Bootrom exploits: This jailbreak allows user-level and boot-level access.
Tethered Jailbreak: A tethered jailbreak is one that temporarily pwns a handset for a single boot. After the device is turned off (or the battery dies), it cannot complete a boot cycle without the help of a computer-based jailbreak application and a physical cable connection between the device and the computer in question.
Semi-tethered Jailbreak: A semi-tethered jailbreak is one that permits a handset to complete a boot cycle after being pwned, but jailbreak extensions will not load until a computer-based jailbreak application is deployed over a physical cable connection between the device and the computer in question.
Semi-untethered Jailbreak: A semi-untethered jailbreak is one that permits a handset to complete a boot cycle after being pwned, but jailbreak extensions will not load until a side-loaded jailbreak app on the device itself is deployed.
Untethered Jailbreak: An untethered jailbreak is one that permits a handset to complete a boot cycle after being pwned without any interruption to jailbreak-oriented functionality.
There are multiple jailbreak tools available.
Hacking Windows Phone OS
Windows Phone OS is another mobile operating system, developed by Microsoft. Windows Phone 8 is the second generation of the Windows Phone mobile operating system from Microsoft. It was released on October 29, 2012, and, like its predecessor, it features a flat user interface based on the Metro design language. It was succeeded by Windows Phone 8.1, which was unveiled on April 2, 2014.
Windows Phone 8 replaces the Windows CE-based architecture used in Windows Phone 7 with the Windows NT kernel found in Windows 8. Current Windows Phone 7 devices cannot run or update to Windows Phone 8, and new applications compiled specifically for Windows Phone 8 are not made available for Windows Phone 7 devices. Developers can make their apps available on both Windows Phone 7 and Windows Phone 8 devices by targeting both platforms via the proper SDKs in Visual Studio.
Windows Phone 8 devices are manufactured by Microsoft Mobile (formerly Nokia), HTC, Samsung and Huawei.
Some features supported are:
Native code support (C++)
Remote device management
VoIP and video chat integration
UEFI and firmware over-the-air for Windows Phone updates
BlackBerry OS is a proprietary mobile operating system designed specifically for Research In Motion’s (RIM) BlackBerry devices. The BlackBerry OS runs on Blackberry variant phones.
The BlackBerry OS is designed for smartphone environments, is best known for its robust support for push Internet email, and BlackBerry handsets were considered among the most prominent and secure mobile phones.
BlackBerry Attack Vectors
Malicious code signing: This is the process whereby an attacker, after obtaining a code-signing key from the code-signing service, signs a malicious application and uploads it to the BlackBerry App Store to be distributed to users.
JAD file exploits: JAD stands for Java Application Descriptor file. Files with the .jad extension are descriptor files that are commonly used to describe the contents of a MIDlet created for the Java ME virtual machine. Attackers can trick users into installing malicious .jad files pointing to malicious download links to obtain an application or, even, they can be crafted to run DoS attacks.
Mobile Device Management (MDM)
Mobile device management (MDM) is the process of managing everything about a mobile device. MDM includes storing essential information about mobile devices, deciding which apps can be present on the devices, locating devices, and securing devices if lost or stolen. Many businesses use third-party mobile device management software such as Mobile Device Manager Plus to manage mobile devices. Mobile Device Management has expanded its horizons to evolve into Enterprise Mobility Management (EMM).
Mobile devices now have more capabilities than ever before, which has ultimately led to many enterprises adopting a mobile-only or mobile-first workforce. In these types of environments, both personal (BYOD) and corporate-owned mobile devices are the primary devices used for accessing or interacting with corporate data.
Mobile Device Management (MDM) is important for enterprises focusing on improving productivity and security. MDM solutions allow administrators to:
Manage multiple device types
They also provide functions such as:
Enforcing a device lock after a certain number of login failures.
Enforcement of strong password policies for all BYOD devices.
Detection of hacking attempts on BYOD devices, limiting network access for the affected devices.
Enforcing confidentiality by using encryption as per the organisation's policy.
Administration and implementation of Data Loss Prevention (DLP) for BYOD devices.
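The policy checks listed above can be sketched in code. The following is a minimal, hypothetical illustration of MDM-style compliance checks; the `Device` class, thresholds and messages are illustrative and do not correspond to any real MDM product's API.

```python
# Hypothetical sketch of MDM-style policy checks (illustrative only).
from dataclasses import dataclass

@dataclass
class Device:
    failed_logins: int
    passcode_length: int
    encrypted: bool
    rooted: bool

# Example thresholds an organisation's policy might set.
MAX_FAILED_LOGINS = 5
MIN_PASSCODE_LENGTH = 8

def compliance_issues(device: Device) -> list[str]:
    """Return the list of policy violations for a BYOD device."""
    issues = []
    if device.failed_logins >= MAX_FAILED_LOGINS:
        issues.append("lock device: too many failed logins")
    if device.passcode_length < MIN_PASSCODE_LENGTH:
        issues.append("weak passcode: enforce stronger policy")
    if not device.encrypted:
        issues.append("encryption disabled: enforce encryption")
    if device.rooted:
        issues.append("rooted/jailbroken: restrict network access")
    return issues

print(compliance_issues(Device(failed_logins=6, passcode_length=4,
                               encrypted=False, rooted=False)))
```

A real MDM agent would collect these attributes from the device itself and push the remediation actions (lock, wipe, quarantine) back through its management channel.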
MDM Deployment Methods
Generally, there are two types of deployments:
On-site MDM Deployment
It involves the installation of MDM applications on local servers inside the corporate data centres or offices, and it is managed by local staff available on-premises.
The major advantage is the granular control over the management of BYOD devices, which, to some extent, improves security.
The on-site MDM deployment has the following components or areas:
Data centre: All the necessary services and servers to manage the infrastructure, connectivity, access and security policies.
Internet edge: Its basic purpose is to provide connectivity to the public Internet. It contains firewalls, filters, monitors for ingress and egress traffic, and wireless controllers and access points for guest users.
Services layer: Contains wireless controllers and access points used by users in the corporate environment, and sometimes services like NTP or other support services.
Core layer: Just like every other design, the core is the focal point of the whole network regarding routing of traffic in a corporate network environment.
Campus Building: A distribution layer that acts as ingress/egress point for all traffic in a campus building, where users can connect using switches or wireless access points.
Cloud-based MDM deployment
In this type of deployment, the MDM software is installed and managed by a third-party service; this is one of the main advantages of this type, as maintenance and troubleshooting are the responsibility of the service provider.
The cloud-based MDM deployment has the following components or areas:
Data centre: All the necessary services and servers to manage the infrastructure, connectivity, access and security policies.
Internet edge: Its basic purpose is to provide connectivity to the public Internet. It contains firewalls, filters, monitors for ingress and egress traffic, and wireless controllers and access points for guest users.
WAN: Provides VPN connectivity from branch offices to the corporate office, Internet access from branch offices and connectivity to the cloud-based MDM application software. It maintains policies and configurations for BYOD devices connected to the corporate network.
WAN edge: This component acts as a focal point for all ingress/egress WAN traffic coming from and going to branch offices.
Services layer: Contains wireless controllers and access points used by users in the corporate environment, and sometimes services like NTP or other support services.
Core layer: Just like every other design, the core is the focal point of the whole network regarding routing of traffic in a corporate network environment.
Branch offices: This component comprises a few routers acting as the focal point of ingress and egress traffic out of branch offices. Users can connect using access switches or wireless access points.
Bring Your Own Device (BYOD)
The BYOD concept makes life easier for users but presents some new challenges for network engineers and designers. Network engineers and designers need to find a way to balance the constant mutation of their networks and the offering of seamless wireless connectivity with maintaining good security for organisations.
Some reasons to implement BYOD solutions are:
A wide variety of consumer devices: Smartphones, tablets, laptops and other devices of multiple brands and types belonging to users need, nowadays, to be added to the network; they need to comply with the organisation's policies and, of course, have full connectivity.
No schedules: Strict working hours no longer apply; users can join a network whenever it is convenient for them: early, late, at lunch time, even at weekends.
Delocalisation: No longer working just from office buildings or corporate environments, users can now connect from anywhere and need access to the company's resources.
BYOD Architecture Framework
Some elements that can be found in BYOD environments are:
BYOD devices: All the devices allowed to connect to the corporate network to allow users to perform their job.
Wireless access points: They provide wireless connectivity on-premises and they are installed in the physical network of a company.
Wireless LAN controllers: WLAN controllers provide centralised management and monitoring of the WLAN solution. They are integrated with the identity services engine to enforce the authentication and authorisation of the BYOD devices.
Identity services engine: It implements authentication, authorisation and accounting for endpoint devices.
VPN solutions: They provide connectivity to corporate networks for end-users allowing confidentiality of data.
Integrated services router (ISR): Preferred in BYOD architectures to provide WAN and Internet access to BYOD devices in corporate environments.
Aggregation services router (ASR): It provides WAN and Internet access in corporate environments and acts as aggregation points for connections coming from the branches and home-offices.
Cloud web security (CWS): It provides enhanced web security for all BYOD devices that access the Internet using public 3G/4G networks.
Adaptive security appliance (ASA): It provides standard security solutions at the Internet edge like IDS or IPS and acts as a termination point for the VPN connections.
RSA SecurID: It provides one-time passwords to access network applications for BYOD devices.
Active Directory: It provides central command and control of domain users, computers and network printers. It restricts access to network resources.
Certificate authority: It allows network access to be granted only to BYOD devices that have a valid certificate installed.
Mobile Security Guidelines
Mobile devices have a large number of built-in security features and measures; this, together with tools available in the stores, can provide good security. In addition, some beneficial guidelines to secure mobile phones are as follows:
Avoid auto-upload of files and photos.
Perform security assessments of applications.
Turn off the Bluetooth.
Allow only necessary GPS-enabled applications.
Do not connect to open networks or public networks unless it is necessary.
Install applications from trusted or official stores.
The index of this series of articles can be found here.
A wireless network allows devices to stay connected to the network but roam untethered to any wires. Access points amplify Wi-Fi signals, so a device can be far from a router but still be connected to the network. Previously it was thought that wired networks were faster and more secure than wireless networks. But continual enhancements to wireless network technology such as the Wi-Fi 6 networking standard have eroded speed and security differences between wired and wireless networks.
Usually, wireless communications rely on radio communications. Different frequency ranges are used for different types of wireless technologies depending upon the requirements.
GSM (Global System for Mobile communications) is an open, digital cellular technology used for transmitting mobile voice and data services. GSM supports voice calls and data transfer speeds of up to 9.6 kbps, together with the transmission of SMS (Short Message Service).
GSM operates in the 900MHz and 1.8GHz bands in Europe and the 1.9GHz and 850MHz bands in the US. GSM services are also transmitted via 850MHz spectrum in Australia, Canada and many Latin American countries. The use of harmonised spectrum across most of the globe, combined with GSM’s international roaming capability, allows travellers to access the same mobile services at home and abroad. GSM enables individuals to be reached via the same mobile number in up to 219 countries.
Terrestrial GSM networks now cover more than 90% of the world’s population. GSM satellite roaming has also extended service access to areas where terrestrial coverage is not available.
A wireless access point (WAP), or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network. The AP usually connects to a router (via a wired network) as a standalone device, but it can also be an integral component of the router itself. An AP is differentiated from a hotspot which is a physical location where Wi-Fi access is available.
A Wi-Fi network’s SSID is the technical term for its network name. SSID stands for “Service Set Identifier”. Under the IEEE 802.11 wireless networking standard, a “service set” refers to a collection of wireless networking devices with the same parameters. So, the SSID is the identifier (name) that tells you which service set (or network) to join.
The BSSID is the MAC address of the wireless access point (WAP), generated by combining the 24-bit Organizationally Unique Identifier (the manufacturer’s identity) with the manufacturer-assigned 24-bit identifier for the radio chipset in the WAP.
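The two halves of a BSSID are easy to separate programmatically. This short sketch (the function name and the sample address are illustrative) splits a MAC address into its OUI and device-specific parts:

```python
def split_bssid(bssid: str) -> tuple[str, str]:
    """Split a BSSID (the AP's MAC address) into its 24-bit OUI
    (manufacturer identifier) and 24-bit device-specific part."""
    octets = bssid.lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected a MAC address like aa:bb:cc:dd:ee:ff")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic = split_bssid("00:1A:2B:3C:4D:5E")
print(oui)  # 00:1a:2b -> manufacturer's OUI
print(nic)  # 3c:4d:5e -> identifier assigned by the manufacturer
```

Looking up the OUI in the IEEE registry reveals the hardware vendor of an access point seen in a scan, which is often a first fingerprinting step.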
The Industrial, Scientific and Medical (ISM) band is a part of the radio spectrum that can be used for any purpose without a licence in most countries. The 902-928 MHz, 2.4 GHz and 5.7-5.8 GHz bands were originally intended for machines that emit radio frequencies, such as industrial heaters and microwave ovens, not for radio communications.
Orthogonal Frequency Division Multiplexing (OFDM)
Orthogonal Frequency Division Multiplexing is a digital transmission technique that uses a large number of carriers spaced apart at slightly different frequencies. First promoted in the early 1990s for wireless LANs, OFDM is used in many wireless applications including Wi-Fi, WiMAX, LTE, Ultra Mobile Broadband (UMB), as well as digital radio and TV broadcasting in Europe and Japan. It is also used in land-based ADSL (see OFDMA).
Frequency-hopping Spread Spectrum (FHSS)
Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many distinct frequencies occupying a large spectral band. The changes are controlled by a code known to both transmitter and receiver. FHSS is used to avoid interference, to prevent eavesdropping, and to enable code-division multiple access (CDMA) communications.
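The key idea in FHSS is that transmitter and receiver derive the same hop sequence from a shared code, so only they know which channel comes next. A toy sketch of that idea (the seed, channel count and function names are illustrative, and a real radio uses a specified hopping algorithm, not Python's PRNG):

```python
import random

def hop_sequence(shared_seed: int, channels: int, hops: int) -> list[int]:
    """Derive a pseudo-random channel sequence from a shared seed
    (the 'code' known to both transmitter and receiver)."""
    rng = random.Random(shared_seed)
    return [rng.randrange(channels) for _ in range(hops)]

# Both ends seed their generator identically, so they hop in lockstep.
tx = hop_sequence(shared_seed=1234, channels=79, hops=8)
rx = hop_sequence(shared_seed=1234, channels=79, hops=8)
assert tx == rx  # the receiver follows the transmitter's hops
print(tx)
```

An eavesdropper who does not know the seed sees only brief bursts scattered across the band, which is exactly why FHSS resists casual interception and narrowband interference.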
Types of Networks
Types of wireless networks deployed in a geographical area can be categorised as:
Wireless personal area network (WPAN)
Wireless local area network (WLAN)
Wireless metropolitan area network (WMAN)
Wireless wide area network (WWAN)
However, a wireless network can be defined in different types depending upon the deployment scenarios. The following are some of the wireless network types that are used in different scenarios:
Extension to a wired network
Multiple access points
Wi-Fi is a family of wireless networking technologies, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access. Wi‑Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing.
They transmit at frequencies of 2.4 GHz or 5 GHz. This frequency is considerably higher than the frequencies used for cell phones, walkie-talkies and televisions. The higher frequency allows the signal to carry more data.
They use 802.11 networking standards, which come in several flavours:
802.11a transmits at 5 GHz and can move up to 54 megabits of data per second. It also uses orthogonal frequency-division multiplexing (OFDM), a more efficient coding technique that splits that radio signal into several sub-signals before they reach a receiver. This greatly reduces interference.
802.11b is the slowest and least expensive standard. For a while, its cost made it popular, but now it is becoming less common as faster standards become less expensive. 802.11b transmits in the 2.4 GHz frequency band of the radio spectrum. It can handle up to 11 megabits of data per second, and it uses complementary code keying (CCK) modulation to improve speeds.
802.11g transmits at 2.4 GHz like 802.11b, but it is a lot faster – it can handle up to 54 megabits of data per second. 802.11g is faster because it uses the same OFDM coding as 802.11a.
802.11n is the most widely available of the standards and is backwards compatible with a, b and g. It significantly improved speed and range over its predecessors. For instance, although 802.11g theoretically moves 54 megabits of data per second, it only achieves real-world speeds of about 24 megabits of data per second because of network congestion. 802.11n, however, reportedly can achieve speeds as high as 140 megabits per second. 802.11n can transmit up to four streams of data, each at a maximum of 150 megabits per second, but most routers only allow for two or three streams.
802.11ac is the newest standard as of early 2013. It has yet to be widely adopted and is still in draft form at the Institute of Electrical and Electronics Engineers (IEEE), but devices that support it are already on the market. 802.11ac is backwards compatible with 802.11n (and therefore the others, too), with n on the 2.4 GHz band and ac on the 5 GHz band. It is less prone to interference and far faster than its predecessors, pushing a maximum of 450 megabits per second on a single stream, although real-world speeds may be lower. Like 802.11n, it allows for transmission on multiple spatial streams – up to eight, optionally. It is sometimes called 5G WiFi because of its frequency band, sometimes Gigabit WiFi because of its potential to exceed a gigabit per second on multiple streams and sometimes Very High Throughput (VHT) for the same reason.
Wi-Fi Authentication Modes
There are different authentication methods for WiFi-based networks:
Open Authentication to the Access Point
Open authentication allows any device to authenticate and then attempt to communicate with the access point. Using open authentication, any wireless device can authenticate with the access point, but the device can communicate only if its Wired Equivalent Privacy (WEP) keys match the access point’s WEP keys. Devices that are not using WEP do not attempt to authenticate with an access point that is using WEP. Open authentication does not rely on a RADIUS server on your network.
Shared Key Authentication to the Access Point
During shared key authentication, the access point sends an unencrypted challenge text string to any device that is attempting to communicate with the access point. The device that is requesting authentication encrypts the challenge text and sends it back to the access point. If the challenge text is encrypted correctly, the access point allows the requesting device to authenticate.
Both the unencrypted challenge and the encrypted challenge can be monitored, however, which leaves the access point open to attack from an intruder who calculates the WEP key by comparing the unencrypted and encrypted text strings. Because of this vulnerability to attack, shared key authentication can be less secure than open authentication. Like open authentication, shared key authentication does not rely on a RADIUS server on your network.
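The weakness described above follows directly from how stream ciphers work: XORing the sniffed plaintext challenge with the sniffed encrypted response cancels the plaintext and leaves the keystream. A toy demonstration (random bytes stand in for the real RC4 keystream; names are illustrative):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

challenge = os.urandom(16)            # sent in the clear by the AP
keystream = os.urandom(16)            # stand-in for the RC4 keystream
response = xor(challenge, keystream)  # encrypted reply from the client

# The attacker observes both the challenge and the response on the air.
recovered = xor(challenge, response)
assert recovered == keystream  # keystream recovered without the WEP key
```

With that keystream, the attacker can forge their own valid authentication response for the same IV, which is why shared key authentication is considered weaker than open authentication.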
EAP Authentication to the Network
This authentication type provides the highest level of security for your wireless network. By using the Extensible Authentication Protocol (EAP) to interact with an EAP-compatible RADIUS server, the access point helps a wireless client device and the RADIUS server to perform mutual authentication and derive a dynamic unicast WEP key. The RADIUS server sends the WEP key to the access point, which uses the key for all unicast data signals that the server sends to or receives from the client. The access point also encrypts its broadcast WEP key (which is entered in the access point’s WEP key slot 1) with the client’s unicast key and sends it to the client.
MAC Address Authentication to the Network
The access point relays the wireless client device’s MAC address to a RADIUS server on your network, and the server checks the address against a list of allowed MAC addresses. Because intruders can create counterfeit MAC addresses, MAC-based authentication is less secure than EAP authentication. However, MAC-based authentication provides an alternate authentication method for client devices that do not have EAP capability. See the “Assigning Authentication Types to an SSID” section for instructions on enabling MAC-based authentication.
Combining MAC-Based, EAP, and Open Authentication
You can set up the access point to authenticate client devices that use a combination of MAC-based and EAP authentication. When you enable this feature, client devices that use 802.11 open authentications to associate to the access point first attempt MAC authentication. If MAC authentication succeeds, the client device joins the network. If MAC authentication fails, EAP authentication takes place.
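The fallback logic above is simple to express in code. This is a minimal, hypothetical sketch; the allow-list, the `eap_authenticate` stub and the accepted client name are placeholders for a real MAC table and RADIUS/EAP exchange:

```python
# Hypothetical allow-list of known device MAC addresses.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e"}

def eap_authenticate(client_id: str) -> bool:
    # Stand-in for a real EAP exchange with a RADIUS server.
    return client_id == "alice"

def authenticate(mac: str, client_id: str) -> bool:
    """MAC authentication is attempted first; EAP only on failure."""
    if mac.lower() in ALLOWED_MACS:
        return True
    return eap_authenticate(client_id)

print(authenticate("00:1A:2B:3C:4D:5E", ""))       # joins via MAC auth
print(authenticate("ff:ff:ff:ff:ff:ff", "alice"))  # joins via EAP fallback
```

The combination lets legacy devices without EAP support onto the network by MAC address, while unknown hardware must pass the stronger EAP check.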
Using WPA Key Management
Wi-Fi Protected Access (WPA) is a standards-based, interoperable security enhancement that strongly increases the level of data protection and access control for existing and future wireless LAN systems. It is derived from and will be forward-compatible with the upcoming IEEE 802.11i standard. WPA leverages TKIP (Temporal Key Integrity Protocol) for data protection and 802.1X for authenticated key management.
WPA key management supports two mutually exclusive management types: WPA and WPA-Pre-shared key (WPA-PSK). Using WPA key management, clients and the authentication server authenticate to each other using an EAP authentication method, and the client and server generate a pairwise master key (PMK). Using WPA, the server generates the PMK dynamically and passes it to the access point. Using WPA-PSK, however, you configure a pre-shared key on both the client and the access point, and that pre-shared key is used as the PMK.
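In the WPA-PSK case, the PMK is derived from the human-readable passphrase with PBKDF2-HMAC-SHA1, salted with the SSID, over 4096 iterations (as specified in IEEE 802.11i). The derivation can be reproduced with the standard library; the passphrase and SSID here are example values:

```python
import hashlib

def wpa_psk_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA-PSK pairwise master key:
    PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 iterations)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, dklen=32)

pmk = wpa_psk_pmk("password", "IEEE")  # example inputs
print(pmk.hex())
```

Because the salt is just the (public) SSID, dictionary attacks against captured handshakes are practical for weak passphrases, which is why long, random pre-shared keys are recommended.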
Wi-Fi chalking includes several methods to detect open wireless networks; these are some of them:
WarWalking: Walking around to detect open networks.
WarChalking: Using symbols and signs to advertise open wireless networks.
WarFlying: Detecting open wireless networks using drones.
WarDriving: Driving around to detect open wireless networks.
Types of Wireless Antennas
Directional Antenna: Directional antennas, as the name implies, focus the wireless signal in a specific direction resulting in a limited coverage area. An analogy for the radiation pattern would be how a vehicle headlight illuminates the road. Types of Directional antennas include Yagi, Parabolic grid, patch and panel antennas.
Omni-Directional: Omni-directional antennas provide a 360º doughnut-shaped radiation pattern to provide the widest possible signal coverage in indoor and outdoor wireless applications. An analogy for the radiation pattern would be how an un-shaded incandescent light bulb illuminates a room. Types of Omni-directional antennas include “rubber duck” antennas often found on access points and routers, Omni antennas found outdoors, and antenna arrays used on cellular towers.
Parabolic Antenna: A parabolic antenna is an antenna that uses a parabolic reflector, a curved surface with the cross-sectional shape of a parabola, to direct the radio waves. The most common form is shaped like a dish and is popularly called a dish antenna or parabolic dish.
Yagi Antenna: A Yagi–Uda antenna, commonly known as a Yagi antenna, is a directional antenna consisting of multiple parallel elements in a line, usually half-wave dipoles made of metal rods.
Dipole Antenna: A dipole antenna or doublet is the simplest and most widely used class of antenna. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each end.
Wired Equivalent Privacy (WEP), introduced as part of the original 802.11 standard ratified in 1997, was probably the most used Wi-Fi security protocol out there. It is easily recognisable by its key of 10 or 26 hexadecimal digits (40 or 104 bits). In 2004, both WEP-40 and WEP-104 were declared deprecated. There were 128-bit (most common) and 256-bit WEP variants, but ever-increasing computing power enabled attackers to exploit its numerous security flaws. All in all, this protocol is “dead“.
Breaking this encryption can be performed by following the next steps:
Monitor the access point channel.
Test injection capability of the access point.
Use a tool for fake authentication.
Sniff the packets in the network.
Use a packet-injection tool to inject packets into the network.
Use a cracking tool to extract the encryption key from the initialisation vector (IV).
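The weakness these steps exploit comes from WEP's design: the 24-bit IV is simply prepended to the shared key to seed RC4, so a repeated IV means a repeated keystream, and XORing two ciphertexts cancels the keystream entirely. A toy RC4 sketch illustrating this (the IV, key and packet contents are made up):

```python
def rc4(key: bytes, length: int) -> bytes:
    """Minimal RC4: key-scheduling (KSA) then keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv, wep_key = b"\x01\x02\x03", b"secretkey"   # 24-bit IV + shared key
p1, p2 = b"first packet ", b"other packet "
ks = rc4(iv + wep_key, len(p1))               # same IV -> same keystream
c1 = bytes(a ^ b for a, b in zip(p1, ks))
c2 = bytes(a ^ b for a, b in zip(p2, ks))

xored = bytes(a ^ b for a, b in zip(c1, c2))  # keystream cancels out
assert xored == bytes(a ^ b for a, b in zip(p1, p2))
```

With only 2^24 possible IVs, repeats are inevitable on a busy network, and statistical attacks on the collected IVs (what tools like aircrack-ng automate) recover the key itself.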
Wi-Fi Protected Access (WPA), became available in 2003, and it was the Wi-Fi Alliance’s direct response and replacement to the increasingly apparent vulnerabilities of the WEP encryption standard. The most common WPA configuration is WPA-PSK (Pre-Shared Key). The keys used by WPA are 256-bit, a significant increase over the 64-bit and 128-bit keys used in the WEP system.
WPA included message integrity checks (to determine if an attacker had captured/altered packets passed between the access point and client) and the Temporal Key Integrity Protocol (TKIP). TKIP employs a per-packet key system that was radically more secure than the fixed key system used by WEP. The TKIP encryption standard was later superseded by Advanced Encryption Standard (AES).
TKIP uses the same underlying mechanism as WEP and consequently is vulnerable to a number of similar attacks (e.g. Chop-Chop, MIC Key Recovery attack).
Usually, people do not attack WPA protocol directly, but a supplementary system that was rolled out with WPA – Wi-Fi Protected Setup (WPS).
WPA2 replaced WPA. Certification began in September 2004 and from March 13, 2006, it was mandatory for all new devices to bear the Wi-Fi trademark. The most important upgrade is the mandatory use of AES algorithms (instead of the previous RC4) and the introduction of CCMP (AES CCMP, Counter Cipher Mode with Block Chaining Message Authentication Code Protocol, 128 Bit) as a replacement for TKIP (which is still present in WPA2, as a fallback system and WPA interoperability).
Access control attack: Attackers obtaining access to a non-authorised network.
Integrity and confidentiality attacks: Attacker intercept confidential information going through the network.
Availability attacks: Attackers prevent legitimate users from accessing a network.
Authentication attacks: Attackers try to impersonate legitimate users of the network.
Rogue access point attacks: By starting a rogue access point with the same SSID as an existing, legitimate one in the same location, attackers try to gain access to the network and the existing traffic.
Client mis-association: Placing a rogue access point outside the areas where the legitimate ones are, to take advantage of the auto-connect setting on user devices and capture the traffic generated.
Misconfigured access point attacks: Attackers gain access to existing access points by taking advantage of existing misconfigurations on the device.
Unauthorised association: By taking advantage of a user’s trojanised computer, attackers can be allowed to connect to private networks.
Ad-hoc connection attacks: Ad-hoc connections tend to be insecure because they do not provide strong authentication and encryption making it possible for attackers to take advantage of them.
Jamming signal attacks: By simply emitting an interference signal, a jamming attacker can effectively block the communication on a wireless channel, disrupt the normal operation, cause performance issues, and even damage the control system.
Wireless Attack Methodology
Wi-Fi discovery: Collect information by active footprinting.
GPS mapping: Creation of a list of existing access points and their locations.
Wireless traffic analysis: Capturing packets to reveal any information about the access point and the network.
Launch wireless attacks: Using a tool like Aircrack-ng to run one or multiple of the possible attacks against a wireless network.
Bluetooth is a wireless technology which is found in pretty much every phone you can get your hands on. But it is also in many other devices and gadgets around the home and the office, such as laptops, speakers, headphones and more. Bluetooth is used to connect devices that are in close proximity, cutting down on cables and giving you flexibility and freedom. Bluetooth is designed to allow devices to communicate wirelessly with each other over relatively short distances. It typically works over a range of fewer than 100 meters. The range has been intentionally limited in order to keep its power drain to a minimum. Bluetooth operates at 2.4 GHz frequency.
Bluetooth has a discovery feature that enables devices to be discoverable by other Bluetooth devices.
BlueSmacking: Basically, a DoS attack against a Bluetooth device overflowing it with random packets, for example, echo packets.
BlueBugging: In this type of attack, attackers exploit devices to gain access and compromise their security.
BlueJacking: It is the act of sending unsolicited messages to Bluetooth enabled devices.
BluePrinting: It is a method or technique to extract information and details about a remote device, such as firmware, manufacturer information, model, etc.
BlueSnarfing: Exploiting security vulnerabilities, attackers steal the information on Bluetooth devices.
Keep checking the paired devices list.
Keep devices in non-discoverable mode.
Use a strong, non-obvious PIN.
Install host-based security.
Do not accept unknown or suspicious requests.
When idle, keep your Bluetooth disabled.
Wireless Security Tools
Wireless Intrusion Prevention Systems
A wireless intrusion prevention system (WIPS) operates at the Layer 2 (data link layer) level of the Open Systems Interconnection model. WIPS can detect the presence of rogue or misconfigured devices and can prevent them from operating on wireless enterprise networks by scanning the network’s RFs for denial of service and other forms of attack.
A wireless intrusion detection system (WIDS) monitors the radio spectrum for the presence of unauthorized, rogue access points and the use of wireless attack tools. The system monitors the radio spectrum used by wireless LANs and immediately alerts a systems administrator whenever a rogue access point is detected. Conventionally, this is achieved by comparing the MAC addresses of the participating wireless devices.
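The MAC-comparison idea behind rogue access point detection can be sketched in a few lines. This is a minimal illustration, not a real WIDS; the BSSIDs and SSIDs below are hypothetical examples:

```python
# Minimal sketch of WIDS-style rogue AP detection: observed access points
# are checked against a list of authorised BSSIDs (hypothetical values).

AUTHORIZED_BSSIDS = {
    "aa:bb:cc:11:22:33",  # corporate AP, floor 1
    "aa:bb:cc:11:22:34",  # corporate AP, floor 2
}

def find_rogue_aps(observed):
    """Return (ssid, bssid) pairs whose BSSID is not on the authorised list."""
    return [(ssid, bssid.lower()) for ssid, bssid in observed
            if bssid.lower() not in AUTHORIZED_BSSIDS]

observed_aps = [
    ("CorpWiFi", "AA:BB:CC:11:22:33"),  # legitimate AP
    ("CorpWiFi", "DE:AD:BE:EF:00:01"),  # evil twin: same SSID, unknown BSSID
]
print(find_rogue_aps(observed_aps))  # → [('CorpWiFi', 'de:ad:be:ef:00:01')]
```

Note that this comparison catches the "evil twin" case precisely because the rogue device copies the SSID but cannot copy the authorised BSSID.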
Wi-Fi Security Auditing Tools
There are several tools that defenders can use to audit, troubleshoot, detect and prevent intrusions, mitigate threats, detect rogue devices, protect against zero-day threats, investigate incidents (forensics) and create compliance reports, all helping to protect wireless networks. Tools like:
Multiple techniques and practices can be applied to prevent attacks on wireless networks, some of them already discussed previously, such as using monitoring and auditing tools, configuring strict access control policies, following best practices and using appropriate encryption like WPA2 together with strong authentication. Some of these basic techniques are:
The index of this series of articles can be found here.
A SQL injection attack consists of the insertion or “injection” of a SQL query via the input data from the client to the application. A successful SQL injection exploit can read sensitive data from the database, modify database data (Insert/Update/Delete), execute administration operations on the database (such as shutting down the DBMS), recover the content of a given file present on the DBMS file system and, in some cases, issue commands to the operating system. SQL injection attacks are a type of injection attack in which SQL commands are injected into data-plane input in order to affect the execution of predefined SQL commands.
SQL injection is a very popular, powerful and dangerous attack. Its severity can range from low to very high, allowing attackers with good knowledge of the SQL language and DBMS capabilities to cause serious damage to organisations.
SQL injection can be a big threat to web applications. Its impact can be measured by observing the following goals that attackers intend to achieve:
Bypassing the authentication
Revealing sensitive information
Compromised data integrity
Erasing the database
Remote code execution
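The first of these, bypassing the authentication, can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The table, column and user names are illustrative; the point is the classic ' OR '1'='1 payload against a query built by string concatenation:

```python
import sqlite3

# Sketch of the classic authentication bypass against a vulnerable query
# built by string concatenation (illustrative table and user names).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('John', 'secret')")

def login_vulnerable(name, password):
    # VULNERABLE: user input is concatenated straight into the SQL text.
    query = ("SELECT count(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

print(login_vulnerable("John", "wrong"))        # → False
print(login_vulnerable("John", "' OR '1'='1"))  # → True: condition always holds
```

The payload turns the WHERE clause into `name = 'John' AND password = '' OR '1'='1'`, which is true for every row, so the check passes without the real password.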
SQL – How it works
Structured Query Language (SQL) is used to communicate with a database. It is the standard language for relational database management systems. SQL statements are used to perform tasks such as update, insert, delete or retrieve data from a database.
Some examples can be:
Select: select * from User where name = ‘John’; It will select all users from the table User with the name equal to John.
Insert: insert into User (id, name, surname, age) values (1, ‘John’, ‘Doe’, 25); It will add a new user to the table User with the given values.
Update: update User set age = 30 where id = 1; It will update the age of the user with id equal to 1 on the table User.
Delete: delete from User where name = ‘John’; It will delete from the table User all the users called John.
SQL Injection tools
SQLMap: Automatic SQL Injection And Database Takeover Tool.
BSQL Hacker: BSQL Hacker is an automated SQL Injection Tool designed to exploit SQL injection vulnerabilities in virtually any database.
Marathon Tool: Marathon Tool is a POC for using heavy queries to perform a Time-Based Blind SQL Injection attack.
SQL Power Injector: SQL Power Injector is an application created in .Net 1.1 that helps the penetration tester to inject SQL commands on a web page.
Havij: Havij is an automated SQL Injection tool that helps penetration testers to find and exploit SQL Injection vulnerabilities on a web page.
Types of SQL Injection
SQL injection can be classified into three major categories:
In-Band SQL Injection
This category includes injection techniques that use the same communication channel to launch the attack and gather information from the response. It includes:
Error-based SQL injection
Union-based SQL injection
Error-Based SQL Injection
It relies on error messages from the database server to reveal information about the structure of the database and is a very effective way to enumerate an entire database. Although error messages are very useful during development, they should be disabled when the application goes live. Error-based SQL injection can be performed with the following techniques:
System stored procedure
End of line comment
Illegal / Logically incorrect query
Union-based SQL injection
It implies the use of the SQL UNION operator to combine the results of two or more queries into a single response.
Inferential SQL Injection (Blind Injection)
In this type of injection, no data is transferred from the web application; i.e. the attacker is unable to see the result of the attack, hence the name blind injection. The attacker can only observe the behaviour of the server. The two main types are:
Boolean-based SQLi: Boolean-based SQL Injection is an inferential SQL Injection technique that relies on sending an SQL query to the database which forces the application to return a different result depending on whether the query returns a TRUE or FALSE result.
Time-based SQLi: Time-based SQL Injection is an inferential SQL Injection technique that relies on sending an SQL query to the database which forces the database to wait for a specified amount of time (in seconds) before responding.
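The Boolean-based case can be sketched with Python's built-in sqlite3 (table and column names are illustrative). Even though the application only reveals whether the query matched, an injected condition lets an attacker infer hidden data one character at a time:

```python
import sqlite3

# Sketch of Boolean-based blind SQLi: the page only reveals TRUE/FALSE,
# yet injected conditions leak the password character by character.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'S3cret')")

def page_shows_profile(user_input):
    # VULNERABLE: user input concatenated into the query text.
    query = "SELECT count(*) FROM users WHERE name = '" + user_input + "'"
    return conn.execute(query).fetchone()[0] > 0  # TRUE/FALSE is all we see

# Infer the first character of the admin password by observing TRUE/FALSE.
recovered = ""
for ch in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789":
    payload = "admin' AND substr(password,1,1) = '" + ch
    if page_shows_profile(payload):
        recovered = ch
        break
print(recovered)  # → S
```

A time-based attack works the same way, except the TRUE/FALSE signal is replaced by measuring whether the response was delayed by an injected wait.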
Out-of-band SQL Injection
It requires different channels to launch the attack and receive the response, relying, for example, on features such as DNS or HTTP requests made by the database server. Since these features are not always available, this technique is not very common.
SQL Injection Methodology
Information Gathering and vulnerability detection
An important step is to examine the target web application. Information can be gathered from input fields, hidden fields, GET and POST requests, cookies, string values, error messages and more. All this information provides some initial points of injection and may even reveal the database structure or some existing vulnerabilities.
Launching the Attack
Once all the information has been collected, attackers will use it to execute one or more of the attack types seen above, trying to gather more information, extract data or bypass the authentication.
Advanced SQL Injection
Here, attackers will try to enumerate the database, obtaining users, the privilege levels assigned to them, account information about the administrator and as much information as possible about the structure. It also includes grabbing passwords and hashes and transferring the database to a remote machine.
One measure security professionals can take to try to prevent SQL injection is the installation of an IDS. With an IDS in place, attackers not only need to attack the database, they also need to bypass the IDS. IDSs use a signature-based detection mechanism that matches input strings against existing signatures to detect intrusions.
Despite the extra security offered by adding an IDS, there are some evasion techniques that can be used to evade signature-based detection:
Inserting inline comments in between keywords
White space manipulation
In order to prevent SQL injection, developers can utilise parameterized database queries with bound, typed parameters and careful use of parameterized stored procedures in the database.
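The parameterized-query defence can be sketched with Python's built-in sqlite3 (table and user names are illustrative). The driver sends the values separately from the SQL text, so user input can never change the query structure:

```python
import sqlite3

# Sketch of the parameterized-query defence: values are bound to `?`
# placeholders instead of being concatenated into the SQL string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('John', 'secret')")

def login_safe(name, password):
    query = "SELECT count(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

print(login_safe("John", "secret"))       # → True: legitimate login
print(login_safe("John", "' OR '1'='1"))  # → False: payload treated as a literal
```

The same injection payload that bypasses a concatenated query is now compared, character for character, against the stored password and simply fails.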
Additionally, developers, system administrators, and database administrators can take further steps to minimize attacks or the impact of successful attacks:
Keep all web application software components including libraries, plug-ins, frameworks, web server software, and database server software up to date with the latest security patches available from vendors.
Utilise the principle of least privilege when provisioning accounts used to connect to the SQL database. For example, if a web site only needs to retrieve web content from a database using SELECT statements, do not give the web site’s database connection credentials other privileges such as INSERT, UPDATE, or DELETE privileges. In many cases, these privileges can be managed using appropriate database roles for accounts. Never allow your web application to connect to the database with Administrator privileges.
Do not use shared database accounts between different web sites or applications.
Validate user-supplied input for expected data types, including input fields like drop-down menus or radio buttons, not just fields that allow users to type in the input.
Configure proper error reporting and handling on the webserver and in the code so that database error messages are never sent to the client web browser. Attackers can leverage technical details in verbose error messages to adjust their queries for successful exploitation.
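The last point, hiding verbose database errors from the client, can be sketched as follows; the function name and the generic message are illustrative choices:

```python
import logging
import sqlite3

# Sketch of proper error handling: the full database error is logged
# server-side, while the client only ever sees a generic message.
logging.basicConfig(level=logging.ERROR)

def run_query(conn, query):
    try:
        return conn.execute(query).fetchall()
    except sqlite3.Error as exc:
        logging.error("database error: %s", exc)  # detail stays server-side
        return "An internal error occurred."      # generic client-facing reply

conn = sqlite3.connect(":memory:")
print(run_query(conn, "SELECT * FROM missing_table"))
# → An internal error occurred.  (the missing table name never reaches the client)
```

An attacker probing with malformed queries learns nothing about the schema, while administrators keep the full detail in the server logs.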
The index of this series of articles can be found here.
Hundreds, thousands, millions, billions of systems are online nowadays offering services to their users and, in some cases, i.e. critical systems, they are indispensable. Some of these online services are web applications running on web servers. Organisations have embraced them and they are not just used in the corporate sector to perform important and, sometimes, critical tasks; they have expanded globally for social and entertainment purposes.
Web applications present a great security challenge. They need high availability and smooth performance but they are always exposed to a big number of users. For all these reasons, ensuring security measures and eliminating vulnerabilities is crucial.
A web application is an application that runs on a remote server and is available to clients over the Internet. This access is offered through clients, sometimes just the browser, sometimes specialised client software. These clients can be very complex, having code or logic of their own, or dummy clients where all the logic resides on the server.
Web server administrator: The person who takes care of the web server in terms of safety, security, functioning and performance. They are responsible for estimating security measures, deploying security models and finding and eliminating vulnerabilities.
Web application administrator: The person responsible for the management and configuration required by the web application. They ensure the availability and high performance of the web application.
Clients: They are designed to interact with the web applications and can range from simple dummy clients to very complex ones.
Web Application Threats
Multiple different threats apply to web applications:
Insecure storage: The software stores sensitive information without properly limiting read or write access by unauthorized actors.
Information leakage: Information leakage happens whenever a system that is designed to be closed to an eavesdropper reveals some information to unauthorized parties nonetheless.
Directory traversal: Directory traversal or Path Traversal is an HTTP attack which allows attackers to access restricted directories and execute commands outside of the web server’s root directory.
Parameter/Form tampering: Parameter tampering is a form of web-based attack in which certain parameters in the Uniform Resource Locator (URL) or web page form field data entered by a user is changed without that user’s authorization.
DoS Attacks: A denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet.
Buffer overflow: Buffer overflow is an anomaly that occurs when software writing data to a buffer overflows the buffer’s capacity, resulting in adjacent memory locations being overwritten. In other words, too much information is being passed into a container that does not have enough space, and that information ends up replacing data in adjacent containers.
Log tampering: Web log tampering attacks involve an attacker injecting, deleting or otherwise tampering with the contents of web logs, typically to mask other malicious behaviour. Additionally, writing malicious data to log files may target jobs, filters, reports and other agents that process the logs in an asynchronous attack pattern.
Injection: Injection is the placement of malicious code via an input.
SQL injection: SQL injection is the placement of malicious code in SQL statements, via web page input.
Command injection: Command injection is an attack in which the goal is the execution of arbitrary commands on the host operating system via a vulnerable application.
LDAP injection: LDAP injection is a crafted query that can manipulate vulnerable LDAP servers, leading to serious cases of data and identity theft.
Cross-site scripting: Cross-site scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted websites.
Cross-site request forgery: Cross-site request forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they’re currently authenticated.
Security misconfiguration: Security misconfiguration is simply defined as failing to implement all the security controls for a server or web application or implementing the security controls, but doing so with errors.
Broken session management: Weaknesses of the session management system, like:
User authentication credentials are not protected when stored.
Predictable login credentials.
Session IDs are exposed in the URL.
Session IDs are vulnerable to session fixation attacks.
Session value does not timeout or does not get invalidated after logout.
Session IDs are not rotated after successful login.
Passwords, session IDs, and other credentials are sent over unencrypted connections.
DMZ attack: Attack attempts to take down and bypass a DMZ.
Session Hijacking: The Session Hijacking attack compromises the session token by stealing or predicting a valid session token to gain unauthorized access to the Web Server.
Network Access Attacks: Everything that covers an attempt to access another user account or network device through improper means.
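Two of the broken session management weaknesses listed above, predictable session IDs and IDs that are not rotated after login, can be addressed with a short sketch. The session store and function names are illustrative; Python's secrets module provides the cryptographically random tokens:

```python
import secrets

# Sketch of secure session handling: unpredictable IDs, rotated after login.
sessions = {}  # illustrative in-memory store: token -> user

def create_session(user):
    token = secrets.token_urlsafe(32)  # cryptographically random, unpredictable
    sessions[token] = user
    return token

def rotate_session(old_token):
    """Issue a fresh ID after successful login to defeat session fixation."""
    user = sessions.pop(old_token)  # the pre-login token stops working
    return create_session(user)

t1 = create_session("john")
t2 = rotate_session(t1)
print(t1 != t2 and t1 not in sessions)  # → True
```

Because the pre-authentication token is invalidated on rotation, an attacker who planted or stole it gains nothing after the victim logs in.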
Web Application Pentesting
One of the tools security professionals have to prevent attacks is pentesting. In a pentest, a security professional takes the place of an attacker and breaks into systems in order to, subsequently, fix the problems found before they get exploited. A pentest tries to cover the same ground an attacker would cover:
Collection of information
Web services testing
Web Application Attack Methodology
Analyse web application: Observing the functionality and input parameters to identify vulnerabilities, entry points and server technologies that can be exploited. HTTP request and HTTP fingerprinting techniques are used to diagnose their parameters.
Attack authentication mechanism: Trying to bypass authentication. Some attack mechanisms are:
Authorisation attack schemes: Using different techniques to manipulate URLs, requests, POST data, query strings, cookies, parameters, HTTP headers, etc. to escalate privileges once low-level access has been achieved.
Session management attack: There are different techniques that can be used to impersonate a legitimate user:
Session token prediction
Session token tampering
Attack data connectivity: Designed to exploit the connection between a server and a database. It includes:
Connection string injection
Connection string parameters pollution (CSPP)
Connection Pool DoS
There are literally hundreds of things that can be done to try to mitigate web application attacks. They cannot all be listed here but a great starting point is the OWASP Top 10 project, which explains the ten most common vulnerabilities, how they work and possible ways to mitigate them.
The index of this series of articles can be found here.
A web server is server software or hardware dedicated to running this software, that can satisfy client requests on the World Wide Web. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store, process and deliver web pages to clients.
On the software side, a web server includes several parts that control how web users access hosted files: at a minimum, an HTTP server. An HTTP server is a piece of software that understands URLs and HTTP. It can be accessed through domain names, and it stores and delivers content to the end user's device.
Web Server Security Issues
Security issues for a web server may include network-level and operating-system-level attacks. Usually, attackers target vulnerabilities or mistakes in the configuration and exploit them. These vulnerabilities may include:
Improper permissions of file directories
Unnecessary services enabled
Lack of security
Misconfigured SSL certificates
Once a web server has been compromised, the result can be the compromise of all user accounts, denial of the services offered by the server, defacement, the launching of further attacks through the compromised web server and access to its resources and data.
Open Source Web Servers
They are servers whose code is available to the public and maintained by open-source communities. They can be hosted on-premises or by third-party companies. Examples are:
Apache HTTP Server
IIS Web Server
Internet Information Services (IIS) is a Windows-based service which provides a request-processing architecture. IIS contains multiple components responsible for several functions such as listening for requests, managing processes, reading configuration files, etc. Some of these components are:
Protocol listener: Protocol listeners are responsible for receiving protocol-specific requests. They forward these requests to IIS for processing and then return responses to requestors.
HTTP.sys: HTTP listeners are implemented as a kernel-mode device driver called the HTTP protocol stack (HTTP.sys). HTTP.sys is responsible for listening for HTTP requests, forwarding them to IIS for processing and returning the processed responses to client browsers.
WWW Service and WAS: The World Wide Web Publishing Service (WWW Service) and the Windows Process Activation Service (WAS) run as svchost.exe on the local system and share the same binaries. The WWW Service was used prior to version 7, whereas in version 7 and later WAS is used.
Web Server Attacks
A lot of attack techniques exist for web servers; some of them are listed below:
DoS/DDoS attack: Flooding the web server with fake requests, resulting in crashes, unavailability or denial of service for all users.
DNS server hijacking: By compromising the DNS configuration, attackers can redirect requests targeting a web server to a malicious server owned or controlled by them.
DNS amplification attack: Using the DNS recursive lookup method, attackers can, by spoofing the lookup requests and amplifying the size of the responses, originate DDoS attacks.
Directory traversal attacks: Using trial and error, attackers can access restricted directories using dot and slash sequences, revealing sensitive information.
Man-in-the-middle/sniffing attacks: Attackers can intercept and extract sensitive information or alter packets in transit.
Phishing attacks: Using phishing attacks, attackers can compromise legitimate user credentials to compromise a web server.
Website defacement: After a successful intrusion, attackers can alter or modify the content or appearance of a website.
Web server misconfiguration: Default features or credentials, misconfigurations, default certificates, active debugging capabilities, unnecessary services running, etc. All of this can help attackers to compromise a web server.
HTTP response splitting attacks: HTTP response splitting is a form of web application vulnerability, resulting from the failure of the application or its environment to properly sanitize input values. The attack consists of making the server print a carriage return (CR, ASCII 0x0D) line feed (LF, ASCII 0x0A) sequence followed by content supplied by the attacker in the header section of its response, typically by including them in input fields sent to the application. Per the HTTP standard, headers are separated by one CRLF and the response’s headers are separated from its body by two. Therefore, the failure to remove CRs and LFs allows the attacker to set arbitrary headers, take control of the body, or break the response into two or more separate responses—hence the name.
Web cache poisoning attacks: In this attack, attackers wipe the cache of the web server and store fake entries by sending crafted requests into the cache to redirect users to malicious websites.
SSH brute-force attacks: Obtaining access to unauthorised systems by brute-forcing the credentials of an available SSH tunnel.
Web application attacks: Web servers run web applications among other software. A vulnerability in any of these applications can be used to attack or affect the web server.
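The input sanitisation that defeats the HTTP response splitting attack described above can be sketched in a few lines. The payload and header name below are illustrative; the key is removing CR and LF before a user-supplied value reaches a response header:

```python
# Sketch of the sanitisation that defeats HTTP response splitting:
# strip CR/LF before placing a user-supplied value in a response header.
def sanitize_header_value(value):
    return value.replace("\r", "").replace("\n", "")

# A malicious redirect parameter trying to inject a second response:
payload = "/home\r\nContent-Length: 0\r\n\r\nHTTP/1.1 200 OK\r\nEvil: yes"
header = "Location: " + sanitize_header_value(payload)
print("\r" in header or "\n" in header)  # → False: a single header line remains
```

With the CRLF sequences removed, the attacker's injected headers and body collapse into harmless text inside one header value instead of splitting the response.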
Web Server Attack Methodology
Web server footprinting: Footprinting focused on a web server using different tools like Maltego, Netcraft, Httprecon, etc. It usually allows discovering the server name, type, operating system, running applications and other interesting information about the target.
Mirroring a website: Downloading a copy of an entire website to explore it and try to find vulnerabilities offline, avoiding constant contact with the web server.
Vulnerability scanners: Automated tools specially designed and built to find vulnerabilities, weaknesses, problems and holes in the operating system, network, software or applications. These tools perform a deep inspection of scripts, open ports, banners, running services, configuration and other areas.
Session hijacking: As established before, stealing a legitimate session to access the web server.
Hacking web passwords: Password cracking is a very common technique where attackers try to break the security of the credentials using different methods like:
Active online attacks
Passive online attacks
The basic recommendation is to place web servers in a secure zone protected by appropriate tools, like the IDSs and firewalls we have seen in previous chapters. Placing the server in an isolated zone like a DMZ can protect it from threats.
A few specific measures can be:
Disabling insecure and unnecessary ports
Using port 443 HTTPS over 80 HTTP
Code access security policies
Disable debug compiles
A very important action to keep web servers secure is to have a patch management policy, making it possible to incorporate updates or hotfixes that fix known security problems.
This process can be manual or automatic, with a preference for the latter. A patch management system should perform the following tasks:
The index of this series of articles can be found here.
Attacks are getting more common day after day and attackers seem more numerous by the minute but, whether this is true or not, the fact is that companies are subjected to constant attacks. Can it be that the number of attackers is growing exponentially? Maybe, but a simpler explanation is automation.
Attacks are not just performed manually, they are performed automatically by computers with software specifically implemented for this purpose.
Attackers have long been developing tools to make their lives easier and simpler. All these tools run attacks with no, or just minimal, human supervision.
For this reason, security tools need automation too: they need to be equipped to deal with this volume of attacks and the huge amount of traffic generated by them. Doing this manually would be impossible, short of disconnecting the system from all external signals.
Modern networked business environments require a high level of security to ensure safe and trusted communication of information between various organizations. This is why tools like firewalls, intrusion detection systems, intrusion prevention systems and honeypots have so much importance when networks and systems need to be protected.
Intrusion Detection Systems
An intrusion detection system (IDS) is a device or software application that monitors a network for malicious activity or policy violations. Any malicious activity or violation is typically reported or collected centrally using a security information and event management system. Some IDSs are capable of responding to detected intrusion upon discovery. These are classified as intrusion prevention systems (IPS).
An intrusion detection system acts as an adaptable safeguard technology for system security after traditional technologies fail. Cyber attacks will only become more sophisticated, so it is important that protection technologies adapt along with their threats.
IDS Detection Types
There is a wide array of IDS, ranging from antivirus software to tiered monitoring systems that follow the traffic of an entire network. The most common classifications are:
Network intrusion detection systems (NIDS): A system that analyzes incoming network traffic.
Host-based intrusion detection systems (HIDS): A system that monitors important operating system files.
There is also a subset of IDS types. The most common variants are based on signature detection and anomaly detection.
Signature-based: Signature-based IDS detects possible threats by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. This terminology originates from antivirus software, which refers to these detected patterns as signatures. Although signature-based IDS can easily detect known attacks, it is impossible to detect new attacks, for which no pattern is available.
Anomaly-based: A newer technology designed to detect and adapt to unknown attacks, primarily due to the explosion of malware. This detection method uses machine learning to create a defined model of trustworthy activity, and then compare new behaviour against this trust model. While this approach enables the detection of previously unknown attacks, it can suffer from false positives: previously unknown legitimate activity can accidentally be classified as malicious.
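The signature-based approach can be sketched as simple pattern matching over payloads. The patterns below are illustrative examples, not real IDS rules:

```python
# Minimal sketch of signature-based detection: payloads are matched against
# known byte patterns (illustrative signatures, not real rules).
SIGNATURES = {
    b"/etc/passwd": "directory traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def inspect(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

print(inspect(b"GET /../../etc/passwd HTTP/1.1"))  # → ['directory traversal attempt']
print(inspect(b"GET /index.html HTTP/1.1"))        # → []
```

This also makes the stated limitation concrete: any attack whose bytes are not in the signature set, however malicious, produces an empty match list.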
IDS Usage in Networks
When placed at a strategic point or points within a network to monitor traffic to and from all devices on the network, an IDS will perform an analysis of passing traffic, and match the traffic that is passed on the subnets to the library of known attacks. Once an attack is identified, or abnormal behaviour is sensed, the alert can be sent to the administrator.
Being aware of the techniques available to cybercriminals who are trying to breach a secure network can help IT departments understand how IDS systems can be tricked into missing actionable threats:
Fragmentation: Sending fragmented packets allows attackers to stay under the radar, bypassing the detection system’s ability to detect the attack signature.
Avoiding defaults: The port utilized by a protocol does not always provide an indication of the protocol being transported. If an attacker has reconfigured a trojan to use a different port, the IDS may not be able to detect its presence.
Coordinated, low-bandwidth attacks: Coordinating a scan among numerous attackers, or even allocating various ports or hosts to different attackers. This makes it difficult for the IDS to correlate the captured packets and deduce that a network scan is in progress.
Address spoofing/proxying: Attackers can obscure the source of the attack by using poorly secured or incorrectly configured proxy servers to bounce an attack. If the source is spoofed and bounced by a server, it makes it very difficult to detect.
Pattern change evasion: IDS rely on pattern matching to detect attacks. By making slight adjustments to the attack architecture, detection can be avoided.
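The fragmentation evasion listed above can be shown with a small sketch: a naive matcher that inspects each packet in isolation misses a signature split across two fragments, while reassembly catches it. The fragment contents are illustrative:

```python
# Sketch of why fragmentation evades naive signature matching: the pattern
# never appears whole in any single fragment, only in the reassembled stream.
SIGNATURE = b"/etc/passwd"

def match_per_packet(fragments):
    """Naive IDS: inspect each fragment in isolation."""
    return any(SIGNATURE in frag for frag in fragments)

def match_reassembled(fragments):
    """Stream-aware IDS: reassemble before matching."""
    return SIGNATURE in b"".join(fragments)

fragments = [b"GET /etc/pa", b"sswd HTTP/1.1"]
print(match_per_packet(fragments))   # → False: evades per-packet inspection
print(match_reassembled(fragments))  # → True: reassembly defeats the evasion
```

This is why production IDSs perform stream reassembly before applying signatures, at the cost of extra state and memory.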
Intrusion Prevention Systems
An intrusion prevention system (IPS) is an automated network security device used to monitor and respond to potential threats. Like an intrusion detection system (IDS), an IPS determines possible threats by examining network traffic. Because an exploit may be carried out very quickly after an attacker gains access, intrusion prevention systems administer an automated response to a threat, based on rules established by the network administrator.
The main functions of an IPS are to identify suspicious activity, log relevant information, attempt to block the activity, and finally to report it.
IPSs include firewalls, anti-virus software, and anti-spoofing software. In addition, organizations will use an IPS for other purposes, such as identifying problems with security policies, documenting existing threats and deterring individuals from violating security policies. IPS have become an important component of all major security infrastructures in modern organizations.
An intrusion prevention system acts as an adaptable safeguard technology for system security after traditional technologies fail. The ability to prevent intrusions through an automated action, without requiring IT intervention, means lower costs and greater performance flexibility. Cyber attacks will only become more sophisticated, so it is important that protection technologies adapt along with their threats.
How An IPS Works
An intrusion prevention system works by actively scanning forwarded network traffic for malicious activities and known attack patterns. The IPS engine analyzes the network traffic and continuously compares the bitstream with its internal signature database for known attack patterns. An IPS might drop a packet determined to be malicious, and follow up this action by blocking all future traffic from the attacker’s IP address or port. Legitimate traffic can continue without any perceived disruption in service.
Intrusion prevention systems can also perform more complicated observation and analysis, such as watching and reacting to suspicious traffic patterns or packets. Detection mechanisms can include:
HTTP string and substring matching
Generic pattern matching
TCP connection analysis
Packet anomaly detection
Traffic anomaly detection
TCP/UDP port matching
An IPS will typically record information related to observed events, notify security administrators, and produce reports. To help secure a network, an IPS can automatically receive prevention and security updates in order to continuously monitor and block emerging Internet threats.
Intrusion prevention systems can be organized into four major types:
Network-based intrusion prevention system (NIPS): Analyzes protocol activity across the entire network, looking for any untrustworthy traffic.
Wireless intrusion prevention system (WIPS): Analyzes network protocol activity across the entire wireless network, looking for any untrustworthy traffic.
Host-based intrusion prevention system (HIPS): A secondary software package that follows a single host for malicious activity, and analyzes events occurring within the host.
Network behaviour analysis (NBA): Examines network traffic to identify threats that generate strange traffic flows, the most common being distributed denial of service attacks, various forms of malware and policy abuses.
IPS Detection Methods
The majority of intrusion prevention systems use one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.
Signature-based detection: A signature-based IPS monitors packets in the network and compares them with predetermined attack patterns, known as “signatures”.
Statistical anomaly-based detection: An anomaly-based IDS monitors network traffic and compares it to an established baseline of expected traffic patterns. The baseline identifies what is “normal” for that network – what sort of packets generally travel through it and what protocols are used. It may, however, raise false-positive alarms for legitimate use of bandwidth if the baseline is not intelligently configured.
Stateful protocol analysis detection: This method identifies protocol deviations by comparing observed events with pre-determined activity profiles of normal activity.
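A toy sketch of the statistical anomaly-based approach, assuming a simple packets-per-second metric. The sample numbers and the 3-sigma threshold are made up for the example and are not tuned for real traffic:

```python
# Toy statistical anomaly detector: learn a baseline of packets-per-second,
# then flag rates that deviate from the mean by more than k standard deviations.
import statistics

def build_baseline(samples):
    """samples: packets/sec observed during a 'normal' training window."""
    return statistics.mean(samples), statistics.pstdev(samples)

def is_anomalous(rate, baseline, k=3.0):
    mean, stdev = baseline
    return abs(rate - mean) > k * stdev

# Hypothetical training window of normal traffic rates
baseline = build_baseline([100, 105, 98, 102, 95, 101])

print(is_anomalous(500, baseline))   # a flood stands far outside the baseline
print(is_anomalous(103, baseline))   # ordinary fluctuation is not flagged
```

This also illustrates the false-positive caveat above: if the training window does not represent normal traffic well, legitimate bursts will exceed the threshold and trigger alarms.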
Many IPS can also respond to a detected threat by actively preventing it from succeeding. They use several response techniques, which include:
Changing the security environment, for example, by configuring a firewall to increase protections against previously unknown vulnerabilities.
Changing the attack’s content, for example, by replacing otherwise malicious parts of an email, like false links, with warnings about the deleted content.
Sending automated alarms to system administrators, notifying them of possible security breaches.
Dropping detected malicious packets.
Resetting a connection.
Blocking traffic from the offending IP address.
In-line vs Passive solutions
Deployment: An in-line device sits in the network path and every packet goes through it; a passive device sits out of band and receives a copy of every packet.
Latency: An in-line device introduces delay because every packet is analysed before being forwarded to its destination; a passive device introduces no delay because it is not in the traffic path.
Point of failure? In-line: yes, depending on the configuration – fail-open will let all traffic through, while fail-close will drop it. Passive: no, it has no impact on traffic as it is not in-line with the network.
Ability to mitigate an attack? In-line: yes, it can drop malicious traffic unless running in ‘Tap mode’. Passive: not directly, but it can assist other in-line solutions.
Can do packet manipulation? In-line: yes, it can modify traffic based on defined rules. Passive: no, it just receives mirrored traffic.
Host-Based vs Network-Based solutions
Scalability: A host-based solution does not scale as the number of hosts increases; a network-based solution is highly scalable and is normally deployed at the perimeter gateway.
Cost-effectiveness: Low for host-based, since more systems mean more IDS/IPS instances; high for network-based, since one pair of devices can monitor the overall network.
Verification: A host-based solution is capable of verifying whether an attack was successful or not; a network-based solution is only capable of generating an alert about an attack.
Processing power: A host-based solution uses the processing power of the host device; a network-based solution must have high processing power to overcome latency issues.
Host-Based IPS/IDS Types
File system monitoring: Monitors changes in system files to detect manipulation or alteration. Usually, hash functions are used, storing a hash of the previous version of each file and comparing it with the current one.
Log file analysis: Works by analysing log files and generating appropriate warnings for administrators. A number of modern tools analyse behaviour patterns and perform further correlation with actual events.
Connection analysis: Analyses all network connections and tries to differentiate between legitimate and unauthorised traffic.
Kernel-level detection: The OS kernel itself tries to detect alterations to system binaries and, in case of anomalies, flags the intrusion attempt.
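The file-system-monitoring approach above can be illustrated with a short Python sketch that hashes files against a stored baseline. The file paths and contents here are created in a temporary directory purely for the demonstration:

```python
# Sketch of file integrity monitoring: store a baseline hash of each file,
# then re-hash later and report anything that changed.
import hashlib
import os
import tempfile

def hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def take_baseline(paths):
    """Map each path to the hash of its current contents."""
    return {p: hash_file(p) for p in paths}

def changed_files(baseline):
    """Re-hash every monitored file and list those that no longer match."""
    return [p for p, digest in baseline.items() if hash_file(p) != digest]

# Demonstration in a throwaway directory
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "passwd")
with open(target, "w") as f:
    f.write("root:x:0:0\n")

baseline = take_baseline([target])

with open(target, "a") as f:
    f.write("attacker:x:0:0\n")          # simulated tampering

print(changed_files(baseline))            # the tampered file is reported
```

Real tools persist the baseline securely (ideally off-host), since an attacker who can alter the binaries can often alter a locally stored hash database too.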
Network firewalls are security devices used to stop or mitigate unauthorized access to private networks connected to the Internet, especially intranets. The only traffic allowed on the network is defined via firewall policies – any other traffic attempting to access the network is blocked. Network firewalls sit at the front line of a network, acting as a communications liaison between internal and external devices.
A network firewall can be configured so that any data entering or exiting the network has to pass through it – it accomplishes this by examining each incoming message and rejecting those that fail to meet the defined security criteria. When properly configured, a firewall allows users to access any of the resources they need while simultaneously keeping out unwanted users, hackers, viruses, worms or other malicious programs trying to access the protected network.
Software vs. Hardware Firewalls
Firewalls can be either hardware or software. In addition to limiting access to a protected computer and network, a firewall can log all traffic coming into or leaving a network, and manage remote access to a private network through secure authentication certificates and logins.
Hardware firewalls: These firewalls are released either as standalone products for corporate use, or more often, as a built-in component of a router or other networking device. They are considered an essential part of any traditional security system and network configuration. Hardware firewalls will almost always come with a minimum of four network ports that allow connections to multiple systems. For larger networks, a more expansive networking firewall solution is available.
Software firewalls: These are installed on a computer, or provided by an OS or network device manufacturer. They can be customized, and provide a smaller degree of control over functions and protection features. A software firewall can protect a system from standard control and access attempts, but may struggle with more sophisticated network breaches.
A firewall is considered an endpoint protection technology. In protecting private information, a firewall can be considered the first line of defence, but it cannot be the only defence.
Firewalls are relied upon to secure home and corporate networks. A simple firewall program or device will sift through all information passing through the network – this process can also be customized depending on the needs of the user and the capabilities of the firewall. There are a number of major firewall types that prevent harmful information from passing through the network:
Application-layer: This is a hardware appliance, software filter, or server plug-in. It layers security mechanisms on top of defined applications, such as FTP servers, and defines rules for HTTP connections. These rules are built for each application, to help identify and block attacks to a network.
Packet Filtering: This filter examines every packet that passes through the network – and then accepts or denies it as defined by rules set by the user. Packet filtering can be very helpful, but it can be challenging to properly configure. Also, it’s vulnerable to IP spoofing.
Circuit-level: This firewall type applies a variety of security mechanisms once a UDP or TCP connection has been made. Once the connection is established, packets are exchanged directly between hosts without further oversight or filtering.
Proxy Server: This version will check all messages that enter or leave a network, and then hide the real network addresses from any external inspection.
Next-Generation (NGFW): These work by filtering traffic moving through a network – the filtering is determined by the applications or traffic types and the ports they are assigned to. These features comprise a blend of a standard firewall with additional functionality, to help with greater, more self-sufficient network inspection.
Stateful Firewalls: Sometimes referred to as third-generation firewall technology, stateful filtering accomplishes two things: traffic classification based on the destination port, and packet tracking of every interaction between internal connections. These newer technologies increase usability and assist in expanding access control granularity – interactions are no longer defined by port and protocol. A packet’s history in the state table is also measured.
All of these network firewall types are useful for power users, and many firewalls will allow for two or more of these techniques to be used in tandem with one another.
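As a rough illustration of how a packet filter of the kind described above evaluates rules, here is a minimal first-match, default-deny sketch. The rule fields, addresses and ports are hypothetical:

```python
# First-match packet filter: rules are evaluated top to bottom, the first
# matching rule decides, and anything unmatched falls through to a default deny.

RULES = [
    # (source prefix, destination port, action) -- None means "any"
    ("10.0.0.",   None, "deny"),    # block a misbehaving internal subnet entirely
    (None,        443,  "allow"),   # HTTPS from anywhere
    (None,        22,   "allow"),   # SSH from anywhere
]

def filter_packet(src_ip, dst_port):
    for prefix, port, action in RULES:
        if prefix is not None and not src_ip.startswith(prefix):
            continue                 # source does not match this rule
        if port is not None and dst_port != port:
            continue                 # destination port does not match this rule
        return action                # first match wins
    return "deny"                    # default-deny: anything unmatched is dropped
```

Rule order matters: because the subnet-wide deny sits first, HTTPS traffic from 10.0.0.x is dropped even though a later rule allows port 443 from anywhere. Misordered rules are one of the configuration pitfalls mentioned above.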
Cloud Firewalls are software-based, cloud-deployed network devices, built to stop or mitigate unwanted access to private networks. As a new technology, they are designed for modern business needs and sit within online application environments.
Cloud Firewall Benefits
Scalability: Because deployment is much simpler, organizations can adjust the size of their security solution without the frustrations inherent with on-site installation, maintenance and upgrading. As bandwidth increases, cloud firewalls can automatically adjust to maintain parity. For example, distributed denial-of-service (DDoS) attacks can be mitigated without having to worry about bandwidth limits.
Availability: Cloud firewall providers account for the built-in cost of high availability by supporting infrastructure. This means guaranteeing redundant power, HVAC, and network services, and automating backup strategies in the event of a site failure. This availability is hard to match with on-premises firewall solutions because of the cost and support required. This also means that necessary updates can be implemented immediately, without the need for large system downloads or updates.
Extensibility: Cloud firewalls can be reached and installed anywhere an organization can provide a protected network communication path. With an on-premises device, this extensibility is limited by the available resources of the organization looking for a firewall solution.
Migration Security: A cloud firewall is capable of filtering traffic from a variety of sources; the internet, between virtual networks, between tenants, or even a virtual data centre. It is capable of guaranteeing the security of connections made between physical data centres and the cloud – this is very beneficial for organizations looking for a means of migrating current solutions from an on-prem location to a cloud-based infrastructure.
Secure Access Parity: Cloud firewalls provide the same level of secure access as on-prem firewalls. This means advanced access policy, connection management, and filtering between clients and the cloud. This also extends to encrypted content.
Identity Protection: Cloud firewalls can integrate with access control providers and give users granular control over filtering tools.
Performance Management: Cloud firewalls provide tools for controlling performance, visibility, usage, configuration, and logging – all things normally associated with an on-prem solution.
Cloud Firewall Types
There are two types of cloud firewalls – with the distinction being defined by what users need help securing. Both types exist as cloud-based software that monitors all incoming and outgoing data packets, and filters this information against access policies with the goal of blocking and logging suspicious traffic.
SaaS Firewalls: They are designed to secure an organization’s network and its users, not unlike a traditional on-premises hardware or software firewall. The only difference is that it is deployed off-site from the cloud. This type of firewall is also known as a software-as-a-service firewall (SaaS firewall).
Next-Generation Firewalls: They are cloud-based services intended to deploy within a virtual data centre. They protect an organization’s own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. The firewall application exists on a virtual server and secures incoming and outgoing traffic between cloud-based applications.
A bastion host is a computer system placed between the public and the private network, intended to be the crossing point through which all traffic passes. Specific roles and responsibilities are assigned to this computer. Bastion hosts have two interfaces, one connected to the public network and the other to the private network.
A screened subnet can be set up with a firewall that has three interfaces, connecting to the public network, the private network and the demilitarised zone (DMZ). In this architecture, the aim is to isolate the different zones.
A multi-homed firewall is a firewall that has more than one network interface, with each interface connected to logically and physically separate network segments. It increases the efficiency and reliability of a network.
A DMZ network functions as a subnetwork containing an organization’s exposed, outward-facing services. It acts as the exposed point to an untrusted network, commonly the Internet.
The goal of a DMZ is to add an extra layer of security to an organization’s local area network. A protected and monitored network node that faces outside the internal network can access what is exposed in the DMZ, while the rest of the organization’s network is safe behind a firewall.
When implemented properly, a DMZ network gives organizations extra protection in detecting and mitigating security breaches before they reach the internal network, where valuable assets are stored.
The DMZ network exists to protect the hosts most vulnerable to attack. These hosts usually involve services that extend to users outside of the local area network, the most common examples being email, web servers, and DNS servers. Because of the increased potential for attack, they are placed into the monitored subnetwork to help protect the rest of the network if they become compromised.
Hosts in the DMZ have tightly controlled access permissions to other services within the internal network because the data passed through the DMZ is not as secure. On top of that, communications between hosts in the DMZ and the external network are also restricted to help increase the protected border zone. This allows hosts in the protected network to interact with the internal and external network, while the firewall separates and manages all traffic shared between the DMZ and the internal network. Typically, an additional firewall will be responsible for protecting the DMZ from exposure to everything on the external network. They usually implement some features like:
Virtual Routing and Forwarding (VRF)
Honeypots are decoy systems or servers deployed alongside production systems within your network. When deployed as enticing targets for attackers, honeypots can add security monitoring opportunities for blue teams and misdirect the adversary from their true target. Honeypots come in a variety of complexities depending on the needs of your organization and can be a significant line of defence when it comes to flagging attacks early. This page will get into more detail on what honeypots are, how they are used, and the benefits of implementing them.
Honeypots offer plenty of security benefits to organizations that choose to implement them, including the following:
They break the attacker kill chain and slow attackers down.
They are straightforward and low-maintenance.
They help you test your incident response processes.
Within production and research honeypots, there are also different tiers depending on the level of complexity your organization needs:
Pure honeypot: This is a full-scale, completely production-mimicking system that runs on various servers. It contains “confidential” data and user information and is full of sensors. Though these can be complex and difficult to maintain, the information they provide is invaluable.
High-interaction honeypot: This is similar to a pure honeypot in that it runs a lot of services, but it is not as complex and does not hold as much data. High-interaction honeypots are not meant to mimic a full-scale production system, but they do run (or appear to run) all the services that a production system would run, including a proper operating system. This type of honeypot allows the deploying organization to see attacker behaviours and techniques. High-interaction honeypots are resource-intensive and come with maintenance challenges, but the findings can be worth the squeeze.
Mid-interaction honeypot: These emulate aspects of the application layer but do not have their own operating system. They work to stall or confuse attackers so that organizations have more time to figure out how to properly react to an attack.
Low-interaction honeypot: This type of honeypot is the most commonly deployed in a production environment. Low-interaction honeypots run a handful of services and serve as an early warning detection mechanism more than anything. They are easy to deploy and maintain, with many security teams deploying multiple honeypots across different segments of their network.
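A minimal low-interaction honeypot of the kind described above can be sketched with the Python standard library: it listens on a port, logs every connection attempt as an early warning, and replies with a decoy banner before closing. The SMTP-style banner and the `LOG` structure are invented for the example:

```python
# Low-interaction honeypot sketch: any connection to this port is suspicious,
# so log it, send a fake service banner, and hang up.
import datetime
import socket
import threading

LOG = []  # (timestamp, source address) of every connection attempt

def honeypot(srv, stop_after=1):
    """Accept connections, log them, reply with a decoy banner, and close."""
    for _ in range(stop_after):
        conn, addr = srv.accept()
        LOG.append((datetime.datetime.now(datetime.timezone.utc).isoformat(), addr[0]))
        conn.sendall(b"220 mail.example.net ESMTP ready\r\n")  # fake SMTP banner
        conn.close()
    srv.close()

# Set up the listener; port 0 lets the OS pick a free port for the demo.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)
port = srv.getsockname()[1]
threading.Thread(target=honeypot, args=(srv,), daemon=True).start()

# Simulated attacker probing the port
client = socket.create_connection(("127.0.0.1", port), timeout=5)
banner = client.recv(1024)
client.close()
```

No legitimate user has a reason to connect to this port, so every entry in the log is a high-signal alert, which is exactly why low-interaction honeypots work well as an early-warning mechanism.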
Several honeypot technologies in use include the following:
Malware honeypots: These use known replication and attack vectors to detect malware. For example, honeypots (e.g., Ghost) have been crafted to emulate a USB storage device. If a machine is infected by malware that spreads via USB, the honeypot will trick the malware into infecting the emulated device.
Spam honeypots: These are used to emulate open mail relays and open proxies. Spammers will test the open mail relay by sending themselves an email first. If they succeed, they then send out large quantities of spam. This type of honeypot can detect and recognize this test and successfully block the massive volume of spam that follows.
Database honeypot: Activities such as SQL injections can often go undetected by firewalls, so some organizations will use a database firewall, which can provide honeypot support to create decoy databases.
Client honeypots: Most honeypots are servers listening for connections. Client honeypots actively seek out malicious servers that attack clients, monitoring for suspicious and unexpected modifications to the honeypot. These systems generally run on virtualization technology and have a containment strategy to minimize risk to the research team.
Honeynets: Rather than being a single system, a honeynet is a network that can consist of multiple honeypots. Honeynets aim to strategically track the methods and motives of an attacker while containing all inbound and outbound traffic.
An IDS can accept packets that end-systems reject. An IDS that does this makes the mistake of believing that the end-system has accepted and processed the packet when it actually has not. Attackers can exploit this condition by sending packets that end-systems will reject but that the IDS will think are valid. By doing this, the attackers are “inserting” data into the IDS – no other system on the network cares about the bad packets. Attackers can use insertion attacks to defeat signature analysis, allowing them to slip attacks past the IDS.
Taking advantage of this, attackers can insert packets with bad checksums or short TTL values, or send them out of order. When reassembling the stream, the IDS and the end host may then end up with two different views of the data.
End-systems can also accept packets that an IDS rejects. An IDS that mistakenly rejects such a packet misses its content entirely. This condition can likewise be exploited, this time by slipping crucial information past the IDS in packets that the IDS is too strict to process. These packets ‘evade‘ the scrutiny of the IDS.
We call these ‘evasion‘ attacks, and they are the easiest to exploit and the most devastating to the accuracy of an IDS. Entire sessions can be carried forth in packets that evade an IDS, and blatantly obvious attacks couched in such sessions will happen right under the nose of even the most sophisticated analysis engines.
Evasion attacks foil pattern matching in a manner quite similar to insertion attacks. Again, attackers cause the IDS to see a different stream of data than the end-system; this time, however, the end-system sees more than the IDS, and the information the IDS misses is critical to the detection of the attack.
IP fragmentation is the process of dividing packets into smaller chunks. These need to be of a specific size so that the receiving party can process them and the data can be transferred successfully. All the fragments are then reassembled by the receiving party to recover the original data.
This technique is usually adopted when the IDS and the host have different reassembly timeouts configured. For example, consider a host with a 20-second timeout and an IDS with a 10-second one. If attackers send fragments with 15-second delays between them, they will not be reassembled at the IDS, but they will be at the host.
Similarly, overlapping fragments can be sent with overlapping TCP sequence numbers. How these overlapping packets are reassembled depends on how the operating system is configured to perform this action.
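The timeout mismatch described above (a 20-second host timeout versus a 10-second IDS timeout, with fragments 15 seconds apart) can be simulated in a few lines. Arrival times here are simulated numbers, not real clocks:

```python
# Simulation of timeout-mismatch evasion: the host reassembles the full
# payload, while the IDS flushes its partial buffer and never sees it.

def reassemble(fragments, timeout):
    """fragments: list of (arrival_time, data) in order. Returns the payload."""
    buffer, last = b"", None
    for t, data in fragments:
        if last is not None and t - last > timeout:
            buffer = b""              # waited too long: discard the partial payload
        buffer += data
        last = t
    return buffer

attack = [(0, b"GET /etc/"), (15, b"passwd")]    # 15 s between fragments

print(reassemble(attack, timeout=20))             # host (20 s): b'GET /etc/passwd'
print(reassemble(attack, timeout=10))             # IDS (10 s): b'passwd'
```

The host sees the complete malicious request, but a signature for `GET /etc/passwd` never matches at the IDS, because the IDS only ever holds one fragment at a time.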
Passive IDS devices are configured as ‘Fail-Open‘. Taking advantage of this limitation, attackers can run a DoS attack to overload the IDS. They can target its CPU or memory by sending specially crafted packets or a large number of fragmented, out-of-order packets.
Obfuscation is the encoding of the payload in a way that the IDS cannot reverse but the final system can. Encrypted protocols are not inspected by an IDS unless it has the private key configured. Similarly, attackers can use polymorphic shellcode to create unique patterns that evade an IDS.
False Positive Generation
Attackers can generate a large number of false-positive alerts, trying to hide the malicious packets within the noise.
Unicode Evasion Technique
In this case, attackers can use Unicode, a form of character encoding, to evade IDSs inspection. Converting a string to Unicode characters can avoid signature matching and alerts in the IDSs.
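A rough sketch of the idea, using percent-encoding as a simple stand-in for the many encoding tricks, plus Unicode NFKC normalisation for look-alike characters. The signature string is an invented example:

```python
# A naive signature match on the raw request misses an encoded variant,
# while a detector that normalises the input first still catches it.
import unicodedata
from urllib.parse import unquote

SIGNATURE = "../etc/passwd"   # hypothetical path-traversal signature

def naive_match(request):
    """Match the signature against the raw request bytes-as-seen."""
    return SIGNATURE in request

def normalised_match(request):
    """Decode and normalise before matching."""
    decoded = unquote(request)                        # undo %-encoding ('%2e' -> '.')
    folded = unicodedata.normalize("NFKC", decoded)   # fold look-alike characters
    return SIGNATURE in folded

evasive = "GET /%2e%2e/etc/passwd"       # the dots are percent-encoded

print(naive_match(evasive))               # False: raw string never matches
print(normalised_match(evasive))          # True: caught after normalisation
```

This is why IDS engines normalise traffic to a canonical form before applying signatures; matching on the raw byte stream alone is trivially evaded by re-encoding the same request.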
The identification of firewalls includes firewall fingerprinting to obtain sensitive information such as open ports, version information, services running, etc. Several techniques can be used, including the following:
Port Scanning: Special packets can be sent to particular hosts to analyse the responses and infer information about the environment, especially open ports.
Fire-walking: A technique that uses ICMP packets to find out the location of the firewall and map the network, probing with ICMP echo requests whose TTL values are incremented one by one. It helps to find the number of hops.
Banner Grabbing: The different devices found in a network display different banners containing vendor information; from these, the device type and firmware version can be extracted.
IP Address Spoof: Attackers can send packets with spoofed IP addresses to impersonate user machines and gain unauthorised access.
Source Routing: This technique sends a spoofed packet using a specific route that mimics the legitimate user path.
By-passing blocked sites using IP addresses: Instead of using a URL and its domain, attackers can try to access a site via its IP address if this type of access is not blocked.
By-passing blocked sites using proxies: The use of proxies to access restricted websites is very common, hiding the real IP address and using the proxy’s IP address to access the website.
By-passing through ICMP tunnelling method: ICMP tunnelling is a technique of injecting arbitrary data into the payload of an echo packet and forwarding it to the target host. In this scenario, a TCP connection is tunnelled over ping requests and replies, because firewalls do not examine the payload field of ICMP packets.
By-passing through HTTP tunnelling method: This takes advantage of a legitimately deployed HTTP server, encapsulating data from other services, such as FTP, inside HTTP traffic.
By-passing through SSH tunnelling method: An attacker can use OpenSSH to encrypt the traffic and avoid detection by security devices.
By-passing through external systems: This process involves hijacking the session of a valid user who is permitted to connect to external networks, and using that session to by-pass the firewall.