The index of this series of articles can be found here.
Denial of Service (DoS) and Distributed Denial of Service (DDoS) are very common attacks nowadays.
The purpose of the attack is to deny access, reduce functionality or block access to resources even for legitimate users.
The attack is based on generating large amounts of requests targeting a system. This flood of incoming requests overloads the system's capacity to respond, resulting in a denial of service to all users of that service.
There are a number of different ways that DoS attacks can be used. These include the following:
Buffer overflow attacks: This type of attack is the most common DoS attack experienced. In this attack, the attacker overloads a network address with traffic so that it is put out of use.
Ping of Death or ICMP flood: An ICMP flood attack takes unconfigured or misconfigured network devices and uses them to send spoofed packets that ping every computer within the network. A closely related technique, the Ping of Death (PoD), sends malformed or oversized ICMP packets to crash the target; the two are often grouped together but are distinct attacks.
SYN flood: SYN flood attacks send requests to connect to a server but don’t complete the handshake. The end result is that the network becomes inundated with connection requests that prevent anyone from connecting to the network.
Teardrop Attack: During a teardrop DoS attack an attacker sends IP data packet fragments to a network. The network then attempts to reassemble these fragments into the original packets. The process of reassembling these fragments exhausts the system and it ends up crashing, because the fragment offset fields are crafted to confuse the system so that it cannot put them back together.
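The SYN flood mechanism described above can be illustrated with a toy model of a server's half-open connection queue; the class and names below are purely illustrative, not a real TCP implementation:

```python
from collections import deque

class ListenQueue:
    """Toy model of a server's half-open (SYN) queue.

    Each incoming SYN occupies a slot until the handshake completes;
    a flood of SYNs that never complete fills the queue, so
    legitimate clients are refused.
    """
    def __init__(self, backlog=5):
        self.backlog = backlog
        self.half_open = deque()

    def syn(self, client):
        if len(self.half_open) >= self.backlog:
            return "refused"          # queue full: legitimate clients denied
        self.half_open.append(client)
        return "syn-ack sent"

    def ack(self, client):
        self.half_open.remove(client) # handshake completes, slot is freed
        return "established"

queue = ListenQueue(backlog=5)
# The attacker sends spoofed SYNs and never replies with the final ACK.
for i in range(5):
    queue.syn(f"spoofed-{i}")
print(queue.syn("legitimate-user"))   # refused
```

Until the half-open entries time out (up to 75 seconds, as noted below), every legitimate connection attempt is refused, which is the entire effect the attacker is after.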
In a DDoS attack, multiple systems target a single system. The targeted network is then bombarded with packets from multiple locations. By attacking from multiple locations, attackers can take the system offline more easily, because there is a larger number of machines at the attackers' disposal and it becomes difficult for the victim to pinpoint the origin of the attack.
The DDoS attack is the next step and is one of the most common types of DoS attack in use today. The attackers use a network of previously infected devices under their control, called a botnet, to launch the attack. Botnets can be made up of anywhere from a handful of bots to hundreds of thousands of them, which can make this kind of attack extremely difficult for defenders to deal with.
There are a number of broad categories that DoS/DDoS attacks fall into for taking networks offline:
Volumetric attacks: Volumetric attacks are classified as any form of attack where a network’s bandwidth resources are deliberately consumed by an attacker. Once network bandwidth has been consumed it is unavailable to legitimate devices and users within the network. Volumetric attacks occur when the attacker floods network devices with ICMP echo requests until there is no more bandwidth available.
Fragmentation attacks: Fragmentation attacks are any kind of attack that forces a network to reassemble manipulated packets. During a fragmentation attack the attacker sends manipulated packets to a network so that once the network tries to reassemble them, they can’t be reassembled. This is because the packets have more packet header information than is permitted. The end result is packet headers which are too large to reassemble in bulk.
TCP-State exhaustion attacks: In a TCP-State Exhaustion attack the attacker targets a web server or firewall in an attempt to limit the number of connections that they can make. The idea behind this style of attack is to push the device to the limit of the number of concurrent connections.
Application layer attacks: Application layer or Layer 7 attacks target applications or servers in an attempt to use up resources by creating as many processes and transactions as possible. Application layer attacks are particularly difficult to detect and address because they don't need many machines to launch an attack.
Bandwidth attacks: Bandwidth attacks require multiple sources generating requests to overload the target. A single machine cannot generate enough traffic to overwhelm a service, which is why a DDoS attack is needed: the more machines, called zombies or bots, are used, the better the chances the attack will succeed. The goal of a bandwidth attack is to consume the bandwidth completely.
Service request floods: Similar to the previous one, attackers generate enough requests towards a web service to overload it and make it deny legitimate connections too.
SYN attack/Flooding: In this attack, the three-way handshake is exploited. Attackers generate a lot of SYN requests with fake source IPs. When the target machine responds, it keeps waiting for the ACK to arrive but, because the spoofed IP is not reachable, the waiting period ties up a slot in the system's listen queue. The waiting period can be up to 75 seconds.
ICMP flood attacks: In this attack, attackers send ICMP packets without waiting for the responses; the sheer volume of requests and responses floods the resources of the network device.
Peer-to-Peer attacks: Peer-to-peer networks are deployed among a large number of hosts; once one of the network hubs is compromised, it becomes easy for the attacker to launch a DDoS attack.
Permanent DoS attacks (PDoS): In this attack, instead of focusing on services, attackers focus on sabotaging hardware, making it necessary to replace or reinstall the hardware involved in the attack. Considering the difficult access or remote locations of this hardware, the damage can be very long-lasting. The PDoS attack method is known as 'phlashing' or 'bricking' and has one sole purpose: to infect and permanently damage a device. Nowadays, phlashing attacks specifically target Internet of Things (IoT) connected devices to exploit known vulnerabilities in IoT device security and software.
Application-level flood attacks: This type of attack targets application servers or client computers running applications, taking advantage of existing vulnerabilities and exploiting them to bypass access controls.
Distributed reflection DoS (DRDoS): Using spoofing mechanisms, attackers involve intermediary or secondary victims in the attack process.
A botnet refers to a group of computers which have been infected by malware and have come under the control of a malicious actor. The term botnet is a portmanteau from the words robot and network and each infected device is called a bot. Botnets can be designed to accomplish illegal or malicious tasks including sending spam, stealing data, ransomware, fraudulently clicking on ads or distributed denial-of-service (DDoS) attacks.
The barrier to creating a botnet is also low enough to make it a lucrative business for some software developers, especially in geographic locations where regulation and law enforcement are limited. This combination has led to a proliferation of online services offering attack-for-hire.
A core characteristic of a botnet is the ability to receive updated instructions from the botmaster (the attacker's system). The ability to communicate with each bot in the network allows the attacker to alternate attack vectors, change the targeted IP address, terminate an attack, and perform other customized actions. Botnet designs vary, but the control structures can be broken down into two general categories:
The client/server botnet model
The client/server model mimics the traditional remote workstation workflow where each individual machine connects to a centralized server (or a small number of centralized servers) in order to access information. In this model, each bot will connect to a command-and-control centre (CnC) resource like a web domain or an IRC channel in order to receive instructions. By using these centralized repositories to serve up new commands for the botnet, an attacker simply needs to modify the source material that each botnet consumes from a command centre in order to update instructions to the infected machines. The centralized server in control of the botnet may be a device owned and operated by the attacker, or it may be an infected device.
A number of popular centralized botnet topologies have been observed, including:
Star Network Topology
Multi-Server Network Topology
Hierarchical Network Topology
The peer-to-peer botnet model
To circumvent the vulnerabilities of the client/server model, botnets have more recently been designed using components of decentralized peer-to-peer filesharing. Embedding the control structure inside the botnet eliminates the single-point-of-failure present in a botnet with a centralized server, making mitigation efforts more difficult. P2P bots can be both clients and command centres, working hand-in-hand with their neighbouring nodes to propagate data.
Peer to peer botnets maintain a list of trusted computers with which they can give and receive communications and update their malware. By limiting the number of other machines the bot connects to, each bot is only exposed to adjacent devices, making it harder to track and more difficult to mitigate. Lacking a centralized command server makes a peer-to-peer botnet more vulnerable to control by someone other than the botnet’s creator.
Propagation of Malicious Code
There are three common ways to propagate malicious code. These methods are:
Central source propagation: Requires a central source where the attack toolkit is installed. When an attacker gains access to a machine, the machine connects to this central source and the toolkit is transferred to it.
Back-Chaining Propagation: In this case, the attack toolkit is installed in the attacker’s machine and when they gain access to a machine, they just connect back and download the toolkit to the vulnerable machine.
Autonomous propagation: In this case, attackers send malicious code to the vulnerable machine; this malicious code installs the attack toolkit and searches for other vulnerable machines.
There are numerous bot toolkits and attack tools out there.
There are several ways to detect and prevent DoS/DDoS attacks. The following are common security techniques:
Monitoring and profiling a network is a very easy way of detecting these types of attacks, because the traffic increase can be compared with the average traffic usually seen.
Wavelet-based signal analysis is an automated process of detecting attacks by analysing input signals, used to detect volume-based anomalies. Wavelet analysis evaluates and filters the traffic at a certain scale, while adaptive threshold techniques are used to detect the attacks.
Sequential Change-Point Detection
Change-point detection is an algorithm used to detect DoS attacks. The detection technique uses non-parametric algorithms to detect changes in traffic patterns. This method requires very low computational overhead, making it efficient, resistant to attacks and highly accurate.
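A sequential change-point detector of this kind can be sketched with a one-sided CUSUM test over packet-rate samples; the traffic figures and parameters below are purely illustrative:

```python
def cusum(samples, mean, drift, threshold):
    """One-sided CUSUM detector: returns the index at which the
    cumulative deviation above (mean + drift) exceeds the threshold,
    or None if no change point is found."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - mean - drift)  # accumulate only upward deviation
        if s > threshold:
            return i                        # change point detected here
    return None

# Baseline of ~100 packets/s, then a flood begins at index 5.
traffic = [101, 99, 100, 102, 98, 400, 420, 410]
print(cusum(traffic, mean=100, drift=5, threshold=50))  # 5
```

The drift term absorbs normal fluctuation around the baseline, so the cumulative sum only grows during a sustained anomaly; this is what keeps the computational overhead low, as only one running value per monitored signal is maintained.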
Protect secondary victims
Detect and neutralise handlers
Enable ingress and egress filtering
Deflect attacks by diverting them to honeypots
Mitigate attacks by load balancing
Mitigate attacks by disabling unnecessary services
Enable router throttling
Using a reverse proxy
Absorbing the attack
Intrusion detection systems
Techniques to Defend Against Botnets
RFC 3704 filtering: RFC 3704 is designed for ingress filtering in multi-homed networks to limit DDoS attacks. It denies traffic with spoofed addresses access to the network and helps ensure that traffic is traceable to its source.
Cisco IPS Source IP Reputation Filtering: Provided by Cisco devices, this filters traffic based on a reputation score and other factors. The devices collect real-time information and, in addition, receive threat intelligence updates from Cisco.
Black Hole Filtering: This process silently drops incoming or outgoing traffic without notifying the source that the packets were discarded. Remote Triggered Black Hole Filtering is a routing technique that mitigates DoS attacks by using the Border Gateway Protocol.
Enabling TCP Intercept on Cisco IOS Software: The TCP Intercept feature is used to protect TCP servers from TCP SYN flooding attacks. It prevents the attack by intercepting and validating TCP connections. Connections are matched against extended access lists, preventing invalid connections from reaching the destination server.
Social engineering is a different technique from all the ones seen previously. The main difference is that it does not require a deep understanding of networking, operating systems or the other domains seen so far. Social engineering is a non-technical technique to gather information and gain access to resources. It is very popular because it tries to exploit one of the most vulnerable points in security: the users. People tend to make mistakes, trust people in certain situations, be unaware of the importance of certain information or simply lack the proper training to manage a situation appropriately.
If a user is careless about securing their credentials, any architecture will fail. If a user opens malware emails, the organisation will probably have big problems.
Spreading awareness, training and briefing users about social engineering, social engineering attacks and the impact of their carelessness can help to strengthen the rest of the security measures in place.
Social engineering is considered the art of convincing a target to reveal information through social interactions, whether it is done in the real world or virtually using online social platforms.
As has been said, one of the major vulnerabilities that leads to this type of attack is trust. Opening an email, holding the door open for someone, allowing them to access the facilities, trying to be helpful during a call: all of these actions can be exploited by a social engineer.
A social engineering attack can be divided into four phases:
Research: It includes the collection of information about the target organisation.
Target selection: In this phase, attackers select a concrete employee from the organisation to target specifically.
Relationship: In this phase, attackers build a relationship with the target in such a way that the target cannot identify their real intention, allowing the attackers to earn some trust.
Exploit: Exploit the relationship to gain access to sensitive information or resources.
Types of Social Engineering
There are numerous social engineering techniques, which can be classified as follows:
Human-based social engineering
These techniques require interaction in the real world.
Impersonation means pretending to be someone or something. Attackers pretend to be a legitimate user, an authorised person or a representative of authority. This impersonation can be in person or through different channels like email, telephone, etc.
Eavesdropping and Shoulder Surfing
Eavesdropping is a technique of obtaining information by listening to conversations covertly, or reading or accessing any resource without being noticed.
Shoulder surfing is simply a technique of obtaining information by standing behind targets while they are interacting with sensitive information.
Dumpster diving, already discussed previously, consists of going through the target's trash, such as printer waste, users' desks or the company's rubbish, to find phone bills, contact information, financial information or any other helpful material.
Reverse Social Engineering
In this technique, attackers present themselves as problem fixers for something that is not working right now or may fail in the future. If victims are convinced, they will provide the information the attackers require. The execution of this attack follows these steps:
Attackers damage victim’s systems or identify a vulnerability.
Attackers advertise themselves as authorised people for solving the problem.
Attackers gain the trust of the victim and gain access to sensitive information.
After the relationship has been created, victims often call attackers for help, as trusted contacts.
Piggybacking and Tailgating
Piggybacking is the technique of persuading an authorised person to let the attacker into a restricted area, for example by claiming to have forgotten an ID badge.
Tailgating is the technique of closely following an authorised person through a secured entrance to gain access to a restricted area.
Computer-based social engineering
These techniques require only online interaction.
Phishing techniques send targets a fake email that looks like a legitimate one. When the targets interact with the email, they are redirected to a fake page where they are asked for sensitive information.
A spear-phishing attack targets a specific individual and is tailored to that person. Such attacks are usually more difficult to create, but their success rate is higher.
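One simple, illustrative defence against phishing links is to flag domains that closely resemble, but do not exactly match, well-known ones. A minimal sketch using Python's standard library follows; the domain list and similarity threshold are assumptions for this example, not a production blocklist:

```python
from difflib import SequenceMatcher

KNOWN_DOMAINS = ["paypal.com", "google.com", "microsoft.com"]  # illustrative list

def lookalike(domain, known=KNOWN_DOMAINS, threshold=0.8):
    """Flag a domain that is suspiciously similar to, but not exactly,
    a well-known one -- a common trait of phishing links."""
    for legit in known:
        ratio = SequenceMatcher(None, domain, legit).ratio()
        if domain != legit and ratio >= threshold:
            return legit      # probable imitation of this domain
    return None

print(lookalike("paypa1.com"))   # paypal.com (digit '1' imitating letter 'l')
print(lookalike("example.org"))  # None
```

Real mail filters combine many more signals (sender reputation, URL rewriting, homoglyph tables), but the idea of scoring similarity to trusted names is the same.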
Mobile-based Social Engineering
Publishing Malicious Apps
In this technique, attackers publish fake applications on app stores or similar sources trying to achieve large scale attacks. Usually, these apps are copies of popular apps. Once users provide their sensitive information, the app sends this information to the attackers’ servers.
Re-packaging legitimate Apps
Here, attackers download a legitimate app, re-package it with some malware and re-upload the application to third-party stores. This is particularly common with very popular apps like games or anti-viruses. Users may not be aware that the application is available on legitimate stores, or may, intentionally or accidentally, download a paid app from a free link. When users supply their sensitive information, it is sent to the attackers.
Fake Security Apps
Similar to the previous one but, in this case, the download of the application is usually offered through a pop-up while the user is surfing the Internet.
Not all attacks are conducted by third parties; sometimes a frustrated or unhappy employee perpetrates them. They may act out of revenge, or be paid by a competitor to spy and steal information.
Impersonation on Social Networks
After collecting information about a target, attackers can create fake profiles on social networks to deceive the target's friends or the groups the real user has links to.
After joining a group or contacting some friends or colleagues, attackers can start receiving juicy updates or even ask people for specific, sensitive information.
Identity theft is stealing the identification information of a person and is one of the most popular frauds.
Identity theft can be split into three steps:
Gathering information: Using some or all of the methods seen previously, attackers obtain information like full names, addresses or contact information of a person, in addition to account information, birth information or utility bills.
Fake identity proof: In this phase, attackers try to create fake driving licences, company IDs, identity cards or any other documents that can prove they are who they say they are.
Fraud: Armed with all the fake documentation, attackers can try to obtain credit or a mortgage, go on shopping sprees, access the company premises or use the IDs for future frauds.
Social Engineering Countermeasures
Social engineering can be mitigated by several methods:
Educate yourself: As said before, training and self-awareness for users are among the best things to invest in.
Be aware of the information you are releasing: Everything shared online when talking or creating a profile can be found by attackers.
Determine which of your assets are most valuable to criminals: Companies tend to assess what is valuable for them as a business, but this is not always what is valuable for attackers. Attackers want anything they can monetise.
Write a policy and back it up with good awareness training: Write a security policy for protecting valuable data assets. Then back up that policy with good awareness training.
Keep your software up to date: Hackers using social engineering techniques are often seeking to determine whether you are running unpatched, out-of-date software they can exploit.
Give employees a sense of ownership when it comes to security: Employees need to feel involved, not just given cold instructions to follow; they need to feel that security is important to them and that they contribute to it.
When asked for information, consider whether the person you are talking to deserves the information they are asking about: Before answering any questions, users should consider whether the person asking needs to have this information and is allowed to have it.
Watch for questions that do not fit the pretext: Users should pay attention to questions or requests for information that do not fit the profile, or the expected behaviour, of the person asking.
Stick to your guns: Common sense is the best defence. If, as a user, something feels off, trust the feeling; there will always be time to supply the information after the pertinent checks have been done.
A packet sniffer is a utility that listens on a network for transferred data. Packet sniffing allows individuals to capture data as it is transmitted over a network. This technique is used by network professionals to diagnose network issues, and by malicious users to capture unencrypted data, like passwords and usernames. Using sniffers, attackers can gain knowledge of information that might be helpful in further attacks.
To be able to use sniffing techniques, promiscuous mode needs to be enabled in the network interface; this allows the capture of all the traffic on the network, not just the traffic directed to the interface. Promiscuous mode is a mode in which the network interface card (NIC) accepts every packet it receives, rather than only those addressed to it. Using these techniques, anyone can sniff traffic on a LAN.
There are two types of sniffing techniques:
Active sniffing: Active sniffing is the type of sniffing in which attackers have to send additional packets to the connected device, such as a switch, to start receiving packets. As is known, a unicast packet from the switch is transmitted to a specific port only. Attackers use techniques such as MAC flooding, DHCP attacks, DNS poisoning, switch port stealing, and ARP poisoning and spoofing to monitor traffic passing through the switch.
Passive sniffing: Passive sniffing is the type of sniffing in which there is no need to send additional packets or interfere with the device, such as a hub, to receive packets. As is known, a hub broadcasts every packet to all of its ports, which helps the attacker monitor all traffic passing through the hub without any effort.
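Whichever technique is used, once frames are captured the sniffer has to decode them. A minimal sketch of parsing an Ethernet header from raw bytes follows; the sample frame is hand-crafted for illustration rather than captured from a live interface:

```python
import struct

def parse_ethernet(frame: bytes):
    """Decode the 14-byte Ethernet header of a captured frame into
    destination MAC, source MAC and EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda b: ":".join(f"{x:02x}" for x in b)
    return fmt(dst), fmt(src), hex(ethertype)

# A hand-crafted frame: broadcast destination, arbitrary source, ARP (0x0806).
frame = bytes.fromhex("ffffffffffff" "000d83b1c08e" "0806") + b"\x00" * 28
print(parse_ethernet(frame))
# → ('ff:ff:ff:ff:ff:ff', '00:0d:83:b1:c0:8e', '0x806')
```

A real sniffer would obtain `frame` from a raw socket or a capture library while the NIC is in promiscuous mode; the decoding step, however, is exactly this kind of fixed-offset unpacking.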
Two different types of network analysers can be found. Those based on hardware and those based on software.
Hardware Analysers: These are physical pieces of equipment that can be plugged into a network to analyse the traffic without interfering with it. The major advantages they offer are mobility, flexibility and high throughput.
Software Analysers: Also called switched port analysers (SPAN). Easy to configure and start, they are used to diagnose most enterprise network problems; an administrator can simply run one when a user is having problems. They can be configured to monitor inbound traffic, outbound traffic or both. They have some limitations: some types of traffic cannot be forwarded, such as BPDUs and CDP, DTP, VTP and STP frames, and if a source port with higher bandwidth than the destination port is used, some of the traffic can be dropped when the link gets congested.
Wiretapping is a type of sniffing. It may sound like an old-fashioned thing seen in films, but multiple governmental, security or law enforcement agencies use it to monitor third-party conversations, and it usually requires a court order or some other kind of legal permission. Attackers, however, can do the same without the legal considerations. Wiretapping is basically an electrical tap on the telephone line.
Wiretapping can be classified into its own two types:
Active wiretapping: This type includes monitoring, recording and possibly altering a communication.
Passive wiretapping: This type includes only monitoring and recording a communication.
When talking about active sniffing, some techniques that help attackers to generate traffic in a network and gather information have been named. It is time to describe these techniques:
MAC stands for Media Access Control. A MAC address is a hardware identification number that uniquely identifies each device on a network. The MAC address is burned into every network card, such as an Ethernet card or Wi-Fi card, during manufacture and therefore cannot be changed (although it can be masked, or spoofed, in software).
Because there are millions of networkable devices in existence, and each device needs to have a unique MAC address, there must be a very wide range of possible addresses. For this reason, MAC addresses are made up of six two-digit hexadecimal numbers, separated by colons. For example, an Ethernet card may have a MAC address of 00:0d:83:b1:c0:8e.
All devices on the same network subnet have different MAC addresses. MAC addresses are very useful in diagnosing network issues, such as problems with IP addresses. MAC addresses are useful for network diagnosis because they never change, as opposed to a dynamic IP address, which can change from time to time. For a network administrator, that makes a MAC address a more reliable way to identify senders and receivers of data on the network.
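As a small illustration, the format described above can be validated, and the manufacturer prefix extracted, with a few lines of Python; the helper names are invented for this example:

```python
import re

# Six colon-separated two-digit hexadecimal groups, e.g. 00:0d:83:b1:c0:8e.
MAC_RE = re.compile(r"^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")

def is_valid_mac(addr: str) -> bool:
    """Check that a string matches the colon-separated MAC format."""
    return bool(MAC_RE.match(addr))

def oui(addr: str) -> str:
    """The first three octets form the Organisationally Unique
    Identifier (OUI), which identifies the card's manufacturer."""
    return addr.lower()[:8]

print(is_valid_mac("00:0d:83:b1:c0:8e"))  # True
print(oui("00:0d:83:b1:c0:8e"))           # 00:0d:83
```

Looking up the OUI against the IEEE registration list is how sniffers such as Wireshark display a vendor name next to each captured address.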
Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol address (IP address) to a physical machine address that is recognized in the local network. A table is used to maintain a correlation between each MAC address and its corresponding IP address. ARP provides the protocol rules for making this correlation and providing address conversion in both directions.
Content Addressable Memory (CAM) table is a system memory construct used by Ethernet switch logic which stores information such as MAC addresses available on physical ports with their associated VLAN Parameters. The CAM table, or content addressable memory table, is present in all switches for layer 2 switching. This allows switches to facilitate communications between connected stations at high speed and in full-duplex regardless of how many devices are connected to the switch. Switches learn MAC addresses from the source address of Ethernet frames on the ports, such as Address Resolution Protocol (ARP) response packets.
MAC flooding is an attack method intended to compromise the security of network switches. Switches maintain a table structure called the MAC table. As has already been seen, hubs broadcast data to the entire network, allowing it to reach all hosts, but switches send data only to the specific machine it is intended for. This goal is achieved by the use of MAC tables.
The aim of MAC flooding is to take down this MAC table. In a typical MAC flooding attack, the attacker sends a huge number of Ethernet frames, each with a different sender address. The intention is to consume the memory the switch uses to store the MAC address table, so that the MAC addresses of legitimate users are pushed out of the table. The switch can then no longer deliver incoming data to the destination system, and a considerable number of incoming frames are flooded out of all ports.
Once the MAC address table is full and unable to save new MAC addresses, the switch enters a fail-open mode and starts behaving like a network hub: it forwards the incoming data out of all ports, broadcasting it.
As the attacker is part of the network, the attacker also gets the data packets intended for the victim machine and is therefore able to steal sensitive data from the communications between the victim and other computers. Usually, a packet analyser is used to capture this sensitive data.
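The fail-open behaviour described above can be sketched with a toy switch model; the capacity and addresses are illustrative, and real CAM tables hold thousands of entries and age them out over time:

```python
class Switch:
    """Toy switch with a fixed-capacity CAM table. When the table is
    full, unknown destinations are flooded to all ports like a hub."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cam = {}                      # MAC -> port

    def learn(self, mac, port):
        # A new MAC is only learned while there is room in the table.
        if mac in self.cam or len(self.cam) < self.capacity:
            self.cam[mac] = port

    def forward(self, dst_mac):
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]     # unicast to the known port
        return ["all-ports"]               # unknown: flood (hub behaviour)

sw = Switch(capacity=100)
# The attacker on port 2 fills the CAM table with bogus source MACs
# before the victim's address can be (re-)learned, e.g. after it ages out.
for i in range(200):
    sw.learn(f"02:00:00:00:{i // 256:02x}:{i % 256:02x}", 2)
sw.learn("aa:aa:aa:aa:aa:aa", 1)           # no room left: not learned
print(sw.forward("aa:aa:aa:aa:aa:aa"))     # ['all-ports']
```

Because the victim's frames are now flooded out of every port, the attacker's sniffer on port 2 receives them, which is exactly the outcome the flood was aiming for.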
Switch Port Stealing
This attack uses MAC flooding to sniff traffic between two hosts. Switch port stealing works by stealing the switch port of the target host. Switches learn to bind MAC addresses to each port by observing the source MAC addresses in the packets that arrive from each port. The attacker wanting to sniff the traffic steals the switch port of the target host so that the traffic goes through the attacker first, and then to the target host.
The attack starts with the attacker flooding the switch with forged gratuitous ARP packets whose source MAC address is that of the target host and whose destination MAC address is that of the attacker. The flooding process described here is different from the one used in CAM table flooding. Since the destination MAC address of each flooding packet is the attacker's MAC address, the switch will not forward these packets to other ports, meaning they will not be seen by other hosts on the network. Now a race condition exists, because the target host will send packets too. The switch will see packets with the same source MAC address on two different ports and will constantly change the binding of the MAC address to the port. Remember that the switch binds a MAC address to a single port. If the attacker is fast enough, packets intended for the target host will be sent to the attacker's switch port and not to the target host. The attacker has now stolen the target host's switch port. When a packet arrives at the attacker, the attacker performs an ARP request asking for the target host's IP address. Next, the attacker stops the flooding and waits for the ARP reply. When the attacker receives the reply, it means that the target host's switch port has been restored to its original binding. Now, the attacker can sniff the packet, forward it to the target host, and restart the flooding process while waiting for new packets.
Defence Against MAC Attacks
There are several ways to mitigate these packet sniffing attacks. The first of these actions is to enable port security on the switch. Port security is a feature found on high-end switches that ties a physical port to a MAC address. This allows you to either specify one or more MAC addresses for each port or learn a certain number of MAC addresses per port. A change in the specified MAC address for a port or flooding of a port can be controlled in many different ways through switch administration.
An important fact to know is that port security capabilities are platform-dependent, meaning that different switch manufacturers offer different capabilities.
The second way to mitigate sniffing is through the use of static ARP entries. Static ARP entries are permanent entries that will not time out from the ARP cache. This method does have a drawback though. Administrators have to create new entries on every host on the network every time a new host is connected, or when a network card is replaced.
The final method of defence is through detection. Intrusion detection systems can be configured to listen for high amounts of ARP traffic. There are also tools specifically designed to listen for ARP replies on networks. This method is prone to reporting false positives though. It should be remembered that detection is always an important step in mitigation.
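The detection approach mentioned above can be sketched as a simple monitor that tracks the IP-to-MAC bindings seen in ARP replies and flags any binding that changes; the addresses below are illustrative:

```python
def arp_monitor(observations):
    """Track IP-to-MAC bindings seen in ARP replies and report any IP
    whose MAC suddenly changes -- a classic sign of ARP poisoning."""
    bindings, alerts = {}, []
    for ip, mac in observations:
        if ip in bindings and bindings[ip] != mac:
            alerts.append((ip, bindings[ip], mac))  # (ip, old MAC, new MAC)
        bindings[ip] = mac
    return alerts

replies = [
    ("192.168.1.1", "aa:aa:aa:aa:aa:aa"),   # gateway's real MAC
    ("192.168.1.7", "bb:bb:bb:bb:bb:bb"),
    ("192.168.1.1", "cc:cc:cc:cc:cc:cc"),   # attacker claims the gateway IP
]
print(arp_monitor(replies))
# → [('192.168.1.1', 'aa:aa:aa:aa:aa:aa', 'cc:cc:cc:cc:cc:cc')]
```

As noted above, this style of detection is prone to false positives (a legitimately replaced network card also changes the binding), so alerts should trigger investigation rather than automatic blocking.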
DHCP stands for Dynamic Host Configuration Protocol. It is a network protocol used on IP networks where a DHCP server automatically assigns an IP address and other information to each host on the network so they can communicate efficiently with other endpoints.
In addition to the IP address, DHCP also assigns the subnet mask, a default gateway address, a domain name server (DNS) address and other pertinent configuration parameters. Request for Comments (RFC) 2131 and 2132 define DHCP as an Internet Engineering Task Force (IETF) standard based on the BOOTP protocol.
When working with DHCP, it is important to understand all of the components. Below is a list of them and what they do:
DHCP server: A networked device running the DHCP service that holds IP addresses and related configuration information. This is most typically a server or a router but could be anything that acts as a host, such as an SD-WAN appliance.
DHCP client: The endpoint that receives configuration information from a DHCP server. This can be a computer, mobile device, IoT endpoint or anything else that requires connectivity to the network. Most are configured to receive DHCP information by default.
IP address pool: The range of addresses that are available to DHCP clients. Addresses are typically handed out sequentially from lowest to highest.
Subnet: IP networks can be partitioned into segments known as subnets. Subnets help keep networks manageable.
Lease: The length of time for which a DHCP client holds the IP address information. When a lease expires, the client must renew it.
DHCP relay: A router or host that listens for client messages being broadcast on that network and then forwards them to a configured server. The server then sends responses back to the relay agent that passes them along to the client. This can be used to centralize DHCP servers instead of having a server on each subnet.
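To make the protocol format concrete, the sketch below packs a minimal DHCPDISCOVER message in the RFC 2131 wire layout. The MAC address and transaction ID are made-up example values:

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int) -> bytes:
    """Build a minimal DHCPDISCOVER message (RFC 2131 wire format)."""
    header = struct.pack(
        "!BBBBIHH4s4s4s4s16s64s128s",
        1,            # op: BOOTREQUEST
        1,            # htype: Ethernet
        6,            # hlen: MAC address length
        0,            # hops
        xid,          # transaction ID chosen by the client
        0,            # secs
        0x8000,       # flags: broadcast
        b"\x00" * 4,  # ciaddr (client has no IP yet)
        b"\x00" * 4,  # yiaddr
        b"\x00" * 4,  # siaddr
        b"\x00" * 4,  # giaddr
        mac.ljust(16, b"\x00"),  # chaddr
        b"\x00" * 64,            # sname
        b"\x00" * 128,           # file
    )
    magic = b"\x63\x82\x53\x63"          # DHCP magic cookie
    options = b"\x35\x01\x01" + b"\xff"  # option 53 = DHCPDISCOVER, then END
    return header + magic + options

pkt = build_dhcp_discover(b"\xaa\xbb\xcc\xdd\xee\xff", 0x12345678)
print(len(pkt))  # 244 (236-byte fixed header + cookie + options)
```

A real client would broadcast this over UDP port 67 and then continue the OFFER/REQUEST/ACK exchange; this fragment only shows the message layout.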
DHCP Starvation Attack
A DHCP starvation attack is an attack vector in which an attacker broadcasts a large number of DHCP request packets with spoofed MAC addresses. The goal is to register the entire range of IP addresses that the DHCP server can award, so that the automatic assignment of network addresses to other computers becomes impossible. It is a sort of DHCP flooding or DHCP denial-of-service attack: all the IP addresses of the pool are consumed by the attacker, and no new client is able to obtain a lease from the DHCP server.
Rogue DHCP Server Attack
A rogue DHCP server is a DHCP server that is on a network but has not been authorized by the network administrator. An attacker sets one up so that, once the legitimate pool has been starved, victims on the same network are forced to obtain their configuration from the attacker's malicious DHCP server.
Preventing DHCP Starvation Attacks and Rogue Servers
Port security can currently prevent a DHCP starvation attack launched from a PC connected to a switch using a tool such as Gobbler. However, the attack fails more because of a limitation of the tool than because of the mitigation offered by port security: Gobbler uses a different source MAC address for each DHCP request it generates, which is exactly the behaviour that port security can block.
Rogue DHCP servers can be mitigated by the DHCP snooping feature. DHCP snooping is a feature available on switches. In order to defend against rogue DHCP servers, configure DHCP snooping on the port on which the valid DHCP server is connected. Once you configure DHCP snooping, it does not allow other ports on the switch to respond to DHCP discover packets sent by clients. Thus, even if an attacker manages to build a rogue DHCP server and connects to the switch, he or she cannot respond to DHCP discover packets.
Address Resolution Protocol (ARP) is a network protocol used to find the hardware (MAC) address of a host from an IP address. ARP is used on Ethernet LANs because hosts that want to communicate with each other need to know their respective MAC addresses. It is a request-reply protocol; ARP request messages are used to request the MAC address, while ARP reply messages are used to send the requested MAC address.
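As an illustration of the request-reply format, the following sketch packs the 28-byte ARP request payload defined in RFC 826 (the Ethernet framing around it is omitted). The MAC and IP addresses are hypothetical:

```python
import socket
import struct

def build_arp_request(src_mac: bytes, src_ip: str, target_ip: str) -> bytes:
    """Build a 28-byte ARP request payload (RFC 826 layout)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                           # htype: Ethernet
        0x0800,                      # ptype: IPv4
        6, 4,                        # hlen, plen
        1,                           # oper: 1 = request, 2 = reply
        src_mac,                     # sender hardware address
        socket.inet_aton(src_ip),    # sender protocol address
        b"\x00" * 6,                 # target hardware address (unknown, being asked)
        socket.inet_aton(target_ip)  # target protocol address
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(pkt))  # 28
```

The matching ARP reply simply swaps the sender/target fields, sets oper to 2, and fills in the requested MAC address.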
ARP Poisoning Attack
Address Resolution Protocol (ARP) poisoning or ARP spoofing is when an attacker sends falsified ARP messages over a local area network (LAN) to link an attacker’s MAC address with the IP address of a legitimate computer or server on the network. Once the attacker’s MAC address is linked to an authentic IP address, the attacker can receive any messages directed to the legitimate MAC address. As a result, the attacker can intercept, modify or block communications to the legitimate MAC address.
ARP spoofing can be used for man-in-the-middle attacks, session hijacking and denial-of-service attacks, among others.
Preventing ARP spoofing Attack
Rely on Virtual Private Networks: One way to prevent ARP spoofing from happening in the first place is to rely on Virtual Private Networks (VPNs). When you connect to the internet, you typically first connect to an Internet Service Provider (ISP) in order to connect to another website. However, when you use a VPN, you’re using an encrypted tunnel that largely blocks your activity from ARP spoofing hackers. Both the method by which you’re conducting the online activity and the data that goes through it are encrypted.
Use a Static ARP: Creating a static ARP entry in your server can help reduce the risk of spoofing. If you have two hosts that regularly communicate with one another, setting up a static ARP entry creates a permanent entry in your ARP cache that can help add a layer of protection from spoofing.
Get a Detection Tool: Even with ARP knowledge and techniques in place, it is not always possible to detect a spoofing attack. Hackers are becoming increasingly stealthy at remaining undetected and use new technologies and tools to stay ahead of their victims. Instead of strictly focusing on prevention, make sure you have a detection method in place. Using a third-party detection tool can help you see when a spoofing attack is happening so you can work on stopping it in its tracks.
Avoid Trust Relationships: Some systems rely on IP trust relationships that will automatically connect to other devices in order to transmit and share information. However, you should completely avoid relying on IP trust relationships in your business. When your devices use IP addresses only to verify another machine or user’s identity, it is easy for a hacker to infiltrate and spoof your ARP.
Set-Up Packet Filtering: Some ARP attackers will send ARP packets across the LAN that contain an attacker’s MAC address and the victim’s IP address. Once the packets have been sent, an attacker can start receiving data or wait and remain relatively undetected as they ramp up to launch a follow-up attack. And when a malicious packet has infiltrated your system, it can be difficult to stop a follow-up attack and ensure your system is clean.
Look at Your Malware Monitoring Settings: The antivirus and malware tools you already use may offer some recourse against ARP spoofing. Look at your malware monitoring settings and look for categories and selections that monitor for suspicious ARP traffic from endpoints. You should also enable any ARP spoofing prevention options and stop any endpoint processes that send suspicious ARP traffic.
Run Spoofing Attacks: Identification and prevention are key to preventing spoofing attacks. However, you can increase your chances of staying safe and protecting your data by running your own spoofing attacks. Work with your security officer or IT team to run a spoofing attack to see if the techniques you’re using are enough to keep your system and data safe.
A MAC spoofing attack is where the intruder sniffs the network for valid MAC addresses and attempts to act as one of the valid MAC addresses. The intruder then presents itself as the default gateway and copies all of the data forwarded to the default gateway without being detected. This provides the intruder valuable details about applications in use and destination host IP addresses. The attack also causes the legitimate CAM table entry on the switch to be overwritten with the spoofed one.
MAC address spoofing is used to impersonate legitimate devices, circumvent existing security mechanisms and to hide malicious intent. It can be an effective attack on defensive strategies where user and device identity provide a basis for access control policies.
In a typical MAC spoofing sequence, the attacker:
Identifies the MAC address of a device with authorized access to the network.
Connects a computer to the network, changing its MAC address to match (impersonate) that of the authorized device.
Exploits security controls based on static MAC addresses to access network segments, applications and sensitive information.
Non-legitimate uses of MAC spoofing: An example of an illegitimate use is when an attacker changes the MAC address of his station to enter a target network as an authorized user, taking over the identity of a computer that is authorized to function on the network. With this new identity, an attacker can wreak havoc: for example, launching denial-of-service attacks, or bypassing access control mechanisms to advance further into the network. An attacker might also change his MAC address in an attempt to evade network intrusion detection systems and become invisible to security measures, allowing more time to act without detection.
Legitimate uses of MAC spoofing: An example of a legitimate use of MAC spoofing is changing the function of a single computer from router to computer and back to router through MAC spoofing. If you only have a single public IP, you can only hook up one unit directly (PC or router). If one has two WAN IPs, the MAC address of the two devices must be different.
How to Defend Against It
There are many tools and practices that organizations can employ to reduce the threat of spoofing attacks. Common measures that organizations can take for spoofing attack prevention include:
Packet filtering: Packet filters inspect packets as they are transmitted across a network. Packet filters are useful in IP address spoofing attack prevention because they are capable of filtering out and blocking packets with conflicting source address information (packets from outside the network that show source addresses from inside the network and vice-versa).
Avoid trust relationships: Organizations should develop protocols that rely on trust relationships as little as possible. It is significantly easier for attackers to run spoofing attacks when trust relationships are in place because trust relationships only use IP addresses for authentication.
Use spoofing detection software: There are many programs available that help organizations detect spoofing attacks, particularly ARP Spoofing. These programs work by inspecting and certifying data before it is transmitted and blocking data that appears to be spoofed.
Use cryptographic network protocols: Transport Layer Security (TLS), Secure Shell (SSH), HTTP Secure (HTTPS) and other secure communications protocols bolster spoofing attack prevention efforts by encrypting data before it is sent and authenticating data as it is received.
Attackers can poison a DNS cache by tricking DNS resolvers into caching false information, with the result that the resolver sends the wrong IP address to clients, and users attempting to navigate to a website will be directed to the wrong place.
DNS cache poisoning is the act of entering false information into a DNS cache so that DNS queries return an incorrect response and users are directed to the wrong websites. DNS cache poisoning is also known as ‘DNS spoofing.’ IP addresses are the ‘room numbers’ of the Internet, enabling web traffic to arrive in the right places. DNS resolver caches are the ‘campus directory,’ and when they store faulty information, traffic goes to the wrong places until the cached information is corrected. (Note that this does not actually disconnect the real websites from their real IP addresses.)
Because there is typically no way for DNS resolvers to verify the data in their caches, incorrect DNS information remains in the cache until the time to live (TTL) expires, or until it is removed manually. A number of vulnerabilities make DNS poisoning possible, but the chief problem is that DNS was built for a much smaller Internet and based on a principle of trust (much like BGP). A more secure DNS protocol called DNSSEC aims to solve some of these problems, but it has not been widely adopted yet.
Attackers can poison DNS caches by impersonating DNS nameservers, making a request to a DNS resolver, and then forging the reply when the DNS resolver queries a nameserver. This is possible because DNS servers use UDP instead of TCP, and because currently there is no verification for DNS information.
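The sketch below builds a minimal DNS query in the RFC 1035 wire format, which shows why forgery is feasible: absent source-port randomization, the only secret an off-path attacker must guess to fake a reply is the 16-bit transaction ID. The domain name is just an example:

```python
import random
import struct

def build_dns_query(name: str, txid: int) -> bytes:
    """Build a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# The transaction ID is only 16 bits: an attacker who can trigger a query has
# a 1-in-65536 chance per forged reply of matching it.
txid = random.getrandbits(16)
query = build_dns_query("example.com", txid)
print(len(query))  # 29
```

A forged response only needs to echo this ID (and the question) to be accepted by a resolver that performs no further validation, which is exactly the gap DNSSEC is meant to close.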
Types of DNS Spoofing
Intranet DNS Spoofing: Intranet DNS spoofing is normally performed over a switched LAN by attackers with the help of ARP poisoning techniques. Attackers sniff the packets, extract the ID of DNS requests and reply with a fake IP translation, directing the traffic to a malicious site. Attackers must be quick enough to respond before the legitimate DNS server resolves the query.
Internet DNS Spoofing: Internet DNS spoofing is performed by replacing the DNS configuration on the target machine. All DNS queries are then directed to a malicious DNS server controlled by the attacker, directing the traffic to malicious sites. Usually, internet DNS spoofing is performed by deploying a trojan or otherwise infecting the target.
Proxy Server DNS Poisoning: Similar to the previous one, proxy server DNS poisoning is performed by replacing the DNS configuration from the web browser of a target. All web queries will be directed to a malicious proxy server controlled by the attacker.
DNS Cache Poisoning: Users tend to use the DNS servers provided by Internet Service Providers (ISPs) but, organisations tend to have their own servers to improve performance by caching frequent or previously resolved queries. Attackers can add or alter entries in the DNS record cache to redirect users to malicious sites. When an internal DNS server does not validate DNS responses against the authoritative DNS server, poisoned entries remain cached locally and are served to users.
How to Defend Against It
Audit your DNS zones
First things first. The most important thing you will have to review apart from the DNS server main configuration is your DNS zone.
As time passes, we tend to forget about test domain names or subdomains that sometimes run outdated software, expose unrestricted areas vulnerable to attack, or have an A record pointing at an internal/reserved intranet area by mistake.
Start exploring all your DNS public records using SecurityTrails: review all your zones, records and IPs. Audit your A, CNAME and MX records today. It is easy, as we have seen in past blog posts, like when we explored Google DNS or Microsoft subdomains.
Keep your DNS servers up-to-date
Running your own Name Servers gives you the ability to configure, test and try things that you may not be able to on private DNS servers like the ones your hosting provider gives you, or when you sign up for an account at Cloudflare.
When you decide to run your own DNS servers, probably using software like BIND, PowerDNS, NSD, or Microsoft DNS, it is crucial, as with the rest of your operating system software, to keep these packages up-to-date in order to prevent service exploits targeting bugs and vulnerabilities.
The latest versions of all popular DNS servers include patches against known vulnerabilities, as well as support for security technologies like DNSSEC and RRL (Response Rate Limiting) that are pretty useful in preventing DNS reflection attacks.
Hide BIND version
While some people do not consider this a security practice, security through obscurity is just another way to hide information from attackers while they are performing their initial security audit against your server.
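In BIND, for example, this can be done with the version directive in named.conf; a minimal fragment might look like this (the replacement string is arbitrary):

```
options {
    // Return a generic string instead of the real BIND version
    version "not currently available";
};
```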
Restrict Zone Transfers
A DNS zone transfer is just a copy of the DNS zone, and while this technique is often used by slave name servers to query master DNS servers, sometimes attackers can try to perform a DNS zone transfer in order to have a better understanding of your network topology.
One of the things that can be done to prevent these kinds of tricks is to restrict which DNS servers are allowed to perform a zone transfer or at least limit the allowed IP addresses that can make such requests.
That is why limiting zone transfers is one of the best ways to protect your precious DNS zone information.
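In BIND, for instance, zone transfers can be restricted per zone with the allow-transfer directive. A sketch, with a hypothetical zone name and secondary-server addresses:

```
zone "example.com" {
    type master;
    file "example.com.zone";
    // Only the listed secondary servers may request zone transfers (AXFR)
    allow-transfer { 203.0.113.2; 203.0.113.3; };
};
```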
Disable DNS recursion to prevent DNS poisoning attacks
DNS recursion is enabled by default on most BIND servers on all major Linux distributions, and this can lead to serious security issues, like DNS poisoning attacks, among others.
When DNS recursion is enabled in your server configuration, the DNS server allows recursive queries for domains that are not actually master zones located on that name server. This simply allows third-party hosts to query the name server however they want.
This setting can also increase your exposure to DNS amplification attacks. That is why you should always disable DNS recursion on your DNS servers if you do not plan to receive recursive DNS queries.
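In BIND, recursion can be disabled, or restricted to trusted clients, in the options block of named.conf. A sketch, with a hypothetical internal network range:

```
options {
    // Serve only authoritative answers; refuse recursive lookups
    recursion no;
    // Alternatively, allow recursion only for trusted internal clients:
    // allow-recursion { 192.168.0.0/24; };
};
```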
Use isolated DNS servers
It is possible to run your own DNS server on the same dedicated server or cloud instance where you host the rest of your web services, such as an application server, HTTP server or database server.
This is a common practice among small companies who often store all their server services in a single cPanel or Plesk box.
If you decide to put all your eggs in one basket, you must ensure that this box has a pretty solid server hardening for each daemon you are running, as well as for the applications running inside of the operating system.
Although the best you can do is to use your own dedicated DNS server environment, it does not matter if it is based on Cloud or Dedicated servers as long as it is 100% dedicated to DNS services only.
Having this DNS server isolated from the rest of your application servers will help to reduce the chance of getting hit by web application attacks.
Close all unneeded server ports, stop unwanted OS services, filter your traffic using a firewall, and only allow basic services such as SSH and the DNS server itself. This will help a lot to mitigate the chances of a DNS attack.
Use a DDOS mitigation provider
While small and midsize DoS and DDoS attacks can be mitigated by tweaking network filters, HTTP services, and kernel responses from the operating system, when a big DDoS comes after you, only a few data centres will be able to help their customers with a real anti-DDoS service.
If you run your own DNS servers and you are under a massive DDoS attack, the bandwidth usage or packets per second will probably cause you significant downtime, if your service provider does not apply a null route to your IP addresses first.
That is why the best thing you can do is to hire an anti-DDOS specialized service like Cloudflare, Incapsula or Akamai to mitigate DDOS in the best possible way and keep your DNS servers secure and responding well at all times.
If you are not running your own DNS servers and decide to use a third-party DNS managed service like Cloudflare DNS or DNSMadeEasy, you can be sure their servers are pretty well secured.
However, no one (not even their CEO) is safe from an account compromise, although, to be honest, the probabilities are very low.
And, even in the worst case, if an attacker gains access to your username and password, you can still keep your account under control if you are using two-factor authentication.
There are some sniffers out there but, probably, the best known is Wireshark. It is what is called a network protocol analyser. It is a free and open-source tool, and it allows multiple filter options when capturing traffic.
Best practices against sniffing include the following approaches to protect the network traffic:
Using HTTPS instead of HTTP
Using SFTP instead of FTP
Use a switch instead of a hub
Configure DHCP snooping
Configure Dynamic ARP inspection
Configure source guard
Use sniffing detection tools to detect NIC functioning in a promiscuous mode
Use strong encryption protocols
Sniffing Detection Techniques
Ping method: The ping technique can be used to detect sniffers but, being an older technique, it is not very reliable. A ping request is sent to the suspect IP address with a deliberately wrong MAC address; a normal NIC drops the frame, but a host running in promiscuous mode will still respond.
ARP method: Using ARP, sniffers can be detected with the help of the cache. By sending a non-broadcast ARP packet to the suspect, its MAC address will be cached by a NIC running in promiscuous mode. The next step is to send a broadcast ping with the spoofed address. Only a machine running in promiscuous mode will be able to reply to the packet, as it has already learned the actual MAC from the sniffed non-broadcast ARP packet.
Promiscuous Detection Tool: Promiscuous detection tools like Nmap can also be used to detect NIC running in promiscuous mode.
The index of this series of articles can be found here.
The term malware is a contraction of malicious software. Put simply, malware is any piece of software that was written with the intent of damaging devices, stealing data, and generally causing a mess.
Malware is often created by teams of hackers: usually, they’re just looking to make money, either by spreading the malware themselves or selling it to the highest bidder on the Dark Web. However, there can be other reasons for creating malware too — it can be used as a tool for protest, a way to test security, or even as weapons of war between governments. But no matter why or how malware comes to be, it is always bad news when it winds up on your PC.
Malware is the collective name for several malicious software variants, including viruses, ransomware and spyware. Malware is typically delivered in the form of a link or file over email and requires the user to click on the link or open the file to execute the malware.
Each type of malware has its own unique way of causing havoc, and, as it has been said before, most rely on user action of some kind. Some strains are delivered over email via a link or executable file. Others are delivered via instant messaging or social media. Even mobile phones are vulnerable to attack. It is essential that organizations are aware of all vulnerabilities so they can lay down an effective line of defence.
Some of the methods that are popularly used for the propagation of malware are:
Free Software: Here, the term free software refers to licensed software that can be found for free, usually cracked or bundled with extra files to crack it. Such downloads often contain malicious software or, sometimes, are nothing but the malware itself.
File-Sharing Services: Torrent servers and peer-to-peer file-sharing services are flooded with malware; legitimate files can be infected and re-uploaded in an attempt to ensnare innocent people and their systems.
Removable Media: Malware can also propagate through removable media like USBs. Any media device can contain hidden malware, especially if its origin is unknown.
Email Communication: Nowadays, emails are one of the most popular ways of communication, especially in organisations. Malware can be sent via email in the form of an attachment or a link.
Not Using a Firewall or Anti-virus: Not exactly a delivery method but, rather, the absence of systems that could prevent known malware from being downloaded or installed on the target’s machine.
Types of Malware
Malware is a very broad category, and what malware does or how malware works changes from file to file. The following is a list of common types of malware, but it is hardly exhaustive:
This kind of malware disguises itself as legitimate software or is hidden in legitimate software that has been tampered with. It tends to act discreetly and create backdoors in your security to let other malware in. But there are some other uses for trojans:
Gaining unauthorised access
Infect connected devices
Using victim for spamming
Using victim as botnet
Download other malicious software
The infection process using trojans comprises several steps. This combination of steps is taken by attackers to infect target systems.
Creation of a trojan using some kind of construction kit: A construction kit allows attackers to create customised trojans tailored to the target. Besides, construction kits help to avoid detection by protection tools. Some of these kits use crypters to encrypt, obfuscate and manipulate the malware, making detection more difficult.
Create a dropper: A dropper is a software or program that is specially designed to deliver a payload on the target machine. Its main objective is to install malware code on the victim’s machine without alerting the user or being detected.
Create a wrapper: A wrapper is a non-malicious file that binds the malicious file to it in order to propagate the trojan and try to avoid detection. Wrappers are usually executable files such as games, music or video files.
Propagate the trojan: Attackers just need to upload their trojans to servers from which they will be downloaded when the victims click on a link.
Execute the dropper: Once the trojan has been downloaded, it installs itself and executes whatever procedure it has been prepared for.
Types of Trojans
There are multiple types of trojans; some of them are:
Command Shell Trojans: They provide a remote shell on the target’s computer. Netcat is a very well-known tool in this category.
Defacement Trojans: This type of trojan changes the appearance of the existing software, usually text and images, to leave their mark. The most well-known cases are web defacements.
HTTP/HTTPS Trojans: This kind of trojan bypasses the firewall and opens a tunnel to communicate with the attacker.
Botnet Trojans: These are trojans designed to create a large-scale group of infected machines that can work together to achieve future objectives. In this category fall the DoS/DDoS trojans. These trojans run attacks that bring networks to their knees by flooding them with useless traffic. Many DoS/DDoS attacks, such as the Ping of Death and Teardrop attacks, exploit limitations in the TCP/IP protocols.
Proxy Trojans: This kind of trojan is designed to use the victim’s computer as a proxy server. This lets the attacker do anything from your computer, including credit card fraud and other illegal activities and even use your system to launch malicious attacks against other networks.
Remote Access Trojans: Abbreviated as RATs. This type of trojan is designed to provide attackers with complete control of the victim’s system. Attackers usually hide them in games and other small programs that unsuspecting users then execute on their systems.
Data Sending Trojans: This type of trojan is designed to provide the attacker with sensitive data such as passwords, credit card information, log files, e-mail address or IM contact lists. These Trojans can look for specific pre-defined data (e.g., just credit card information or passwords), or they install a keylogger and send all recorded keystrokes back to the attacker.
And, much more.
There are actually two areas to consider where protection is concerned: protective tools and user vigilance. The first is often the easiest to implement, simply because best-in-class protective software that manages and updates itself can often be set and forgotten. Users, on the other hand, can be prone to temptation (“check out this cool website!“) or easily led by other emotions such as fear (“install this anti-virus software immediately“). Education is key to ensuring users are aware of the risk of malware and what they can do to prevent an attack.
With good user policies in place and the right anti-malware solutions constantly monitoring the network, email, web requests and other activities that could put an organization at risk, malware stands less of a chance of delivering its payload.
Some specific actions can be:
Avoid clicking on suspicious emails
Block unused ports
Monitor network traffic
Avoid downloads from untrusted sources
Install updated security software and anti-viruses
Scan removable media before using it
Configure a host-based firewall
Deploy intrusion detection software
Possibly the most common type of malware, viruses attach their malicious code to clean code and wait for an unsuspecting user or an automated process to execute them. Like a biological virus, they can spread quickly and widely, causing damage to the core functionality of systems, corrupting files and locking users out of their computers. They are usually contained within an executable file.
Stages of a Virus Life
The process from the development of a virus until its detection is divided into the following six stages. These stages cover the whole lifecycle of a virus:
Design: In the design phase, the virus is created. This can be done completely from scratch or using one of the existing construction kits.
Replication: In this phase, the virus is deployed and it starts replicating itself on the target systems for a certain period of time.
Launch: In this stage, a user unintentionally runs the virus, and it performs the task it was built for.
Detection: In this phase, the behaviour of the virus is observed and identified as a potential threat to the system.
Incorporation: After identification, the signature of the virus is added to anti-virus software to be able to detect it in the future. And some defensive code is created to be able to deal with it.
Elimination: Once anti-virus software has been updated it can detect the virus and eliminate it.
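The incorporation and elimination stages above boil down to signature matching. A toy sketch of hash-based signature detection follows; the signature database and sample bytes are made up for illustration:

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malicious files.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious payload bytes").hexdigest(),
}

def is_known_virus(file_bytes: bytes) -> bool:
    """Flag a file whose hash matches a known virus signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_SIGNATURES

print(is_known_virus(b"malicious payload bytes"))  # True
print(is_known_virus(b"harmless document"))        # False
```

Real engines match byte patterns and heuristics rather than whole-file hashes, since hashing is defeated by changing a single byte, which is precisely what polymorphic and encrypted viruses exploit.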
Working with Viruses
Working with a virus has two differentiated phases:
Infection phase: This is the phase where, once the virus has been planted in a system, it starts to replicate itself. This replication is done by infecting legitimate files and programs on the target’s machine, waiting for users to execute them. During this reproduction, the virus will try to spread as much as possible using whatever means necessary: emails, shared file systems, media devices; everything is fair game.
Attack phase: This phase starts when an unsuspecting user executes the virus by clicking on one of the infected files. Usually, a triggering action is necessary to execute them. Once executed, the virus carries out whatever task it was developed for.
Types of Viruses
File virus: This type of virus infects the system by appending itself to the end of a file. It changes the start of a program so that control jumps to its code; after its code executes, control returns to the main program, and its execution goes unnoticed. It is also called a parasitic virus because, while it leaves no file intact, it leaves the host functional.
Boot sector virus: It infects the boot sector of the system, executing every time the system is booted and before the operating system is loaded. It also infects other bootable media like floppy disks. These are also known as memory viruses as they do not infect the file system.
Macro virus: Unlike most viruses, which are written in a low-level language (like C or assembly language), these are written in a high-level language like Visual Basic. These viruses are triggered when a program capable of executing a macro is run. For example, a macro virus can be contained in spreadsheet files.
Source code virus: It looks for source code and modifies it to include virus and to help spread it.
Polymorphic virus: A virus signature is a pattern that can identify a virus (a series of bytes that make up the virus code). In order to avoid detection by anti-virus software, a polymorphic virus changes each time it is installed: the functionality of the virus remains the same, but its signature is changed.
Encrypted virus: In order to avoid detection by anti-virus, this type of virus exists in encrypted form. It carries a decryption algorithm along with it. So the virus first decrypts and then executes.
Stealth virus: It is a very tricky virus as it changes the code that can be used to detect it. Hence, the detection of the virus becomes very difficult. For example, it can change the read system call such that whenever the user asks to read a code modified by the virus, the original form of code is shown rather than infected code.
Tunneling virus: This virus attempts to bypass detection by anti-virus scanners by installing itself in the interrupt handler chain. Interception programs, which remain in the background of an operating system and catch viruses, become disabled during the course of a tunnelling virus. Similar viruses install themselves in device drivers.
Multipartite virus: This type of virus is able to infect multiple parts of a system including boot sector, memory and files. This makes it difficult to detect and contain.
Armored virus: An armored virus is coded to make it difficult for antivirus software to unravel and understand. It uses a variety of techniques to do so, such as fooling the antivirus into believing that the virus lies somewhere other than its real location, or using compression to complicate its code.
Obviously, there are more types of viruses; this is just a short list of them.
Also known as scareware, ransomware comes with a heavy price. Able to lock down networks and lockout users until a ransom is paid, ransomware has targeted some of the biggest organizations in the world today — with expensive results.
Worms get their name from the way they infect systems. Starting from one infected machine, they weave their way through the network, connecting to consecutive machines in order to continue the spread of infection. This type of malware can infect entire networks of devices very quickly.
Spyware, as its name suggests, is designed to spy on what a user is doing. Hiding in the background on a computer, this type of malware will collect information without the user knowing, such as credit card details, passwords and other sensitive information.
Malware analysis is necessary to develop effective malware detection techniques. It is the process of analyzing the purpose and functionality of malware; the goal of malware analysis is to understand how a specific piece of malware works so that defences can be built to protect the organization’s network. There are three types of malware analysis. They achieve the same goal of explaining how malware works and what its effects on the system are, but the tools, time and skills required to perform the analysis are very different.
It is also called code analysis. It is the process of analyzing a program by examining it, i.e. the software code of the malware is inspected to learn how its functions work. In this technique, reverse engineering is performed using disassemblers, decompilers, debuggers and source code analyzers, such as IDA Pro and OllyDbg, in order to understand the structure of the malware. Before the program is executed, static information found in the executable, including header data and byte sequences, is used to determine whether it is malicious. Disassembly is one of the techniques of static analysis: the executable file is disassembled using tools like xxd, hexdump or ndisasm (the NetWide disassembler) to obtain an assembly language listing. From this listing, opcodes are extracted as features to statically analyze the application behaviour and detect the malware.
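As a toy illustration of the byte-level features mentioned above, the sketch below renders a blob in a hexdump-style layout and counts byte frequencies, one of the simplest static features used to characterise an executable without running it. This is not a disassembler; it only demonstrates the idea.

```python
import collections

def byte_histogram(data: bytes) -> dict:
    """Count occurrences of each byte value in a binary blob.
    Byte/opcode frequency histograms are a basic static-analysis feature."""
    return dict(collections.Counter(data))

def hexdump(data: bytes, width: int = 16) -> str:
    """Render a blob in the classic offset/hex/ASCII layout,
    similar to what tools like xxd or hexdump produce."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

# "MZ" is the magic number at the start of a Windows executable header
sample = b"\x4d\x5a\x90\x00" + b"rest of header..."
print(hexdump(sample))
print(byte_histogram(sample))
```

Real static pipelines extract richer features (imports, section entropy, opcode n-grams), but they follow the same pattern: inspect bytes without executing them.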
It is also called behavioural analysis. The analysis of an infected file during its execution is known as dynamic analysis. Infected files are analyzed in a simulated environment such as a virtual machine, simulator, emulator or sandbox. Malware researchers then use SysAnalyzer, Process Explorer, ProcMon, RegShot and other tools to identify the general behaviour of the file. In dynamic analysis, the file is detected by executing it in a controlled environment: during execution, its system interactions, behaviour and effects on the machine are monitored. The advantage of dynamic analysis is that it accurately analyzes known as well as unknown, new malware; it can also analyze obfuscated and polymorphic malware by observing their behaviour. However, this technique is more time-consuming, since time is needed to prepare the analysis environment, such as a virtual machine or sandbox.
This technique is proposed to overcome the limitations of the static and dynamic analysis techniques. It first analyses the signature of the malware code and then combines this with the other behavioural parameters to enhance the complete malware analysis. Thanks to this approach, hybrid analysis overcomes the limitations of both static and dynamic analysis.
Malware detection techniques are used to detect the malware and prevent the computer system from being infected, protecting it from potential information loss and system compromise. They can be categorized into signature-based detection, behaviour-based detection and specification-based detection.
It is also called misuse detection. It maintains a database of signatures and detects malware by comparing patterns against that database. Most antivirus tools are based on signature-based detection techniques. These signatures are created by examining the disassembled code of the malware binary: the disassembled code is analyzed, features are extracted, and these features are used to construct the signature of a particular malware family. A library of known code signatures is updated and refreshed constantly by the antivirus software vendor. The main advantages of this technique are that it can detect known instances of malware accurately, it requires relatively few resources, and it focuses on the signature of the attack. The major drawback is that it cannot detect new, unknown instances of malware, since no signature is available for them.
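The signature lookup described above can be sketched in a few lines. This is a deliberately minimal model: real engines match byte patterns and heuristics, not just whole-file hashes, and the database entry below is a made-up family name keyed by the well-known SHA-256 digest of the empty file.

```python
import hashlib

# Toy signature database: known-malware SHA-256 digests mapped to
# family names (hypothetical entries, not real malware signatures).
SIGNATURE_DB = {
    # SHA-256 of the empty file, used here only so the demo is reproducible
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
        "Empty.File.Test",
}

def scan(data: bytes):
    """Return the malware family name if the sample's digest is in the
    signature database, otherwise None (sample considered clean/unknown)."""
    digest = hashlib.sha256(data).hexdigest()
    return SIGNATURE_DB.get(digest)

print(scan(b""))        # digest is in the database
print(scan(b"benign"))  # unknown sample -> None
```

Note how the scheme fails exactly as the text says: any sample whose digest is not already in the database, including brand-new malware, comes back as unknown.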
It is also called behaviour- or anomaly-based detection. The main purpose is to analyze the behaviour of known or unknown malware. The behavioural parameters include factors such as the source or destination address of the malware, types of attachments, and other countable statistical features. It usually occurs in two phases: the training phase and the detection phase. During the training phase, the behaviour of the system is observed in the absence of attack, and machine learning techniques are used to create a profile of such normal behaviour. In the detection phase, this profile is compared against the current behaviour, and differences are flagged as potential attacks.
The advantage of this technique is that it can detect known as well as new, unknown instances of malware, and it focuses on the behaviour of the system to detect unknown attacks. The disadvantage is that the data describing the system behaviour and the statistics in the normal profile need to be kept up to date, and this profile tends to be large. It needs more resources, such as CPU time, memory and disk space, and the level of false positives is high.
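A minimal sketch of the two phases described above, using a single numeric metric and a simple statistical profile instead of a full machine learning model (the request-rate numbers are invented for the example):

```python
import statistics

def train(samples):
    """Training phase: build a profile (mean, stdev) of a behavioural
    metric observed while the system runs under normal conditions."""
    return statistics.mean(samples), statistics.stdev(samples)

def detect(value, profile, threshold=3.0):
    """Detection phase: flag values more than `threshold` standard
    deviations away from the trained mean as potential attacks."""
    mean, stdev = profile
    return abs(value - mean) > threshold * stdev

# e.g. requests per minute observed during normal operation
baseline = [98, 102, 100, 97, 103, 99, 101, 100]
profile = train(baseline)

print(detect(101, profile))   # within the normal profile
print(detect(450, profile))   # sudden burst, flagged as anomalous
```

The drawbacks in the text show up even here: the profile must be retrained as normal behaviour drifts, and a tight threshold raises the false positive rate.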
It is a derivative of behaviour-based detection that tries to overcome the typically high false alarm rate associated with it. Specification-based detection relies on program specifications that describe the intended behaviour of security-critical programs. It involves monitoring program executions and detecting deviations of their behaviour from the specification, rather than detecting the occurrence of specific attack patterns. This technique is similar to anomaly detection, but instead of relying on machine learning techniques, it is based on manually developed specifications that capture legitimate system behaviour. The advantage of this technique is that it can detect known and unknown instances of malware and the level of false positives is low. However, the level of false negatives is high, it is not as effective as behaviour-based detection in detecting new attacks, especially network probing and denial of service attacks, and the development of detailed specifications is time-consuming.
The index of this series of articles can be found here.
At this point, attackers should have gathered enough information to try to compromise the target systems.
This is, without question, the most difficult phase of an attack or a pentest. In both cases, patience, tenacity and perseverance are needed. Failures are going to happen, theories will be proven wrong, mistakes will be made and disappointments will come. After all of this, maybe, attackers or security professionals will get some results. But, as Thomas Edison said, “I did not fail. I just found 2,000 ways not to make a lightbulb; I only needed to find one way to make it work.” Attackers do not fail; they just need to find one way to compromise the system to achieve their objective.
Compromising a system is not a matter of whether it will happen or not; it is just a matter of the time and resources necessary to do it. Security professionals try to increase the time needed as much as possible, and attackers try to reduce it as much as they can. A system will never be completely secure but, if it is secure enough, the time and resources that would need to be invested will not be worth it. Still, there will be attempts just for fun, curiosity or as a challenge, but the ratio of potentially serious attacks will be lower.
Compromising a system is a very broad term. The intentions of an attacker when compromising a system are:
Maintain remote access
Steal information, data or any other type of asset
Clean and hide evidence of the attack
There are multiple system hacking methodologies that, at the very least, include the next steps and match the concept of compromising a system:
It is said that a secure system should base its strength on three factors:
Something the user knows, like credentials i.e. username and password.
Something the user is, like biometrics.
Something the user has, like a security card or a token generator.
Implementing all three mechanisms is not a simple approach, and only very secure systems use it. Nowadays, there is a tendency towards two-factor authentication, usually based on something the user knows and something the user has. This is an excellent tendency that should become mainstream. But the unfortunate reality is that a lot of systems are protected by just a username and password pair.
If attackers have been diligent enough, at this point they will have a list of enumerated usernames to try on the target systems. This is where password cracking plays an important part. Guessable passwords, short passwords, passwords with weak encryption, and simple passwords with only letters and/or numbers make it easy for attackers to crack them.
The best defence against these password cracking techniques is to have a strong, lengthy password that is difficult to guess. Typically, a good password contains:
Case sensitive letters
At least 8 characters in length, if not more
Types of Password Attacks
Attackers do not need any technical knowledge or tools to perform this attack. Things like:
Active Online Attacks
Attackers perform password cracking by directly communicating with the victim machine.
Dictionary attack: The dictionary attack, as its name suggests, is a method that uses an index of words that feature most commonly as user passwords. This is a slightly less-sophisticated version of the brute force attack but it still relies on hackers bombarding a system with guesses until something sticks.
Brute force attack: Similar in function to the dictionary attack, the brute force attack is regarded as being a little more sophisticated. Rather than using a list of words, brute force attacks are able to detect non-dictionary terms, such as alpha-numeric combinations. This means passwords that include strings such as “aaa1” or “zzz10” could be at risk from a brute force attack.
Hash Injection: A pass the hash attack is an exploit in which an attacker steals a hashed user credential and, without cracking it, reuses it to trick an authentication system into creating a new authenticated session on the same network.
Phishing: There is an easy way to hack: ask the user for his or her password. A phishing email leads the unsuspecting reader to a fake login page associated with whatever service the hacker wants to access, requesting the user to put right some terrible problem with their security. That page then skims their password, and the hacker can go use it for their own purposes.
Malware: A keylogger, or screen scraper, can be installed by malware which records everything users type or takes screenshots during a login process, and then forwards a copy of this file to hacker central.
Password Guessing: The password cracker’s best friend, of course, is the predictability of the user. Unless a truly random password has been created using software dedicated to the task, a user-generated ‘random’ password is unlikely to be anything of the sort. Instead, thanks to our brains’ emotional attachment to things we like, the chances are those random passwords are based upon our interests, hobbies, pets, family and so on. In fact, passwords tend to be based on all the things we like to chat about on social networks and even include in our profiles.
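The dictionary and brute force attacks described above can be sketched in a few lines. The sketch below exhaustively enumerates a small character set against an MD5 digest; MD5 and the tiny charset are used only to keep the demo fast, not because real attacks are limited to them.

```python
import hashlib
import itertools

def brute_force(target_hash, charset="abc123", max_len=4):
    """Try every combination of `charset` up to `max_len` characters and
    return the candidate whose MD5 digest matches `target_hash`."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None  # keyspace exhausted without a match

# Crack the hash of the weak password "aaa1" (the kind of alpha-numeric
# string mentioned above)
target = hashlib.md5(b"aaa1").hexdigest()
print(brute_force(target))  # -> aaa1
```

A dictionary attack is the same loop with `itertools.product` replaced by iteration over a wordlist, which is why the text calls it a less sophisticated variant of the same idea. Note how the cost grows exponentially with password length: this is exactly why the defence section above recommends long passwords.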
Passive Online Attacks
Attackers perform password cracking without communicating with the authorizing party.
Wire Sniffing: A sniffing attack, in the context of network security, corresponds to the theft or interception of data by capturing the network traffic using a sniffer (an application aimed at capturing network packets). When data is transmitted across networks, if the packets are not encrypted, the data within them can be read using a sniffer. Using a sniffer application, an attacker can analyze the network and gain information to eventually cause the network to crash or become corrupted, or read the communications happening across it.
Man-in-the-Middle: A man-in-the-middle attack (MITM) is an attack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other. The attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within the reception range of an unencrypted wireless access point (Wi-Fi) could insert themselves as a man-in-the-middle.
Replay Attack: A replay attack (also known as playback attack) is a form of network attack in which valid data transmission is maliciously or fraudulently repeated or delayed. This is carried out either by the originator or by an adversary who intercepts the data and re-transmits it, possibly as part of a masquerade attack by IP packet substitution. This is one of the lower-tier versions of a Man-in-the-middle attack.
As mentioned before, a default password is supplied by the manufacturer with new equipment (e.g. switches, hubs, routers) that is password protected. Attackers can easily find compiled lists of these passwords and use them to access a system.
The attacker copies the target’s password file and then tries to crack the passwords on his own system at a different location.
Pre-Computed Hashes and Rainbow Tables: Rainbow tables might sound innocuous, but they are in fact incredibly useful tools in a hacker’s arsenal. When passwords are stored on a computer system, they are hashed – the one-way nature of this process means that it is impossible to recover the password from the hash alone. Simply put, rainbow tables function as a pre-computed database of passwords and their corresponding hash values. This is then used as an index to cross-reference hashes found on a computer with those already pre-computed in the rainbow table. Compared to a brute force attack, which does a lot of the computation during the operation, rainbow tables boil the attack down to just a search through a table.
Distributed Network: A Distributed Network Attack (DNA) technique is used for recovering passwords from hashes or password-protected files using the unused processing power of machines across the network to decrypt passwords. The DNA Manager is installed in a central location where machines running on DNA Client can access it over the network. DNA Manager coordinates the attack and allocates small portions of the key search to machines that are distributed over the network. DNA Client runs in the background, consuming only unused processor time. The program combines the processing capabilities of all the clients connected to the network and uses it to crack the password.
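The work allocation that the DNA Manager performs can be sketched as a simple keyspace partition. This is only the scheduling part of the idea: the function below splits a candidate space into contiguous ranges, one per client machine, and the actual cracking of each range would happen on the clients.

```python
def partition_keyspace(total_keys: int, clients: int):
    """Split a search space of `total_keys` candidates into contiguous
    (start, end) ranges, one per client, the way a DNA-style manager
    would allocate small portions of the key search to networked machines."""
    base, extra = divmod(total_keys, clients)
    ranges, start = [], 0
    for i in range(clients):
        # The first `extra` clients absorb the remainder, one key each
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# 10 million candidate keys shared among 4 otherwise-idle machines
print(partition_keyspace(10_000_000, 4))
```

Because the ranges are disjoint and cover the whole space, clients never duplicate work, which is what lets the combined idle processor time of the network add up.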
For this attack, the attacker needs physical access to the target machine. The attacker will insert a USB drive previously prepared with a password cracker tool and an autorun mechanism on the targeted computer. Once the device is connected the tool will try to crack the password.
In computer environments, authentication is the verification process that identifies a user or device to prove it has legitimate access rights to resources. This prevents impostors from using or accessing resources they should not be allowed to, ensuring the authentication of users, computers and services.
Microsoft platform implements multiple authentication protocols, among them we can find:
Kerberos
Security Account Manager (SAM)
NT LAN Manager (NTLM)
Kerberos
Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.
Here are the most basic steps taken to authenticate in a Kerberized environment.
Client requests an authentication ticket (TGT) from the Key Distribution Center (KDC).
The KDC verifies the credentials and sends back an encrypted TGT and session key.
The TGT is encrypted using the Ticket Granting Service (TGS) secret key.
The client stores the TGT and when it expires the local session manager will request another TGT (this process is transparent to the user).
If the Client is requesting access to a service or other resource on the network, this is the process:
The client sends the current TGT to the TGS with the Service Principal Name (SPN) of the resource the client wants to access.
The KDC verifies the TGT of the user and that the user has access to the service.
TGS sends a valid session key for the service to the client.
Client forwards the session key to the service to prove the user has access, and the service grants access.
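The ticket flow above can be simulated very roughly in code. This is a loose sketch, not the real protocol: HMAC-sealed JSON stands in for Kerberos encryption, the secrets and the SPN are invented for the example, and session-key negotiation is omitted entirely.

```python
import hashlib
import hmac
import json
import time

# Hypothetical long-term secrets; in real Kerberos these are derived from
# account passwords and held by the KDC and by each service.
TGS_SECRET = b"tgs-secret-key"
SERVICE_SECRET = b"file-server-secret"

def seal(payload: dict, key: bytes) -> bytes:
    """Stand-in for ticket encryption: serialise the ticket and append an
    HMAC so that only the key holder can validate (and trust) it."""
    blob = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(key, blob, hashlib.sha256).hexdigest().encode()
    return blob + b"." + mac

def unseal(token: bytes, key: bytes) -> dict:
    blob, mac = token.rsplit(b".", 1)
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("ticket tampered with or sealed with another key")
    return json.loads(blob)

# AS exchange: the KDC issues a TGT sealed with the TGS secret
tgt = seal({"user": "alice", "expires": time.time() + 600}, TGS_SECRET)

# TGS exchange: the TGS validates the TGT, then issues a service ticket
who = unseal(tgt, TGS_SECRET)
service_ticket = seal({"user": who["user"], "spn": "cifs/fileserver"},
                      SERVICE_SECRET)

# AP exchange: the service validates the ticket with its own secret
print(unseal(service_ticket, SERVICE_SECRET)["user"])  # -> alice
```

The structural point survives the simplification: the client never learns the TGS or service secrets, it only forwards opaque tickets, and each party can verify exactly the tickets sealed for it.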
Security Account Manager (SAM)
Windows stores and manages the local user and group accounts in a database file called the Security Account Manager (SAM). It authenticates local user logons. A domain controller simply stores the administrator account from the time it was a server, which serves as the Directory Services Restore Mode (DSRM) recovery account.
In the SAM, each user account can be assigned a local area network (LAN) password and a Windows password. Both are encrypted. If someone attempts to log on to the system and the user name and associated passwords match an entry in the SAM, a sequence of events takes place ultimately allowing that person access to the system. If the user name or passwords do not properly match any entry in the SAM, an error message is returned requesting that the information be entered again.
In personal computers (PCs) not connected into a LAN and for which there is only one user, Windows asks for only one password when the system is booted up. This function can be disabled if the user does not want to enter authentication data every time the computer is switched on or restarted. The main purpose of the SAM in a PC environment is to make it difficult for a thief to access the data on a stolen machine. It can also provide some measure of security against online hackers.
The user passwords are stored in a hashed format in a registry hive, either as an LM hash or as an NTLM hash. Windows XP and later versions do not store the value of the LM hash or, if the password exceeds fourteen characters, they store a blank or dummy value instead. This file can be found in ‘%SystemRoot%/system32/config/SAM‘ and is mounted on ‘HKLM/SAM‘. This information is stored following a fixed per-account record format.
NT LAN Manager (NTLM)
NT (New Technology) LAN Manager (NTLM) is a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users.
NTLM is a challenge-response authentication protocol which uses three messages to authenticate a client in a connection-oriented environment (connectionless is similar), and a fourth additional message if integrity is desired.
First, the client establishes a network path to the server and sends a ‘NEGOTIATE_MESSAGE‘ advertising its capabilities.
Next, the server responds with ‘CHALLENGE_MESSAGE‘ which is used to establish the identity of the client.
Finally, the client responds to the challenge with an ‘AUTHENTICATE_MESSAGE‘.
The NTLM authentication process can be observed in the image below.
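The three-message exchange can be sketched as a generic challenge-response protocol. This is a simplification, not wire-accurate NTLM: the real NT hash is MD4 over the UTF-16LE password and the real response derivation differs, so SHA-256 and HMAC are used here as portable stand-ins.

```python
import hashlib
import hmac
import os

def nt_hash(password: str) -> bytes:
    """Stand-in for the NT hash (real NTLM uses MD4 over UTF-16LE;
    SHA-256 is used here for portability)."""
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def negotiate():
    # 1. NEGOTIATE_MESSAGE: the client advertises its capabilities
    return {"flags": ["UNICODE", "NTLM"]}

def challenge() -> bytes:
    # 2. CHALLENGE_MESSAGE: the server sends a random nonce
    return os.urandom(8)

def authenticate(password: str, server_nonce: bytes) -> bytes:
    # 3. AUTHENTICATE_MESSAGE: the client proves knowledge of the hash
    #    by keying a MAC over the server's challenge
    return hmac.new(nt_hash(password), server_nonce, hashlib.sha256).digest()

def verify(stored_hash: bytes, server_nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_hash, server_nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

negotiate()
nonce = challenge()
resp = authenticate("S3cret!", nonce)
print(verify(nt_hash("S3cret!"), nonce, resp))   # True
print(verify(nt_hash("wrong"), nonce, resp))     # False
```

Notice that `verify` only ever needs the stored hash, never the password, which is exactly the property that the pass-the-hash (hash injection) attack described earlier abuses.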
Before Kerberos, Microsoft used NTLM technology. The biggest difference between the two systems is the third-party verification and stronger encryption capability in Kerberos. This extra step in the process provides a significant additional layer of security over NTLM.
When passwords are stored, they should not be stored as plain text. To avoid this, there are a few techniques that can be used:
Encryption: Encryption is the practice of scrambling information in a way that only someone with the corresponding key can unscramble and read it. Encryption is a two-way function: when users encrypt something, they do so with the intention of decrypting it later. To encrypt data, an algorithm is used – a series of well-defined steps that can be followed procedurally – to encrypt and decrypt information.
Hashing: Hashing is the practice of using an algorithm to map data of any size to a fixed length. This is called a hash value. Whereas encryption is a two-way function, hashing is a one-way function. While it is technically possible to reverse-hash something, the computing power required makes it unfeasible.
Salting: Salting is a concept that typically pertains to password hashing. Essentially, it is a unique value that can be added to the end of the password to create a different hash value. This adds a layer of security to the hashing process, specifically against brute force attacks.
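Hashing and salting together can be sketched with the standard library's PBKDF2 routine, which also adds the many-iterations slowdown that makes brute forcing expensive. The iteration count below is an illustrative choice, not a recommendation for any particular system.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberate slowdown against brute force

def hash_password(password: str, salt: bytes = None):
    """Salted, slow password hashing with PBKDF2-HMAC-SHA256.
    A fresh random salt is generated when none is supplied."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("guess", salt, digest))          # False

# Same password, different salt -> different digest, which is what
# defeats the pre-computed tables discussed below
salt2, digest2 = hash_password("correct horse")
print(digest != digest2)  # True
```

Because every stored digest depends on a per-user salt, an attacker cannot hash a guess once and compare it against the whole database; each account must be attacked separately.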
Despite all these techniques, hashes can be cracked or, at least, there are a few techniques that can be used to try to crack a hash.
Dictionary and Brute Force Attacks: The simplest way to crack a hash is to try to guess the password, hashing each guess, and checking if the guess’s hash equals the hash being cracked. If the hashes are equal, the guess is the password. The two most common ways of guessing passwords are dictionary attacks and brute-force attacks.
Lookup Table: Lookup tables are an extremely effective method for cracking many hashes of the same type very quickly. The general idea is to pre-compute the hashes of the passwords in a password dictionary and store them, and their corresponding password, in a lookup table data structure. A good implementation of a lookup table can process hundreds of hash lookups per second, even when they contain many billions of hashes.
Reverse Lookup Tables: This attack allows an attacker to apply a dictionary or brute-force attack to many hashes at the same time, without having to pre-compute a lookup table. First, the attacker creates a lookup table that maps each password hash from the compromised user account database to a list of users who had that hash. The attacker then hashes each password guess and uses the lookup table to get a list of users whose password was the attacker’s guess. This attack is especially effective because it is common for many users to have the same password.
Rainbow Tables: As seen before, rainbow tables are a time-memory trade-off technique. They are like lookup tables, except that they sacrifice hash cracking speed to make the tables smaller. Because they are smaller, the solutions to more hashes can be stored in the same amount of space, making them more effective. Rainbow tables exist that can crack any MD5 hash of a password up to 8 characters long.
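The pre-computed lookup table described above is trivial to model with a dictionary keyed by hash. Plain unsalted MD5 and a four-word dictionary are used only to keep the example small; the point is that all the hashing happens before any leaked hash is seen.

```python
import hashlib

def md5_hex(word: str) -> str:
    return hashlib.md5(word.encode()).hexdigest()

# Pre-computation phase: hash every dictionary word once, up front
dictionary = ["password", "letmein", "qwerty", "dragon"]
lookup = {md5_hex(word): word for word in dictionary}

# Cracking phase: each leaked hash is now a single dictionary lookup
leaked = [md5_hex("qwerty"), md5_hex("dragon"), "0" * 32]
for h in leaked:
    print(h, "->", lookup.get(h, "<not in table>"))
```

This also makes the effect of salting concrete: with a per-user salt mixed into each stored hash, the table above would have to be rebuilt per user, destroying the pre-computation advantage.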
Password Cracking Tools
There are a plethora of password cracking tools out there. A few of them are:
John the Ripper
Privilege escalation happens when a malicious user exploits a bug, design flaw, or configuration error in an application or operating system to gain elevated access to resources that should normally be unavailable to that user. The attacker can then use the newly gained privileges to steal confidential data, run administrative commands or deploy malware – and potentially do serious damage to a target operating system, server applications, organization, and reputation.
Attackers start by exploiting a privilege escalation vulnerability in a target system or application, which lets them override the limitations of the current user account. They can then access the functionality and data of another user (horizontal privilege escalation) or obtain elevated privileges, typically of a system administrator or other power user (vertical privilege escalation). Such privilege escalation is generally just one of the steps performed in preparation for the main attack.
While usually not the main aim of an attacker, privilege escalation is frequently used in preparation for a more specific attack, allowing intruders to deploy a malicious payload or execute malicious code in the targeted system. This means that whenever users detect or suspect privilege escalation, they also need to look for signs of other malicious activity. However, even without evidence of further attacks, any privilege escalation incident is an information security issue in itself, because someone could have gained unauthorized access to personal, confidential or otherwise sensitive data. In many cases, this will have to be reported internally or to the relevant authorities to ensure compliance.
To make matters worse, it can be hard to distinguish between routine and malicious activity in order to detect privilege escalation incidents. This is especially true for rogue users, who might perform malicious actions using access they legitimately hold. However, if security personnel can quickly detect successful or attempted privilege escalation, they have a good chance of stopping an attack before the intruders can establish a foothold to launch their main attack.
Horizontal Privilege Escalation
With horizontal privilege escalation, attackers remain on the same general user privilege level but can access data or functionality of other accounts or processes that should be unavailable to the current account or process. For example, this may mean using a compromised office workstation to gain access to other office users’ data. For web applications, one example of horizontal privilege escalation might be getting access to another user’s profile on a social site or e-commerce platform, or their bank account on an e-banking site.
Vertical Privilege Escalation
With vertical privilege escalation (also called privilege elevation), attackers start from a less privileged account and obtain the rights of a more powerful user – typically the administrator or system user on Microsoft Windows, or root on Unix and Linux systems. With these elevated privileges, the attacker can wreak all sorts of havoc on computer systems and applications: steal access credentials and other sensitive information, download and execute malware, erase data, or execute arbitrary code. Worse still, skilled attackers can use elevated privileges to cover their tracks by deleting access logs and other evidence of their activity. This can potentially leave the victim unaware that an attack took place at all. That way, cybercriminals can covertly steal information or plant malware directly in company systems.
When Escalation Succeeds
Once attackers gain unauthorised access to a system and escalate privileges, their next step is to execute malicious applications on the target system to “own” it. The attacker’s goals are:
Installation of malware to collect information: To do this, tools like ‘RemoteExec‘ or ‘PDQ Deploy‘ can be used.
To set up a backdoor to maintain access.
To crack existing passwords.
To install keyloggers to monitor or capture user actions: If the access has been physical, it can be a hardware keylogger attached to the machine; otherwise, it can be a software keylogger.
Protecting from Privilege Escalation
Attackers can use many privilege escalation techniques to achieve their goals. But to attempt privilege escalation in the first place, they usually need to gain access to a less privileged user account. Possible protection measures are:
Enforce password policies.
Create specialized users and groups with minimum necessary privileges and file access.
Avoid common programming errors in applications.
Secure databases and sanitize user input.
Keep systems and applications patched and updated.
Ensure correct permissions for all files and directories.
Close unnecessary ports and remove the unused user accounts.
Remove or tightly restrict all file transfer functionality.
Change default credentials on all devices, including routers and printers.
Regularly scan systems and applications for vulnerabilities.
Spyware is unwanted software that infiltrates computing devices, stealing internet usage data and sensitive information. Spyware is classified as a type of malware – malicious software designed to gain access to or damage computers, often without the owners’ knowledge. Spyware gathers personal information and relays it to advertisers, data firms, or external users.
Spyware is used for many purposes. Usually, it aims to track and sell users’ internet usage data, capture credit card or bank account information, or steal personal identities by monitoring internet activity, tracking login and password information, and spying on sensitive information.
The most common types of spyware are:
Adware: This type of spyware tracks browser history and downloads, with the intent of predicting what products or services users are interested in. The adware will display advertisements for the same or related products or services to entice users to click or make a purchase. Adware is used for marketing purposes and can slow down computers.
System monitors: This type of spyware can capture just about everything users do on their computers. System monitors can record all keystrokes, emails, chat-room dialogues, websites visited, and programs run. System monitors are often disguised as freeware.
Tracking cookies: These track the user’s web activities, such as searches, history, and downloads, for marketing purposes.
Trojans: This kind of malicious software disguises itself as legitimate software. For example, Trojans may appear to be a Java or Flash Player update upon download. Trojan malware is controlled by third parties. It can be used to access sensitive information such as Social Security numbers and credit card information.
Some of the spyware features are:
Monitoring the user’s activity
Blocking applications and services
Remote delivery of logs
Email communication tracking
Recording removable media communications
A rootkit is a clandestine computer program designed to provide continued privileged access to a computer while actively hiding its presence. The term rootkit is a combination of the two words “root” and “kit“. Originally, a rootkit was a collection of tools that enabled administrator-level access to a computer or network. Root refers to the admin account on Unix and Linux systems, and kit refers to the software components that implement the tool. Today, rootkits are generally associated with malware – such as Trojans, worms and viruses – that conceals its existence and actions from users and other system processes.
A rootkit allows someone to maintain command and control over a computer without the computer user/owner knowing about it. Once a rootkit has been installed, the controller of the rootkit has the ability to remotely execute files and change system configurations on the host machine. A rootkit on an infected computer can also access log files and spy on the legitimate computer owner’s usage.
Rootkits can be classified into the categories below.
Hardware or firmware rootkit: The name of this type of rootkit comes from where it is installed on a computer. This type of malware could infect a computer’s hard drive or its system BIOS, the software that is installed on a small memory chip in the computer’s motherboard. It can even infect routers. Hackers can use these rootkits to intercept data written on the disk.
Bootloader rootkit: A computer's bootloader is an important tool: it loads the operating system when the machine is turned on. A bootloader rootkit attacks this mechanism, replacing the legitimate bootloader with a hacked one. This means the rootkit is activated even before the operating system starts.
Memory rootkit: This type of rootkit hides in a computer’s Random Access Memory (RAM). These rootkits will carry out harmful activities in the background. These rootkits have a short lifespan. They only live in the computer’s RAM and will disappear once the system reboots – though sometimes further work is required to get rid of them.
Application rootkit: Application rootkits replace standard files in a computer with rootkit files. They might also change the way standard applications work. These rootkits might infect programs such as Word, Paint, or Notepad. Every time users run these programs, they will give hackers access to their computer. The challenge here is that the infected programs will still run normally, making it difficult for users to detect the rootkit.
Kernel-mode rootkits: These rootkits target the core of a computer’s operating system. Cybercriminals can use these to change how an operating system functions. They just need to add their own code to it. This can give them easy access to a computer and make it easy for them to steal personal information.
Detecting and Defending Rootkits
Because rootkits are so dangerous and so difficult to detect, it is important to exercise caution when surfing the internet or downloading programs. There is no way to magically protect systems from all rootkits, and no commercial product can find and remove every known and unknown rootkit. There are, however, various ways to look for a rootkit on an infected machine. Detection methods include behaviour-based methods (e.g., looking for strange behaviour on a computer system), signature scanning and memory dump analysis. Often, the only option to remove a rootkit is to completely rebuild the compromised system.
Fortunately, the odds of avoiding these attacks can be increased by following the same common-sense strategies usually taken to avoid all computer viruses, including these.
Do not ignore updates: Updating a computer's applications and operating system can be annoying, especially when it seems as if there is a new update to approve every time the machine turns on. But updates should not be ignored. Keeping operating systems, antivirus software, and other applications updated is the best way to protect against rootkits.
Watch out for phishing emails: Phishing emails are sent by scammers who want to trick users into providing them with financial information or downloading malicious software, such as rootkits, onto computers.
Be careful of drive-by downloads: Drive-by downloads can be especially troublesome. These happen when users visit a website and it automatically installs malware on their computer. They do not have to click on anything or download anything from the site for this to happen. And it is not just suspicious websites that can cause this. Hackers can embed malicious code in legitimate sites to cause these automatic downloads.
Do not download files sent by unknown people: Users need to be careful, too, when opening attachments. They should not open attachments sent by unknown people. Doing so could cause a rootkit to be installed on their computer.
Steganography is the practice of sending data in a concealed format so the very fact of sending the data is disguised. The word steganography is a combination of the Greek words στεγανός (steganos), meaning “covered, concealed, or protected”, and γράφειν (graphein) meaning “writing”.
Unlike cryptography, which conceals the contents of a secret message, steganography conceals the very fact that a message is communicated. The concept of steganography was first introduced in 1499, but the idea itself has existed since ancient times. There are stories of a method being used in the Roman Empire whereby a slave chosen to convey a secret message had his scalp shaved clean and a message was tattooed onto the skin. When the messenger’s hair grew back, he was dispatched on his mission. The receiver shaved the messenger’s scalp again and read the message.
Steganography is classified into two types, Technical and Linguistic. Technical includes concealing information using methods like invisible ink, microdots, and other methods to hide information. Linguistic uses the text as covering media to hide the information like cyphers and codes.
The main steganography types are:
Text steganography: Text steganography hides information in features of the text itself, such as the number of tabs, white spaces and capital letters, much as Morse code encodes information.
Image Steganography: Taking the cover object as an image is called image steganography. In this technique, pixel intensities are used to hide the information; 8-bit and 24-bit images are common. The larger the image, the more information it can hide, although large images may require compression to avoid detection. Common techniques are LSB insertion and masking and filtering.
Network Steganography: Taking a network protocol (TCP, UDP, IP, etc.) as the cover object, where the protocol is used as a carrier, is called network protocol steganography. Across the OSI model there exist covert channels where steganography can be achieved, for instance in unused header bits of TCP/IP fields.
Audio Steganography: Taking audio as a carrier for information hiding is called audio steganography. It is a very important medium due to the popularity of voice over IP (VoIP). Digital audio formats such as WAVE, MIDI and AVI MPEG can be used for steganography. The methods are LSB coding, echo hiding, parity coding, etc.
Video Steganography: This is a technique to hide any type of file or information in a digital video format. Video, i.e. a sequence of pictures, is used as a carrier for hidden information. The discrete cosine transform (DCT) slightly changes values, e.g. 8.667 to 9, which is used to hide information in each of the images in the video without being noticeable to the human eye. It is used with formats such as H.264, MP4, MPEG and AVI.
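The LSB-insertion technique mentioned above can be sketched in a few lines of Python. This is a minimal illustration operating on a raw byte array standing in for pixel data (a real tool would read an actual image format, e.g. with a library such as Pillow); the function names are mine, not from any specific tool:

```python
def hide_message(carrier: bytearray, message: bytes) -> bytearray:
    """Embed message bits into the least significant bit of each carrier byte."""
    bits = []
    for byte in message:
        for i in range(7, -1, -1):          # most significant bit first
            bits.append((byte >> i) & 1)
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    stego = bytearray(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract_message(stego: bytearray, length: int) -> bytes:
    """Recover `length` bytes from the LSBs of the stego carrier."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (stego[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the least significant bit of each carrier byte changes, no byte differs from the original by more than 1, which is why the alteration is invisible to the eye in real pixel data.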
Steganalysis is the discovery of the existence of hidden information; therefore, like cryptography and cryptanalysis, the goal of steganalysis is to discover hidden information and to break the security of its carriers. Steganalysis is the practice of attacking steganography methods for the detection, extraction, destruction and manipulation of the hidden data in a stego object.
Attacks can be of several types: some merely detect the presence of hidden data; some try to detect and extract it; some just try to destroy the hidden data after finding its existence, without trying to extract it; and some try to replace the hidden data with other data by finding the exact location where it is hidden.
Detection alone is enough to foil the very purpose of steganography, even if the secret message is not extracted, because detecting the existence of hidden data is sufficient if the data needs to be destroyed. Detection is generally carried out by identifying some characteristic feature of images that is altered by the hidden data. A good steganalyst must be aware of the methods and techniques of steganography tools to attack efficiently.
Classification of attacks based on information available to the attacker:
Stego only attack: Only stego object is available for analysis.
Known cover attack: Both cover and stego are known.
Known message attack: In some cases, the message is known and analyzing the stego object pattern for this embedded message may help to attack similar systems.
Chosen stego attack: Steganographic algorithm and stego object are known.
Chosen message attack: Here steganalyst creates some sample stego objects from many steganographic tools for a chosen message and analyses these stego objects with the suspected one and tries to find the algorithm used.
Known stego attack: Cover object and the steganographic tool used are known.
Visual attacks: Analyzing the images visually, for example by extracting the single-bit planes of an image and trying to find differences in them by eye.
Structural attacks: The format of a data file often changes as hidden data is embedded, and identifying these characteristic structural changes can reveal the existence of a hidden message. For example, in palette-based steganography the palette of an image is changed before embedding data to reduce the number of colours, so that the colour difference between adjacent pixels is very small. This results in groups of pixels in a palette having almost the same colour, which is not the case in normal images.
Statistical attacks: In this type of attack, a statistical analysis of the image is performed using mathematical formulas, and the detection of hidden data is based on the statistical results. Generally, the hidden message is more random than the original data of the image, so measuring this randomness can reveal the existence of hidden data.
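The intuition behind statistical attacks can be illustrated with a toy heuristic: LSB embedding tends to balance the least-significant-bit plane towards 50% ones, whereas natural images are usually biased. The sketch below (the function names and the 0.05 tolerance are arbitrary choices of mine, and far cruder than a real chi-square attack) flags suspiciously balanced carriers:

```python
def lsb_ones_ratio(data: bytes) -> float:
    """Fraction of bytes whose least significant bit is 1."""
    return sum(b & 1 for b in data) / len(data)

def looks_suspicious(data: bytes, tolerance: float = 0.05) -> bool:
    """Flag carriers whose LSB plane is close to perfectly random.

    Natural images rarely have a perfectly balanced LSB plane, so a
    ratio very near 0.5 *may* indicate embedded data. This is only a
    heuristic and produces false positives on genuinely noisy data.
    """
    return abs(lsb_ones_ratio(data) - 0.5) < tolerance
```

A real steganalyst would apply a proper statistical test (such as the chi-square test) over blocks of the image rather than a single global ratio.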
Covering Tracks is the final stage of a penetration test as a process – all the rest is paperwork. In a nutshell, its goal is to erase the digital signs left by the pentester during the earlier stages of the test. These digital signs, in essence, prove the pentester's presence in the targeted computer system. The same applies to an attacker, well, probably without the paperwork.
The purpose of this phase is to cover up all the little clues that would give away the nature of attackers’ deeds. Covering Tracks consists of:
Measures for the prevention of real-time detection (Anti-Incident Response).
Measures for the prevention of digital evidence collection during a possible post factum inquiry (Anti-Forensics).
Most common techniques that are often used by attackers to cover tracks on a target system are:
Moving, hiding, altering or renaming log files
Disabling Auditing
Operating systems have active auditing tools that detect, monitor and track events. One of the best methods attackers can use is not to leave any trace that they have been there: if, once they have access to a system, they disable the auditing system, their activity will not be registered. Even better if they re-enable the auditing system when they leave.
Moving, Hiding, Altering or Renaming Files
Things like moving files, changing extensions, renaming, splitting files into small parts and concealing each part at the end of other files, or hiding one file inside another, seem naive but are very effective, especially when we consider that people involved in a cyber investigation often do not have the time to examine one by one all the files residing in a computer system.
Then there is timestamp manipulation. Because of investigators' lack of time, one approach that allows them to prioritise their search for potentially relevant information is to arrange that information in chronological order, so they can focus on the data created around the moment of the cybercrime, if it is known. Attackers can defeat this approach by modifying the metadata of any files they want. In most cases, they change the dates on which each file was created, last accessed or last modified. This effective anti-forensic technique is named timestomping, and tools to detect its use do exist.
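On systems that allow it, part of this timestamp manipulation is as simple as the standard utime call. A minimal Python sketch (the file path in the comment is purely illustrative):

```python
import os
import time

def timestomp(path: str, new_epoch: float) -> None:
    """Set both access and modification times of `path` to `new_epoch`.

    Note: on most filesystems the *creation* time (and, on Linux, the
    inode change time, ctime) cannot be set this way, which is one of
    the inconsistencies forensic tools look for.
    """
    os.utime(path, (new_epoch, new_epoch))

# Example: backdate a (hypothetical) file to 1 January 2020.
# past = time.mktime((2020, 1, 1, 0, 0, 0, 0, 0, -1))
# timestomp("/tmp/innocent.txt", past)
```

Detection tools exploit exactly the timestamps this call cannot reach: a modification time older than the inode change time is a classic red flag.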
A common delusion among people who count on commercial disk cleaners or privacy-protection tools to delete data they do not want others to see is the belief that these tools remove everything from the hard disk once and for all.
Despite the imperfections of typical delete methods, a well-done erase will irreversibly dispose of evidence, leaving investigators empty-handed. Nevertheless, less proficient users are prone to make mistakes, which may cost them a lot in cases of unsuccessful attempts to delete the data on the hard disk.
Unless we discuss SSD drives (where features such as TRIM can erase deleted data automatically), hard drives and storage media are susceptible to almost full recovery via data carving. All in all, deletion is a very popular method but not so effective.
In Windows-based computer systems, all of the log files are stored in the Event Viewer, easily findable via the "Search" bar. In almost all Linux and UNIX operating systems the log files are located in the '/var/log' directory, and in macOS one should open the Finder, click "Go to Folder" in the Go menu, type in '/Library/Logs' and press Enter to display all the log files.
If administrators want to check for malicious activities within the system for which they are responsible, they simply examine the log files. There are two kinds of log files: system generated and application generated.
When it comes to log manipulation, the attacker usually has two options. The first option is to delete the log, and the second one is to alter its content. Deletion of log files and replacement of system binaries with Trojan malware ensures that the security staff employed by the targeted company will not detect evidence of the cyber intrusion.
The first choice – deleting the log files – is not always the ultimate route to undetectability, since the removal of such information might create a gap between log files and raise suspicion. One look at the processes and log files could be enough for a system administrator at the target's premises to establish the existence of malicious activity.
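The second option, altering the log, can be sketched as a simple filter that drops only the entries mentioning a given marker, such as the attacker's IP address. This is an illustrative toy, not a real anti-forensic tool, and it leaves its own traces (line counts and timestamp sequences can still expose the tampering):

```python
def scrub_log(log_text: str, marker: str) -> str:
    """Return the log with every line mentioning `marker` removed.

    Selective removal avoids the conspicuous gap that deleting the
    whole log creates, although it is still detectable by comparing
    the log against other correlated sources (e.g. a central syslog).
    """
    kept = [line for line in log_text.splitlines() if marker not in line]
    return "\n".join(kept)
```

This is precisely why defenders forward logs to a separate, write-once collector: a scrubbed local log then disagrees with the remote copy.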
The index of this series of articles can be found here.
Vulnerability analysis is part of the scanning phase and one of the major and more important parts of an attack. Finding existing vulnerabilities allows attackers to exploit known problems, bugs or defects to find their way into the target's systems. It is also one of the most important tasks for a penetration tester: discovering vulnerabilities in the environment.
Vulnerability assessment includes discovering weaknesses in an environment, design flaws and other security concerns that can allow attackers to penetrate and access a system, or to use systems in ways different from those they were designed for.
There are multiple types of vulnerabilities: misconfigurations, default configurations, buffer overflows, flaws in operating systems or software, and others.
Multiple tools can be found in this space, allowing legitimate users, pen-testers and attackers to find these vulnerabilities.
When found, vulnerabilities are usually classified by level of impact, e.g., very low, low, medium, high or very high. They can also be classified as locally or remotely exploitable.
A vulnerability assessment can be defined as the process of examining, discovering and identifying security measures and weaknesses in systems and applications. It also helps to recognise the vulnerabilities that can be exploited, the need for additional security layers or measures, and the information possibly revealed by scanners.
Vulnerability Assessment Types
Admins planning their vulnerability scanning strategy have multiple approaches at their disposal. In fact, you may wish to try out a variety of scan types as part of your overall security management, as testing your system from different angles can help you cover all the bases. As outlined below, two key distinctions concern the location (internal vs. external) and scope (comprehensive vs. limited) of the scan.
Internal vs. External: With an internal network scan, you will want to run threat detection on the local intranet, which will help you understand security holes from the inside. Similarly, admins should test their network as a logged-in user to determine which vulnerabilities would be accessible to trusted users or users who have gained access to the network. On the other hand, there are benefits to performing an external scan, approaching the evaluation from the wider internet, as many threats arise from intentional and/or automatic outside hacks. Likewise, it is important to scan the network as an intruder might, to understand what data could fall into the hands of those without trusted network access.
Comprehensive vs. Limited: A comprehensive scan accounts for just about every type of device managed on the network, including servers, desktops, virtual machines, laptops, mobile phones, printers, containers, firewalls, and switches. This means scanning operating systems, installed software, open ports, and user account information. Additionally, the scan might identify unauthorized devices. Ideally, with a comprehensive scan, no risks go overlooked. A limited scan, in contrast, focuses on a particular subset of devices, such as specific servers or software, to obtain a targeted picture of their security.
Vulnerability Assessment Life-Cycle
The vulnerability assessment has a life-cycle that attackers do not need to follow (there are some parts related to remediation*) but security professionals should. There are multiple versions of this vulnerability assessment life-cycle, all of them very similar. This document is going to use a six-phase system; versions with fewer or more phases only differ in how much they group the same areas together.
All the phases are executed in a loop, continuously, to keep systems and resources secure. This is not a one-time effort: in the same way that systems evolve and grow, new vulnerabilities can appear, allowing attackers to find their way in.
1. Creating a Baseline
In this phase, security professionals create an inventory of all resources and assets that will help to manage and prioritise the assessment. They will also try to gather as much knowledge as possible about the infrastructure, security controls, policies and standards implemented in the organisation. Gathering this information pursues the objective of creating a plan, scheduling the tasks, and managing and executing them with adequate consideration of priorities.
2. Vulnerability Assessment
In this phase, the system is exhaustively examined: its security measures, policies and controls, default configurations, misconfigurations, faults and vulnerabilities. All the elements involved are closely inspected using tools and manual examination of individual systems. The objective is to have, at the end of the assessment, a report that shows all the detected vulnerabilities and problems, their scope and their priorities.
3. Risk Assessment
In this phase, all the detected vulnerabilities are reviewed, scoping them and their impact on the corporate network or organisation.
4. Remediation
All the detected vulnerabilities are fixed, usually following the impact priority assigned to them.
5. Verification
Check that all the remediated vulnerabilities are gone and, even more important, that the remediations have not introduced any additional vulnerabilities.
6. Monitoring
In this phase, security professionals keep an eye on network traffic and system behaviour, trying to detect any intrusions.
Vulnerability Assessment Approaches
There are multiple approaches an organisation can take to keep itself safe: buying a security product to install on the internal network, hiring a third-party service-based solution, applying different protocols or tests depending on the type of system reviewed, or adapting as new information about the environment is discovered, making the testing approach more dynamic.
Vulnerability Assessment Best Practices
Some recommendations for effective vulnerability assessments are:
Security professionals should have a full understanding of the tools they are going to use: on one hand, to be able to use all the power of the tools and to choose the appropriate ones; on the other hand, to understand the possible consequences or downsides of running the tools on the organisation's network.
Security professionals should be disciplined and organised, to avoid jumping from one system to another and skipping or forgetting systems.
Security professionals, when time is limited, should focus on priorities and follow some kind of classification criteria to inspect the system from more critical to less critical.
Security professionals should run vulnerability scans as often as possible.
Vulnerability Scoring Systems
Common Vulnerability Scoring System
The Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities. CVSS consists of three metric groups: Base, Temporal, and Environmental. The Base metrics produce a score ranging from 0 to 10, which can then be modified by scoring the Temporal and Environmental metrics. A CVSS score is also represented as a vector string, a compressed textual representation of the values used to derive the score. Thus, CVSS is well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability severity scores. Two common uses of CVSS are calculating the severity of vulnerabilities discovered on one’s systems and as a factor in prioritization of vulnerability remediation activities.
The CVSS v3.0 ratings, by base score range, are as follows:
None: 0.0
Low: 0.1 – 3.9
Medium: 4.0 – 6.9
High: 7.0 – 8.9
Critical: 9.0 – 10.0
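The mapping from a CVSS v3.0 base score to its qualitative rating is easy to express in code. A small helper, assuming the standard v3.0 rating boundaries (None 0.0, Low up to 3.9, Medium up to 6.9, High up to 8.9, Critical above):

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.0 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

For example, EternalBlue (CVE-2017-0144) has a base score of 8.1, which this helper would classify as High.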
Common Vulnerabilities and Exposures
Common Vulnerabilities and Exposures (CVE) is a list of common identifiers for publicly known cybersecurity vulnerabilities. CVE is:
One identifier for one vulnerability or exposure.
One standardized description for each vulnerability or exposure.
A dictionary rather than a database.
How disparate databases and tools can “speak” the same language.
The way to interoperability and better security coverage.
A basis for evaluation among services, tools, and databases.
Free for public download and use.
Industry-endorsed via the CVE Numbering Authorities, CVE Board, and numerous products and services that include CVE.
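CVE identifiers follow a fixed textual format, CVE-YYYY-NNNN, where the sequence number has at least four digits (longer sequences have been allowed since 2014). A small parser sketch, useful when correlating scanner output with vulnerability databases:

```python
import re

# CVE-<4-digit year>-<sequence of at least 4 digits>
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(identifier: str):
    """Split a CVE identifier into (year, sequence number).

    Returns None when the identifier does not match the official format.
    """
    match = CVE_PATTERN.match(identifier)
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))
```

Normalising identifiers this way lets disparate tools "speak" the same language, which is exactly the interoperability goal CVE was created for.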
There are a lot of vulnerability scanners, manual or automated, that can help security professionals or attackers to find vulnerabilities. Some of them are:
Nessus: Nessus is a proprietary vulnerability scanner created by Tenable Network Security.
OpenVAS: This is an open-source tool serving as a central service that provides vulnerability assessment tools for both vulnerability scanning and vulnerability management.
Nikto: Nikto is a widely used open-source web server scanner employed for assessing probable issues and vulnerabilities.
Retina CS Community: Retina CS is an open-source and web-based console that has helped the vulnerability management to be both simplified and centralized.
Wireshark: The Wireshark free vulnerability scanner relies on packet sniffing to understand network traffic, which helps admins design effective countermeasures.
* Attackers can sometimes decide to fix a system after installing their own backdoor, just to prevent other attackers from compromising it, allowing them not to be disturbed.
The index of this series of articles can be found here.
In the previous phases, attackers have gathered information about a target; by this point, they should have a very good picture and understanding of the target's resources and extensive knowledge about it. But it is not yet a very detailed picture, it is more like a sketch. Now it is time to add the details. Attackers need more concrete information about the different resources discovered, information that can help them gain access to the systems. Examples of this sensitive information are routing paths, DNS information, user and group information, deeper knowledge about the network resources and the network itself, protocol-related information, SNMP, etc.
To obtain all this extra information, in this phase attackers start to actively connect to the target systems, generating queries against the different systems to extract as much information as possible. As mentioned before, information like that listed below can be obtained and is desired:
Application and banners
Network sharing information
Some points in the list have already been named in previous phases. The main difference here is that, while before everything attackers did was analyse passively obtained information, now they are actively contacting the target systems. This implies a big change: everything done up to this point raised few legal concerns but, at this point, enumeration may cross some legal boundaries, and there is some chance of being traced, as attackers are actively connecting to the target.
Multiple enumeration techniques can be used in this phase. Some of the most well-known are listed in this document.
One tool attackers can use to perform SNMP enumeration is snmp-check, which allows them to enumerate SNMP devices and presents the output in a very human-readable format. It can be useful for penetration testing or systems monitoring.
Enumeration Using Email Ids
Extracting information from email IDs can provide attackers with usernames, domain names, organisation divisions, etc., depending on the format the email addresses have.
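A harvesting step like this is often just pattern matching over scraped pages or documents. A minimal sketch using a simplified email regular expression (real addresses can be more exotic than this pattern allows, so treat it as illustrative):

```python
import re

# Simplified pattern: captures (username, domain) from each address.
EMAIL_RE = re.compile(r"([A-Za-z0-9._%+-]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})")

def harvest_emails(text: str):
    """Extract (username, domain) pairs from scraped text.

    An address like first.last@corp.example hints both at the username
    convention in use and at the organisation's domain.
    """
    return EMAIL_RE.findall(text)
```

From a handful of harvested pairs an attacker can often infer the naming convention (e.g. first.last) and generate candidate usernames for other employees.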
Enumeration Using Default Passwords
Another way of enumeration is based on the use of default passwords. Devices and software usually come with a default configuration, including passwords. Unfortunately, unless the administrator or person in charge of their installation customises the configuration, a lot of them keep their default settings. It is not difficult for attackers to find these default settings and try to access devices under the target's supervision.
Enumeration Using SNMP
Simple Network Management Protocol (SNMP) is an Internet Protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behaviour. Devices that typically support SNMP include cable modems, routers, switches, servers, workstations, printers, and more.
SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base (MIB) which describe the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications.
Attackers can use default or guessed community strings to extract information about the devices. The community string takes different forms in different versions of SNMP.
The features of available SNMP variants are:
V1: No support for encryption or hashing. A plain-text community string is used for authentication.
V2: Retrieval of data in bulk from agents is added.
V3: Support for both encryption (DES) and hashing (MD5 or SHA) is added.
Brute Force Attack on Active Directory
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. It is included in most Windows Server operating systems as a set of processes and services. Initially, Active Directory was only in charge of centralized domain management. Starting with Windows Server 2008, however, Active Directory became an umbrella title for a broad range of directory-based identity-related services. It authenticates and authorizes all users and computers in a Windows domain type network—assigning and enforcing security policies for all computers and installing or updating software.
Targeting an active directory server can give the attackers access to usernames, addresses, credentials, privileged information, etc.
Enumeration through DNS Zone Transfer
Zone transfer is the process of copying the contents of the zone file on a primary DNS server to a secondary DNS server. Using zone transfer provides fault tolerance by synchronizing the zone file in a primary DNS server with the zone file in a secondary DNS server.
UDP port 53 is used for DNS requests to DNS servers. TCP port 53 is used for DNS zone transfers, to ensure the transfer goes through.
This can offer attackers information like the location of DNS servers, DNS records, and some other network information such as hostnames, IP addresses, usernames, etc.
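Under the hood, an AXFR request is an ordinary DNS query whose QTYPE is 252. The wire-format name encoding used in the question section can be sketched as follows, entirely offline, with no actual transfer performed (a real client would use a library such as dnspython or the `dig axfr` command):

```python
def encode_dns_name(domain: str) -> bytes:
    """Encode a domain name in DNS wire format: each label is prefixed
    with its length, and the name ends with a zero byte."""
    out = bytearray()
    for label in domain.strip(".").split("."):
        if not 0 < len(label) < 64:
            raise ValueError("labels must be 1-63 bytes long")
        out.append(len(label))
        out.extend(label.encode("ascii"))
    out.append(0)  # root label terminates the name
    return bytes(out)

QTYPE_AXFR = 252  # full zone transfer
QCLASS_IN = 1    # Internet class
```

A properly configured primary server restricts AXFR to its secondaries, which is why open zone transfers are treated as a misconfiguration worth reporting.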
Some interesting ports and services to enumerate are:
DNS zone transfer
Microsoft RPC Endpoint Mapper
Global catalogue service
NetBIOS (Network Basic Input/Output System) is a program that allows applications on different computers to communicate within a local area network (LAN). It does not in itself support a routing mechanism so applications communicating on a wide area network (WAN) must use another “transport mechanism” (such as Transmission Control Protocol) rather than or in addition to NetBIOS.
The NetBIOS service uses a unique 16-character ASCII string to identify network devices over TCP. The first 15 characters identify the device and the 16th character identifies the service. NetBIOS uses TCP port 139, and NetBIOS over TCP (NetBT) uses the following TCP and UDP ports:
UDP port 137 – name services
UDP port 138 – datagram services
TCP port 139 – session services
Using NetBIOS attackers can discover:
List of machines within a domain
One tool that serves attackers to perform NetBIOS enumeration is nbtstat on Windows systems, or nbtscan on GNU/Linux-based systems.
For example, we can execute nbtscan -v -s : 192.168.1.0/24 to scan a class C network and print the results in a script-friendly format using a colon as the field separator.
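The 16th-byte suffix convention relies on NetBIOS "first-level encoding" (RFC 1001), in which each half-byte of the padded name is mapped to a letter between 'A' and 'P'. A sketch of the encoder, which such scanners use when building name-service queries:

```python
def netbios_encode(name: str, suffix: int = 0x00) -> bytes:
    """First-level encode a NetBIOS name (RFC 1001).

    The name is upper-cased and space-padded to 15 bytes, the one-byte
    service suffix is appended, and each nibble of the 16 raw bytes is
    mapped to a letter in the range 'A'-'P', yielding 32 bytes.
    """
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    out = bytearray()
    for byte in raw:
        out.append((byte >> 4) + ord("A"))   # high nibble
        out.append((byte & 0x0F) + ord("A")) # low nibble
    return bytes(out)
```

The classic RFC example is the name "FRED" with suffix 0x00, which encodes to EGFCEFEE followed by eleven CA pairs (the space padding) and AA (the suffix).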
LDAP, the Lightweight Directory Access Protocol, is a mature, flexible, and well supported standards-based mechanism for interacting with directory servers. It is often used for authentication and storing information about users, groups, and applications, but an LDAP directory server is a fairly general-purpose data store and can be used in a wide variety of applications.
There are a lot of tools that allow attackers to execute LDAP enumeration, one of them being enum4linux.
The Network Time Protocol (NTP) is used to synchronize the time of a computer client or server to another server or reference time source, such as a radio or satellite receiver or modem. It provides client accuracies typically within a millisecond on LANs and up to a few tens of milliseconds on WANs relative to a primary server synchronized to Coordinated Universal Time (UTC) via a Global Positioning Service (GPS) receiver, for example. Typical NTP configurations utilise multiple redundant servers and diverse network paths, in order to achieve high accuracy and reliability. Some configurations include cryptographic authentication to prevent accidental or malicious protocol attacks.
Multiple services rely on clock settings for login and logging purposes. NTP helps with the correlation of events that occurred in a system.
There are two main concerns around NTP. The first is that attackers can replace legitimate servers or introduce a new NTP server into a network to mislead forensic investigators when an attack is investigated.
The second is that attackers can extract useful information, such as host information from services using the NTP server, client IP addresses, machine names and operating systems, and network information, even internal IPs, depending on the network configuration.
Some tools are ntpdc, used to query the ntpd daemon about its current state and to request changes in that state, and others like ntptrace or ntpq.
In addition to these tools, Nmap, making use of its NSE scripts, can perform NTP enumeration. As an example, the next command can be executed:
nmap -sU -p 123 --script ntp-info <ip>
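Under the hood, tools like ntpq or Nmap's ntp-info just exchange small UDP datagrams on port 123. As an illustrative sketch (not any particular tool's code), the minimal 48-byte client packet such queries start from can be built with a few lines of Python:

```python
import struct

# First byte of an NTP packet: LI (2 bits) = 0, Version (3 bits) = 3,
# Mode (3 bits) = 3 (client request).
first_byte = (0 << 6) | (3 << 3) | 3        # 0x1b
packet = struct.pack("!B", first_byte) + b"\x00" * 47   # 48-byte NTP packet

print(len(packet), hex(packet[0]))  # 48 0x1b
```

A tool would send this over UDP to port 123 of the target and parse the reply, whose fields (reference identifiers, timestamps, stratum) are part of what leaks the information described above.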
The Simple Mail Transfer Protocol (SMTP) is a communication protocol for electronic mail transmission. As an Internet standard, SMTP was first defined in 1982 by RFC 821 and updated in 2008 by RFC 5321 to Extended SMTP additions, which is the protocol variety in widespread use today. Mail servers and other message transfer agents use SMTP to send and receive mail messages. SMTP servers commonly use the Transmission Control Protocol on port number 25.
SMTP has multiple commands that make communication between servers possible. By inspecting the different responses to these commands, attackers can figure out which users are valid or invalid. Some of these commands are:
HELO: To identify the domain name of the server.
EXPN: Expand a mailing list into its member addresses.
MAIL FROM: To identify the sender of the email.
RCPT TO: Specify the message recipients.
SIZE: To specify maximum supported size information.
DATA: To start the transfer of the message contents.
RSET: Reset the communication and buffer of SMTP.
VRFY: Verify that a user or mailbox exists.
HELP: Show help.
QUIT: To terminate a session.
An interesting tool for SMTP enumeration is smtp-user-enum, a username-guessing tool primarily for the SMTP service.
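The enumeration logic behind such tools boils down to reading SMTP reply codes to VRFY, EXPN or RCPT TO. The mapping below is an illustrative sketch (real servers vary in their replies, and the helper name is made up):

```python
def classify_vrfy_reply(reply):
    """Rough interpretation of an SMTP VRFY/RCPT reply for user enumeration.

    250/251 suggest the user exists; 252 means the server refuses to verify;
    550/551/553 usually mean the user is unknown. This mapping is an
    illustrative assumption, not a guarantee for every server.
    """
    code = int(reply.split()[0])
    if code in (250, 251):
        return "valid"
    if code == 252:
        return "cannot verify (server refuses VRFY)"
    if code in (550, 551, 553):
        return "invalid"
    return "unknown"

print(classify_vrfy_reply("250 2.1.5 root <root@mail.test>"))       # valid
print(classify_vrfy_reply("550 5.1.1 User unknown"))                # invalid
```

A tool like smtp-user-enum simply loops a username wordlist through one of these commands and records which names come back "valid".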
The Server Message Block (SMB) is a protocol for sharing files, printers, serial ports, and communications abstractions such as named pipes and mail slots between computers.
SMB is a client-server, request-response protocol. The only exception to the request-response nature of SMB is when the client has requested opportunistic locks (oplocks) and the server subsequently has to break an already granted oplock because another client has requested a file open with a mode that is incompatible with the granted oplock. In this case, the server sends an unsolicited message to the client signalling the oplock break.
Servers make file systems and other resources (printers, mailslots, named pipes, APIs) available to clients on the network. Client computers may have their own hard disks, but they also want access to the shared file systems and printers on the servers.
Clients connect to servers using TCP/IP (NetBIOS over TCP/IP), NetBEUI or IPX/SPX. Once they have established a connection, clients can then send commands (SMBs) to the server that allow them to access shares, open files, read and write files, and generally do all the sort of things that you want to do with a file system.
If found, a Samba server whose configuration has not been properly done can give attackers extensive access to the server's hard drive.
Some countermeasures can be applied to make enumeration harder:
SNMP
Configure additional restrictions for anonymous connections.
Upgrade SNMP to SNMPv3.
Change the default community string.
DNS (Zone Transfer)
Avoid publishing private address information in the zone files.
Disable zone transfers for untrusted hosts.
Hide sensitive information from the public.
SMTP
Remove sensitive information from mail responses.
Disable open relay.
Ignore emails to unknown recipients.
Configure different usernames and email addresses.
At this point, attackers have collected enough information about the target to take the next step, network scanning. In this phase, attackers will try to obtain concrete network information about the target resources. Things like:
Identify live hosts.
Identify open and closed ports.
Identify operating system information.
Identify services running on the network.
Identify running processes.
Identify existing security devices.
Identify the system architecture.
During this phase attackers will start to establish contact with the target resources and extract information from the responses, trying to gain more knowledge of the network architecture and possible attack vectors.
The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking and essentially establishes the Internet.
IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.
Two types of IP traffic can be found:
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite.
TCP is connection-oriented: a connection between client and server is established before data can be sent. The server must be listening (passive open) for connection requests before a client initiates the three-way handshake (active open). Retransmission and error detection add reliability but lengthen latency. The handshake ensures a successful and reliable connection between two hosts.
The TCP header contains, among other fields, the source and destination ports, sequence and acknowledgement numbers and a 'Flags' field.
The 'Flags' field deserves a deeper analysis of the possible values it can contain because some of the types of scans we are going to see are closely related to them. We can find the next flag values:
SYN: Initiates a connection between two hosts to facilitate communication.
ACK: Acknowledges the receipt of a packet.
URG: Indicates that the data contained in the packet is urgent and should be processed immediately.
PSH: Instructs the sending system to send all buffered data immediately.
FIN: Tells the remote system that the communication is over; closes the connection gracefully.
RST: Resets a connection.
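These flags are single bits inside the header's flags byte, which is why scanners can mix them freely. A small Python sketch of how they combine, using the standard bit positions from the TCP specification:

```python
# Bit positions of the TCP flags inside the header's flags byte.
FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04, "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def pack_flags(*names):
    """Combine flag names into the byte a crafted packet would carry."""
    value = 0
    for name in names:
        value |= FLAGS[name]
    return value

def unpack_flags(value):
    """Decode a received flags byte back into flag names."""
    return {name for name, bit in FLAGS.items() if value & bit}

print(hex(pack_flags("SYN")))                    # 0x2 — the probe of a SYN scan
print(sorted(unpack_flags(0x12)))                # ['ACK', 'SYN'] — an open port's reply
print(hex(pack_flags("FIN", "PSH", "URG")))      # 0x29 — an Xmas scan probe
```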
As mentioned before, a TCP communication starts with a three-way handshake.
There are multiple network scanners that allow sending packets containing the different flags but, it is worth saying, there are also tools that can be used to handcraft packets. Python, for example, using the Scapy library, gives the versatility to create them programmatically, and the tool hping3 can help with this too. This allows attackers finer control when testing a firewall or doing advanced port scanning. Also, a low-level point of view is always instructive.
We can generate some packets with the SYN flag set (hping3's -S option) to do some port scanning:
This test returns SYN/ACK if the port is open, RST/ACK if the port is closed, and no response if it is filtered. In this case, the destination port of the packet is open.
Hping3 is a very versatile tool with multiple options.
The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768. With UDP, computer applications can send messages, in this case, referred to as datagrams, to other hosts on an Internet Protocol (IP) network. Prior communications are not required to set up communication channels or data paths.
UDP uses a simple connectionless communication model with a minimum of protocol mechanisms. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram. It has no handshaking dialogues, and thus exposes the user’s program to any unreliability of the underlying network; there is no guarantee of delivery, ordering, or duplicate protection.
In the same way that TCP packets have been generated with hping3, UDP packets can be generated with hping3 (using its -2 option, which selects UDP mode):
In this case, it is not possible to reach the server because its port 80 is using the TCP protocol.
It is good for attackers to follow some kind of methodology or system to avoid missing something in their attempts. As said before, every attacker has their own methodology (even if it is chaos); the steps shown here are just a suggestion:
Check for live systems
Discovering which hosts are alive in the target's network. This can be done using ICMP packets: the attacker sends an ICMP Echo and the server responds with an ICMP Echo Reply if it is alive. The tool ping is an example of this.
ICMP scan: The technique to identify live servers using ICMP packets.
Ping sweep: The technique to identify live servers using ICMP packets at a large scale, using IP ranges.
Discovering open ports: Once attackers have a list of live servers they can try to discover what ports are open on them.
SSDP scanning: The Simple Service Discovery Protocol (SSDP) is a network protocol based on the Internet protocol suite for advertisement and discovery of network services and presence information. It accomplishes this without the assistance of server-based configuration mechanisms, such as Dynamic Host Configuration Protocol (DHCP) or Domain Name System (DNS), and without special static configuration of a network host. SSDP is the basis of the discovery protocol of Universal Plug and Play (UPnP) and is intended for use in residential or small office environments.
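An SSDP discovery probe is just an HTTP-like text message sent over UDP to the multicast address 239.255.255.250 on port 1900. A minimal sketch in Python; the commented lines show how it would actually be sent on a LAN:

```python
# The standard SSDP discovery message (M-SEARCH). "ssdp:all" asks every
# UPnP device to respond; MX gives them up to 2 seconds to answer.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
)

# To actually probe a network:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(MSEARCH.encode(), ("239.255.255.250", 1900))
# responses arriving on s identify UPnP devices and their services.
print(MSEARCH.splitlines()[0])  # M-SEARCH * HTTP/1.1
```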
Port scan tools are widely spread. They give us a wealth of information about a live host and its ports.
Without question, the most well known is Nmap. Nowadays, it is not just a port scanner, it can perform some other things but, here, the only interest is its scanning capabilities. Nmap can discover live hosts, open ports, service versions and operating systems, among other things.
When talking about the different scanning techniques, some commands will be shown using Nmap syntax.
hping2 and hping3
hping3 has already been named but, the things that can be done with it and its great capabilities to handcraft packets have not been listed yet. Things like:
Test firewall rules
Advanced port scanning
Testing network performance
Path MTU discovery
Transferring files even across strict firewall rules
Traceroute-like under different protocols
Remote fingerprinting and others
There is a variety of different scanning techniques that attackers can use to gather the desired information:
Full Open Scan
In this type of scan, the three-way handshake is initiated and completed. It is easy for security devices to detect and log. It does not require superuser privileges.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sT <ip_address or range>
Stealth Scan – Half Open Scan
Half Open Scan is also known as stealth scan. This type of scan starts the three-way handshake but, once it has received the initial response that reveals whether a port is open or closed, it interrupts the handshake, making the scan more difficult to detect.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sS <ip_address or range>
Inverse TCP Flag Scan
Inverse TCP flag scanning works by sending TCP probe packets with or without TCP flags. Based on the response, it is possible to determine whether the port is open or closed. If there is no response, then the port is open. If the response is RST, then the port is closed.
Probes with flags set are known as Xmas scans. Probes without flags are known as Null scans.
Xmas scan works by sending a TCP frame with FIN, URG, and PUSH flags set to the target device. Based on the response, it is possible to determine whether the port is open or closed. If there is no response, then the port is open. If the response is RST, then the port is closed. It is important to note that this scan works only for UNIX hosts.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sX <ip_address or range>
A Null Scan works sending a TCP packet that contains a sequence number of 0 and no flags set. Because the Null Scan does not contain any set flags, it can sometimes penetrate firewalls and edge routers that filter incoming packets with particular flags.
The expected result of a Null Scan on an open port is no response. Since there are no flags set, the target will not know how to handle the request. It will discard the packet and no reply will be sent. If the port is closed, the target will send an RST packet in response.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sN <ip_address or range>
A FIN scan works by sending a packet with only the FIN flag set. These packets can bypass firewalls without modification. Closed ports reply to a FIN packet with the appropriate RST packet, whereas open ports ignore the packet at hand. This is typical behaviour due to the nature of TCP and is, in some ways, an inescapable downfall.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sF <ip_address or range>
ACK Flag Probe Scan
ACK flag probe scanning works by sending TCP probe packets with the ACK flag set to determine whether the port is open or closed. This is done by analysing the TTL and WINDOW fields of the received RST packet's header. The port is open if the TTL value is less than 64.
Similarly, the port is also considered to be open if the WINDOW value is not 0 (zero). Otherwise, the port is considered to be closed.
ACK flag probe is also used to determine the filtering rules of the target network. If there is no response, then that means that a stateful firewall is present. If the response is RST, then the port is not filtered.
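The decision rules described above can be condensed into a couple of helper functions. This is only a sketch of the heuristic as stated in this section (the function names are made up), not production scanner logic:

```python
def ack_probe_port_state(ttl, window):
    """Heuristic from above: open if TTL < 64 or a non-zero WINDOW in the RST."""
    if ttl < 64 or window != 0:
        return "open"
    return "closed"

def ack_probe_filtering(reply):
    """No reply suggests a stateful firewall; an RST means the port is unfiltered."""
    return "filtered (stateful firewall)" if reply is None else "unfiltered"

print(ack_probe_port_state(50, 0))     # open   (TTL below 64)
print(ack_probe_port_state(64, 0))     # closed (TTL 64, zero window)
print(ack_probe_filtering(None))       # filtered (stateful firewall)
```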
IDLE/IPID Header Scan
IDLE/IPID header scan works by sending packets with a spoofed source address to the target to determine which services are available. In this scan, attackers use the IP address of a zombie machine for sending out the packets. Based on the IPID of the packet (fragment identification number), it is possible to determine whether the port is open or closed.
Idle scans take advantage of the predictable Identification field value in the IP header: every IP packet from a given source has an ID that uniquely identifies fragments of an original IP datagram, and protocol implementations generally assign values to this mandatory field by a fixed increment. Because transmitted packets are numbered in sequence, you can tell how many packets were transmitted between two packets that you receive.
An attacker would first scan for a host with sequential and predictable IPID values; computers chosen for this stage are known as "zombies". The latest versions of Linux, Solaris, OpenBSD, and Windows Vista are not suitable as zombies since their IPID generation has been patched to be randomised.
Once a suitable zombie is found the next step would be to try to establish a TCP connection with a given service (port) of the target system, impersonating the zombie. It is done by sending an SYN packet to the target computer, spoofing the IP address from the zombie, i.e. with the source address equal to a zombie IP address.
If the port of the target computer is open it will accept the connection for the service, responding with an SYN/ACK packet back to the zombie.
The zombie computer will then send an RST packet to the target computer (to reset the connection) because it did not send the SYN packet in the first place.
Since the zombie had to send the RST packet, it will increment its IPID. This is how an attacker finds out if the target's port is open: the attacker probes the zombie again and, if the IPID has incremented by two (once for the RST sent to the target, once for the reply to the attacker), the port is open; if it has incremented by only one step, the port is closed.
The method assumes that the zombie has no other interactions: if any message other than the RST is sent for other reasons between the attacker's first and second interactions with the zombie, there will be a false positive.
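The IPID arithmetic behind the verdict can be sketched in a few lines of Python (the function name is made up; note the 16-bit wrap-around of the field):

```python
def idle_scan_verdict(ipid_before, ipid_after):
    """Infer the target port state from the zombie's IPID increment.

    +2: the zombie answered the target's SYN/ACK with an RST (port open).
    +1: only our own probe incremented the counter (port closed or filtered).
    Anything else: the zombie was not idle, so the result is unreliable.
    """
    delta = (ipid_after - ipid_before) % 65536   # the IPID field wraps at 16 bits
    if delta == 2:
        return "open"
    if delta == 1:
        return "closed or filtered"
    return "inconclusive (zombie not idle?)"

print(idle_scan_verdict(1000, 1002))   # open
print(idle_scan_verdict(65535, 0))     # closed or filtered (wrapped counter)
```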
UDP scanning uses the UDP protocol to test whether the port is open or closed. In this scan, there is no flag manipulation. Instead, ICMP is used to determine if the port is open or not. So, if a packet is sent to a port and the ICMP port unreachable packet is returned, then that means that the port is closed. If, however, there is no response, then the port is open.
To perform this type of scan with Nmap, the next command can be executed:
nmap -sU -v <ip_address or range>
Scanning beyond IDS
Another common technique used to bypass security measures like firewalls, IDS and IPS is fragmentation.
Fragmenting the payload and sending small packets makes detection, usually based on known payloads, more difficult. To be able to decide whether an attack is taking place, security measures like the ones named before need to reassemble the packets and the contained payload to be able to compare it. Packet fragmentation can be combined with sending the packets out of order and with pauses to create delays.
Banner grabbing is a technique that focuses its efforts on determining the services that are running on a target machine and their versions. It listens to the responses sent by the different services running on the target machine and examines them to extract the service banner information. By gathering information about the running services, attackers can determine existing vulnerabilities and bugs and try to exploit them.
There are multiple tools to perform banner grabbing, e.g. Netcat, Telnet or Nmap.
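A minimal banner grabber is only a TCP connect plus a read. The sketch below demonstrates it against a throwaway local server with a made-up banner, so it runs without touching any real host:

```python
import socket
import threading

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP service and read whatever it sends first (its banner)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

# --- Demo against a throwaway local server; the banner is hypothetical ---
def _fake_service(server_sock):
    conn, _ = server_sock.accept()
    conn.sendall(b"SSH-2.0-ExampleServer_1.0\r\n")  # made-up banner
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=_fake_service, args=(server,), daemon=True).start()

banner = grab_banner("127.0.0.1", port)
print(banner)  # SSH-2.0-ExampleServer_1.0
server.close()
```

Against a real target, the same `grab_banner` call pointed at port 22, 25 or 80 returns the string that identifies the service and, often, its version.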
OS fingerprinting is a technique used to identify the operating systems running on the target machines. By gathering information about the running operating systems, attackers can determine existing vulnerabilities and bugs and try to exploit them. There are two types of OS fingerprinting:
Active OS fingerprinting
Passive OS fingerprinting
Active OS fingerprinting
The active OS fingerprinting is a similar technique to scanning. It sends TCP and UDP packets and observes the response from the target host.
To perform this type of scan with Nmap, the next command can be executed:
nmap -O -v <ip_address or range>
Passive OS fingerprinting
Passive OS fingerprinting requires a detailed assessment of traffic. It can be performed by analysing network traffic, paying special attention to the TTL (Time To Live) value and the Window Size found in the headers of TCP packets. Some common examples of these values (initial TTL, TCP Window Size) are:
Google customised Linux: TTL 64, TCP Window Size 5720
Windows Vista, 7 and Server 2008: TTL 128, TCP Window Size 8192
Cisco Router (iOS 12.4): TTL 255, TCP Window Size 4128
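Using the example values above, passive fingerprinting can be sketched as a lookup keyed on the initial TTL and window size. Real tools such as p0f use far richer signature databases; this is only an illustration:

```python
# Example signatures taken from the small table above; a real fingerprinting
# database contains many more attributes and entries.
SIGNATURES = {
    (64, 5720): "Google customised Linux",
    (128, 8192): "Windows Vista, 7 and Server 2008",
    (255, 4128): "Cisco Router (iOS 12.4)",
}

def guess_os(ttl, window):
    # The observed TTL has been decremented once per hop, so round it up
    # to the nearest common initial value (64, 128 or 255) before the lookup.
    for initial in (64, 128, 255):
        if ttl <= initial:
            return SIGNATURES.get((initial, window), "unknown")
    return "unknown"

print(guess_os(ttl=121, window=8192))  # Windows Vista, 7 and Server 2008
print(guess_os(ttl=57, window=5720))   # Google customised Linux
```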
A vulnerability scanner is an application that identifies and creates an inventory of all the systems, from the server to the coffee maker, connected to a network. For each device that it identifies it also attempts to identify the operating system it runs and the software installed on it, along with other attributes such as open ports and user accounts. Most vulnerability scanners will also attempt to log in to systems using default or other credentials in order to build a more detailed picture of the system.
After building up an inventory, the vulnerability scanner checks each item in the inventory against one or more databases of known vulnerabilities to see if any items are subject to any of these vulnerabilities.
The result of a vulnerability scan is a list of all the systems found and identified on the network, highlighting any that have known vulnerabilities that may need attention.
Many vulnerability scanners are proprietary products, but there are also a small number of open-source vulnerability scanners or free "community" versions of proprietary scanners. These include:
Nessus (it has a free limited version)
At the end of the scanning phase, the attacker’s objective is to possess extensive knowledge about the target’s network, to keep this information updated and to use it to compromise the system.
There are different ways to keep track of the diagrams attackers are going to be able to generate; they range from pen and paper to digital diagramming tools.
In addition to all the scans an attacker can perform, some advanced network monitoring tools can be used to generate these network diagrams.
The combination of all these scans and tools should leave attackers with a pretty good knowledge of the target’s network.
For obvious reasons, attackers want to remain anonymous to avoid being caught and prosecuted for their actions. For this purpose, proxies can be a very handy tool.
A proxy server is basically another computer which serves as an intermediary through which internet requests are processed. By connecting through one of these servers, the attacker’s computer sends requests to the server which then processes these requests and returns the responses. In this way, it serves as an intermediary between the attacker machine and the target machines. Proxies are used for several reasons such as to filter web content, to go around restrictions such as parental blocks, to screen downloads and uploads and to provide anonymity when surfing the internet.
Proxy chaining is a basic technique that makes use of multiple proxy servers to make it harder to detect the real origin of the internet requests. Attackers connect to one server after another to create a chain of proxy servers between them and the target system, making any effort of reverse tracing harder and harder the more proxy servers they have used. The downside of this technique is that connections are less stable and traffic tends to slow down with every extra connection.
An anonymiser is a tool that completely hides or removes identity-related information to make activity untraceable. The basic purposes of using anonymisers are:
Identity theft prevention
Bypass restrictions and censorships
Untraceable activity on the Internet
A very popular anonymiser is Tails. It is a well-known censorship circumvention tool based on GNU/Linux: a live image designed to help users navigate leaving no trace behind. Tails preserves privacy and anonymity.
Spoofing IP Addresses
IP spoofing is the creation of Internet Protocol (IP) packets which have a modified source address in order to either hide the identity of the sender, to impersonate another computer system or both.
Sending and receiving IP packets is a primary way in which networked computers and other devices communicate, and constitutes the basis of the modern internet. All IP packets contain a header which precedes the body of the packet and contains important routing information, including the source address. In a normal packet, the source IP address is the address of the sender of the packet. If the packet has been spoofed, the source address will be forged.
It is a technique often used by bad actors to invoke DDoS attacks against a target device or the surrounding infrastructure. Spoofing is also used to masquerade as another device so that responses are sent to that targeted device instead.
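Forging the source address is literally just writing a different value into the IPv4 header before recomputing its checksum. A sketch with Python's struct module, using addresses from the documentation ranges (actually sending such a packet would additionally require a raw socket and elevated privileges):

```python
import struct
import socket

def checksum(data):
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src, dst, payload_len=0):
    """Pack a 20-byte IPv4 header whose source address we choose freely."""
    version_ihl = (4 << 4) | 5                  # IPv4, 5 * 32-bit words
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20 + payload_len,       # ToS, total length
        0x1234, 0,                              # identification, flags/frag offset
        64, socket.IPPROTO_TCP, 0,              # TTL, protocol, checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst),
    )
    csum = checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

# A packet claiming to come from 203.0.113.7 (a documentation address).
hdr = build_ipv4_header("203.0.113.7", "198.51.100.9")
print(len(hdr), hex(checksum(hdr)))  # 20 0x0 — a valid header checksums to zero
```

Nothing in the header proves the source field is genuine, which is exactly what reflection DDoS attacks and the idle scan above exploit.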
The footprinting phase allows an attacker to gather information regarding internal and external security architectures. The collection of information also helps to:
Identify possible vulnerabilities within a system
Reduce the focus of the attack
Discover obvious and non-obvious resources available
Draw a network map
This is the first phase of an ethical hacking test. The person performing the test is going to gather as much information as possible regarding the target or its infrastructure.
Possible sources of information or techniques can be:
Publicly available information like social networks, newspapers or enterprise online resources like webpages
Employees’ social and or professional networks
WHOIS and DNS registers
Google hacking techniques
The overall objective of this phase is to keep the interaction with the target at minimum levels and to gather information without any detection or alerting.
A step by step methodology can be:
Authorisation and scope definition of the assessment
Footprinting using search engines
Footprinting using social network sites
Document all findings
Footprinting Using Search Engines
Search engines are an amazing tool to find information about a target and, also, they allow gathering this information without having real contact with the target. Pages like Google or Bing allow searching for any information and finding and collecting it from every available place on the Internet. Information like office addresses, founders, employee names, employee information, partners, competitors, websites and much more. Also, the cached information these search engines store is sometimes a good resource.
Once we have found the official websites, we can explore them to obtain plenty of good information by accessing the public parts of the webpages. But we can also explore the restricted parts of the webpages; this can be done by trial and error or by using tools available for this purpose like Netcraft – Search Web by Domain. Another interesting tool they offer is the Netcraft – Site report.
If we execute the Search Web by Domain tool, we can see the next result:
Another very interesting tool worth mentioning is Shodan. As they describe themselves, Shodan is the world's first search engine for Internet-connected devices. It can give infrastructure information, even available ports; a lot of very useful information.
Collecting Location Information
Obvious tools to find information about a company and its surroundings without the need to go to its location or locations are map tools like Google Maps or Bing Maps. But, in general, any other map or location service will do.
The reason to do this is not just in case a physical test needs to be done; all the gathered information about the surroundings can also be used for social engineering attacks.
People Search Online Services
As has been mentioned before, the more information is gathered, the better. An important part of this information is the information about the employees of a company. Nowadays, multiple online services offer the possibility of identifying phone numbers, addresses and people.
Some of them are paid services but with a few searches, free ones can be found.
Gather Information from Financial Services
Some search engines can provide financial information about a target. And not just its financial information: they can also provide a list of competitors and some information about them.
Job sites can be a true gold mine. They are going to offer us not only information about employees, positions, curriculums and relations but, if close attention is paid to the job descriptions and job offers, a huge amount of extra information can also be gathered, e.g. departments, technologies or software.
A different approach can be followed by creating, for example, a fake job position to target a specific person and collect their personal information.
Information Gathering Using Groups, Forums and Blogs
All these elements can leak sensitive information, and they can allow attackers with a fake profile to reach and interact with the companies and the people working there. It does not matter if these are official or non-official channels; information can be leaked in either.
Google Advanced Search (Google Hacking)
Everyone knows the Google search engine, everyone uses it and knows its basic functionality but, what not everyone knows is that the search engine offers some specific operators that can help to refine and focus the search operations, making the results more relevant. Some of the operators are:
site: – Search for the results in a given domain.
related: – Search for similar websites.
cache: – Display the webpages stored in the cache.
link: – List all the websites having a link to a specific website.
allintext: – Search for pages containing all the given keywords in their text.
intext: – Search for pages containing a specific keyword in their text.
allintitle: – Search for pages containing all the given keywords in the title.
intitle: – Search for pages containing a specific keyword in the title.
allinurl: – Search for pages containing all the given keywords in the URL.
inurl: – Search for pages containing a specific keyword in the URL.
Google Hacking or Google Dorking is a combination of hacking techniques that allow finding security holes within an organisation's network and systems using the Google search engine and other applications powered by Google. With endless combinations of operators and endless use cases, a database categorising possible queries has been created; it is well known as the GHDB (Google Hacking Database).
Social Networking Sites
In the old days, attackers needed to be creative to obtain information about people; nowadays, people just throw their information online and an attacker just needs to go there to collect it. The places where the information and the attackers meet are social networks. There are tons of useful pieces of information waiting to be collected that can easily be used to focus attacks or social engineering attacks.
Some of the information that can be found on social networks correlates with the information an attacker is looking for, depending on the actions of the social network users:
When people maintain their profile, an attacker can collect:
Photo of the target
Date of birth
Family and friends information
When people update their status, an attacker can collect:
Most recent personal information
Most recent location
Activities and interests
Upcoming events information
Web Application Security Scanners
This includes the monitoring and investigation of the target organisation's websites. Attackers will try to obtain information like the software running and its versions, operating systems, folder structures, database information and, in general, any information they will be able to leverage in the next steps of the attack.
The Netcraft tool has been named before but, there are other tools that can be used for these purposes. Here are just a couple of examples:
Burp Suite: It is a web application security scanner with a free community edition.
Zaproxy: It is a graphical tool for testing web application security. It intends to provide a comprehensive solution for web application security checks.
As an example of how to use Burp Suite, we can check this video.
Detecting the OS
Some of the tools that have already been named, such as Netcraft and Shodan, can be used to resolve this information with a simple query using their search engines.
It is worth saying that, if the only thing we want to find is a specific kind of connected device such as routers, servers or IoT devices, this can be done using the Shodan search engine and its variety of filters.
For example, a simple search of D-Link offers a long list of these kinds of devices.
Web Spiders or Web Crawlers
Web spiders or web crawlers are automatic tools that surf and collect information from the Internet. They usually target a website and extract specific information like names, email addresses or any other information of this type.
Nowadays, the business of Risk Management and Threat Intelligence is growing, and companies in this space design their crawlers to cover forums, leaked-information pages and communities, code repositories and much more. If it is out there, it can be crawled. After all, every search engine crawls the Internet trying to index it.
A good definition of what they are can be found here.
There are a lot of options and, even with a few lines of code, a basic one can be built. Different attackers can have different favourites or even combine multiple crawlers to fit their needs. For those with some programming background, a good exercise is to take a look at the source code of a simple one.
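As a taste of how little code a basic spider needs, its core (extracting the links from a fetched page) fits in a few lines of Python's standard library. A real crawler would fetch pages with urllib, feed them to this parser and then follow the collected links; the sample HTML below is made up:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag — the core of any web spider."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

sample = ('<html><body><a href="/about">About</a> '
          '<a href="mailto:hr@example.com">HR</a></body></html>')
parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # ['/about', 'mailto:hr@example.com']
```

Note how even this toy version already harvests an email address, the kind of detail crawlers are pointed at.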
Mirroring a Website
One option is to completely download a website for offline analysis, allowing attackers to analyse the source code of a website and its structure in an offline environment.
As in the previous case, there are a lot of options available, with their pros and cons. Sometimes one does the job, sometimes a combination of them is needed; it depends on personal preferences and use cases. Some examples are:
Attackers can check how corporate webpages have changed, trying to find some extra information. The page Wayback Machine offers the possibility, given a webpage domain, of browsing across its modifications and different versions. It is truly a curious exercise.
Monitoring Web Updates
When an attacker is planning to target a company, it is sometimes interesting to keep an eye on the changes made to its websites. Doing this manually is tedious and unrewarding, which is why content monitoring tools exist. They track and monitor changes on any website under consideration so immediate action can be taken.
Again, there are multiple services and tools, and attackers pick the one that best fits their purposes.
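The core idea behind most of these monitoring tools can be sketched in a few lines: store a digest of the page content and compare it on the next visit. The snippet below is a minimal illustration using the Python standard library; real services also fetch on a schedule and normalise dynamic content (timestamps, ads, session tokens) before hashing.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Stable digest of a page's raw content."""
    return hashlib.sha256(content).hexdigest()

def has_changed(known_digest: str, content: bytes) -> bool:
    """Compare the current content against the stored baseline."""
    return fingerprint(content) != known_digest

baseline = fingerprint(b"<html>press release v1</html>")
print(has_changed(baseline, b"<html>press release v1</html>"))  # False
print(has_changed(baseline, b"<html>press release v2</html>"))  # True
```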
All businesses nowadays make extensive use of electronic communications, especially email, both internally and externally. These messages contain tons of information about the company, e.g. financial or technical information.
Beyond the information the body of an email can give to an attacker, tracking email communications can be very useful. The information listed below can be obtained using tracking tools:
Sender’s IP address.
Sender’s mail server.
Time and date information.
Authentication system information of the sender’s mail server.
Tools like PoliteMail can help attackers track Outlook messages. As we can see, not all the named tools are designed for hacking purposes but, the truth is, legitimate tools can be used for illegitimate purposes.
Another interesting source of information in an email is its headers. Email headers can be explored manually, but some tools help attackers trace email communications hop by hop and recover IP addresses.
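As a rough illustration of what these tools automate, the following snippet uses Python's standard email module to walk the Received headers of a raw message and pull out the bracketed IP addresses. The message itself is made up (hostnames and addresses from documentation ranges); since servers prepend a Received header at each hop, the last one is the closest to the sender.

```python
import re
from email import message_from_string

# Made-up message for illustration, using RFC 5737 documentation IP ranges.
RAW = """\
Received: from mail.example.org (mail.example.org [203.0.113.7])
\tby mx.victim.test with ESMTP; Mon, 1 Jan 2024 10:00:00 +0000
Received: from sender-pc ([198.51.100.23])
\tby mail.example.org; Mon, 1 Jan 2024 09:59:58 +0000
From: alice@example.org
Subject: Hello

Body here.
"""

msg = message_from_string(RAW)
ips = []
for hop in msg.get_all("Received"):
    match = re.search(r"\[([\d.]+)\]", hop)
    if match:
        ips.append(match.group(1))

# Headers are prepended at each hop, so the last entry is closest to the sender.
print(ips)      # ['203.0.113.7', '198.51.100.23']
print(ips[-1])  # probable origin: 198.51.100.23
```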
At this point, attackers gather information and reports about the target's competitors, including legal news, press releases, financial information, analysis reports, and upcoming projects and plans. Attackers can identify:
When did the company begin?
Evolution of the company.
Authority of the company.
Background of an organisation.
Strategies and planning.
Monitoring Website Traffic
The ranking of the target's website, a geographical view of its users, total and segmented user numbers, daily statistics and much more are just a few examples of the information that can be obtained. Some services can help with this:
An organisation's reputation can be tracked too, using online reputation management (ORM) tools. These tools track reputation and rankings, allowing the attacker to study consumer opinions about the target brands.
WHOIS is a query and response protocol that is widely used for querying databases that store the registered users or assignees of an Internet resource, such as a domain name, an IP address block or an autonomous system. WHOIS lookups can help attackers find out who is behind a target domain. The WHOIS databases are maintained by the Regional Internet Registries (RIRs), and registrations are divided into five regions:
African Network Information Centre (AFRINIC): Africa
American Registry for Internet Numbers (ARIN): Antarctica, Canada, parts of the Caribbean, and the United States
Asia-Pacific Network Information Centre (APNIC): East Asia, Oceania, South Asia, and Southeast Asia
Latin American and Caribbean Network Information Centre (LACNIC): Most of the Caribbean and all of Latin America
Réseaux IP Européens Network Coordination Centre (RIPE NCC): Europe, Central Asia, Russia, and West Asia
WHOIS lookups offer complete domain registration information such as:
Domain name server information
ASN (Autonomous System Number)
Email and postal address of the registrar and admin
There are plenty of WHOIS tools. As examples, an online one and the most common command-line one, installed in almost any system, are shown below:
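Under the hood, WHOIS is a very simple protocol (RFC 3912): open a TCP connection to port 43, send the query terminated by CRLF, and read until the server closes the connection. The following sketch shows a raw query function plus a naive parser for the usual key: value response format; the sample response is made up for illustration, and the query function naturally needs network access to run.

```python
import socket

def whois_query(domain, server="whois.iana.org", timeout=10):
    """Raw WHOIS (RFC 3912): TCP port 43, send the query, read until EOF."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def parse_whois(text):
    """Collect 'key: value' lines into a dict, skipping % comment lines."""
    record = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("%"):
            key, _, value = line.partition(":")
            record[key.strip().lower()] = value.strip()
    return record

# Illustrative sample response; a real one comes from whois_query(domain).
sample = "% IANA WHOIS server\ndomain: EXAMPLE.ORG\norganisation: Example Org\nstatus: ACTIVE\n"
print(parse_whois(sample)["status"])  # ACTIVE
```

The command-line whois client does essentially this, plus following referrals from the IANA server to the registry responsible for the domain.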
Domain Name System (DNS) is a hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols.
Several records can be created associated with a DNS entry:
A Record: An A record (Address Record) points a domain or subdomain to an IP address, e.g. google.co.uk -> 126.96.36.199.
CNAME: A CNAME (Canonical Name) points one domain or subdomain to another domain name, allowing you to update one A Record each time you make a change, regardless of how many Host Records need to resolve to that IP address. i.e. imap.example.org -> mail.example.org.
MX Entry: An MX Entry (Mail Exchanger) directs email to a particular mail server. Like a CNAME, MX Entries must point to a domain and never point directly to an IP address.
TXT Record: A text record was originally intended for human-readable text. These records are dynamic and can be used for several purposes.
SRV Record: An SRV (Service) record points one domain to another domain name using a specific destination port. SRV records allow specific services, such as VOIP or IM, to be directed to a separate location.
AAAA Record: The AAAA record is similar to the A record, but it allows you to point the domain to an IPv6 address.
NS Record: Specifies the authoritative name servers for the domain.
SOA Record: Indicates authority for the domain (Start of Authority).
PTR Record: IP-to-host mapping (reverse lookup).
RP Record: Responsible person.
HINFO Record: Host information.
As with the WHOIS tools, there are plenty of DNS lookup tools. As before, an online one and a command-line one are shown.
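To see what a DNS query actually looks like on the wire, the sketch below builds a query packet by hand following RFC 1035. It only constructs the bytes; sending them over UDP port 53 and parsing the answer records is left out, since that is exactly what tools like dig and nslookup do for us.

```python
import struct

def build_dns_query(name, qtype=1, txid=0x1234):
    """Build a DNS query packet by hand (RFC 1035). qtype 1 = A record."""
    # Header: ID, flags (only RD set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    # Question section: QNAME + QTYPE + QCLASS (1 = IN).
    return header + qname + struct.pack(">HH", qtype, 1)

packet = build_dns_query("example.org")
print(len(packet))    # 29
print(packet[12:25])  # b'\x07example\x03org\x00'
```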
Attackers try to collect as much information as possible about the target system to find ways to penetrate it, and network footprinting is one of the most important parts of this process. Types of information that can be found with network footprinting tools are:
Network address ranges
OS and application version information
Path state of the host and the applications
Structure of the application and back-end servers
Some tools attackers can use to achieve their goals are:
WHOIS (already discussed)
As a proof of concept, the next image represents the execution of the traceroute command, present in almost all systems. The image shows the path between source and destination hop by hop, listing the hops and the latency between them.
Footprinting through Social Engineering
Social engineering has been mentioned a few times in this document but has not been properly defined. The term refers to techniques of psychological manipulation used to gather information from social interactions, online or offline. It has proven to be an invaluable source of information; as the saying goes, the human is, sometimes, the weakest link in the security chain.
There is an almost infinite number of social engineering techniques and, after a few conversations with social engineers, it is easy to realise that every one of them has a style adapted to their interpersonal skills. Despite this, a few basic techniques can be listed:
Eavesdropping: The act of secretly or stealthily listening to the private conversations or communications of others without their consent. This includes listening to, reading or accessing any source of information without the owner being aware. The practice is widely regarded as unethical and, in many jurisdictions, is illegal.
Shoulder surfing: Taken literally, gathering information by standing behind targets while they interact with sensitive information. It is used to obtain data such as personal identification numbers (PINs), passwords and other confidential material, for example, the keystrokes on a device or sensitive information on the screen.
Dumpster diving: It is salvaging from large commercial, residential, industrial and construction containers for unused items discarded by their owners, but deemed useful to the picker.
Impersonation: Impersonation differs from other forms of social engineering because it occurs in person rather than over the phone or through email. The social engineer "impersonates", or plays the role of, someone the targets are likely to trust or obey convincingly enough to fool them into allowing access to offices, information or systems. This type of social engineering plays on people's natural tendencies to believe that people are who they say they are and to follow instructions when asked by an authority figure. It involves the conscious manipulation of a victim to obtain information without the individual realising that a security breach is occurring.
Phishing: A fraudulent attempt to obtain sensitive information by disguising oneself as a trustworthy entity in an electronic communication, typically carried out by email spoofing or instant messaging. It often directs users to enter personal information on a fake website that matches the look and feel of the legitimate site.
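One classic phishing trick is a link whose visible text names the legitimate site while the underlying href points somewhere else. The sketch below is a naive check for that mismatch, with hypothetical example domains; real mail filters go much further, handling subdomain relationships, redirects and homoglyph lookalike domains.

```python
from urllib.parse import urlparse

def link_mismatch(display_text, href):
    """Flag links whose visible text names a different host than the real target."""
    if "//" not in display_text:
        display_text = "//" + display_text  # let urlparse treat it as a netloc
    shown = urlparse(display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

print(link_mismatch("www.mybank.com", "https://login.attacker.example/portal"))  # True
print(link_mismatch("www.mybank.com", "https://www.mybank.com/portal"))          # False
```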
Some of the information attackers can obtain using social engineering includes:
Credit card information
Usernames and passwords
Security devices and technology information
Operating systems information
IP addresses and name server’s information
One very interesting tool is Maltego. Maltego is an open-source intelligence and graphical link analysis tool for gathering and connecting information for investigative tasks. Using Maltego attackers can automate the process of gathering information from different data sources.
Recon-ng is a full-featured Web Reconnaissance framework written in Python. Complete with independent modules, database interaction, built-in convenience functions, interactive help, and command completion, Recon-ng provides a powerful environment in which open source web-based reconnaissance can be conducted quickly and thoroughly.
Metasploit Framework is another impressive tool with multiple uses, and it can be used to scan and gather information about a target. The Pro version can automate some of the steps in the next phases of an attack, but the free version is more than enough for this phase. You can find a comparison of both versions here.
Countermeasures of Footprinting
Among all the policies that can be put in place to try to prevent footprinting, probably the most important is to provide education, training and awareness to the employees of an organisation. Without this, no matter how many policies or countermeasures companies set (network restrictions, good server configurations, double checks on reports and press releases), everything will at some point fail if the organisation's users are not properly trained.
The index of this series of articles can be found here.
Information security comprises the methods and processes used to protect information and information systems from unauthorised access, disclosure, usage or modification. It ensures the confidentiality, integrity and availability of information.
Some of the concepts associated with information security that can help readers better understand this series of articles are:
Data breach: Companies possess multiple pieces of sensitive information that must be stored and protected. Information like:
Dates of birth
In general, any personal or sensitive information belonging to customers or employees is susceptible to being gathered by attackers after an intrusion and leaked. Such a leak is called a data breach.
Hack value: This value describes the target’s level of attraction for an attacker.
Zero-day attack: An attack that exploits vulnerabilities that have not been disclosed yet, and that can be carried out even before developers identify, address and release any patch.
Vulnerability: The term vulnerability refers to a weak point, loophole or any entry point to a system or network which can be helpful and utilised by attackers to intrude a target.
Daisy-chaining: It is the consecutive execution of attacks using the same information or the information acquired in the previous attempt to gain access to a network or system.
Exploit: An exploit is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability to cause unintended or unanticipated behaviour to occur on computer software, hardware, or something electronic.
Doxing: The term doxing refers to the publication of information associated with an individual.
Payload: The part of an attack or of transmitted data that performs the malicious action, such as deleting data, sending spam or encrypting data. Malware such as worms or viruses carries such payloads.
Bot: A bot is a type of software application or script that performs automated tasks on command. Bad bots perform malicious tasks that allow attackers to remotely take control over an affected computer. Once infected, these machines may also be referred to as zombies.
Elements of Information Security
The CIA Triad is a well-known, venerable model for the development of security policies used in identifying problem areas, along with necessary solutions in the arena of information security. The CIA Triad brings us the terms: Confidentiality, Integrity and Availability.
Together, these three principles form the cornerstone of any organization’s security infrastructure; in fact, they should function as goals and objectives for every security program. The CIA Triad is so foundational to information security that anytime data is leaked, a system is attacked, a user takes a phishing bait, an account is hijacked, a website is maliciously taken down, or any number of other security incidents occur, you can be certain that one or more of these principles have been violated.
Confidentiality refers to an organization's efforts to keep their data private or secret. In practice, it is about controlling access to data to prevent unauthorized disclosure. Typically, this involves ensuring that only those who are authorized have access to specific assets and that those who are unauthorized are actively prevented from obtaining access. Additionally, within a group of authorized users there may be further, more stringent limitations on precisely which information each user is allowed to access.
Integrity refers to the quality of something being whole or complete. In InfoSec, integrity is about ensuring that data has not been tampered with and, therefore, can be trusted. It is correct, authentic, and reliable. Ensuring integrity involves protecting data in use, in transit and when it is stored no matter where.
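In practice, integrity is commonly verified with a cryptographic digest: recompute the hash of the data and compare it with a trusted value stored elsewhere. A minimal illustration with made-up data:

```python
import hashlib

document = b"Quarterly report: revenue up 4%"
trusted_digest = hashlib.sha256(document).hexdigest()

# Any later modification, however small, changes the digest completely.
tampered = b"Quarterly report: revenue up 40%"
print(hashlib.sha256(document).hexdigest() == trusted_digest)  # True: intact
print(hashlib.sha256(tampered).hexdigest() == trusted_digest)  # False: tampered
```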
Systems, applications, networks and data are of little value to an organization and its customers if they are not accessible when authorized users need them. In a simple way, availability means that networks, systems and applications are up and running. It ensures that authorized users have timely, reliable access to resources when they are needed.
Confidentiality risks: Loss of privacy, unauthorised access to information, identity theft. Controls: encryption, authentication, access control.
Integrity risks: Information no longer reliable or accurate, fraud. Controls: maker/checker, quality assurance, audit logs.
Availability risks: Business disruption, loss of customer confidence, loss of revenue. Controls: business continuity plans and tests, backups, sufficient capacity.
Authenticity and Non-Repudiation
Authenticity refers to the characteristic of communications, documents or data being genuine and uncorrupted from the original. The major roles of authentication include confirming that users are who they claim to be and ensuring that a message is authentic and has not been altered or forged.
Non-repudiation refers to the ability to ensure that a party to a contract or a communication cannot deny the authenticity of their signature on documents or messages they originated. It is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message. Digital signatures and encryption are used to establish authenticity and non-repudiation of a document or message.
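As a small illustration, an HMAC gives two parties sharing a key both integrity and authenticity of a message. Note that it does not provide non-repudiation, since either party could have produced the tag; that requires asymmetric digital signatures. The key and message below are hypothetical.

```python
import hashlib
import hmac

key = b"shared-secret"  # hypothetical key, established out of band
message = b"Transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                        # True: authentic
print(verify(key, b"Transfer 900 to account 99", tag))  # False: forged
```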
Security, Functionality and Usability Triangle
When designing applications, systems or devices, attributes like security, functionality and usability need to be considered. Unfortunately, there is an interdependency between these three attributes: when security goes up, usability and functionality come down, and the same happens when either of the others is prioritised. Any organisation should balance these three qualities to arrive at a balanced information system.
A triangle can be used to help explain the relationship between the concepts of security, functionality and usability. The use of a triangle is because an increase or decrease in any one of the factors will have an impact on the presence of the other two.
Functionality: It can be defined as the purpose that something is designed or expected to fulfil.
Usability: It can be defined as the degree to which something is able or fit to be used.
Security: It can be defined as referring to all the measures that are taken to protect a system, application or a device as well as ensuring that only people with permission to access them are able to.
Penetration Testing Phases
We can find five different phases in a pentest, each with well-defined boundaries, objectives and goals. These five phases are:
Reconnaissance refers to the preparatory phase where an attacker seeks to gather information about a target prior to launching the attack; in other words, finding all the information within reach. Attackers use every public source they can access to learn about the target, and not just about the company itself: employees, business, operations, network, systems, competitors, etc. Everything that can be learned about the target, through web pages, social networks, social engineering and so on. The objective is to know as much as possible about the victim and the elements around it.
We can find two types of reconnaissance:
Passive: Involves acquiring information without directly interacting with the target.
Active: Involves interacting with the target directly by any means.
Scanning refers to a pre-attack phase where the attacker scans the network for specific information on the basis of what was gathered during reconnaissance. In general, in this step, port scanners, vulnerability scanners and similar tools are used to obtain information about the target environment, such as live machines, open ports on each of those machines, running services, OS details, etc. All this information makes it possible to launch the attack.
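The essence of a TCP port scanner is simply attempting connections. The self-contained sketch below demonstrates this against a local listener it creates itself, so nothing leaves the loopback interface; real scanners like Nmap add parallelism, timing control and service fingerprinting on top of this idea.

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def scan_ports(host, ports):
    return {port: scan_port(host, port) for port in ports}

# Demo against a local listener so the example is harmless and repeatable.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # the OS picks a free port
server.listen(1)
open_port = server.getsockname()[1]

result = scan_ports("127.0.0.1", [open_port])
print(result)  # the listening port is reported as open (True)
server.close()
```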
Gaining access refers to the point where the attacker obtains access to a machine or application inside the target's network. Part of this phase is when the attacker tries to escalate privileges to obtain complete control of the system or, based on the access already obtained, tries to compromise other systems in the network. Here there are multiple tools and possibilities, such as password cracking, denial of service, buffer overflows, session hijacking, etc.
Maintaining access refers to the phase where the attacker tries to retain ownership of the system and make future access to the compromised system easier, especially in case the original entry point is fixed. The attacker can do multiple things: create users in the system, install and hide their own applications, or install backdoors, rootkits or trojans. In some cases, the attacker may even secure the compromised machine to prevent other attackers from taking control of it.
Clearing tracks refers to the activities carried out by an attacker to hide malicious acts. In this phase, the attacker tries to remove all evidence of the machine being compromised, aiming, in the first place, to avoid detection and, in the second place, to obstruct prosecution.
Information Assurance (IA) combines components to assure that information and information systems are secured: integrity, availability, confidentiality and authenticity, as already described.
In addition to these components, there are some methods and processes that can help to achieve information assurance such as:
Policies and processes
Network and authentication
Scanning for network vulnerabilities
Identifying resources and possible problems
Implementation of plans for identified requirements
Application of information assurance controls
Threat modelling is a core element of the Security Development Lifecycle (SDL). It is an engineering technique used to identify the threats, attacks, vulnerabilities and countermeasures that could affect an application. Threat modelling can be used to shape application designs and meet an organisation's security objectives, allowing it to reduce risk.
There are five major threat modelling steps:
Defining security requirements
Creating an application diagram
Identifying threats
Mitigating threats
Validating that threats have been mitigated
Enterprise Information Security Architecture
Enterprise Information Security Architectures (EISAs) are the fundamental concepts or properties of a system in its environment, embodied in its elements and relationships and in the principles of its design and evolution. They establish the purpose, context and principles that provide useful guidance for IT staff to help make secure design decisions. An EISA also defines the environment and the relationships in which the system exists.
An EISA should be defined by business objectives and support the business needs in a flexible way that allows your organization to staff at the level that you require. It should also be utilized as a layered IT defence plan that analyzes the risks and threats to your portfolio, laying out practical standards for how to assess risks, rather than just technical ones. Maintaining a focused EISA strategy is ultimately what will help your organization understand how internal and external forces can and will affect your bottom line in the short and long-term.
Represents the information security organization and process dimensions. This viewpoint reflects the “business of security,” in the sense that it represents the way information security is practised in the organisation, as well as how the “security business” interrelates with the rest of the enterprise via processes, roles, responsibilities and organisational structures.
Represents the information required to run the information security function. It represents the information models used by the security team, as well as the models used to capture the security requirements for enterprise information.
Represents the security infrastructure architectures. It captures the models that are used to abstract varying requirements for security into guidance for required hardware and software configurations.
Network Security Zoning
Zoning is used to mitigate the risk of an open network by segmenting infrastructure services into logical groupings that have the same communication security policies and security requirements. The zones are separated by perimeters (Zone Interface Points) implemented through security and network devices.
Zoning is a logical design approach used to control and restrict access and data communication flows only to those components and users as per security policy. A new zone is defined by a logical grouping of services under the same policy constraints, driven by business requirements. When a new set of policy constraints are established, then a new zone is required.
Basic security zones defined are:
The public zone is entirely open and includes public networks such as the public Internet, the public switched telephone network, and other public carrier backbone networks and services. Restrictions and requirements are difficult or impossible to place or enforce in this zone because it is normally outside the organisation's control. The public zone environment is assumed to be extremely hostile.
Public Access Zone
A PAZ mediates access between operational systems and the public zone. The interfaces to all on-line services should be implemented in a PAZ. Proxy services that allow personnel to access Internet-based applications should be implemented in a PAZ, as should external e-mail, remote access and extranet gateways.
A demilitarized zone (DMZ) is a component within a PAZ.
An Operations Zone (OZ) is the standard environment for routine operations and is where most end-user systems and workgroup servers are installed. With appropriate security controls at the end systems, this zone may be suitable for processing sensitive information; however, it is generally unsuitable for large repositories of sensitive data or critical applications without additional strong, trustworthy security controls.
A Restricted Zone (RZ) provides a controlled network environment generally suitable for business-critical IT services (those having medium reliability requirements, where compromise of the IT services would cause a business disruption) or large repositories of sensitive information, for example, a data centre. It supports access from systems in the public zone via a PAZ.
Information Security Policies
An information security policy (ISP) is a set of rules that guide individuals who work with IT assets. Your company can create an information security policy to ensure your employees and other users follow security protocols and procedures. An updated and current security policy ensures that sensitive information can only be accessed by authorized users.
Creating an effective security policy and taking steps to ensure compliance is a critical step to prevent and mitigate security breaches. To make your security policy truly effective, update it in response to changes in your company, new threats, conclusions drawn from previous breaches, and other changes to your security posture.
The basic goals and objectives of information security policies are:
Cover the security requirements and conditions of the organisation
Protect the organisation's resources
Eliminate legal liabilities
Minimise the wastage of resources
Prevent unauthorised access and modification
Minimise risk
There are some steps to define information security policies:
Risk assessment: Identify possible risks
Guidelines: Learn standards
Management: Discussions with management and related staff
Penalties: Set penalties
Finalisation: Prepare the final version
Agreement: Ensure everyone agrees with and understands the policy
Enforcement: Deploy the policy
Training: Train the employees
Review/Update: Review regularly and update when needed
Types of Security Policies
Promiscuous policy: This policy does not impose any restrictions on the usage of system resources.
Permissive Policy: The policy begins wide open and only known dangerous services, attacks or behaviours are blocked.
Prudent Policy: A prudent policy starts with all services blocked. The administrator enables safe and necessary services individually, and everything, including system and network activity, is logged. It provides the most security while permitting only known but necessary risks.
Paranoid Policy: A paranoid policy forbids everything. There are strict restrictions on all use of company computers, whether system or network usage, and there is either no Internet connection or severely restricted usage. Because of these overly severe restrictions, users often try to find ways around them.
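The difference between permissive and prudent policies can be modelled as a blocklist versus an allowlist. The toy sketch below, with hypothetical service names, shows why a prudent policy fails safe when an unknown service appears:

```python
def permissive_policy(blocked):
    """Everything allowed except known-bad services (blocklist)."""
    return lambda service: service not in blocked

def prudent_policy(allowed):
    """Everything blocked except explicitly approved services (allowlist)."""
    return lambda service: service in allowed

permissive = permissive_policy({"telnet", "ftp"})
prudent = prudent_policy({"https", "ssh"})

for service in ("https", "telnet", "gopher"):
    print(service, permissive(service), prudent(service))
# https True True
# telnet False False
# gopher True False  <- the unknown service slips through the permissive policy
```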
Physical security is an important part of Information Security as it is the first layer of protection. Physical security is the protection of personnel, hardware, software, networks and data from physical actions and events that could cause serious loss or damage to an enterprise, agency or institution. This includes protection from fire, flood, natural disasters, burglary, theft, vandalism and terrorism.
Physical security has three important components:
Access control: Obstacles should be placed in the way of potential attackers and physical sites should be hardened against accidents, attacks or environmental disasters.
Surveillance: Physical locations should be monitored using surveillance cameras and notification systems.
Testing: Disaster recovery policies and procedures should be tested on a regular basis to ensure safety and to reduce the time it takes to recover from disruptive man-made or natural disasters.
Incident response management is the procedure and method of handling an incident that occurs. In information security, incident responses are the remediation actions or steps taken in response to an incident. The first goal is to restore normal service operation as quickly as possible and to minimise the impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained.
While responding to an incident, professionals collect pieces of evidence, information and clues that will be helpful for:
Prevention in the future
Tracking an attacker
Finding holes and vulnerabilities in the system
Incident Management Process
The incident response management processes include:
Preparation for incident response
Detection and analysis of an incident response
Classification of an incident and its prioritisation
Notifications and announcements
Forensic investigation of the incident
Eradication and recovery
Responsibilities of Incident Response Teams
An incident response team (IRT) or emergency response team (ERT) is a group of people who prepare for and respond to any emergency incident. This team is generally composed of specific members designated before an incident occurs. Its members ideally are trained and prepared to fulfil the roles required by the specific situation.
Some of the responsibilities of this team are:
Take action according to the Incident Response Plan (IRP). If there is no plan, or the plan is not applicable, the team follows the leader's instructions to perform coordinated actions
Examination and evaluation of events, determination of damage or scope of an attack
Document the event
If required, take the support of external security professionals or consultants
If required, take the support of local law enforcement
A vulnerability assessment is the process of examining, identifying and analysing a system or application. Through vulnerability assessments, weaknesses and threats can be identified and scoped, and extra security layers can be defined.
Types of Vulnerability Assessments
Wireless network assessment
Penetration testing is the process of hacking a system with the permission of its owner to evaluate its security, covering concepts such as hack value, target of evaluation (TOE), attacks, exploits, zero-day vulnerabilities and other components such as threats, vulnerabilities or daisy-chaining.
Some of the objectives of penetration testing are:
To identify threats and vulnerabilities to the organisation's assets
To provide a comprehensive assessment of policies, procedures, design and architecture
To define remediation actions to secure assets before they are used by attackers to breach security
To identify what attackers can access and steal
To identify what information can be stolen and how it could be misused
To test and validate the security protection and identify the need for any additional protection layer
To modify and upgrade the currently deployed security architecture
To reduce the expense of IT security by enhancing the return on security investment (ROSI)
Red and Blue Teams
Red teams are focused on penetration testing of different systems and their levels of security programs. They are there to detect, prevent and eliminate vulnerabilities.
A red team imitates real-world attacks that can hit a company or an organization, and they perform all the necessary steps that attackers would use. By assuming the role of an attacker, they show organizations what could be backdoors or exploitable vulnerabilities that pose a threat to their cybersecurity.
A blue team is similar to a red team in that it also assesses network security and identifies any possible vulnerabilities.
But what makes a blue team different is that once a red team imitates an attacker and attacks with characteristic tactics and techniques, a blue team is there to find ways to defend, change and re-group defence mechanisms to make the incident response much stronger.
Types of Penetration Testings
Black-Box Penetration Testing
In a black-box engagement, the consultant does not have access to any internal information and is not granted internal access to the client’s applications or network. It is the job of the consultant to perform all reconnaissance to obtain the sensitive knowledge needed to proceed, which places them in a role as close to the typical attacker as possible. This type of testing is the most realistic, but also requires a great deal of time and has the greatest potential to overlook a vulnerability that exists within the internal part of a network or application. A real-life attacker does not have any time constraints and can take months to develop an attack plan waiting for the right opportunity.
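Much of that reconnaissance starts with very simple techniques such as banner grabbing: connecting to an exposed service and recording whatever it announces about itself. A minimal sketch, assuming the tester is authorised to probe the target host and port:

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Connect to a service and read whatever it announces about itself.

    Returns the banner string, or an empty string if the service sends
    nothing (or the connection fails).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        # Closed port, filtered port, or unreachable host
        return ""
```

For example, `grab_banner("ftp.example.org", 21)` (a hypothetical target) might return something like `220 vsftpd 2.3.4 ready`, which immediately tells the tester the service type and version to research further. Only ever run this against systems you are explicitly permitted to test.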
Grey-Box Penetration Testing
An engagement that allows a higher level of access and increased internal knowledge falls into the category of grey-box testing. Comparatively, a black-box tester begins the engagement from a strict external viewpoint attempting to get in, while the grey-box tester has already been granted some internal access and knowledge that may come in the form of lower-level credentials, application logic flow charts, or network infrastructure maps. Grey-box testing can simulate an attacker that has already penetrated the perimeter and has some form of internal access to the network.
White-Box Penetration Testing
The final category of testing is called white-box testing, which allows the security consultant to have completely open access to applications and systems. This allows consultants to view source code and be granted high-level privilege accounts to the network. The purpose of white-box testing is to identify potential weaknesses in various areas such as logical vulnerabilities, potential security exposures, security misconfigurations, poorly written development code, and lack-of-defensive measures. This type of assessment is more comprehensive, as both internal and external vulnerabilities are evaluated from a “behind the scenes” point of view that is not available to typical attackers.
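Because white-box testers can view source code, part of the work resembles static analysis. As a minimal sketch (real reviews use full static-analysis tools and the reviewer's judgement, not a handful of regexes), a script might flag lines containing well-known risky constructs:

```python
import re

# Illustrative patterns only; a real white-box review covers far more.
RISKY_PATTERNS = {
    r"\beval\(": "eval() on untrusted input can execute arbitrary code",
    r"\bexec\(": "exec() on untrusted input can execute arbitrary code",
    r"shell\s*=\s*True": "shell=True enables command injection",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def review_source(source):
    """Return (line_number, warning) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

# Hypothetical snippet under review
sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(review_source(sample))
```

Here both lines of the hypothetical snippet are flagged: a hard-coded credential on line 1 and a dangerous `eval()` call on line 2, the kind of poorly written development code that white-box testing is designed to surface.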
Security Testing Methodologies
There are some industry-leading penetration testing methodologies, including the OWASP Testing Guide, the Open Source Security Testing Methodology Manual (OSSTMM), NIST SP 800-115 and the Penetration Testing Execution Standard (PTES).
Security Audits vs Vulnerability Assessments vs Penetration Testing
Security audits: Security audits evaluate whether all security measures are being followed by an organisation, department, etc., with no concern for threats or vulnerabilities.
Vulnerability Assessment: It is the evaluation or discovery of threats and vulnerabilities that could be exploited to impact the performance or delivery of an organisation's services.
Penetration Testing: It is a security assessment process that includes not only security audits and vulnerability assessments but also demonstrable attacks, along with their solutions and remediations.
Types of Attackers
There are different types of attackers. The list of attacker types can be very long, but similar general classifications recur. One such classification is:
Black hats: Individuals with extraordinary computing skills, resorting to malicious or destructive activities where they don’t have permissions or authorization to be on the network or to do what they are doing. Typically, they are known as crackers.
White hats: Individuals possessing hacker skills and using them for defensive purposes; they have permission to do what they are doing and are also known as security analysts.
Gray hats: Individuals who work both offensively and defensively at various times, usually driven by their own beliefs and thoughts. Sometimes they act as black-hat hackers, sometimes as white-hat hackers.
Suicide hackers: Individuals who aim to bring down critical infrastructures for a cause and are not worried about facing jail terms or any other kind of punishment.
Script kiddies: Unskilled hackers who compromise systems by running scripts, tools and software developed by real hackers, without the knowledge to understand what they are doing and why.
Cyber terrorists: Individuals with a wide range of skills, motivated by religious or political beliefs to create fear by large-scale disruption of computer networks.
State-sponsored hackers: Individuals employed by the government to penetrate and gain top-secret information and to damage information systems of other governments.
Hacktivist: Individuals who promote a political agenda by hacking, especially by defacing or disabling websites.
In addition to all the technical considerations, one very important thing that security professionals need to keep in mind is that different countries have different laws, alongside defined industry standards. Things like:
Payment Card Industry Data Security Standard (PCI-DSS): It is a worldwide standard that was set up to help businesses process card payments securely and reduce card fraud. It achieves this through enforcing tight controls surrounding the storage, transmission and processing of the cardholder data that businesses handle.
ISO/IEC 27001:2013: It specifies the requirements for establishing, implementing, maintaining and continually improving an information security management system within the context of the organization. It also includes requirements for the assessment and treatment of information security risks tailored to the needs of the organization.
Health Insurance Portability and Accountability Act (HIPAA): It establishes a national set of security standards for protecting certain health information that is held or transferred in electronic form.
Sarbanes Oxley Act (SOX): It is a federal law that established sweeping auditing and financial regulations for public companies. It was created to help protect shareholders, employees and the public from accounting errors and fraudulent financial practices.
Digital Millennium Copyright Act (DMCA): DMCA is a copyright law from the United States that, among other things, criminalises the circumvention of technological measures that protect copyrighted works.
Federal Information Security Management Act (FISMA): It is a United States federal law that made it a requirement for federal agencies to develop, document, and implement an information security and protection program.