Today, another small presentation, the third in a series about Cloud Computing and associated tools. This one is about Infrastructure as a Service.
Introduction to Cloud Computing
Today, just a small presentation, the first in a series about Cloud Computing and associated tools.
Container attack vectors
We live in a containerised world. Container solutions like Docker are now so widespread that they are no longer a niche thing or a buzzword; they are mainstream. Many companies already use them and most of the ones that do not are probably dreaming of doing so.
The only problem is that they are still something new. Their adoption has been fast, arriving like a storm to all kinds of industries that use technology. The problem is that, from a security point of view, we, as an industry, do not have all the awareness we should have. Containers and, especially, containers running in cloud environments partially hide the fact that they exist and that they need to be part of our security considerations. Some companies use them assuming they are completely secure, trusting that the cloud providers or the companies that build the containers take care of everything and, for less technology-focused businesses, they are an abstraction rather than a real and tangible thing. They are not the old bare-metal servers, desktop machines or virtual machines companies were used to and, up to a certain point, worried about, because those were things that could be touched.
All of that means that, while security concerns for web applications are first-class citizens (not as much as they should be, but the situation has improved a lot in the last few years), security concerns about containers seem to be the black sheep of the family: no one talks about them. And this is not right. They deserve the same level of concern and the same attention, and should be part of the development life cycle.
In the same way that web applications can be attacked in multiple ways, containers have their own attack vectors, some of which we are going to see here. We will see that some of them can easily be compared with known attack vectors in spaces we are more aware of, like web applications.
Vulnerable application code
Containers package applications and third-party dependencies that can contain known flaws or vulnerabilities. There are thousands of published vulnerabilities that attackers can take advantage of to exploit our systems if they are found in the applications running inside the containers.
The best way to avoid running containers with known vulnerabilities is to scan the images we are going to deploy and, not just as a one-time thing: this should be part of our delivery pipelines and the scans should run all the time. In addition to known vulnerabilities, scanners should try to find out-of-date packages that need an update. Some available scanners even try to find possible malware in the images.
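As a minimal sketch of this idea using Trivy, an open-source image scanner (the image name and severity threshold here are just illustrative), a non-zero exit code can be used to fail the pipeline when findings appear:
# scan an image and fail the build if HIGH or CRITICAL vulnerabilities are found (image name is hypothetical)
trivy image --severity HIGH,CRITICAL --exit-code 1 myregistry/myapp:1.4.2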
Badly configured container images
When configuring how a container is going to be built, some vulnerabilities can be introduced by mistake, or when proper attention is not paid to the build process, and these can later be exploited by attackers. A very common example is configuring the container to run with unnecessary root permissions, giving it more privileges on the host than it really needs.
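As a hedged illustration with plain Docker (the image name and the UID/GID are hypothetical), we can both check and limit this:
# check which user an image is configured to run as (an empty value usually means root)
docker inspect --format '{{.Config.User}}' myregistry/myapp:1.4.2
# run the container as an unprivileged user and drop all Linux capabilities
docker run --user 1000:1000 --cap-drop ALL myregistry/myapp:1.4.2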
Build machine attacks
Like any piece of software, the software we use to run CI/CD pipelines and build container images can be attacked successfully, and attackers can add malicious code to our containers during the build phase, obtaining access to our production environment once the containers have been deployed and, even, using these compromised containers to pivot to other parts of our systems or networks.
Supply chain attacks
Once containers have been built, they are stored in registries and retrieved or “pulled” when they are going to be run. Unfortunately, no one can guarantee the security of these registries and an attacker can compromise the registry and replace the original image with a modified one that includes a few surprises.
Badly configured containers
When creating configuration files for our containers, e.g. a YAML file, we can make mistakes and add configurations to the containers that we do not need. Some possible examples are unnecessary access privileges or unnecessarily open ports.
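A minimal sketch of the opposite, assuming Kubernetes as the platform (all names are illustrative): a pod that explicitly refuses to run as root, forbids privilege escalation, drops all capabilities and declares no ports it does not need:
# apply a pod spec with a restrictive securityContext (names and image are hypothetical)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myregistry/myapp:1.4.2
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
EOF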
Vulnerable host
Containers run on host machines and, in the same way we try to ensure containers are secure, hosts should be too. Sometimes they run old versions of orchestration components with known vulnerabilities, or other components used for monitoring. A good idea is to minimise the number of components installed on the host, configure them correctly and apply security best practices.
Exposed secrets
Credentials, tokens and passwords are all necessary if we want our system to be able to communicate with other parts of the system. One risk lies in the way we supply these secret values to the container and the applications running in it. There are different approaches, with varying levels of security, that can be used to prevent any leakage.
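As a simple hedged example with plain Docker (the values are obviously fake): passing a secret as an environment variable exposes it to anyone able to inspect the container, while mounting it as a read-only file narrows the exposure:
# risky: the value is visible through "docker inspect" and to the whole process tree
docker run -e DB_PASSWORD=s3cr3t myregistry/myapp:1.4.2
# safer: mount the secret as a read-only file the application reads at startup
docker run -v "$(pwd)/db_password.txt:/run/secrets/db_password:ro" myregistry/myapp:1.4.2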
Insecure networking
The same as non-containerised applications, containers need to communicate using networks. Some level of attention will be necessary to set up secure connections among components.
Container escape vulnerabilities
Containers are supposed to run in isolation from the hosts where they are running. In general, container runtimes like “containerd” or “CRI-O” have been heavily tested and are quite reliable but, as always, there are still vulnerabilities to be discovered. Some of these vulnerabilities can let malicious code running inside a container escape out into the host. Given the severity of this, some stronger isolation mechanisms can be worth considering.
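One hedged example of such a mechanism: sandboxed runtimes like gVisor which, once installed and registered with the Docker daemon under the name “runsc”, can be selected per container:
# run the container under gVisor instead of the default runtime (assumes gVisor is installed; image name is hypothetical)
docker run --runtime=runsc myregistry/myapp:1.4.2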
Some other risks related to containers, but not directly about the containers themselves, can be:
- Attacks on the code repositories of the applications deployed on the containers, poisoning them with malicious code.
- Hosts accessible from the Internet should be protected as expected with other tools like firewalls, identity and access management systems, secure network configurations and others.
- When containers run under an orchestrator, e.g. Kubernetes, a door to new attack vectors is opened. Configurations, permissions or access not controlled properly can give attackers access to our systems.
As we can see, some of the attack vectors are similar to those existing in more mature areas like networking or web applications but, due to the abstraction and the easy-to-use approach, container security is, unfortunately, often left out of the considerations.
Reference: “Container Security by Liz Rice (O’Reilly). Copyright 2020 Vertical Shift Ltd., 978-1-492-05670-6”
Fallacies of Distributed Computing
Distributed architecture styles, while much more powerful in terms of performance, scalability and availability than monolithic architecture styles, have significant trade-offs. One of the groups of issues is known as the fallacies of distributed computing.
A fallacy is something that is believed or assumed to be true but is not. All the fallacies, when analysed, are common sense; they are even things we experience every day but, for some reason, they are sometimes forgotten when designing new distributed systems.
Fallacy #1: The Network is Reliable
OK, with cloud environments we do not trip over network cables any more but, while networks have become more reliable over time, the fact is that they still remain generally unreliable, and this is a factor that influences distributed systems due to their reliance on the network for communication.
Fallacy #2: Latency is Zero
Let’s face it, the network sometimes goes faster and sometimes slower. Is there someone out there who has never seen a streaming film suddenly freeze for a few seconds just when the killer is behind the main character? Communication latency on a network is never zero and it is always bigger than local latencies. When working with distributed systems we need to consider the average latency. And, not only that, we should consider the 95th to 99th percentiles too because the values can differ.
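As an example with hypothetical numbers: a service with an average latency of 20 milliseconds can still have a 99th percentile of 300 milliseconds and, if a single user request fans out to ten internal calls, roughly one request in ten will hit that slow path at least once.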
Fallacy #3: Bandwidth is Infinite
We are sure about that, right? Explain that to your three flatmates when you are trying to call home and they are, independently, watching their favourite films on a streaming platform. The fact that the system is distributed increases the amount of information that travels through the network, and every byte matters. A simple request of 200 kilobytes may seem small, but multiply it by the number of requests made per second and include all the requests among services performed at the same time. This number can grow easily.
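A quick back-of-the-envelope calculation with made-up numbers: 200 kilobytes per request at 500 requests per second is about 100 megabytes per second, roughly 800 megabits per second, enough to saturate a gigabit link before counting any other traffic among services.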
Fallacy #4: The Network is Secure
Just two words: “cybercriminals everywhere” (no reason to be scared). The surface area for threats and attacks increases by magnitudes when moving from a monolithic to a distributed architecture. Hence the need to secure all endpoints, even when communicating among internal services.
Fallacy #5: The Topology Never Changes
Raise your hand if you think IT members never change anything on your network over time. No hands? Good! Routers, hubs, switches, firewalls, networks and appliances, even cloud networks, can suffer changes or need updates and modifications that can affect service communications or latencies on the network.
Fallacy #6: There is Only One Administrator
Have you ever asked someone from IT to do something, asked a different person in the department later about the progress, and needed to explain your request all over again because it was never logged? That happens, a lot. Sometimes things get lost, the coordination is not good enough, the communication is not good enough or… you see where I am going.
Fallacy #7: Transport Cost is Zero
If you have an internet connection at home, your internet provider probably sends you a bill from time to time; if it does not, please stop using your neighbours’ Wi-Fi. It is not exactly the same, but it exemplifies that, to be able to communicate, certain infrastructure and network topology are necessary: servers, firewalls, multiple load balancers, proxies… And the needs of monolithic applications are substantially different from the needs of distributed systems.
Fallacy #8: The Network is Homogeneous
I do not have a more mundane example for this one to make it easy to remember, but a real one should be simple enough: a company using multiple cloud services at the same time. All of them are going to work well initially but not all of them have been built and tested in exactly the same way. There can be differences in the services, like latency or reliability; basically, everything named in the previous fallacies.
Reference: “Fundamentals of Software Architecture by Mark Richards and Neal Ford (O’Reilly). Copyright 2020 Mark Richards, Neal Ford, 978-1-492-04345-4”
AWS CDK Intro
During the last few years, we have been hearing about a lot of new practices applied to Software Development and DevOps. One of these topics is Infrastructure as Code. Probably, in this space, two of the most well-known solutions are Terraform and CloudFormation.
We have already discussed Terraform on this blog previously. If we take a look at the basic code in the example on this blog, or you are already using it, you are probably aware that when the infrastructure grows, the Terraform code, being quite verbose, grows fast too and, unless we have a very good code structure, it can get very messy. The same can be said about CloudFormation. In addition, they are not as developer-friendly as common programming languages are and they need to be learned as a new language.
To solve the first problem, code turning messy over time, there are some measures to split the Terraform code, like creating modules but, for most of the projects I have taken a look at and articles I have read about it, it seems there is no agreement about the best way to split the code and, usually, if you do not work regularly with this kind of project, it is very hard to find things and move around.
Trying to solve this problem and the second one, using languages well known by developers, there are projects like Pulumi trying to bring infrastructure code to familiar programming languages and tools, including packaging and APIs.
Investigating a little bit about this, I have found another one, the AWS Cloud Development Kit (AWS CDK), which is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. As said, AWS CDK was initially designed for CloudFormation but there is an implementation that generates Terraform code.
Today’s purpose is just to play a little bit with this technology, just the one that generates CloudFormation code, and not to compare them (I do not know enough about it to compare them… yet).
I was going to write a long post with examples but I have found a great introduction workshop offered by AWS that does the job nicely. For this reason, I am just leaving here my project on Github.
One of the nice things I have found about AWS CDK is that it supports multiple languages:
- Typescript
- Javascript
- Python
- Java
- C#
And, after writing a few lines, it feels very comfortable to be writing code using Java APIs and all the power of the multiple tools that exist in the Java ecosystem. And, of course, to be using Java packages to organise the infrastructure code.
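For reference, the day-to-day workflow with the CDK CLI is short enough to list here (the project itself can be anything; this is just the standard sequence of commands):
cdk init app --language java   # bootstrap a new CDK project with Java bindings
cdk synth                      # generate the CloudFormation template from the code
cdk diff                       # compare the code with the currently deployed stack
cdk deploy                     # provision the resources through CloudFormation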
Just another alternative in our toolbox that seems worth exploring.
Advantages of Cloud Computing
Nowadays, it is clear that cloud computing has revolutionized how technology is obtained, used and managed. And all of this has changed how organizations budget and pay for technology services.
Cloud Computing has given us the ability to quickly reconfigure our environments to adapt them to changing business requirements. We can run cost-effective services that can scale up and down depending on usage or business demands and, all of this, using pay-per-use billing. This makes huge upfront infrastructure expenses unnecessary for enterprises and levels the playing field between being a big enterprise and a new one.
There are multiple and diverse advantages, and most of them depend on the enterprise, the business and the needs they have. But there are six of them that tend to appear in every case:
Variable vs. Capital Expense
Instead of having to invest in data centers and servers before knowing how they are going to be used, you can pay when you consume computing resources and pay only for how much you consume.
Economies of Scale
By using cloud computing, we can achieve lower variable costs than we would get on our own. Cloud providers like AWS can aggregate hundreds of thousands of customers in the cloud, achieving higher economies of scale, which translates into lower prices.
Stop Guessing Capacity
When we make a capacity decision prior to deploying an application, we usually end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, there is no more need for guessing. The cloud allows us to access as much or as little as we need to cover our business needs and to scale up or down as required, without advanced planning, within minutes.
Increase Speed and Agility
Deploying new resources to cover new business cases, or implementing prototypes and POCs to experiment, can now be achieved with a few clicks, provisioning new resources in a simple, easy and fast way, usually reducing costs and time and allowing companies to adapt or explore.
Focus on Business Differentiators
Cloud computing allows enterprises to focus on their business priorities, instead of on the heavy lifting of racking, stacking, and powering servers. This allows enterprises to focus on projects that differentiate their businesses.
Go Global in Minutes
With cloud computing, enterprises can easily deploy their applications to multiple locations around the world. This allows them to provide redundancy and lower latencies. This is no longer reserved for the largest enterprises; cloud computing has democratized this ability.
Install Cloud Foundry CLI in macOS
The easiest way to install the Cloud Foundry CLI on a macOS system is to use the Homebrew package manager.
To install Homebrew, we just need to execute the next line in our Terminal:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Once installed, you can “tap” the Cloud Foundry repository:
brew tap cloudfoundry/tap
Finally, you install the Cloud Foundry CLI with:
brew install cloudfoundry/tap/cf-cli
Once you have installed the CLI tool, you should be able to verify that it works, by opening a Terminal and running:
cf --version
This should show something like:
cf version 6.30.0+decf883fc.2017-09-01
If you see a result like this, the CLI is installed correctly and you can start playing.
Now, we need a trial account with a Cloud Foundry provider. There are multiple options we can check on the Cloud Foundry Certified Providers page. Once we have created the account, we can proceed to log in with our CLI.
cf login
API endpoint> https://api.eu-gb.bluemix.net
Email>john.doe@example.org
Password>
Authenticating...
OK
Targeted org example
Targeted space dev
API endpoint: https://api.eu-gb.bluemix.net (API version: 2.75.0)
User: john.doe@example.org
Org: example
Space: dev
In the above output, the email is the address you used to sign up for a service.
Once you have successfully logged in, you are authenticated with the system and the Cloud Foundry provider you use knows what information you can access and what your account can do.
The CLI tool stores some of this information: the Cloud Foundry API endpoint and a “token” given when you authenticated. When you logged in, instead of saving your password, Cloud Foundry generated a temporary token that the CLI can store. The CLI can then use this token instead of asking for your email and password again for every command.
The token will expire, usually in 24 hours, and the CLI will need you to log in again. When you do, it will remember the last API endpoint you used, so you only have to provide your email and password to re-authenticate for another 24 hours.
First commands
- cf help: Shows CLI help.
- cf help <command>: Shows CLI help for a specific command.
- cf <command> --help: Shows CLI help for a specific command.
- cf help -a: Lists all the commands available in the CLI.
Microservices: Capability model
Microservices is an area that is still evolving. There is no standard or reference architecture for microservices. Some of the architectures publicly available nowadays come from vendors and, obviously, they try to promote their own tool stacks.
But, even without a specific standard or reference, we can sketch out some guidelines to design and develop microservices.

As we can see, the capability model is mainly split into four different areas:
- Core capabilities (per microservice).
- Supporting capabilities.
- Process and governance capabilities.
- Infrastructure capabilities.
Core capabilities
Core capabilities are those components generally packaged inside a single microservice. In the case of microservices following the fat-jar approach, everything will be inside the file we are generating.
Service listeners and libraries
This box refers to the listeners and libraries the microservice has in place to accept service requests. They can be HTTP listeners, message listeners or others. There is one exception though: if the microservice is in charge only of scheduled tasks, maybe it does not need listeners.
Storage capability
Microservices can have some kind of storage to do their task properly: physical storage like MySQL, MongoDB or Elasticsearch, or in-memory storage, caches or in-memory data grids like Ehcache, Hazelcast or others. There are different kinds of storage but, no matter what type is used, it will be owned by the microservice.
Service implementation
This is where the business logic is implemented. It should follow traditional design approaches like modularisation and multiple layers. Different microservices can be implemented in different languages and, as a recommendation, they should be as stateless as possible.
Service endpoint
This box just refers to the external APIs offered by the microservice, both asynchronous and synchronous, making it possible to use technologies from REST/JSON to messaging.
Infrastructure capabilities
To deploy our application, and for the application to work properly, we need some infrastructure and infrastructure management capabilities.
Cloud
For obvious reasons, microservice architectures fit better in cloud-based environments than in traditional data center environments. Things like scaling, cost-effective management and the cost of the physical infrastructure and operations make a cloud solution more cost-effective on multiple occasions.
We can find different providers like AWS, Azure or IBM Bluemix.
Container runtime
There are multiple options here and, obviously, container solutions are not the only ones. There are options like virtual machines but, from a resources point of view, the latter consume more of them. In addition, it is usually much faster to start a new container instance than to start a new virtual machine.
Here, we can find technologies like Docker, Rocket and LXD.
Container orchestration
One of the challenges in the microservices world is that the number of instances, containers or virtual machines grows, adding complexity and making manual provisioning and deployments hard, if not impossible. Here is where container orchestration tools like Kubernetes, Mesos or Marathon come in quite handy, helping us to automatically deploy applications, adjust traffic flows and replicate instances, among other things.
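As a small hedged illustration with Kubernetes (the deployment name is hypothetical), replicating instances stops being a manual task and becomes a one-line declaration:
# ask the orchestrator to keep five identical instances of the service running
kubectl scale deployment myapp --replicas=5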
Supporting capabilities
They are not exclusive to the microservices world but they are essential for supporting large systems.
Service gateway
The service gateway helps us with routing, policy enforcement, acting as a proxy for our services or composing multiple service endpoints. There are some options; one of them is the Spring Cloud Zuul gateway.
Software defined load balancer
Our load balancers should be smart enough to manage situations where new instances are added or removed; in general, when there are changes in the topology.
There are a few solutions, one of them is the combination of Ribbon, Eureka and Zuul in Spring Cloud Netflix. A second one can be Marathon Load Balancer.
Central log management
When the number of microservices in our system grows, the different operations that before took place on one server now take place on multiple servers. All these servers produce logs, and having them on different machines can make it quite difficult to debug errors. For this reason, we should have a centrally-managed log repository. In addition, all the generated logs should carry a correlation ID to be able to track an execution easily.
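A hedged sketch of why the correlation ID matters (the field name and the file are illustrative): once all the logs land in a single aggregated stream, one request can be followed across every service with a single query:
# trace one execution end to end across all services (hypothetical ID and log file)
grep 'correlationId=6d2a9f0c' aggregated.log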
Service discovery
With the number of services increasing, static service resolution is close to impossible. To support all the new additions, we need a service discovery mechanism that can deal with this situation at runtime. One option is Spring Cloud Eureka. A different one, more focused on container discovery, is Mesos.
Security service
Monolithic applications were able to manage security themselves but, in a microservices ecosystem, we need authentication and token services to allow all the communication flows in our ecosystem.
Spring offers a couple of solutions, like Spring OAuth or Spring Security, but any single sign-on solution should be good.
Service configuration
As we said in the previous article, configurations should be externalised. An interesting choice is to set up a configuration server for our environments. Spring, again, provides Spring Cloud Config but there are other alternatives.
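As a small sketch of the idea: Spring Cloud Config Server exposes configuration over plain HTTP as /{application}/{profile}, so, assuming a server on its default port and hypothetical application and profile names, a service (or a curious developer) can fetch its configuration with:
# fetch the externalized configuration for the "production" profile of "myapp" (names are assumptions)
curl http://localhost:8888/myapp/production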
Ops monitoring
We need to remember that now, with all these new instances scaling up and down, environment changes, service dependencies and new deployments going on, one of the most important things is to monitor our system. Things like Spring Cloud Netflix Turbine or the Hystrix dashboard provide service-level information. There are other tools that provide end-to-end monitoring, like AppDynamics or New Relic.
Dependency management
It is recommended to use some dependency management visualisation tools to be aware of the system’s complexity. They will help us to check dependencies among services and to take appropriate design decisions.
Data lake
As we have said before, each microservice should have its own data storage and this should not be shared between different microservices. From a design point of view, this is a great solution but, sometimes, organizations need to create reports or they have some business processes that use data from different services. To avoid unnecessary dependencies, we can set up a data lake. Data lakes are like data warehouses where raw data is stored without any assumption about how the information is going to be used. In this way, any service that needs information about another service just needs to go to the data lake to find the data.
One of the things we need to consider in this approach is that we need to propagate changes to the data lake to keep the information in sync; some tools that can help us with this are Spring Cloud Data Flow or Kafka.
Reliable messaging
We want to maximize the decoupling among microservices. The way to do this is to develop them to be as reactive as possible. For this, reliable messaging systems are needed. Tools like RabbitMQ, ActiveMQ or Kafka are good for this purpose.
Process and governance capabilities
Basically, how we put everything together and survive. We need some processes, tools and guidelines around microservices implementations.
DevOps
One of the keys to using a microservice-oriented architecture is being agile: quick deploys, builds, continuous integration, testing… Here is where a DevOps culture comes in handy, as opposed to a waterfall culture.
Automation tools
Continuous integration, continuous delivery, continuous deployment, test automation: all of them are needed, or at least recommended, in a microservices environment.
And again, testing, testing, testing. I cannot stress how important this is. Now that we have our system split into microservices, we need to use mocking techniques to test and, to be completely confident, we need functional and integration tests.
Container registry
We are going to create containers and, in the same way we need a repository to store the artifacts we build, we need a container registry to store our containers. There are some options like Docker Hub, Google Container Registry or Amazon EC2 Container Registry.
Microservice documentation
Microservices systems are based on communication: communication among microservices, calls to the APIs offered by these microservices. We need to ensure that people who want to use our available APIs can understand how to do it. For this reason, it is important to have a good API repository:
- Exposed via a web browser.
- Providing easy ways to navigate the APIs.
- Well organized.
- Offering the possibility to invoke and test the endpoints with examples.
For all of this we can use tools like Swagger or RAML.
Reference architecture and libraries
In an ecosystem like this, the need to set standards, reference models, best practices and guidelines on how to implement better microservices is even more important than before. All of this should live as architecture blueprints, libraries, tools and techniques promoted and enforced by the organizations and the developer teams.
I hope that after this article we start having a rough idea about how to tackle the implementation of our systems following a microservices approach and, in addition, a few tools to start playing with.
Note: Article based on my notes about the book “Spring 5.0 Microservices – Second Edition” by Rajesh R. V.
Twelve-Factor Apps
Cloud computing is one of the most rapidly evolving technologies. It promises many benefits, such as cost advantages, speed, agility, flexibility and elasticity.
But how do we ensure an application can run seamlessly across multiple providers and take advantage of the different cloud services? The application needs to work effectively in a cloud environment, and understand and utilize cloud behaviors such as elasticity, utilization-based charging, fail awareness, and so on.
It is important to follow certain factors while developing a cloud-native application. For this purpose, we have The Twelve-Factor App. The Twelve-Factor App is a methodology that describes the characteristics expected in a modern cloud-ready application.
The Twelve Factors
I. Codebase
This factor advises that each application should have a single code base with multiple deployments of the same code base, for example, development, testing and production. The code is typically managed in a VCS (Version Control System) like Git, Subversion or other similar systems.
II. Dependencies
All applications should bundle their dependencies along with the application bundle. These dependencies can be managed with build tools like Maven or Gradle, which use files to specify and manage them, linking them through build artifact repositories.
III. Config
All configuration should be externalized from the code. The code should not change among environments; just the properties in the system should change.
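A minimal hedged example of the idea: the same artifact runs unchanged in every environment and only the environment variables differ (the variable names and URLs are illustrative):
# test environment
DB_URL=jdbc:postgresql://test-db/app java -jar app.jar
# production environment: same binary, different configuration
DB_URL=jdbc:postgresql://prod-db/app java -jar app.jar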
IV. Backing services
All backing services should be accessible through an addressable URL, reachable without complex communication requirements.
V. Build, release, run
This factor advocates strong isolation among the build stage, the release stage and the run stage. The build stage compiles the code and produces binaries, including any assets required. The release stage combines the binaries with environment-specific configuration parameters. The run stage runs the application on a specific execution environment. This pipeline is unidirectional.
VI. Processes
The factor suggests that processes should be stateless and share nothing. If the application is stateless, then it is fault tolerant and can be scaled out easily.
VII. Port binding
Applications developed following this methodology should be self-contained or standalone, not relying on the runtime injection of a web server into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening to requests coming in on that port.
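As an illustrative sketch (the port, variable name and endpoint are assumptions, not part of the methodology): the application binds the port itself and serves HTTP directly:
# the app itself listens on the port; no external web server is injected (PORT and /health are hypothetical)
PORT=8080 java -jar app.jar
curl http://localhost:8080/health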
VIII. Concurrency
This factor states that processes should be designed to scale out by replicating the processes; that means just spinning up another identical service instance.
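A hedged one-line example with Docker Compose (assuming a compose file that defines a hypothetical service named "web"):
# run three identical instances of the "web" service
docker compose up --scale web=3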
IX. Disposability
This factor advocates building applications with minimal startup and shutdown times. This will help us in automated deployment environments where we need to bring instances up and down as quickly as possible.
X. Dev/Prod parity
This factor establishes the importance of keeping the development and production environments as close as possible. This does not mean the local environments where developers write their code; there, maybe to save costs, they tend to run everything on one machine. But, at least, we should have non-production environments close enough to our production environment.
XI. Logs
This factor advocates the use of a centralized logging framework to avoid local I/O in the systems. This is to prevent bottlenecks caused by I/O that is not fast enough.
XII. Admin processes
This factor advises running admin tasks against the same release and an identical environment as the one the long-running processes run in. Admin consoles should be packaged along with the application code.
I recommend reading carefully The Twelve-Factor App page and its different sections.
Cloud basics
The National Institute of Standards and Technology (NIST) gives the following definition of cloud computing:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
There are several other broadly accepted definitions of cloud computing, but all of them share the same features:
- Users should be able to provision and release resources on-demand.
- The resources can be scaled up and down automatically, depending on the load.
- The provisioned resources should be accessible over a network.
- Cloud service providers should enable a pay-as-you-go model, where customers are charged based on the type and quantity of resources they consume.
We can find three types of clouds in cloud computing:
- Public cloud: In a public cloud, third-party service providers make resources and services available to their customers via the internet. The customers’ applications and data are deployed on infrastructure owned and secured by the service provider.
- Private cloud: A private cloud provides many of the same benefits as a public cloud but the services and data are managed by the organization or a third party, solely for the customer’s organization. Usually, a private cloud places increased administrative overhead on the customer but gives greater control over the infrastructure and reduces security-related concerns. The infrastructure may be located on or off the organization’s premises.
- Hybrid cloud: A hybrid cloud is a combination of both a private and a public cloud. The decision on what runs on the private versus the public cloud is usually based on several factors, including the business criticality of the application, the sensitivity of the data, the industry certifications and standards required, regulations, and many more. But, in some cases, spikes in demand for resources are also handled in the public cloud.
We can also find three service models:
- Infrastructure as a Service (IaaS): IaaS provides users the capability to provision processing, storage, and network resources on demand. The customers deploy and run their own applications on these resources. Using this service model is closest to the traditional on-premise model and the virtual server provisioning model (typically offered by data center outsourcers). The onus of administering these resources rests largely with the customer.
- Platform as a Service (PaaS): The service provider makes certain core components, such as databases, queues, workflow engines, e-mails, and so on, available as services to the customer. The customer then leverages these components to build their own applications. The service provider ensures high service levels and is responsible for scalability, high availability, and so on, for these components. This allows customers to focus a lot more on their application’s functionality. However, this model also leads to application-level dependency on the providers’ services.
- Software as a Service (SaaS): Typically, third-party providers provide end-user applications to their customers using a subscription model. The customers might have some administrative capability at the application level, for example, to create and manage their users. Such applications also provide some degree of customizability; for example, customers can use their own corporate logos, colors, and many more. Applications that have a very wide user base most often operate in a self-service mode. For more specialized applications, in contrast, the provider provisions the application for the customer and hands over certain application administrative tasks to the customer’s application administrator (in most cases, this is limited to creating new users, managing passwords, and so on through well-defined application interfaces).
From an infrastructure perspective, in all three service models the customer does not manage or control the underlying cloud infrastructure.