Mahmud Alam, PhD
September 8, 2024 • 5 min read
This post dives into serverless technology and how it compares to traditional server-based applications. We will look at the benefits and challenges of serverless, and compare the cost of running a serverless application with that of a traditional server-based one. In future blog posts, we will explore specific tooling, processes, and accelerators to speed up your time to customer value and reduce development effort.
Serverless technology is a relatively new approach to application development that has gained a lot of popularity in recent years. It refers to designing and deploying applications and services without provisioning or managing the servers that run them.
In a traditional server-based architecture, developers create applications that run on a server. This server is responsible for handling all the application logic and processing, and it is the developer's responsibility to manage the server infrastructure. This can be a significant burden, requiring expertise in system administration and infrastructure management, and it can also be expensive to maintain.
In contrast, serverless technology abstracts away the server infrastructure and allows developers to focus on writing code that runs in response to specific events or triggers. In a serverless architecture, a cloud provider is responsible for managing the underlying infrastructure, including server management, scaling, and maintenance. The cloud provider will only charge developers for the actual time and resources used to execute the code, rather than requiring them to pay for a fixed amount of server capacity.
One of the key benefits of serverless technology is that it enables developers to focus on writing code rather than managing server infrastructure. This can significantly reduce the time and resources required to develop and deploy applications, allowing developers to be more productive and deliver more value to their customers.
Another benefit of serverless technology is that it enables developers to create highly scalable and resilient applications. Serverless architectures can automatically scale up or down based on demand, which makes it easy to handle sudden spikes in traffic or usage. Additionally, since the cloud provider is responsible for managing the infrastructure, it can often provide high levels of reliability and availability.
Traditional APIs are typically built using a server-based architecture. This means that developers must manage the infrastructure required to run the API, including servers, networking, and storage. The developer must also ensure that the API can handle the expected load and traffic, which can be difficult to predict and plan for. Traditional APIs can be expensive to build and maintain, as the infrastructure required to run them must be purchased or leased, and often requires a team of system administrators to manage.
Serverless APIs, on the other hand, do not require any infrastructure management. They are built using a cloud-based service, such as AWS Lambda or Azure Functions, which takes care of the infrastructure required to run the API. Developers can simply upload their code to the cloud service and specify the events or triggers that will invoke the API. The cloud service automatically scales the infrastructure to meet the demand, so developers do not need to worry about scaling up or down.
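To make this concrete, here is a minimal sketch of such a function in Python, written as an AWS Lambda handler behind an API Gateway proxy integration (the greeting logic is purely illustrative):

```python
import json

def handler(event, context):
    # API Gateway (proxy integration) passes the HTTP request as `event`;
    # the Lambda runtime supplies `context` with metadata such as the request ID.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying this is a matter of uploading the code and wiring up the API Gateway trigger; there is no server to provision or patch.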
In terms of performance, serverless APIs can be just as fast as traditional APIs once the function is warm. Because the cloud service scales the infrastructure automatically, serverless APIs can absorb sudden spikes in traffic without any additional configuration or management.
In this section, we will explore some of the challenges that developers face when building serverless applications.
The largest and most commonly experienced challenge with adopting serverless technology is cold start latency: the time it takes to provision a new instance of the function. A cold start typically occurs when the function is invoked after a period of inactivity (usually 5-7 minutes). Cold start latency is affected by several factors, including the function's language, instance sizing (for CPU-intensive activities), and package size (the size and number of dependencies).
AWS recently released a feature called AWS SnapStart [1] (currently available for Java functions only). It initialises the execution environment once, takes a snapshot, and restores that snapshot on subsequent executions rather than re-initialising from scratch. This performs the heavy lifting once, but does come with some considerations, particularly around initialisation activities like database connection pooling. SnapStart has been shown to reduce cold start latency by up to 90%.
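While SnapStart itself is Java-only for now, the general principle of paying initialisation costs once applies to every runtime, because execution environments are reused across warm invocations. A minimal Python sketch of that pattern (the bucket variable is a hypothetical example):

```python
import os
import boto3

# Module-level code runs once per execution environment (the "init" phase).
# SnapStart snapshots this phase for Java functions; in other runtimes the
# same initialised state is simply reused across warm invocations.
s3 = boto3.client("s3")                      # client creation is relatively slow
CONFIG_BUCKET = os.environ["CONFIG_BUCKET"]  # hypothetical environment variable

def handler(event, context):
    # Only per-request work happens here, keeping warm invocations fast.
    obj = s3.get_object(Bucket=CONFIG_BUCKET, Key="config.json")
    return {"statusCode": 200, "body": obj["Body"].read().decode("utf-8")}
```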
There are other ways to reduce or avoid cold starts:
- Provisioned concurrency, which keeps a configured number of execution environments initialised and ready to respond (see the sketch below).
- Scheduled "keep-warm" invocations that ping the function periodically so it is not reclaimed after inactivity.
- Reducing package size and the number of dependencies, so there is less to load during initialisation.
- Choosing a runtime with fast start-up characteristics.
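As an example of the first option, provisioned concurrency can be configured on a published version or alias of a Lambda function. A minimal sketch using boto3 (the function name and alias are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments initialised for the "live" alias, so
# requests up to that concurrency never experience a cold start.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-api-function",  # placeholder function name
    Qualifier="live",                # alias or published version
    ProvisionedConcurrentExecutions=5,
)
```

Note that provisioned concurrency is billed for as long as it is configured, so it trades cost for latency.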
Debugging and testing serverless functions can be challenging because the functions are typically invoked by events or triggers rather than being called directly. This makes local testing harder, as the event or trigger that would normally invoke the function must be simulated rather than reproduced exactly.
This can be mitigated with a robust logging strategy, although debugging and testing will typically take longer than for a server-based application. Logging can record the input and output of the function, which helps when debugging issues, and it can capture any errors that occur, which helps identify and fix bugs in the code. Between iterations, a developer must commit and deploy the code to their cloud platform, so without robust CI/CD processes this can become time-consuming and onerous.
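One practical mitigation is to invoke the handler directly in a unit test with a hand-built event, which exercises the function logic locally even though it cannot fully reproduce the cloud trigger. A sketch using pytest, reusing the earlier handler (the module name is hypothetical):

```python
import json

from my_service import handler  # hypothetical module containing the handler

def test_handler_returns_greeting():
    # Simulate the shape of an API Gateway proxy event; only the fields
    # the handler actually reads need to be present.
    event = {"queryStringParameters": {"name": "Ada"}}
    response = handler(event, context=None)

    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == {"message": "Hello, Ada!"}
```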
While a traditional microservice can be either stateful (persisting data between calls) or stateless, you should assume that your serverless function's environment exists for only one invocation. This means that any required data must be retrieved from, and stored in, an external service (such as a database). It is generally best practice in microservice development to persist data externally for fault tolerance. For example, consider keeping the state of a shopping cart within a stateful service: if the service degrades and fails during a user's session, the instance created to replace it would have no knowledge of the prior state of the shopping cart without external persistence.
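Continuing the shopping cart example, a serverless cart service would persist each change to an external store such as DynamoDB. A minimal sketch, assuming a table named `carts` keyed on `cart_id`:

```python
import json

import boto3

# Created once per execution environment; the data itself lives in DynamoDB.
table = boto3.resource("dynamodb").Table("carts")  # hypothetical table name

def handler(event, context):
    body = json.loads(event["body"])
    # Persist the cart externally so a replacement instance can recover it.
    table.put_item(Item={"cart_id": body["cart_id"], "items": body["items"]})
    return {"statusCode": 200, "body": json.dumps({"saved": True})}
```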
This lack of persistence also affects other non-data components. One important example is connection pooling for databases. In a traditional microservice, a number of open database connections are maintained in memory and reused across requests. In a serverless environment, the function executes in a new instance of the execution environment, so a new connection must be established at each cold start. When designing a microservice that connects to an AWS RDS database, Amazon RDS Proxy [3] provides connection pooling for serverless functions with no code changes.
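With RDS Proxy, the usual pattern is to open the connection once at initialisation and point it at the proxy endpoint; warm invocations then reuse it, while the proxy pools the underlying database connections across all function instances. A sketch using pymysql (the endpoint, credentials, and table are placeholders):

```python
import os

import pymysql

# Opened once per execution environment. The host is the RDS Proxy endpoint,
# which pools connections to the database on our behalf.
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],  # hypothetical RDS Proxy endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="orders",                  # hypothetical database name
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM orders")
        (count,) = cursor.fetchone()
    return {"statusCode": 200, "body": str(count)}
```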
In both architectures, under high load it is possible for the number of database connections to exceed the maximum allowed, or for the database to become overloaded. If high volumes are expected, it is recommended to consider a decoupled architecture [4].
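A common decoupling pattern is to buffer writes through a queue: the API-facing function enqueues a message and returns immediately, and a separate consumer drains the queue at a rate the database can absorb. A sketch of the enqueuing side (the queue URL variable is a placeholder):

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["ORDERS_QUEUE_URL"]  # hypothetical queue URL

def handler(event, context):
    # Enqueue the work instead of opening a database connection per request;
    # a separate consumer writes to the database at a controlled rate.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": json.dumps({"queued": True})}
```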
Serverless functions are typically hosted on a cloud platform, such as AWS Lambda or Azure Functions, which ties the application to that platform and makes it hard to migrate elsewhere. This can be mitigated by using a tool such as the Serverless Framework [5], which lets developers write their code in a language of their choice and deploy it to any cloud platform the framework supports, making it easier to move the application to another provider, or even to a traditional server-based architecture.
In this section, we compare the cost of running a serverless application with a traditional server-based application, using two scenarios:
- Scenario 1: a single microservice invoked by an API Gateway endpoint, where the microservice is a simple function that returns a JSON response.
- Scenario 2: a more likely analogue for a real application, containing five microservices invoked by an API Gateway endpoint.
Overall, serverless technology offers a promising approach to application development, but developers should be aware of the potential challenges and plan accordingly. Additionally, in most start-up or scale-up organisations, there are significant cost and time benefits to adopting serverless versus traditional architectures.
Are you looking to advance your serverless journey and need a boost? Reach out to Nimbly.