With serverless computing coming into focus more and more, services like AWS Lambda offer a simple, approachable next step in cloud computing. But “serverless” does not mean “no servers”. As a developer, you just don’t have to think about managing servers; they are still there, but their role has changed.
Here’s what to consider in terms of cost optimization
There are 3 factors that determine the cost of your Lambda function:
Factor | Description
---|---
Number of executions | You pay per execution ($0.20 per 1M requests).
Duration of each execution | The longer your function takes to execute, the more you pay, which is a good incentive to write efficient application code that runs fast. You are charged in 100ms increments, and there is a maximum function timeout of 15 minutes.
Memory allocated to the function | When you create a Lambda function, you allocate an amount of memory to it, ranging from 128MB to 3008MB (in 64MB increments). If you allocate 512MB of memory to your function but each execution only uses 10MB, you still pay for the whole 512MB. If an execution needs more memory than the function has been allocated, that execution will fail. This means you have to configure an amount of memory that guarantees successful executions while avoiding excessive over-allocation.
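To make the pricing model concrete, here is a rough back-of-the-envelope calculation in Python. The per-request price comes from the table above; the per GB-second duration price of $0.0000166667 is the commonly published figure at the time of writing, so treat it as an assumption and check the current AWS pricing page for your region.

```python
# Rough Lambda cost estimate; prices are assumptions based on published
# AWS figures at the time of writing -- verify against the current
# pricing page for your region before relying on them.
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667      # duration price per GB-second

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # Duration is billed in 100ms increments (as described above).
    billed_ms = -(-avg_duration_ms // 100) * 100  # round up to the next 100ms
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 5M invocations per month, 120ms average duration, 512MB memory
print(f"${monthly_cost(5_000_000, 120, 512):.2f}")   # roughly $9.33
```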
Various factors affect how frequently a Lambda function is invoked, and they come down to its triggers. Closely monitor your triggers and see what you can do to reduce the number of invocations over time.
For example, suppose your Lambda function is triggered by a Kinesis stream. If the batch size is small, the invocation frequency will be high. In such cases, you can opt for a larger batch size so that your Lambda function is invoked less frequently, as sketched below.
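As a minimal sketch of how that change could be made with boto3: the function name below is a hypothetical placeholder, and the event source mapping is looked up first so we have its UUID.

```python
import boto3

lambda_client = boto3.client("lambda")

# Find the event source mappings that feed this function.
# "my-stream-processor" is a hypothetical function name.
mappings = lambda_client.list_event_source_mappings(
    FunctionName="my-stream-processor"
)["EventSourceMappings"]

for mapping in mappings:
    # Raise the batch size so each invocation processes more records,
    # reducing the total number of invocations.
    lambda_client.update_event_source_mapping(
        UUID=mapping["UUID"],
        BatchSize=500,
    )
```

Note that a bigger batch also means each invocation runs longer, so the savings come from fewer invocations rather than less total work.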
A function that executes in half the time costs you half the money: execution duration is directly proportional to what you’re charged. It’s critical to keep an eye on the duration metric in CloudWatch, and to revisit and iterate on your function if it’s taking suspiciously long to execute.
For this purpose, AWS X-Ray can help you monitor your function from end to end.
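As a minimal sketch of switching on X-Ray tracing for an existing function with boto3 (the function name is a placeholder), so each invocation produces a trace you can inspect for slow segments:

```python
import boto3

lambda_client = boto3.client("lambda")

# Enable active X-Ray tracing; "my-stream-processor" is hypothetical.
lambda_client.update_function_configuration(
    FunctionName="my-stream-processor",
    TracingConfig={"Mode": "Active"},
)
```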
One of the most overlooked aspects is that an AWS Lambda cold start happens for every new concurrent execution environment that gets spun up. Therefore, as a first step in optimizing your costs, make sure your functions are invoked at a frequency that keeps cold starts to a minimum. An approach that works most of the time is to group work into bigger batches.
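One hedged way to gauge your exposure is to watch the ConcurrentExecutions metric in CloudWatch: higher peaks generally mean more execution environments and therefore more cold starts. A rough boto3 sketch, with a hypothetical function name:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Pull the hourly peak of concurrent executions for one function
# over the last 24 hours.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Dimensions=[{"Name": "FunctionName", "Value": "my-stream-processor"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```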
While talking about Lambda charges, it often gets out of sight that you’ll also be charged for data transfer at standard EC2 data transfer rates. Hence, it pays to keep an eye on the amount of data you’re transferring out to the internet and to other AWS regions.
As for internal data transfer, there isn’t much you can do: you can’t directly control how much data your Lambda transfers, and there is no CloudWatch metric for monitoring it.
What you can control, however, is how the function itself is configured:
When provisioning memory for your Lambda, AWS allocates CPU proportionally (e.g. a 256MB function receives twice the processing power of a 128MB function). Depending on your workload, this can give you a decent performance boost at a slightly higher price per invocation: the function executes faster, cutting the billed duration. It’s therefore worth checking whether this strategy is more cost-efficient in the long run, as in the worked example below.
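A small worked example, reusing the hypothetical monthly_cost helper from the pricing sketch above, with purely illustrative durations (real numbers must be measured):

```python
# Illustrative numbers only -- real durations must be measured.
# 1M invocations per month in each scenario.
baseline = monthly_cost(1_000_000, 400, 128)   # 128MB, ~400ms average
doubled  = monthly_cost(1_000_000, 200, 256)   # 2x memory, 2x faster
big_win  = monthly_cost(1_000_000,  90, 256)   # 2x memory, >4x faster

print(f"128MB/400ms: ${baseline:.2f}")
print(f"256MB/200ms: ${doubled:.2f}")   # compute charge is a wash
print(f"256MB/ 90ms: ${big_win:.2f}")   # cheaper: speedup beat the memory ratio
```

The takeaway: when the speedup is exactly proportional to the memory increase, the compute charge stays the same and you simply get a faster function for the same money; the larger memory size only becomes cheaper when the speedup outpaces the memory increase, which is exactly the break-even point discussed below.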
When setting memory for a Lambda, keep in mind that an execution halts immediately and fails if it runs out of memory. Changes to the function over time may alter its memory usage, so it’s best to monitor it, for instance with the sketch below.
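One way to keep an eye on actual memory usage is to query the REPORT lines Lambda writes to CloudWatch Logs via Logs Insights. The @type and @maxMemoryUsed fields are the ones Lambda exposes there (values in bytes); the log group name below is a hypothetical placeholder.

```python
import time
import boto3

logs = boto3.client("logs")

# Log group name is a placeholder for your function's log group.
query_id = logs.start_query(
    logGroupName="/aws/lambda/my-stream-processor",
    startTime=int(time.time()) - 24 * 3600,   # last 24 hours
    endTime=int(time.time()),
    queryString=(
        'filter @type = "REPORT" '
        "| stats max(@maxMemoryUsed / 1000 / 1000) as maxUsedMB, "
        "avg(@maxMemoryUsed / 1000 / 1000) as avgUsedMB"
    ),
)["queryId"]

# Poll until the query finishes, then print the aggregated results.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})
```

Comparing the reported maximum against the configured memory size tells you how much headroom you actually have.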
Tools like AWS Lambda Power Tuning can help you find this break-even point for cost and/or performance by running your function across a range of memory sizes.
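For illustration, here is a very rough do-it-yourself version of the idea: sweep a few memory sizes, invoke the function once per size, and read the billed duration from the returned log tail. The function name and test payload are hypothetical, and a single invocation per size is not statistically meaningful (Power Tuning runs many); this only sketches the mechanics.

```python
import base64
import boto3

lambda_client = boto3.client("lambda")
FUNCTION = "my-stream-processor"    # hypothetical function name
TEST_EVENT = b'{"sample": true}'    # representative test payload (placeholder)

for memory_mb in (128, 256, 512, 1024):
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION, MemorySize=memory_mb
    )
    # Wait until the configuration update has been applied.
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION)

    response = lambda_client.invoke(
        FunctionName=FUNCTION,
        Payload=TEST_EVENT,
        LogType="Tail",             # return the last 4KB of the execution log
    )
    # The REPORT line in the tail log contains duration, billed duration,
    # memory size and max memory used.
    tail = base64.b64decode(response["LogResult"]).decode()
    report = next(line for line in tail.splitlines() if line.startswith("REPORT"))
    print(memory_mb, "MB ->", report)
```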