Running “Serverless”

What is serverless?

In a post last month we touched on the fact that Zercurity uses “serverless” containers to build our installers for distribution.

Serverless still uses servers behind the scenes; they’re just not ones you manage. Instead, containers are used to house and execute your code, which needs to be broken down into single callable functions, most commonly API resources.

All the other plumbing required to translate a user request into a RESTful API call and execute your function is taken care of by your chosen hosting provider, whether that’s AWS Lambda, Google Cloud Functions or Azure Functions.

For this introduction, however, we’ll be using AWS Lambda to create a small REST service. We’ll also be using the Serverless framework to deploy our code. Serverless provides an abstraction layer atop these different cloud providers to make your serverless application both language and cloud agnostic. It also makes deployment super easy, which is a bonus.

Why use serverless?

There are three main reasons why we use serverless:

Sound good? Let's get started.

Getting started

Serverless is built using Node.js and can be easily installed with NPM. If you’re keen to just get started you can check out our git repository here and skip ahead.

The command below will install the serverless (sls) NPM module globally so it’ll be available on your command line.

sudo npm install -g serverless

Let’s create our project directory.

mkdir serverless-example && cd serverless-example

Serverless provides a whole array of template applications that you can create.

We’re going to use the base aws-python template, which provides a simple Python 2.7 skeleton.

sls create --template aws-python

This will create two new files for you: serverless.yml and handler.py.

Before we go any further, we suggest re-organising your project like so. As and when your codebase grows, this structure will hopefully provide some order. handler.py has been renamed to app/handlers/rest/hello.py. Otherwise, you can just skip down to configuring and deploying AWS.

./app/handlers/rest/hello.py (formerly handler.py)

There are two other files you’ll need to create yourself.
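One of those is most likely a requirements.txt for the serverless-python-requirements plugin to bundle. A minimal example (the pinned package here is purely illustrative, not from the original project):

```
# requirements.txt
requests==2.18.4
```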

Lastly, before we start modifying our serverless templates, there are two plugins that I recommend you install.

npm install serverless-python-requirements serverless-prune-plugin

Awesome. We’re ready to get our first app live.


To start, we’re going to take a look at the serverless.yml configuration file. This houses all of your serverless deployment configuration.

This file can get quite large. As your application grows, it’s worth splitting your application into smaller (logical) configs; otherwise, it gets unwieldy.

AWS also limits the number of resources (and therefore functions) that a single CloudFormation stack can manage.
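One common way to keep serverless.yml manageable (a pattern we use, though the file paths here are illustrative) is to split sections into their own files and pull them back in with Serverless’s file variable syntax:

```yaml
# serverless.yml (fragment)
functions: ${file(./config/functions.yml)}
```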

Replace your serverless.yml’s content with the following configuration.

# Welcome to Serverless!
# Happy Coding!

service: ${self:custom.name}

# You can pin your service to only deploy with a specific Serverless
# version. Check out our docs for more details
frameworkVersion: ">1.26.0"

custom:
  # Your CloudFormation deployment name
  name: serverless-example
  pythonRequirements:
    # This will read your requirements.txt file and download the necessary
    # pip modules and compile them so that they're compatible with the
    # AWS Lambda linux container image.
    dockerizePip: true
    zip: false
    removeVendorHelper: false

plugins:
  - serverless-python-requirements
  - serverless-prune-plugin # 'sls prune -n 2' removes old versions of your code

provider:
  name: aws
  runtime: python2.7
  stage: dev # 'sls deploy -s prod -v' to deploy as a production instance
  region: eu-central-1 # The AWS region to deploy your code
  profile: ${self:custom.name}
  # Environment variables can be passed to your Python application.
  # You can use os.environ['db_hostname'] to retrieve them.
  environment:
    db_hostname: example
    db_database: example
    db_username: example
    db_password: example
  iamRoleStatements:
    # S3 permissions for lambda logging
    - Effect: Allow
      Action:
        - s3:ListBucket
        - s3:PutObject
      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ] }
    # For serverless
    - Effect: Allow
      Action:
        - ec2:CreateNetworkInterface
        - ec2:DescribeNetworkInterfaces
        - ec2:DetachNetworkInterface
        - ec2:DeleteNetworkInterface
      Resource: "*"

# You can add packaging information here
package:
  individually: false
  include:
    - app/handlers/** # Code
    - common/** # Common code
  exclude:
    - .git
    - node_modules/** # NPM modules
    - .gitignore
    - serverless.yml # Serverless configuration
    - requirements.txt # Python dependencies

functions: # This is your Lambda function definition
  RestHello:
    handler: app/handlers/rest/hello.get
    description: GET handler for hello
    timeout: 5 # Maximum number of seconds for your code to run
    memorySize: 128 # Can allocate up to 3GB
    events:
      - http:
          method: GET # HTTP method, use ANY for everything
          description: Hello response
          #path: you/can/use/{path_variables}
          path: /
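As a quick illustration of how the environment variables defined in serverless.yml reach your code (this is our own minimal sketch; the "localhost" fallback for local runs is our addition):

```python
import os


def get(event, context):
    # db_hostname is set by the environment block in serverless.yml when
    # deployed to Lambda; fall back to "localhost" when running locally.
    hostname = os.environ.get("db_hostname", "localhost")
    return {"statusCode": 200, "body": hostname}
```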

The same goes for hello.py. Just replace what’s in there for now with the following.

import json


def get(event, context):
    """
    Handle the GET request and return the full lambda request event
    """
    return {
        "statusCode": 200,
        "body": json.dumps(event)
    }
Last few steps. Almost there.

Configure AWS

Deploying your code to AWS is all taken care of for you by the Serverless framework.

We just need to check or configure your machine’s AWS API keys. Your AWS secrets should be (by default) in the following file.

vim ~/.aws/credentials

If this file doesn’t exist, create it with the following (you can use ‘aws configure’ if the AWS CLI is installed):

[serverless-example]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
Note that the profile name [serverless-example] matches your serverless project name. This is important; otherwise your application won’t deploy and you’ll end up with the following error:

Profile serverless-example does not exist

For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

Deploying your serverless project. Go time!

sls deploy -v

Serverless will now do all the heavy lifting: downloading your Python modules, archiving your project, and then lastly uploading and configuring the aforementioned AWS services via Amazon’s CloudFormation.

You’ll get something that looks like this (we’ve suppressed some of the output):

Serverless: Installing requirements of requirements.txt in .serverless...
Serverless: Docker Image: lambci/lambda:build-python2.7
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Injecting required Python packages to package...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
CloudFormation - CREATE_COMPLETE - AWS::CloudFormation::Stack - serverless-example-dev
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (406 B)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_COMPLETE - AWS::CloudFormation::Stack - serverless-example-dev
Serverless: Stack update finished...
Service Information
service: serverless-example
stage: dev
region: eu-central-1
stack: serverless-example-dev
api keys:
RestEnroll: serverless-example-dev-RestEnroll
Stack Outputs
RestEnrollLambdaFunctionQualifiedArn: arn:aws:lambda:eu-central-1:123456789012:function:serverless-example-dev-RestHello:1
ServerlessDeploymentBucketName: serverless-example-dev-serverlessdeploymentbucket-1j16l33moxhwl

Your app has now been successfully deployed. The console output will highlight the newly created REST endpoints.

Ours is available here as an example:

You’re also able to define a custom domain for your service on top of the API Gateway deployment.

That’s it. Hope you found this guide helpful.

Real-time security and compliance delivered.