TL;DR: This is a small blog post showing you how to run a crystal binary in AWS Lambda.
Update: I made some code available on github at spinscale/crystal-aws-lambda, which you can play around with to run your own crystal code within a lambda.
Introduction
AWS recently announced support for custom runtimes in AWS Lambda. The implementation is pretty interesting, as it requires the custom code to execute a few HTTP requests against a local endpoint - which also makes it especially nice for integration testing, as you only need a webserver responding to a set of URLs.
You can read more about implementing your own runtime in the aws lambda documentation. Also the docs about the runtime interface are a good read.
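To get a feeling for that interface, this is roughly what a runtime does on every invocation, expressed as curl calls (the AWS_LAMBDA_RUNTIME_API variable is provided by the execution environment, and REQUEST_ID would be taken from the Lambda-Runtime-Aws-Request-Id header of the first response):

# fetch the next invocation (this call long-polls until an event arrives)
curl "http://$AWS_LAMBDA_RUNTIME_API/2018-06-01/runtime/invocation/next"
# post the result for that invocation
curl -X POST -d '{ "statusCode": 200, "body": "Hello" }' \
  "http://$AWS_LAMBDA_RUNTIME_API/2018-06-01/runtime/invocation/$REQUEST_ID/response"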
Requirements
I will be using serverless for deployment, so you need a user who is able to deploy serverless applications. You can check out the serverless AWS credentials documentation for more information about that.
Installing crystal should only be one call away, as there are packages for Linux and osx. See the crystal installation docs.
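On osx, for example, assuming you use homebrew:

brew install crystal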
crystal app
You can create a crystal app using crystal init app my-app. Then create a src/bootstrap.cr file inside of that app that looks like this
require "http"
module Crystal::Lambda
VERSION = "0.1.0"
api = ENV["AWS_LAMBDA_RUNTIME_API"].split(":", 2)
host = api[0]
port = api[1].to_i
while true
client = HTTP::Client.new(host: host, port: port)
response = client.get "/2018-06-01/runtime/invocation/next"
awsRequestId = response.headers["Lambda-Runtime-Aws-Request-Id"]
baseUrl = "/2018-06-01/runtime/invocation/#{awsRequestId}"
ENV["_X_AMZN_TRACE_ID"] = response.headers["Lambda-Runtime-Trace-Id"] || ""
begin
body = %q({ "statusCode": 200, "body" : "Hello World from Crystal" })
response = client.post("#{baseUrl}/response", body: body)
STDOUT.print("response invocation response #{response.status_code} #{response.body}\n")
rescue ex
body = %Q({ "statusCode": 500, "body" : "#{ex.message}" })
response = client.post("#{baseUrl}/error", body: body)
STDOUT.print("response error invocation response #{response.status_code} #{response.body}\n")
ensure
client.close
end
end
end
You can now try to build this one using
mkdir bin
crystal build src/bootstrap.cr -o bin/bootstrap
However, under osx this does not produce a linux binary, so we need to take some additional steps later on - for now this merely verifies that the code compiles.
serverless setup
The serverless setup is very short for this small example:
service: crystal-hello-world

provider:
  name: aws
  runtime: provided

package:
  artifact: ./bootstrap.zip

functions:
  hello:
    handler: hello-world
    events:
      - http:
          path: hello
          method: get
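If you want to inspect what gets sent to AWS before actually deploying, the standard serverless packaging step writes the generated CloudFormation templates into a .serverless directory:

sls package
ls .serverless/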
Building under OSX
As we need to build a linux binary for AWS lambda, we have to use docker to create it, since cross compilation across operating systems is not yet that far along in Crystal - but we do not need to wait for it.
I assume you have docker up and running; all you need to do is run the following
docker run --rm -it -v $PWD:/app -w /app durosoft/crystal-alpine:latest \
crystal build src/bootstrap.cr -o bin/bootstrap \
--release --static --no-debug
This professionally googled one-liner looked super handy at first, but it turns out that alpine compiled binaries seem to differ from others: they threw segmentation faults in the AWS lambda execution environment. So instead we should just go with the default crystal docker image
docker run --rm -it -v $PWD:/app -w /app crystallang/crystal:latest \
crystal build src/bootstrap.cr -o bin/bootstrap \
--release --static --no-debug
Even though I got some compilation warnings in the console, I was able to run this binary on AWS Lambda.
Packaging the app
This is one of my favourite parts of using crystal - you end up with a single binary. Unless you have to deploy assets (in which case you should use baked_file_system), all you need to do is put the final binary into a zipfile that will be used to deploy.
zip -j bootstrap.zip bin/bootstrap
The main issue here is to make sure that a binary named bootstrap lives in the root of the zip file, as this is where the lambda execution environment searches for it.
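You can verify the layout of the archive before deploying:

unzip -l bootstrap.zip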
Deploying
As serverless is doing all the heavy lifting, all that is left to do is to have AWS account credentials with enough permissions and to run
sls deploy
The deploy command returns a URL that you can just call via curl and hopefully see the Hello World from Crystal output on the console.
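For example - the URL below is hypothetical, use the endpoint that sls deploy prints:

curl https://abcdef1234.execute-api.us-east-1.amazonaws.com/dev/hello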
Congratulations! We just deployed Crystal code to AWS Lambda!
In order to check the execution of your request, you can run sls logs -f hello -t on the console; you will see log output as well as execution times of the crystal binary, including warm up time - I suppose this is the time that firecracker (which is used for AWS Lambda) took to initialize for the first run (or for more parallel runs).
A potential next step would be to change the serverless YAML file to feature a POST request and to parse some parameters from the incoming request body, which is all JSON.
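A minimal sketch of how that parsing could look, assuming the event follows the API Gateway proxy format (in the real loop the event would be response.body from the /invocation/next call; the name parameter is purely hypothetical):

require "json"

# hypothetical incoming event in API Gateway proxy format; its "body"
# field contains the raw request payload as a string
event = JSON.parse(%q({ "body" : "{\"name\": \"Crystal\"}" }))
payload = event["body"]?.try(&.as_s?) || "{}"
params = JSON.parse(payload)
name = params["name"]?.try(&.as_s?) || "World"
body = %Q({ "statusCode": 200, "body" : "Hello #{name} from Crystal" })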
Current issues and next steps
Looking at the AWS documentation, there is another endpoint that requires implementation for the case that initialization fails, so that no AWS request id could be obtained yet.
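A minimal sketch of that, based on the initialization error endpoint from the runtime API docs (the failing setup step here is purely hypothetical):

require "http"

host, port = ENV["AWS_LAMBDA_RUNTIME_API"].split(":", 2)
client = HTTP::Client.new(host: host, port: port.to_i)

begin
  # hypothetical initialization work that might fail before the first request
  config = File.read("/var/task/config.json")
rescue ex
  # no request id exists yet, so report via the init error endpoint and bail out
  body = %Q({ "errorMessage" : "#{ex.message}", "errorType" : "InitError" })
  client.post("/2018-06-01/runtime/init/error", body: body)
  exit 1
end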
The current implementation closely resembles the sample provided in the lambda docs, which uses a while loop to wait for the next HTTP request. I do not know what happens in case of a timeout here - something that needs investigation.
The implementation could be optimized by reusing the HTTP client, which is an easy change.
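A sketch of that change - the client is created once, outside the loop, and no longer closed per invocation (host and port as in the code above):

client = HTTP::Client.new(host: host, port: port)
while true
  response = client.get "/2018-06-01/runtime/invocation/next"
  # ... handle the invocation as shown above, but without the ensure/close block
end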
The interesting part would be to create a framework around this, so one can easily write crystal event handlers. However, it should be lightweight - just some helpers for HTTP events or S3 notifications, to prevent needless JSON parsing and to work with objects instead.
Update: There is already one helper available named crambda.cr, which exposes some variables of the lambda context. This could of course still be used together with serverless to deploy.
If you have questions or the correct solutions to these problems, either drop me an email or ping me on twitter.