Conceptual Article

How To Write a Serverless Function

Published on May 24, 2022


Introduction

Serverless architecture allows backend web services to be implemented on an as-needed basis. Rather than maintaining your own server configuration, you can architect your software for serverless providers and minimize the overhead involved. Serverless applications are typically deployed from a Git repository into an environment that can scale up or down as needed.

This means that serverless functions can effectively “scale to zero” – a function or endpoint should consume no resources at all as long as it is not being accessed. However, this also means that serverless functions must be well-behaved, and should become available from an idle state only to provide individual responses to input requests. These responses can be as computationally intensive as needed, but must be invoked and terminated in a predictable manner.

This tutorial will cover some best practices for writing an example serverless function.

Prerequisites

To follow this tutorial, you will need:

  • A local shell environment with a serverless deployment tool installed. Some serverless platforms make use of the serverless command, while this tutorial uses DigitalOcean’s doctl sandbox tools. Both provide similar functionality. To install and configure doctl, refer to its documentation.

  • The version control tool Git available in your development environment. If you are working in Ubuntu, you can refer to installing Git on Ubuntu 20.04.

Step 1 — Scaffolding a Serverless App Repository

A complete serverless application can be contained in only two files at a minimum — the configuration file, usually using .yml syntax, which declares necessary metadata for your application to the serverless provider, and a file containing the code itself, e.g. my_app.py, my_app.js, or my_app.go. If your application has any language dependencies, it will typically also declare them using standard language conventions, such as a package.json file for Node.js.
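
For example, a Node.js function that pulls in third-party libraries might sit alongside a minimal package.json like the sketch below; the package name, entry point, and lodash dependency are all placeholders:

{
  "name": "my-app",
  "version": "1.0.0",
  "main": "my_app.js",
  "dependencies": {
    "lodash": "^4.17.21"
  }
}

Whatever you declare here would be installed by your provider’s build process or bundled before deployment, depending on the platform.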

To initialize a serverless application, you can use doctl sandbox init with the name of a new directory:

doctl sandbox init myServerlessProject
Output
A local sandbox area 'myServerlessProject' was created for you.
You may deploy it by running the command shown on the next line:
  doctl sandbox deploy myServerlessProject

By default, this will create a project with the following directory structure:

myServerlessProject/
├── packages
│   └── sample
│       └── hello
│           └── hello.js
└── project.yml

project.yml is contained in the top-level directory. It declares metadata for hello.js, which contains a single function. All serverless applications will follow this same essential structure. You can find more examples, using other serverless frameworks, at the official Serverless Framework GitHub repository, or refer to DigitalOcean’s documentation. You can also create these directory structures from scratch without relying on an init function, but note that the requirements of each serverless provider will differ slightly.
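
For reference, the generated project.yml follows a shape roughly like the sketch below. The exact fields vary by doctl version, so treat this as illustrative and consult DigitalOcean’s project configuration documentation for the authoritative schema:

~/myServerlessProject/project.yml
# Illustrative sketch only; field names and defaults may differ by doctl version.
packages:
  - name: sample
    functions:
      - name: hello
        runtime: 'nodejs:default'
        web: true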

In the next step, you’ll explore the sample project you initialized in greater detail.

Step 2 — Architecting a Serverless Application

A serverless application can be a single function, written in a language supported by your serverless computing provider (commonly Go, Python, or JavaScript), as long as it can return some output. Your function can call other functions or load other language libraries, but there should always be a single main function, declared in your project configuration file, that communicates with the endpoint itself.

Running doctl sandbox init in the last step automatically generated a sample project for your serverless application, including a file called hello.js. You can open that file using nano or your favorite text editor:

nano myServerlessProject/packages/sample/hello/hello.js
~/myServerlessProject/packages/sample/hello/hello.js
function main(args) {
    let name = args.name || 'stranger'
    let greeting = 'Hello ' + name + '!'
    console.log(greeting)
    return {"body": greeting}
}

This file contains a single function, called main(), which accepts a set of arguments. This is the default way that serverless architectures handle input. Serverless functions do not necessarily need to parse JSON or HTTP headers directly. On most providers’ platforms, a serverless function receives input from HTTP requests as an arguments object (or dictionary) that can be unpacked using standard language features.

The first line of the function uses JavaScript’s || OR operator to parse a name argument if it is present, or use the string stranger if the function is called without any arguments. This is important in the event that your function’s endpoint is queried incorrectly, or with missing data. Serverless functions should always have a code path that allows you to quickly return null, or return the equivalent of null in a well-formed HTTP response, with a minimum of additional processing. The next line, let greeting =, performs some additional string manipulation.
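
A function that requires a specific parameter can apply the same idea more strictly: validate input up front and return a minimal, well-formed body immediately when that input is missing. The sketch below is a hypothetical variant of the sample, not part of the generated project, and the email parameter is purely illustrative:

function main(args) {
    // Fail fast: return a minimal, well-formed response when required input is missing.
    if (!args.email) {
        return {"body": "Missing required parameter: email"}
    }
    // Normal processing only runs once the input has been validated.
    return {"body": "Hello " + args.email + "!"}
}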

Depending on your serverless provider, you may not have any filesystem or OS-level features available to your function. Serverless applications are not necessarily stateless, but the features that allow them to record or retain state between runs are typically proprietary to each provider. The most common exception is logging. The sample hello.js app contains a console.log() call, which uses a built-in feature of JavaScript to output some additional data to a browser console or a local terminal’s stdout without returning it to the user. Most serverless providers will allow you to retain and review logging output in this way.

The final line of the function returns output from your function. Because most serverless functions are deployed as HTTP endpoints, you will usually want to return an HTTP response. Your serverless environment may automatically scaffold this response for you. In this case, it is only necessary to return the response body within an object, and the endpoint configuration takes care of the rest.
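
Depending on your provider, you may also be able to set the status code and headers explicitly by returning them alongside the body. The sketch below assumes an OpenWhisk-style platform such as DigitalOcean Functions, where web functions accept statusCode, headers, and body fields; check your provider’s documentation for the exact response shape it expects:

function main(args) {
    let name = args.name || 'stranger'
    // statusCode and headers are optional on OpenWhisk-style platforms;
    // when omitted, the platform scaffolds a default response around body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": 'Hello ' + name + '!'
    }
}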

This function could perform many more steps, as long as it maintained the same baseline expectations around input and output. Alternatively, your application could run multiple serverless functions in a sequence, and they could be swapped out as needed. Serverless functions can be thought of as being similar to microservice-driven architectures: both enable you to construct an application out of multiple loosely-coupled services which are not necessarily dependent on one another, and communicate over established protocols such as HTTP. Not all microservice architectures are serverless, but most serverless architectures implement microservices.
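
As a sketch of that loose coupling, one function can call another deployed function over HTTP and wrap its result, just as it would call any other microservice. The example below is hypothetical: it assumes a Node.js runtime where the global fetch API is available (Node 18 or later), and UPSTREAM_URL is a placeholder for another deployed function’s endpoint:

async function main(args) {
    // UPSTREAM_URL is an illustrative environment variable pointing at another function's endpoint.
    const upstream = process.env.UPSTREAM_URL
    if (!upstream) {
        return {"body": "No upstream function configured"}
    }
    // Call the other function over HTTP, exactly as an external client would.
    const response = await fetch(upstream + '?name=' + encodeURIComponent(args.name || 'stranger'))
    const text = await response.text()
    return {"body": "Upstream function said: " + text}
}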

Now that you understand the application architecture, in the next step you’ll learn some best practices for preparing and deploying serverless functions.

Step 3 — Deploying a Serverless Function

The doctl sandbox command line tools allow you to deploy and test your application without promoting it to production, and other serverless implementations provide similar functionality. However, nearly all serverless deployment workflows will eventually involve you committing your application to a source control repository such as GitHub, and connecting the GitHub repository to your serverless provider.

When you are ready for a production deployment, you should be able to visit your serverless provider’s console and identify your source repository as a component of an application. Your application may also have other components, such as a static site, or it may just provide the one endpoint.

For now, you can deploy directly to a testing sandbox using doctl sandbox:

doctl sandbox deploy myServerlessProject

This will return information about your deployment, including another command that you can run to request your live testing URL:

Output
Deployed '~/Desktop/myServerlessProject'
  to namespace 'f8572f2a-swev6f2t3bs'
  on host 'https://faas-nyc1-78edc.doserverless.io'
Deployment status recorded in 'myServerlessProject/.nimbella'

Deployed functions ('doctl sbx fn get <funcName> --url' for URL):
  - sample/hello

Running this command will return your serverless function’s current endpoint:

doctl sbx fn get sample/hello --url
Output
https://faas-nyc1-78edc.doserverless.io/api/v1/web/f8572f2a-swev6f2t3bs/sample/hello

The paths returned will be automatically generated, but should end in /sample/hello, based on your function names.

Note: You can review the doctl sandbox deployment functionality at its source repository.

After deploying in testing or production, you can use cURL to send HTTP requests to your endpoint. For the sample/hello app developed in this tutorial, you should be able to send a curl request to your /sample/hello endpoint:

curl https://faas-nyc1-78edc.doserverless.io/api/v1/web/f8572f2a-swev6f2t3bs/sample/hello

Output will be returned as the body of a standard HTTP response:

Output
"Hello stranger!"

You can also provide the name argument to your function as outlined above, by encoding it as an additional URL parameter:

curl "https://faas-nyc1-78edc.doserverless.io/api/v1/web/f8572f2a-swev6f2t3bs/sample/hello?name=sammy"
Output
"Hello sammy!"

After testing and confirming that your application returns the expected responses, you should ensure that sending unexpected input to your endpoint causes it to fail gracefully. You can review best practices around error handling to ensure that input is parsed correctly, but it’s most important to ensure that your application never hangs unexpectedly, as this can cause availability issues for serverless apps, as well as unexpected per-use billing.
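
As one hedged example, you might wrap any parsing in a try/catch so that malformed input produces a quick, well-formed error response instead of an unhandled exception or a hung invocation; the data parameter here is hypothetical:

function main(args) {
    try {
        // JSON.parse throws on malformed input; treat that as a client error.
        const payload = typeof args.data === 'string' ? JSON.parse(args.data) : (args.data || {})
        const name = payload.name || 'stranger'
        return {"body": 'Hello ' + name + '!'}
    } catch (err) {
        // Log the failure for later review, then return immediately rather than hanging.
        console.log('Rejecting malformed input:', err.message)
        return {"body": "Invalid input"}
    }
}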

Finally, you’ll want to commit your application to GitHub or another source code repository before going to production. If you choose to use Git or GitHub, you can refer to how to use Git effectively for an introduction to working with Git repositories.
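
If you have not already initialized a repository for the project, a typical sequence looks like the following; the remote URL is a placeholder for your own repository:

cd myServerlessProject
git init
git add .
git commit -m "Add sample serverless function"
# The remote URL below is a placeholder; substitute your own repository.
git remote add origin git@github.com:your-username/myServerlessProject.git
git push -u origin main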

After connecting your source code repository to your serverless provider, you will be able to take additional steps to restrict access to your function’s endpoints, or to associate it with other serverless functions as part of a larger, tagged app.

Conclusion

In this tutorial, you initialized, reviewed, and deployed a sample serverless function. Although each serverless computing platform is essentially proprietary, the various providers follow very similar architectural principles, and the principles in this tutorial are broadly applicable. Like any other web stack, serverless architectures can vary considerably in scale, but ensuring that individual components are self-contained helps keep your whole stack more maintainable.

Next, you may want to learn more about efficient monitoring of microservice architectures to better understand the optimization of serverless deployments. You may also want to learn about some other potential serverless architectures, such as the Jamstack environment.
