TL;DR
This is the third post in a series focussing on the use of Cloudflare Workers as APIs. In this post, a more productive workflow is achieved by creating a few abstractions that enable Node.js as the local development, debugging and testing runtime, with integration into VS Code for an "F5"-like debugging experience. Logical environments are also presented as a way to enable side-by-side use cases, such as multiple developers working in the same account and the creation of ephemeral resources in CI/CD pipelines. The post culminates with some video demos of logical environments being used to support local development and CI/CD workflows.
The challenge
When developing cloud services, the local development experience often requires you to deploy your services to the cloud to test them, which can significantly slow down the development workflow. An alternative approach is to use local abstractions that emulate the target service, at least partially, to enable quicker iteration.
With Cloudflare Workers there's the wrangler CLI, which has a dev command that enables you to send HTTP requests to an instance of your Worker running locally, presumably emulating various features of the Cloudflare Worker environment. But to be honest, I don't find the mapping of Workers to scripts and routes in the required wrangler.toml file as intuitive or flexible as the Serverless framework. I also struggled to find a way to easily launch (i.e. hit F5) into a debugging session in VS Code when using wrangler.
We've already seen the flexibility we gain when using the Serverless framework in my post on blue/green deployments, so I wanted to explore an approach to faster local development, debugging and testing that complements that setup using Node.js and VS Code.
Considerations
Workers run on V8 Isolates, which gives them improved performance characteristics over some other runtimes (e.g. reduced cold-start impact). The Worker runtime is aligned with Web Workers, so it is more akin to a browser environment than a server environment like Node.js. There are some things to consider when designing a local development experience, many of which are just good software engineering principles anyway:
Substitutable dependencies
Substitutable dependencies is an inversion of control technique. It requires you to identify the specific runtime features you depend on in a given execution context (Cloudflare Workers) that may require an alternative implementation in another execution context (Node.js), and to make sure you have a mechanism for substituting those dependencies (a form of dependency injection).
A good place to start is the list of Worker runtime APIs. If you intend to use any of these (you will be using Fetch/Request/Response), ensure you have a substitution mechanism that makes a runtime-specific implementation available:
In the diagram above, the API resources depend on an abstraction that, in this case, provides implementations for Runtime A. To run our code in Runtime B, we can replace the abstraction with one that provides implementations for Runtime B:
The diagram above shows a new abstraction for Runtime B that has had to polyfill some missing API support in the runtime. When you depend on an abstraction, you gain flexibility and portability.
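To make the principle concrete, here's a minimal sketch (the file name and URL are hypothetical): a resource that only touches the standard fetch APIs can run unchanged in any runtime that provides them, natively or via a substitute.
// resources/greeting.js (hypothetical) - depends only on the standard
// fetch APIs, never directly on runtime-specific modules like node-fetch
module.exports = async (event) => {
  // `fetch` and `Response` are whatever the current execution context
  // has placed in the global scope (native in a Worker, node-fetch in Node.js)
  const upstream = await fetch('https://example.com/status')
  return new Response(`Upstream says: ${upstream.status}`, { status: 200 })
}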
Substitutable execution context
Substitutable execution context follows on from the principle of substitutable dependencies: your code should be appropriately encapsulated so that it is runnable in any execution context, with minimal integration to acquire input and generate output.
For example, assuming you follow the principle above for your dependencies, if you encapsulate your application logic into the abstraction of a resource that doesn't depend on specific runtime APIs to acquire input and generate output, it can easily be run in a Worker, Lambda, Container or whatever - the handling of input and output should be the only integration concern:
In the diagram above, following on from the substitutable dependencies principle, the API resources have a very thin integration layer that binds them to the entry and exit points of the runtime, i.e. the acquisition of input and the generation of output. We can bind our code to another runtime by simply replacing that thin integration layer.
This allows you to test the majority of your application code irrespective of the target execution context, and for those thin layers of integration, you can use appropriate mocks and integration tests at appropriate points in your delivery pipeline.
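For example, a unit test can exercise a resource directly with no target runtime in sight. A sketch, assuming a Jest-style test runner and the Node.js execution context introduced later in this post (the paths are illustrative):
// load the Node.js substitutes before anything touches the globals
require('../src/integration/node/context')
const resource = require('../src/resources/account')

test('account resource responds with 200', async () => {
  // the event shape is an assumption - only what the resource reads
  const event = { request: new Request('https://localhost/account') }
  const response = await resource(event)
  expect(response.status).toBe(200)
})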
Global scope
The runtime APIs in a Worker are available in the global scope, and so are the individual environment variables and KV bindings. Making the runtime substitutes available in the global scope of your execution context is a quick way of achieving the above principles. However, in some cases, such as with environment variables and KV bindings, it would be prudent to normalise access through an abstraction.
For example, the common task of accessing an environment variable in Node.js:
const host = process.env.HOST
In a Worker environment is:
const host = HOST
And to access a key/value pair in KV with the CONFIG binding:
const value = CONFIG.get(key)
It gets tricky to make those variables available in the global scope at runtime when you aren't pre-processing / transpiling the code, so instead access that data via an abstraction:
const host = context.environment.variable('HOST')
const value = context.config.get(key)
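As a sketch of what the config abstraction could look like in the Node.js context (the in-memory store is purely illustrative; the Cloudflare version would delegate to the CONFIG KV binding):
// a hypothetical in-memory substitute for the CONFIG KV namespace,
// so local runs don't need a real KV binding
const localStore = new Map([['example-key', 'example-value']])

const config = {
  // KV reads are asynchronous, so the substitute is async too
  async get(key) {
    return localStore.get(key)
  },
}

// in the Cloudflare context this becomes a thin wrapper over the binding:
// const config = { get: (key) => CONFIG.get(key) }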
Implementation
I forked the repo from the post on blue/green deployments to incrementally add the changes required for this post so you can see a diff of the changes required in the repo for this post.
The first thing to do was set up the solution so that we can substitute the execution context for a Node.js implementation locally.
Entry point
The default Cloudflare entry point is a small script that simply adds an event listener to run your code on a 'fetch' event and then returns the response, i.e.:
addEventListener('fetch', (event) => {
// some code
event.respondWith(someResponse)
})
It's non-trivial to substitute this entry point locally in Node.js. I tried, and got so far before deciding it wasn't worth the effort, as it's such a thin layer of integration anyway. Following the above principles, we should move as much API code as possible into an abstraction outside of the entry point, and test the Cloudflare integration at an appropriate point in our delivery pipeline (integration test).
I already had the notion of a 'resource', as in RESTful resource, as an abstraction to contain the API logic, so I'll use that as the entry point with minimal integration (take input, return output):
addEventListener('fetch', (event) => {
const resource = require('../../../resources/account')
event.respondWith(resource(event))
})
I could take this further and extract the inputs from the event into a normalised representation, and do the same for the response - but YAGNI.
Now the majority of the code lives in the account resource and just the input and response need to be integration tested later.
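For reference, a resource is just a module exporting a function from event to response. A minimal sketch of the account resource (the body shown here is illustrative, using the context abstraction introduced in the next section):
// src/resources/account.js (sketch) - all the API logic lives here,
// depending only on substitutable globals (Response, context)
module.exports = async (event) => {
  const environment = context.environment.variable('ENVIRONMENT')
  const slot = context.environment.variable('SLOT')
  const message = `Processing account resource in ${environment} environment from ${slot} slot...`
  return new Response(JSON.stringify(message), {
    status: 200,
    headers: { 'content-type': 'application/json' },
  })
}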
Execution context
I want to create the concept of an execution context that I can make available to my application code and change depending on where I'm running it.
The execution context will provide runtime specific substitutes for any runtime APIs/dependencies. The context will be some code that runs before any other code to set up a context object in the global scope that will provide access to any dependencies via abstractions. In the Node.js scenario, it will make Node.js equivalents of the global runtime APIs available in the global scope.
Node.js context
The Node.js context requires the node-fetch package to make equivalent fetch APIs available in the global scope. It also provides a Node.js implementation of an abstraction to retrieve environment variables (via process.env) and then makes the context available in the global scope:
const fetch = require('node-fetch')
// substitute Node.js implementations in the global scope
global.fetch = fetch
global.Request = fetch.Request
global.Response = fetch.Response
global.Headers = fetch.Headers
// abstraction for accessing environment variables
const environment = {
variable(key) {
try {
if (process.env[key]) {
return process.env[key]
}
} catch (err) {}
return undefined
},
}
// expose abstractions in the execution context
const context = {
environment,
}
global.context = context
There are many more things we can do here, but the above is a bare-bones implementation to get the concept across. Node.js equivalents would need to be found for any other runtime APIs you use; for example, I also created a crypto implementation that uses the Node.js crypto package.
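As a sketch of that idea, on Node.js 15+ the built-in crypto module exposes a Web Crypto implementation, so the substitute can be tiny:
// substitute a Web Crypto implementation in the global scope, matching
// the `crypto` global available in the Worker runtime (Node.js 15+)
const { webcrypto } = require('crypto')
global.crypto = webcrypto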
Cloudflare context
The Cloudflare context is much smaller as it only needs to substitute the abstraction used to access environment variables (via global variables):
// abstraction for accessing environment variables
const environment = {
variable(key) {
try {
if (self[key]) {
return self[key]
}
} catch (err) {}
return undefined
},
}
// expose abstractions in the execution context
const context = {
environment,
}
global.context = context
The differences in this context are that it uses self to access global variables rather than process.env, and that it doesn't have to provide a fetch implementation, as one is already available in the Cloudflare runtime.
Loading the execution context
Now I just need to find the right place to load the execution context.
Cloudflare
I don't have any control over the Cloudflare load sequence so I'm just going to require it as the first step in the Cloudflare entry point:
require('../context')
addEventListener('fetch', (event) => {
const resource = require('../../../resources/account')
event.respondWith(resource(event))
})
This isn't ideal as I have to remember to do this for every Worker entry point I create. I could probably extend the webpack configuration to do this for me but then it becomes more opaque and I'm not sure it's worth the effort. I can always look at it again later if it becomes a problem.
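For what it's worth, if I did go down that route, webpack supports array entries that execute in order, so the context could be prepended to every Worker entry point at build time. A hypothetical sketch (paths inferred from this repo's layout):
// webpack.config.js (sketch) - load the Cloudflare context before the Worker
module.exports = {
  target: 'webworker',
  entry: {
    account: [
      './src/integration/cloudflare/context.js',
      './src/integration/cloudflare/account/index.js',
    ],
  },
}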
Node.js
For Node.js I can take advantage of the -r flag to require and run the context (src/integration/node/context) before running the resource (src/resources/account):
> node -r src/integration/node/context src/resources/account
I can't actually run the above command because the resource expects an event as input, so a runner needs to be created. The runner will take a resource (module) and an event (object) as input, run the resource with the event and output the response:
...
const run = async (resource, event) => {
return resource(event)
.then((response) => {
// log response code and text, i.e. 200 OK
console.info(`${response.status} ${response.statusText}`)
// log headers
if (response.headers) {
console.info(stringifyHeaders(response.headers))
}
if (response.ok) {
if (response.headers.get('content-type') === 'application/json') {
return response.json()
}
return response.text()
}
return Promise.resolve(null)
})
.then((data) => {
if (data) {
console.info(`${data}`)
}
})
.catch((err) => {
// log failures rather than silently swallowing them
console.error(err)
})
}
...
With that in place, a small test script can be created that simply requires the runner and the resource, creates an appropriate event object, and then runs it:
const runner = require('../runner')
const resource = require('../../src/resources/account')
const event = runner.createEvent({ route: 'account*' })
runner.run(resource, event)
I added some utility code to the runner to easily create different event objects for different tests.
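A sketch of that utility (the exact event shape is an assumption - just what the resources and runner read):
// build a minimal event object for a resource, using the Request
// substitute that the Node.js context placed in the global scope
const createEvent = ({ route = '', method = 'GET' } = {}) => ({
  route,
  request: new Request(`https://localhost/${route.replace('*', '')}`, { method }),
})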
Now you can run the command:
> node -r src/integration/node/context testing/resources/account
> 200 OK
> content-type: application/json
> Processing account resource in undefined environment from undefined slot...
There we go: the resource running and tested locally in Node.js.
Wait, why are the environment and slot 'undefined'...
Debugging
This brings me to another key local development concern: debugging. Now that the code can be run locally, we can take advantage of the debugging tools in VS Code (or whatever IDE you use).
In VS Code I can toggle Auto Attach to On, set a breakpoint in the account resource and run the command again:
> node -r src/integration/node/context testing/resources/account
> Debugger listening on ws://127.0.0.1:60149/c8abc0b9-b28f-4916-b962-66fbda4d4ad0
> For help, see: https://nodejs.org/en/docs/inspector
> Debugger attached.
Now that VS Code has attached to my runner process and stopped at my breakpoint, I can quickly see that the environment variables ENVIRONMENT and SLOT aren't defined. I know why this is: since I'm running locally, I need to set process.env.ENVIRONMENT and process.env.SLOT accordingly. I can end the debugging session and fix the problem.
...
> 200 OK
> content-type: application/json
> Processing account resource in undefined environment from undefined slot...
> Waiting for the debugger to disconnect...
>
I fix that problem by adding those variables to the .env file and using the -r trick to load dotenv before running my script:
> node -r dotenv/config -r src/integration/node/context testing/resources/account
> 200 OK
> content-type: application/json
> Processing account resource in test environment from blue slot...
There, fixed.
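For reference, the relevant lines in the .env file (values matching the output above):
ENVIRONMENT=test
SLOT=blue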
"F5" debugging
Auto attaching to my process and debugging that way was cool, but I can't be bothered to type out commands for every resource I want to debug, plus I always get them slightly wrong - I'd like to be able to just hit F5 like the old days in Visual Studio.
In VS Code you can create custom launch configurations for debugging. I've created a configuration that runs a Node.js process that requires my dotenv config and my Node.js execution context and runs the file I'm currently looking at in the editor:
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "Debug resource",
"cwd": "${workspaceFolder}/testing",
"runtimeExecutable": "node",
"runtimeArgs": ["-r", "dotenv/config", "-r", "../src/integration/node/context", "${file}"]
}
]
}
Now I can just set a breakpoint, open the test script for the resource I want to debug, and hit F5; VS Code will drop me into a debugging session.
Granted, it's a little fiddly in that the test script for the resource has to be open in the editor, and it could probably be improved upon, but it will do for now.
Demo
Watch a demo of "F5" debugging in action:
Logical environments
OK, so now I've got a workflow that enables me to iterate on and debug my Workers quickly, but before I commit my code I want to test it in a Cloudflare Worker environment. Plus, there are several developers on the team working side by side who need to do the same thing. We also want CI/CD processes running on our commits and pull requests (PRs) that can deploy our Workers and run integration tests. Having a separate Cloudflare account per developer and CI/CD process isn't feasible, especially when premium features are required and we share resources such as DNS records and TLS certificates.
Enter the logical environment. This is a concept that allows multiple deployments of the same resources to exist in the same physical environment. The concept follows the blue/green deployments approach, where an environment label forms part of the naming convention for the routes and Worker scripts and is dynamically embedded at the point of deployment.
Implementation
You can see a diff of the changes required to add logical environments to the solution in the repo for this post.
To implement this I created an ENVIRONMENT identifier that can be configured at runtime. This allows each developer and process to specify a unique environment label that can be used to namespace resources; for example:
Alex has the following in their local .env file:
ENVIRONMENT=alex
Now when Alex deploys their code the route will be (assuming the default blue slot):
blue-api.somewhere.com/alex/account* -> blue-alex-account
When a PR is created, the CI/CD process sets the environment label using the PR number:
ENVIRONMENT=pr123
When the CI/CD process deploys the code the route will be (again, assuming the default blue slot):
blue-api.somewhere.com/pr123/account* -> blue-pr123-account
I modified the Serverless plugin from my previous post to achieve this by factoring the host element of a route into a runtime variable and adding the environment as a runtime variable - you can view a summary of these modifications in the diff.
The solution assumes a default environment label of prod; when the environment label is set to that, the route is not modified with the label, giving you the "production" route; for example:
ENVIRONMENT=prod
And the route becomes:
blue-api.somewhere.com/account* -> blue-prod-account
By production route I mean the route that end-users will arrive on in that environment; the environment itself may not be the "production" environment.
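A sketch of the naming convention (not the actual plugin code) ties the examples above together:
// a sketch of the route/script naming convention - not the actual plugin code
const buildDeployment = (slot, environment, resource, host) => {
  const script = `${slot}-${environment}-${resource}`
  const route =
    environment === 'prod'
      ? `${slot}-${host}/${resource}*` // prod routes drop the environment segment
      : `${slot}-${host}/${environment}/${resource}*`
  return { route, script }
}

// buildDeployment('blue', 'alex', 'account', 'api.somewhere.com')
// -> { route: 'blue-api.somewhere.com/alex/account*', script: 'blue-alex-account' }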
Anyway, that's it - local development, debugging and testing with logical environments to enhance the experience of developing Cloudflare Workers.
Demos
Watch a demo of logical environments being used in the local development workflow:
Watch a demo of logical environments being used in a CI/CD workflow for a GitHub Pull Request:
Coming next
In the next post, I'll be looking at how we can introduce a middleware architecture to handle multiple HTTP methods and advanced route patterns in the API resources.
Make sure you check out the other posts in this series:
- Delivering APIs at the edge with Cloudflare Workers
- Blue / Green deployments for Cloudflare Workers
- Enhancing the development experience for Cloudflare Workers
- A middleware architecture for Cloudflare Workers
- API observability for Cloudflare Workers