Using AWS Lambda and Slack to browse the web, so you don't have to

Creating an event-driven serverless web browsing and notification tool to automate web-based tasks with AWS Lambda, Chrome, Puppeteer and Slack.

December 27th, 2020


Some fun examples including stock availability checks for the Xbox Series X are used to demonstrate the automation of web browsing tasks and notifications using AWS Lambda, headless Chrome, Puppeteer and Slack. The design decisions are explained, the code repo and implementation notes are shared, and video demos show the tool in action.

The idea

During lockdown earlier this year, I wanted to buy a specific outdoor storage solution for the garden. However, this particular product was only available from one retailer and seemingly always out of stock. The retailer didn’t have a stock alerting feature, and I got tired of periodically checking the website only to find it still out of stock. I decided it would be cool to have a little tool that checked for me and notified me when it was back in stock. I’ve been meaning to write this post for a while, and just recently stock availability for the Xbox Series X became a thing, so there was a good topical reason to finally do it.

Design goals

These are the design goals I had for the tool:

  • I’d like to be able to quickly script the automation of basic web browsing tasks (script/test/deploy in around 30 mins)
  • I’d like to run multiple tasks
  • I’d like to run the tasks on a schedule, such as daily or hourly, with each task having a different schedule
  • I’d like to receive a notification on my phone when the task has something worth telling me, i.e. something is in stock or there was an unexpected error while running the task (so I can investigate/fix it)
  • I don’t want to spend much (any) money to do this

Conceptual design

This is the conceptual design of the tool I want to create:

Illustration of the conceptual architecture for the web automation tool

Technology selection

Since we were in lockdown, I had some spare time on my hands and decided to invest some time researching how to build a tool/framework that would allow me to easily automate web browsing tasks.

Programming environment

JavaScript/Node.js, with its package ecosystem and community, is my go-to for getting up and running quickly, so that’s what I’d use to build the tool and task framework.

Web browser automation

There are several tools in the JavaScript/Node.js ecosystem you can use for this. Puppeteer seems to be the most popular, and I’ve used it successfully for other automation tasks recently. Puppeteer is headless by default, so it’s ideal for automation.

Zero-cost infrastructure

The cost goal might seem a bit unreasonable, but given the scheduling requirement, I knew this was a perfect fit for an event-driven serverless architecture. I’ve worked with AWS Lambda quite a lot for work and personal projects, and the free tier is quite generous; for personal projects I don’t think I’ve paid anything for it yet, and if I have, it’s been pennies. However, I needed to validate that I could run web browsing tasks within the constraints of a Lambda function.

Headless browser

Puppeteer automates Chromium browsers (headless and non-headless), but can Chromium run in a Lambda function? Not without some great work from the community to create a Chrome build for the AWS Lambda runtime. There’s also a Lambda layer solution for this, although I haven’t tried that approach yet. Another great feature of this package is that it runs headless in Lambda and non-headless when running locally, so it’s frictionless to develop, test and run your scripts.

Notifications

Getting push notifications on your phone usually requires an app you can publish the notification to via the vendor’s push notification service. There’s no chance I’m developing an app just to get notifications. I could use Twilio/SNS to send SMS messages instead of push notifications, but SMS isn’t a very flexible messaging format, plus it wouldn’t be completely free (although arguably a negligible cost at my usage levels). I already use Slack to get notifications for AWS billing alerts etc. via SNS, and I know its Webhook API provides a simple but powerful way to deliver fairly rich messages that can appear as notifications on your devices. Plus, it would be a cost-free solution (for my usage).

Proof of concept

Feeling comfortable I had all the components to build this tool, I created a quick proof of concept to validate the technology choices and the approach. I used the serverless framework to get up and running quickly with a single function that ran a basic web scraping task using chrome-aws-lambda and puppeteer-core. The serverless framework lets you add AWS CloudWatch event rules as schedules to your Lambda functions with a few lines of YAML. Sure enough, the solution packaged in under 50MB, and once deployed it ran on schedule and did exactly what I expected.
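For illustration, attaching schedule events in serverless.yml takes only a few lines; the function name, handler path and rates below are illustrative, not the repo’s actual configuration:

```yaml
# Hypothetical serverless.yml fragment: one function, two CloudWatch
# event schedules expressed as 'rate' expressions.
functions:
  browserTasks:
    handler: src/handler.run
    events:
      - schedule: rate(1 hour)
      - schedule: rate(1 day)
```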

Logical design

After the technology selection and validation, the conceptual design evolved into something more concrete:

Illustration of the logical architecture for the web automation tool

Implementation

I’ve published the code for the tool on GitHub along with the examples from the demos further on in this post; feel free to use and adapt it. Below are some notes on the implementation:

Task plugins

To make it quick and easy to add/remove tasks in the future I decided to create a plugin model where the tasks are dynamically loaded at runtime from a specified directory. The plugin implementation recursively scans the specified directory and requires any JavaScript modules it finds:

if (!pluginPath.endsWith('.test.js') && pluginPath.endsWith('.js')) {
  if (!require.cache[pluginPath]) {
    log.info(`loading plugin: ${pluginPath}`)
    // eslint-disable-next-line import/no-dynamic-require
    return require(pluginPath)(container)
  }
  log.info(`plugin already loaded: ${pluginPath}`)
}

Each plugin is passed a plugin container (an array) that it should push itself into. I also wanted to develop my tasks using TDD, and my preference is to colocate test files with their subject files, so I had to specifically ignore test scripts in the loading sequence (line 1).

I originally designed this as an ephemeral process and loaded the plugins on each invocation, but it turns out a Lambda process can hang around for a while, which makes sense from an optimisation point of view (especially if it has scheduled events within a relatively short time frame). Anyway, I had to add a check to see if the plugin was already loaded (line 2).


Now adding a task is as simple as adding a new JavaScript module, but what would a task look like? I decided each task should have the following structure:

  • name: used as the display name in notifications
  • url: the entry point for the task, also included as a link in the notification for quick access
  • emoji: prefixed to the content so each task’s content is easy to distinguish in a notification
  • schedule: the event schedule to run the task with. I decided to use the AWS CloudWatch ‘rate’ expression for event schedules as it covers my needs and is easy to parse (I can always add ‘cron’ support later if I ever need it)
  • run: a function that performs the task (async, of course); it should return a result that can be used in subsequent notifications
  • shouldNotify: a function that receives the result of the task and returns true/false to signal whether a notification should be sent. This gives flexibility over what gets notified: for example, I might only want a notification if stock is available or if the task failed, and no notification otherwise.

Here’s a basic example from the task scheduling test for a task that runs every 5 minutes (demo later on):

const task = () => ({
  name: 'Every 5 mins',
  url: 'http://localhost/task/minutes/5',
  emoji: ':five:',
  schedule: 'rate(5 minutes)',
  shouldNotify: () => true,
  run: async function run() {
    return `${this.name} just ran`
  },
})

A plugin task provider loads the tasks from a specified location and parses the schedule into a more filterable object representation using the schedule parser:

const matches = schedule.match(/(.*)\((\d*) (.*)\)/)
if (matches && matches.length >= 4) {
  if (matches[1] === 'rate') {
    return {
      type: 'rate',
      unit: matches[3],
      value: parseInt(matches[2], 10),
    }
  }
}

Now a chainable task filter can easily filter a list of tasks based on their schedules.
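The filter itself isn’t shown here, but a minimal sketch of such a chainable filter might look like the following; the taskFilter/schedule/select names mirror the call sites used elsewhere in the tool, while the matching logic is my assumption:

```javascript
// Hypothetical sketch of a chainable task filter: narrow a task list to
// those whose parsed schedule matches the invocation schedule.
const taskFilter = (tasks) => ({
  // returns a new filter over the matching subset, so calls can chain
  schedule(invocationSchedule) {
    return taskFilter(
      tasks.filter(
        (task) =>
          task.schedule.type === invocationSchedule.type &&
          task.schedule.unit === invocationSchedule.unit &&
          task.schedule.value === invocationSchedule.value,
      ),
    )
  },
  // terminates the chain and returns the selected tasks
  select() {
    return tasks
  },
})
```

Usage would then read naturally: `taskFilter(tasks).schedule({ type: 'rate', unit: 'minutes', value: 5 }).select()`.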

Task schedules

I want a single Lambda function to run the tasks, which means I'll need multiple event schedules defined on the function. Since one of my design goals is to make it as simple as possible to add a new task, I don't want to have to remember to add new schedules to my function as and when the need for them comes up. I'd prefer the schedule requirements were picked up automatically from the tasks that have been defined.

One of the reasons I chose the serverless framework is its extensibility; I’ve previously written about using plugins and lifecycle hooks to add new capabilities. I created a serverless framework plugin that hooks into the before:package:initialize lifecycle hook to load the tasks and build a unique list of schedules, which it adds to the function definition dynamically before the function is packaged and deployed.
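As a rough sketch, a plugin hooking that lifecycle event might look like this; the class name, function name (browserTasks) and hard-coded schedule list are illustrative, since the real plugin derives its schedules from the loaded tasks:

```javascript
// Hypothetical serverless framework plugin: before packaging, attach a
// de-duplicated list of schedule events to the task-running function.
class ScheduleTasksPlugin {
  constructor(serverless) {
    this.serverless = serverless
    this.hooks = {
      'before:package:initialize': () => this.addSchedules(),
    }
  }

  addSchedules() {
    const fn = this.serverless.service.functions.browserTasks
    // in the real plugin these would be gathered from the task modules
    const schedules = ['rate(5 minutes)', 'rate(1 hour)']
    const unique = [...new Set(schedules)]
    fn.events = (fn.events || []).concat(
      unique.map((rate) => ({ schedule: rate })),
    )
  }
}

module.exports = ScheduleTasksPlugin
```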

Task host

The task host is the execution environment that receives the invocation event and is responsible for resolving the invocation schedule. In this case, the host is a Lambda function, and unfortunately the event payload only contains the ARN of the CloudWatch event rule that invoked the Lambda, rather than the rule itself. So I have to jump through some hoops: split the rule ARN to get the rule name using the resource parser, fetch the rule (with its schedule) from the CloudWatch Events API, then parse it with the schedule parser. This all comes together in the host, which loads the tasks, filters them based on the invocation schedule and, if any match, runs them via the task runner and awaits the results:

const ruleName = resourceParser.parse({ resource: event.resources[0] })
if (ruleName) {
  const rule = await rules.byName({ name: ruleName })
  if (rule) {
    log.info(
      `invocation schedule is ${rule.schedule.type}(${rule.schedule.value} ${rule.schedule.unit})`,
    )
    log.info('loading tasks')
    const tasks = await taskProvider.tasks()
    if (tasks.length > 0) {
      log.info(`loaded ${tasks.length} tasks`)
      const scheduledTasks = taskFilter(tasks).schedule(rule.schedule).select()
      log.info(`running ${scheduledTasks.length} scheduled tasks`)
      result.tasks = await taskRunner.run({ tasks: scheduledTasks })
      result.total = tasks.length
      result.completed = true
      log.info('done')
    }
  }
} else {
  log.error('could not parse the schedule')
}

The host augments the result from the task runner with the total tasks provided to the runner and signals that the process completed successfully.

Task runner

The first thing the task runner does is map over all the provided tasks and run them, adding successfully run tasks and their results to a list of successful runs, and failed tasks and their errors to a list of failed runs, which are returned along with a count of the tasks run:

const result = {
  run: 0,
  succeeded: [],
  failed: [],
}

const promises = tasks.map(async (task) => {
  try {
    log.info(`running ${task.name} task`)
    result.run += 1
    const taskResult = await task.run()
    result.succeeded.push({ task, result: taskResult })
  } catch (err) {
    log.error(`error running ${task.name} task`, err)
    result.failed.push({ task, result: err })
  }
})

await Promise.all(promises)

return result

Once the task runs are complete, the task runner determines which tasks should have notifications and sends them via the notifier.
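A minimal sketch of that decision step, assuming the runner’s result shape shown above (succeeded/failed lists) and assuming failures are always notified; the toNotify name is mine, not the repo’s:

```javascript
// Hypothetical sketch: ask each successful task whether its result is
// worth a notification via shouldNotify; failed runs are always included.
const toNotify = (runnerResult) => ({
  succeeded: runnerResult.succeeded.filter(({ task, result }) =>
    task.shouldNotify(result),
  ),
  failed: runnerResult.failed,
})
```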

Notifier

In this case, the notifier is sending the notifications via Slack. First, each task result is summarised into a block of text:

text: `<${success.task.url}|${}>\n${success.task.emoji} ${success.result}`

Failed tasks are summarised similarly, except an :exclamation: emoji is used.

The task result summaries (for success and failures) are sent in a single Slack message, with each summary in a separate block and interspersed with dividers:

const message = {
  blocks: [],
}

const toBlock = (summary) => ({
  type: 'section',
  text: {
    type: 'mrkdwn',
    text: summary.text,
  },
})

const blocks = summaries.map(toBlock)

const divider = {
  type: 'divider',
}

message.blocks = intersperse(blocks, divider)

return message
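The intersperse helper isn’t shown above; a minimal reduce-based sketch could be:

```javascript
// Place a separator between each pair of adjacent elements:
// ['a', 'b', 'c'] with '|' becomes ['a', '|', 'b', '|', 'c'].
const intersperse = (items, separator) =>
  items.reduce(
    (acc, item, index) => (index === 0 ? [item] : [...acc, separator, item]),
    [],
  )
```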

The message is then sent to the Slack Webhook endpoint configured in the environment:

const endpoint = process.env.SLACK_ENDPOINT
const response = await fetch(endpoint, {
  method: 'POST',
  body: JSON.stringify(message),
  headers: { 'Content-Type': 'application/json' },
})

That’s the gist of it, time for some demos.

Demos

I have two demos for this tool. The first is a test I created to ensure scheduled events work with tasks as expected. The second is a more practical pair of real-world tasks: a daily check for rumours about my football club (Newcastle United) and a topical, seasonal example: checking stock availability for an Xbox Series X.

Schedule task runner

I set up this demo to test the scheduled running of tasks. It consists of four tasks scheduled to run every 5 minutes, every 10 minutes, once an hour and every 2 hours. The tasks don’t do much other than return some text confirming they ran, but each has a number emoji so I can see whether it’s working correctly:

Footy gossip and Xbox Series X stock checks

Examples of tasks I’m using right now: scraping rumours about Newcastle United from the BBC football gossip page, which I run on a daily schedule, and checking the Xbox website for Series X stock availability, which I run on an hourly schedule.

Footy gossip

This task loads the gossip page, finds all the individual paragraphs and applies a regular expression (rumourMatcher) to filter paragraphs that contain the words Newcastle or Toon:

const rumourMatcher = /(Newcastle|Toon)/
const page = await browser.newPage()

await page.goto(url)
const allRumours = (await page.$$('article div p')) || []
log.info(`found ${allRumours.length} total rumours...`)

const text = await Promise.all(
  [...allRumours].map((rumour) =>
    rumour.getProperty('innerText').then((item) => item.jsonValue()),
  ),
)

const matchedRumours = text.filter((rumour) => rumour.match(rumourMatcher))
log.info(`found ${matchedRumours.length} matching rumours...`)

result = matchedRumours.length > 0 ? matchedRumours.join(`\n\n`) : 'No gossip today.'

Any matching rumours are concatenated together with blank lines between them; if none match, the text ‘No gossip today.’ is returned. The task is configured with a football emoji.

Xbox Series X stock availability

This task loads the stock availability page for the standalone Xbox Series X, finds all the retailers, extracts the retailer name (or domain) from the alt text of the logo image and the stock availability text:

const page = await browser.newPage()

await page.goto(url)
const retailerElements = (await page.$$('div.hatchretailer')) || []
log.info(`found ${retailerElements.length} retailers...`)

const retailerName = async (retailer) =>
  retailer.$eval(
    `span.retlogo img`,
    (element) => element.getAttribute('alt').slice(0, -' logo'.length), // trim ' logo' off the end of the alt text to get the retailer name
  )

const retailerStock = async (retailer) =>
  retailer.$eval(`span.retstockbuy span`, (element) => element.innerHTML)

const hasStock = (retailers) =>
  retailers.reduce((acc, curr) => {
    if (curr.stock.toUpperCase() !== 'OUT OF STOCK') {
      acc.push(curr)
    }

    return acc
  }, [])

const retailers = await Promise.all(
  [...retailerElements].map(async (retailer) => ({
    name: await retailerName(retailer),
    stock: await retailerStock(retailer),
  })),
)

const retailersWithStock = hasStock(retailers)

result =
  retailersWithStock.length > 0
    ? retailersWithStock.map((retailer) => `${retailer.name} (${retailer.stock})`).join(`\n\n`)
    : 'No stock.'

I don’t know what the text is when stock is available, so I test the stock availability text for anything that isn’t ‘OUT OF STOCK’ to find retailers that might have stock. Again, any retailers with potential stock are concatenated together with blank lines between them, and if none are found, the text ‘No stock.’ is returned. The task is configured with a joystick emoji.

Here are the tasks in action:

Note: I changed the schedules to 1 minute to quickly demo the tasks running.

Wrapping up

Well, if you didn’t unwrap an Xbox Series X for Xmas, now you can be one of the first to know when they’re available again. I’ve shown you some fun examples of how you can use this technology; it’s especially useful when you want to act on data that isn’t available via other means, such as an alert or API. There are loads of things you can do, for fun or profit. I’ll leave it to your imagination: the world wide web is your oyster.