
Add BullMQ + Redis and prototype notifications queue #161

Merged (1 commit) on Feb 10, 2023

Conversation

humphd
Contributor

@humphd humphd commented Feb 8, 2023

This is an initial attempt to add BullMQ and Redis (i.e., ioredis) to our code.

I've done the most basic thing possible rather than building it to scale from the start. At the moment we have:

  • app/queues/* which holds all of our queues.
  • app/queues/notifications.server.ts which is our first queue, dealing with sending user notifications asynchronously. We'll add more as we go, but this can provide a template.
  • The Queue and Worker are both co-located in the same file, and in the same process. Down the road, we can break the worker into its own process or container(s), but I didn't want to do that at first, since it would complicate things significantly. A rough sketch of this layout follows below.
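For context, here is a rough sketch of what a co-located Queue + Worker file like app/queues/notifications.server.ts could look like. This is an illustration only, not the PR's exact code: the shared ioredis connection exported from app/lib/redis.server.ts and the sendEmail() helper are assumptions standing in for the real mailer integration.

// app/queues/notifications.server.ts (illustrative sketch)
import { Queue, Worker } from "bullmq";
import { redis } from "~/lib/redis.server"; // assumed shared ioredis connection
import { sendEmail } from "~/lib/notifications.server"; // hypothetical mailer helper

// Shape of the data each notification job carries (matches the usage example below).
export interface NotificationJobData {
  emailAddress: string;
  subject: string;
  message: string;
}

const QUEUE_NAME = "notifications";

// Producers import this queue and call .add() to enqueue jobs.
const notificationsQueue = new Queue<NotificationJobData>(QUEUE_NAME, {
  connection: redis,
});

// The Worker runs in the same process for now. Note that BullMQ workers expect
// an ioredis connection created with maxRetriesPerRequest: null.
export const notificationsWorker = new Worker<NotificationJobData>(
  QUEUE_NAME,
  async (job) => {
    const { emailAddress, subject, message } = job.data;
    // Hand the job data off to the mailer.
    await sendEmail(emailAddress, subject, message);
  },
  { connection: redis }
);

export default notificationsQueue;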

Here's how the queue would be used. Imagine we want to send a welcome message to a new user. The code would be:

import notificationsQueue from '~/queues/notifications.server';

...

// Send a new user a welcome message. Create a unique job id for this job
const jobId = `${user.email}-welcome`;
// Add the job to the queue, passing the data it will need. We have to wait for
// the job to get put into Redis, but not for the email to be sent.
await notificationsQueue.add(jobId, {
  emailAddress: user.email,
  subject: "Welcome to my.custom.domain!",
  message: "We're thrilled to welcome you to my.custom.domain...",
});

The job can now be handled by the Worker, which will attempt to send it at some point in the near future. If it fails, it will retry the job 3 more times, waiting progressively longer each time. We can configure all of this.
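As a sketch of what that retry configuration might look like (the numbers here are illustrative, not necessarily what this PR uses), the queue from the sketch above could be constructed with default job options:

// attempts: 4 means one initial try plus 3 retries; exponential backoff waits
// roughly 1s, 2s, 4s, ... between attempts.
const notificationsQueue = new Queue<NotificationJobData>("notifications", {
  connection: redis,
  defaultJobOptions: {
    attempts: 4,
    backoff: { type: "exponential", delay: 1000 },
  },
});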

I don't have anywhere in the code to hook into this yet, so it's not easy to test.

@humphd humphd self-assigned this Feb 8, 2023
Resolved review threads: app/lib/redis.server.ts (×2), docker-compose.yml
ghost
ghost previously approved these changes Feb 9, 2023

@ghost ghost left a comment


PR is still in draft, but it looks great!

@humphd
Contributor Author

humphd commented Feb 9, 2023

I added graceful shutdown for the workers and Redis.
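Roughly, the idea is something like the following sketch (assuming the notificationsWorker and shared redis connection from the earlier sketch; the PR's actual hook-up may differ):

async function shutdown(signal: string) {
  console.log(`${signal} received, shutting down gracefully...`);
  // Stop accepting new jobs and wait for in-flight jobs to finish.
  await notificationsWorker.close();
  // Then close the underlying Redis connection.
  await redis.quit();
  process.exit(0);
}

process.on("SIGTERM", () => shutdown("SIGTERM"));
process.on("SIGINT", () => shutdown("SIGINT"));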

This won't build yet; I need to get the nodemailer work landed first, and then I'll iterate on this some more.

Thanks for the first review @dadolhay, appreciated!

@humphd
Contributor Author

humphd commented Feb 9, 2023

I've updated this to use the work TD just landed for notifications. I also added a /dev page we can use for test UI, along with a button that sends a notification.

Once you click the button, it adds a job to the notifications queue; the worker picks it up and sends the email. You can check it in MailHog via http://localhost:8025:

[Screenshot, 2023-02-09: MailHog inbox showing the test notification email]

To make this work, I also had to fix our CSP policy in helmetjs: the strict policy breaks hot-reload in the UI, because hot-reload relies on inline script injection, which the CSP blocks as unsafe. We don't want to allow that in production, but in development it's nice, so the app responds to code changes in real time.
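For reference, a change like that typically looks something like the following in an Express + helmet setup (an illustrative sketch, not the PR's exact directives):

import express from "express";
import helmet from "helmet";

const app = express();
const isDev = process.env.NODE_ENV !== "production";

app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        ...helmet.contentSecurityPolicy.getDefaultDirectives(),
        // Hot-reload injects an inline script, which a strict CSP blocks, so we
        // only allow 'unsafe-inline' scripts outside of production.
        "script-src": isDev ? ["'self'", "'unsafe-inline'"] : ["'self'"],
      },
    },
  })
);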

What I'm doing here on the /dev page could be copied by other people who need to try out UI-triggered actions (e.g., @Genne23v with DNS, @dadolhay with certs).
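For anyone copying that pattern, a minimal /dev route with a UI-triggered action might look like the sketch below (route path, field values, and component names are assumptions, not this PR's exact code):

// app/routes/dev.tsx (illustrative sketch)
import type { ActionArgs } from "@remix-run/node";
import { json } from "@remix-run/node";
import { Form } from "@remix-run/react";
import notificationsQueue from "~/queues/notifications.server";

export async function action(_args: ActionArgs) {
  // Enqueue a test notification; the worker sends the email, which shows up
  // in MailHog at http://localhost:8025 in development.
  const jobId = `dev-test-${Date.now()}`;
  await notificationsQueue.add(jobId, {
    emailAddress: "test@example.com",
    subject: "Test notification",
    message: "Hello from the /dev page!",
  });
  return json({ ok: true });
}

export default function DevPage() {
  return (
    <Form method="post">
      <button type="submit">Send test notification</button>
    </Form>
  );
}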

@humphd humphd marked this pull request as ready for review February 9, 2023 19:44
@humphd humphd requested a review from a user February 9, 2023 19:44
@humphd
Contributor Author

humphd commented Feb 9, 2023

Added a fix for the Node.js EventEmitter max listeners warning you get when using Redis/BullMQ:

(node:35293) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 17 error listeners added to [Commander]. Use emitter.setMaxListeners() to increase limit

By default, Node.js warns if an event emitter has more than 10 listeners; however, the way Redis works, you routinely need far more than that. The fix recommended by the BullMQ maintainer is to bump the default value. In Telescope, we've used 32 as our default and not had any issues.
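The bump itself is a one-liner; one way to apply it globally (where exactly this PR puts it is not shown here):

import { EventEmitter } from "events";

// Node warns at more than 10 listeners by default; Redis/BullMQ legitimately
// attach many error listeners, so raise the threshold to 32.
EventEmitter.defaultMaxListeners = 32;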

SerpentBytes
SerpentBytes previously approved these changes Feb 9, 2023
Contributor

@SerpentBytes SerpentBytes left a comment


Seems to work on my end.
