
Monitoring Performance at Moonbeam from Day One


John Shahawy


About the author: John Shahawy is the founder of Moonbeam, where he spends his time building AI/ML systems to help writers effortlessly create engaging content. John is also the Head System Engineer at a company that builds systems for the U.S. Department of Defense. Before Moonbeam, John held executive engineering roles at some of the leading banks in the United States, including Citi and Bank of America.


As someone who has seen the devastating effects of poor performance monitoring firsthand, I can attest to the importance of doing it right from the start. If your users are experiencing latency issues and you're not aware of them, that's a big problem. At one of my previous jobs, we ended up paying out millions of dollars in SLA violation fees because we didn't have proper monitoring.

While paying violation fees was terrible, how those violations impacted my team and me is what made me recognize the importance of proactively monitoring performance. As an engineering leader, you always want to be ahead of an issue before a customer (or leadership) reports it. But because we didn’t take the time to monitor our APIs, there were several times when a customer would flag a performance problem (either via Twitter or our customer support lines), and it would escalate to my team, where we had to drop everything to figure out if the problem was real and then solve it if we could. Not only did this make me look unprepared as a leader, but the constant thrash while trying to build new capabilities crushed the team’s productivity and morale.

So when I started Moonbeam - an AI blog editor that helps anyone become a writer by taking an idea and turning it into an engaging blog post - I added error monitoring and performance tracking to my stack starting on day one. That way, instead of customers telling me about bugs, I’ll catch the bugs early, and if there is customer impact, I can fix the problem and notify the customer proactively.

Instrumenting Moonbeam with Sentry in <5 minutes

I’m an engineer during the day and a founder at night – so I don’t have time to fiddle with complicated instrumentation or bugs that are hard to track down. That’s why I picked the tech stack that I did.

Moonbeam’s tech stack

  • Next.js – A first-class developer experience for full-stack React apps.

  • Vercel – The best hosting and CI/CD platform for full-stack Next.js apps.

  • PlanetScale – A serverless SQL database.  If you’ve ever tried to connect a serverless app to an SQL database, you know I could write a whole post on just this.

  • Sentry – The best batteries-included error and application performance monitoring tool for modern apps.

Instrumenting Moonbeam with Sentry was so simple I thought I was missing something. I expected to do a lot of manual work to instrument Moonbeam for performance and error monitoring with Sentry. Instead, it was a breeze to get set up.

Installation

TL;DR - I used this guide: https://docs.sentry.io/platforms/javascript/guides/nextjs/

First, I had to add Sentry to my project:

npm install --save @sentry/nextjs

Then, I used the Sentry wizard to scaffold & transform my code:

npx @sentry/wizard -i nextjs

The wizard was crazy good.  Usually, I expect to mess with many settings to get a wizard or scaffold tool to work in my messy codebase, but Sentry’s wizard worked out of the box.
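
For context, the wizard dropped a couple of config files into the project (sentry.client.config.js and sentry.server.config.js). Here's roughly what the client one looks like; the DSN below is a placeholder and your sample rate will likely differ:

// sentry.client.config.js (generated by the wizard, lightly trimmed)
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Capture 100% of transactions while traffic is low; lower this at scale
  tracesSampleRate: 1.0,
});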

Instrumenting Moonbeam’s Next.js API routes was the only thing Sentry didn’t do automatically. Since Moonbeam’s codebase was small, instrumenting the APIs was simple: I just had to wrap each API route’s export with the Sentry wrapper to start piping performance data and errors to Sentry.

// add this import
import { withSentry } from "@sentry/nextjs";

// change this
export default handler;

// to this
export default withSentry(handler);
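
Put together, a wrapped API route ends up looking something like this (the /api/hello route and its response are made up for illustration):

// pages/api/hello.js - a minimal sketch of a wrapped Next.js API route
import { withSentry } from "@sentry/nextjs";

const handler = async (req, res) => {
  res.status(200).json({ name: "Moonbeam" });
};

// withSentry reports errors and performance data for this route automatically
export default withSentry(handler);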

Identifying redundant queries and making page loads over 6x faster

About 6 hours after implementing Sentry, I looked at Sentry's performance monitoring dashboard. I saw that my registration and app landing page were both taking ~8 seconds before the first contentful paint (the first time a user can see anything useful).

Users get bored or assume your app is broken if it takes more than 2 seconds for the page to load, so an 8-second load time is disastrous.


If you've worked with a developer, you've heard, "everything looks fine on my side." This was the case here. Moonbeam looked good on my end, so I shipped the minimum viable product (MVP) to the world.

I didn't do much testing before shipping Moonbeam's MVP because the code surface area was tiny, and it seemed like there was a minimal chance of something going wrong.

It turns out that I was making the same database call in 3 different phases of the registration & landing page for Moonbeam. I was doing this to check if the user was logged in. However, this created some unnecessary duplication and wasted time 🤦‍♂️.

The easy fix of removing the duplicative API calls made Moonbeam's pages load more than 6x faster. I would never have known there was a problem if I hadn't instrumented my MVP with Sentry.
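
For anyone curious, the shape of the fix was to resolve the logged-in user once per page load and pass the result down, rather than checking it in three separate places. A simplified sketch, with a hypothetical getUserFromDb helper standing in for my actual query:

// pages/index.js (simplified)
import { getUserFromDb } from "../lib/auth"; // hypothetical helper that queries PlanetScale

export async function getServerSideProps({ req }) {
  // One database round trip instead of three separate "is this user logged in?" checks
  const user = await getUserFromDb(req);

  if (!user) {
    return { redirect: { destination: "/login", permanent: false } };
  }

  // Pass the user down as a prop so nothing below re-fetches it
  return { props: { user } };
}

export default function Home({ user }) {
  return <p>Welcome back, {user.name}</p>;
}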

Why we use Sentry (instead of building it ourselves)

We’re a lean team focused on finding product-market fit for Moonbeam. I have no time for writing and maintaining extra code, so using a SaaS solution was the best fit. A few things made choosing Sentry a no-brainer, starting with the setup: it took me <5 minutes from signing up for a Sentry account to having error and performance monitoring live in my app, thanks to the Sentry wizard, robust documentation, and a responsive support team.

Sentry also requires no configuration and very little maintenance, which lets us automatically instrument new pages and move faster without fiddling with custom setups. Sentry’s regular updates and enhancements give us a better performance and error monitoring experience without us having to do any of the work ourselves.

The ability to see performance and release trends helps us identify potential problem areas and optimize our app’s performance while tracking both our frontend and API calls. This data is super helpful for debugging and for tuning application performance. Finally, the Slack alerts, coupled with clear stack traces, let me not only see critical or unexpected issues but also get the context I need to fix them quickly.

What's next: Monitoring AI and User Misery

While I expected error and performance monitoring to be more challenging to set up and start using, it wasn’t.

Now, as our team continues to work on improving the AI blog writing experience so anyone with something interesting to write about can do so quickly and easily, we’re looking at monitoring the performance of our AI-enhanced workflows.
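
The first step will likely be wrapping our AI generation calls in custom Sentry transactions so they show up alongside the rest of our performance data. A rough sketch, where generateBlogPost stands in for the actual call into our AI backend:

import * as Sentry from "@sentry/nextjs";
import { generateBlogPost } from "../lib/ai"; // hypothetical AI helper

export async function generateDraft(idea) {
  // A custom transaction makes the AI workflow visible in Sentry's Performance view
  const transaction = Sentry.startTransaction({ name: "ai.generate-draft", op: "ai" });
  try {
    return await generateBlogPost(idea);
  } finally {
    // Always finish the transaction so the timing data gets sent
    transaction.finish();
  }
}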

We're also working on adding new features and functionality to Moonbeam to make it an even more powerful tool for writers. As more users join, we plan to keep monitoring User Misery across our expanding code footprint.
