Faire Improves Website Speed with Million Lint for Enterprise.

TLDR;

"did an absolutely goated job"

45% faster page loading


Preventing broken code from making it to production

Instacart’s CI/CD pipeline introduces complexity when deploying new features and updates at scale, potentially impacting overall software quality. Igor’s team owns infrastructure, observability, developer productivity, and build & deploy; to maintain velocity and quality, they combine automated canary analysis, using Kayenta, with their CI/CD pipeline. By partially deploying new releases to a small number of nodes, they’re able to analyze key metrics to assess how they perform compared to previous stable releases.

The process relies on existing metrics showing historic success/failure rates. If a new release drops below previous stable releases, it’s rolled back, giving developers time to investigate and resolve any issues.
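That rollback rule can be sketched in a few lines. This is a minimal illustration, not Instacart’s implementation; the success rates and tolerance are hypothetical:

```python
def should_roll_back(baseline_success_rate: float,
                     canary_success_rate: float,
                     tolerance: float = 0.01) -> bool:
    """Roll back when the canary's success rate falls more than
    `tolerance` below the stable baseline's success rate."""
    return canary_success_rate < baseline_success_rate - tolerance

# Hypothetical metrics: baseline at 99.5% success, canary at 97.0%.
print(should_roll_back(0.995, 0.970))  # a 2.5-point drop triggers a rollback
```

In practice the comparison runs continuously over windows of metrics rather than on a single pair of numbers, but the decision boils down to this threshold check.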

The initial iteration of this process was essentially just software testing. The idea seemed simple, but the team knew that testing alone wouldn’t cut it: they’d need metrics showing historic success or error rates to compare past, stable releases with what they were canarying.

Tapping into an existing resource


Engineering teams at Instacart already use Sentry to monitor frontend and backend issues in JavaScript, Ruby, Go, and Python. Custom tags let developers assign projects, teams, and product areas to specific issues, and correlate them with deployments for faster triaging.
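With Sentry’s Python SDK, that tagging setup looks roughly like the snippet below. The DSN, release string, and tag names ("team", "product_area") are illustrative placeholders, not Instacart’s actual schema:

```python
import sentry_sdk

# Hypothetical DSN and release identifier.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    release="checkout-service@1.42.0",  # correlates issues with a deployment
    environment="production",
)

# Custom tags attach ownership context to every event this process reports.
sentry_sdk.set_tag("team", "infrastructure")
sentry_sdk.set_tag("product_area", "checkout")
```

Setting `release` at init time is what makes errors attributable to a specific deploy, which is the property the canary analysis later leans on.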


Custom alerts help them stay on top of the volume and frequency of new errors; when an error occurs, developers working on the related project are notified. While triaging, issues are linked in Sentry, outlining the error, the exception that occurred, and its corresponding release before it is rolled back.

Whenever a ‘bad release’ was at fault, Igor’s team found that there was often a corresponding Sentry alert notifying developers of error spikes. Engineers triaging could rely on the details they got from Sentry, since nearly all instances of buggy releases were already logged.

The infrastructure team realized that, since other teams had already been adding detailed context to issues and correlating them by deployment, there’d be a considerable amount of historical data to leverage for canary analysis.

Canary analysis 2.0

The first iteration of Instacart’s automated canary analysis has a deploy manager initiate and simultaneously deploy a stable, existing release – the ‘baseline’ – alongside the new release – labeled the ‘canary’ – onto two sets of tasks in the cloud. Each set is roughly 2-5% of the overall capacity of the service.
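Sizing those two groups is straightforward arithmetic. A hypothetical sketch, assuming the deploy manager works in task counts and picks a fraction in the 2-5% range:

```python
def canary_task_counts(total_tasks: int, fraction: float = 0.03) -> dict:
    """Size the baseline and canary groups, each a small slice (roughly
    2-5%) of the service's overall capacity; the rest stays on the
    current stable release."""
    group = max(1, round(total_tasks * fraction))
    return {
        "baseline": group,
        "canary": group,
        "stable": total_tasks - 2 * group,
    }

# For a 200-task service at 3%, each group gets 6 tasks.
print(canary_task_counts(200))  # {'baseline': 6, 'canary': 6, 'stable': 188}
```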

Then the comparison begins: both the baseline and canary send metrics tagged to their respective releases to Kayenta, which reads metrics from both, processes them, and flags any statistical irregularities to identify potential degradation in the canary. Seems like a simple system, right? It is, but only if it has reliable metrics to go on.
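As a toy illustration of what “statistical irregularities” means here, the check below flags a canary whose mean metric drifts well outside the baseline’s spread. This is a crude stand-in for Kayenta’s actual statistical tests, and the latency samples are invented:

```python
from statistics import mean, stdev

def looks_degraded(baseline_samples, canary_samples, z_threshold=3.0):
    """Flag the canary when its mean sample sits more than `z_threshold`
    baseline standard deviations above the baseline mean -- a crude
    stand-in for Kayenta's statistical comparison."""
    mu, sigma = mean(baseline_samples), stdev(baseline_samples)
    if sigma == 0:
        return mean(canary_samples) > mu
    return (mean(canary_samples) - mu) / sigma > z_threshold

baseline = [101, 99, 100, 102, 98]    # hypothetical latency samples (ms)
canary = [130, 128, 131, 129, 132]
print(looks_degraded(baseline, canary))  # large shift flagged as degraded
```

Kayenta’s real judges are more robust (nonparametric tests per metric, aggregated into a pass/fail score), but the shape of the decision is the same: compare distributions, not single data points.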

Igor’s team then started looking at Sentry to extract more detailed data to add to the automated analysis. The deploy manager still deploys the baseline and canary, but also sends an SQS message to a custom service, which kicks off a new Sentry integration that runs for the duration of the test. The integration queries Sentry’s V2 API with the name of the service being tested and reads back all the errors generated for it in real time, before splitting them into what’s coming from the baseline and what originated in the canary, disregarding everything else.
