From 50 To 100: Hitting a Perfect Next.js Performance Score

After releasing my website a few days ago and publishing my first article, I already knew the next steps would be a full performance review, including accessibility and SEO.
More than anything, to give myself peace of mind and a good night's sleep.
For a full assessment I used the Lighthouse CLI, pointing directly at the live URL, and the results were... well, surprising, to be honest: I managed to hit a score of ~100 in all areas except performance.
Not too shabby. But not great either, of course.
I had to act on several fronts and apply different techniques, resulting in many small, intentional improvements, each with a cumulative impact.
In this article I’ll walk through a series of practical actions I took to improve my website's Core Web Vitals, what actually moved the needle, and the key principles behind those changes.

First Things First: FCP and LCP
First Contentful Paint (FCP) is a Core Web Vital metric that measures the time from when a page starts loading to when any part of the page's content (text, image, SVG, or non-white canvas) is rendered on the screen.
It acts as a crucial user-centric metric, indicating how quickly a user perceives the site is loading.
Largest Contentful Paint (LCP), on the other hand, is another key web performance metric that measures the time it takes for the largest image or text block to become visible within the viewport (not on the whole page).
In simpler terms, it’s the point at which the user perceives that the main content of the page has loaded.
A slow LCP can lead to a frustrating user experience, higher bounce rates, and can negatively impact your website’s search engine ranking.
For a good user experience, Google recommends an FCP of 1.8 seconds or less and an LCP of 2.5 seconds or less.
What is Lighthouse?
Google Lighthouse is a free, open-source automated tool from Google used to improve web page quality by auditing performance, accessibility, SEO, best practices, and Progressive Web Apps (PWA). It runs from the browser DevTools, from the command line, or as part of a CI/CD pipeline.
Lighthouse generates results per page, and it's important to keep in mind that its results can fluctuate by 5-10 points between runs, depending on network conditions, server load, and CPU usage.
Generally, to have reliable results we have to:
run a few passes on the same page and act on the average score
run it against the live URL, as scores will be different from local production or dev builds
repeat this for each page of your website
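For illustration, a single CLI pass against a live URL looks something like this (the URL is a placeholder; this assumes the lighthouse npm package is available):

```shell
# Run Lighthouse against the live URL and save an HTML report.
# Repeat a few times and average the performance scores between runs.
npx lighthouse https://www.example.com --output html --output-path ./report.html
```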
In my case, I had to act on reducing First Contentful Paint and Largest Contentful Paint. There were no Cumulative Layout Shift issues, and Total Blocking Time and Speed Index would come down naturally by acting on the first two metrics.
Let's look at how I proceeded.
Practical Actions
1. Removing unused JavaScript
Thinking about how to reduce the FCP metric, reducing the total amount of JavaScript downloaded by the browser came as a natural answer: less JavaScript → faster load.
So I acted on the following:
Removing lodash-es
Lodash was only used for kebabCase and startCase, so I replaced them with native JavaScript implementations
Removing unused and redundant dependencies
Like @headlessui/react, a big dependency serving only a couple of client-side components, which I replaced with native HTML (React components) + CSS
Enabling tree-shaking
I used the Next.js optimizePackageImports experimental feature for libraries like clsx, @sanity/image-url, react-syntax-highlighter and @next/third-parties
Key takeaway:
The fastest JavaScript is the one you don’t ship, so remove what's not needed
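For reference, the tree-shaking part is just configuration; a sketch of what it looks like in next.config.ts (the package list is mine, and optimizePackageImports is an experimental option, so check your Next.js version):

```ts
// next.config.ts — sketch of the optimizePackageImports setup (experimental API)
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    // Ask Next.js to rewrite barrel-file imports from these packages
    // so only the modules actually used end up in the bundle
    optimizePackageImports: [
      'clsx',
      '@sanity/image-url',
      'react-syntax-highlighter',
      '@next/third-parties',
    ],
  },
};

export default nextConfig;
```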
2. Use next/font module to load fonts
When I first inspected the browser Network tab, one issue stood out immediately: fonts were being downloaded on every page change and were huge in size (hundreds of KB each, 12 variants in total - not great).
This clearly needed an urgent fix, so I started with a couple of straightforward optimizations:
Converting fonts from .ttf to .woff2
Switching to the Web Open Font Format (WOFF2) reduced total font size by roughly 50-60%
Preloading critical fonts
I set preloading for fonts used in body text and headings for faster access, and font-display: swap on non-critical ones to avoid blocking rendering
Results were noticeable, but not perfect.
That's when I came across Next.js' recommended approach for Font Optimization, using next/font/google (or next/font/local).
By switching to it, Next.js handled everything automatically:
fonts are downloaded and self-hosted at build time
optimal preloading is applied
font-display: swap is configured correctly
layout shifts are minimized
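A sketch of what the switch looks like in an app-router layout (the font choice here is a stand-in for my actual fonts):

```tsx
// app/layout.tsx — sketch of next/font/google usage (Inter as a placeholder font)
import { Inter } from 'next/font/google';
import type { ReactNode } from 'react';

// The font is downloaded at build time and self-hosted;
// no runtime request to Google is made by the browser
const inter = Inter({
  subsets: ['latin'],
  display: 'swap', // applied sensibly by default, but can be set explicitly
});

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
```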
Key takeaway:
Don’t manually optimize what your framework already solves better — next/font gives you optimal font loading out of the box with zero maintenance cost.
3. Moving work to the server (huge impact)
Even though I paid careful attention to correctly using SSR vs. CSR components during development, there were a few client-side components that could still be moved to the server.
So I aggressively reduced client-side JavaScript by:
converting components to server components
pushing logic away from the browser
Some examples were the <Heading />, <Button /> and <Link /> components, which were declared client-side.
This further reduced the downloaded bundle size, cutting both main-thread blocking time and hydration cost.
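As an illustration, a purely presentational component needs no 'use client' directive at all; removing it (along with any browser-only logic) lets Next.js keep the component on the server. A rough sketch of what a component like <Heading /> can look like:

```tsx
// Sketch: no 'use client' directive, so this renders as a server component
// and ships no JavaScript for it to the browser
import type { ReactNode } from 'react';

type HeadingProps = {
  level?: 1 | 2 | 3;
  children: ReactNode;
};

export default function Heading({ level = 2, children }: HeadingProps) {
  const Tag = `h${level}` as 'h1' | 'h2' | 'h3';
  return <Tag>{children}</Tag>;
}
```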
Key takeaway:
Less client-side JavaScript means less hydration, less blocking, and a faster app — in a blog website like mine, move as much as possible to the server
4. Smarter data fetching
Fetch less data
Previously, when fetching data for the articles list page, my GraphQL query used to fetch each article's whole bodyRaw field, only to then extract the excerpt using JavaScript. This was highly inefficient.
This text is what is displayed below each post card in the articles list, a summary or intro paragraph that invites the user to read further, and it's short by definition.
So I added a dedicated excerpt field to the Sanity post schema, which needs to be filled in manually for each post, and swapped from fetching bodyRaw to the new excerpt field.
This resulted in a massive performance improvement for my blog list page, significantly reducing the payload size and improving list rendering speed.
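The query change itself is tiny; in sketch form (field and query names are approximated from my schema, not exact):

```ts
import { gql } from '@apollo/client';

// Before: fetched the whole bodyRaw just to derive a short excerpt in JS.
// After: fetch only the fields the list page actually renders.
export const ALL_POSTS_QUERY = gql`
  query AllPosts {
    allPost {
      _id
      title
      slug {
        current
      }
      excerpt # short, editor-written summary; replaces fetching bodyRaw
    }
  }
`;
```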
GraphQL caching
I fine-tuned the GraphQL client configuration in order to:
cache responses for 1 hour
reduce repeated network requests
improve TTFB consistency
```ts
export const { getClient, query, PreloadQuery } = registerApolloClient(() => {
  return new ApolloClient({
    cache: new InMemoryCache(),
    link: new HttpLink({
      // this needs to be an absolute url, as relative urls cannot be used in SSR
      uri: 'https://xxxxxx.api.sanity.io/v2023-08-01/graphql/production/default',
      fetchOptions: {
        next: { revalidate: 3600 },
      },
    }),
  });
});
```
5. Image optimization
Images were another major bottleneck, and once again carefully reading Next.js docs on Image Optimization gave me the insights I needed to further optimize my custom next/image wrapper component.
I acted on the following:
Raise priority for above-the-fold images
Adding fetchPriority="high" to images the user sees first loads them with priority. This included the main article hero image and those of the first few post cards in the post list.
Fetch a smaller initial image and apply a blur placeholder:
By fetching a small initial image (metadata.lqip) and combining next/image's blurDataURL and placeholder fields, I achieved a blazing-fast image loading experience
Ensure smaller image sizes
By adding width and height attributes to the next/image source URL, we ensure the module does not create an unnecessary srcset, especially a huge one
Here's how my final <Img /> component looked:
```tsx
export default function Img({ source, alt, height, width, blurDataURL, ...props }: ImgProps) {
  const { urlFor } = useSanityImageUrl();
  const sourceUrl = urlFor(source)?.width(width).auto('format').url();

  if (!sourceUrl) {
    return null;
  }

  return (
    <Image
      src={sourceUrl}
      alt={alt || '(Image)'}
      width={width}
      height={height}
      placeholder={blurDataURL ? 'blur' : 'empty'}
      blurDataURL={blurDataURL}
      {...props}
    />
  );
}
```
Key takeaway:
Optimize images for how they are actually used—prioritize what’s visible, load less upfront, and improve perceived performance with placeholders.
6. Lazy loading non-critical components
Another easy win was deferring components that are not required for the initial render.
In my case, a good candidate was the <PostList /> component. While essential for the blog list page, it doesn’t need to block the first paint — especially if there’s other content (like headings or intro text) that can render immediately.
Instead of loading it upfront, I wrapped it in <Suspense> and provided a skeleton fallback, moving the fetch logic inside it, down from the Page component.
This allowed the rest of the page to render instantly, while the post list loads asynchronously.
```tsx
import { Suspense } from 'react';
import { PostList, PostListSkeleton } from '@/components/blog';

export default function BlogPage() {
  return (
    <Suspense fallback={<PostListSkeleton />}>
      <PostList />
    </Suspense>
  );
}
```
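The other half of the move is that <PostList /> becomes an async server component that owns its own data fetching; a rough sketch (the query helper, import paths and data shape are hypothetical placeholders):

```tsx
// Sketch: async server component that fetches its own data,
// so the parent page renders immediately and this streams in later
import { getClient } from '@/lib/apollo'; // hypothetical path to the Apollo setup
import { ALL_POSTS_QUERY } from '@/lib/queries'; // hypothetical query module

export async function PostList() {
  const { data } = await getClient().query({ query: ALL_POSTS_QUERY });

  return (
    <ul>
      {data.allPost.map((post: { _id: string; title: string }) => (
        <li key={post._id}>{post.title}</li>
      ))}
    </ul>
  );
}
```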
This approach improves both:
actual performance → less work during the initial render
perceived performance → users see immediate feedback via skeletons
In practice, it’s about prioritizing what’s visible first, and deferring everything else.
Key takeaway:
Defer non-critical components to keep the initial load fast—prioritize what’s above the fold and progressively load the rest.
7. Target modern browsers only
I tweaked my package.json to target modern browsers only, in order to reduce legacy polyfills and transpiled code size:
```json
"browserslist": {
  "production": [
    "chrome >= 80",
    "edge >= 80",
    "firefox >= 74",
    "safari >= 13.1",
    "not dead"
  ],
  "development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version"
  ]
}
```
Closing thoughts
Improving website performance is never a single breakthrough action, but the result of many small, deliberate optimizations that compound on each other.
Looking back at what mattered for me, these are the patterns that stood out:
Most performance issues were self-inflicted
Unused JavaScript, oversized payloads, and unnecessary client-side work added up quickly. These are all things dev teams need to monitor closely before code reaches production.
The biggest wins came from simplification
Removing dependencies, reducing data, and leaning on the framework often had more impact than adding new solutions.
Perceived performance matters as much as actual performance
Skeletons, image placeholders, and progressive rendering made the app feel fast.
Modern frameworks already solve a lot for you
Features like Server Components, next/font, and next/image are designed to handle common performance pitfalls — using them properly is often enough.
It's worth remembering that, at the end of the day, performance isn’t about chasing a perfect Lighthouse score, but about closely monitoring what your app needs and taking cumulative action on different fronts.