First, a note on terminology. A number of people who commented on the YouTube video were irritated because they expected the topic to be how pages adapt to different screen sizes on different devices. The wording can be a little confusing. The term “responsive web design” does indeed refer to building sites that display properly on a range of screens and screen sizes. What Sullivan and Mocny mean when they talk about web responsiveness is the speed with which a page responds to user input.

In the video, starting at the 45-second mark, Michal Mocny gives a brilliant real-life illustration of responsiveness, comparing the way his new car’s cruise control responds to his input with the way his old car’s did. The video illustrates why responsiveness is so important for user experience (UX).

You’re probably thinking, “Wait a minute! There’s already a responsiveness metric. Why does this matter?” You’re right, Google’s Core Web Vitals already include a metric for responsiveness. It’s called FID, which stands for First Input Delay. FID measures the delay between a user’s first interaction with a page (a click, a tap, a key press) and the moment the browser can begin processing the event handlers for that interaction. But FID doesn’t look any further than that first interaction, which leaves a lot of the UX unevaluated.
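If you want to see the number for yourself, here’s a minimal sketch of how you might log FID in the field. It assumes Google’s open-source web-vitals JavaScript library (the v3 onFID API); nothing here comes from the video itself.

```ts
// Minimal sketch: logging FID with the web-vitals library.
// Assumes `npm install web-vitals` (v3 API).
import { onFID } from 'web-vitals';

onFID((metric) => {
  // metric.value is the delay in milliseconds between the user's first
  // interaction and the moment the browser could start running the
  // matching event handlers. It fires at most once per page load.
  console.log(`FID: ${metric.value}ms (${metric.rating})`);
});
```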

And, as Mocny points out, FID has some blind spots. The engineers on the Chrome Web Platform Team developed a new metric called INP, which stands for Interaction to Next Paint. Where FID stops at the first interaction, INP observes the latency of the clicks, taps, and key presses across a page’s entire lifetime and reports a value at or near the worst of them. What INP gives you that FID doesn’t is a fuller look at the lifetime UX for a user on a site. It’s more analogous to the CLS (Cumulative Layout Shift) metric that’s part of Core Web Vitals.
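The same web-vitals library exposes INP, so here’s a companion sketch under the same assumptions (v3 API; the reportAllChanges option is a real library flag, but the logging is my own illustration):

```ts
// Minimal sketch: observing INP with the web-vitals library (v3 API).
import { onINP } from 'web-vitals';

// Unlike FID, the running INP value can change over the page's lifetime.
// By default the callback fires when the page is backgrounded; with
// reportAllChanges: true it also fires whenever a slower interaction
// pushes the value up.
onINP((metric) => {
  console.log(`INP so far: ${metric.value}ms (${metric.rating})`);
}, { reportAllChanges: true });
```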

At the moment, INP isn’t part of Core Web Vitals, so a lousy score won’t necessarily impact your page ranking. It’s what Google calls an “experimental field metric.” What the metric will tell you, though, is how your site performs in terms of UX.

What’s interesting to me about INP, and this is elucidated in Mocny’s cruise control example, is that a good INP score doesn’t necessarily mean your site is working any faster. What INP tests is a factor that is specifically related to UX; it’s user-oriented. Say you’re shopping on a site and you click to add an item: it may take a while for the system to actually add the item to your cart. What INP is looking for is a prompt indicator to the user, like a change in the color of a button or a simple animation, that lets users know their input has been received.
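To make that concrete, here’s a hypothetical add-to-cart handler. The names (addToCart, #add-button) and timings are my own illustration, not anything from the video; the pattern, painting cheap feedback first and yielding before the slow work, is one commonly recommended way to improve INP.

```ts
// Hypothetical add-to-cart handler. The key idea for a good INP:
// give the user visible feedback first, then do the slow work.
const button = document.querySelector<HTMLButtonElement>('#add-button')!;

button.addEventListener('click', () => {
  // 1. Immediate, cheap visual feedback: this paints right away.
  button.disabled = true;
  button.textContent = 'Adding…';

  // 2. Yield to the browser so the next paint isn't blocked, then run
  //    the expensive work (network request, cart recalculation, etc.).
  setTimeout(async () => {
    await addToCart();
    button.textContent = 'Added ✓';
  }, 0);
});

// Stub so the sketch is self-contained; a real app would call its
// cart API here. The 500ms delay just simulates backend latency.
async function addToCart(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 500));
}
```

Nothing about the cart got faster; the user simply sees a response within a frame or two. That gap between raw speed and perceived responsiveness is exactly what INP is designed to capture.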

And this fact, that it’s the UX being measured rather than the raw speed of the website, brings me to my larger point. If you assume, as I do, that the Google algorithm isn’t supposed to produce arbitrary rankings, then the algorithm should return results that are meaningful. It should be rooted in UX, so that the highest-ranking pages are the ones most likely to contain the information users most want or will find most useful.

Let me be clear: I am in no way criticizing the brilliant Chrome engineers like Annie Sullivan and Michal Mocny. They have clearly thought deeply about how they can improve the metrics they use to evaluate a website’s UX. They realized, in this instance, that FID didn’t cut it. They needed INP to dive deeper into UX.

The big question is: Are your metrics measuring what really matters in terms of UX? Let’s take SEO, for example, a subject near and dear to my heart. I can stuff every relevant keyword known to humankind into a website, but if the site’s not useful, that will and should hurt the site’s Google ranking. Good SEO, like good web design, isn’t just about beating the Google algo game. It’s about building sites for our clients that satisfy their clients. It’s about content that’s organic and genuinely useful to real people looking for information. It’s not about the bots. Or at least it’s not just about the bots.

Metrics and data analysis are endlessly fascinating. They’re such powerful tools, when used properly. Part of using metrics and data properly is making sure you’re really measuring what you need to measure. FID sounded like a great metric, but it wasn’t rooted in the entire UX. It measured one narrow slice of site performance and didn’t take into account things that really matter to real users.

The evolution of the Google algorithm and Core Web Vitals is something I eat, sleep, and breathe. The addition of INP as an experimental metric is, I think, a move in the right direction, one that’s user-centered.