# Performance monitoring
We are monitoring the overall performance of Purple websites with both synthetic measurement of Core Web Vitals and real-user monitoring results provided by CrUX, which covers the same Core Web Vitals metrics. Each of your websites will be included in our monitoring to ensure the correct action is taken based on any impact that changes in software or configuration may have. While setting up your website, you'll mostly rely on synthetic data.

Below you'll find a summary of the different measures, including their challenges.

## Core Web Vitals

Core Web Vitals are a set of user-centric performance metrics introduced by Google to assess and optimize the user experience on a webpage. These metrics focus on three critical aspects of how users perceive their interaction with a site:

- **Largest Contentful Paint (LCP)** measures loading performance, specifically the time it takes for the largest content element (text or image) to render. A good LCP score is 2.5 seconds or less.
- **First Input Delay (FID)** measures interactivity, particularly the time it takes for the page to respond to a user's first interaction (like clicking a button). A good FID score is under 100 milliseconds.
- **Cumulative Layout Shift (CLS)** measures visual stability by tracking unexpected layout shifts, which can frustrate users as they try to interact with a site. A good CLS score is less than 0.1.

Together, these metrics are essential in evaluating how quickly a website loads, how responsive it is, and how visually stable its elements are. They contribute directly to user experience, search engine rankings, and conversion rates.

## CrUX

The Chrome User Experience Report (CrUX) is a public dataset that Google collects from real users who opt in to share their browsing data. This data is gathered from users of the Chrome browser and includes information on how websites perform in real-world conditions, across various network speeds, devices, and locations. CrUX utilizes the data points of the Core Web Vitals.

## Synthetic vs. real-user monitoring

When evaluating website performance, there are two main approaches: synthetic performance measurement and real-user monitoring (RUM), such as CrUX. Both methods offer valuable insights but come with distinct differences and challenges.

### Synthetic performance measurement

Synthetic monitoring simulates user interactions with a website by running a series of predefined tests in controlled environments. These tests are often executed in specific locations, using defined network conditions, devices, and browsers. Tools like Google Lighthouse, WebPageTest, and DebugBear typically use synthetic measurement for their reports; a minimal scripted example follows the table below.

| Advantages | Challenges |
| --- | --- |
| **Controlled environment:** provides consistency, making it easier to reproduce results and debug performance issues. | **Limited realism:** since synthetic tests run under pre-set conditions, they don't capture the diversity of real-world scenarios. Variations in network conditions, user device specifications, and geographic location aren't fully replicated. |
| **Granular insights:** developers can set specific conditions (e.g., test on a 3G network, on a mobile device), offering detailed insights into performance under various setups. | **No human behavior:** synthetic tests can't account for real-world behavior like users interacting with multiple tabs, multitasking, or using slow and congested networks. |
| **Customization:** synthetic tests allow the use of custom test scripts, simulating specific user journeys on the site. | **Idealized conditions:** synthetic tests are often run on clean, high-performance systems, which don't reflect the real-world devices that actual users are likely to use. |
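As an illustration, a synthetic check like this can be scripted with the Lighthouse Node module. The following is a minimal sketch, assuming the `lighthouse` and `chrome-launcher` npm packages are installed and using a placeholder URL:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome instance for Lighthouse to drive.
const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});

// Run only the performance category against a placeholder URL.
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  output: 'json',
  onlyCategories: ['performance'],
});

// Lab (synthetic) values for two Core Web Vitals under the run's fixed conditions.
const audits = result?.lhr.audits;
console.log('LCP (ms):', audits?.['largest-contentful-paint'].numericValue);
console.log('CLS:', audits?.['cumulative-layout-shift'].numericValue);

await chrome.kill();
```

Because the runner fixes the network and device conditions, repeated runs are comparable with each other, even though they won't match field data exactly.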
### Real-user monitoring

CrUX uses real-user monitoring (RUM) to collect performance data from real visitors, offering a more accurate representation of how a site performs under various real-world conditions. Two sketches follow the table below: collecting field metrics in the page itself, and querying the aggregated data CrUX publishes.

| Advantages | Challenges |
| --- | --- |
| **Real-world data:** CrUX captures performance metrics from actual users, providing insights into how a website performs across different devices, networks, and locations. | **Less control:** unlike synthetic testing, developers have no control over the conditions under which data is collected. This can make it harder to pinpoint specific issues or recreate scenarios for debugging. |
| **Diverse user conditions:** the data reflects the wide variety of real-world conditions, including slow connections, lower-powered devices, and varying user interactions. | **No granular testing:** since RUM focuses on aggregate data, developers don't have the same level of control to run specific, repeatable tests or simulate ideal performance conditions. |
| **Accurate user experience:** data is captured directly from the browser, so it gives a true picture of user-perceived performance. This helps to understand how a site performs in real-world scenarios that synthetic testing can't replicate. | **Data delays:** CrUX reports are typically delayed, as it takes time to gather and process real-user data, meaning it's not always the best option for immediate feedback. |
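In-page RUM collection of the same metrics is commonly done with Google's `web-vitals` library. The sketch below assumes version 3 of that package (which still exposes the FID helper); the `/analytics` endpoint is a hypothetical placeholder for whatever collection service you use:

```ts
import {onCLS, onFID, onLCP, type Metric} from 'web-vitals';

// Report each finalized metric to a (hypothetical) collection endpoint.
function sendToAnalytics(metric: Metric) {
  const body = JSON.stringify({
    name: metric.name,  // 'CLS' | 'FID' | 'LCP'
    value: metric.value,
    id: metric.id,      // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unloads; fall back to fetch if it is rejected.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', {method: 'POST', body, keepalive: true});
  }
}

onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onLCP(sendToAnalytics);
```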
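To read the aggregated field data that CrUX itself publishes, you can query the CrUX API. This is a minimal sketch, assuming you have a Google API key and using a placeholder origin:

```ts
// Query the CrUX API for aggregated field data on an origin.
const ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function queryCrux(origin: string, apiKey: string) {
  const res = await fetch(`${ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({origin, formFactor: 'PHONE'}),
  });
  if (!res.ok) throw new Error(`CrUX API responded with ${res.status}`);
  const {record} = await res.json();

  // 75th-percentile values, the threshold Google uses to assess Core Web Vitals.
  console.log('LCP p75 (ms):', record.metrics.largest_contentful_paint?.percentiles.p75);
  console.log('CLS p75:', record.metrics.cumulative_layout_shift?.percentiles.p75);
}

queryCrux('https://example.com', 'YOUR_API_KEY');
```

Note the data-delay caveat from the table above: CrUX aggregates over a rolling 28-day window, so the effect of a change takes time to show up here.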