Making performance a priority isn't just good for users; it can also be good for business. While the best practices in this collection focus primarily on optimizing your Google Publisher Tag (GPT) integration, many other factors contribute to the overall performance of a given page. Whenever you introduce changes, it's important to evaluate the impact of those changes on all aspects of your site's performance.
Measure page performance
To understand how a change impacts the performance of your site, you first need to establish a baseline to compare against. The best way to do this is to create a performance budget that defines an ideal baseline, which your site may or may not currently meet. If you're interested in maintaining a fixed level of performance, however, you can use your site's current performance metrics as a baseline.
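As an illustration, here's a minimal sketch of a performance budget expressed using Lighthouse's budget definitions (the settings.budgets option of a Lighthouse config). The paths and thresholds are placeholders to adapt to your own goals, not recommendations.

```js
// A sketch of a performance budget, assuming Lighthouse's budget support
// (settings.budgets). All paths and thresholds are illustrative placeholders.
module.exports = {
  extends: 'lighthouse:default',
  settings: {
    budgets: [{
      path: '/*',  // apply this budget to every page
      timings: [
        {metric: 'first-contentful-paint', budget: 2000},  // milliseconds
        {metric: 'interactive', budget: 5000},              // milliseconds
      ],
      resourceSizes: [
        {resourceType: 'script', budget: 300},       // kilobytes
        {resourceType: 'third-party', budget: 200},  // kilobytes
      ],
    }],
  },
};
```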
To start measuring performance, a combination of the following approaches is recommended:
- Synthetic monitoring: You can use tools like Lighthouse and Publisher Ads Audits for Lighthouse to measure page performance in a lab setting. This type of measurement doesn't require end-user interaction, so it's well suited for use in automated tests and can be used to validate the performance of changes before releasing them to users.
- Real user monitoring (RUM): You can use tools like Google Analytics and PageSpeed Insights to gather real-world performance data directly from users. This type of measurement is based on end-user interactions, so it's useful for identifying last-mile performance issues that can't easily be uncovered by synthetic tests (see the collection sketch after this list).
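As a concrete illustration of RUM collection, the following sketch uses the open-source web-vitals library (not part of GPT) to report field metrics to an analytics endpoint. The '/analytics' URL is a placeholder, and the exact exports may vary by library version.

```js
// Minimal RUM collection sketch using the web-vitals library (an assumption,
// not a GPT API). Exact exports (onCLS, onFCP, onFID) depend on the version.
import {onCLS, onFCP, onFID} from 'web-vitals';

// '/analytics' is a placeholder endpoint on your own backend.
function sendToAnalytics({name, value, id}) {
  const body = JSON.stringify({name, value, id});
  // Prefer sendBeacon so the request survives page unload; fall back to fetch.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', {body, method: 'POST', keepalive: true});
  }
}

onCLS(sendToAnalytics);
onFCP(sendToAnalytics);
onFID(sendToAnalytics);
```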
Be sure to take measurements and compare against your baseline regularly. This will give you a good indication of whether your site's performance is trending in the right direction over time.
Choose what to measure
When it comes to performance, there's no single metric that can tell you everything you need to know about how your site is doing. You'll need to look at a variety of metrics covering various aspects of page performance to get a full picture. Some key performance areas and suggested metrics are listed in the table below.
| Performance area | Measures | Suggested metrics |
| --- | --- | --- |
| Perceived load speed | How quickly a page is able to load and render all UI elements. | First contentful paint (FCP) |
| Page load responsiveness | How quickly a page becomes responsive after the initial load. | First input delay (FID) |
| Visual stability | How much UI elements shift and whether these shifts interfere with user interaction. See Minimize layout shift for more information. | Cumulative layout shift (CLS) |
Aside from page performance, you may also want to measure ad-specific business metrics. Information such as impressions, clicks, and viewability on a slot-by-slot basis can be obtained from Google Ad Manager reporting.
Test changes
Once you've defined your performance metrics and started measuring them regularly, you can begin using this data to evaluate the performance impact of changes to your site as they're made. You do this by comparing metrics measured after a change to those measured before it (and/or to the baseline you established earlier). This sort of testing will allow you to detect and address performance issues before they become a major problem for your business or users.
Automated testing
You can measure metrics that don't depend on user interaction through synthetic tests. Run these tests as frequently as possible during the development process to understand how unreleased changes will affect performance. Proactive testing of this kind can help uncover performance issues before changes are ever released to users.
One way to accomplish this is by making synthetic tests part of a continuous integration (CI) workflow, where tests run automatically whenever a change is made. You can use Lighthouse CI to integrate synthetic performance testing into many CI workflows.
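As a rough sketch, a Lighthouse CI configuration along these lines could collect lab metrics on each change and fail the build on regressions. The URL, run count, and thresholds below are placeholders for your own setup.

```js
// lighthouserc.js -- a minimal Lighthouse CI sketch; URLs, run counts, and
// thresholds are placeholders to adapt to your own site and budget.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:8080/'],  // pages to audit in CI
      numberOfRuns: 3,                  // multiple runs reduce noise
    },
    assert: {
      assertions: {
        // Fail the build if lab metrics regress past these example thresholds.
        'first-contentful-paint': ['error', {maxNumericValue: 2000}],
        'cumulative-layout-shift': ['error', {maxNumericValue: 0.1}],
      },
    },
    upload: {
      target: 'temporary-public-storage',  // or your own LHCI server
    },
  },
};
```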
A/B testing
Metrics that depend on user interaction can't be fully tested until a change is actually released to users. This can be risky if you're unsure of how the change will behave. One technique for mitigating that risk is A/B testing.
During an A/B test, different variants of a page are served to users at random. You can use this technique to serve a modified version of your page to a small percentage of overall traffic, while most continue to be served the unmodified page. Combined with RUM, you can then evaluate the relative performance of the two groups to determine which performs better—without putting 100% of traffic at risk.
Another benefit of A/B tests is that they allow you to measure the effects of changes more accurately. For many sites, it can be difficult to determine whether a small difference in performance is due to a recent change or to normal variation in traffic. Because the experimental group of an A/B test represents a fixed, randomly assigned percentage of overall traffic, normal traffic variations affect both groups proportionally. Differences observed between the two groups can therefore be attributed to the change being tested with greater confidence.
Tools like Optimizely and Google Optimize can help with setting up and running A/B tests. Be aware, however, that tag-based A/B testing (the default configuration for these tools) may itself negatively impact performance and provide misleading results. For this reason, server-side integration is strongly recommended.
A/B test results
To measure the impact of a change using an A/B test, you gather metrics from both the control and experimental groups and compare them against one another. To do this, you need a way to tell which group a given portion of traffic belongs to.
For page performance metrics, it's often enough to include a simple identifier on each page indicating whether the control or experimental version was served. This identifier can be anything you'd like, as long as it's something you're able to parse and correlate metrics to. If you're using a pre-built testing framework, this will usually be handled for you automatically.
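As a hypothetical sketch, the assigned group might be rendered into the page by the server (for example, as a data attribute) and attached to each RUM beacon. The attribute name and '/analytics' endpoint below are illustrative only.

```js
// Hypothetical sketch: the server renders the assigned group into the page,
// for example <html data-experiment-group="b">, and each RUM beacon carries it.
const group = document.documentElement.dataset.experimentGroup || 'a';

// Include the group with every performance beacon so metrics can later be
// segmented into control (a) vs. experimental (b) results during analysis.
function sendToAnalytics({name, value, id}) {
  navigator.sendBeacon('/analytics', JSON.stringify({name, value, id, group}));
}
```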
For ad-specific business metrics, you can use GPT's key-value targeting feature to differentiate ad requests from the control and experimental groups:
```js
// On control group (A) pages, set page-level targeting to:
googletag.pubads().setTargeting('your-test-id', 'a');

// On experimental group (B) pages, set page-level targeting to:
googletag.pubads().setTargeting('your-test-id', 'b');
```
These key-values can then be referenced when running Google Ad Manager reports to filter results by group.