Instructions: How to get a 100/100 Google PageSpeed score

What is Google PageSpeed?

Google PageSpeed is an online tool that measures the performance of your website against a wide variety of criteria. It now distinguishes between mobile and desktop speed, and it also tests the site for "usability". Besides the better user experience, a good PageSpeed score has a positive effect on your page's ranking in Google search, so improving performance pays off twice.

Preface

It is important to say right from the start that optimizing the score to 100/100 is just a proof of concept, an experiment. It is good and important to optimize for a high score, but the rules for reaching 100/100 are so strict and sometimes arbitrary that the full score cannot and should not be the goal. Some pages simply cannot be optimized to the full score without severely restricting their functionality. Nevertheless, you can learn a lot about page speed optimization from this experiment.

In my experience, an average page that is not explicitly optimized for speed but contains no serious errors will score between 60 and 75. In general, I would consider a score in the mid-90s the goal for most projects.

At first I only optimized the home page of my blog to 100/100. My blog runs on the Kirby CMS, so some optimizations relate directly to this system. It is therefore possible that not every page will reach a score of 100 with the techniques described here.

I have organized most of the points the same way Google names the sources of error in its analysis, although some techniques overlap in content.

The starting position and the existing sources of error are of course different for every page, so you may still run into problems that I cannot cover here simply because I did not have them before optimizing.

The hoster

If you generally value the performance of your site, the basic requirement is of course not to host it with the cheapest shared host. For advanced users, I recommend Uberspace. If Google notices during the analysis that the server responds slowly, it flags this as "Reduce server response time".

Optimize images

To get a better sense of how image size affects the score, I tested three photos that were far too large and scaled them down with CSS. With just those three photos, my score dropped straight to 33/100. This shows how extremely important image optimization is for page speed. The basic rule: store every image only in the size in which it is actually used, i.e. do not take oversized images and scale them down with CSS. In Kirby, I use the built-in thumbnail function for almost all images. On top of that, one of the many image optimization tools can be used to losslessly shave a little more off every image. Retina images make things a bit more complicated: here you have to make sure you deliver the images in the correct sizes. But that is a topic of its own, which I will cover separately.
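As a sketch of the Kirby side (assuming Kirby 3's file API; "cover.jpg" and the 800-pixel width are placeholders):

    <?php // Serve the image at its actual display size instead of
          // shipping the original and scaling it down with CSS. ?>
    <?php if ($image = $page->image('cover.jpg')): ?>
      <img src="<?= $image->resize(800)->url() ?>" alt="<?= $image->alt() ?>">
    <?php endif ?>

Kirby generates and caches the downscaled version on the server, so the browser never has to download more pixels than it displays.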

Use browser caching

Browser caching in and of itself is very easy to set up if the server supports it. On an Apache server, the caching rules are defined in the .htaccess file. They define how long the browser should keep downloaded files. This does not help on the first visit to the site, but it does on every subsequent one. For static files that do not change frequently, the time can therefore be raised to a month, for example.
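A minimal sketch of such a rule, assuming Apache's mod_expires module is available and images, CSS, and JavaScript as the static types:

    <IfModule mod_expires.c>
      ExpiresActive On
      # Let browsers cache static assets for one month
      ExpiresByType image/jpeg "access plus 1 month"
      ExpiresByType image/png  "access plus 1 month"
      ExpiresByType text/css   "access plus 1 month"
      ExpiresByType application/javascript "access plus 1 month"
    </IfModule>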

With this rule, all local caching problems were eliminated for me. If you use static files in a different format, you can add them to the rule.

Minify CSS

Minifying CSS files has become standard practice by now. Preprocessors or corresponding tools transform the CSS files so that all comments, line breaks and spaces are removed to save space. For development, of course, you keep working with a non-minified, documented version alongside the minified live version. In addition, individual CSS files should be combined into one file to reduce the number of HTTP requests. This point is quite easy to tick off.
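For example, combining and minifying with the clean-css-cli npm package (one tool among many; the file names are assumptions):

    # Concatenate and minify two stylesheets into one live file
    npx cleancss -o assets/css/styles.min.css assets/css/base.css assets/css/layout.css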

Minify JavaScript

This point is also fairly self-explanatory. All JavaScript resources should be minified with a preprocessor or a corresponding tool. In addition, the JavaScript files must of course be included before the closing </body> tag and not in the <head>.
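In the template this simply looks like the following (the file name is an assumption):

      <!-- ... page content ... -->
      <script src="/assets/js/scripts.min.js"></script>
    </body>
    </html>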

Eliminate render-blocking resources "above the fold"

This is where things get exciting and sometimes complicated. "Above the fold" is the area of the page that is rendered before you start scrolling, i.e. basically everything that is visible in a given viewport when the page loads. The problem is that resources included in the <head> block the rendering of this area until they have been fully loaded. This concerns CSS files linked in the <head> as well as JavaScript resources that have to sit in the <head> because they are needed right away. Web fonts served by Google or Typekit, for example, fall into this category. For web fonts, there is a simple fix: switch from the standard embed to an asynchronous one, which does not block rendering. Solutions exist for both Google Webfonts and Typekit. Unfortunately, the text flashes on first load, because it is first rendered in a fallback font and swapped as soon as the asynchronous load completes.
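For Google Webfonts, for example, the asynchronous route goes through the Web Font Loader; a minimal sketch based on its documented snippet (the font family is an assumption):

    <script>
      WebFontConfig = {
        google: { families: ['Open Sans:400,700'] }
      };
      (function(d) {
        // Inject webfont.js asynchronously so it does not block rendering
        var wf = d.createElement('script'), s = d.scripts[0];
        wf.src = 'https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js';
        wf.async = true;
        s.parentNode.insertBefore(wf, s);
      })(document);
    </script>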
It gets more complicated with the CSS files themselves. Google suggests splitting your CSS into a "critical" part, which styles everything visible in the "above the fold" area, and a "rest". The "critical" CSS is then inlined in a <style> tag in the <head>, so that no request is triggered and the area can be rendered immediately. The rest of the CSS is loaded via an external file, which is linked not in the <head> but right before the closing </body> tag. According to the HTML5 specification this is allowed, even though it is almost never done.
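Schematically, the resulting document looks like this (the file name and rules are placeholders):

    <head>
      <style>
        /* "critical" CSS: only the rules for the above-the-fold area */
        body { margin: 0; font: 16px/1.5 sans-serif; }
      </style>
    </head>
    <body>
      <!-- ... page content ... -->
      <link rel="stylesheet" href="/assets/css/rest.min.css">
    </body>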
As you can imagine, extracting the critical CSS is a tedious and inflexible job. It may be feasible for some projects, but not for many. I settled on a middle ground on my site: since my entire CSS file is only 13 KB minified, I decided to inline it completely. In Kirby, I do that via a plug-in that provides a "critical" function; a minified version of a stored SCSS file is then automatically inlined. All other CSS resources (currently only the CSS file for my lightbox) are included as described above, before the closing </body> tag. For larger and more complex projects, I would definitely accept the minimal loss of speed and include the CSS resources the classic way.

Enable compression

Compression does the following: data sent from the server to the browser is automatically compressed by the server software and unpacked again by the browser, in real time. In total, this can save a few kilobytes of transferred data. As long as the server supports compression and lets you configure it, there are no disadvantages. On Apache servers, compression for static files can likewise be configured in the .htaccess.
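A sketch using mod_deflate, covering the most common text-based content types:

    <IfModule mod_deflate.c>
      # Compress text-based responses before sending them to the browser
      AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
    </IfModule>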

Compressing dynamic PHP scripts is usually a bit more complicated. I have already written an article about the setup on Uberspace here. There you will also find a link to an online tool for testing whether the compression works.

Avoid external resources or: The last points are the most difficult

If you have eliminated all errors up to this point, you should already score in the high 90s. Getting the last few points is where it gets tricky again. The problem is that almost every modern site embeds resources from external servers. In my case, these are Google Analytics on the one hand and YouTube video embeds on the other; other common examples would be Google AdSense or social media integrations such as Facebook. Since you cannot influence the cache duration of these external resources, and it is usually shorter than Google would like (the Google Analytics JavaScript library, for example, ships with a cache header of 2 hours), Google PageSpeed deducts points here. Google itself says that, at least in the case of Google Analytics, nothing will change. That is somewhat contradictory, but at least consistent in that they do not rate their own products differently in the speed test.

Google Analytics (analytics.js)

As already described, Google complains about the cache time of the Analytics script. The only way to resolve this warning is to host the file yourself, since you can then define the cache duration yourself. To do this, you simply download the embedded JavaScript file and host it on your own server. The URL is then adjusted directly in the script snippet.
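Sketched with the standard analytics.js snippet (the local path and the property ID are placeholders):

    <script>
      // Standard analytics.js loader, pointing at the self-hosted copy
      // instead of //www.google-analytics.com/analytics.js:
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','/assets/js/analytics.js','ga');

      ga('create', 'UA-XXXXX-Y', 'auto'); // your property ID
      ga('send', 'pageview');
    </script>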

The unpleasant part is, of course, that Google updates the code of this file from time to time, a few times a year. With this setup, you naturally do not receive those updates. The only remedy is a PHP script, run automatically and regularly via a cron job, that compares the self-hosted file with Google's and updates it if necessary. This workaround is unavoidable if you want the full 100 points, but in real-world operation I would skip this adjustment as well.
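A minimal sketch of such an update script (the paths are assumptions):

    <?php
    // update-analytics.php: keep the self-hosted analytics.js in sync with Google's.
    // Run regularly via cron, e.g.: 0 4 * * * php /path/to/update-analytics.php
    $remote = file_get_contents('https://www.google-analytics.com/analytics.js');
    $local  = __DIR__ . '/assets/js/analytics.js'; // assumed location

    if ($remote !== false && $remote !== @file_get_contents($local)) {
        file_put_contents($local, $remote); // overwrite only when Google changed the code
    }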

YouTube integrations

First of all, as I already described in this blog post, I adapted the embedding of YouTube videos so that the embed data is only loaded after clicking on the video. This saves data on the one hand and, on the other, avoids pulling in JavaScript files with poor cache times. My remaining problem was that I initially used the thumbnails provided by YouTube, which live on a YouTube CDN. Their cache time is also set too short, so I had to adapt the script to download the thumbnails from the YouTube CDN and store them on my own server. That way I get by entirely without external resources (until someone actually plays a video, of course, but that is not the point here).
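The thumbnail step, sketched in PHP (the URL pattern is YouTube's public thumbnail endpoint; the paths are assumptions):

    <?php
    // Download a video's thumbnail from the YouTube CDN once and keep it locally,
    // so the page itself references no external resource.
    $videoId   = 'dQw4w9WgXcQ'; // example video ID
    $localPath = __DIR__ . '/thumbs/' . $videoId . '.jpg'; // assumed location

    if (!file_exists($localPath)) {
        $thumb = file_get_contents('https://img.youtube.com/vi/' . $videoId . '/hqdefault.jpg');
        if ($thumb !== false) {
            file_put_contents($localPath, $thumb);
        }
    }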

There is a similar approach for Facebook or Google+ share buttons, where the external resources are only loaded after the user clicks one of the buttons.

Conclusion

With these adjustments I achieved a PageSpeed score of 100/100 for mobile and desktop, plus 100 points for the user experience on mobile devices, which has nothing to do with performance, only with good responsive design.
