When styling websites or PWAs with CSS, you should analyze how CSS resources will affect performance. In this tutorial, we’ll use various tools and related techniques to help build a better PWA by focusing on CSS performance. Specifically, we’ll remove the unused CSS, inline the critical path CSS, and minify the resulting code.

The techniques can also be used to improve the performance of general websites and apps. We’ll be focusing on CSS performance for PWAs since they should be fast and feel native on user devices.

Progressive web apps (PWAs) are web experiences that bring the best of both worlds: native mobile apps (installable from a store) and web apps (reachable from public URLs). Users can start using the application right away from their web browser, without waiting for a download, installing anything, or needing extra space on their device.

Service workers and caching allow the app to work offline and when network connectivity is poor. Over time, the app could become faster as more assets are cached locally. PWAs can also be installed as an icon on the home screen and launched full-screen with an initial splash screen.

The Demo PWA to Audit

Before learning how to audit a PWA for any CSS issues, you can get the code of a simple website with PWA features from this GitHub repository. The PWA uses an unminified version of Bootstrap v4 for CSS styling and displays a set of posts fetched from a statically generated JSON API. You can also use the hosted version of this demo, since learning how to build a PWA is beyond the scope of this tutorial.

PWAs are simply web apps with additional features, including these elements:

  • A manifest file. A JSON file that provides the browser with information about the web application, such as its name, description, icons, start URL and display mode (a minimal example follows this list).
  • A service worker. A JavaScript file that caches the application shell (the minimum HTML, CSS, and JavaScript required for displaying the user interface) and proxies all network requests.
  • HTTPS. PWAs must be served from a secure origin.
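
As a concrete illustration, here's roughly what a minimal manifest looks like. All values are made up for the example rather than taken from the demo app:

{
  "name": "Unoptimized PWA Demo",
  "short_name": "Demo",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#563d7c",
  "icons": [
    { "src": "icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}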

Here’s a screen shot of the application shell:

A screen shot of the application with data:

Auditing with Google’s Lighthouse

Lighthouse is an open-source auditing tool developed by Google. It can be used to improve the performance, accessibility and SEO of websites and progressive web apps.

Lighthouse can be accessed from the Audits panel in Chrome DevTools, programmatically as a Node.js module, and as a CLI tool. It takes a URL and runs a series of audits to generate a report with optimization suggestions.
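
For instance, installing and running the CLI against the demo app takes two commands (the --view flag opens the generated HTML report in your browser):

npm install -g lighthouse
lighthouse https://www.techiediaries.com/unoptimizedpwa/ --view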

You can apply different techniques either manually or using tools. This article describes how such tools can be used to remove redundant styles, extract the above-the-fold critical CSS, load the remaining CSS with JavaScript, and minify the resulting code.
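
To give a flavor of the "load the remaining CSS with JavaScript" step: a common pattern is to inline the critical rules and load the full stylesheet asynchronously via rel="preload". Here, styles.css is a placeholder name:

<style>
  /* critical above-the-fold rules, inlined */
</style>
<link rel="preload" href="styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>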

Launch Chrome, visit the PWA address https://www.techiediaries.com/unoptimizedpwa/ and open Developer Tools (CTRL-Shift-I). From the Developer Tools, click the Audits panel:

Next, click on Perform an audit…. A dialog will prompt you for the types of audit you want to perform. Keep all types selected and click the Run audit button.

Wait for Lighthouse to complete the auditing process and generate a report:

The scores are calculated in a simulated environment. You’re unlikely to get the same results on your machine because they depend on hardware and network capabilities.

From the report, you can see a timeline that visually shows how the page loads. First Meaningful Paint, First Interactive and Consistently Interactive are key time points that describe how fast the page loaded. Our goal is to optimize these metrics according to the Critical Rendering Path.

The post CSS Optimization Tools for Boosting PWA Performance appeared first on SitePoint.


We've previously shown you how to get a working local installation of Apache on your Windows PC. In this article, we'll show how to install PHP 5 as an Apache 2.2 module.

Why PHP?

PHP remains the most widespread and popular server-side programming language on the web. It is installed by most web hosts, has a simple learning curve, close ties with the MySQL database, and an excellent collection of libraries to cut your development time. PHP may not be perfect, but it should certainly be considered for your next web application. Both Yahoo and Facebook use it with great success.

Why Install PHP Locally?

Installing PHP on your development PC allows you to safely create and test a web application without affecting the data or systems on your live website. This article describes PHP installation as a module within the Windows version of Apache 2.2. Mac and Linux users will probably have it installed already.

All-in-One packages

There are some excellent all-in-one Windows distributions that contain Apache, PHP, MySQL and other applications in a single installation file, e.g. XAMPP (including a Mac version), WampServer and Web.Developer. There is nothing wrong with using these packages, although manually installing Apache and PHP will help you learn more about the system and its configuration options.

The PHP Installer

Although an installer is available from php.net, I would recommend the manual installation if you already have a web server configured and running.
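
To give you an idea of what the manual route involves: after extracting the PHP zip (to C:\php, say), it typically comes down to adding a few lines like these to Apache's httpd.conf (the paths are assumptions you'd adapt to your own setup):

# Load PHP 5 as an Apache 2.2 module
LoadModule php5_module "C:/php/php5apache2_2.dll"
AddType application/x-httpd-php .php
# Tell PHP where to find php.ini
PHPIniDir "C:/php"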

The post How to Install PHP on Windows appeared first on SitePoint.


On top of the infrastructure of the internet --- the physical network layers --- sits TCP/IP: the Internet Protocol at the internet layer, and TCP at the transport layer. This stack is the fabric underlying all or most of our internet communications.

A higher-level layer that we use on top of this is the application layer. On this level, various applications use different protocols to connect and transfer information. We have SMTP, POP3, and IMAP for sending and receiving emails, IRC and XMPP for chatting, SSH for remote server access, and so on.

The best-known protocol among these, which has become synonymous with the use of the internet, is HTTP (hypertext transfer protocol). This is what we use to access websites every day. It was devised by Tim Berners-Lee at CERN as early as 1989. The specification for version 1.0 was released in 1996 (RFC 1945), and 1.1 in 1999.

The HTTP specification is maintained by the IETF's HTTP Working Group; the W3C keeps an overview of it at http://www.w3.org/standards/techs/HTTP.

The first generation of this protocol --- versions 1 and 1.1 --- dominated the web up until 2015, when HTTP/2 was released and the industry --- web servers and browser vendors --- started adopting it.

HTTP/1

HTTP is a stateless protocol, based on a request-response structure, which means that the client makes requests to the server, and these requests are atomic: any single request isn't aware of the previous requests. (This is why we use cookies --- to bridge the gap between multiple requests in one user session, for example, to be able to serve an authenticated version of the website to logged in users.)

Transfers are typically initiated by the client --- meaning the user's browser --- and the servers usually just respond to these requests.

We could say that the current state of HTTP is pretty "dumb", or rather low-level, with lots of "help" that needs to be given to the browsers and to the servers on how to communicate efficiently. Changes in this arena are not simple to introduce, since so many existing websites depend on backward compatibility. Anything done to improve the protocol has to happen in a seamless way that won't disrupt the internet.

In many ways, this strict request-response, atomic, synchronous model has become a bottleneck, and progress has mostly taken the form of hacks, often spearheaded by industry leaders like Google and Facebook. The usual scenario, which is being improved on in various ways, is for the visitor to request a web page, and when their browser receives it from the server, it parses the HTML and finds other resources necessary to render the page, like CSS, images, and JavaScript. As it encounters these resource links, it stops loading everything else and requests the specified resource from the server. It doesn't move a millimeter until it receives this resource. Then it requests another, and so on.

The number of requests needed to load the world's biggest websites often runs into the hundreds.

This includes a lot of waiting, and a lot of round trips during which our visitor sees only a white screen or a half-rendered website. These are wasted seconds. A lot of available bandwidth is just sitting there unused during these request cycles.

CDNs can alleviate a lot of these problems, but even they are nothing but hacks.

As Daniel Stenberg (one of the people working on HTTP/2 standardization) from Mozilla has pointed out, the first version of the protocol is having a hard time fully leveraging the capacity of the underlying transport layer, TCP.
Anyone who has worked on optimizing website loading speeds knows this often requires some creativity, to put it mildly.

Over time, internet bandwidth speeds have drastically increased, but HTTP/1.1-era infrastructure didn't utilize this fully. It still struggled with issues like HTTP pipelining --- pushing more resources over the same TCP connection. Client-side support in browsers lagged the most: Firefox and Chrome shipped pipelining disabled by default, while other browsers, such as IE and Firefox 54+, dropped support for it entirely.
This means that even small resources require opening a new TCP connection, with all the bloat that goes with it --- TCP handshakes, DNS lookups, latency… And due to head-of-line blocking, the loading of one resource results in blocking all other resources from loading.

A synchronous, non-pipelined connection vs a pipelined one, showing possible savings in load time.

Some of the optimization sorcery web developers have to resort to under the HTTP/1 model includes image sprites, CSS and JavaScript concatenation, sharding (distributing visitors' requests for resources over more than one domain or subdomain), and so on.

The improvement was due, and it had to solve these issues in a seamless, backward-compatible way so as not to interrupt the workings of the existing web.

SPDY

In 2009, Google announced a project that would become a draft proposal of a new-generation protocol, SPDY (pronounced speedy), adding support to Chrome and pushing it to all of its web services in subsequent years. Twitter followed, then server vendors such as Apache and Nginx added support, as did Node.js, and later came Facebook, WordPress.com, and most CDN providers.

SPDY introduced multiplexing --- sending multiple resources in parallel, over a single TCP connection. Connections are encrypted by default, and data is compressed. Preliminary tests in the SPDY white paper, performed on the top 25 sites, showed speed improvements from 27% to over 60%.

After it proved itself in production, SPDY version 3 became the basis for the first draft of HTTP/2, made by the Hypertext Transfer Protocol working group httpbis in 2015.

HTTP/2 aims to address the issues ailing the first version of the protocol --- latency issues --- by:

  • multiplexing requests and responses over a single TCP connection
  • compressing HTTP headers
  • letting servers push resources to the client before they're requested.

It also aims to solve head-of-line blocking. The data it transfers is in binary format, improving its efficiency, and it requires encryption by default (or at least, this is a requirement imposed by major browsers).

Header compression is performed with the HPACK algorithm, solving the vulnerability in SPDY, and reducing web request sizes by half.

Server push is one of the features that aims to eliminate wasted waiting time, by serving resources to the visitor's browser before the browser asks for them. This reduces round-trip time, which is a big bottleneck in website optimization.

Due to all these improvements, the difference in loading time that HTTP/2 brings to the table can be seen on this example page by imagekit.io.

Savings in loading time become more apparent the more resources a website has.
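
On the server side, adoption is often a small configuration change. In Nginx (1.9.5+), for instance, enabling HTTP/2 for a TLS-enabled site is a single extra listen parameter. The server name and certificate paths below are placeholders:

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ...
}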

The post HTTP/2: Background, Performance Benefits and Implementations appeared first on SitePoint.


Over a series of articles, we've been building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. At this point, our application serves the same image regardless of the resolution and screen size it's being served in. In this tutorial, we'll modify it to serve a resized version depending on display size.

Objective

There are two stages to this improvement.

  1. We need to make all images responsive wherever this might be useful. One place is the thumbnails on the home page and in the gallery pages, and another is the full-size image when an individual image is clicked in the gallery.
  2. We need to add resizing logic to our app. The point is to generate a resized image on the fly as it's demanded. This will keep non-popular images from polluting our hard drive, and it'll make sure the popular ones are, on subsequent requests, served in optimal sizes.

Responsive Images?

As this post explains, images in the modern web are incredibly complex. Instead of just <img src="mypic.jpg"> from the olden days, we now have something crazy like this:

<picture>
  <source media="(max-width: 700px)" sizes="(max-width: 500px) 50vw, 10vw"
          srcset="stick-figure-narrow.png 138w, stick-figure-hd-narrow.png 138w">

  <source media="(max-width: 1400px)" sizes="(max-width: 1000px) 100vw, 50vw"
          srcset="stick-figure.png 416w, stick-figure-hd.png 416w">

  <img src="stick-original.png" alt="Human">
</picture>

A combination of srcset, picture and sizes is necessary when you suspect that, if you use the same image for a smaller screen size, the primary subject of the image may become too small. You want to display a different image (more focused on the primary subject) at a different screen size, but still display separate assets of the same image based on device-pixel ratio, and customize the height and width of the image based on the viewport.

Since our images are photos and we always want them to be in their default DOM-specified position filling up the maximum of their parent container, we have no need for picture (which lets us define an alternative source for a different resolution or browser support --- like trying to render SVG, then PNG if SVG is unsupported) or sizes (which lets us define which viewport portion an image should occupy). We can get away with just using srcset, which loads a different size version of the same image depending on the screen size.

Adding srcset

The first location where we encounter images is in home-galleries-lazy-load.html.twig, the partial template that renders the home screen's galleries list.

<a  href="{{ url('gallery.single-gallery', {id: gallery.id}) }}">
  <img src="{{ gallery.images.first|getImageUrl }}" alt="{{ gallery.name }}" >
</a>

We can see here that the image's link is fetched from a Twig filter, which can be found in the src/Twig/ImageRendererExtension.php file. It takes the image's ID and the route's name (defined in the annotation in ImageController's serveImageAction route) and generates a URL based on that formula: /image/{id}/raw -> replacing {id} with the ID given:

public function getImageUrl(Image $image)
{
  return $this->router->generate('image.serve', [
      'id' => $image->getId(),
  ], RouterInterface::ABSOLUTE_URL);
}

Let's change that to the following:

public function getImageUrl(Image $image, $size = null)
{
  return $this->router->generate('image.serve', [
      'id' => $image->getId() . (($size) ? '--' . $size : ''),
  ], RouterInterface::ABSOLUTE_URL);
}

Now, all our image URLs will have --x as a suffix, where x is their size. This is the change we'll apply to our img tag as well, in the form of srcset. Let's change it to:

<a  href="{{ url('gallery.single-gallery', {id: gallery.id}) }}">
  <img src="{{ gallery.images.first|getImageUrl }}"
       alt="{{ gallery.name }}"
       srcset="
           {{ gallery.images.first|getImageUrl('1120') }}  1120w,
           {{ gallery.images.first|getImageUrl('720') }} 720w,
           {{ gallery.images.first|getImageUrl('400') }}  400w" >
</a>

If we refresh the home page now, we'll notice the srcset's new sizes listed:

This isn't going to help us much, though. If our viewport is wide, this will request full-size images, despite them being thumbnails. So instead of srcset, it's better to use a fixed small thumbnail size here:

<a  href="{{ url('gallery.single-gallery', {id: gallery.id}) }}">
  <img src="{{ gallery.images.first|getImageUrl('250') }}"
       alt="{{ gallery.name }}" >
</a>

We now have thumbnails on demand: they're generated on the first request, then cached and served directly once they exist.
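
The other half of this scheme lives on the server: serveImageAction has to split the --x suffix back off before loading and resizing the image. A minimal sketch of that parsing (the variable names are ours for illustration, not necessarily the repo's actual code):

// "42--400" means image ID 42 at width 400; a bare "42" serves the original
$parts = explode('--', $id);
$imageId = (int) $parts[0];
$size = isset($parts[1]) ? (int) $parts[1] : null; // null: no resizing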

Let's hunt down other srcset locations now.

In templates/gallery/single-gallery.html.twig, we apply the same fix as before. We're dealing with thumbnails, so let's just shrink the file by adding the size parameter into our getImageUrl filter:

<img src="{{ image|getImageUrl(250) }}" alt="{{ image.originalFilename }}" >

And now for the srcset implementation, finally!

The individual image views are rendered with a JavaScript modal window at the bottom of the same single-gallery view:

{% block javascripts %}
    {{ parent() }}

    <script>
        $(function () {
            $('.single-gallery__item-image').on('click', function () {
                var src = $(this).attr('src');
                var $modal = $('.single-gallery__modal');
                var $modalBody = $modal.find('.modal-body');

                $modalBody.html('');
                $modalBody.append($('<img src="' + src + '" >'));
                $modal.modal({});
            });
        })
    </script>
{% endblock %}

There's an append call which adds the img element into the modal's body, so that's where our srcset attribute must go. But since our image URLs are dynamically generated, we can't really call the Twig filter from within the script. One alternative is to add the srcset into the thumbnails and then use it in the JS by copying it from the thumb elements, but this would not only make the full-sized images load in the background of the thumbnails (because our viewport is wide), but it would also call the filter 4 times for each thumbnail, slowing things down. Instead, let's create a new Twig filter in src/Twig/ImageRendererExtension.php which will generate the full srcset attribute for each image.

public function getImageSrcset(Image $image)
{
    $id = $image->getId();
    $sizes = [1120, 720, 400];
    $string = '';
    foreach ($sizes as $size) {
        $string .= $this->router->generate('image.serve', [
            'id' => $id . '--' . $size,
        ], RouterInterface::ABSOLUTE_URL) . ' ' . $size . 'w, ';
    }
    $string = trim($string, ', ');
    return html_entity_decode($string);
}

We mustn't forget to register this filter:

public function getFilters()
{
    return [
        new Twig_SimpleFilter('getImageUrl', [$this, 'getImageUrl']),
        new Twig_SimpleFilter('getImageSrcset', [$this, 'getImageSrcset']),
    ];
}

We have to add these values into a custom attribute, which we'll call data-srcset on each individual thumbnail:

<img src="{{ image|getImageUrl(250) }}"
      alt="{{ image.originalFilename }}"
      data-srcset=" {{ image|getImageSrcset }}" >

Now each individual thumbnail has a data-srcset attribute with the required srcset values, but the browser won't act on it, because it's a custom attribute: data to be used later by our script.

The final step is updating the JS to take advantage of this:

{% block javascripts %}
    {{ parent() }}

    <script>
        $(function () {
            $('.single-gallery__item-image').on('click', function () {
                var src = $(this).attr('src');
                var srcset = $(this).attr('data-srcset');
                var $modal = $('.single-gallery__modal');
                var $modalBody = $modal.find('.modal-body');

                $modalBody.html('');
                $modalBody.append($('<img src="' + src + '" srcset="' + srcset + '" >'));
                $modal.modal({});
            });
        })
    </script>
{% endblock %}

The post Improving Performance Perception: On-demand Image Resizing appeared first on SitePoint.


Varnish Cache is an HTTP accelerator and reverse proxy developed by Danish consultant and FreeBSD core developer Poul-Henning Kamp, along with other developers at Norwegian Linpro AS. It was released in 2006.

According to Pingdom.com, a company focused on web performance, in 2012 Varnish was already famous among the world's top websites for its capacity to speed up web delivery, and it was being used by sites such as Wired, SlideShare, Zappos, SoundCloud, Weather.com, Business Insider, Answers.com, Urban Dictionary, MacRumors, DynDNS, OpenDNS, Lonely Planet, Technorati, ThinkGeek and Economist.com.

It is licensed under a two-clause BSD license. Varnish has a premium tier, Varnish Plus, focused on enterprise customers, which offers some extra features, modules, and support.

Although there are other solutions that also shine, Varnish is still a go-to solution that can dramatically improve website speed, reduce the strain on the web application server's CPU, and even serve as a protection layer from DDoS attacks. KeyCDN recommends deploying it on the origin server stack.

Varnish can sit on a dedicated machine in case of more demanding websites, and make sure that the origin servers aren't affected by the flood of requests.

At the time of this writing (November 2017), Varnish is at version 5.2.

How it Works

Caching in general works by keeping the pre-computed outputs of an application in memory, or on the disk, so that expensive computations don't have to be repeated on every request. A web cache can live on the client (the browser cache) or on the server; Varnish falls into the second category. It is usually configured so that it listens for requests on the standard HTTP port (80), and then serves the requested resource to the website visitor.

The first time a certain URL and path are requested, Varnish has to request it from the origin server in order to serve it to the visitor. This is called a CACHE MISS, which can be read in HTTP response headers, depending on the Varnish setup.

According to the docs,

when an object, any kind of content i.e. an image or a page, is not stored in the cache, then we have what is commonly known as a cache miss, in which case Varnish will go and fetch the content from the web server, store it and deliver a copy to the user and retain it in cache to serve in response to future requests.

When a particular URL or a resource is cached by Varnish and stored in memory, it can be served directly from server RAM; it doesn't need to be computed every time. Varnish will start delivering a CACHE HIT in a matter of microseconds.

This means that neither our origin server nor our web application, including its database, is touched by future requests. They won't even be aware of requests for cached URLs.

The origin server --- or servers, in case we use Varnish as a load balancer --- are configured to listen on some non-standard port, like 8888, and Varnish is made aware of their address and port.

Varnish Features

Varnish is threaded. It's been reported that Varnish was able to handle over 200,000 requests per second on a single instance. If properly configured, the only bottlenecks of your web app will be network throughput and the amount of RAM. (This shouldn't be an unreasonable requirement, because it just needs to keep computed web pages in memory, so for most websites, a couple of gigabytes should be sufficient.)

Varnish is extendable via VMODS. These are modules that can use standard C libraries and extend Varnish functionality. There are community-contributed VMODS listed here. They range from header manipulation to Lua scripting, throttling of requests, authentication, and so on.

Varnish has its own domain-specific language, VCL. VCL provides comprehensive configurability. With a full-page caching server like Varnish, there are a lot of intricacies that need to be solved.

When we cache a dynamic website with dozens or hundreds of pages and paths, with GET query parameters, we'll want to exclude some of them from cache, or set different cache-expiration rules. Sometimes we'll want to cache certain Ajax requests, or exclude them from the cache. This varies from project to project, and can't be tailored in advance.

Sometimes we'll want Varnish to decide what to do with the request depending on request headers. Sometimes we'll want to pass requests directly to the back end with a certain cookie set.
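
In VCL, such rules stay short. Here's a sketch of what the request-handling subroutine might contain; the cookie name and query parameters are invented for illustration:

sub vcl_recv {
    # Hypothetical rule: bypass the cache for logged-in users
    if (req.http.Cookie ~ "logged_in=") {
        return (pass);
    }
    # Strip marketing query parameters so they don't fragment the cache
    set req.url = regsuball(req.url, "(\?|&)utm_[a-z]+=[^&]*", "");
}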

To quote the Varnish book,

VCL provides subroutines that allow you to affect the handling of any single request almost anywhere in the execution chain.

Purging the cache often needs to be done dynamically --- triggered by publishing articles or updating the website. Purging also needs to be done as atomically as possible --- meaning it should target the smallest possible scope, like a single resource or path.

This means that specific rules need to be defined, with their order of priority in mind. Some examples can be found in the Varnish book (which is available to read online or as a downloadable PDF).

Varnish has a set of tools for monitoring and administering the server:

  • There's varnishtop, which lets us monitor requested URLs and their frequency.

  • varnishncsa can be used to print the Varnish Shared Memory Log (VSL): it dumps everything pointing to a certain domain and subdomains.

  • varnishhist reads the VSL and presents a live histogram showing the distribution of recent requests, giving an overview of server and back-end performance.

  • varnishtest is used to test VCL configuration files and develop VMODS.

  • varnishstat displays statistics about our varnishd instance:

  • varnishlog is used to get data about specific clients and requests.

Varnish Software offers a set of commercial, paid solutions either built on top of Varnish Cache, or extending its usage and helping with monitoring and management: Varnish API Engine, Varnish Extend, Akamai Connector for Varnish, Varnish Administration Console (VAC), and Varnish Custom Statistics (VCS).

The post How to Boost Your Server Performance with Varnish appeared first on SitePoint.


Chrome DevTools incorporates many sub-tools for debugging web applications on the client side --- like recording performance profiles and inspecting animations --- most of which you've likely been using since your early days of learning web development, mostly through the DevTools console.

Let's look at some of those tools, focusing particularly on the console and the performance metrics.

To access Chrome's DevTools:

  • right click anywhere on a page, click Inspect from the context menu
  • use the keyboard shortcuts Ctrl + Shift + I on Windows and Linux systems or Alt + Command + I on macOS
  • use the keyboard shortcuts Ctrl + Shift + J on Windows and Linux systems or Alt + Command + J on macOS.

The Snippets Tool

If you're frequently writing JavaScript code right in the console, make sure to use the Snippets feature of DevTools instead, which is similar to a code editor and provides mechanisms to write JavaScript code snippets, run them in the context of the current page and save them for later. It's better than writing multi-line JavaScript code directly in the console.

You can access the Snippets tool from the Sources panel. Once open, the console gets stacked below (if it doesn't, just press Escape) so you can write, run your code and see the console output at the same time.

Using the Chrome DevTools Console

You can use the console to interact with any web page using JavaScript. You can query and change the DOM and query/output different types of performance information.

The console can be opened either as a full-screen dedicated panel or as a drawer next to any other DevTools panel by pressing Escape while DevTools is open and has focus.

When working with the browser's console, if you want to enter multi-line expressions you need to use Shift + Enter, because just Enter will execute what's in the input line at that moment.

The console history

You can clear the console history in different ways:

  • by typing clear() in the console
  • by calling the console.clear() method in the console or in JavaScript code
  • by clicking on the red circle in the top left corner of the console
  • by pressing CTRL+L in macOS, Windows and Linux
  • by right-clicking in the Console and then pressing Clear console.

You can preserve the log (by enabling the Preserve log checkbox) between page refreshes or changes until you clear the console or close the tab.

You can save the history in the Console as a text file by right-clicking in the console and selecting Save as…, then choosing the location of a log file.

Console variables

The variables you create in the Console persist until you refresh the page, so pay attention when you're using keywords such as let or const to declare variables. Running the same code or function a second time will throw an Uncaught SyntaxError saying that the identifier has already been declared. You can either use the OR (||) operator to check whether the variable is already defined, or use var to declare variables, since it doesn't complain about previously declared variables.
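
A quick console session illustrates the difference (newer Chrome versions have since relaxed let redeclaration in the console, but this matches the behavior described above):

let counter = 1;  // first run: OK
let counter = 1;  // second run: Uncaught SyntaxError: Identifier 'counter' has already been declared

// After a page refresh, var behaves differently:
var counter = 1;  // first run: OK
var counter = 1;  // second run: also OK, since var tolerates redeclaration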

The post Optimization Auditing: A Deep Dive into Chrome’s Dev Console appeared first on SitePoint.


Optimizing websites for speed is a craft, and each craft requires tools. The most-used website optimization tools are GTmetrix, YSlow and Pingdom Tools.

GTmetrix is a rather advanced tool that offers a lot on its free tier, but it also offers premium tiers. If you sign up, you can compare multiple websites, multiple versions of the same website, tested under different conditions, and save tests for later viewing.

YSlow is still relevant, although its best days were those when Firebug ruled supreme among the browser inspectors. It offers a Chrome app and other implementations --- such as add-ons for Safari and Opera, a bookmarklet, an extension for PhantomJS, and so on.

For advanced users, PhantomJS integration means that one could, for example, automate the testing of many websites --- hundreds or thousands --- and export the results into the database.
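
For instance, a single run with the yslow.js script for PhantomJS looks something like this (the --format flag also accepts plain and TAP output, among others, which makes feeding results into a database straightforward):

phantomjs yslow.js --info grade --format json http://www.example.com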

YSlow's Ruleset Matrix has for a number of years been a measuring stick for website performance.

Pingdom Tools is a SaaS service that offers monitoring and reporting of website performance, and it has strengthened its market position in recent years. It also offers a DNS health check and website speed testing on its free tier, which is comparable to GTMetrix and YSlow.

For the purposes of this article, we purchased a fitting domain name --- ttfb.review --- and installed Drupal with some demo content on it. We also installed WordPress on wp.ttfb.review, and demo installations of Yii and Symfony on their respective subdomains.

We used the default WordPress starting installation. For Drupal, we used the Devel and Realistic Dummy Content extensions to generate demo content. For Symfony we used the Symfony demo application, and for Yii we used the basic application template.

This way, we'll be able to compare these installations side-by-side, and point out the things that deserve attention.

Please be aware that these are development-level installations, used only for demonstration purposes. They aren't optimized for running in production, so our results are likely to be subpar.

The post Improving Page Load Performance: Pingdom, YSlow and GTmetrix appeared first on SitePoint.


This article is part of a series on building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. (View the repo here.)

As we can see in this report, our site's landing page loads very quickly and generally scores well, but it could use another layer of caching and even a CDN to really do well.

To learn more about GTmetrix and other tools you can use to measure and debug performance, see Improving Page Load Performance: Pingdom, YSlow and GTmetrix.

Let's use what we've learned in our previous Varnish post, along with the knowledge gained in the Intro to CDN and Cloudflare posts to really tune up our server's content delivery now.

Varnish

Varnish was created solely for the purpose of being a type of super-cache in front of a regular server.

Note: Given that Nginx itself is a pretty good server already, people usually opt for one or the other, not both. There's no harm in having both, but one does have to be wary of cache-busting problems which can occur. It's important to set them both up properly so that the cache of one of them doesn't remain stale while the other's is fresh. This can lead to different content being shown to different visitors at different times. Setting this up is a bit outside the context of this post, and will be covered in a future guide.

We can install Varnish by executing the following:

curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y apt-transport-https

The current list of repos for Ubuntu does not have Varnish 5+ available, so additional repositories are required. If the file /etc/apt/sources.list.d/varnishcache_varnish5.list doesn't exist, create it. Add to it the following:

deb https://packagecloud.io/varnishcache/varnish5/ubuntu/ xenial main
deb-src https://packagecloud.io/varnishcache/varnish5/ubuntu/ xenial main

Then, run:

sudo apt-get update
sudo apt-get install varnish
varnishd -V

The result should be something like:

$ varnishd -V
varnishd (varnish-5.2.1 revision 67e562482)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2015 Varnish Software AS

We then change the server's default port to 8080. We're doing this because Varnish will be sitting on port 80 instead, and forwarding requests to 8080 as needed. If you're developing locally on Homestead Improved as instructed at the beginning of this series, the file you need to edit will be in /etc/nginx/sites-available/homestead.app. Otherwise, it'll probably be in /etc/nginx/sites-available/default.

server {
    listen 8080 default_server;
    listen [::]:8080 default_server ipv6only=on;

Next up, we'll configure Varnish itself by editing /etc/default/varnish and replacing the default port on the first line (6081) with 80:

DAEMON_OPTS="-a :80 \
   -T localhost:6082 \
   -f /etc/varnish/default.vcl \
   -S /etc/varnish/secret \
   -s malloc,256m"

The same needs to be done in /lib/systemd/system/varnish.service:

ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

Finally, we can restart both Varnish and Nginx for the changes to take effect:

sudo service nginx restart
sudo /etc/init.d/varnish stop
sudo /etc/init.d/varnish start
systemctl daemon-reload

The last command is there so that the previously edited varnish.service daemon settings also reload, otherwise it'll only take into account the /etc/default/varnish file's changes. The start + stop procedure is necessary for Varnish because of a current bug which doesn't release ports properly unless done this way.
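
A quick sanity check that Varnish is now the one answering on port 80 is to look at the response headers; Varnish stamps an X-Varnish header on every response it handles (the values below are just examples):

curl -I http://localhost/
# Expect headers along these lines:
#   X-Varnish: 32777
#   Via: 1.1 varnish (Varnish/5.2)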

Comparing the result with the previous one, we can see that the difference is minimal due to the landing page already being extremely optimized.

Sidenote

Both of the low grades are mainly the result of us "not serving from a consistent URL", as GTMetrix would put it:

This happens because we used random images to populate our galleries, and the sample of randomness was small, so some of them are repeated. This is fine and won't be an issue when the site is in production. In fact, this is one of the very rare cases where a site will by default score better in production than it does in development.

The post How to Use Varnish and Cloudflare for Maximum Caching appeared first on SitePoint.


This article is part of a series on building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. (View the repo here.)

Let's continue optimizing our app. We start with on-the-fly thumbnail generation that takes 28 seconds per request, depending on the platform running your demo app (in my case it was a slow filesystem integration between the host OS and Vagrant), and bring it down to a pretty acceptable 0.7 seconds.

Admittedly, this 28 seconds should only happen on initial load. After the tuning, we were able to achieve production-ready times:

Troubleshooting

It is assumed that you've gone through the bootstrapping process and have the app running on your machine --- either virtual or real.

Note: if you're hosting the Homestead Improved box on a Windows machine, there might be an issue with shared folders. This can be solved by adding a type: "nfs" setting to the folder mapping in Homestead.yaml, along these lines:
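
# The map/to paths below are placeholders; adjust them to your project
folders:
    - map: /path/to/Project
      to: /home/vagrant/Code/Project
      type: "nfs"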

You should also run vagrant up from a shell/powershell interface that has administrative privileges if problems persist (right-click, run as administrator).

In one example before doing this, we got 20 to 30 second load times on every request, and couldn't get a rate faster than one request per second (it was closer to 0.5 per second):

The Process

Let's go through the testing process. We installed Locust on our host, and created a very simple locustfile.py:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task(1)
    def index(self):
        # Every simulated user repeatedly requests the landing page
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 300   # pause between tasks: 300-1000 ms per user
    max_wait = 1000

Then we downloaded ngrok to our guest machine and tunneled all HTTP connections through it, so that we could test our application over a static URL.

Then we started Locust and swarmed our app with 100 parallel users:
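
Assuming locustfile.py sits in the current directory, the swarm is launched with something like the command below; the host URL is a placeholder for the ngrok tunnel, and the number of users (100) and hatch rate are then entered in Locust's web UI on port 8089:

locust --host=https://your-tunnel.ngrok.io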

Our server stack consisted of PHP 7.1.10, Nginx 1.13.3 and MySQL 5.7.19, on Ubuntu 16.04.

PHP-FPM and its Process Manager Setting

php-fpm spawns its own processes, independent of the web-server process. Management of the number of these processes is configured in /etc/php/7.1/fpm/pool.d/www.conf (7.1 here can be exchanged for the actual PHP version number currently in use).

In this file, we find the pm setting. This setting can be set to dynamic, ondemand and static. dynamic is perhaps the most commonly used; it allows the server to juggle the number of spawned PHP processes between several bounds:

pm = dynamic
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served.
pm.max_children = 6
; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 3
; The desired minimum number of idle server processes
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 2
; The desired maximum number of idle server processes
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 4

The meanings of these values are self-explanatory, and the spawning of processes is being done on demand, but constrained by these minimum and maximum values.

After fixing the Windows shared-folders issue with nfs and testing with Locust, we were able to get approximately five requests per second with 100 concurrent users, with around 17–19% of requests failing. Once the server was swarmed with requests, it slowed down, and each request took over ten seconds to finish.

Then we changed the pm setting to ondemand.

Ondemand means that there are no minimum processes: once the requests stop, all the processes will stop. Some advocate this setting, because it means the server won't be spending any resources in its idle state, but for dedicated (non-shared) server instances this isn't necessarily the best choice. Spawning a process incurs overhead, and what is gained in memory is lost in the time needed to spawn processes on demand. The settings that are relevant here are:

pm.max_children = 6
; and
pm.process_idle_timeout = 20s;
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s

When testing, we increased these settings a bit, since we had fewer resource constraints to worry about.

There's also pm.max_requests, which can be changed, and which designates the number of requests each child process should execute before respawning.

This setting is a tradeoff between speed and stability, where 0 means unlimited.

ondemand didn't bring much change, except that we noticed more initial waiting time when we started swarming our application with requests, and more initial failures. In other words, there were no big changes: the application was able to serve around four to maximum six requests per second. Waiting time and rate of failures were similar to the dynamic setup.

Then we tried the pm = static setting, allowing our PHP processes to take over the maximum of the server's resources, short of swapping, or driving the CPU to a halt. This setting means we're forcing the maximum out of our system at all times. It also means that --- within our server's constraints --- there won't be any spawning overhead time cost.
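
The pool configuration for this mode is minimal: just the mode itself and a child count sized to the machine's RAM (the value below is illustrative, not a recommendation):

pm = static
; With static, max_children is the fixed number of workers kept alive
pm.max_children = 10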

What we saw was an improvement of 20%. The rate of failed requests was still significant, though, and the response time was still not very good. The system was far from being ready for production.

However, on Pingdom Tools, we got a bearable 3.48 seconds when the system was not under pressure:

This meant that pm static was an improvement, but in the case of a bigger load, it would still go down.

In one of the previous articles, we explained how Nginx can itself serve as a caching system, both for static and dynamic content. So we reached for the Nginx wizardry, and tried to bring our application to a whole new level of performance.

And we succeeded. Let's see how.

The post Server-side Optimization with Nginx and pm-static appeared first on SitePoint.


This article is part of a series on building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. (View the repo here.)

In a previous article, we added on-demand image resizing. Images are resized on the first request and cached for later use. By doing this, we added some overhead to the first load; the system has to render thumbnails on the fly and is "blocking" the first user's page render until image rendering is done.

The optimized approach would be to render thumbnails after a gallery is created. You may be thinking, "Okay, but we'll then block the user who is creating the gallery?" Not only would it be a bad user experience, but it also isn't a scalable solution. The user would get confused about long loading times or, even worse, encounter timeouts and/or errors if images are too heavy to be processed. The best solution is to move these heavy tasks into the background.

Background Jobs

Background jobs are the best way of doing any heavy processing. We can immediately notify our user that we've received their request and scheduled it for processing. This is the same way YouTube handles uploaded videos: they aren't immediately accessible after upload. The user needs to wait until the video is completely processed before they can preview or share it.

Processing or generating files, sending emails or any other non-critical tasks should be done in the background.

How Does Background Processing Work?

There are two key components in the background processing approach: a job queue and worker(s). The application creates jobs that should be handled, while workers wait and take jobs from the queue one at a time.

You can create multiple worker instances (processes) to speed up processing, chop a big job up into smaller chunks and process them simultaneously. It's up to you how you want to organize and manage background processing, but note that parallel processing isn't a trivial task: you should take care of potential race conditions and handle failed tasks gracefully.

Our tech stack

We're using the Beanstalkd job queue to store jobs, the Symfony Console component to implement workers as console commands and Supervisor to take care of worker processes.

If you're using Homestead Improved, Beanstalkd and Supervisor are already installed so you can skip the installation instructions below.

Installing Beanstalkd

Beanstalkd is

a fast work queue with a generic interface originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.

There are many client libraries available that you can use. In our project, we're using Pheanstalk.

To install Beanstalkd on your Ubuntu or Debian server, simply run sudo apt-get install beanstalkd. Take a look at the official download page to learn how to install Beanstalkd on other OSes.

Once installed, Beanstalkd is started as a daemon, waiting for clients to connect and create (or process) jobs:

/etc/init.d/beanstalkd
Usage: /etc/init.d/beanstalkd {start|stop|force-stop|restart|force-reload|status}

Install Pheanstalk as a dependency by running composer require pda/pheanstalk.

The queue will be used for both creating and fetching jobs, so we'll centralize queue creation in a factory service JobQueueFactory:

<?php

namespace App\Service;

use Pheanstalk\Pheanstalk;

class JobQueueFactory
{
    private $host = 'localhost';
    private $port = '11300'; // Beanstalkd's default port

    const QUEUE_IMAGE_RESIZE = 'resize';

    public function createQueue(): Pheanstalk
    {
        return new Pheanstalk($this->host, $this->port);
    }
}

Now we can inject the factory service wherever we need to interact with Beanstalkd queues. We are defining the queue name as a constant and referring to it when putting the job into the queue or watching the queue in workers.
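
Putting a job on the queue then looks something like this. The call site is illustrative; with Pheanstalk, useTube() selects the queue and put() enqueues a payload:

// Hypothetical usage right after a gallery is created:
$queue = $this->jobQueueFactory->createQueue();
$queue->useTube(JobQueueFactory::QUEUE_IMAGE_RESIZE)
    ->put((string) $image->getId());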

Installing Supervisor

According to the official page, Supervisor is a

client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

We'll be using it to start, restart, scale and monitor worker processes.

Install Supervisor on your Ubuntu/Debian server by running sudo apt-get install supervisor. Once installed, Supervisor will be running in the background as a daemon. Use supervisorctl to control supervisor processes:

$ sudo supervisorctl help

default commands (type help <topic>):
=====================================
add    exit      open  reload  restart   start   tail
avail  fg        pid   remove  shutdown  status  update
clear  maintail  quit  reread  signal    stop    version

To control processes with Supervisor, we first have to write a configuration file and describe how we want our processes to be controlled. Configurations are stored in /etc/supervisor/conf.d/. A simple Supervisor configuration for resize workers would look like this:

[program:resize-worker]
process_name=%(program_name)s_%(process_num)02d
command=php PATH-TO-YOUR-APP/bin/console app:resize-image-worker
autostart=true
autorestart=true
numprocs=5
stderr_logfile = PATH-TO-YOUR-APP/var/log/resize-worker-stderr.log
stdout_logfile = PATH-TO-YOUR-APP/var/log/resize-worker-stdout.log

We're telling Supervisor how to name spawned processes, the path to the command that should be run, to automatically start and restart the processes, how many processes we want to have and where to log output. Learn more about Supervisor configurations here.
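
After dropping a new configuration file into /etc/supervisor/conf.d/, reload Supervisor so it picks the program up, using the standard supervisorctl workflow:

sudo supervisorctl reread    # detect new or changed config files
sudo supervisorctl update    # apply them and start the resize-worker group
sudo supervisorctl status    # verify the five worker processes are RUNNING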

Resizing images in the background

Once we have our infrastructure set up (i.e., Beanstalkd and Supervisor installed), we can modify our app to resize images in the background after the gallery is created. To do so, we need to:

  • update image serving logic in the ImageController
  • implement resize workers as console commands
  • create Supervisor configuration for our workers
  • update fixtures and resize images in the fixture class.

The post Using Background Processing to Speed Up Page Load Times appeared first on SitePoint.
