How To Create A PDF From Your Web Application
Rachel Andrew, 2019-06-19

Many web applications have the requirement of giving the user the ability to download something in PDF format. In the case of applications (such as e-commerce stores), those PDFs have to be created using dynamic data, and be available immediately to the user.

In this article, I’ll explore ways in which we can generate a PDF directly from a web application on the fly. It isn’t a comprehensive list of tools, but instead I am aiming to demonstrate the different approaches. If you have a favorite tool or any experiences of your own to share, please add them to the comments below.

Starting With HTML And CSS

Our web application is likely to be already creating an HTML document using the information that will be added to our PDF. In the case of an invoice, the user might be able to view the information online, then click to download a PDF for their records. You might be creating packing slips; once again, the information is already held within the system. You want to format that in a nice way for download and printing. Therefore, a good place to start would be to consider if it is possible to use that HTML and CSS to generate a PDF version.

CSS does have a specification which deals with CSS for print, and this is the Paged Media module. I have an overview of this specification in my article “Designing For Print With CSS”, and CSS is used by many book publishers for all of their print output. Therefore, as CSS itself has specifications for printed materials, surely we should be able to use it?

The simplest way a user can generate a PDF is via their browser. By choosing to print to PDF rather than a printer, a PDF will be generated. Sadly, this PDF is usually not altogether satisfactory! To start with, it will have the headers and footers which are automatically added when you print something from a webpage. It will also be formatted according to your print stylesheet — assuming you have one.
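If you don’t already have a print stylesheet, it is simply CSS scoped to the print medium. A minimal sketch might look like this (the selectors are placeholders for whatever your own markup uses):

    /* Hide screen-only chrome and set page-friendly type when printing. */
    @media print {
      nav,
      .site-footer,
      .cookie-banner { display: none; }

      body {
        font: 12pt/1.5 Georgia, serif;
        color: #000;
        background: none;
      }

      /* Print the destination of external links after the link text. */
      a[href^="http"]::after { content: " (" attr(href) ")"; }
    }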

The problem we run into here is the poor support of the fragmentation specification in browsers; this may mean that the content of your pages breaks in unusual ways. Support for fragmentation is patchy, as I discovered when I researched my article, “Breaking Boxes With CSS Fragmentation”. This means that you may be unable to prevent suboptimal breaking of content, with headers being left as the last item on the page, and so on.

In addition, we have no ability to control the content in the page margin boxes, e.g. adding a header of our choosing to each page or page numbering to show how many pages a complex invoice has. These things are part of the Paged Media spec, but have not been implemented in any browser.
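For reference, this is roughly what the Paged Media specification lets us write. It is valid CSS, but as noted above, browsers ignore the margin boxes, so treat it as a sketch of the spec rather than something browsers will honor today:

    @page {
      size: A4;
      margin: 20mm 15mm;

      /* Margin boxes: a running header of our choosing, plus page numbers. */
      @top-center {
        content: "ACME Invoices";
      }

      @bottom-right {
        content: "Page " counter(page) " of " counter(pages);
      }
    }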

My article “A Guide To The State Of Print Stylesheets In 2018” is still accurate in terms of the type of support that browsers have for printing directly from the browser, using a print stylesheet.

Printing Using Browser Rendering Engines

There are ways to print to PDF using browser rendering engines, without going through the print menu in the browser, and ending up with headers and footers as if you had printed the document. The most popular options in response to my tweet were wkhtmltopdf, and printing using headless Chrome and Puppeteer.

wkhtmltopdf

A solution that was mentioned a number of times on Twitter is a commandline tool called wkhtmltopdf. This tool takes an HTML file or multiple files, along with a stylesheet and turns them into a PDF. It does this by using the WebKit rendering engine.

We use wkhtmltopdf. It’s not perfect, although that was probably user error, but easily good enough for a production application.

— Paul Cardno (@pcardno) February 15, 2019

Essentially, this tool does the same thing as printing from the browser; however, you will not get the automatically added headers and footers. On the positive side, if you have a working print stylesheet for your content, it should also output nicely to PDF using this tool, and so a simple layout may well print very nicely.

Unfortunately, however, you will still run into the same problems as when printing directly from the web browser in terms of lack of support for the Paged Media specification and fragmentation properties, as you are still printing using a browser rendering engine. There are some flags that you can pass into wkhtmltopdf in order to add back some of the missing features that you would have by default using the Paged Media specification. However, this does require some extra work on top of writing good HTML and CSS.

Headless Chrome

Another interesting possibility is that of using Headless Chrome and Puppeteer to print to PDF.

Puppeteer. It's amazing for this.

— Alex Russell (@slightlylate) February 15, 2019

However, once again, you are limited by browser support for Paged Media and fragmentation. There are some options which can be passed into the page.pdf() function. As with wkhtmltopdf, these add in some of the functionality that would be possible from CSS, should there be browser support.

It may well be that one of these solutions will do all that you need. However, if you find that you are fighting something of a battle, it is probable that you are hitting the limits of what is possible with current browser rendering engines and will need to look for a better solution.

JavaScript Polyfills For Paged Media

There are a few attempts to reproduce the Paged Media specification in the browser using JavaScript, essentially creating a Paged Media polyfill. This could give you Paged Media support when using Puppeteer. Take a look at paged.js and Vivliostyle.

Yes. For simple docs, like course certificates, we can use Chrome, which has minimal @ page support. For anything else, we use PrinceXML or the paged.js polyfill in Chrome. Here's a WIP proof-of-concept using paged.js for books: https://t.co/AZ9fO94PT2

— Electric Book Works (@electricbook) February 15, 2019
Using A Print User Agent

If you want to stay with an HTML and CSS solution, then you need to look to a User Agent (UA) designed for printing from HTML and CSS, which has an API for generating the PDF from your files. These User Agents implement the Paged Media specification and have far better support for the CSS Fragmentation properties; this will give you greater control over the output. Leading choices include Prince, Antenna House, and PDFReactor.

A print UA will format documents using CSS — just as a web browser does. As with browser support for CSS, you need to check the documentation of these UAs to find out what they support. For example, Prince (which I am most familiar with) supports Flexbox but not CSS Grid Layout at the time of writing. When sending your pages to the tool that you are using, typically this would be with a specific stylesheet for print. As with a regular print stylesheet, the CSS you use on your site will not all be appropriate for the PDF version.

Creating a stylesheet for these tools is very similar to creating a regular print stylesheet, making the kind of decisions in terms of what to display or hide, perhaps using a different font size or colors. You would then be able to take advantage of the features in the Paged Media specification, adding footnotes, page numbers, and so on.
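As a rough illustration of the kind of thing these UAs unlock (check each product’s documentation for exact support), a stylesheet aimed at a print UA might include running headers, page counters, and footnotes:

    /* Margin boxes and counters from the Paged Media specification. */
    @page {
      @top-center { content: string(doc-title); }
      @bottom-center { content: counter(page) " of " counter(pages); }
    }

    /* Capture the chapter title as a named string for the running header. */
    h1 { string-set: doc-title content(); }

    /* Footnotes from the Generated Content for Paged Media draft;
       Prince, for example, moves such an element into the footnote area. */
    .footnote { float: footnote; }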

In terms of using these tools from your web application, you would need to install them on your server (having bought a license to do so, of course). The main problem with these tools is that they are expensive. That said, given the ease with which you can then produce printed documents with them, they may well pay for themselves in developer time saved.

It is possible to use Prince via an API, on a pay-per-document basis, through a service called DocRaptor. This would certainly be a good place for many applications to start: if it later looked as if hosting your own would be more cost-effective, the development cost of switching would be minimal.

A free alternative, which is not quite as comprehensive as the above tools but may well achieve the results you need, is WeasyPrint. It doesn’t fully implement all of Paged Media; however, it implements more than a browser engine does. Definitely one to try!

Other tools which claim to support conversion from HTML and CSS include PDFCrowd, which boldly claims to support HTML5, CSS3 and JavaScript. I couldn’t, however, find any detail on exactly what was supported, and if any of the Paged Media specification was. Also receiving a mention in the responses to my tweet was mPDF.

Moving Away From HTML And CSS

There are a number of other solutions which move away from using HTML and CSS and require you to create specific output for the tool, including a couple of JavaScript-based contenders.

Headless browser + saving to PDF was once my first choice but always produced subpar results for anything other than a single page document. We switched over to https://t.co/3o8Ce23F1t for multi-page reports which took quite a lot more effort but well worth it in the end!

— JimmyJoy (@jimle_uk) February 15, 2019
Recommendations

Other than the JavaScript-based approaches, which would require you to create a completely different representation of your content for print, the beauty of many of these solutions is that they are interchangeable. If your solution is based on calling a commandline tool, and passing that tool your HTML, CSS, and possibly some JavaScript, it is fairly straightforward to switch between tools.

In the course of writing this article, I also discovered a Python wrapper which can run a number of different tools. (Note that you need to already have the tools themselves installed, however, this could be a good way to test out the various tools on a sample document.)

For support of Paged Media and fragmentation, Prince, Antenna House, and PDFReactor are going to come out top. As commercial products, they also come with support. If you have a budget, complex pages to print to PDF, and your limitation is developer time, then you would most likely find these to be the quickest route to have your PDF creation working well.

However, in many cases, the free tools will work well for you. If your requirements are very straightforward then wkhtmltopdf, or a basic headless Chrome and Puppeteer solution may do the trick. It certainly seemed to work for many of the people who replied to my original tweet.

If you find yourself struggling to get the output you want, however, be aware that it may be a limitation of browser printing, and not anything you are doing wrong. In the case that you would like more Paged Media support, but are not in a position to go for a commercial product, perhaps take a look at WeasyPrint.

I hope this is a useful roundup of the tools available for creating PDFs from your web application. If nothing else, it demonstrates that there are a wide variety of choices, if your initial choice isn’t working well.

Please add your own experiences and suggestions in the comments, this is one of those things that a lot of us end up dealing with, and personal experience shared can be incredibly helpful.

Further Reading

A roundup of the various resources and tools mentioned in this article, along with some other useful resources for working with PDF files from web applications.

Unleash The Power Of Path Animations With SVGator
Mikołaj Dobrucki, 2019-06-18

(This is a sponsored article.) Last year, a comprehensive introduction to the basic use of SVGator was published here on Smashing Magazine. If you’d like to learn about the fundamentals of SVGator, setting up your first projects, and creating your first animations, we strongly recommend you read it before continuing with this article.

Today, we’ll take a second look to explore some of the new features that have been added to it over the last few months, including the brand new Path Animator.

Note: Path Animator is a premium feature of SVGator and it’s not available to trial users. During a seven-day trial, you can see how Path Animator works in the sample project you’ll find in the app, but you won’t be able to apply it to your own SVGs unless you’ve opted in to a paid plan. SVGator is a subscription-based service. Currently, you can choose between a monthly plan ($18USD/month) and a yearly plan ($144USD total, $12USD/month). For longer projects, we recommend you consider the yearly option.

Path Animator is just the first of the premium features that SVGator plans to release in the upcoming months. All the new features will be available to all paid users, no matter when they subscribed.

The Charm Of Path Animations

SVG path animations are by no means a new thing. In the last few years, this way of enriching vector graphics has been heavily used all across the web:

Animation by Codrops (Original demo) (Large preview)

Path animations gained popularity mostly because of their relative simplicity: even though they might look impressive and complex at first glance, the underlying rule is in fact very simple.

How Do Path Animations Work?

You might think that SVG path animations require some extremely complicated drawing and transform functions. But it’s much simpler than it looks. To achieve effects similar to the example above, you don’t need to generate, draw, or animate the actual paths — you just animate their strokes. This brilliant concept allows you to create seemingly complex animations by animating a single SVG attribute: stroke-dashoffset.

Animating this one little property is responsible for the entire effect. Once you have a dashed line, you can play with the position of dashes and gaps. Combine it with the right settings and it will give you the desired effect of a self-drawing SVG path.
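Here is a minimal hand-coded sketch of the trick, assuming an SVG <path class="letter"> whose total length is 300 units (the length SVGator reports for you, or what getTotalLength() returns in the browser):

    .letter {
      fill: none;
      stroke: #1a1a1a;
      stroke-width: 4;
      stroke-dasharray: 300;   /* one dash and one gap, each as long as the whole path */
      stroke-dashoffset: 300;  /* shift the dash out of view so only the gap shows */
      animation: draw 2s ease forwards;
    }

    @keyframes draw {
      to { stroke-dashoffset: 0; }  /* slide the dash back over the path: it “draws itself” */
    }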

If this still sounds rather mysterious or you’d just like to learn about how path animations are made in more detail, you will find some useful resources on this topic at the end of the article.

However simple path animations may be compared with how they look, coding them is not always straightforward. As your files get more complicated, so does animating them. And this is where SVGator comes to the rescue.

Furthermore, sometimes you might prefer not to touch raw SVG files. Or maybe you’re not really fond of writing code altogether. Then SVGator has got you covered. With the new Path Animator, you can create even the most complex SVG path animations without touching a line of code. You can also combine coding with using SVGator.

To better understand the possibilities that Path Animator gives us, we will cover three separate examples presenting different use cases of path animations.

Example #1: Animated Text

In the first example, we will animate text, creating the impression of self-writing letters.

Final result of the first example (Large preview)

Often used for lettering, this cute effect can also be applied to other elements, such as drawings and illustrations. There’s a catch, though: the animated element must be styled with strokes rather than fills. Which means, for our text, that we can’t use any existing font.

Outlining fonts, no matter how thin, always results in closed shapes rather than open paths. There are no regular fonts based on lines and strokes.

Outlined fonts are not suitable for self-drawing effects with Path Animator. (Large preview)
Path animations require strokes. These paths would work great with Path Animator. (Large preview)

Therefore, if we want to animate text using path animations we need to draw it ourselves (or find some ready-made vector letters suitable for this purpose). When drawing your letters, feel free to use some existing font or typography as a reference — don’t violate any copyright, though! Just keep in mind it’s not possible to use fonts out of the box.

Preparing The File

Rather than starting with an existing typeface, we’ll begin with a simple hand-drawn sketch:

A rough sketch for the animation (pardon my calligraphy skills!) (Large preview)

Now it’s time to redraw the sketch in a design tool. I used Figma, but you can use any app that supports SVG exports, such as Sketch, Adobe XD, or Adobe Illustrator.

Usually, I start with the Pen tool and roughly follow the sketch imported as a layer underneath:

Outlining the sketch in Figma - Vimeo

Once done, I remove the sketch from the background and refine the paths until I’m happy with the result. No matter which tools and techniques you use, the most important thing is to prepare the drawing as lines and to use just strokes, no fills.

These paths can be successfully animated with Path Animator as they are created with strokes. (Large preview)

In this example, we have four such paths. The first is the letter “H”; the second is the three middle letters “ell”; and “o” is the third. The fourth path is the line of the exclamation mark.

The dot of “!” is an exception — it’s the only layer we will style with a fill, rather than a stroke. It will be animated in a different way than the other layers, without using Path Animator.

Note that all the paths we’re going to animate with Path Animator are open, except for the “o,” which is an ellipse. Although animating closed paths (such as ellipses or polygons) with Path Animator is utterly fine and doable, it’s worth making it an open path as well, because this is the easiest way to control exactly where the animation starts. For this example, I added a tiny gap in the ellipse just by the end of the letter “l” as that’s where you’d usually start writing “o” in handwriting.

A small gap in the letter ‘o’ controls the starting point of the animation. (Large preview)

Before importing our layers to SVGator, it’s best to clean up the layers’ structure and rename them in a descriptive way. This will help you quickly find your way around your file once working in SVGator.

If you’d like to learn more about preparing your shapes for path animations, I would recommend you check out this tutorial by SVGator.

It’s worth preparing your layers carefully and thinking ahead as much as possible. At the time of writing, in SVGator you can’t reimport the file to an already existing animation. While animating, if you discover an issue that requires some change to the original file, you will have to import it into SVGator again as a new project and start working on your animation from scratch.

Creating An Animation

Once you’re happy with the structure and naming of your layers, import them to SVGator. Then add the first path to the timeline and apply Path Animator to it by choosing it from the Animators list or by pressing Shift + T.

To achieve a self-drawing effect, our goal is to turn the path’s stroke into a dashed line. The length of a dash and a gap should be equal to the length of the entire path. This allows us to cover the entire path with a gap to make it disappear. Once hidden, change stroke-dashoffset to the point where the entire path is covered by a dash.

SVGator makes it very convenient for us by automatically providing the length of the path. All we need to do is to copy it with a click, and paste it into the two parameters that SVGator requires: Dashes and Offset. Pasting the value in Dashes turns the stroke into a dashed line. You can’t see it straightaway as the first dash of the line covers the whole path. Setting the Offset will change stroke-dashoffset so the gap then covers the path.

Once done, let’s create an animation by adding a new keyframe further along the timeline. Bring Offset back to zero and… ta-da! You’ve just created a self-drawing letter animation.

Creating a self-writing text animation in SVGator: Part 1 (Vimeo)

There’s one little issue with our animation, though. The letter is animated — but back-to-front. That is, the animation starts at the wrong end of the path. There are, at least, a few ways to fix it. First, rather than animating the offset from a positive value to zero, we can start with a negative offset and bring it to zero. Unfortunately, this may not work as expected in some browsers (for example, Safari does not accept negative stroke offsets). While we wait for this bug to be fixed, let’s choose a different approach.

Let’s change the Dashes value so the path starts with a gap followed by a dash (by default, dashed lines always start with a dash). Then reverse the values of the Offset animation. This will animate the line in the opposite direction.

Reversing the direction of self-writing animation (Vimeo)

Now that we’re done with “H”, we can move on to animating all the other paths in the same way. Eventually, we finish by animating the dot of the exclamation mark. As it’s a circle with a fill, not an outline, we won’t use Path Animator. Instead, we use the Scale Animator to make the dot pop in at the end of the animation.

Creating a self-writing text animation in SVGator: Part 2 (Vimeo)

Always remember to check the position of an element’s transform origin when playing with scale animations. In SVG, all elements have their transform origin in the top-left corner of the canvas by default. This often makes coding transform functions a very hard and tedious task. Fortunately, SVGator saves us from all this hassle by calculating all the transforms in relation to the object, rather than the canvas. By default, SVGator sets the transform origin of each element in its own top-left corner. You can change its position from the timeline, using a button next to the layer’s name.

Transform origin control in SVGator’s Timeline panel (Large preview)
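If you were recreating the same pop-in effect by hand, two CSS properties do roughly what SVGator’s per-element origin does for you. A small sketch, with a hypothetical .dot class:

    .dot {
      transform-box: fill-box;   /* measure the origin against the element’s own box, not the canvas */
      transform-origin: center;  /* scale the dot from its own centre */
      animation: pop 0.3s ease-out both;
    }

    @keyframes pop {
      from { transform: scale(0); }
      to   { transform: scale(1); }
    }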

Let’s add the final touch to the animation and adjust the timing functions. Timing functions define the speed over time of objects being animated, allowing us to manipulate their dynamics and make the animation look more natural.

In this case, we want to give the impression of the text being written by a single continuous movement of a hand. Therefore, I applied an Ease-in function to the first letter and an Ease-out function to the last letter, leaving the middle letters with a default Linear function. In SVGator, timing functions can be applied from the timeline, next to the Animator’s parameters:

Building A Component Library Using Figma
Emiliano Cicero, 2019-06-17

I’ve been working on the creation and maintenance of the main library of our design system, Lexicon. We used the Sketch app for the first year and then we moved to Figma where the library management was different in certain aspects, making the change quite challenging for us.

To be honest, as with any library construction, it requires time, effort, and planning, but it is worth it because it will help provide detailed components for your team. It will also help increase overall design consistency and make maintenance easier in the long run. I hope the tips that I’ll provide in this article will make the process smoother for you as well.

This article will outline the steps needed for building a component library with Figma, by using styles and a master component. (A master component will allow you to apply multiple changes all at once.) I’ll also cover in detail the components’ organization and will give you a possible solution if you have a large number of icons in the library.

Note: To make it easier to use, update and maintain, we found that it is best to use a separate Figma file for the library and then publish it as a team ‘library’ instead of publishing the components individually.

Getting Started

This guide was created from a designer’s perspective, and if you have at least some basic knowledge of Figma (or Sketch), it should help you get started with creating, organizing and maintaining a component library for your design team.

If you are new to Figma, check out some introductory Figma tutorials before proceeding with the article.

Requirements

Before starting, there are some requirements that we have to cover to define the styles for the library.

Typography Scale

The first step is to define the typography scale; it helps you focus on how the text size and line height grow in your system, allowing you to define the visual hierarchy of your texts.

Typography scales are useful to improve the hierarchy of the elements, as managing the sizes and weights of the fonts can really guide the user through the content. (Large preview)

The type of scale depends on what you’re designing. It’s common to use a bigger ratio for website designs and a smaller ratio when designing digital products.

The reason for this lies in the design’s goal: a website is usually designed to communicate and convert, so it gives you one or two direct actions. It’s easier in that context to have 36px for a main title, 24px for a secondary title, and 16px for a description text.

Related resource: “8-Point Grid: Typography On The Web” by Elliot Dahl.

On the other hand, digital products or services are designed to provide a solution to a specific problem, usually with multiple actions and possible flows. It means more information, more content and more components, all in the same space.

For this case, I personally find it rare to use more than 24px for texts. It’s more common to use small sizes for components — usually from 12 to 18 pixels depending on the text’s importance.

If you’re designing a digital product, it is useful to talk to the developers first. It’s easier to maintain a typography scale based on EM/REM rather than actual pixels, and creating a rule to convert pixels into EM/REM multiples is always recommended.

Related resource: “Defining A Modular Type Scale For Web UI” by Kelly Dern.
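The pixel-to-rem rule itself can be tiny: with the browser default of 16px as the root size, every pixel value in the scale simply divides by 16. A sketch in plain CSS (the sizes are examples, not a prescribed scale):

    html  { font-size: 100%; }      /* 16px in most browsers, so 1rem = 16px */
    body  { font-size: 1rem; }      /* 16px */
    small { font-size: 0.875rem; }  /* 14px */
    h2    { font-size: 1.5rem; }    /* 24px */
    h1    { font-size: 2.25rem; }   /* 36px */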

Color Scheme

Second, we need to define the color scheme. I think it’s best if you divide this task into two parts.

  1. First, you need to define the main colors of the system. I recommend keeping it simple and using a maximum of four or five colors (including validation colors) because the more colors you include here, the more stuff you’ll have to maintain in the future.
  2. Next, generate more color values using Sass functions such as “Lighten” and “Darken”; this works really well for user interfaces. The main benefit of this technique is that you use the same hue for the different variants and obtain a mathematical rule that can be automated in the code. You can’t do it directly with Figma, but any Sass color generator will work just fine, for example, SassMe by Jim Nielsen. I like to increment the functions in 1% steps to have a wider selection of colors. (A rough sketch of the idea in code follows the figure below.)
Once you have your main colors (in our case, blue and grey), you can generate gradients using lighten and darken functions. (Large preview)
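Outside Figma, the idea behind those Sass functions is simply nudging the lightness of the same hue. You could approximate the resulting palette with plain CSS custom properties; a sketch using hypothetical names that follow the “Color/Variant” convention described further below:

    :root {
      /* Keep hue and saturation fixed; step only the lightness. */
      --primary:    hsl(212, 80%, 45%);  /* Primary/Default */
      --primary-l1: hsl(212, 80%, 55%);  /* Primary/L1 (lightened) */
      --primary-l2: hsl(212, 80%, 65%);  /* Primary/L2 */
      --primary-d1: hsl(212, 80%, 35%);  /* Primary/D1 (darkened) */
      --primary-d2: hsl(212, 80%, 25%);  /* Primary/D2 */
    }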

Tip: In order to be able to apply future changes without having to rename the variables, avoid using the color as part of the color name. E.g., instead of $blue, use $primary.

Recommended reading: “What Do You Name Color Variables?” by Chris Coyier

Figma Styles

Once we have the typography scale and the color scheme set, we can use them to define the Library styles.

This is the first actual step into the library creation. This feature lets you use a single set of properties in multiple elements.

Styles are the way to control all the basic details in your library. (Large preview)
Concrete Example

Let’s say you define your brand color as a style: it’s a soft blue, and you originally apply it to 500 different elements. If it is later decided that you need to change it to a darker blue with more contrast, then thanks to styles you can update all 500 styled elements at once, so you won’t have to do it manually, element by element.

We can define styles for texts, colors, effects, and grids.

If you have variations of the same style, to make it easier to find them later, you can name the single styles and arrange them inside the panel as groups. To do so, just use this formula:

Group Name/Style Name

I’ve included a suggestion of how to name texts and colors styles below.

Text Styles

Properties that you can define within a text style:

  • Font size
  • Font weight
  • Line-height
  • Letter spacing

Tip: Figma drastically reduces the number of styles that we need to define in the library, as alignments and colors are independent of the style. You can combine a text style with a color style in the same text element.

You can apply all the typography scale we’ve seen before as text styles. (Large preview)

Text Styles Naming

I recommend using a naming rule such as “Size/Weight”
(eg: 16/Regular, 16/SemiBold, 16/Bold).

Figma only allows one level of indentation; if you need to include the font, you can always add a prefix before the size:
FontFamily Size/Weight or FF Size/Weight
(eg: SourceSansPro 16/Regular or SSP 16/Regular).

Color Styles

The color style uses its hex value (#FFF) and the opacity as properties.

Tip: Figma allows you to set a color style for the fill and a different one for the border within the same element, making them independent of each other.

You can apply color styles to fills, borders, backgrounds, and texts. (Large preview)

Color Styles Naming

For a better organization I recommend using this rule “Color/Variant”. We named our color styles using “Primary/Default” for the starter color, “Primary/L1”, “Primary/L2” for lighten variants, and “Primary/D1”, “Primary/D2” for darken variants.

Effects

When designing an interface, you might also need to create elements that use some effects such as drop shadows (drag-and-drop could be an example of a pattern that uses drop shadow effects). To have control over these graphic details, you can include effect styles such as shadows or layer blurs in the library, and also divide them into groups if necessary.

Define shadows and blurs to manage special interaction effects such as drag-n-drop. (Large preview)

Grids

To provide something very useful for your team, include the grid styles. You can define the 8px grid, a 12-column grid, and flexible grids, so your team won’t need to recreate them.

There’s no need to memorize the grid sizes anymore. (Large preview)

Tip: Taking advantage of this feature, you can provide all the different breakpoints as ‘grid styles’.

Master Component

Figma lets you generate multiple instances of the same component and update them through a single master component. It’s easier than you might think: you can start with some small elements and then use them to evolve your library.

One master component to rule them all! (Large preview)

To explain this workflow better, I will use one of the basic components all the libraries have: the..

Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement
Anselm Hannemann, 2019-06-14

Last week I read about the web turning into a dark forest. This made me think, and I’m convinced that there’s hope in the dark forest. Let’s stay positive about how we can contribute to making the web a better place and stick to the principle that each one of us is able to make an impact with small actions. Whether it’s you adding Webmentions, removing tracking scripts from a website, recycling plastic, picking up trash from the street to throw it into a bin, or cycling instead of driving to work for a week, we all can make things better for ourselves and the people around us. We just have to do it.

News
  • Safari went ahead by introducing their new Intelligent Tracking Protection and making it the new default. Now Firefox followed, enabling their Enhanced Tracking Protection by default, too.
  • Chrome 75 brings support for the Web Share API which is already implemented in Safari. Latency on canvas contexts has also been improved.
  • The Safari Technology Preview Release 84 introduced Safari 13 features: warnings for weak passwords, dark mode support for iOS, support for aborting Fetch requests, FIDO2-compliant USB security keys with the Web Authentication standard, support for “Sign In with Apple” (for Safari and WKWebView). The Visual Viewport API, ApplePay in WKWebView, screen sharing via WebRTC, and an API for loading ES6 modules are also supported from now on.
  • There’s an important update to Apple’s AppStore review guidelines that requires developers to offer “Sign In with Apple” in their apps in case they support third-party sign-in once the service is available to the public later this year.
  • Firefox 67 is out now with the Dark Mode CSS media query, WebRender, and side-by-side profiles that allow you to run multiple instances in parallel. Furthermore, enhanced privacy controls are built in against crypto miners and fingerprinting, as well as support for AV1 on Windows, Linux, and macOS for videos, String.prototype.matchAll(), and dynamic imports.
General
  • The web relies on so many open-source projects, and, yet, here’s what it looks like to live off an open-source budget. Most authors are below the poverty line, forced to live in cheaper countries or not able to make a living at all from their public service of providing reliable, open software for others who then use it commercially.
  • We all know that annoying client who ignores your knowledge and gets creative on their own. As a developer, Holger Bartel experienced it dozens of times; now he found himself in the same position, having ordered a fine drink and then messed it up.
UI/UX
  • With so many dark patterns built into the software and websites we use daily, Fabricio Teixeira and Caio Braga call for a tech diet for users.
“Dark patterns try to manipulate users to engage further, deeper, or longer on a site or app. The world needs a tech diet, and designers can help make it a reality.” (Image credit)

CSS
  • The CSS feature for truncating multi-line text has been implemented in Firefox. -webkit-line-clamp: 3;, for example, will truncate text at the end of line three (see the snippet below).
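A minimal version of that pattern, with a hypothetical .teaser class (the companion properties are needed for the clamp to take effect):

    .teaser {
      display: -webkit-box;          /* the legacy box model the clamp relies on */
      -webkit-box-orient: vertical;
      -webkit-line-clamp: 3;         /* show at most three lines */
      overflow: hidden;              /* clip the rest and show an ellipsis */
    }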
Security & Privacy
  • Anil Dash tries to find an answer to the question if we can trust a company in 2019.
  • Kevin Litman-Navarro analyzed over 150 privacy policies and shares his findings in a visual story. Not only does it take about 15 minutes on average to read a privacy policy, but most of them require a college degree or even professional career to understand them.
  • Our view on privacy hasn’t changed much since the 18th century, but the circumstances are different today: Companies have a wild appetite to store more and more data about more people in a central place — data that was once exclusively accessible by state authorities. We should redefine what privacy, personal data, and consent are, as Maciej Cegłowski argues in “The new wilderness.”
  • The people at WebKit are very active when it comes to developing clever solutions to protect users without compromising too much on usability and keeping the interests of publishers and vendors in mind at the same time. Now they introduced “privacy preserving ad click attribution for the web,” a technique that limits the data which is sent to third parties while still providing useful attribution metrics to advertisers.
Most privacy policies on the web are harder to read than Stephen Hawking’s “A Brief History Of Time,” as Kevin Litman-Navarro found out by examining 150 privacy policies. (Image credit)

Accessibility
  • Brad Frost describes a great way to reduce motion on websites (of animated GIFs, for example), using the picture element and its media query feature.
Tooling
  • The IP Geolocation API is an open-source real-time IP to Geolocation JSON API with detailed countries data integration that is based on the Maxmind Geolite2 database.
  • Pascal Landau wrote a step-by-step tutorial on how to build a Docker development setup for PHP projects, and yes, it contains everything you might need to apply it to your own projects.
Work & Life
  • Roman Imankulov from Doist shares insights into decision-making in a flat organization.
  • As a society, we’re overworked, have too many belongings, yet crave more, and companies only exist to grow indefinitely. This is how we kick-started climate change in the past century and this is how we got more people than ever into burn-outs, depressions, and various other health issues, including work-related suicides. Philipp Frey has a bold theory that breaks with our current system: Research by Nässén and Larsson suggests that a 1% decrease in working hours could lead to a 0.8% decrease in GHG emissions. Taking it further, the paper suggests that working 12 hours a week would allow us to easily achieve climate goals, if we’re also changing the economy to not entirely focus on growth anymore. An interesting study as it explores new ways of working, living, and consuming.
  • Leo Babauta shares a method that helps you acknowledge when you’re tired. It’s hard to accept, but we are humans and not machines, so there are times when we feel tired and our batteries are low. The best way to recover is realizing that this is happening and focusing on it to regain some energy.
  • Many of us are trying to achieve some minutes or hours of “deep work” a day. Fadeke Adegbuyi’s “The Complete Guide to Deep Work” shares valuable tips to master it.
Going Beyond…
  • People who live a “zero waste” life are often seen as extreme, but this is only one point of view. Here’s the other side where one of the “extreme” persons reminds us that it used to be normal to go to a farmer’s market to buy things that aren’t packed in plastic, to ride a bike, and drink water from a public fountain. Instead, our consumerism has become quite extreme and needs to change if we want to survive and stay healthy.
  • Sweden wants to become climate neutral by 2045, and now they presented an interesting visualization of the plan. It’s designed to help policymakers identify and fill in gaps to ensure that the goal will be achieved. The visualization is open to the public, so anyone can hold the government accountable.
  • Everybody loves them, many have them: AirPods. However, they are an environmental disaster, as this article shows.
  • The North Face tricking Wikipedia is advertising’s dark side.
  • The New York Times published a guide which helps us understand our impact on climate change based on the food we eat. This is not about going vegan but how changing eating habits can make a difference, both to the environment and our own health.
Inspired Design Decisions: Avaunt Magazine
Andrew Clarke, 2019-06-13

I hate to admit it, but five or six years ago my interest in web design started to wane. Of course, owning a business meant I had to keep working, but staying motivated and offering my best thinking to clients became a daily struggle.

Looking at the web didn’t improve my motivation. Web design had stagnated, predictability had replaced creativity, and ideas seemed less important than data.

The reasons why I’d enjoyed working on the web no longer seemed relevant. Design had lost its joyfulness. Complex sets of build tools and processors had even supplanted the simple pleasure of writing HTML and CSS.

When I began working with the legendary newspaper and magazine designer Mark Porter, I became fascinated by art direction and editorial design. As someone who hadn’t studied either at art school, everything about this area of design was exciting and new. I read all the books about influential art directors I could find and started collecting magazines from the places I visited around the world.

The more inspired I became by magazine design, the faster my enthusiasm for web design bounced back. I wondered why many web designers think that print design is old-fashioned and irrelevant to their work. I thought about why so little of what makes print design special is being transferred to the web.

My goal became to hunt for inspiring examples of editorial design, study what makes them unique, and find ways to adapt what I’d learned to create more compelling, engaging, and imaginative designs for the web.

My bookcases are now chock full of magazine design inspiration, but my collection is still growing. I have limited space, so I’m picky about what I pick up. I buy a varied selection, and rarely collect more than one issue of the same title.

I look for exciting page layouts, inspiring typography, and innovative ways to combine text with images. When a magazine has plenty of interesting design elements, I buy it. However, if a magazine includes only a few pieces of inspiration, I admit I’m not above photographing them before putting it back on the shelf.

I buy new magazines as regularly as I can, and a week before Christmas, a few friends and I met in London. No trip to the “Smoke” is complete without a stop at Magma, and I bought several new magazines. After I explained my inspiration addiction, one friend suggested I write about why I find magazine design so inspiring and how magazines influence my work.

Avaunt magazine. (Large preview)

That conversation sparked the idea for a series on making inspired design decisions. Every month, I’ll choose a publication, discuss what makes its design distinctive, and how we might learn lessons which will help us to do better work for the web.

As an enthusiastic user of HTML and CSS, I’ll also explain how to implement new ideas using the latest technologies: CSS Grid, Flexbox, and Shapes.

I’m happy to tell you that I’m inspired and motivated again to design for the web and I hope this series can inspire you too.

Andy Clarke
April 2019

Avaunt Magazine: Documenting The Extraordinary

What struck me most about Avaunt was the way its art director has used colour, layout, and type in diverse ways while maintaining a consistent feel throughout the magazine. (Large preview)

One look at me is going to tell you I’m not much of an adventurer. I don’t consider myself particularly cultured and my wife jokes regularly about what she says is my lack of style.

So what drew me to Avaunt magazine and its coverage of “adventure,” “culture,” and “style” when there are so many competing and varied magazines?

It often takes only a few page turns for me to decide if a magazine offers the inspiration I look for. Something needs to stand out in those first few seconds for me to look closer, be it an exciting page layout, an inspiring typographic treatment, or an innovative combination of images with text.

Avaunt has all of those, but what struck me most was the way its art director has used colour, layout, and type in diverse ways while maintaining a consistent feel throughout the magazine. There are distinctive design threads which run through Avaunt’s pages. The bold uses of a stencil serif and geometric sans-serif typeface are particularly striking, as is the repetition of black, white, and a red which Avaunt’s designers use in a variety of ways. Many of Avaunt’s creative choices are as adventurous as the stories it tells.

© Avaunt magazine. (Large preview)

Plenty of magazines devote their first few spreads to glossy advertising, and Avaunt does the same. Flipping past those ads finds Avaunt’s contents page and its fascinating four-column modular grid.

This layout keeps content ordered within spatial zones but maintains energy by making each zone a different size. This layout could be adapted to many kinds of online content and should be easy to implement using CSS Grid. I’m itching to try that out.
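As a sketch of how such a four-column modular grid might start life in CSS Grid (the class names and spans are hypothetical, chosen only to show zones of different sizes):

    .contents {
      display: grid;
      grid-template-columns: repeat(4, 1fr);
      grid-auto-rows: minmax(120px, auto);  /* each row of modules */
      gap: 1.5rem;
    }

    .zone-feature   { grid-column: 1 / 3; grid-row: 1 / 3; }  /* a 2 x 2 zone */
    .zone-secondary { grid-column: 3 / 5; grid-row: 1; }      /* a 2 x 1 zone */
    .zone-aside     { grid-column: 3 / 4; grid-row: 2; }      /* a single module */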

For sans-serif headlines, standfirsts, and other type elements which pack a punch, Avaunt uses MFred, designed initially for Elephant magazine by Matt Willey in 2011. Matt went on to art direct the launch of Avaunt and commissioned a stencil serif typeface for the magazine’s bold headlines and distinctive numerals.

Avaunt Stencil was designed in 2014 by London based studio A2-TYPE who have since made it available to license. There are many stencil fonts available, but it can be tricky to find one which combines boldness and elegance — looking for a stencil serif hosted on Google Fonts? Stardos would be a good choice for display size type. Need something more unique, try Caslon Stencil from URW.

Avaunt’s use of a modular grid doesn’t end with the contents page and is the basis for a spread on Moscow’s Polytechnic Museum of Cold War curiosities which first drew me to the magazine. This spread uses a three-column modular grid and spatial zones of various sizes.

What fascinates me about this Cold War spread is how modules in the verso page combine to form a single column for text content. Even with this column, the proportions of the module grid still inform the position and size of the elements inside.

© Avaunt magazine. (Large preview)

While the design of many of Avaunt’s pages is devoted to a fabulous reading experience, many others push the grid and pull their foundation typography styles in different directions. White text on dark backgrounds, brightly coloured spreads where objects are cut away to blend with the background. Giant initial caps which fill the width of a column, and large stencilled drop caps which dominate the page.

Avaunt’s playful designs add interest, and the arrangement of pages creates a rhythm which I very rarely see online. These variations in design are held together by the consistent use of Antwerp — also designed by A2-TYPE — as a typeface for running text, and a black, white, and red colour theme which runs throughout the magazine.

© Avaunt magazine. (Large preview)

Studying the design of Avaunt magazine can teach and inspire. How a modular grid can help structure content in creative ways without it feeling static. (I’ll teach you more about modular grids later.)

How a well-defined set of styles can become the foundation for distinctive and diverse designs, and finally how creating a rhythm throughout a series of pages can help readers stay engaged.

Next time you’re passing your nearest magazine store, pop in and pick up a copy of Avaunt Magazine. It’s about adventure, but I bet it can help inspire your designs to be more adventurous too.

Say Hello To Skinny Columns

For what feels like an eternity, there’s been very little innovation in grid design for the web. I had hoped the challenges of responsive design would result in creative approaches to layout, but sadly the opposite seems to be true.

Top: Squeezing an image into one column reduces its visual weight and upsets the balance of my composition. Middle: Making the image fill two standard columns also upsets that delicate balance. Bottom: Splitting the final column, then adding half its width to another, creates the perfect space for my image, and a more pleasing overall result. (Large preview)

Instead of original grid designs, one, two, three, or four block arrangements of content became the norm. Framework grids, like those included with Bootstrap, remain the starting point for many designers, whether they use those frameworks or not.

It’s true that there’s more to why so much of the web looks the same than web designers using the same grid. After all, there have been similar conventions for magazines and newspapers for decades, but somehow magazines haven’t lost their personality in the way many websites have.

I’m always searching for layout inspiration and magazines are a rich source. Reading Avaunt reminded me of a technique I came across years ago but hadn’t tried. This technique adds one extra narrow column to an otherwise conventional column grid. In print design, this narrow column is often referred to as a “bastard column or measure” and describes a block of content which doesn’t conform to the rest of a grid. (This being a family friendly publication, I’ll call it a “skinny column.”)

In these first examples, squeezing an image into one column reduces its visual weight and upsets the balance of my composition. Making the image fill two standard columns also upsets that delicate balance.

Splitting the final column, then adding half its width to another, creates the perfect space for my image, and a more pleasing overall result.

(Large preview)
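Translated into CSS Grid, the skinny column is just one extra, narrower track. A sketch with hypothetical class names:

    .layout {
      display: grid;
      /* Three standard columns, then the fourth split in half:
         one half widens a neighbouring element, the other is the skinny column. */
      grid-template-columns: 1fr 1fr 1fr 0.5fr 0.5fr;
      gap: 1rem 2rem;
    }

    /* The image spans a standard column plus half of the split one: 1.5 columns of width. */
    .photo { grid-column: 3 / 5; }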

I might use a skinny column to inform the width of design elements. This Mini Cooper logo matches the width of my skinny column, and its size feels balanced with the rest of my composition.

Bringing A Healthy Code Review Mindset To Your Team
Sandrina Pereira, 2019-06-12

A ‘code review’ is a moment in the development process in which you (as a developer) and your colleagues work together and look for bugs within a recent piece of code before it gets released. In such a moment, you can be either the code author or one of the reviewers.

When doing a code review, you might not be sure what you are looking for. On the other side, when submitting one, you might not know what to expect. This lack of empathy and these wrong expectations between the two sides can trigger unfortunate situations and rush the process until it ends in an unpleasant experience for both sides.

In this article, I’ll share how this outcome can be changed by changing your mindset during a code review:

👩‍💻👨‍💻 Working As A Team

Foster A Culture Of Collaboration

Before we start, it’s fundamental to understand the value of why code needs to be reviewed. Knowledge sharing and team cohesion are beneficial to everyone, however, if done with a poor mindset, a code review can be a huge time consumer with unpleasant outcomes.

The team attitude and behavior should embrace the value of a nonjudgmental collaboration, with the common goal of learning and sharing — regardless of someone else’s experience.

Include Code Reviews In Your Estimations

A complete code review takes time. If a change took one week to be made, don’t expect the code review to take less than a day. It just doesn’t work like that. Don’t try to rush a code review nor look at it as a bottleneck.

Code reviews are as important as writing the actual code. As a team, remember to include code reviews in your workflow and set expectations about how long a code review might take, so everyone is synced and confident about their work.

Save Time With Guidelines And Automation

An effective way to guarantee consistent contributions is to integrate a Pull Request template into the project. This will help the author submit a healthy PR with a complete description. A PR description should explain the purpose of the change, the reason behind it, and how to reproduce it. Screenshots and reference links (Git issue, design file, and so on) are the final touches for a self-explanatory submission.

Doing all of this will prevent early comments from reviewers asking for more details. Another way of making code reviews seem less nitpicky is to use linters to find code formatting and code-quality issues before a reviewer even gets involved. Whenever you see a repetitive comment during the review process, look for ways to minimize it (be it with better guidelines or automation).

Stay A Student

Anyone can do a code review, and everyone must receive a code review — no matter the seniority level. Receive any feedback gratefully as an opportunity to learn and to share knowledge. Look at any feedback as an open discussion rather than a defensive reaction. The amateur is defensive. The professional finds learning to be enjoyable. Stay humble because the second you stop being a student, your knowledge becomes fragile.

The Art Of Selecting Reviewers

In my opinion, picking the reviewers is one of the most important decisions for an effective and healthy code review as a team.

Let’s say your colleague is submitting a code review and decides to tag “everyone” because, unconsciously, she/he might want to speed up the process, deliver the best code possible, or make sure everyone knows about the code change. Each one of the reviewers receives a notification, then opens the PR link and sees a lot of people tagged as reviewers. The thought of “someone else will do it” pops up in their minds, leading them to ignore the code review and close the link.

Since nobody has started the review yet, your colleague will remind each one of the reviewers to do it, thereby putting pressure on them. Once the reviewers start, the review process takes forever because everyone has their own opinion and it’s a nightmare to reach a common agreement. Meanwhile, whoever decided not to review the code is spammed with zillions of email notifications containing all of the review comments, creating noise in their productivity.

This is something I see happening more than I’d like: Pull Requests with a bunch of people tagged as reviewers and ending, ironically, in a non-productive code review.

There are some common, effective flows when selecting the reviewers. One possible flow is to pick two to three colleagues who are familiar with the code and ask them to be reviewers. Another approach, explained by the GitLab team, is to have a chained review flow in which the author picks one reviewer who picks another reviewer, until at least two reviewers agree with the code. Regardless of the flow you choose, avoid having too many reviewers. Being able to trust your colleagues’ judgment of the code is the key to conducting an effective and healthy code review.

Have Empathy

Spotting pieces of code to improve is just a part of a successful code review. Just as important is to communicate that feedback in a healthy way by showing empathy towards your colleagues.

Before writing a comment, remember to put yourself in the other people’s shoes. It’s easy to be misunderstood when writing, so review your own words before sending them. Even if a conversation starts being ugly, don’t let it affect you — always stay respectful. Speaking well to others is never a bad decision.

Know How To Compromise

When a discussion isn’t solved quickly, take it to a personal call or chat. Analyze together if it’s a subject worth paralyzing the current change request or if it can be addressed in another one.

Be flexible but pragmatic and know how to balance efficiency (delivering) and effectiveness (quality). It’s a compromise to be made as a team. In these moments I like to think of a code review as an iteration rather than a final solution. There’s always room for improvement in the next round.

In-Person Code Reviews

Gathering the author and the reviewer together in a pair programming style can be highly effective. Personally, I prefer this approach when the code review involves complex changes or there’s an opportunity for a large amount of knowledge sharing. Despite this being an offline code review, it’s a good habit to leave online comments about the discussions taken, especially when changes need to be made after the meeting. This is also useful to keep other online reviewers up to date.

Learn From Code Review Outcomes

When a code review has gone through a lot of changes, taken too long, or already had too many discussions, gather your team together and analyze the causes and which actions can be taken from it. When a code review is complex, splitting future tasks into smaller ones makes each review easier.

When the experience gap is big, adopting pair programming is a strategy with incredible results — not only for knowledge sharing but also for off-line collaboration and communication. Whatever the outcome, always aim for a healthy dynamic team with clear expectations.

📝 As An Author

One of the main concerns when working on a code review as an author is to minimize the reviewer’s surprise when reading the code for the first time. That’s the first step to a predictable and smooth code review. Here’s how you can do it.

Establish Early Communication

It’s never a bad idea to talk with your future reviewers before coding too much. Whenever it’s an internal or external contribution, you could do a refinement together or a little bit of pair programming at the beginning of the development to discuss solutions.

There’s nothing wrong with asking for help as a first step. In fact, working together outside the review is an important first step to prevent early mistakes and guarantee an initial agreement. At the same time, your reviewers become aware that a code review is on the way.

Follow The Guidelines

When submitting a Pull Request to be reviewed, remember to add a description and to follow the guidelines. This will save the reviewers from spending time trying to understand the context of the new code. Even if your reviewers already know what it is about, you can take this opportunity to improve your writing and communication skills.

Be Your First Reviewer

Seeing your own code in a different context allows you to find things you would miss in your code editor. Do a code review of your own code before asking your colleagues. Have a reviewer mindset and really go through every line of code.

Personally, I like to annotate my own code reviews to better guide my reviewers. The goal here is to prevent comments caused by a simple lack of attention and to make sure you followed the contribution guidelines. Aim to submit a code review just as you would like to receive one.

Be Patient

After submitting a code review, don’t jump right into a private message asking your reviewers to “take a look, it only takes a few minutes”, indirectly craving that thumbs-up emoji. Trying to rush your colleagues to do their work is not a healthy mindset. Instead, trust your colleagues’ workflow as they trust you to submit a good code review. Meanwhile, wait for them to get back to you when they are available. Don’t look at your reviewers as a bottleneck. Remember to be patient because hard things take time.

Be A Listener

Once a code review is submitted, comments will come, questions will be asked, and changes will be proposed. The golden rule here is to not take any feedback as a personal attack. Remember that any code can benefit from an outside perspective.

Don’t look at your reviewers as an enemy. Instead, take this moment to set aside your ego, accept that you make mistakes, and be open to learning from your colleagues, so that you can do a better job the next time.

👀 As A Reviewer

Plan Ahead Of Your Time

When you are asked to be a reviewer, don’t interrupt what you’re doing right away. That’s a common mistake I see all the time. Reviewing code demands your full attention, and each time you switch code contexts, you decrease your productivity by wasting time recalibrating your focus. Instead, plan ahead by allocating time slots of your day to code reviews.

Personally, I prefer to do code reviews first thing in the morning or after lunch, before picking up any of my other tasks. That’s what works best for me because my brain is still fresh, without a previous code context to switch off from. Besides that, once I’m done with the review, I can focus on my own tasks while the author reevaluates the code based on the feedback.

Be Supportive

When a Pull Request doesn’t follow the contribution guidelines, be supportive, especially with newcomers. Take that moment as an opportunity to guide the author in improving their contribution. Meanwhile, try to understand why the author didn’t follow the guidelines in the first place. Perhaps there’s room for improvement that you were not aware of before.

Check Out The Branch And Run It

While reviewing the code, make it run on your own computer — especially when it involves user interfaces. This habit will help you to better understand the new code and spot things you might miss if you just used a default diff-tool in the browser which limits your review scope to the changed code (without having the full context as in your code editor).

Ask Before Assuming

When you don’t understand something, don’t be afraid to say it and ask questions. When asking, remember to first read the surrounding code and avoid making assumptions.

Most questions fit into one of these two categories:

  1. “How” Questions
    When you don’t understand how something works or what it does, evaluate with the author whether the code is clear enough. Don’t mistake your own unfamiliarity for bad code: there’s a difference between code that is hard to read and code that you simply aren’t aware of. Be open to learning and using a new language feature, even if you don’t know it deeply yet; however, use it only if it simplifies the codebase.
  2. “Why” Questions
    When you don’t understand the “why”, don’t be afraid of suggesting a comment in the code, especially if it’s an edge case or a bug fix. The code should be self-explanatory when it comes to what it does; comments are a complement that explains the why behind a certain approach. Explaining the context is highly valuable for maintainability, and it will save someone from deleting a block of code that they thought was useless. (Personally, I like to comment the code until I feel safe about forgetting it later.)
Constructive Criticism

Once you find a piece of code you believe should be improved, always remember to recognize the author’s effort in contributing to the project and express yourself in a receptive and transparent way.

  • Open discussions.
    Avoid formalizing your feedback as a command or accusation such as “You should…” or “You forgot…”. Express your feedback as an open discussion instead of a mandatory request. Remember to comment on the code, not the author; if a comment is not about the code, then it doesn’t belong in a code review. As said before, be supportive. Using a passive voice, talking in the plural, and expressing questions or suggestions are good ways to emphasize empathy with the author.
Instead of “Extract this method from here…” prefer “This method should be extracted…” or “What do you think of extracting this method…”
  • Be clear.
    Feedback can easily be misinterpreted, especially when it comes to expressing an opinion versus sharing a fact or a guideline. Always remember to make that clear right away in your comment.
Unclear: “This code should be….”

Opinion: “I believe this code should be…”

Fact: “Following [our guidelines], this code should be…”.
  • Explain why.
    What’s obvious to you might not be obvious to others. It never hurts to explain the motivation behind your feedback, so that the author not only understands how to change something but also what the benefit of the change is.
Opinion: “I believe this code could be…”

Explained: “I believe this code could be (…) because this will improve readability and simplify the unit tests.”
  • Provide examples.
    When sharing a code feature or a pattern which the author is not familiar with, complement your suggestion with a reference as guidance. When possible, go beyond MDN docs and share a snippet or a working example adapted to the current code scenario. When writing an example is too complex, be supportive and offer to help in person or by a video call.
Besides saying something such as “Using filter will help us to [motivation],” also say, “In this case, it could be something like: [snippet of code]. You can check [an example at Finder.js]. Any doubt, feel free to ping me on Slack.”
  • Be reasonable.
    If the same error is made multiple times, prefer to leave just one comment about it and remind the author to review it in the other places, too. Adding redundant comments only creates noise and might be counterproductive.
Keep The Focus

Avoid proposing code changes unrelated to the current code. Before suggesting a change, ask yourself if it’s strictly necessary at that moment. This type of feedback is especially common in refactors. It’s one of the many trade-offs between efficiency and effectiveness that we need to make as a team.

When it comes to refactors, personally, I tend to prefer small but effective improvements. Those are easier to review and there’s less chance of having code conflicts when updating the branch with the target branch.

Set Expectations

If you leave a code review half-done, let the author know about it, so time expectations are under control. In the end, also let the author know if you agree with the new code or if you would like to re-review it once again later.

Before approving a code review, ask yourself if you are comfortable about the possibility of touching that code in the future. If yes, that’s a sign you did a successful code review!

Learn To Refuse A Code Review

Although nobody likes to admit it, sometimes you have to refuse a code review. The moment you accept a code review but try to rush it, you compromise the project’s quality as well as your team’s mindset.

When you agree to review someone else’s code, that person is trusting your capabilities; it’s a commitment. If you don’t have time to be a reviewer, just say no to your colleague(s). I really mean it; don’t let your colleagues wait for a code review that will never happen. Remember to communicate and keep expectations clear. Be honest with your team and, even better, with yourself. Whatever your choice, do it healthily, and do it right.

Conclusion

Given enough time and experience, doing code reviews will teach you much more than just technical knowledge. You’ll learn to give and receive feedback from others, as well as to make decisions with more thought put into them.

Each code review is an opportunity to grow as a community and individually. Even outside code reviews, remember to foster a healthy mindset until the day it comes naturally to you and your colleagues. Because sharing is what makes us better!

(ra, yk, il)
UX Optimizations For Keyboard-Only And Assistive Technology Users Aaron Pearlman 2019-06-11T14:00:59+02:00

(This is a sponsored article.) One of the cool things about accessibility is that it forces you to see and think about your application beyond the typical sighted, mouse-based user experience. Users who navigate via keyboard only (KO) and/or assistive technology (AT) are heavily dependent not only on your application’s information architecture being thoughtful, but the affordances your application makes for keeping the experience as straightforward as possible for all types of users.

In this article, we’re going to go over a few of those affordances that can make your KO/AT user experiences better without really changing the experience for anyone else.

Additions To Your Application’s UX

These are features that you can add to your application to improve the UX for KO/AT users.

Skip Links

A skip link is a navigation feature that sits invisibly at the top of websites or applications. When present, it becomes visible on your application’s first tab stop.

A skip link allows your user to “skip” to various sections of interest within the application without having to tab-cycle to them. The skip link can contain multiple links if your application has several areas of interest that you feel your users should be able to reach quickly from your application’s point of entry.

For KO/AT users, this is a helpful tool that both allows them to rapidly traverse your app and helps orient them to your application’s information architecture. All other users will likely never even know this feature exists.

Here’s an example of how we handle skip links. After you click the link, hit Tab ⇥ and look in the upper-left corner. The skip link has two links: Main Content and Code Samples. You can use Tab ⇥ to move between them and hit Enter to navigate to the link.
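
The markup behind a skip link is fairly small. The sketch below is only an illustration (the class name and the target id are made up, not taken from the demo above): an anchor at the very top of the page that stays visually hidden until it receives keyboard focus.

<a class="skip-link" href="#main-content">Skip to main content</a>
<!-- ...page header and navigation... -->
<main id="main-content">
  <!-- main page content -->
</main>

/* Visually off-screen until focused via the keyboard */
.skip-link {
  position: absolute;
  top: -40px;
  left: 8px;
  padding: 8px;
  background: #ffffff;
}
.skip-link:focus {
  top: 8px;
}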

Shortcuts/Hotkey Menus

This is a feature that I think everyone is familiar with: shortcuts and hotkeys. You’ve likely used them from time to time; they are very popular amongst power users of an application and come in a variety of incarnations.

For KO/AT users, shortcuts/hotkeys are invaluable. They allow them to use the applications, as intended, without having to visually target anything or tab through the application to get to an element or content. While frequent actions and content are always appreciated when represented in a shortcut/hotkey menu, you may also want to consider some slightly less frequent actions that may be buried in your UI (for good reason) but are still something that a user would want to be able to access.

Making shortcuts for those functions will be extremely helpful to KO/AT users. You can make the command a bit more involved, such as using three keystrokes to invoke it, to imply that it’s a less frequently used piece of functionality. If you have a shortcut/hotkey menu, make sure to find a way to promote it in your application so your users, especially your KO/AT users, can find it and use it effectively.
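
As a rough illustration (the key bindings and the openSearchPanel/openShortcutHelp functions are hypothetical, not part of any particular product), a global hotkey can be as simple as a keydown listener that stays out of the way of form fields:

// Hypothetical example: "/" opens search, Shift + ? opens the shortcut help.
document.addEventListener('keydown', (event) => {
  // Don't steal keystrokes the user is typing into a form control.
  const tag = event.target.tagName;
  if (tag === 'INPUT' || tag === 'TEXTAREA' || tag === 'SELECT') {
    return;
  }
  if (event.key === '/') {
    event.preventDefault();
    openSearchPanel();   // hypothetical app function
  } else if (event.key === '?' && event.shiftKey) {
    event.preventDefault();
    openShortcutHelp();  // hypothetical app function
  }
});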

User Education

User education refers to a piece of functionality that directs the users on what to do, where to go, or what to expect. Tooltips, point outs, info bubbles, etc. are all examples of user education.

One thing you should ask yourself when designing, placing, and/or writing copy for your user education is:

“If I couldn't see this, would it still be valuable to understand ______?”

Many times, simply reorienting the user education through that lens can lead to a much better experience for everyone. For example, rather than saying “Next, click on the button below,” you may want to write, “To get started, click the START button.” The second method removes the visual orientation and instead focuses on the common information both a sighted and a KO/AT user have at their disposal.

Note: I should mention it’s perfectly OK to use user education features, like point outs, to visually point out things in the application; just make sure the companion text lets your KO/AT users understand the same things that are being referred to visually.

See the Pen ftpo demo by Harris Schneiderman.

Augmentations To Your Application’s UX

There are changes or tweaks you can make to common components/features to improve the UX for KO/AT users.

Modal Focusing

Now we’re getting into the nitty-gritty. One of the great things about accessibility is how it opens the doorway to novel ways to solve problems you may have not considered before. You can make something fully WCAG 2.0 AA accessible and solve the problem with very different approaches. For modals, we at Deque came up with an interesting approach that would be totally invisible to most sighted-users but would be noticed by KO/AT users almost immediately.

For a modal to be accessible it needs to announce itself when evoked. Two common ways to do this are: focus the modal’s body after the modal is open or focus the modal’s header (if it has one) after the modal is open. You do this so the user’s AT can read out the modal’s intent like “Edit profile” or “Create new subscription”.

After you focus the body or header, hitting Tab ⇥ will send focus to the next focusable element in the modal — commonly a field or, if it’s in the header, sometimes it’s the close button (X). Continuing to tab will move you through all the focusable elements in the modal, typically finishing with terminal buttons like SAVE and/or CANCEL.

Now we get to the interesting part. After you focus the final element in the modal, hitting Tab ⇥ again will “cycle” you back to the first tab stop, which in the case of the modal will be either the body or the header, because that’s where we started. However, in our modals we “skip” that initial tab stop and take you to the second stop (which in our modals is the close button (X) in the upper corner). We do this because the modal doesn’t need to keep announcing itself over and over on each cycle. It only needs to do it on the initial evocation, not on any subsequent trips through, so we have a special programmatic stop which skips itself after the first time.

This is a small (but appreciated) usability improvement we came up with exclusively for KO/AT users which would be completely unknown to everyone else.
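
The article doesn’t show the exact implementation, but a rough sketch of the idea (the element id and the focus order are assumptions, not Deque’s code) could look like this: focus the modal heading programmatically when the modal opens so it is announced once; because the heading only carries tabindex="-1", it never becomes a regular Tab stop, so subsequent cycles skip it.

// Rough sketch of the "announce once, then skip" behavior.
const modal = document.getElementById('edit-profile-modal'); // assumed id
const heading = modal.querySelector('h2');

function openModal() {
  modal.hidden = false;
  // tabindex="-1" makes the heading focusable from script only,
  // so Tab never lands on it again after this initial focus.
  heading.setAttribute('tabindex', '-1');
  heading.focus();
  // A separate keydown handler would trap Tab / Shift+Tab inside the
  // modal, looping between its first and last focusable elements.
}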

See the Pen modal demo by Harris Schneiderman.

Navigation Menus Traversing And Focus/Selected Management

Navigation menus are tricky. They can be structured in a multitude of ways, tiered, nested, and have countless mechanisms of evocation, disclosure, and traversing. This makes it important to consider how they are interacted with and represented for KO/AT users during the design phase. Good menus should be “entered” and “exited”, meaning you tab into a menu to use it and tab out of it to exit it (if you don’t use it).

This idea is best illustrated with a literal example, so let’s take a look at our 2-tier, vertical navigation from Cauldron.

  1. Go to https://pattern-library.dequelabs.com/;
  2. Hit Tab ⇥ three times. The first tab stop is the skip link (which we went over previously), the second is the logo which acts as “return to home” link, and the third tab enters the menu;
  3. Now that you are in the menu, use the arrow keys (→ and ↓) to move and open sections of the menu;
  4. Hitting Tab ⇥ at any point will exit you from the menu and send you to the content of the page.

Navigation menus can also work in conjunction with some of the previous topics such as shortcut/hotkey menus to make using the menu even more efficient.

Logical Focus Retention (E.g. Deleting A Row, Returning Back To A Page)

Focus retention is very important. Most people are familiar, at least in concept, with focusing elements in their logical intended order on the page; however, it can get murky when an element or content changes/appears/disappears.

  • Where does focus go when the field you are on is deleted?
  • What about when you’re sent to another tab where the application has a new context?
  • What about after a modal is closed due to a terminal action like SAVE?

For a sighted user there are visual cues which can inform them of what happened.

Here’s an example: You have an Edit Recipe modal which lets your user add and remove ingredients. There is one ingredient field with an “Add another ingredient” button below it. (Yes, it’s styled as a link, but that’s a topic for another day.) Your focus is on the button. You click the button and a new field appears between the button and the first field. Where should the focus go? Most likely your user added another ingredient in order to engage with it, so the focus should shift from the button into the newly added field.
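
A hedged sketch of that behavior (the ids and markup are made up for illustration) would simply move focus into the field it just created:

// When "Add another ingredient" is clicked, create the field and focus it.
const addButton = document.getElementById('add-ingredient');  // assumed id
const list = document.getElementById('ingredient-list');      // assumed id

addButton.addEventListener('click', () => {
  const field = document.createElement('input');
  field.type = 'text';
  field.setAttribute('aria-label', 'Ingredient');
  list.appendChild(field);
  field.focus(); // focus shifts from the button into the newly added field
});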

See the Pen focus retention by Harris Schneiderman.

The big takeaway from all this isn’t so much the specific examples but the mentality that supports them: consider the UX of your application through the lens of the KO/AT user as well as the sighted, mouse-only user. Some of the best and most clever ideas come from the most interesting and important challenges.

If you need help ensuring that your features are accessible, all of the examples above plus countless more can be tested using Deque’s free web accessibility testing application: axe pro. It’s free and you can sign up here.

(ra, il)
Styling In Modern Web Apps Ajay Ns 2019-06-10T14:00:59+02:00

If you search for how to style apps for the web, you’ll come across many different approaches and libraries, some even changing day by day. Block Element Modifier (BEM); preprocessors such as Less and SCSS; CSS-in-JS libraries, including JSS and styled-components; and, lately, design systems. You come across all of these in articles and blogs, tutorials and talks, and — of course — debates on Twitter.

How do we choose between them? Why do so many approaches exist in the first place? If you’re already comfortable with one method, why even consider moving to another?

In this article, I’m going to take a look at the tools I have used for production apps and sites I’ve worked on, comparing features from what I’ve actually encountered rather than summarizing the content from their readmes. This is my journey through BEM, SCSS, styled-components, and design systems in particular; but note that even if you use different libraries, the basic principles and approach remain the same for each of them.

CSS Back In The Day

When websites were just getting popular, CSS was primarily used to add funky designs to catch the user’s attention, such as neon billboards on a busy street:

Microsoft’s first site (left) and MTV’s site from the early 2000s.

Its use wasn’t for layout, sizing, or any of the basic needs we routinely use CSS for today, but as an optional add-on to make things fancy and eye-catching. As features were added to CSS, newer browsers supporting a whole new range of functionality and features appeared, and the standard for websites and user interfaces evolved — CSS became an essential part of web development.

It’s rare to find websites without a minimum of a couple hundred lines of custom styling or a CSS framework (at least, sites that don’t look generic or out of date):

Wired’s modern responsive site from InVision’s responsive web design examples post.

What came next is quite predictable. The complexity of user interfaces kept increasing, along with the use of CSS; but without any suggested guidelines and with a lot of flexibility, styling became complicated and dirty. Developers had their own ways of doing things, with it all coming down to somehow getting things to look the way the design said they were supposed to.

This, in turn, led to a number of common issues that many developers faced, like managing big teams on a project, or maintaining a project over a long period of time, while having no clear guides. One of the main reasons this happens even now, sadly, is that CSS is still often dismissed as unimportant and not worth paying much attention to.

CSS Is Not Easy To Manage

There’s nothing built into CSS for maintenance and management when it comes to large projects with teams, and so the common problems faced with CSS are:

  • Lack of code structure or standards greatly reduces readability;
  • Maintainability becomes harder as project size increases;
  • Specificity issues due to code not being readable in the first place.

If you’ve worked with Bootstrap, you’ll have noticed you’re unable to override the default styles and you might have fixed this by adding !important or considering the specificity of selectors. Think of a big project’s style sheets, with their large number of classes and styles applied to each element. Working with Bootstrap would be fine because it has great documentation and it aims to be used as a solid framework for styling. This obviously won’t be the case for most internal style sheets, and you’ll be lost in a world of cascaded styles.

In projects, this would be like a couple thousand lines of CSS in a single file, with comments if you’re lucky. You could also see a couple of !important used to finally get certain styles to work overriding others.

!important does not fix bad CSS.

You may have faced specificity issues but not understood how specificity works. Let’s take a look.


If two conflicting styles point at the same element, which one ends up being applied?

What is the order of weight of selectors such as inline styles, IDs, classes, attributes, and elements? Okay, I made it easy there; they’re listed in order of weight:

Start at 0; add 1,000 for a style attribute; add 100 for each id; add 10 for each attribute, class or pseudo-class; add 1 for each element name or pseudo-element.

So, for instance, when an id selector and a plain element selector target the same element, the id wins: it clearly has far more weight than element selectors. This is essentially the reason why your CSS rule sometimes doesn’t seem to apply. You can read about this in detail in Vitaly Friedman’s article, “CSS Specificity: Things You Should Know”.
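
To make the weights concrete, here is a small made-up example. Both rules target the same image; the element selectors add up to a specificity of 2, while the single id scores 100, so the red border wins:

<figure>
  <img id="hero-image" src="photo.jpg" alt="Example photo">
</figure>

/* Two element names: specificity 2 */
figure img {
  border: 4px solid blue;
}

/* One id: specificity 100, so this rule is applied */
#hero-image {
  border: 4px solid red;
}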

The larger the codebase, the greater the number of classes. These might even apply to or override different styles based on specificity, so you can see how quickly it can become difficult to deal with. Over and above this we deal with code structure and maintainability: it’s the same as the code in any language. We have atomic design, web components, templating engines; there’s a need for the same in CSS, and so we’ve got a couple of different approaches that attempt to solve these different problems.

Block Element Modifier (BEM)
“BEM is a design methodology that helps you to create reusable components and code sharing in front-end development.”

— getbem.com

The idea behind BEM is to create components out of parts of apps that are reused or are independent. Diving in, the design process is similar to atomic design: modularize things and think of each of them as a reusable component.

I chose to start out managing styles with BEM as it was very similar to the way React (which I was already familiar with) breaks down apps into reusable components.

Using BEM

BEM is nothing more than a guide: no new framework or language to learn, just CSS with a naming convention to organize things better. Following this methodology, you can implement the patterns you’ve already been using, but in a more structured manner. You can also quite easily do progressive enhancements to your existing codebase as it requires no additional tooling configuration or any other complexities.

Advantages
  • At its heart BEM manages reusable components, preventing random global styles overriding others. In the end, we have more predictable code, which solves a lot of our specificity problems.
  • There’s not much of a learning curve; it is just the same old CSS with a couple of guides to improve maintainability. It is an easy way of making code modular by using CSS itself.
Drawbacks
  • While promoting reusability and maintainability, a side effect of the BEM naming principle is that naming the classes can become difficult and time-consuming.
  • The more nested your component is in the block, the longer and more unreadable the class names become. Deeply nested or grandchild selectors often face this issue.
<!-- BEM class names below are illustrative examples -->
<div class="card">
  <p class="card__description">Lorem ipsum lorem</p>
  <div class="card__footer">
    <!-- Grandchild elements -->
    <a href="#" class="card__footer__link">Link</a>
  </div>
</div>

That was just a quick intro to BEM and how it solves our problems. If you’d like to take a deeper look into its implementation, check out “BEM For Beginners” published here in Smashing Magazine.

Sassy CSS (SCSS)

Put bluntly, SCSS is CSS on steroids. Additional functionality such as variables, nested selectors, reusable mixins, and imports helps SCSS make CSS more of a programming language. For me, SCSS was fairly easy to pick up (go through the docs if you haven’t already), and once I got to grips with the additional features, I’d always prefer it over plain CSS just for the convenience it provides. SCSS is a preprocessor, meaning the compiled result is a plain old CSS file; the only thing you need to set up is the tooling to compile down to CSS in the build process.

Super handy features
  • Imports help you split style sheets into multiple files, one for each component/section, or whatever split makes readability easier.
// main.scss
@import "_variables.scss";
@import "_mixins.scss";

@import "_global.scss";
@import "_navbar.scss";
@import "_hero.scss";
  • Mixins, loops, and variables help with the DRY principle and also make the process of writing CSS easier.
@mixin flex-center($direction) {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: $direction;
}

.box {
  @include flex-center(row);
}

Note: The snippet above defines and uses an SCSS mixin. Check out more handy SCSS features over here.

  • Nesting of selectors improves readability as it mirrors the way HTML elements are arranged: in a nested fashion. This approach helps you recognize hierarchy at a glance (see the short sketch below).
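
For example, a minimal (made-up) nested block might look like this; the compiler flattens it back into regular descendant selectors such as .navbar a:hover:

.navbar {
  display: flex;

  a {
    color: white;

    // & refers to the parent selector, so this compiles to .navbar a:hover
    &:hover {
      text-decoration: underline;
    }
  }
}
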
BEM With SCSS

It is possible to group the code for components into blocks, and this greatly improves readability and helps in ease of writing BEM with SCSS:

This code is relatively lighter and less complicated than nesting multiple layers.

Note: For further reading on how this would work, check out Victor Jeman’s “BEM with Sass” tutorial.

Styled-Components

This is one of the most widely used CSS-in-JS libraries. Without endorsing this particular library, it has worked well for me, and I’ve found its features quite useful for my requirements. Take your time in exploring other libraries out there and pick the one that best matches your needs.

I figured a good way to get to know styled-components was to compare the code to plain CSS. Here’s a quick look at how to use styled-components and what it’s..
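
The article is cut off at this point, but to give a flavor of the comparison, here is a minimal styled-components sketch (the component and its styles are illustrative, not taken from the article): the CSS lives in a tagged template literal right next to the component, and the library generates the class names for you.

import styled from 'styled-components';

// A button with its styles co-located; the class name is auto-generated.
const Button = styled.button`
  padding: 0.5rem 1rem;
  background: palevioletred;
  color: white;
  border: none;

  &:hover {
    opacity: 0.9;
  }
`;

// Used like any other React component: <Button>Save</Button>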

Web Accessibility In Context Be Birchall 2019-06-07T12:00:59+02:00

Haben Girma, disability rights advocate and Harvard Law’s first deafblind graduate, made the following statement in her keynote address at the AccessU digital accessibility conference last month:

“I define disability as an opportunity for innovation.”

She charmed and impressed the audience, telling us about learning sign language by touch, learning to surf, and about the keyboard-to-braille communication system that she used to take questions after her talk.

Contrast this with the perspective many of us take building apps: web accessibility is treated as an afterthought, a confusing collection of rules that the team might look into for version two. If that sounds familiar (and you’re a developer, designer or product manager), this article is for you.

I hope to shift your perspective closer to Haben Girma’s by showing how web accessibility fits into the broader areas of technology, disability, and design. We’ll see how designing for different sets of abilities leads to insight and innovation. I’ll also shed some light on how the history of browsers and HTML is intertwined with the history of assistive technology.

Assistive Technology

An accessible product is one that is usable by all, and assistive technology is a general term for devices or techniques that can aid access, typically when a disability would otherwise preclude it. For example, captions give deaf and hard of hearing people access to video, but things get more interesting when we ask what counts as a disability.

Under the ‘social model’ definition of disability adopted by the World Health Organization, a disability is not an intrinsic property of an individual, but a mismatch between the individual’s abilities and their environment. Whether something counts as a ‘disability’ or an ‘assistive technology’ doesn’t have a clear boundary and is contextual.

Addressing mismatches between ability and environment has led not only to technological innovations but also to new understandings of how humans perceive and interact with the world.

Access + Ability, a recent exhibit at the Cooper Hewitt Smithsonian design museum in New York, showcased some recent assistive technology prototypes and products. I’d come to the museum to see a large exhibit on designing for the senses, and ended up finding that this smaller exhibit offered even more insight into the senses by its focus on cross-sensory interfaces.

Seeing is done with the brain, and not with the eyes. This is the idea behind one of the items in the exhibit, Brainport, a device for those who are blind or have low vision. Your representation of your physical environment from sight is based on interpretations your brain makes from the inputs that your eyes receive.

What if your brain received the information your eyes typically receive through another sense? A camera attached to Brainport’s headset receives visual inputs which are translated into a pixel-like grid pattern of gentle shocks perceived as “bubbles” on the wearer’s tongue. Users report being able to “see” their surroundings in their mind’s eye.

Brainport turns images from a camera into a pixel-like pattern of gentle electric shocks on the tongue. (Image Credit: Cooper Hewitt)

Soundshirt also translates inputs typically perceived by one sense to inputs that can be perceived by another. This wearable tech is a shirt with varied sound sensors and subtle vibrations corresponding to different instruments in an orchestra, enabling a tactile enjoyment of a symphony. Also on display for interpreting sound was an empathetically designed hearing aid that looks like a piece of jewelry instead of a clunky medical device.

Designing for different sets of abilities often leads to innovations that turn out to be useful for people and settings beyond their intended usage. Curb cuts, the now familiar mini ramps on the corners of sidewalks useful to anyone wheeling anything down the sidewalk, originated from disability rights activism in the ’70s to make sidewalks wheelchair accessible. Pellegrino Turri invented the early typewriter in the early 1800s to help his blind friend write legibly, and the first commercially available typewriter, the Hansen Writing Ball, was created by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes.

Vint Cerf cites his hearing loss as shaping his interest in networked electronic mail and the TCP/IP protocol he co-invented. Smartphone color contrast settings for color blind people are useful for anyone trying to read a screen in bright sunlight, and have even found an unexpected use in helping people to be less addicted to their phones.

The Hansen Writing Ball was developed by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes. (Image Credit: Wikimedia Commons)

So, designing for different sets of abilities gives us new insights into how we perceive and interact with the environment, and leads to innovations that make for a blurry boundary between assistive technology and technology generally.

With that in mind, let’s turn to the web.

Assistive Tech And The Web

The web was intended as accessible to all from the start. A quote you’ll run into a lot if you start reading about web accessibility is:

“The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.”

— Tim Berners-Lee, W3C Director and inventor of the World Wide Web

What sort of assistive technologies are available to perceive and interact with the web? You may have heard of or used a screen reader that reads out what’s on the screen. There are also braille displays for web pages, and alternative input devices like an eye tracker I got to try out at the Access + Ability exhibit.

It’s fascinating to learn that web pages are displayed in braille; the web pages we create may be represented in 3D! Braille displays are usually made of pins that are raised and lowered as they “translate” each small part of the page, much like the device I saw Haben Girma use to read audience questions after her AccessU keynote. A newer company, Blitab (named for “blind” + “tablet”), is creating a braille Android tablet that uses a liquid to alter the texture of its screen.

Haben Girma uses her braille reader to have a conversation with AccessU conference participants. (Photo used with her permission.)

People proficient with using audio screen readers get used to faster speech and can adjust playback to an impressive rate (as well as saving battery life by turning off the screen). This makes the screen reader seem like an equally useful alternative mode of interacting with web sites, and indeed many people take advantage of audio web capabilities to dictate or hear content. An interface intended for some becomes more broadly used.

Web accessibility is about more than screen readers; however, we’ll focus on them here because, as we’ll see, screen readers are central to the technical challenges of an accessible web.

Recommended reading: Designing For Accessibility And Inclusion by Steven Lambert

Technical Challenges And Early Approaches

Imagine you had to design a screen reader. If you’re like me before I learned more about assistive tech, you might start by imagining an audiobook version of a web page, thinking your task is to automate reading the words on the page. But look at this page. Notice how much you use visual cues from layout and design to tell you what its parts are for and how to interact with them.

  • How would your screen reader know when the text on this page belongs to clickable links or buttons?
  • How would the screen reader determine what order to read out the text on the page?
  • How could it let the user “skim” this page to determine the titles of the main sections of this article?

The earliest screen readers were as simple as the audiobook I first imagined, as they dealt with only text-based interfaces. These “talking terminals,” developed in the mid-’80s, translated ASCII characters in the terminal’s display buffer to an audio output. But graphical user interfaces (or GUI’s) soon became common. “Making the GUI Talk,” a 1991 BYTE magazine article, gives a glimpse into the state of screen readers at a moment when the new prevalence of screens with essentially visual content made screen readers a technical challenge, while the freshly passed Americans with Disabilities Act highlighted their necessity.

OutSpoken, discussed in the BYTE article, was one of the first commercially available screen readers for GUI’s. OutSpoken worked by intercepting operating system level graphics commands to build up an offscreen model, a database representation of what is in each part of the screen. It used heuristics to interpret graphics commands, for instance, to guess that a button is drawn or that an icon is associated with nearby text. As a user moves a mouse pointer around on the screen, the screen reader reads out information from the offscreen model about the part of the screen corresponding to the cursor’s location.

The offscreen model is a database representation of the screen based on intercepting graphics commands.

This early approach was difficult: intercepting low-level graphics commands is complex and operating system dependent, and relying on heuristics to interpret these commands is error-prone.

The Semantic Web And Accessibility APIs

A new approach to screen readers arose in the late ’90s, based on the idea of the semantic web. Berners-Lee wrote of his dream for a semantic web in his 1999 book Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.

Berners-Lee defined the semantic web as “a web of data that can be processed directly and indirectly by machines.” It’s debatable how much this dream has been realized, and many now think of it as unrealistic. However, we can see the way assistive technologies for the web work today as a part of this dream that did pan out.

Berners-Lee emphasized accessibility for the web from the start when founding the W3C, the web’s international standards group, in 1994. In a 1996 newsletter to the W3C’s Web Accessibility Initiative, he wrote:

The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. It presents new challenges and new hopes to people with disabilities.

HTML4, developed in the late ’90s and released in 1998, emphasized separating document structure and meaning from presentational or stylistic concerns. This was based on semantic web principles, and partly motivated by improving support for accessibility. The HTML5 that we currently use builds on these ideas, and so supporting assistive technology is central to its design.

So, how exactly do browsers and HTML support screen readers today?

Many front-end developers are unaware that the browser parses the DOM to create a data structure specifically for assistive technologies. This tree structure, known as the accessibility tree, forms the API for screen readers, meaning that we no longer rely on intercepting the rendering process as the offscreen model approach did. HTML yields one representation that the browser can use both to render on a screen and to pass to audio or braille devices.

Browsers use the DOM to render a view, and to create an accessibility tree for screen readers.

Let’s look at the accessibility API in a little more detail to see how it handles the challenges we considered above. Nodes of the accessibility tree, called “accessible objects,” correspond to a subset of DOM nodes and have attributes including role (such as button), name (such as the text on the button), and state (such as focused) inferred from the HTML markup. Screen readers then use this representation of the page.

This is how a screen reader user can know an element is a button without making use of the visual style cues that a sighted user depends on. How could a screen reader user find relevant information on a page without having to read through all of it? In a recent survey, screen reader users reported that the most common way they locate the information they are looking for on a page is via the page’s headings. If an element is marked up with an h1–h6 tag, a node in the accessibility tree is created with the role heading. Screen readers have a “skip to next heading” functionality, thereby allowing a page to be skimmed.

Some HTML attributes are specifically for the accessibility tree. ARIA (Accessible Rich Internet Applications) attributes can be added to HTML tags to specify the corresponding node’s name or role. For instance, imagine our button above had an icon rather than text. Adding aria-label="sign up" to the button element would ensure that the button had a label for screen readers to represent to their users...
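
As a hedged illustration of that last point (the icon markup and class names are made up), the two buttons below look identical to sighted users, but only the second exposes a useful accessible name in the accessibility tree:

<!-- Icon-only button: its accessible object has no useful name -->
<button>
  <span class="icon icon-signup" aria-hidden="true"></span>
</button>

<!-- Same button with an explicit accessible name for screen readers -->
<button aria-label="Sign up">
  <span class="icon icon-signup" aria-hidden="true"></span>
</button>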

Image Optimization In WordPress Adelina Țucă 2019-06-06T14:00:34+02:00

A slow website is everyone’s concern. Not only does it chase visitors away, but it also affects your SEO. So trying to keep your site ‘in shape’ is definitely one of the main items to tick off when you run a business or even a personal site.

There are many ways to speed up your WordPress site, each one complementing the others. There is no universal method. Improving your site speed is actually the sum of several of these methods combined. One of them is image optimization, which we will tackle extensively in this post.

So read further to learn how to manually and automatically optimize all the images on your WordPress site. This is a step-by-step guide on image optimization that will make your site lightweight and faster.

The Importance Of Image Optimization

According to Snipcart, the three main reasons why images affect your WordPress site’s performance are:

  • They are too large. Use smaller sizes. I will talk about this later in the article.
  • They are too many, which demands as many HTTP requests. Using a CDN will help.
  • They contribute to a synchronous loading of elements, together with HTML, CSS, and JavaScript, which increases render time. Displaying your images progressively (via lazy loading) stops your images from loading at the same time as the other elements, which makes the page load quicker (see the snippet after this list).
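
For instance, newer browsers support native lazy loading with a single attribute (the file name below is a placeholder); WordPress lazy-loading plugins achieve a similar effect with JavaScript for older browsers:

<!-- The browser defers fetching this image until it nears the viewport -->
<img src="product-photo.jpg" loading="lazy" width="800" height="600" alt="Product photo">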

So yes, optimizing your images is an essential practice to make your site lighter. But first, you need to discover what makes your site load slowly. This is where speed tests intervene.

How To Test Your WordPress Site Speed

There are many tools that test your website’s speed. The simplest method is Pingdom.

Pingdom is a popular tool used by both casual users and developers. All you need to do is open Pingdom, insert your WordPress site URL, choose the test location that’s closest to your hosting’s data center (based on your hosting’s servers), and start the test. If you have a CDN installed on your site, the location matters a great deal. But more on that later.

What’s nice about this tool is that, regardless of how simple its interface is, it displays advanced information about how a website performs, which is pure music to developers’ ears.

From these statistics, you will find out whether your site is doing well or it needs to be improved (or both). It’s nice because it gives you plenty of data and advice on pages, requests, and other sorts of issues and performance analysis.


Along the same lines, GTmetrix is another cool tool, similar to Pingdom, which will analyze your site’s speed and performance in depth.


Note: GTmetrix usually reports a rather slower WordPress website than Pingdom; this is down to how each tool calculates its metrics. There are no discrepancies; it’s just that GTmetrix measures the fully loaded time, unlike Pingdom, which only counts the onload time.

The onload time calculates the speed after a page has been processed completely and all the files on that page have finished downloading. That’s why the onload time will always be faster than the fully loaded time.

The fully loaded time happens after the onload process when a page starts transferring data over again, which means that GTmetrix includes the onload times when it calculates the speed of a page. It basically measures the whole cycle of responses and transfers it gets from the page in question. Hence the slower times.

Google PageSpeed Insights is yet another popular tool for testing your site speed. Unlike the first two tools that only display your site performance on desktops, Google’s official testing tool is good at measuring the speed of your website’s mobile version, too.

Apart from that, Google will also give you its best recommendations on what needs to be improved on your site for getting faster loading times.

Usually, with either of these three tools, you can detect how heavily your images are affecting your site speed.

How To Speed Up Your WordPress Website

Of course, since this article is about image optimization, you guessed right that this is one of the methods. But before getting into the depths of the image optimization per se, let’s briefly talk about other ways that will help you if you have loads of images uploaded on your site.

Caching

Caching is the action of temporarily storing data in a cache so that, if a user accesses your site frequently, the data will be automatically delivered without going through the initial loading process again (which takes place when the site files are requested for the first time). A cache is a sort of memory that collects data that’s being requested many times from the same viewport and is used to increase the speed of serving this data.

Caching is in fact really simple. Whether you do it manually or by installing a plugin, it can be implemented on your site pretty quickly. Some of the best plugins are
