Follow Zsolt Nagy – Development of Maintainable Web .. on Feedspot
Back in 2015, I was developing applications with Backbone and Marionette when a young developer approached me to tell me something exciting. “Check out React”, he said. I had heard this before with AngularJS and with KnockoutJS, and at that time, every second developer was writing a framework anyway.

As a team lead, I was still not fully convinced about the need to switch to React, but I joined my colleague’s presentation. Back then, he said, React was the V in MVC. In hindsight, this statement turned out to be false. Frankly, in the Model-View-Presenter world of Backbone, this statement didn’t even make sense to me. Why? Because in the Backbone world, the View layer just contains dumb templates that the Presenter component prepares.

The intention behind this statement was to signal that React didn’t really manage the application state in a sustainable way. It was possible to use React and only React, but then you had to pass down the state as props. Kent C. Dodds calls this technique prop drilling. Prop drilling makes it harder and harder to maintain your application as the application size grows. By the way, if you clicked on the featured image of this article and you saw a crossed out drilling machine, now you may understand the odd choice of graphics.

The community had already found an answer by the time I wrote my first line of React: keep business data in a store residing outside the components. Any component should be able to access the store, as long as it follows some rules that ensure the maintainability of the solution.

The Flux architecture, and some very popular implementations of it, such as Redux and MobX, took care of handling data in a maintainable way. I have used both Redux and MobX, and both are great choices for managing the application state. Redux was built on the principle of keeping the application state immutable, while MobX allows the mutation of the application state.

Unfortunately, both Redux and MobX increased the complexity of developing an application, and they didn’t seem right for small applications.
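The difference between the two philosophies can be sketched in plain JavaScript; the chips field below is a hypothetical piece of application state:

```javascript
// Immutable update, Redux-style: derive a brand new state object,
// leaving the previous one untouched.
const state = { chips: 1200 };
const nextState = Object.assign({}, state, { chips: 0 });

// Mutable update, MobX-style: change the observed object in place.
const observableState = { chips: 1200 };
observableState.chips = 0;

console.log(state.chips);           // 1200 -- the original is preserved
console.log(nextState.chips);       // 0
console.log(observableState.chips); // 0
```

Immutability makes state changes easy to track and undo, at the cost of creating a new object on every update.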

If you are new to React, you may not yet know what prop drilling is, so let me illustrate the problem with an example.

Managing Application State with Prop Drilling

Imagine you have a React application symbolizing a poker game. In this game, you have a PokerHand stateless functional component, displaying the two cards you are holding.

import React from 'react';

const PokerHand = (props) => (
    <div className="hand">
        Hand: {props.cards.join( '-' )}
    </div>     
);

The cards come from the props of the hand. These props come from the component that contains the PokerHand.

In poker, cards belong to a player. Let’s define the Player component:

const Player = (props) => (
    <div className="player">
        <div className="chips">
            Chips: {props.chips}
        </div>
        <PokerHand cards={props.cards} />
    </div>     
);

The Player component is included in a PokerGame component:

class PokerGame extends React.Component {
    state = {
        chips: 1200,
        cards: ['Ah', 'Kc'],
        board: null
    };
    render() {
        return (
            <Player cards={this.state.cards}
                    chips={this.state.chips} />
        );
    }
}

We can’t pass the card data from the PokerGame component down to the PokerHand component directly, because PokerGame does not have direct access to the PokerHand. Data have to go through the Player component.

In a complex application, a lot of data has to be passed through components that have nothing to do with processing that data.

If you are new to React, you may want to know how to render the PokerGame component on screen. You can either use a boilerplate like create-react-app, or you can simply write a few lines of code to render your component on screen:

import ReactDOM from 'react-dom'; 

ReactDOM.render( 
    <PokerGame />, 
    document.querySelector('.js-app') 
);

Make sure to add a div with the class name js-app to the DOM to make this code work. If you need more information, check out my other React articles first.

Using the Context API

The Context API eliminates the need for prop drilling, in case you are not using Redux, MobX, or another complex framework.

A context is created by calling React.createContext(). The function accepts an optional default value, which consumers fall back to when there is no matching provider above them in the component tree.

The Context API specifies two sub-components:

  • provider,
  • consumer.

The syntax for these two roles is as follows:

const ExampleContext = React.createContext();

// Provider: <ExampleContext.Provider />
// Consumer: <ExampleContext.Consumer />

A provider provides access to its data, while a consumer consumes it. Let’s first create a context and a provider with the following steps:

  1. Create a context using React.createContext()
  2. Create a context provider extending React.Component
  3. Move the state from the PokerGame component to the new context provider component
  4. Create a render method, where we wrap the rendered children of the element in a <PokerContext.Provider> component
  5. Add a value attribute to <PokerContext.Provider>, and set it to this.state. This provides access to the state stored by the context provider component.

const PokerContext = React.createContext();

class PokerContextProvider extends React.Component {
    state = {
        chips: 1200,
        cards: ['Ah', 'Kc'],
        board: null
    };    
    render() {
        return (
            <PokerContext.Provider value={this.state}>
                {this.props.children}
            </PokerContext.Provider>
        );
    }
}

The state of PokerContextProvider is made accessible anywhere inside its children, regardless of how deep the component hierarchy inside this.props.children is.

We can make the following changes to the PokerGame top level component:

  1. Remove the state, as it is already in the provider.
  2. Wrap the contents returned by the render function with the provider.
  3. Stop passing down the state as props.

class PokerGame extends React.Component {
    render() {
        return (
            <PokerContextProvider>
                <Player />
            </PokerContextProvider>
        );
    }
}

We will get rid of prop drilling in all children of PokerGame. Let’s start with the Player component. As Player no longer receives its props, we have to get the chips from the context by creating a consumer.

Let’s recall the syntax of the provider:

// Provider:
<ExampleContext.Provider value={this.state}>
    {this.props.children}
</ExampleContext.Provider>

The generic syntax of the consumer is as follows:

// Consumer:
<ExampleContext.Consumer>
    {(contextValue) => ReactComponent}
</ExampleContext.Consumer>

The consumer component contains a function that takes the context value and returns a React component. This React component typically contains values from contextValue, which gives you access to the state of the context provider component found in its value attribute. Note that this value does not necessarily have to be the state, it can be any valid React attribute value.

Now that we know the syntax of the consumer component, let’s place it in the Player component to render the chip values:

const Player = (props) => (
    <div className="player">
        <div className="chips">
            Chips:
            <PokerContext.Consumer>
                {(contextValue) => (
                    <strong>{contextValue.chips}</strong>
                )}
            </PokerContext.Consumer>
        </div>
        <PokerHand />
    </div>     
);

An HTML <strong> tag was added to make the chip count appear in bold. This demonstrates that the return value is not necessarily a string, but an arbitrary React component.

Similarly, we can also create the PokerHand component without props, using another consumer:

const PokerHand = (props) => (
    <div className="hand">
        Hand: 
        <PokerContext.Consumer>
            {(contextValue) => (
                <strong>{contextValue.cards.join( '-' )}</strong>
            )}
        </PokerContext.Consumer>
    </div>     
);

We are done with the rewrite. We don’t need any props anymore; the Context API provides access to a simple store at the top level.

Updating the State using the Context API

Updating the state of the context provider is a common task. The update action can be triggered from any child component that uses a context consumer. The implementation is straightforward. If you know the basics of React, you can already do it.

Let’s add an all in feature to the Player component that changes your chip count to zero.

The first step is to pass down a function inside the value attribute of the provider that performs this action:

class PokerContextProvider extends React.Component {
    state = {
        chips: 1200,
        cards: ['Ah', 'Kc'],
        board: null
    };    
    render() {
        return (
            <PokerContext.Provider value={{
                state: this.state,
                allIn: () => this.setState({chips: 0})
            }}>
                {this.props.children}
            </PokerContext.Provider>
        );
    }
}

Now that we have reorganized the contents of the value attribute, we have to change all references in all consumers to refer to the chips and cards state values as state.chips and state.cards, respectively. You will see the corresponding changes in the code of the Player and the PokerHand components.

Let’s add an All In! button to the Player component that calls the allIn method.

const Player = (props) => (
    <div className="player">
        <div className="chips">
            Chips:
            <PokerContext.Consumer>
                {(contextValue) => (
                    <strong>
                        {contextValue.state.chips}
                        <button onClick={contextValue.allIn}>
                            All In!
                        </button>
                    </strong>
                )}
            </PokerContext.Consumer>
        </div>
        <PokerHand />
    </div>     
);

const PokerHand = (props) => (
    <div className="hand">
        Hand: 
        <PokerContext.Consumer>
            {(contextValue) => (
                <strong>{contextValue.state.cards.join( '-' )}</strong>
            )}
        </PokerContext.Consumer>
    </div>     
);

Once the allIn callback method is called, the setState method of the context provider is executed, changing the chip count to zero.

You can find the code belonging to this article on CodePen.

Summary

The Context API is geared towards smaller applications, where you need access to the state of a component more than one level down the component hierarchy. The Context API may substitute for a state management library like Redux, and your code stays maintainable as long as the application stays small.

As the application size grows, you may consider shifting towards proper application state management.


In the last few months, I have been coordinating a software project as a tech lead. From a PR perspective, we call this position tech lead, but in reality, I am undertaking an architect role.

It is a lot of fun to launch something from scratch. Sometimes it is not fun, as taking decisions in the dark may appear scary at times, but as part of a Scrum team, we are always there to correct course. I am also surrounded by great people who catch me whenever I overlook something.

Having gained a lot of experience in the corporate world, I started understanding how corporations work, what a nine-tier corporate structure with directors and vice presidents is like, and I also started grasping the essence of what skills are needed to get to the top. This article will focus on the tech lead role in the context of requirement analysis and software design.

The Tech Lead Role

A tech lead of a project can do many things that help move the project forward. This includes

  • taking architectural decisions,
  • resolving disputes on technologies to be used,
  • overseeing the functional and non-functional requirements,
  • setting up repositories, project structure,
  • driving architectural and detailed software design,
  • helping the team deploy an application to the cloud,
  • setting up a continuous integration pipeline,
  • and surprise, even coding.

Tech lead is a role, not a position. You may be a tech lead in one project, and may have a completely different role in another.

Tech lead is a lead for a reason. In an environment where team members are highly autonomous, team members go to the tech lead for advice. High autonomy means that anyone who is responsible for a task is entitled to take a decision. The tech lead is just there to provide direction and stabilize the team.

There are some ivory tower tech leads and architects out there who never code. Similarly, in most organizations you can meet managers who know some very high-level concepts, but cannot help another team with information, because they are not connected to their own domain that deeply. They delegate and stay at a very high level. Some managers and directors admit these gaps and point at the right person. Others play a game and are afraid of admitting that they don’t know something. Similarly to the latter type of manager, an ivory tower architect is often a burden for the team, because this type of person distances himself from the real world. So much so that he will delay the team rather than help them out.

The worst type of ivory tower architects may do something called productive procrastination. They appear productive by focusing on the wrong things, and distracting the whole team with requests that no-one else cares about.

The other end of the spectrum is a tech lead who is capable of solving any challenge, maybe even a bit better than most of the team members. The practical tech lead codes less than the average team member, because they add more value elsewhere.

Leadership is responsibility. On all levels, leading means serving. Those who want to shine in the spotlight should immediately switch to an acting career. In today’s world, you don’t even need gatekeepers; you just need to learn some acting skills to start a YouTube channel. Then you may convince a less tech-savvy audience that they can make $100 an hour after investing a few months of their hard work in your course. Is it possible to earn $100 per hour this way? Yes. Is it likely? No, especially if your audience has trouble with basic math.

What does a Tech Lead do during Requirement Engineering?

A tech lead is helpful during the whole software development life-cycle. During requirements analysis, a tech lead helps clarify the following aspects of the requirements:

  • Functional requirements,
  • Non-functional requirements.

Functional requirements describe the software we are building.

Non-functional requirements describe the quality of the system we build. This includes the specification of the maximum tolerable downtime (availability), data protection issues, scalability etc.

Functional requirements may imply non-functional requirements. Both functional and non-functional requirements drive architectural decisions.

As a consequence, the tech lead should be involved in exploring the requirements by asking the right questions.

Discovering Functional Requirements

Functional requirements describe the software we are building. This includes:

  • understanding who the users are,
  • understanding what problems the users want to solve,
  • understanding why the users want to solve the problem.

A great way to specify requirements for software is the user story format used in Gherkin feature files. See the role of the who, what, and why in the structure:

AS AN Advertiser
I WANT TO set up Advertising Campaigns
SO THAT I can market my product
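Acceptance criteria for such a story can then be spelled out as Gherkin scenarios. Here is a hypothetical sketch; the exact steps depend on the product:

```gherkin
Scenario: Advertiser sets up a campaign
  GIVEN I am logged in as an Advertiser
  WHEN I create an Advertising Campaign with a name and a budget
  THEN the new campaign appears in my list of campaigns
```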

There are different tools to capture requirements. Without understanding the users, also known as actors, the workflows associated with the users broken down per role, and the context in which our system operates, the software we develop will be unlikely to address the pain points of the client.

In a Waterfall project, the scope of the whole project is determined upfront, which means that by the end of this phase, the requirements are more or less set in stone. For instance, if you order the building of a house, you have to clarify in advance what kind of house you want. Once the architect goes to work to create the architectural plans for the building you ordered, you will rarely change your mind and say that you want to add another level on top of the building.

In agile projects, requirements often change. We tend to embrace change. We still do our best to collect all requirements to the best of our abilities as early as possible, but we expect change to happen.

Being agile should not be an excuse for not gathering requirements to the best of our abilities. Arguing that requirements will change anyway, so why write them down, is called ignorance, not agility.

To course-correct, we need a well defined direction. This direction is provided by functional requirements.

For instance, imagine a software system where the tech lead chose to write a monolithic application with PHP, MySQL, and JavaScript, overlooking the fact that their clients would store 5 terabytes of data, while MySQL can hardly handle a few percent of this amount. Although scaling is a non-functional requirement, the amount of data involved can be detected by examining the functional requirements.

Similarly, there is a big difference between data that get updated live, hourly, and daily. Back in the day, I was the tech lead and architect of a stock trading application based on end-of-day data. We imported our data from an Australian data provider once a day. Our clients wanted live updates, so we had to completely change how we sourced the data.

A tech lead should be there to monitor requirements, because oftentimes the client does not know exactly what they want until they are asked the right questions.

I have seen requirement documents that contain logical contradictions. Humans often have a hard time describing what they mean by using conjunction and disjunction. Many non-tech people mix and with or. When they write or, they often mean xor.
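The distinction matters, because the three connectives disagree exactly when both conditions hold. A quick JavaScript sketch:

```javascript
// "and": both conditions must hold.
const and = (a, b) => a && b;
// "or": at least one condition holds, including the case when both do.
const or = (a, b) => a || b;
// "xor": exactly one condition holds; the "both" case is excluded.
const xor = (a, b) => a !== b;

console.log(or(true, true));  // true
console.log(xor(true, true)); // false
```

So a requirement saying “the user provides an email or a phone number” is ambiguous: does providing both satisfy it?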

I have seen user stories that did not deliver any value to the client while they drastically increased the complexity of the solution. A good tech lead is supposed to point these issues out, either by recognizing them, or by delegating and supervising the analysis of technical requirements to an expert.

When discovering functional requirements, high level analysis of data may be needed to understand what the system is doing. This may include:

  • a context diagram detailing the system and its boundaries in terms of data flow
  • a data flow determining how data is transformed by processes inside the system
  • entity-relationship models, which act like the foundation of a building, because both the database schema and the business logic depend on them.

What kinds of non-functional requirements do we need to pay attention to?

Once more, non-functional requirements describe the quality of the system we build. Quality is measured by some metrics that need to be defined clearly to drive architectural decisions.

I assume you don’t need this article if you are designing a mission-critical application where human lives or trillions of dollars are at stake. Therefore, I will focus on the typical non-functional requirements of an average project:

  • Availability metrics
  • Performance metrics
  • Scalability
  • Compliance
  • Security
  • Maintainability

Let’s explore the role of these non-functional requirements one by one.

Availability describes the percentage of time in which the system operates according to its specification. Think about your Internet banking software. What happens if it is down between 1AM and 3AM? Not much. Bank transfers are not live anyway, and in case of emergencies, there is a telephone operator who can handle your request. In other contexts, just one minute of downtime threatens the lives of people, or causes a financial loss of billions of dollars. Availability is usually measured in nines.

In the telecom sector, 99.999% is a usual availability metric. What does this mean exactly? 0.864 seconds a day, or just over 5 minutes a year. Designing a system with such a high availability requires drastically more resources and fault tolerance than a system with 99% availability, where our system can be down for 3.65 days a year.
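The arithmetic behind these numbers is simple enough to sketch in a few lines of JavaScript; the helper name below is made up for illustration:

```javascript
// Convert an availability percentage into the allowed downtime
// over a given period, expressed in seconds.
function allowedDowntimeSeconds(availabilityPercent, periodSeconds) {
    return periodSeconds * (1 - availabilityPercent / 100);
}

const DAY = 24 * 60 * 60;   // 86400 seconds
const YEAR = 365 * DAY;     // 31536000 seconds

console.log(allowedDowntimeSeconds(99.999, DAY));    // ~0.864 seconds a day
console.log(allowedDowntimeSeconds(99.999, YEAR));   // ~315 seconds, just over 5 minutes a year
console.log(allowedDowntimeSeconds(99, YEAR) / DAY); // ~3.65 days a year
```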

Performance metrics are typically connected to the time it takes to respond to a request. In an enterprise application, a report for a member of the executive team may take minutes to compile. Compare this to day traders, who compete against each other by decreasing the latency between their office and the center of the stock exchange. They also minimize their response time with algorithmic trading software that acts as soon as the necessary conditions are recognized. At the same time, some people still believe that day trading is an easy living, even if you do it manually.

Performance metrics may make it impossible to use certain technologies, as there is a data size beyond which your MySQL database will never respond. In other cases, you can choose between blocking I/O (PHP) and non-blocking I/O (a node.js server). You may also have to consider the client’s computer when taking client-side decisions.

Scalability is a metric that determines future performance needs for your software as your user-base grows. This metric is loosely connected to my article An Introduction to Microservices. You may create a monolithic application to validate your idea as an MVP (Minimal Viable Product). Then you can scale the performance bottlenecks by extracting microservices out of them.

Compliance is an important metric, because your application may be constrained by the legal environment. Legal standards for data protection, or licensing in the case of online gaming companies, are two mandatory elements that may pose an existential risk to companies. Another example is governmental tenders, where all formal elements have to match the required elements.

Security is an important metric that is often connected with compliance. For instance, data theft is caused by a security hole in the system. While a small startup will not do much except declare a bounty program and reward ethical hackers with some cryptocurrency, corporations are often obliged to audit their systems and maintain a security team looking after all software solutions. This team performs penetration testing to make sure the software does not contain serious security vulnerabilities. Requirement analysis should deal with these security issues based on the associated risk, which means that some non-functional requirements are dictated by security requirements. For instance, the authentication and authorization system may be constrained by security requirements.

Maintainability is implied by the life-cycle of the software. Are we building an MVP to validate some ideas? Is development speed more important than maintainability? Maintainability is like a financial account. When the quality of the software is sacrificed, it is like borrowing money. This money has to be repaid at a later stage with interest. If too much money was borrowed, we may never be able to repay the loan, and we go bankrupt. In software terms, we don’t borrow money; we create technical debt. This debt has to be repaid to avoid bankrupting the application, unless the application is meant to be discarded. For instance, an MVP may be completely thrown away, as its main purpose is to validate a business idea. The MVP should be developed fast, often without considering maintainability. On the other hand, a complex system automating all business processes of an organization has to be maintainable, otherwise one day we realize that feature development takes a lot more time than before.

Extensibility is one aspect of maintainability. It is often implied by functional requirements. For instance, your task may be to build an MVP for one special client, but you should prepare the system to be used for any client in the industry with minimum customization.

Always think out of the box, as there is no checklist to extract all non-functional requirements that cover all cases.

Non-functional requirements may also be derived from the golden path approved by your organization. Bigger organizations tend to standardize tools, programming languages, frameworks, libraries, cloud providers, and almost everything related to software development. Consider this before innovating too much in your architecture.

Many other considerations may contribute to non-functional requirements, including the competencies of the development team, and internal politics.

Soft-skills as non-functional requirements

You have to know the stakeholders of the project.

For instance, automation may prevent the users of the system from exploiting a loophole for financial gain. This implies that they won’t cooperate with you, because their interest is to continue exploiting the system, not to help you make the processes better.

For instance, if a disease diagnostic system automatically prescribes the best medicine for a symptom, doctors can no longer be bribed by the marketers of a favored but sub-optimal medicine to write more prescriptions for it.

Summary

A tech lead coordinates architectural and engineering decisions throughout the whole software development life-cycle, and leads his or her team.

In this article, we explored the role of the tech lead in analyzing requirements. Requirements can be functional or non-functional. All requirements lay down the foundations for architectural choices.


This is another episode of the JavaScript for Beginners series.

In the first post, you learned how to write basic JavaScript code. In this article, you will find out how to use JavaScript together with HTML and CSS.

The goal of this article is not to teach you the basics of CSS or JavaScript button click event handling, but to show you how HTML, CSS, and JavaScript fit together.

Creating the Markup

A website is defined by its HTML markup, some styling information, and some dynamic functionality.

Markup is written in HTML. HTML stands for HyperText Markup Language. We will not introduce HTML in detail, so in case you don’t know how to write simple HTML documents, check out this article followed by this tutorial.

Let’s create a simple example with a form. We will place a textfield in a form, and a button that displays a greeting in the form of an alert box.

<!doctype html>
<html lang="en">
  <head>
    <title>Greetings</title>
  </head>
  <body>
    <form>
      <input type="text" class="tf-large js-name"
             placeholder="name">
      <button class="btn-large js-hello">Greet</button>
    </form>
  </body>
</html>

As an editor, you can use Sublime Text, Atom.io, or Visual Studio Code. Experiment with the text editor of your choice a bit.

Create a GreetingProject folder on your computer and save the above file there. Name it greeting.html. Create a js and a styles folder in your folder. You should have the following folder structure:

GreetingProject
  [-] js
  [-] styles
  ----greeting.html

If you double click the greeting.html file, it opens in a browser. You can see a textfield there, where you can enter your name. Unfortunately, when you press the Greet button, the text you enter in the textfield is gone. This is what we will fix with JavaScript.

Adding JavaScript code

When you create an HTML form, pressing a button submits the form by default. Submitting a form reloads the page. Once you reload the page, the data entered in the textfield is lost. This is what we will prevent now.

Create a form.js file inside the js folder, and place the following content there:

function helloListener( event ) {
    event.preventDefault();
    console.log( 'button pressed' );
}

const helloButton = document.querySelector( '.js-hello' );
helloButton.addEventListener( 'click', helloListener );

First, we created a helloListener function. This function prevents the default action of the event, which is the submission of the form.

The second line in the function creates a console log that appears in the developer tools of your browser. More on this later.

The last line attaches the helloListener function to the button. We tend to use js- prefixed classes to refer to elements in the Document Object Model, also known as the DOM. The document.querySelector function takes a selector string, in this case a class name, and locates the node in the DOM that has this class. Check out the HTML file, you can see the same class in the class list of the button.

Once we located the .js-hello button, we can add an event listener function to it. This function takes an event, and can perform any JavaScript action ranging from manipulating the contents you can see on screen to calling a service or an API on the web, saving your data in a persistent storage.

There is only one problem with this code: the HTML file does not load it. Let’s change this by adding a <script> tag at the bottom of the body.

<!doctype html>
<html lang="en">
  <head>
    <title>Greetings</title>
  </head>
  <body>
    <form>
      <input type="text" class="tf-large js-name"
             placeholder="name">
      <button class="btn-large js-hello">Greet</button>
    </form>
    <script src="js/form.js"></script>
  </body>
</html>

If you did everything correctly and saved all the files, after opening the greeting.html file in your browser, the name does not disappear once you press the Greet button.

Note that there are other ways to insert JavaScript code into an HTML document. I highly recommend sticking to the above method, but for completeness, feel free to read this article.

JavaScript is executed in the browser by the JavaScript engine. JavaScript is an interpreted language, which means that the engine runs the code as it appears in the file, without you having to compile it to an intermediate representation first.

Developer Tools

Each browser has developer tools. For simplicity, I will use Google Chrome in this article, but most browsers have similar functionality.

Right click on your website anywhere inside the browser window, and select Inspect from the context menu. You will find yourself inside the developer tools. Find the Console tab. Assuming you have clicked on the button, you can find the following there:

button pressed
>

You can execute any JavaScript expression by writing it after the > prompt:

button pressed
> 2+2
4
> helloButton
    <button class="btn-large js-hello">Greet</button>

The helloButton variable stores a DOM node, fully accessible in JavaScript.

Greeting the User

Let’s use the console to get the name entered in the textfield:

> document.querySelector( '.js-name' )
    <input type="text" class="tf-large js-name" placeholder="name">
> document.querySelector( '.js-name' ).value
    Zsolt

Instead of a console log, we need to get the value of the textfield and display a greeting in an alert box:

function helloListener( event ) {
    event.preventDefault();
    const name = document.querySelector( '.js-name' ).value;
    alert( 'Hello, ' + name + '!' );
}

const helloButton = document.querySelector( '.js-hello' );
helloButton.addEventListener( 'click', helloListener );

The + operator concatenates strings. The alert function displays an alert box.
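You can try the concatenation directly in the console:

```javascript
const name = 'Zsolt';
const greeting = 'Hello, ' + name + '!';
console.log(greeting); // Hello, Zsolt!
```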

If you test the code, you can see that everything is in place.

Styling

We know that

  • the static markup providing information on the structure of the website is in the HTML file,
  • the dynamic functionality goes in the JavaScript files.

Many people think that the look and feel of the page is also defined in the HTML file. This approach is wrong. Styling is separated from HTML.

We describe styles in CSS (Cascading Style Sheets) files.

.large-text {
    font-size: 1.5rem;    
    padding: 1rem;
}

.btn-large {
    font-size: 1.5rem;
    padding: 1rem;
    font-weight: bold;
}

Similarly to JavaScript, we need to reference the CSS file from the HTML document. We do it in the head using a <link> tag:

<!doctype html>
<html lang="en">
  <head>
    <title>Greetings</title>
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <form>
      <input type="text" class="large-text js-name"
             placeholder="name">
      <button class="btn-large js-hello">Greet</button>
    </form>
    <script src="js/form.js"></script>
  </body>
</html>

If you reload the page, you can see the changes on screen.

There are other ways to insert stylesheets into the HTML documents, but stick to external files so that you separate concerns. HTML should contain markup, CSS should contain styles. If you need more information on the basics of CSS, check out w3schools.com.

HTML classes: separation of concerns

Recall the button <button class="btn-large js-hello">Greet</button>. You can see two classes in there: btn-large and js-hello. The first class is used solely for styling, and the second class is used solely for referencing the node in JavaScript.

No one forces you to write code this way, but separating classes for styling and for functionality pays off, especially when multiple people work on the same codebase. A person responsible for styling can add, delete, or rename any styling classes without affecting functionality, and JavaScript developers can do the same with the js- prefixed classes.

js- prefixed classes should be used for functionality, while regular classes should be used for styling.

Why aren’t we using HTML IDs?

You may know from your studies or previous work that we can also reference DOM nodes using their ID attributes:

<div id="intro">This is an introduction</div>

If you have the above node in the DOM, you can reference it using

document.getElementById( 'intro' )

The problem with ID attributes is that they have to be unique within the whole document. If you violate this rule, your HTML is invalid, and methods like getElementById will only find the first matching node.

In big websites and applications, most developers never know the context in which their markup will appear. Chances are that if you use an ID attribute name, someone else will do the same elsewhere.

As two DOM nodes cannot have the same ID, using ID attributes is not advised in most cases.

Once a business owner asked me to try out his affiliate plugin, because it was not working for him. Sure enough, I filled in the form, but once I pressed register, nothing happened. I checked the developer tools of the browser, and it turned out that there were some duplicated ID attributes in the markup.

I asked him how many times he included the affiliate plugin. He said once for desktop computers and once for mobile, and he proudly told me that he hid the mobile version when someone views the site on a desktop.

Unfortunately, he only managed to hide it with a CSS rule (display: none). The markup stayed in his document with the exact same ID attributes as the other version.

<div id="affiliate-plugin" >
    <!-- Content goes here -->
</div>
<div id="affiliate-plugin" >
    <!-- Content goes here -->
</div>

The above markup is erroneous due to the duplicated ID attributes. My entrepreneur friend, lacking web development fundamentals, lost hours on this bug. Next time you encounter a similar situation, you can save those hours.

What do we do with multiple JavaScript and CSS files?

In large projects, placing all the JavaScript code in one file is not advised for development. Therefore, we often have hundreds if not thousands of JavaScript and CSS files.

Placing all these files in the HTML markup is not advised either, because enumerating hundreds of files in the markup is neither convenient, nor efficient. You may also encounter JavaScript errors if you include the JavaScript files in the wrong order.

For large projects, use Webpack and npm (Node Package Manager) to bundle your files into one file that you can include in your markup.
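To give you a rough idea only — the entry and output paths below are placeholder assumptions, not taken from any real project — a minimal webpack.config.js that bundles everything reachable from one entry point might look like this:

```javascript
// webpack.config.js - minimal sketch; src/index.js and dist/bundle.js
// are placeholder paths, adjust them to your own project layout.
const path = require( 'path' );

module.exports = {
    // webpack starts here and follows every import/require recursively
    entry: './src/index.js',
    output: {
        // the single file you include in your markup with a <script> tag
        filename: 'bundle.js',
        path: path.resolve( __dirname, 'dist' )
    }
};
```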

You can learn more on npm modules and Webpack in my article. The syntax may be a bit too advanced for you at this stage. You will understand everything if you sign up for my ES6 Minitraining at the bottom of this article.

In a large project, I highly recommend that you learn and use SASS. SASS not only enriches your CSS syntax, but it helps you structure it better. Check out my SitePoint article titled CSS Architecture and the Three Pillars of Maintainable CSS on this topic.

Don’t forget to sign up for the ES6 minitraining containing some exercises on the basics covered in my previous article on getting started:


Even though you have seen many tech interview exercises already, we have not covered algorithmic complexity yet. Therefore, I will just give you a straight to the point explanation of the big O notation.

Big O is a performance metric measuring the worst case complexity of an algorithm. Suppose N is the size of the input. Let’s see some examples for complexities considered in this article.

  • O(1): constant complexity
  • O(N): linear complexity
  • O(N ** 2): n-squared, quadratic complexity
  • O(2 ** N): exponential complexity

Defining Big O

Suppose N is the length of the input.

  • O(N) complexity means that there exists a finite constant c for which the number of steps taken by your solution is less than c * N assuming that c is fixed and N is arbitrarily large.
  • O(N ** 2) (N squared) complexity means that there exists a finite constant c for which the number of steps taken by your solution is less than c * (N ** 2) assuming that c is fixed and N is arbitrarily large.

This is not the formal definition of the big O notation, but it is good enough for now. You may or may not understand why I stressed the “assuming that c is fixed and N is arbitrarily large” part, so let’s shed some light on it.

We are defining an upper bound for the number of steps we allow the algorithm to take. Regardless of how large c is, once N becomes large, c will become negligible when it comes to determining the magnitude of the solution. It is nothing else, but a constant multiplier. What matters is, whether the algorithm scales linearly, quadratically, or exponentially.
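You can check this numerically. The constant 1000 below is an arbitrary illustration: even a linear algorithm with a huge constant factor eventually beats a quadratic one.

```javascript
// A linear algorithm with a large constant factor versus a quadratic one.
const linearSteps = n => 1000 * n;   // O(N) with c = 1000
const quadraticSteps = n => n * n;   // O(N ** 2) with c = 1

console.log( linearSteps( 10 ), quadraticSteps( 10 ) );         // 10000 100
console.log( linearSteps( 100000 ), quadraticSteps( 100000 ) ); // 100000000 10000000000
// For small N the "slow" linear algorithm takes more steps, but once
// N grows past 1000, the quadratic algorithm falls hopelessly behind.
```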

O(2 ** N) complexity means that there exists a finite constant c for which the number of steps taken by your solution is less than c * (2 ** N), assuming that c is fixed and N is arbitrarily large.

There is one more interesting complexity: O( N * log(N) ). It is worth memorizing that the best general-purpose comparison sorts run in N * log( N ) time. So when you call array.sort() in JavaScript, you can expect N * log( N ) complexity.

How do algorithms scale?

Let’s see how these four complexities scale:

N    N*log(N)    N**2      2**N 
10     23.02585      100  1024
100   460.51701    10000  1.267e+30
1000 6907.75527  1000000  1.071e+301

You should get a feel for why complexity matters. Take just 1000 elements. A linear or an N * log(N) solution is quite fast compared to the quadratic N ** 2 solution. The exponential solution is horrible even for an input of length 100. For an input of length 1000, ten to the power of 301 is a lot larger number than what we can compute within a limited amount of time.

This is why complexity analysis matters. We have to know in advance how good our algorithm is.

Time and Space Complexity

When it comes to coding exercises, we deal with two types of complexities:

  • time complexity,
  • space complexity.

The time complexity of an algorithm determines the number of steps taken by the algorithm, measured with respect to N, the size of the input.

The space complexity of an algorithm determines the amount of space required by the algorithm to execute, measured with respect to N. Note that the input does not count when measuring space complexity. Therefore, if you have an array of length N as an input, and you only create two number variables, your solution has constant space complexity. This means, there is a fixed constant c that acts as an upper bound to the space required by your algorithm, regardless of how large N is. Constant complexity is denoted by O(1).

O(1) space complexity and O(N) time complexity

function getMaximum( array ) {
    let max = -Infinity;
    for ( let value of array ) {
        if ( value > max ) max = value;
    }
    return max;
}

The input array can be arbitrarily large. While computing the maximum value, we only created one variable. Therefore, we only used constant size.

The time complexity is O(N), because we performed an operation with each value in the array.

It does not matter how many times we iterate on the array. For instance, consider the following code:

function getMinMaxRange( array ) {
    let max = -Infinity;
    let min = Infinity;

    for ( let value of array ) {
        if ( value > max ) max = value;
    }
    for ( let value of array ) {
        if ( value < min ) min = value;
    }   

    return max - min; 
}

Even though we iterated on the array twice, the complexity of our algorithm is still linear.

Remember, O(N) complexity means that the number of steps needed for the completion of the algorithm is less than c * N, where c is finite, fixed, and N is arbitrarily large.

As long as we have a constant number of iterations, our upper bound for c will stay finite regardless of how big our input is.

Note that the above solution is not optimal in terms of coding style, as we could have simply written the contents of the two loops into one loop. We used this example for illustration purposes.

Polynomial and Exponential Time Complexity

O( 2 ** N) is a complexity we often avoid unless it is absolutely necessary.

We are normally looking for algorithms that run in polynomial time. Polynomial time means that there is a polynomial in N that, multiplied by a fixed constant, overestimates the number of steps.

Suppose our polynomial is

5 * (N ** 4) + 2 * (N ** 3) - 4 * (N ** 2) + 9 * N + 4

Once N becomes arbitrarily large, all terms except 5 * (N ** 4) become negligible. The complexity of the algorithm becomes O(N ** 4). You have to look at the largest powered term in the polynomial.
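A quick numerical sketch confirms this: the ratio between the full polynomial and its leading term tends to 1 as N grows.

```javascript
// The full polynomial versus its leading term alone.
const poly = n => 5 * n ** 4 + 2 * n ** 3 - 4 * n ** 2 + 9 * n + 4;
const leading = n => 5 * n ** 4;

console.log( poly( 10 ) / leading( 10 ) );     // ~1.034
console.log( poly( 1000 ) / leading( 1000 ) ); // ~1.0004
// The lower-order terms become negligible; only N ** 4 determines the magnitude.
```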

What does an O(N ** 4) algorithm look like? Simple. Four nested loops:

for ( let i = 0; i < array.length; ++i ) {
    for ( let j = i; j < array.length; ++j ) {
        for ( let k = j; k < array.length; ++k ) {
            for ( let l = k; l < array.length; ++l ) {
                // O(N**4) algorithm
            }
        }
    }
}

Four nested loops still run a lot faster than an exponential algorithm.

Suppose we would like to create all possible subsets of a set represented by an array:

function getSubsets( [head, ...tail] ) {
    // The empty set has exactly one subset: the empty set itself.
    if ( typeof head === 'undefined' ) return [[]];
    const tailSubs = getSubsets( tail );
    // Copy each subset instead of mutating it with unshift, so the
    // subsets without head remain intact.
    const tailSubsWithHead = tailSubs.map( list => [head, ...list] );
    return [ ...tailSubsWithHead, ...tailSubs ];
}

This algorithm has exponential time and space complexity, because for N values, there are 2 ** N possible combinations of arrays:

  • the first element is either in the resulting array or not. That’s 2 possibilities
  • the second element is either in the resulting array or not. That’s 2 possibilities
  • the Nth element is either in the resulting array or not. That’s 2 possibilities

In total, we have 2 * 2 * ... * 2 possibilities, where we multiply 2 with itself N times. This product is equal to 2 ** N.
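The same counting argument can be verified with an iterative sketch of my own: each subset corresponds to an N-bit mask, where bit i decides whether the ith element is included. The number of masks, and therefore subsets, is 2 ** N.

```javascript
// Generate every subset of an array via bitmasks. For a mask between
// 0 and 2 ** N - 1, bit i set means array[i] is part of the subset.
function allSubsets( array ) {
    const subsets = [];
    for ( let mask = 0; mask < 2 ** array.length; ++mask ) {
        subsets.push( array.filter( ( _, i ) => mask & ( 1 << i ) ) );
    }
    return subsets;
}

console.log( allSubsets( [1, 2, 3] ).length ); // 8, that is, 2 ** 3
```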

The exponential algorithm is a lot worse than the O(N ** 4) algorithm.

In general, polynomial time algorithms are applauded, while exponential algorithms are avoided whenever possible.

In most interviews, even in polynomial time algorithms, it matters a lot whether you deliver an O(N ** 2), an O(N * log(N)) or an O(N) solution.

O(N * log(N)) contains a logarithm. Why is it polynomial time?

Because O(N * log(N)) can be overestimated by O(N ** 2), and the latter is in polynomial time.

Summary

There is nothing hard about the big O notation. All you need to do is run through this article a few times and understand the terminology.

Now that you know algorithmic complexity, you can start solving interview exercises with higher confidence.


In this article, you will learn about higher order functions, an important cornerstone of functional programming. If you want to write programs in a mostly functional style, it is inevitable that you master the basics of higher order functions.

First, you will learn the simple definition of higher order functions.

I will make sure you remember the definition by showing you some higher order functions you may already be using.

We will conclude this article by implementing a higher order function example.

What are higher order functions?

Higher order functions are functions that accept function arguments or return a function.

The or in this definition is not in an exclusive sense.

We learned in the previous section that JavaScript functions are values. Therefore, functions can be passed to other functions, and they can also be return values of functions.

This is all the theory you need to understand higher order functions. The name higher order function seems scary at first, but there is really nothing scary about it: we just pass functions as arguments or return them as return values.
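As a minimal illustration of my own, here is a function that both accepts a function argument and returns a function:

```javascript
// twice takes a function f and returns a new function applying f two times.
const twice = f => x => f( f( x ) );

const addThree = x => x + 3;
const addSix = twice( addThree );

console.log( addSix( 10 ) ); // 16
```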

Note that you may know many higher order functions already, especially if you went through my JavaScript mini course.

setTimeout( () => console.log('done'), 1000 );

A great example for higher order functions is setTimeout. The first argument of setTimeout is a function, so setTimeout is a higher order function.

document.querySelector( '.js-submit' )
   .addEventListener( 'click', submitCallback );

When adding events to a DOM node, the function registering the event is a higher order function.

Both setTimeout and addEventListener are higher order functions, because they have function arguments. Let’s now see an example that returns a function.

Context binding returns a function with a bound this value. Therefore, bind is a higher order function.

const area = function() {
    return this.width * this.height;
};

const boundArea = area.bind( { width: 2, height: 3 } );

boundArea();

Technically, bind is a method of the area function, inherited from Function.prototype through the prototype chain. We can always rewrite such a method call into a form where bind is a standalone function and area is an argument:

const bind = function( f, ...args ) {
    return f.bind( ...args );
}

const boundArea = bind( area, { width: 2, height: 3 } );

After the rewrite, we can clearly see that bind not only accepts a function argument, but it also returns a function.

Writing higher order functions

We may also want to write a higher order function ourselves to demonstrate its usage.

Suppose that your task is to format integer values representing cents to currencies. The request includes some customization, such as specifying the currency symbol, and the decimal separator.

If the template literal return value is not familiar to you, read my article on strings and template literals.

const formatCurrency = function( 
    currencySymbol,
    decimalSeparator  ) {
    return function( value ) {
        const wholePart = Math.trunc( value / 100 );
        let fractionalPart = value % 100;
        if ( fractionalPart < 10 ) {
            fractionalPart = '0' + fractionalPart;
        }
        return `${currencySymbol}${wholePart}${decimalSeparator}${fractionalPart}`;
    }
}

> getLabel = formatCurrency( '$', '.' );

> getLabel( 1999 )
"$19.99"

> getLabel( 2499 )
"$24.99"

formatCurrency returns a function with a fixed currency symbol and decimal separator.

We pass the formatter a value, then format this value by extracting its whole part and the fractional part. Notice that I used the ES6 math extension trunc to truncate the result.

The return value of this function is constructed by a template literal, concatenating the currency symbol, the whole part, the decimal separator, and the fractional part.

The currency symbol and the decimal separator are not passed to the returned function, they are fixed values.

We can pass integer values to the getLabel function, and we get a formatted representation back.

Therefore, the formatCurrency higher order function returned a usable formatter function.

The undesirable forEach and map-reduce-filter

Some higher order functions are useful for handling arrays in JavaScript. In fact, when writing code in mostly functional style, we often use these functions instead of loops. These functions are:

  • map,
  • reduce,
  • filter.

You may rightfully ask, why I haven’t mentioned the forEach method? After all, we are talking about loops, aren’t we?

The problem with forEach is that it is a completely useless function in functional programming. When writing code in a purely functional style, it makes no sense to use forEach. Let’s see a simple example:

const values = [1, 2, 3, 3, 5];
let sum = 0;

values.forEach( v => { sum += v; } );

console.log( sum );

The forEach helper iterates over values and calls its first argument on each value in the array. The return value of this forEach call is undefined, so from this perspective, it does not matter what is in the function body. When we execute the function body sum += v, a side effect is created, modifying the context outside the scope of the forEach callback.

Pure functional programming is side-effect free. The forEach helper of arrays does not return any usable value. Therefore, the only reason to use forEach is to rely on side-effects inside the callback of forEach.

In tech interviews, I often look puzzled when candidates declare that they are going to use functional programming, so instead of the for loop, they use a forEach helper. Don’t walk into this trap.

There are better functions to manipulate arrays.

You can achieve the same result with reduce:

const values = [1, 2, 3, 3, 5];
const sum = values.reduce( (accumulator, v) => accumulator + v, 0 );
console.log( sum ); // 14

Reduce is a higher order function with a function argument. This function argument is executed on each element of the array. It takes an accumulator variable and one value from the array. The return value of this function argument is the new value of the accumulator. This new value will be used in the next call belonging to the next element.

Let’s print the state of the accumulator variable and the upcoming array value in each iteration:

const values = [1, 2, 3, 3, 5];
const sum = values.reduce( (accumulator, v) => {
    const result = accumulator + v;
    console.log( `accumulator, v, result: ${ accumulator }, ${ v }, ${ result }` );
    return result;
}, 0 );

The following values are printed to the console:

accumulator, v, result: 0, 1, 1
accumulator, v, result: 1, 2, 3
accumulator, v, result: 3, 3, 6
accumulator, v, result: 6, 3, 9
accumulator, v, result: 9, 5, 14

That’s all you need to know about reduce. You will soon get an exercise, where you will be able to use it in practice.

Our next function is map. Map is a higher order function that takes each element of an array, transforms it using a callback function, and returns an array of the transformed values.

For instance, we can get different powers of 2 using the following expression:

> [0, 1, 2, 3, 4].map( v => 2 ** v );
[1, 2, 4, 8, 16]

Remember, the ** is the exponential operator introduced in ES2016. Read 2 ** v as “two to the power of v”.

Let’s now construct the first 50 powers of 2. Assuming that we don’t want to construct an array of 51 elements by hand, we could just take an array with null elements:

new Array( 51 ).fill( null )

The map function’s callback may accept a second argument, which is the index of the current element of the array:

> new Array( 51 ).fill( null ).map( (item, index) => index )
(51) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]

We just have one step left: instead of returning index, we need to return 2 ** index.

> new Array( 51 ).fill( null ).map( (item, index) => 2 ** index )
(51) [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608, 16777216, 33554432, 67108864, 134217728, 268435456, 536870912, 1073741824, 2147483648, 4294967296, 8589934592, 17179869184, 34359738368, 68719476736, 137438953472, 274877906944, 549755813888, 1099511627776, 2199023255552, 4398046511104, 8796093022208, 17592186044416, 35184372088832, 70368744177664, 140737488355328, 281474976710656, 562949953421312, 1125899906842624]

I have some software developer friends who know at least the first 20 values by heart. They would still use map to create this array though, because it’s faster than writing these digits down.

The third higher order array function is filter. We may want to throw away some elements from an array and return an array that only keeps the rest of the elements.

For instance, suppose we want to throw away all the negative elements from an array:

> [-1, 5, 2, -4, -2, 2].filter( v => v >= 0);
[5, 2, 2]

array.filter( f ) keeps those elements of array for which f( element ) returns a truthy value.

If you don’t know what a truthy value is, check out this article.

Chaining map-reduce-filter

Map and filter return arrays.

Map, reduce, and filter operate on arrays.

Therefore, we can chain any sequence of map and filter calls after each other, and we may even place a reduce call at the end.

For instance, suppose you have an array of strings, and you are interested in finding the length of the longest string that starts with a.

We can find the solution using the following steps:

  1. Keep only the elements that start with a.
  2. Map the remaining elements to their lengths.
  3. Reduce these lengths to their maximum.

const words = ['ab', 'abc', 'bcde'];

words
    .filter( w => w.startsWith( 'a' ) )
    .map( w => w.length )
    .reduce( (acc, v) => Math.max( acc, v ), 0 );

Side note: technically, we don’t need to use reduce to take the maximum of an array. If we store the array of lengths produced by the map step in a wordLengths variable, we could have just written:

Math.max( ...wordLengths );
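Putting it all together, here is a self-contained sketch of the whole chain, with the reduce call swapped for the Math.max spread from the side note:

```javascript
const words = ['ab', 'abc', 'bcde'];

// Keep the words starting with 'a', then map them to their lengths.
const wordLengths = words
    .filter( w => w.startsWith( 'a' ) )
    .map( w => w.length );

// Spreading into Math.max replaces the reduce; 0 covers the empty case.
const longest = Math.max( ...wordLengths, 0 );

console.log( longest ); // 3
```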

If you are ready for more, read this blog post for a map-reduce-filter exercise. You will have to translate an SQL statement to map-reduce-filter calls.

If you prefer the video version, watch it here:

Translating SQL Queries using map, reduce, and filter in JavaScript - YouTube

Summary

Higher order functions are functions that accept a function argument or return a function.

You may be using many higher order functions already, possibly without thinking about them. Just think about setTimeout, event listener callbacks, or the bind function.

Some higher order functions such as map, reduce, and filter help you process arrays easier.

If you want to learn more about functional programming, check out my 5 star course on Safari Books Online: Deep Dive into Functional JavaScript. I can highly recommend this course as a great introductory resource on functional programming.

Sometimes you may get a great deal on Packt’s site too. This is Packt’s Deep Dive into Functional JavaScript page. However, Packt’s promotions are not in my control, so I don’t know when the $120-$140 price tag drops down to $10.

Last, but not least, sign up for my JavaScript mini course if you would like to learn more about

  • arrow functions,
  • destructuring,
  • functions with a variable number of arguments,
  • the ES6 class syntax,
  • the spread operator and rest parameters.
Learn ES6 in Practice
Sign up below to access an ES6 course with many exercises and reference solutions.

This article will give you an overview on the new features of ES2018.

First, we will discuss the new features of ES2018. Then you will learn what you need to do to start using them today.

You may want to try out the code examples in your browser. Just open the developer tools of Google Chrome. If you are using version 66 or higher, you are golden when it comes to ES2018 support.

The proposals are not yet fully implemented. You can find the current compatibility table on kangax.github.io.

Spread and Rest for object properties

The spread operator and rest parameters have been a popular addition in ES2015. You could spread arrays to comma separated values, and you could also add a rest parameter at the end of function argument lists to deal with a variable number of arguments.

Let’s see the same concept applied for objects:

let book = {
    author: 'Zsolt Nagy',
    title: 'The Developer\'s Edge',
    website: 'devcareermastery.com',
    chapters: 8
}

We can now create a destructuring expression, where we match a couple of properties and gather the rest of the properties in the bookData object reference.

let { chapters, website, ...bookData } = book

Once we create this assignment, the chapters numeric field is moved into the chapters variable, website will hold the string 'devcareermastery.com'. The rest of the fields are moved into the bookData object:

> bookData
{author: "Zsolt Nagy", title: "The Developer's Edge"}

This is why the ...bookData is called the rest property. It collects the fields not matched before.

The rest property for objects works in the same way as the rest parameter for arrays. Destructuring works in the exact same way as with arrays too.

Similarly to spreading arrays, we can use the object spread syntax to make a shallow copy of an object. For reference, check out my article on cloning objects in JavaScript.

let clonedBook = { ...book };

You can also add more fields to an object on top of cloning the existing fields.

let extendedBook = {
    pages: 250,
    ...clonedBook
}
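One detail worth knowing about object spread: when the same key occurs more than once, the last occurrence wins. A quick sketch:

```javascript
const book = { title: 'The Developer\'s Edge', pages: 200 };

// Later properties override earlier ones.
const keepExisting = { pages: 250, ...book }; // book.pages wins
const overrideIt   = { ...book, pages: 250 }; // the explicit field wins

console.log( keepExisting.pages ); // 200
console.log( overrideIt.pages );   // 250
```

So in the extendedBook example above, pages: 250 only takes effect because clonedBook has no pages field of its own.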

Asynchronous iterators and generators

In ES2015, the iterator interface was introduced. The iterator interface is based on a well known symbol, Symbol.iterator. An iterable object has a method under the key Symbol.iterator. If you call this method, you create an iterator out of the iterable.

Three years later, the sequel to the iterable interface chapter in JavaScript arrived: the ES2018 asynchronous iterator. The asynchronous iterable interface specifies the implementation of a function under the key Symbol.asyncIterator to create an iterator out of an iterable.

Asynchronous iterators have a next method that returns a { value, done } object. value is the next value of the asynchronous operation, while done is a boolean signalling status.

The tc39 proposal contains an example that has been spread as the default description of how to formulate an async iteration.

for await (const line of readLines(filePath)) {
  console.log(line);
}

We will implement readLines soon.

Notice the for and await keywords next to each other. They are part of the for await...of loop, which iterates over an asynchronous iterable, such as the generator object returned by an asynchronous generator function.

The readLines function is an asynchronous generator function. We await for each value returned by this asynchronous generator, and execute the loop body on it.

The tc39 proposal’s example implementation of the readLines generator function is as follows:

async function* readLines(path) {
  let file = await fileOpen(path);

  try {
    while (!file.EOF) {
      yield await file.readLine();
    }
  } finally {
    await file.close();
  }
}

As you can see, each yielded value has an await operator after it to indicate the asynchronous nature of the result. The asynchronous generator object returned by the asynchronous generator function has a Symbol.asyncIterator function member:

let asyncGenerator = readLines( path );
let asyncIterator = asyncGenerator[Symbol.asyncIterator]();

let promise = asyncIterator.next();

The async iterator can then be used for iteration with the for await...of loop.

Async iterators have a next method that returns a promise. This promise is unwrapped by the for await...of loop.
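You can observe both claims with a small self-contained async generator; countdown is my own example, not from the proposal:

```javascript
// An asynchronous generator function. Its generator object implements
// Symbol.asyncIterator, and its next method returns a Promise that
// resolves to a { value, done } object.
async function* countdown( from ) {
    for ( let i = from; i > 0; --i ) {
        yield i;
    }
}

const gen = countdown( 3 );
console.log( typeof gen[Symbol.asyncIterator] ); // 'function'

const promise = gen.next();
console.log( promise instanceof Promise ); // true
promise.then( ( { value, done } ) => console.log( value, done ) ); // 3 false
```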

Promise finally

Back when I wrote ES6 in Practice, I was really curious why there was no finally in a then, catch chain. I came up with a false justification that maybe, all we need, is just then and catch. The rest is syntactic sugar. This is true to some extent, because we can perform the cleanup activities inside the then and catch callbacks.

It seems though, other people raised the same question, and a proposal got accepted to include a cleanup branch called finally:

new Promise( ( resolve, reject ) => { ... } )
    .then( result => { ... } )
    .then( result => { ... } )
    .catch( error => { ... } )
    .finally( () => { /* cleanup */ } );

ES2018 Regular expression features

There are four updates to ES2018 regular expressions:

  • named capture groups
  • lookbehind assertions
  • dotAll flag
  • unicode property escapes

I wrote a blog post on ES2018 regular expressions already. Check out the description there.

If you want to learn more about regular expressions in JavaScript, check out my course, The JavaScript Regular Expression Launchpad. To get the maximum discount Udemy can offer, click the above link.

Template Literal Revision

This update is a bit intriguing, because we don’t tend to write tagged template literals every day.

If you want to understand tagged templates and template literals on a deeper level, check out my article on Strings and Template Literals.

Let’s see what the update is all about.

In a tagged template, your hands were tied in ES6 in some tiny edge cases. Some character sequences after a backslash were not allowed: an invalid \u unicode escape, an invalid \x hexadecimal escape, or a \0, \1, … octal number escape made the whole template a syntax error.

For instance, \xzsolt is now allowed in a tagged template even though zsolt is not a valid hexadecimal number.

I am sure you have an old browser at hand. I have one called Microsoft Edge. Let me execute an invalid tagged template there:

> String.raw`\xzsolt`
Expected hexadecimal digit

If you execute the same tagged template in your latest Google Chrome, the template literal is processed, and the '\xzsolt' string is returned.

If you want more details on the full revision, check out the tc39 proposal draft.

Can I use the features already?

Check out this compatibility table.

If you use JavaScript on client side, chances are, you want to support other browsers than the recent versions of Google Chrome. Therefore, I encourage you to use Babel to provide compatibility via transpilation.

If you are using JavaScript on the server, you are in control of the Node version you are using. Even though you cannot use all the features yet, be aware of them.

Summary

After the major ES2015 update and the very useful ES2017 additions, ES2018 provided yet another reason to keep believing that the developer experience provided by JavaScript is continuously improving.

Before I knew about the finally clause of promises, I even wrote in my book ES6 in Practice that it would be logical to have a finally clause.


As I promised in my last article on JavaScript fundamentals, you can expect some beginner-friendly content on this blog along with some learning paths.

I strongly believe in practical application. Therefore, I will not bore you with facts, long theory, or rant about which parts of JavaScript I consider good or bad. This tutorial is all about getting our hands dirty and understanding how JavaScript works inside out.

Table of Contents
  1. Sit down in front of a computer. Don’t start this tutorial before sitting down comfortably in front of a computer.
  2. Comments. Learn how to create comments that help you remember what you wrote in your code.
  3. Your first JavaScript line. Start writing JavaScript code on your own.
  4. Dealing with data. Use JavaScript like a calculator. Learn how to formulate conditions. Learn how to represent simple data.
  5. The let keyword. Learn how to store your data for future reference.
  6. The Object type. Construct composite data that describes different features of things in your world.
  7. Arrays. Create and maintain a list of data.
  8. Functions. Create functions that perform calculations on data.
  9. Types. Learn how to find out the type of data you stored. Are you dealing with a number? Some text? Is the value there at all?
  10. Some more operators. Enrich your code with more operators that help you change values, and help you formulate better conditions.
  11. Control structures. Learn how to branch off based on conditional statements. Learn how to repeat your code as many times as you want, by just writing down your code once.
  12. Exercises. Get many exercises and reference solutions to practice what you learned.

Sit down in front of a computer

If you are reading this article on a mobile device without access to a computer, I am warning you, it’s not optimal usage of your time. I recommend that you save this article for later, and get back to it once you can follow the coding examples.

After the tutorial, you will see a lot of exercises. I encourage you to solve them to verify your knowledge.

Comments

We often place comments in our code so that we know what it is doing.

JavaScript is similar to C++ and Java in its commenting style:

  • Everything between /* and */ is ignored by the interpreter
  • Everything after // on the same line is ignored by the interpreter.

let a = 5 + 2; // This is a single line comment lasting until the end of the line.

/*
     A comment that can span multiple lines.
     Everything inside the comment will be ignored by the JavaScript interpreter.
     let b = 2;
*/

The above two comments are interpreted as:

let a = 5 + 2;

Everything else is ignored.

Why do we use comments? To avoid ending up in similar situations:

// 
// Dear maintainer:
// 
// Once you are done trying to 'optimize' this routine,
// and have realized what a terrible mistake that was,
// please increment the following counter as a warning
// to the next guy:
// 
// total_hours_wasted_here = 42
//

My other favorite is:

// I dedicate all this code, all my work, to my wife, Darlene, who will 
// have to support me and our three children and the dog once it gets 
// released into the public.

Source: StackOverflow

Summary: write readable code with good comments!

Your First JavaScript Line

In software development, we often brag about executing a "Hello World!" program. In some languages like Java, it takes a lot to write Hello World to the standard output. In JavaScript, we get away with one single line:

console.log( "Hello World!" );

You may ask, where can I execute this line?

If you are lazy, you can simply execute it in an online editor such as CodePen, introduced below. Try it out:

If you typed the message console.log( "Hello World!" ), you can see the following evaluation:

"Hello World!"
undefined

The first value is the log output, written to the console by the console.log function.

The second value is undefined, signaling that the console.log function does not have a defined return value. You don’t have to understand what this means until the section on Functions. Until then, just accept that the undefined value appears after your console log.

Notice your statement can span multiple lines. Press enter inside the expression to separate content into two lines. Reformat your code in the above code editor as follows:

console
   .log(
        "Hello world!"
)

As you can see, you can format JavaScript code in any way you want. The interpreter will not care about the redundant whitespace characters.

Experiment a bit more with the log. Instead of “Hello World!”, write 5 + 2. You should see the following:

> console.log( 5 + 2 )
7
undefined

> symbolizes the input of the console.

As it is inconvenient to load my blog each time you want to write JavaScript, I will recommend a service you can use to execute JavaScript code: CodePen. In the above link, I prepared everything you need: a JavaScript editor and a console.

In the editor, type

console.log( "Hello World!" );

Watch it appear in the console.

Congratulations! You managed to write Hello World! to the console twice. Let’s see what we learned:

  • console.log writes a log message to the console
  • "Hello World!" is a string. One way to formulate a string is using double quotes. Mind you, 'Hello World!' is also a valid string notation in JavaScript
  • there is a semicolon at the end of the statement. The semicolon itself is optional, but I recommend using it

While reading this tutorial, I encourage you to keep CodePen open, and play around with the examples.

Later we will learn how to

  • execute JavaScript in our browser developer tools,
  • embed JavaScript in HTML pages,
  • execute JavaScript using node.js.

Dealing with data

Most programming languages help you create values that symbolize a number, a character in a text, or a longer text. You can also symbolize the concept of true and false values using booleans. You can also create values that symbolize the absence of a value. These values are all called primitive datatypes.

The name primitive does not come from a negative place. These datatypes are neither stupid, nor inferior to any other datatypes we use in JavaScript. The name primitive comes from their simplicity. The other datatypes you will learn later are composite, and they consist of many primitive datatypes.

In JavaScript, there are six primitive types:

  • boolean (true or false)
  • number (including integers like 1, -2, and floating point numbers like 1.1, 2e-3)
  • string ( '' or "", 'ES6 in Practice' or "ES6 in Practice" )
  • null type (denoted by null)
  • undefined (denoted by undefined)
  • Symbol (don’t worry about them yet)

At the bottom of the console in CodePen, there is a line with a > sign. This is where you can enter JavaScript expressions. Let’s try some. Enter the expression you see after the > sign. Then press enter. You can see the result appear on the next line.

> 5
5

> 5 + 2
7

> 7 % 5
2

> 5 ** 2
25

Handling integers is straightforward. You have the four arithmetic operations ( +, -, *, /) available for addition, subtraction, multiplication, and division respectively.

The % operator is called modulus. a % b returns the remainder of the division a / b. In our example, 7 / 5 is 1, and the remainder is 2. The value 2 is returned.

The ** operator is called the exponential operator. 5 ** 2 is five raised to the second power.

> 0.1 + 0.2
0.30000000000000004

> 3.1e-3
0.0031

Some more info on floating points. Due to the way how numbers are represented, 0.1 + 0.2 is not exactly 0.3. This is normal and occurs in most programming languages.

3.1e-3 is the normal form of 0.0031. Read it like the exact value of 3.1 times ten to the power of minus three. Although the form is similar to 3.1 * (10 ** -3), there are subtle differences. 3.1e-3 describes the exact value of 0.0031. 3.1 * (10 ** -3) describes a composite expression that needs to be calculated:

> 3.1 * (10 ** -3)
0.0031000000000000003

Floating point arithmetic does not make even this form exact.

The division 0 / 0 or using mismatching types creates a special number called not a number or NaN.

> 0 / 0
NaN

> 'ES6 in Practice' * 2
NaN

The latter is interesting to Python users, because in Python, the result would have been 'ES6 in PracticeES6 in Practice'. JavaScript does not work like that.

There is another interesting value: Infinity.

> 1 / 0
Infinity

> Infinity * Infinity
Infinity

> -1 / 0
-Infinity

> 1e308
1e+308

> 1e309
Infinity

JavaScript registers very large numbers as infinity. For instance, ten to the power of 309 is represented as infinity. Division by zero also yields infinity.

Let’s see some strings.

> 'ES6 in ' + 'Practice'
"ES6 in Practice"

The plus operator concatenates strings. Concatenation means that you write the contents of two strings after each other.

Strings are immutable which means that their value cannot be changed. When concatenating two strings, the result is saved in a third string.

If one operand of plus is a string and the other is a number, the result becomes a string. JavaScript automatically converts the operands of an operator to the same type. This is called automatic type casting:

> 1 + '2'
"12"

> '1' + 2
"12"

Rules may become confusing, so don’t abuse automatic type casting. Just know that you may have to explicitly cast a string to an integer to be able to add it to another integer:

> 1 + +"2" // +"2" gives a sign to "2", converting it to a number
3

> 1 + Number("2")
3

> 1 + Number.parseInt( "2", 10 )
3

> 1 + Number.parseInt( "2" )
3

All conversions work. The first relies on giving a sign to a numeric string which converts it to a number. Then 1+2 becomes 3. The second type cast is more explicit: you use Number to wrap a string and convert it to a number.

I recommend using the third option: Number.parseInt with a radix. parseInt converts a string into a number. The second argument of parseInt is optional: it describes the base in which we represent the number.

> Number.parseInt("ES6 in Practice")
NaN

> Number.parseInt( "10", 2 )
2

> Number.parseInt( "a" )
NaN

> Number.parseInt( "a", 16 )
10

Arbitrary strings are often NaN. "10" in base 2 is 2. You can see how easy it is to convert a binary or a hexadecimal (base 16) string into a decimal number. Base 16 digits are 0123456789abcdef. The last six digits may also be upper case.

Number.parseInt recognizes the starting characters of a string as integer numbers, and throws away the rest:

Number.parseInt( "1234.567 89" )
1234

The dot is not a character present in integer numbers, so everything after 1234 is thrown away by Number.parseInt.

You can also use Number.parseFloat to parse floating point. It parses the floating point number until the terminating space:

Number.parseFloat( "1234.567 89" )
1234.567

Let’s see some boolean values. Booleans are either true or false.

> 5 <= 5
true

> 5 < 5
false

> !(5 < 5)   // ! stands for negation. !true = false, !false = true.
true

> !!""
false

> !!"a"
true

> !!0
false

> !!1
true

> !!NaN
false

> !!Infinity
true

> !!null
false

> !!undefined
false

We can compare two numbers with >, >=, ==, ===, <=, <. We will discuss the difference between == and === soon. For the rest of the operators, the result of the comparison is a boolean.

The ! operator negates its operand. True becomes false, and false becomes true.

A truthy value is a value v for which !!v is true. Example truthy values are: nonzero integers, strings containing at least one character.
A falsy value is a value w for which !!w is false. Example falsy values are: empty string, 0, null, undefined.

We can convert a value to a boolean by negating it twice: !!:

  • assuming v is truthy, !v becomes false. !!v becomes !false, which becomes true.
  • assuming w is falsy, !w becomes true, and !!w becomes false.
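If the double negation feels cryptic, the built-in Boolean function gives the same result in a more readable form. A quick sketch:

```javascript
// Boolean( v ) converts any value to true or false,
// exactly like the double negation !!v.
console.log( Boolean( 'a' ) ); // true  - a non-empty string is truthy
console.log( !!'a' );          // true

console.log( Boolean( 0 ) );   // false - 0 is falsy
console.log( !!0 );            // false
```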

For values a and b, a == b is true if and only if both a and b can be converted to the same value via type casting rules. This includes:

  • null == undefined is true
  • If an operand is a string and the other operand is a number, the string is converted to a number
  • If an operand is a number and the other operand is a boolean, the boolean is converted to a number as follows: true becomes 1, and false becomes 0.

Don’t worry about the exact definition, you will get used to it.

For values a and b, a === b is true if and only if a == b and both a and b have the same types.

> 5 == '5'   // '5' is converted to 5
true

> 5 === '5'  // types have to be the same
false

> 0 == ''    // '' is converted to 0
true

> 0 === ''   // types have to be the same
false

> NaN == NaN // I know... just accept this as something odd and funny
false

The negation of == is !=. Read it as is not equal to.

The negation of === is !==.

> 5 != '5'
false

> 5 !== '5'
true

Let me introduce the ternary operator to drive home a strong point about truthiness.

The value of a ? b : c is:

  • b if a is truthy
  • c if a is falsy

It is important to note the difference between 2 == true and !!2.

> 2 == true    // true is converted to 1
false 

> !!2          // 2 is a truthy value
true

> 2 == true ? 'the condition is true' : 'the condition is false'
"the condition is false"

> !!2 ? 'the condition is true' : 'the condition is false'
"the condition is true"

I have seen the nastiest bug of my life in code where a condition was in the format num == true. As I never felt like learning boring definitions, my lack of knowledge shot me in the foot, because I assumed the opposite conversion in 2 == true. I can save you some headache by highlighting this common misconception: in 2 == true, true is converted to 1, and not the other way around.

Null, undefined, and Symbols are primitive types.

Null represents an intentional absence of a primitive or composite value of a defined variable.

Undefined represents that a value is not defined.

A Symbol() is a unique value without an associated literal value. They are useful as unique keys, because Symbol() == Symbol() is false. At this stage, just accept that symbols exist. You don’t have to use them for anything yet.

> null
null

> undefined
undefined

> void 0
undefined

> Symbol('ES6 in Practice')
[object Symbol] {}

The let keyword

First of all, in the below example, you have to know that console.log may print any number of arguments separated by commas. In the console, the values appear next to each other separated by a space.

We can create variables with the let keyword. Think of a variable like a drawer. Let declares a variable, which means to you that a drawer is created with a handle.

let myDrawer;

You can put a value in your drawer:

myDrawer = '$1.000';

In order to access the value, you have to grab the handle of the box and open it. In this example, you have a drawer called myDrawer. It contains a string written '$1.000' on it. To access your thousand bucks, you have to open the drawer:

> myDrawer
'$1.000'

You can assign an initial value to your variable with the = sign. This is called initialization, and it can occur either in the same statement where you declared the variable (see x), or after the declaration (see y). You may access a declared variable even if you have not initialized it. Its value becomes undefined.

let x = 5;

let y;
y = x ** 2;

let z;

console.log( x, y, z );

As the above editor is editable, try out one thing.

Move let z below the console.log statement. You should see a ReferenceError:

ReferenceError: z is not defined

The message is somewhat misleading, because it means z is not declared using the let keyword. Don’t mix this message with the undefined value. You only get the above reference error if you reference a variable that does not exist.

You did the following: you asked for the contents of your drawer z in the console log. But the drawer itself does not exist yet. You only created the drawer afterwards, with let z;:

console.log( z )  // Reference Error: z is not declared

let z;

Side note: I know, in most tutorials, you see var instead of let. This is an advantage of reading an ES2018-compliant tutorial. Don’t worry about var for now, you will hear about it later. Ok, I understand. If you do worry about it, read this article.

The Object type

This section will introduce the Object type. We will only learn the basics here.

An object is a data structure with String keys and arbitrary values. Imagine it like a machine that accepts a key, and gives you a value. Some people call this data structure an associative array, others call it a hashmap. These names may sound fancy, but essentially we mean the same damn thing.

Side note: Symbol keys are also allowed in objects. This is an advanced topic, you will learn it later.

let author = {
  name: null,
  website: "zsoltnagy.eu",
  age: 35
}

author.name = "Zsolt";
console.log( author.name, author[ "website" ] )
// Prints: Zsolt zsoltnagy.eu

delete author.name
console.log( "Author name: ", author.name )
// Prints: Author name: undefined

A field in an object can be referenced in the following ways:

  • dot notation: object_name.member_name
  • bracket (associative array) notation: object_name[ string_or_symbol_value ]

The delete operator deletes a field from an object.

If a field in an object is deleted or not even declared, the value of this field is undefined.

We will learn a lot more about objects later.

Arrays

An array is an ordered list of items. The items may be of any type. You know, in most post offices, there are hundreds or thousands of post boxes. Each post box may or may not contain something. Each post box has a numeric handle. Post box 25 may be your box. You unlock it, grab its handle, and access its contents.

The trick is that in case of arrays, you have to imagine the post boxes with keys 0, 1, 2, and so on. Typically, arrays have continuous keys.

let days = [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday' ];

Arrays do not have to contain elements of the same type:

let storage = [ 1, 'Monday', null ];

Each element of the array can be accessed using an index starting from zero:

> days[0]
'Monday'

> days[4]
'Friday'

> days[5]
undefined

In the third example, we indexed out of the array.

Arrays have lengths:

> days.length
5

You can add elements to the beginning and to the end of the array.

> days.push( 'Saturday' );  // add 'Saturday' to the end

> console.log( days );
["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

> days.unshift( 'Sunday' ); // add 'Sunday' to the beginning
> console.log( days );
["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

You can also remove these elements from the array:
– Pop removes the last element from the array and returns it.
– Shift removes the first element from the array and returns it.

> let element = days.pop(); 
> console.log( element, days );
"Saturday" ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

> let secondElement = days.shift();
> console.log( secondElement, days );
"Sunday" ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

Similarly to objects, you can delete any element from the array. The delete operator returns true, and reading the deleted position afterwards gives undefined:

> delete days[2]
true

> console.log( days );
["Monday", "Tuesday", undefined, "Thursday", "Friday"]

The values of an array can be set by using their indices, and equating them to a new value. You can overwrite existing values, or add new values to the array. The indices of the added values do not have to be continuous:

> days[2] = 'Wednesday';
> days[9] = 'Wednesday';
> console.log( days );
["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", undefined, undefined, undefined, undefined, "Wednesday"]

As with most topics, bear in mind that we are just covering the basics to get you started in writing code. There are multiple layers of knowledge on JavaScript arrays. We will uncover these lessons once they become important.

Functions

Think of a function like a mathematical function giving you a relationship between input and output variables. If you don’t like maths, think of a function like a vending machine. You give it some coins and a number, and it spits out some cold coke.

function add( a, b ) {
    return a + b;
}

This function definition describes the relationship between its input variables a and b, and the return value of the function.

The return statement returns the value of the function. When calling the add function with arguments a and b, it computes the value a+b and returns it. Example:

function add( a, b ) {
return a + b;
}

add( 5, 2 )

Try to modify the input variables. The return value also changes. Try to call the add function with one variable, e.g. add( 5 ), and see what happens.

Functions are useful to create reusable chunks of code that you can call with different arguments. We will write more useful functions once you learn the basics of control structures.

You can also define functions without placing the name between the function keyword and the argument list. This structure is great if you want to create a reference to it using a variable. Remember, the variable subtract is a handle to a drawer. This time, your drawer contains a function.

let subtract = function( a, b ) {
    return a - b;
}

There is another popular notation first introduced in ES6: the fat arrow notation.

let multiply = ( a, b ) => a * b;

This article is a continuation of last week’s post on ES2017 Async-Await, detailing the updates of ES2017.

Check out the ES2016 plus compatibility table for more information on the current browser support. Most likely, you will need a transpiler.

Use the ES2017 Babel Preset for code transpilation if you want to support older browsers.

New Object Extensions

In this section, we will introduce three Object methods:

  • Object.entries,
  • Object.values,
  • Object.getOwnPropertyDescriptors.
Object.entries and Object.values

We already know Object.keys:

let account = {
    first: 'Zsolt',
    last: 'Nagy',
    email: 'info@zsoltnagy.eu'
};

Object.keys( account );
> ["first", "last", "email"]

In ES2017, Object.values and Object.entries are also available:

Object.values( account )
> ["Zsolt", "Nagy", "info@zsoltnagy.eu"]

Object.entries( account ) 
>  [ ["first", "Zsolt"], 
     ["last", "Nagy"], 
     ["email", "info@zsoltnagy.eu"] ]

Object.entries can also be used for creating maps.

let accountMap = new Map( Object.entries( account ) );
> Map {
    "first" => "Zsolt", 
    "last"  => "Nagy", 
    "email" => "info@zsoltnagy.eu"
  }

Symbol keys are ignored by the keys, values, and entries methods.

The entries method was already defined for arrays in ES6, and it returned an ArrayIterator.

let iterator = Object.values( account ).entries();
> ArrayIterator {}

console.log( iterator.next() );
> { value: [0, "Zsolt"], done: false }

for ( let [index, value] of iterator ) {
    console.log( index, value );
}
> 1 "Nagy"
> 2 "info@zsoltnagy.eu"

The first entry, [0, "Zsolt"], was already consumed by the iterator.next() call above, which is why the loop starts at index 1.

Object.getOwnPropertyDescriptors

Object.getOwnPropertyDescriptors returns all property descriptors of its first argument:

let player = {
    cards: [ 'Ah', 'Qc' ], 
    chips: 1000 
};

let descriptors = 
    Object.getOwnPropertyDescriptors( player );

console.log( descriptors );
> Object {cards: Object, chips: Object}

console.log( descriptors.cards );
> Object {
    value: Array[2], 
    writable: true, 
    enumerable: true, 
    configurable: true
  }

Object.getOwnPropertyDescriptors returns all property descriptors in an object with the same keys as the keys of the original object. The following property descriptor fields may be returned (source: developer.mozilla.org):

  • value: the value of the property
  • writable: true if and only if the value associated with the property may be changed (data descriptors only)
  • get: A function which serves as a getter for the property, or undefined if there is no getter (accessor descriptors only)
  • set: A function which serves as a setter for the property, or undefined if there is no setter (accessor descriptors only)
  • configurable: true if and only if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object
  • enumerable: true if and only if this property shows up during enumeration of the properties on the corresponding object

Let’s construct an example for getters and setters.

let player = {
    cards: [ 'Ah', 'Qc' ], 
    chips: 1000,
    flop: [ '7d', '7c', '2c' ],
    get hand() {
        return [ ...this.cards, ...this.flop ];
    },
    set hand( newHand ) {
        if ( newHand.length && newHand.length === 5 ) {
            [ this.cards[0], 
              this.cards[1], 
              ...this.flop 
            ] = newHand;
        }
    }
};

let descriptors =
    Object.getOwnPropertyDescriptors( player );

console.log( descriptors );
> Object {
    cards: Object, 
    chips: Object, 
    flop: Object, 
    hand: Object
}

console.log( Object.keys( descriptors.hand ) );
> ["get", "set", "enumerable", "configurable"]

descriptors.hand.get
> function get hand() {
    return [ ...this.cards, ...this.flop ];
  }

Object.getOwnPropertyDescriptors handles String keys as well as Symbol keys. (In order to show Symbol keys in node and in some browsers, use console.dir with the flag showHidden: true instead of console.log. Check out this node issue for more information.)

let s = Symbol('test');
let test = {
    [s]: 'test'   
};

console.log( Object.getOwnPropertyDescriptors( test ) );
> { Symbol(test): Object {
        configurable : true
        enumerable : true
        value : "test"
        writable : true
    }
  }

As a consequence, Object.getOwnPropertyDescriptors can be used to make shallow copies of objects using Object.create.

Object.create takes two arguments:

  • the prototype of the object we wish to clone,
  • the property descriptors of the object.

In order to illustrate the difference between a shallow copy and a deep copy, let’s create a shallow copy of the player object defined above.

let player = {
    cards: [ 'Ah', 'Qc' ], 
    chips: 1000,
    flop: [ '7d', '7c', '2c' ],
    get hand() {
        return [ ...this.cards, ...this.flop ];
    },
    set hand( newHand ) {
        if ( newHand.length && newHand.length === 5 ) {
            [ this.cards[0], 
              this.cards[1], 
              ...this.flop 
            ] = newHand;
        }
    }
};

let proto = Object.getPrototypeOf( player );
let descriptors = 
    Object.getOwnPropertyDescriptors( player );

let newPlayer = Object.create( proto, descriptors );

newPlayer.chips = 1500;

console.log( player.chips, newPlayer.chips );
> 1000 1500

We have created two seemingly independent entities. However, when trying to change a card of the new player, the change will be made in the context of the old player as well.

newPlayer.cards[1] = 'Ad';

console.log( newPlayer.cards[1], player.cards[1] );
> 'Ad' 'Ad'

This is because shallow copying only copied the reference of the cards array to the new player object. The original and the copied reference point at the exact same array.

New String Extensions

This section is about two String prototype extensions:

  • String.prototype.padStart,
  • String.prototype.padEnd.

These two methods are not yet implemented in all browsers. You need to open the Firefox developer tools or Chrome 57 to experiment with them.

Padding is used to add additional characters to the start or the end of a string so that it reaches a given size.

Padding is useful in character mode for alignment.

In the following example, let’s format the amounts such that the integer part contains 6 characters, and the fractional part contains 2 characters. Let’s pad the characters in front of the integer part with spaces, and the decimal part with zeros.

Let’s console log the result.

let amounts = [
    '1234.0',
    '1',
    '2.56'
];

console.log( `|dddddd.ff|` );
for ( let amount of amounts ) {
    let [ front, back = '' ] = amount.split('.');
    front = front.padStart( 6 );
    back = back.padEnd( 2, '0' );
    console.log( `|${front}.${back}|` );
}

> |dddddd.ff|
> |  1234.00|
> |     1.00|
> |     2.56|

If the second argument of padStart or padEnd is not given, ' ' characters will be used by default.


The ability to write asynchronous functions is a major update in ES2017.

In order to understand this chapter, I suggest that you review the chapter on promises.

What are asynchronous functions?

Asynchronous functions are functions that return a promise. We denote them by using the async keyword.

const loadData = async function( value ) {
    if ( value > 0 ) {
        return { data: value };
    } else {
        throw new Error( 'Value must be greater than 0' );
    }
}

loadData( 1 ).then( response => console.log( response ) );

loadData( 0 ).catch( error => console.log( error ) );

When loadData returns an object, the return value is wrapped into a promise. As this promise is resolved, the then callback is executed, console logging the response.

When loadData is called with the argument 0, an error is thrown. This error is wrapped into a rejected promise, which is handled by the catch callback.

In general, return values of an async function are wrapped into a resolved promise, except if the return value is a promise itself. In the latter case, the promise is returned. Errors thrown in an async function are caught and wrapped into a rejected promise.

The await operator

Await is a prefix operator standing in front of a promise.

As long as the promise behind the await operator is in a pending state, await blocks the execution of the surrounding async function.

As soon as the promise is resolved, await returns the fulfillment value of the promise.

As soon as the promise is rejected, await throws the value of rejection.

Let’s see an example:

const delayedPromise = async () => {
    let p = new Promise( ( resolve, reject ) => {
        setTimeout( () => resolve( 'done' ), 1000 );
    } );
    const promiseValue = await p;
    console.log( 'Promise value: ', promiseValue );
}

delayedPromise();

// ... after 1 second
> Promise value:  done

The await operator can only be used in asynchronous functions. If the async keyword is removed from the previous example, an error is thrown:

const delayedPromise2 = () => {
    let p = new Promise( ( resolve, reject ) => {
        setTimeout( () => resolve( 'done' ), 1000 );
    } );
    const promiseValue = await p;
    console.log( 'Promise value: ', promiseValue );
}

> Uncaught SyntaxError: Unexpected identifier

Combining async and await

We already know that asynchronous functions return a promise.

We also know that the await keyword

  • expects a promise as its operand,
  • should be placed in asynchronous functions

As a consequence, we can await the results of asynchronous functions inside other asynchronous functions.

const loadData = async () => {
    disableSave();
    const resultSet1 = await asyncQuery1();
    displayResultSet1( resultSet1 );
    const resultSet2 = await asyncQuery2();
    displayResultSet2( resultSet2 );
    enableSave();
}

This hypothetical loadData function loads two tables by accessing a server via an API.

First query 1 is executed. The execution of loadData is blocked until the promise returned by asyncQuery1 is resolved.

Once resultSet1 is available, the displayResultSet1 function is executed.

Afterwards, asyncQuery2 is executed. Notice that this function is only called after the promise returned by asyncQuery1 is resolved. In other words, asyncQuery1 and asyncQuery2 are executed sequentially, one after the other.

Once resultSet2 becomes available, the results are displayed.

There is only one problem with this example. Imagine a web application accessing ten API endpoints. Assume that each server call takes one second on average. If our page can only be rendered after all ten asynchronous calls are executed, we will have to wait ten seconds until the user can browse our page. This is unacceptable.

This is why it makes sense to execute asynchronous queries in parallel. We can use Promise.all to create a single promise that is fulfilled once all the promises passed to it are fulfilled, letting the underlying requests run in parallel.

const loadData = async () => {
    disableSave();
    const [resultSet1, resultSet2] = await Promise.all([
        asyncQuery1(),
        asyncQuery2()
    ] );
    displayResultSet1( resultSet1 );
    displayResultSet2( resultSet2 );
    enableSave();
}

In this example, the queries are executed in parallel. If the array inside Promise.all contained ten queries, and each query took one second to execute, the execution time of the whole Promise.all expression would still be roughly one second.
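The timing claim is easy to verify. The following is a minimal, runnable sketch in which a delay helper and the 100/200 ms timings stand in for the hypothetical asyncQuery1 and asyncQuery2 calls:

```javascript
// Stand-in for a slow API call: resolves with value after ms milliseconds.
const delay = ( ms, value ) =>
    new Promise( resolve => setTimeout( () => resolve( value ), ms ) );

const timedLoad = async () => {
    const start = Date.now();
    const [ resultSet1, resultSet2 ] = await Promise.all( [
        delay( 100, 'resultSet1' ),
        delay( 200, 'resultSet2' )
    ] );
    // Elapsed time is close to the slowest promise (~200 ms),
    // not the sum of the delays (~300 ms).
    return { resultSet1, resultSet2, elapsed: Date.now() - start };
};

timedLoad().then( result => console.log( result.elapsed, 'ms' ) );
```

Had we awaited the two delays one after the other, the elapsed time would be the sum of the delays instead.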

The two solutions are not equivalent though. Suppose that the average time taken to retrieve resultSet1 is 0.1 seconds, while resultSet2 can only be retrieved in one second.

In this case,

  • the parallel version saves 0.1 seconds of total execution time compared to the sequential one,
  • however, in the parallel version, displayResultSet1 is only executed after all queries have returned. This means that we can expect a 0.9 second delay in rendering the first result set compared to the sequential version.

We can combine the advantages of the two versions by making use of the chainability of the then callback of promises.

const loadData = async () => {
    disableSave();
    const [resultSet1, resultSet2] = await Promise.all([
        asyncQuery1().then( displayResultSet1 ),
        asyncQuery2().then( displayResultSet2 )
    ] );
    enableSave();
}

In this version of the code, the queries run in parallel, and each displayResultSet handler is executed as soon as its own promise is resolved. This means that the first result set is rendered after 0.1 seconds, while the second is rendered after one second.
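The following sketch demonstrates the same idea with stand-in queries. Each then handler fires as soon as its own promise fulfills, so the faster result is always rendered first; the helper and the recorded render order are illustrative:

```javascript
// Stand-in for a slow API call: resolves with value after ms milliseconds.
const delayedValue = ( ms, value ) =>
    new Promise( resolve => setTimeout( () => resolve( value ), ms ) );

const loadAndRender = async () => {
    const renderOrder = [];
    await Promise.all( [
        // Each handler runs as soon as its promise settles,
        // not when Promise.all as a whole settles.
        delayedValue( 100, 'resultSet1' ).then( r => renderOrder.push( r ) ),
        delayedValue( 200, 'resultSet2' ).then( r => renderOrder.push( r ) )
    ] );
    return renderOrder;
};

loadAndRender().then( order => console.log( order ) );
// [ 'resultSet1', 'resultSet2' ]
```

The faster query lands at the front of the render order even though it appears first in the array for a different reason: its handler simply fired 100 ms earlier.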

Parallel execution without await

Let’s remove the disableSave and enableSave functions from the previous example:

const loadData = async () => {
    const [resultSet1, resultSet2] = await Promise.all([
        asyncQuery1().then( displayResultSet1 ),
        asyncQuery2().then( displayResultSet2 )
    ] );
}

The function still works as expected; however, the implementation is needlessly complex.

We could simply execute the two asynchronous queries and their corresponding handlers one after the other without wrapping them in Promise.all:

const loadData = () => {
    asyncQuery1().then( displayResultSet1 );
    asyncQuery2().then( displayResultSet2 );
}

By not using await, we do not block the execution of asyncQuery2 until the promise of asyncQuery1 is resolved. Therefore, the two queries are still executed in parallel.

Notice that this implementation of loadData is not even declared as async: we no longer use the await keyword inside the function, and we have no use for the promise that an async function would return.
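One caveat of dropping await is that there is no surrounding try-catch to trap rejections, so each promise chain needs its own catch handler. A minimal sketch, with an illustrative query that always rejects:

```javascript
// Hypothetical query that always rejects; the names are illustrative.
const failingQuery = () => Promise.reject( new Error( 'network error' ) );

const loadDataSafely = () =>
    failingQuery()
        .then( result => `displayed ${ result }` )
        // Without a catch handler, the rejection would go unhandled.
        .catch( e => `handled: ${ e.message }` );

loadDataSafely().then( outcome => console.log( outcome ) );
// handled: network error
```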

Awaiting a rejected promise

There are cases when the operand of await becomes a rejected promise. For instance, our promise becomes rejected

  • when reading a file that does not exist,
  • when encountering an I/O error,
  • when an API call runs into a session timeout.

When a promise p becomes rejected, await p throws an error. As a consequence, we have to handle all sources of errors by placing error-prone await expressions in try-catch blocks.

// inside an async function
try {
    await p;
} catch( e ) {
    /* handle error */
}
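As a runnable sketch of this pattern, the rejectAfter helper and the readSettings function below are illustrative names, standing in for any failure-prone asynchronous operation:

```javascript
// Stand-in for a failing I/O operation: rejects after ms milliseconds.
const rejectAfter = ( ms, reason ) =>
    new Promise( ( resolve, reject ) =>
        setTimeout( () => reject( new Error( reason ) ), ms ) );

const readSettings = async () => {
    try {
        // await throws here, because the promise is rejected
        return await rejectAfter( 50, 'file not found' );
    } catch ( e ) {
        // the rejection value is available as the caught error
        return `using defaults (${ e.message })`;
    }
};

readSettings().then( settings => console.log( settings ) );
// using defaults (file not found)
```

Note that the catch block turns the rejection into an ordinary return value, so the promise returned by readSettings itself fulfills rather than rejects.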

Position of the async keyword

First, we can create asynchronous function expressions out of regular functions or arrow functions. In both cases, the async keyword is written in front of the function keyword or the parameter list:

const name1 = async function() { ... }

const name2 = async ( ...args ) => returnValue;

Second, when creating a function declaration, async is written in front of the function keyword:

async function name3() { ... }
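Regardless of which form we choose, the result behaves the same way: calling the function returns a promise wrapping its return value. A quick sketch, with illustrative names:

```javascript
// All three forms define asynchronous functions.
async function declared() { return 1; }
const expression = async function() { return 2; };
const arrow = async () => 3;

// Each call returns a promise, not the raw return value.
console.log( declared() instanceof Promise );
// true
```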

Summary

Asynchronous functions are functions that return a promise. These functions can handle I/O operations, API calls, and other forms of delayed execution.

Awaiting the resolution of a promise returns the fulfillment value of the promise, or throws an error upon rejection. The await operator makes it possible to execute asynchronous operations sequentially or in parallel.

Async-await gives you an elegant way of handling asynchronous functions, and therefore, it is one of the most useful updates of ES2017.

Learn ES6 in Practice
Sign up below to access an ES6 course with many exercises and reference solutions.