
DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, you’ll learn about Visual Studio Code from Bob Davis, Principal Product Manager in Red Hat’s Developer Tools Group.

Did you know that Visual Studio Code is free and open source? We have already seen more than 17 million downloads of the Java extensions for this ultra-lightweight IDE, making Visual Studio Code one of the fastest-growing software development tools in recent months.

In this session, we are going to show you how to get started and how to get rocking with Visual Studio Code from installation to extensions, CLI integration, maven dependencies, debugging, and much more.

Watch the entire talk:

17-million downloads of Visual Studio Code Java extension | DevNation Live - YouTube

Learn more

Join us at an upcoming developer event, and see our collection of past DevNation Live tech talks.

The post DevNation Live: 17-million downloads of Visual Studio Code Java extension appeared first on Red Hat Developer Blog.


JBoss Tools 4.12.0 and Red Hat CodeReady Studio 12.12 for Eclipse 2019-06 are here and are waiting for you. In this article, I’ll cover the highlights of the new releases and show how to get started.


Red Hat CodeReady Studio (previously known as Red Hat Developer Studio) comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady Studio product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio requires a bit more setup.

This release requires at least Eclipse 4.12 (2019-06), but we recommend the Eclipse 2019-06 JEE bundle, which comes with most of the dependencies pre-installed.

Once you have installed Eclipse, you can find us on the Eclipse Marketplace under either “JBoss Tools” or “Red Hat CodeReady Studio.”

For JBoss Tools, you can also use our update site directly:

What’s new?

Our main focus for this release was on improvements for container-based development and on bug fixing. Eclipse 2019-06 itself has a lot of cool new features, but I’ll highlight just a few updates in both Eclipse 2019-06 and the JBoss Tools plugins that I think are worth mentioning.

Red Hat OpenShift

Red Hat OpenShift Container Platform 4 support

The new OpenShift Container Platform (OCP) 4 is now available (see this article). Although it is a major shift compared to OCP 3, JBoss Tools is compatible with this major release in a transparent way: just define the connection to your OCP 4-based cluster as you did for an OCP 3 cluster and use the tooling!

Server tools

WildFly 17 server adapter

A server adapter has been added to work with WildFly 17, which adds support for Java EE 8.

Hibernate Tools

New runtime provider

The new Hibernate 5.4 runtime provider has been added. It incorporates Hibernate Core version 5.4.3.Final and Hibernate Tools version 5.4.3.Final.

Runtime provider updates

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.10.Final and Hibernate Tools version 5.3.10.Final.

Maven

Maven support updated to M2E 1.12

The Maven support is now based on Eclipse M2E 1.12.

Platform

Views, dialogs, and toolbar

Import project by passing it as a command-line argument

You can import a project into Eclipse by passing its path as a parameter to the launcher. The command would look like eclipse /path/to/project on Linux and Windows, or open Eclipse.app -a /path/to/project on macOS.

Launch Run and Debug configurations from Quick Access

From the Quick Access proposals (accessible with Ctrl+3 shortcut), you can now directly launch any of the Run or Debug configurations available in your workspace.

Note: For performance reasons, the extra Quick Access entries are only visible if the org.eclipse.debug.ui bundle was already activated by some previous action in the workbench such as editing a launch configuration, or expanding the Run As…​ menus.

The icon used for the view menu has been improved. It is now crisp on high-resolution displays and also looks much better in the dark theme. Compare the old version at the top and the new version at the bottom:

High-resolution images drawn on Mac

On Mac, images and text are now drawn in high resolution during GC operations. You can see crisp images on high-res displays in the editor rulers, forms, etc. in Eclipse. Compare the old version at the top and the new version at the bottom:

Table/Tree background lines shown in dark theme on Mac

In the dark theme on Mac, Tables and Trees in Eclipse now show alternating dark lines in the background when setLinesVisible(true) is set. Earlier, they had a plain gray background even if line visibility was set to true.

Example of a Tree and Table in Eclipse with alternating dark lines in the background:


When the Equinox OSGi Framework is launched, the installed bundles are activated according to their configured start-level. The bundles with lower start-levels are activated first. Bundles within the same start-level are activated sequentially from a single thread.

A new configuration option equinox.start.level.thread.count has been added that enables the framework to start bundles within the same start-level in parallel. The default value is 1, which keeps the previous behavior of activating bundles from a single thread. Setting the value to 0 enables parallel activation using a thread count equal to Runtime.getRuntime().availableProcessors(). Setting the value to a number greater than 1 will use the specified number as the thread count for parallel bundle activation.

The default is 1 because of the risk of possible deadlock when activating bundles in parallel. Extensive testing must be done on the set of bundles installed in the framework before enabling this option in a product.
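For example, opting in to parallel activation amounts to a single framework property (a config.ini sketch; whether you set it there or as a -D system property depends on how you launch Equinox, and it should be tested thoroughly before shipping, as noted above):

```properties
# Equinox config.ini sketch: activate bundles of the same start-level
# in parallel, using one thread per available processor (value 0).
# Use a value > 1 to pin the thread count explicitly.
equinox.start.level.thread.count=0
```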

Java Development Tools (JDT)

Java 12 support

Change project compliance and JRE to 12

A quick fix, Change project compliance and JRE to 12, is provided to make the current project compatible with Java 12.

Enable preview features

Preview features in Java 12 can be enabled using Preferences > Java > Compiler > Enable preview features option. The problem severity of these preview features can be configured using the Preview features with severity level option.

Set Enable preview features

A quick fix Configure problem severity is provided to update the problem severity of preview features in Java 12.

Add default case to switch statement

A quick fix Add ‘default’ case is provided to add a default case to an enhanced switch statement in Java 12.

Add missing case statements to switch statement

A quick fix Add missing case statements is provided for an enhanced switch statement in Java 12.

Add default case to switch expression

A quick fix Add ‘default’ case is provided to add a default case to a switch expression.

Add missing case statements to switch expression

A quick fix Add missing case statements is provided for switch expressions.

Format whitespaces in ‘switch’

As Java 12 introduced some new features into the switch construct, the formatter profile has some new settings for it. The settings allow you to control spaces around the arrow operator (separately for case and default) and around commas in a multi-value case.

The settings can be found in the Profile Editor (Preferences > Java > Code Style > Formatter > Edit…​) under the White space > Control statements > ‘switch’ subsection.

Split switch case labels

As Java 12 introduced the ability to group multiple switch case labels into a single case expression, a quick assist is provided that allows these grouped labels to be split into separate case statements.
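As an illustrative sketch (not taken from the article) of what this quick assist operates on, a grouped multi-value case label and its split form behave identically. This uses switch expressions, which were a preview feature in Java 12 and became standard in Java 14:

```java
public class SwitchLabels {
    // Grouped form: one multi-value case label handles several values.
    static String grouped(int day) {
        return switch (day) {
            case 6, 7 -> "weekend";
            default -> "weekday";
        };
    }

    // The same logic after "Split switch case labels":
    // each value gets its own case arm.
    static String split(int day) {
        return switch (day) {
            case 6 -> "weekend";
            case 7 -> "weekend";
            default -> "weekday";
        };
    }

    public static void main(String[] args) {
        // prints: weekend weekend weekday
        System.out.println(grouped(6) + " " + split(7) + " " + grouped(3));
    }
}
```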

Java Editor

In the Java > Editor > Code Mining preferences, you can now enable the Show parameter names option. This will show the parameter names as code minings in method or constructor calls, for cases where the resolution may not be obvious for a human reader.

For example, the code mining will be shown if the argument name in the method call is not an exact match of the parameter name or if the argument name doesn’t contain the parameter name as a substring.

Show number of implementations of methods as code minings

In the Java > Editor > Code Mining preferences, selecting Show implementations with the Show References (including implementations) for → Methods option now shows implementations of methods.

Clicking on method implementations brings up the Search view that shows all implementations of the method in sub-types.

Open single implementation/reference in editor from code mining

When the Java > Editor > Code Mining preferences are enabled and a single implementation or reference is shown, moving the cursor over the annotation and using Ctrl+Click will open the editor and display the single implementation or reference.

Additional quick fixes for service provider constructors

Appropriate quick fixes are offered when a service defined in a module-info.java file has a service provider implementation whose no-arg constructor is not visible or is non-existent.

Template to create Switch Labeled Statement and Switch Expressions

The Java Editor now offers new templates for the creation of switch labeled statements and switch expressions. On a switch statement, three new templates are available as shown below: switch labeled statement, switch case expression, and switch labeled expression. These new templates are available on Java projects with a compliance level of Java 12 or above.

If switch is being used as an expression, then only switch case expression and switch labeled expression templates are available as shown below:

Java views and dialogs Enable comment generation in modules and packages

An option is now available to enable/disable the comment generation while creating module-info.java or package-info.java.

Improved “create getter and setter” quick assist

The quick assist for creating getter and setter methods from fields no longer forces you to create both.

Quick fix to open all required closed projects

A quick fix to open all required closed projects is now available in the Problems view.

New UI for configuring Module Dependencies

The Java Build Path configuration now has a new tab, Module Dependencies, which will gradually replace the options previously hidden behind the Is Modular node on other tabs of this dialog. The new tab provides an intuitive way for configuring all those module-related options, for which Java 9 had introduced new command-line options, such as --limit-modules, etc.

The dialog focuses on how to build one Java Project, here org.greetings.

Below this focus module, the left-hand pane shows all modules that participate in the build, where decorations A and S mark automatic modules and system modules, respectively. The extent of system modules (from JRE) can be modified with the Add System Module… and Remove buttons (corresponds to --add-modules and --limit-modules).

When a module is selected in..


API-first design is a commonly used approach where you define the interfaces for your application before providing an actual implementation. This approach gives you a lot of benefits. For example, you can test whether your API has the right structure before investing a lot of time implementing it, and you can share your ideas with other teams early to get valuable feedback. Later in the process, delays in the back-end development will not affect front-end developers dependent on your service so much, because it’s easy to create mock implementations of a service from the API definition.

Much has been written about the benefits of API-first design, so this article will instead focus on how to efficiently take an OpenAPI definition and bring it into code with Red Hat Fuse.

Imagine an API has been designed for exposing a beer catalog. As you can see in the JSON file describing the API, it’s an OpenAPI definition, and each operation is identified by an operationId. That will prove handy when doing the actual implementation. The API is pretty simple and consists of three operations:

  • GetBeer—Get a beer by name.
  • FindBeersByStatus—Find a beer by its status.
  • ListBeers—Get all beers in the database.
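To show where the operationId lives, here is a hypothetical fragment of such a definition (illustrative only; the real beer-catalog-API.json linked from the article will differ in detail):

```json
{
  "swagger": "2.0",
  "info": { "title": "Beer Catalog API", "version": "1.0" },
  "basePath": "/rest",
  "paths": {
    "/beer/{name}": {
      "get": {
        "operationId": "GetBeer",
        "parameters": [
          { "name": "name", "in": "path", "required": true, "type": "string" }
        ],
        "responses": { "200": { "description": "The beer with the given name" } }
      }
    }
  }
}
```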
Keep generated code separate from the implementation

We don’t want to hand-code all the DTOs and boilerplate, because that’s trivial yet very time-consuming. Therefore, we’ll use the Camel REST DSL Swagger Maven Plugin to generate all of that.

We want to keep the code generated by the swagger plugin separate from our implementation for several reasons, including:

  • Code generation consumes time and resources. Separating code generation from compiling allows us to spend less time waiting and thus more time drinking coffee with colleagues and being creative in all sorts of ways.
  • We don’t have to worry that a developer will accidentally put some implementation stuff in an autogenerated class and thus lose valuable work the next time the stub is regenerated. Of course, we have everything under version control, but it’s still time-consuming to resolve what was done, moving code, etc.
  • Other projects can refer to the generated artifacts independently of the implementation.

To keep the generated stub separate from the implementation, we have the following initial structure:

|-- README.md
|-- fuse-impl
|   |-- pom.xml
|   `-- src
|       |-- main
|       |   |-- java
|       |   `-- resources
|       `-- test
|           |-- java
|           `-- resources
`-- stub
    |-- pom.xml
    `-- src
        `-- spec

The folder stub contains the project for the generated artifacts. The folder fuse-impl contains our implementation of the actual service.

Setting up code generation with Swagger

First, configure the Swagger plugin by adding the following in the pom.xml file for the stub project:

<plugin>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-restdsl-swagger-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>generate-xml-with-dto</goal><!-- 1 -->
      </goals>
    </execution>
  </executions>
  <configuration>
    <specificationUri>src/spec/beer-catalog-API.json</specificationUri><!-- 2 -->
    <fileName>camel-rest.xml</fileName><!-- 3 -->
    <outputDirectory>${project.build.directory}/generated-sources/src/main/resources/camel-rest</outputDirectory><!-- 4 -->
    <modelPackage>com.example.beer.dto</modelPackage><!-- 5 -->
  </configuration>
</plugin>

The plugin is pretty easy to configure:

  1. The goal is set to generate-xml-with-dto, which means that a REST DSL XML file is generated from the definition, together with the Data Transfer Objects (DTOs). There are other options, including one to generate a Java client for the interface.
  2. specificationUri points to the location of the API definition.
  3. fileName is the name of the REST DSL XML file to generate.
  4. outputDirectory is where the generated REST DSL XML file goes. Camel will automatically pick the file up from this location when the stub is included in a project.
  5. modelPackage is the package name for the DTOs.

In pom.xml, we also need to change the location of the source and resource files for the compiler. Finally, we need to copy the API specification to the location we chose previously. This isn’t described here because it’s known stuff, but you can refer to the source code for the specifics as needed. Now, we’re ready to generate the stub for the REST service.
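The exact pom.xml changes are not spelled out in the article; one plausible sketch (assuming the generated sources land under target/generated-sources, as the generated file path shown later suggests) is:

```xml
<build>
  <!-- Compile the generated DTOs and package the generated REST DSL XML. -->
  <sourceDirectory>${project.build.directory}/generated-sources/src/main/java</sourceDirectory>
  <resources>
    <resource>
      <directory>${project.build.directory}/generated-sources/src/main/resources</directory>
    </resource>
  </resources>
</build>
```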

So far, we have the following file structure in the stub project:

`-- stub
    |-- pom.xml
    `-- src
        `-- spec
            `-- beer-catalog-API.json

Run mvn install in the stub dir and the stub is automatically generated, compiled, put in a jar file, and put into the local Maven repository. The DTOs are generated in the package we chose previously. Furthermore, an XML file is created for the REST endpoint.

Contents of file stub/target/generated-sources/src/main/resources/camel-rest/camel-rest.xml:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rests xmlns="http://camel.apache.org/schema/spring">
    <restConfiguration component="servlet"/>
    <rest>
        <get id="GetBeer" uri="/beer/{name}">
            <description>Get beer having name</description>
            <param dataType="string" description="Name of beer to retrieve" name="name" required="true" type="path"/>
            <to uri="direct:GetBeer"/>
        </get>
        <get id="FindBeersByStatus" uri="/beer/findByStatus/{status}">
            <description>Get beers having status</description>
            <param dataType="string" description="Status of beers to retrieve" name="status" required="true" type="path"/>
            <param dataType="number" description="Number of page to retrieve" name="page" required="false" type="query"/>
            <to uri="direct:FindBeersByStatus"/>
        </get>
        <get id="ListBeers" uri="/beer">
            <description>List beers within catalog</description>
            <param dataType="number" description="Number of page to retrieve" name="page" required="false" type="query"/>
            <to uri="direct:ListBeers"/>
        </get>
    </rest>
</rests>

The important thing to note is that each REST operation routes to a URI named direct:operationId, where operationId is the same operation ID as in the API definition file. This enables us to easily provide an implementation for each operation.

Providing an implementation of the API

For the example implementation, we chose Fuse running in a Spring Boot container to make it easily deployable in Red Hat OpenShift.

Besides the usual boilerplate code, the only thing we have to do is add a dependency to the project containing the stub in our pom.xml file of the fuse-impl project:
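The dependency would look something like this (the coordinates are illustrative, derived from the example's package naming, not necessarily the real ones):

```xml
<!-- Use the stub project's actual groupId/artifactId/version. -->
<dependency>
  <groupId>com.example.beer</groupId>
  <artifactId>stub</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
```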


Now we’re all set, and we can provide our implementation of the three operations. As an example of an implementation, consider the following.

Contents of fuse-impl/src/main/java/com/example/beer/routes/GetBeerByNameRoute.java:

package com.example.beer.routes;

import org.apache.camel.BeanInject;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

import com.example.beer.dto.Beer;
import com.example.beer.service.BeerService;

@Component
public class GetBeerByNameRoute extends RouteBuilder {
  @BeanInject
  private BeerService mBeerService;

  @Override
  public void configure() throws Exception {
    from("direct:GetBeer").process(new Processor() {

      @Override
      public void process(Exchange exchange) throws Exception {
        String name = exchange.getIn().getHeader("name", String.class);
        if (name == null) {
          throw new IllegalArgumentException("must provide a name");
        }
        // Look up the beer; fall back to an empty Beer if none is found.
        Beer b = mBeerService.getBeerByName(name);
        exchange.getIn().setBody(b == null ? new Beer() : b);
      }
    });
  }
}

Here we inject a BeerService, which holds information about the different beers. Then we define a direct endpoint, to which the REST call is routed (remember the operationId mentioned earlier?). The processor tries to look up the beer; if no beer is found, an empty beer object is returned. To try the example, you can run:

cd fuse-impl
mvn package
java -jar target/beer-svc-impl-1.0-SNAPSHOT.jar
#in a separate terminal
curl http://localhost:8080/rest/beer/Carlsberg

We might have to do this over and over again. In that case, we can create a Maven archetype for the two projects. Alternatively, we can clone a template project containing all the boilerplate code and make the necessary changes from there. That will be a bit more work, though, because we’ll have to rename Maven modules as well as Java classes, but it’s not too much of a hassle.


With an API-first approach, you can design and test your API before doing the actual implementation. You can get early feedback on your API from the people using it, without having to provide an actual implementation. In this way, you can save time and money.

Going from design to actual implementation is easy with Red Hat Fuse. Just use the Camel REST DSL Swagger Maven Plugin to generate a stub and you are set for providing the actual implementation. If you want to try it for yourself, use the example code as a starting point.

The post API-first design with OpenAPI and Red Hat Fuse appeared first on Red Hat Developer Blog.


In this series, I’ve been covering new developments of Shenandoah GC coming up in JDK 13. In part 1, I looked at the switch to load reference barriers, and, in part 2, I looked at plans for eliminating an extra word per object. In this article, I’ll look at a new architecture and a new operating system that Shenandoah GC will be working with.


BellSoft recently contributed a change that allowed Shenandoah to build and run on Solaris. Shenandoah itself has no operating system-specific code in it; therefore, it’s relatively easy to port to new operating systems. In this case, it mostly amounts to a batch of fixes to make the Solaris compiler happy, like removing a trailing comma in enums.

One notable gotcha we encountered was with Solaris 10. Contrary to what later versions of Solaris do—and what basically all other relevant operating systems do—Solaris 10 maps user memory to upper address ranges (e.g., to addresses starting with 0xff… instead of 0x7f). Other operating systems reserve the upper half of the address space to kernel memory.

This approach conflicted with an optimization of Shenandoah’s task queues, which would encode pointers assuming some spare space in the upper address range. It was easy enough to disable via a build-time flag, and Aleksey Shipilev did that. The fix is totally internal to Shenandoah GC and does not affect the representation of Java references in the heap. With this change, Shenandoah can be built and run on Solaris 10 and newer (and possibly older, but we haven’t tried). This is not only interesting for folks who want Shenandoah to run on Solaris, but also for us, because it requires the extra bit of cleanliness needed to make non-mainline toolchains happy.

The changes for Solaris support are already in JDK 13 development repositories and are already backported to Shenandoah’s JDK 11 and JDK 8 backports repositories.


Shenandoah used to support x86_32 in “passive” mode a long time ago. This mode relies only on stop-the-world GC to avoid implementing barriers (basically, it runs Degenerated GC all the time). It was an interesting mode to see the footprint numbers that you can get with uncommits and slimmer native pointers with really small microservice-size VMs. This mode was dropped before integration upstream, because many Shenandoah tests expect all heuristics/modes to work properly, and having the rudimentary x86_32 support was breaking tier1 tests. So, we disabled it.

Today, we have a significantly simplified runtime interface thanks to load reference barriers and the elimination of the separate forwarding pointer slot, and we can build the fully concurrent x86_32 support on top of that. This approach allows us to maintain 32-bit cleanness in the Shenandoah code (we have fixed more than five bugs ahead of this change!), and it serves as proof of concept that Shenandoah can be implemented on 32-bit platforms. It is interesting in scenarios where the extra footprint savings are important, such as in containers or embedded systems. The combination of LRB + no more forwarding pointer + 32-bit support gives us the current lowest bounds for the footprint that is possible with Shenandoah.

The changes for x86_32-bit support are done and ready to be integrated into JDK 13. However, they are currently waiting for the elimination of forwarding pointer change, which in turn is waiting for a nasty C2 bug fix. The plan is to later backport it to Shenandoah JDK 11 and JDK 8 backports, after the load reference barriers and elimination of forwarding pointer changes have been backported.

Other architectures and OSes

With those two additions to OS and architecture support, Shenandoah will soon be available (i.e., known to build and run) on four operating systems: Linux, Windows, macOS, and Solaris, plus three architectures: x86_64, arm64, and x86_32. Given Shenandoah’s design with zero OS-specific code, and not overly complex architecture-specific code, we may be looking at more operating systems or architectures joining the flock in future releases (if anybody finds it interesting enough to implement).

As always, if you don’t want to wait for releases, you can already have everything and help sort out problems: check out the Shenandoah GC Wiki.

Read more

Shenandoah GC in JDK 13, Part 1: Load reference barriers

Shenandoah GC in JDK 13, Part 2: Eliminating the forward pointer word

The post Shenandoah GC in JDK 13, Part 3: Architectures and operating systems appeared first on Red Hat Developer Blog.


In this series of articles, I’ll be discussing new developments of Shenandoah GC coming up in JDK 13. In part 1, I looked at the switch of Shenandoah’s barrier model to load reference barriers and what that means.

The change I want to talk about here addresses another frequent—perhaps the most frequent—concern about Shenandoah GC: the need for an extra word per object. Many believe this is a core requirement for Shenandoah, but it is actually not, as you’ll see below.

Let’s first look at the usual layout of an object in the Hotspot JVM:

0: [mark-word ]
8: [class-word ]
16: [field 1 ]
24: [field 2 ]
32: [field 3 ]

Each section here marks a heap word. That would be 64 bits on 64-bit architectures and 32 bits on 32-bit architectures.

The first word is the so-called mark word, or header of the object. It is used for a variety of purposes. For example, it can keep the hash-code of an object; it has 3 bits that are used for various locking states; some GCs use it to track object age and marking status; and it can be “overlaid” with a pointer to the “displaced” mark, to an “inflated” lock, or, during GC, the forwarding pointer.

The second word is reserved for the klass pointer. This is simply a pointer to the Hotspot-internal data structure that represents the class of the object.

Arrays have an additional word next to that, storing the array length. What follows is the actual payload of the object, that is, fields and array elements.

When running with Shenandoah enabled, the layout would look like this instead:

-8: [fwd pointer]
0: [mark-word ]
8: [class-word ]
16: [field 1 ]
24: [field 2 ]
32: [field 3 ]

The forward pointer is used for Shenandoah’s concurrent evacuation protocol:

  • Normally, it points to itself -> the object is not evacuated yet.
  • When evacuating (by the GC or via a write-barrier), we first copy the object, then install a new forwarding pointer to that copy using an atomic compare-and-swap, possibly yielding a pointer to an offending copy. Only one copy wins.
  • Now, the canonical copy to read-from or write-to can be found simply by reading this forwarding pointer.
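In pseudocode, the CAS step of this protocol might look like the following sketch (names are illustrative, not actual HotSpot code):

```
oop evacuate(oop obj) {
  oop copy = allocate_in_to_space(size_of(obj));
  copy_bytes(obj, copy);
  // Expected value is obj itself ("not yet evacuated").
  // If another thread won the race, cas returns its copy instead,
  // and our copy is simply abandoned.
  oop prev = cas(&obj->fwd_ptr, obj, copy);
  return (prev == obj) ? copy : prev;
}
```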

The advantage of this protocol is that it’s simple and cheap. The cheap aspect is important here, because, remember, Shenandoah needs to resolve the forwardee for every single read or write, even primitive ones. And, using this protocol, the read-barrier for this would be a single instruction:

mov %rax, (%rax, -8)

That’s about as simple as it gets.

The disadvantage is obviously that it requires more memory. In the worst case, for objects without any payload, that’s one more word for an otherwise two-word object. That’s 50% more. With more realistic object size distributions, you’d still end up with 5%-10% more overhead, YMMV. This also results in reduced performance: allocating the same number of objects would hit the ceiling faster than without that overhead—prompting GCs more often—and thus reduce throughput.

If you’ve read carefully so far, you will have noticed that the mark word is also used/overlaid by some GCs to carry the forwarding pointer. So, why not do the same in Shenandoah? The answer is (or used to be) that reading the forwarding pointer requires a little more work. We need to somehow distinguish a true mark word from a forwarding pointer. That is done by setting the lowest two bits in the mark word. Those are usually used as locking bits, but the combination 0b11 is not a legal combination of lock bits. In other words, when they are set, the mark word, with the lowest bits masked to 0, is to be interpreted as the forwarding pointer. This decoding of the mark word is significantly more complex than the simple read of the forwarding pointer shown above. I did in fact build a prototype a while ago, and the additional cost of the read-barriers was prohibitive and did not justify the savings.
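To make the encoding concrete, here is a small standalone Java simulation (illustrative only, not JVM code) of tagging a 64-bit mark word as a forwarding pointer with the 0b11 bits and decoding it again; it assumes addresses are at least 8-byte aligned, so the low bits are free:

```java
public class MarkWordDemo {
    // Lowest two bits set to 0b11 mean "this mark word is a forwarding pointer".
    static final long FWD_BITS = 0b11;

    static long encodeForwarding(long toSpaceAddress) {
        // Alignment guarantees the low bits of a real address are zero.
        return toSpaceAddress | FWD_BITS;
    }

    static boolean isForwarded(long markWord) {
        return (markWord & FWD_BITS) == FWD_BITS;
    }

    static long decodeForwarding(long markWord) {
        // Mask the tag bits back to 0 to recover the to-space address.
        return markWord & ~FWD_BITS;
    }

    public static void main(String[] args) {
        long addr = 0x7f00_0000_1000L; // hypothetical aligned to-space address
        long mark = encodeForwarding(addr);
        System.out.println(isForwarded(mark) && decodeForwarding(mark) == addr);
    }
}
```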

All of this changed with the recent arrival of load reference barriers:

  • We no longer require read-barriers, especially not on (very frequent) primitive reads.
  • The load-reference-barriers are conditional, which means their slow-path (actual resolution) is only activated when 1. GC is active and 2. the object in question is in the collection set. This is fairly infrequent. Compare that to the previous read-barriers which would be always-on.
  • We no longer allow any access to from-space copies. The strong invariant guarantees that we only ever read from and write to to-space copies.

Two consequences follow. First, the from-space copy is not actually used for anything, so we can use that space for the forwarding pointer instead of reserving an extra word for it. We can basically nuke the whole contents of the from-space copy and put the forwarding pointer anywhere. We only need to be able to distinguish between “not forwarded” (and we don’t care about other contents) and “forwarded” (the rest is the forwarding pointer).

It also means that the actual mid- and slow-paths of the load reference barriers are not all that hot, and we can easily afford to do a little bit of decoding there. It amounts to something like (in pseudocode):

oop decode_forwarding(oop obj) {
  mark m = obj->load_mark();
  if ((m & 0b11) == 0b11) {
    return (oop) (m & ~0b11);
  } else {
    return obj;
  }
}
While this looks noticeably more complicated than the simple load of the forwarding pointer, it is still basically a free lunch because it’s only ever executed in the not-very-hot mid-path of the load reference barrier. With this, the new object layout would be:

0: [mark word (or fwd pointer)]
8: [class word]
16: [field 1]
24: [field 2]
32: [field 3]

This approach has several advantages:

  • Obviously, it reduces Shenandoah’s memory footprint by doing away with the extra word.
  • Not quite as obviously, it results in increased throughput: We can now allocate more objects before hitting the GC trigger, resulting in fewer cycles spent in actual GC.
  • Objects are packed more tightly, which results in improved CPU cache pressure.
  • Again, the required GC interfaces are simpler: Where we needed special implementations of the allocation paths (to reserve and initialize the extra word), we can now use the same allocation code as any other GC.

To give you an idea of the throughput improvements, note that all the GC sensitive benchmarks that I have tried showed gains between 10% and 15%. Others benefited less or not at all, but that is not surprising for benchmarks that don’t do any GC at all.

It is, however, important to note that the extra decoding cost does not actually show up anywhere; it is basically negligible. It probably would show up on heavily evacuating workloads, but most applications don’t evacuate that much, and most of the work is done by GC threads anyway, making mid-path decoding cheap enough.

The implementation of this has recently been pushed to the Shenandoah/JDK repository. We are currently shaking out one last known bug, and then it will be ready to go upstream into JDK 13 repository. The plan is to eventually backport it to Shenandoah’s JDK 11 and JDK 8 backports repositories, and from there into RPMs. If you don’t want to wait, you can already have it: check out the Shenandoah GC Wiki.

Read more

Shenandoah GC in JDK 13, Part 1: Load reference barriers

The post Shenandoah GC in JDK 13, Part 2: Eliminating the forward pointer word appeared first on Red Hat Developer Blog.


In this series of articles, I will introduce some new developments of the Shenandoah GC coming up in JDK 13. Perhaps the most significant, although not directly user-visible, change is the switch of Shenandoah’s barrier model to load reference barriers. This change resolves one major point of criticism against Shenandoah—the expensive primitive read-barriers. Here, I’ll explain more about what this change means.

Shenandoah (as well as other collectors) employs barriers to ensure heap consistency. More specifically, Shenandoah GC employs barriers to ensure what we call “to-space-invariant.” This means when Shenandoah is collecting, it is copying objects from so-called “from-space” to “to-space,” and it does so while Java threads are running (concurrently).

Thus, there may be two copies of any object floating around in the JVM. To maintain heap consistency, we need to ensure either that:

  • writes happen into to-space copy + reads can happen from both copies, subject to memory model constraints = weak to-space invariant, or that
  • writes and reads always happen into/from the to-space copy = strong to-space invariant.

The way we ensure this is by employing the corresponding type of barriers whenever reads and writes happen. Consider this pseudocode:

void example(Foo foo) {
  Bar b1 = foo.bar;             // Read
  while (..) {
    Baz baz = b1.baz;           // Read
    b1.x = makeSomeValue(baz);  // Write
  }
}

Employing the Shenandoah barriers, it would look like this (what the JVM+GC would do under the hood):

void example(Foo foo) {
  Bar b1 = readBarrier(foo).bar;             // Read
  while (..) {
    Baz baz = readBarrier(b1).baz;           // Read
    X value = makeSomeValue(baz);
    writeBarrier(b1).x = readBarrier(value); // Write
  }
}

In other words, wherever we read from an object, we first resolve the object via a read-barrier, and wherever we write to an object, we possibly copy the object to to-space. I won’t go into the details here; let’s just say that both operations are somewhat costly.

Notice also that we need a read-barrier on the value of the write here to ensure that we only ever write to-space-references into fields while heap references get updated (another nuisance of Shenandoah’s old barrier model).

Because those barriers are a costly affair, we worked quite hard to optimize them. An important optimization is to hoist barriers out of loops. In this example, we see that b1 is defined outside the loop but only used inside the loop. We can just as well do the barriers outside the loop, once, instead of many times inside the loop:

void example(Foo foo) {
  Bar b1 = readBarrier(foo).bar;  // Read
  Bar b1' = readBarrier(b1);
  Bar b1'' = writeBarrier(b1);
  while (..) {
    Baz baz = b1'.baz;            // Read
    X value = makeSomeValue(baz);
    b1''.x = readBarrier(value);  // Write
  }
}

And, because write-barriers are stronger than read-barriers, we can fold the two up:

void example(Foo foo) {
  Bar b1 = readBarrier(foo).bar; // Read
  Bar b1' = writeBarrier(b1);
  while (..) {
    Baz baz = b1'.baz;           // Read
    X value = makeSomeValue(baz);
    b1'.x = readBarrier(value);  // Write
  }
}

This is all nice and works fairly well, but it is also troublesome, in that the optimization passes for this are very complex. The fact that both from-space and to-space copies of any object can float around the JVM at any time is a major source of headaches and complexity. For example, we need extra barriers for comparing objects, in case we compare an object to a different copy of itself. Read-barriers and write-barriers need to be inserted for *any* read or write, including primitive reads and writes, which are very frequent.
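To see why comparing objects needed an extra barrier, here is a dependency-free Java simulation; the IdentityHashMap is merely a stand-in for the GC's forwarding information, not anything Shenandoah actually uses:

```java
import java.util.IdentityHashMap;
import java.util.Map;

public class AcmpDemo {
    // Stand-in for the GC's forwarding information: maps an evacuated
    // from-space object to its to-space copy.
    static final Map<Object, Object> forwarded = new IdentityHashMap<>();

    static Object resolve(Object o) {
        return forwarded.getOrDefault(o, o);  // to-space copy if evacuated
    }

    // Conceptual object-equality barrier: a raw address compare can report
    // "different" for two copies of the same logical object, so the slow
    // path compares the canonical (resolved) copies.
    static boolean acmpBarrier(Object a, Object b) {
        if (a == b) return true;              // fast path: same address
        return resolve(a) == resolve(b);      // slow path: canonical copies
    }

    public static void main(String[] args) {
        Object fromSpace = new Object();
        Object toSpace = new Object();
        forwarded.put(fromSpace, toSpace);    // fromSpace was evacuated
        System.out.println(acmpBarrier(fromSpace, toSpace));  // same logical object
    }
}
```

With load reference barriers, only to-space references ever reach the comparison, so the fast path alone suffices.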

So, why not optimize this and strongly ensure to-space-invariance right when an object is loaded from memory? That is where load reference barriers come in. They work mostly like our previous write-barriers, but they are not employed at use-sites (when reading from or storing to the object). Instead, they are employed much earlier, when objects are loaded (at their definition-sites):

void example(Foo foo) {
  Bar b1' = loadReferenceBarrier(foo.bar);
  while (..) {
    Baz baz = loadReferenceBarrier(b1'.baz); // Read
    X value = makeSomeValue(baz);
    b1'.x = value;                           // Write
  }
}

You can see that the code is basically the same as before (after our optimizations), except that we didn’t need to optimize anything yet. Also, the read-barrier for the store value is gone, because we now know (because of the strong to-space-invariant) that whatever makeSomeValue() did, it must already have employed the load-reference-barrier if needed. The new load-reference-barrier is almost identical to our previous write-barrier.

The advantages of this barrier model are many (for us GC developers):

  • Strong invariant means it’s a lot easier to reason about the state of GC and objects.
  • Much simpler barrier interface. In fact, a lot of the stuff that we added to GC barrier interfaces after JDK 11 will now become unused: no need for barriers on primitives, no need for object equality barriers, etc.
  • Optimization is much easier (see above). Barriers are naturally placed at the least-hot locations (their def-sites) instead of the most-hot locations (their use-sites), where we previously inserted them and then tried, not always successfully, to optimize them away.
  • No more need for object equals barriers.
  • No more need for “resolve” barriers (a somewhat exotic kind of barriers used mostly in intrinsics and places that do read-like or write-like operations).
  • All barriers are now conditional, which opens up opportunities for further optimization later.
  • We can re-enable a bunch of optimizations, like fast JNI getters that needed to be disabled before because they did not play well with possible from-space references.

For users, this change is mostly invisible, but the bottom line is that it improves Shenandoah’s overall performance. It also opens the way for additional improvements, such as elimination of the forwarding pointer, which I’ll get to in a follow-up article.

Load reference barriers were integrated into JDK 13 development repository in April 2019. We will start backporting it to Shenandoah’s JDK 11 and JDK 8 backports soon. If you don’t want to wait, you can already have it: check out the Shenandoah GC Wiki for details.

The post Shenandoah GC in JDK 13, Part 1: Load reference barriers appeared first on Red Hat Developer Blog.


Building responsive applications is a never-ending task. With the rise of powerful multicore CPUs, more raw power is available for applications to consume. In Java, threads are used to make the application work on multiple tasks concurrently. A developer starts a Java thread in the program, and tasks are assigned to this thread to get processed. Threads can do a variety of tasks, such as read from a file, write to a database, take input from a user, and so on.

In this article, we’ll explain more about threads and introduce Project Loom, which supports high-throughput and lightweight concurrency in Java to help simplify writing scalable software.

Use threads for better scalability

Java makes it so easy to create new threads that, almost all the time, the program ends up creating more threads than the CPU can schedule in parallel. Let’s say that we have a two-lane road (two cores of a CPU), and 10 cars want to use the road at the same time. Naturally, this is not possible, but think about how this situation is currently handled. Traffic lights are one way. Traffic lights allow a controlled number of cars onto the road and make the traffic use the road in an orderly fashion.

In computers, this is a scheduler. The scheduler allocates the thread to a CPU core to get it executed. In the modern software world, the operating system fulfills this role of scheduling tasks (or threads) to the CPU.

In Java, each thread is mapped to an operating system thread by the JVM (almost all JVMs do that). With threads outnumbering the CPU cores, a chunk of CPU time is spent scheduling the threads onto the cores. If a thread goes into a wait state (e.g., waiting for a database call to respond), the thread is marked as paused and another thread is scheduled onto the CPU core. This is called context switching (although a lot more is involved in doing so). Further, each thread has some memory allocated to it, and the operating system can only handle a limited number of threads.

Consider an application in which all the threads are waiting for a database to respond. Although the application computer is waiting for the database, many resources are being used on the application computer. With the rise of web-scale applications, this threading model can become the major bottleneck for the application.
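The thread-per-task model described above can be sketched in a few lines; here, Thread.sleep() stands in for the blocking database or HTTP call:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerTask {
    // Launch n threads that each block briefly, then wait for all of them.
    static int runTasks(int n) throws InterruptedException {
        Thread[] threads = new Thread[n];
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(() -> {
                try {
                    Thread.sleep(100);   // stands in for a blocking call (DB, HTTP, ...)
                } catch (InterruptedException ignored) { }
                done.incrementAndGet();
            });
            threads[i].start();          // each start() consumes an OS thread
        }
        for (Thread t : threads) t.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 concurrent tasks cost 100 OS threads, mostly sitting in wait state.
        System.out.println(runTasks(100) + " tasks completed");
    }
}
```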

Reactive programming

One solution is making use of reactive programming. Briefly, instead of creating threads for each concurrent task (and blocking tasks), a dedicated thread (called an event loop) processes, on the same CPU core, all the tasks that would each get their own thread in a non-reactive model. So, if a CPU has four cores, there may be multiple event loops, but not exceeding the number of CPU cores. This approach resolves the problem of context switching but introduces lots of complexity in the program itself. This type of program also scales better, which is one reason reactive programming has become very popular in recent times. Vert.x is one such library that helps Java developers write code in a reactive manner.

You can learn more about reactive programming here and in this free e-book by Clement Escoffier.

Scalability with minimal complexity

So, the thread per task model is easy to implement but not scalable. Reactive programming is more scalable but the implementation is a bit more involved. A simple graph representing program complexity vs. program scalability would look like this:

Program complexity vs. scalability.

What we need is the sweet spot shown in the diagram above (the green dot), where we get web scale with minimal complexity in the application. Enter Project Loom. But first, let’s see how the current one-thread-per-task model works.

How the current thread per task model works

Let’s see it in action. First, let’s write a simple program, an echo server, which accepts a connection and allocates a new thread to every new connection. Let’s assume this thread is calling an external service, which sends the response after a few seconds. This mimics the wait state of the thread. So, a simple echo server would look like the example below. The full source code is available here.

//start listening on a socket
ServerSocket server = new ServerSocket(5566);

while (true) {
    Socket client = server.accept();
    //create a new thread for each new connection
    EchoHandler handler = new EchoHandler(client);
    handler.start();
}

//extends Thread
class EchoHandler extends Thread {
    private final Socket client;

    EchoHandler(Socket client) {
        this.client = client;
    }

    public void run() {
        try (PrintWriter writer = new PrintWriter(client.getOutputStream(), true)) {
            //make a call to the dummy downstream system; this call returns after a
            //couple of seconds, simulating a wait/block call
            byte[] output = new java.net.URL("http://localhost:9090").openStream().readNBytes(5);
            //write something to the connection output stream
            writer.println("[echo] " + new String(output));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

When I run this program and hit the program with, say, 100 calls, the JVM thread graph shows a spike as seen below (output from jconsole). The command I executed to generate the calls is very primitive, and it adds 100 JVM threads.

for i in {1..100}; do curl localhost:5566 & done

Project Loom

Instead of allocating one OS thread per Java thread (current JVM model), Project Loom provides additional schedulers that schedule the multiple lightweight threads on the same OS thread. This approach provides better usage (OS threads are always working and not waiting) and much less context switching.

The wiki says Project Loom supports “easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform.”

The core of Project Loom involves Continuations and Fibers. The following definitions are from the excellent presentation by Alan Bateman, available here.


Fibers are lightweight, user-mode threads, scheduled by the Java virtual machine, not the operating system. Fibers are low footprint and have negligible task-switching overhead. You can have millions of them!


A Continuation (precisely: delimited continuation) is a program object representing a computation that may be suspended and resumed (also, possibly, cloned or even serialized).

In essence, most of us will never use Continuation in application code. Most of us will use Fibers to enhance our code. A simple definition would be:

Fiber = Continuation + Scheduler

Ok, this seems interesting. One of the challenges of any new approach is how compatible it will be with existing code. The Project Loom team has done a great job on this front, and a Fiber can take a Runnable. To be complete, note that Continuation also implements Runnable.

So, our echo server will change as follows. Note that the part that changed is only the thread scheduling part; the logic inside the thread remains the same. The full source code is available here.

ServerSocket server = new ServerSocket(5566);
while (true) {
    Socket client = server.accept();
    EchoHandler handler = new EchoHandler(client);
    //Instead of calling Thread.start() or similar Runnable logic, we can just pass
    //the Runnable to the Fiber scheduler and that's it (Loom prototype API)
    Fiber.schedule(handler);
}

Note: The JVM with Project Loom is available here. We need to build the JVM from the Project Loom branch and use it to run our Java programs. An example is shown below:

java -version
openjdk version "13-internal" 2019-09-17
OpenJDK Runtime Environment (build 13-internal+0-adhoc.faisalmasood.loom)
OpenJDK 64-Bit Server VM (build 13-internal+0-adhoc.faisalmasood.loom, mixed mode)

With this new version, the threads look much better (see below). By default, a Fiber uses the ForkJoinPool scheduler, and, although the graphs are shown at a different scale, you can see that the number of JVM threads is much lower here compared to the one-thread-per-task model. This hits the green dot that we aimed for in the graph shown earlier.


The improvements that Project Loom brings are exciting. We have reduced the number of threads by a factor of five. Project Loom allows us to write highly scalable code with one lightweight thread per task. This simplifies development, as you do not need to use reactive programming to write scalable code. Another benefit is that lots of legacy code can use this optimization without much change in the code base. I would say Project Loom brings capabilities similar to Go's goroutines and allows Java programmers to write internet-scale applications without reactive programming.

The post Project Loom: Lightweight Java threads appeared first on Red Hat Developer Blog.


In 2018, Oracle announced that it would only provide free public updates and auto-updates of Java SE 8 for commercial users until the end of January 2019. Java 8 is a very important platform, used by millions of programmers, so this was a big deal. The Java community needed to fill the gap.

In February of this year, I was appointed as the new Lead of the OpenJDK 8 Update Releases Project. A couple of weeks later, I was appointed the new Lead of the OpenJDK 11 Updates Project. This is an important milestone in the history of OpenJDK and of Java SE because it’s the first time that a non-Oracle employee has led the current long-term OpenJDK release project. JDK 8 is still a much-used Java release in industry, and JDK 11 is the current long-term maintenance release.

It’s now a couple of weeks after the first releases of JDK8u and JDK11u on my watch. I think the process went pretty well, although it was not entirely smooth sailing for the developers. Having said that, we got our releases out on the day, as planned, and so far we’ve seen no major problems.

There had been a considerable amount of talk, some of it verging on panic, about Oracle ceasing to provide free long-term JDK update binaries to commercial users. At the time, I believed those worries were misplaced. Now, with these releases, I think we’ve proved it.

Red Hat’s role

Of course, I’m not doing this on my own. We have a large team of OpenJDK developers within Red Hat and there are many non-Red Hatters working on the releases, too. There are also people doing highly confidential security work that you’ll not see until it’s ready.

It’s important to clarify Red Hat’s role in all of this. We are one of the largest contributors to OpenJDK, we have been for many years, and we will continue to be. However, we have not “taken over” OpenJDK updates projects, and neither would we want to. Our role in OpenJDK, as in many other projects, is to be a catalyst in communities of customers, contributors, and partners. This means that we work with others, some of whom are our competitors, in the best interests of the project. The changes Red Hat makes to OpenJDK updates are based on patches from many sources. We wrote many of them ourselves, of course, but we take them from all of the OpenJDK contributors.

My role in this as Project Lead is to supervise, encourage, and occasionally make decisions about how best to protect these precious jewels, the OpenJDK updates. I have to do so without favoring any vendor. Not only must I be impartial, but I must also be seen by everyone to be so. This way of behaving is in Red Hat’s best interests: a better OpenJDK for everyone encourages more users and more contributors. In the end, the best outcome for Red Hat is the best outcome for everyone.

The post OpenJDK 8 and 11: Still in safe hands appeared first on Red Hat Developer Blog.


I recently had the opportunity to speak at Red Hat Summit 2019. In my session, titled “Vert.x application development with Jaeger distributed tracing,” I discussed how scalable event-driven applications could be built with Eclipse Vert.x, a Java Virtual Machine toolkit for building reactive applications.

Thanks to many developer tools, creating these applications is no longer the most effort-consuming task in IT. Instead, we now have to understand how the parts of our application function together to deliver a service across dev, test, and production environments. This can be difficult because, with distributed architectures, external monitoring only tells you the overall response time and the number of invocations, providing no insight into the individual operations. Additionally, log entries for a request are scattered across numerous logs. This article discusses the use of Eclipse Vert.x, distributed tracing, and Jaeger in the context of this problem.

As defined by the Reactive Manifesto, reactive systems are elastic, resilient, responsive, and based on a message-driven design.

Eclipse Vert.x is an open source toolkit for building reactive systems and streams on the Java Virtual Machine. Vert.x is unopinionated and polyglot, which gives developers the freedom to use the toolkit as they see fit. The core components of Vert.x include its actors, which are called Verticles, a message bus, called an Event Bus, and event dispatchers, known as Eventloops.

Eclipse Vert.x basics

Eclipse Vert.x implements a multi-reactor pattern supported by eventloops. In a reactor pattern, there is a stream of events delegated to handlers by a thread called an eventloop. Because the eventloop observes the stream of events and calls the handlers to handle each event, it is important never to block the eventloop. If a handler does not return control promptly, the eventloop has to wait, and we effectively say the eventloop is blocked.
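As a toy illustration of the reactor pattern (plain Java, not Vert.x code), a single eventloop thread draining a queue of events might look like this:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class ToyReactor {
    // The eventloop: observe the stream of events and dispatch each one to
    // its handler, all on a single thread.
    static List<String> runLoop(Queue<Runnable> events, List<String> log) {
        while (!events.isEmpty()) {
            events.poll().run();   // if this handler blocks, all later events wait
        }
        return log;
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Queue<Runnable> events = new ArrayDeque<>();
        events.add(() -> log.add("handled event 1"));
        events.add(() -> log.add("handled event 2"));
        runLoop(events, log).forEach(System.out::println);
    }
}
```

A single slow handler delays every event queued behind it, which is exactly why blocking the eventloop must be avoided.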

In this pattern, a single eventloop on a multi-core machine has drawbacks, because a single thread cannot run on more than one CPU core at a time. For developers using technologies implementing the reactor pattern, this means having to manage and start up more processes with an eventloop in order to improve performance.

Eclipse Vert.x implements a multi-reactor pattern where, by default, each CPU core has two eventloops. This gives applications using Vert.x the responsiveness needed when the number of events increases.

In the figure above, the handlers are verticles, which are the main actors in Vert.x. Verticles get assigned to a random eventloop at deploy time.

Another important concept is the event bus, which is how verticles can communicate with each other in a publish-subscribe manner. Verticles are registered to the event bus and given an address to listen on. The event bus allows verticles to be scaled, as we only need to specify what address a verticle listens for events on and where it should publish those events to.
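The addressing scheme can be illustrated with a toy, dependency-free model (the real Vert.x event bus is far more capable, with clustering, reply handlers, and delivery options):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ToyEventBus {
    // Address -> handlers registered on that address.
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    // A "verticle" registers a handler on an address to listen for messages.
    void consumer(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // Publish delivers the message to every handler on the address.
    void publish(String address, String message) {
        handlers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        ToyEventBus bus = new ToyEventBus();
        bus.consumer("orders.created", msg -> System.out.println("verticle A got: " + msg));
        bus.consumer("orders.created", msg -> System.out.println("verticle B got: " + msg));
        bus.publish("orders.created", "order-42");
    }
}
```

Because verticles only know addresses, not each other, more consumers can be added on an address to scale out without touching the publisher.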


Vert.x aids development of reactive microservices, but what about application observability? It is important in distributed landscapes that we can still observe requests being handled by the application. Consider an e-commerce application, for example. A single checkout request may be passed to tens or hundreds of services before the application is finished handling that process; whether in development or production environments, developer and support teams need tools to understand and debug issues that may arise within their services.

Tracing can provide the context surrounding the failure. Distributed tracing involves code instrumentation such that:

  • Each request has a unique external request id.
  • The external request id is passed to all services that are involved in handling the request.
  • The external request id is included in log messages.
  • Information (e.g., start time, end time) about the requests and operations performed are recorded when handling an external request in a centralized service.

This code instrumentation is provided by the OpenTracing specification. Using the core concepts of distributed tracing we can use OpenTracing libraries to instrument our applications.
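The four steps above can be sketched without any tracing library at all; the names handleCheckout and callInventory are invented for illustration, and real code would use an OpenTracing client instead of hand-rolled ids:

```java
import java.util.UUID;

public class TraceDemo {
    static String handleCheckout() {
        String requestId = UUID.randomUUID().toString();   // 1. unique external request id
        return callInventory(requestId);                   // 2. propagate the id downstream
    }

    static String callInventory(String requestId) {
        long start = System.nanoTime();
        // ... real work would happen here ...
        long elapsedNs = System.nanoTime() - start;        // 4. record start/end per operation
        // 3. include the request id in log messages
        return "[" + requestId + "] inventory ok (" + elapsedNs + "ns)";
    }

    public static void main(String[] args) {
        System.out.println(handleCheckout());
    }
}
```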

Application Performance Management (APM) tools, such as the Cloud Native Computing Foundation‘s Jaeger, use OpenTracing to provide additional features, such as a user interface for users to interact with. Below is an architecture diagram for using Jaeger.

The application’s node contains the application and jaeger-client library. Once spans are finished, they are reported to the jaeger-agent, and the jaeger-collector interacts with database backends to store the reported traces to be queried when the user views the jaeger-ui. You can find more details about each Jaeger component here.

Reactive event-driven architectures provide the advantages of responsiveness, resiliency, elasticity, and message passing. Yet as our applications expand and grow, it can become difficult to understand or even debug applications. The purpose of this article (and my presentation) was to share how Vert.x can be used to create reactive microservice applications and how distributed tracing can provide the ability to better work with such applications.

This article is based on the “Vert.x application development with Jaeger distributed tracing” session presented by Tiffany Jachja at Red Hat Summit 2019.

The post Building and understanding reactive microservices using Eclipse Vert.x and distributed tracing appeared first on Red Hat Developer Blog.


This article is a continuation of Migrating Java applications to Quarkus: Lessons learned, and here, I’ll make a comparison of performance metrics for building and running a Java app before and after Quarkus. My goal here is to demonstrate how awesome Quarkus is and maybe help you decide to use Quarkus to build your cool microservices.

To make the comparison, I’ll use the same application that was used in the previous article using Thorntail and Quarkus binaries. The comparison will be made based on the following metrics:

  • Time to build the whole project
  • UberJar size
  • Time spent to start the application for the first time
  • Average memory usage
  • Average CPU usage
  • Loaded classes and active threads

The application will be tested in three different environments, which are:

  • My local dev environment
    • Lenovo t460s
      • Intel(R) Core(TM) i7-6600U
      • RAM 20G
      • SSD HD
  • Rpi 3 B+, specs
  • Red Hat OpenShift v3.11

The set of tests demonstrated here were all done on my local dev environment. To begin, let’s build both versions and compare the time spent on the build process:

spolti@t460s:~$ mvn clean package
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:01 min

spolti@t460s:~$ mvn clean package
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:20 min

The build times show Quarkus being 19 seconds faster than Thorntail, though the build time itself is not too important. Next, with both versions built, let’s compare their sizes:

spolti@t460s:~$ du -sh *
201M rebot-telegram-bot-0.4-SNAPSHOT-thorntail.jar
38M rebot-telegram-bot-1.0-SNAPSHOT-runner.jar

Here we have a big difference: Thorntail produces an uber jar more than five times bigger than Quarkus does.

The next comparison shows the time spent starting the app for the first time, which usually takes the longest. The app is stopped after all plugins have started; I tested on my local dev environment and on the RPi:

spolti@t460s:~$ time java -jar <omitted parameters> rebot-telegram-bot-1.0-SNAPSHOT-runner.jar
<Startup logs>
real 0m10.633s
user 0m15.888s
sys 0m0.621s

pi@raspberrypi:~ $ time java -jar  <omitted parameters> rebot-telegram-bot-1.0-SNAPSHOT-runner.jar
<Startup logs>
real 0m21.309s
user 0m24.968s
sys 0m1.050s
spolti@t460s:~$ time java -jar <omitted parameters> rebot-telegram-bot-0.4-SNAPSHOT-thorntail.jar
<Startup logs>
real 0m38.926s
user 1m24.489s
sys 0m3.008s

pi@raspberrypi:~ $ time java -jar  <omitted parameters>  rebot-telegram-bot-0.4-SNAPSHOT-thorntail.jar
<Startup logs>
real 2m38.637s
user 2m51.688s
sys 0m6.444s

This, in my opinion, is one of the most important metrics, and the one that helped me decide to try Quarkus. Quarkus starts nearly 30 seconds faster than my previous version on my local environment and around 137 seconds faster on the RPi. This particular app takes a few seconds to start because it has around 10 plugins doing work during startup. But imagine a microservice composed of just a few REST endpoints; it could start in less than one second.

For now, let’s see how the Java memory behaves. The graphics below show information collected for 10 minutes:



This comparison is very interesting: as we can see, Quarkus has the best numbers except for threads, where the difference is not too big. Thorntail's memory usage is far larger, and it loads more than twice as many classes. With that said, when targeting devices like the RPi, Quarkus is a perfect fit because it consumes a very small amount of physical resources.
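As an aside, heap figures like these can also be sampled in-process with the standard MemoryMXBean (the numbers in this article came from jconsole/JMX over 10 minutes); a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapSampler {
    // Return the currently used heap in megabytes via the platform MXBean.
    static long usedHeapMb() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.out.println("heap used: " + usedHeapMb() + "M");
            Thread.sleep(100);   // the article sampled far longer, via jconsole
        }
    }
}
```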

The next metrics were done using the container images created with Thorntail and Quarkus version running on Red Hat OpenShift:



On OpenShift, we can also see a considerable difference in memory usage, but notice that this value can be decreased by fine-tuning the JVM memory configurations. For this example, such fine-tuning was not done.


My experience migrating an old application running on Thorntail to Quarkus was very good, and so far, I’ve had only great results with the metrics. In my opinion, migrating to Quarkus is a go; of course, there are dozens of scenarios that I didn’t cover, but I believe that, in most of them, the migration can be done and great results achieved.

The following table compares all the results I found during my tests:

Metric                         Quarkus       Thorntail
Build time                     01:01 min     01:20 min
Uber jar size                  38M           201M
Startup time (local dev env)   0m10.633s     0m38.926s
Startup time (RPi)             0m21.309s     2m38.637s
Heap memory                    ~45M-~125M    ~240M-~790M
Threads                        ~43           ~62
Loaded classes                 ~12,575       ~26,744
CPU usage                      ~0.3          ~0.6

I hope this article is helpful and encourages you to try Quarkus in your next project or when migrating an existing one. In the next article, I will share more interesting stuff: what I had to do to make the application run well as a native image. So, stay tuned for the next installment.

The post Migrating Java applications to Quarkus, Part 2: Before and after appeared first on Red Hat Developer Blog.

