Summit, the world’s fastest supercomputer running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads – it is described as “the world’s first AI supercomputer”. The use of standard components and software makes it easy to port existing applications to Summit as well as develop new applications. As pointed out by Buddy Bland, Project Director for the ORNL Leadership Computing Facility, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

Summit is built using a “fat node” building block concept: each identically configured node is a powerful IBM Power System AC922 server, interconnected with the others via a high-bandwidth, dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node combines CPUs, GPUs, a large pool of memory, and fast local storage in a single, tightly integrated package.

The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, the prominent use of graphics processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.

The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in the early science program. To help drive optimal use of the full system as soon as it was available, ORNL identified a set of research projects that were given access to small subsets of the full Summit system while Summit was being built. This enabled the applications to be ported to the Summit architecture, optimized for Summit, and be ready to scale out to the full system as soon as it was available. These early applications include astrophysics, materials science, systems biology, cancer research, and AI/ML.

Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data. It needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power to run. The Summit system is not only flexible and versatile in the way it can handle workloads, it also addresses one of the biggest challenges of today’s supercomputers – excessive power consumption. Besides being the fastest supercomputer on the planet, it is equally significant that Summit performs well on the Green500 list – a supercomputer measurement of speed and efficiency which puts a premium on energy-efficient performance for sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong performance.

In summary, the fastest supercomputer in the world supports diverse application requirements, driven by simulation, big data, and AI/ML; employs the latest processor, acceleration, and interconnect technologies from IBM, Nvidia, and Mellanox, respectively; and shows unprecedented power efficiency for machines of that scale. Critical to the success of this truly versatile system is Linux, in the form of Red Hat Enterprise Linux, as the glue that brings everything together and allows us to interact with this modern marvel.


As traditional multi-tier enterprise software is adapting to new realities of cloud infrastructure, it also needs to make use of the latest advances in computational and hardware capabilities. Red Hat has been working with major ISVs and partners, like SAP, on digital transformation scenarios while simultaneously helping them to extract additional performance from their hardware with Red Hat Enterprise Linux.

As part of the quest for enhanced performance, the focus for database and analytics applications has been shifting to in-memory execution, a deployment model that SAP HANA offers. In the future, that trend is likely to include even more complex designs that incorporate entire software frameworks for processing information in memory, and that is where SAP Data Hub comes into play. As a result, last year Red Hat introduced an enhanced offering, Red Hat Enterprise Linux for SAP Solutions, designed to help our customers simplify their adoption of Red Hat Enterprise Linux and to cater to the various use cases they may have, including running SAP S/4HANA.

To further aid customers and partners in planning, sizing, and configuring their environments, SAP and Red Hat, along with other software and hardware partners, have historically used a suite of performance benchmarks. For traditional multi-tier deployments, the Sales and Distribution (SD) module became a “gold standard” for benchmarking across the largest enterprises and small businesses alike. With a long history of collaboration with SAP and our mutual hardware OEM partners, like HPE and Dell EMC, among others, Red Hat is no stranger to delivering leading results on these benchmarks across multiple server sizes.

To demonstrate performance and provide additional scalability and sizing information for SAP HANA applications and workloads, SAP introduced the Business Warehouse (BW) edition of the SAP HANA Standard Application Benchmark. Presently on version 2, this benchmark simulates a variety of users with different analytical requirements and measures the key performance indicator (KPI) relevant to each of the three benchmark phases, defined as follows:

  1. Data load phase, testing data latency and load performance (lower is better)
  2. Query throughput phase, testing query throughput with moderately complex queries (higher is better)
  3. Query runtime phase, testing the performance of running very complex queries (lower is better)

As a result of close collaboration with our OEM partners, Red Hat Enterprise Linux (RHEL) was used in several recent publications of the above benchmark.

Specifically, when processing 1.3 billion initial records (a popular dataset size) on a single Dell EMC PowerEdge R940xa server, running the workload on Red Hat Enterprise Linux delivered the best performance across all three benchmark KPIs, outperforming a similarly configured server (see Table 1).

Table 1. Results in scale-up category running SAP BW Edition for SAP HANA Standard Application Benchmark, Version 2 with 1.3B initial records

Operating System | Phase 1 (lower is better) | Phase 2 (higher is better) | Phase 3 (lower is better) | Technology Release | Database Release
Red Hat Enterprise Linux 7.4 [1] | 13,421 sec | 10,544 | 99 sec | SAP NetWeaver 7.50 | SAP HANA 1.0
SUSE Linux Enterprise Server 12 [2] | 14,333 sec | 6,901 | 102 sec | SAP NetWeaver 7.50 | SAP HANA 1.0
Red Hat Enterprise Linux advantage | 7% | 53% | 3% | |

Additionally, with a much larger dataset of 5.2 billion initial records, a Dell EMC PowerEdge R840 server running Red Hat Enterprise Linux also outscored a similarly configured server on two out of three benchmark KPIs, demonstrating better dataset load time and query processing throughput (see Table 2).

Table 2. Results in scale-up category running SAP BW Edition for SAP HANA Standard Application Benchmark, Version 2 with 5.2B initial records

Operating System | Phase 1 (lower is better) | Phase 2 (higher is better) | Phase 3 (lower is better) | Technology Release | Database Release
Red Hat Enterprise Linux 7.4 [3] | 74,827 sec | 3,095 | 175 sec | SAP NetWeaver 7.50 | SAP HANA 2.0
SUSE Linux Enterprise Server 12 [4] | 84,744 sec | 2,916 | 172 sec | SAP NetWeaver 7.50 | SAP HANA 2.0
Red Hat Enterprise Linux advantage | 13% | 6% | -1.75% | |

These results demonstrate Red Hat’s commitment to helping OEM partners and ISVs deliver high-performing solutions to our mutual customers, and showcase close alignment between Red Hat and Dell EMC that, in collaboration with SAP, led to the creation of certified, single-source solutions for SAP HANA. Available in both single-server and larger, scale-out configurations, Dell EMC’s solution is optimized with Red Hat Enterprise Linux for SAP Solutions.

Learn more: https://www.redhat.com/en/partners/dell and https://www.redhat.com/en/resources/red-hat-enterprise-linux-sap-solutions-technology-overview

Results as of July 30, 2018. SAP and SAP HANA are registered trademarks of SAP AG in Germany and in several other countries. See http://www.sap.com/benchmark for more information.

[1] Dell EMC PowerEdge R940xa (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180M processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 1536 GB main memory). Certification number #2018023.
[2] FUJITSU Server PRIMERGY RX4770 M4 (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180 processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 1536 GB main memory). Certification number #2018017.
[3] Dell EMC PowerEdge R840 (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180M processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 3072 GB main memory). Certification number #2018028.
[4] HPE Superdome Flex (4 processors / 112 cores / 224 threads, Intel Xeon Platinum 8180 processor, 2.50 GHz, 64 KB L1 cache and 1024 KB L2 cache per core, 38.5 MB L3 cache per processor, 3072 GB main memory). Certification number #2018025.

Nearly a year ago, Casey Stegman and I wrote a short blog on how we had (big) plans to “change up our marketing approach”… and how it might involve comic books. We also shared our new marketing mantra: Listen. Learn. Build. Well, I have some great news. We listened, we learned, we built—and today I’d like to share.

Listening

In the latter half of 2017 we took our show on the road. After that fateful encounter in Austin—where we learned that some developers just want the operating system to “get out of the way”—we knew there was an ocean of knowledge and experiences to learn from. From Cape Cod (Flock 2017) to Prague (Open Source Summit Europe 2017) to Las Vegas (AWS re:Invent 2017) to San Francisco (Red Hat Summit 2018), we spoke with literally hundreds of passionate problem solvers. We also had a blast discovering people’s various superpowers and illustrating them as the (command line) heroes / heroines that they are.  Some folks you may recognize:

Learning

From these interviews we learned a lot about the challenges that developers, admins, and architects are facing. We learned that while many people are struggling with technical challenges, from embracing containers to re-architecting for hybrid cloud, others are facing equally impressive turbulence as they adopt agile development practices and DevOps workflows. Fun fact: we also learned a lot about our heroes’ / heroines’ “origin stories.” While many got their start via some form of video game (Colossal Cave Adventure anyone?), others were given early access to various bits of hardware (think: “my father brought home this crazy machine”) and began their journey from there.

Building

What did we do with all of this newfound knowledge and information? We built. Specifically, a podcast, called Command Line Heroes, which we debuted earlier this year. It’s a podcast about the people who transform technology from the command line up. We found a successful formula by taking the stories we heard, digging into some additional research, and diving deep into everything from

If you’ve been living in a (colossal) cave and have yet to subscribe to the podcast, it’s not too late! Command Line Heroes is available wherever you download / access podcasts today.

We’re not done!

More good news? We’re not done. As the fall event season approaches here in North America we plan to get back on the road. In August we’ll be in Boston at DevConf.us and then north of the border for Open Source Summit North America 2018. If you have plans to attend either event, find us—we’d love to hear more about your story.

But if you’re not traveling, here’s another way you can help us listen, learn, and build. We’ve designed a showdown of sorts. In fact, it’s a Command Line Showdown.

As we ramp up towards celebrating System Administrator Appreciation Day on Friday, July 27th, we’re going to pit various commands against each other and allow y’all to help us find “the most useful Linux command.” Quick note: The set of commands we chose was sourced from conversations with you at some of the aforementioned events. Of course, we couldn’t include them all, but if this takes off we can definitely look to a future with more polls, more commands, and bigger showdowns.

So get voting! But also don’t be shy (we know you won’t be) about giving us your feedback.

One final note. We’re hard at work on a season 2 of Command Line Heroes. And if you’d love to influence its direction, but can’t come to any of the events we’re attending, vote in “the showdown” or email us your thoughts (commandlineheroes@redhat.com). If all else fails, subscribe to the podcast and stay tuned for more news soon.


Balancing size and features is a universal challenge when building software. So, it’s unsurprising that this holds true when building container images. If you don’t include enough packages in your base image, you end up with images which are difficult to troubleshoot, missing something you need, or just cause different development teams to add the exact same package to layered images (causing duplication). If you build it too big, people complain because it takes too long to download – especially for quick and dirty projects or demos. This is where Buildah comes in.

In the currently available ecosystem of build tools, there are two main kinds of build tools:

  1. Ones which build container images from scratch.
  2. Those that build layered images.

Buildah is unique in that it elegantly blurs the line between both – and, it has a rich set of capabilities for each. One of those rich capabilities is multi-stage builds.

At Red Hat Summit 2018 in San Francisco, Scott McCarty and I boiled the practice of building production ready containers down into five key tenets – standardize, minimize, delegate, process, and iterate (video & presentation).

Two tenets in particular are often at odds – standardize and minimize. It makes sense to standardize on a rich base image, while at the same time minimizing the content in layered builds. Balancing both is tricky, but when done right, reaps the benefits of OCI image layers at scale (lots of applications) and improves registry storage efficiency.

Multi-stage builds

A particularly powerful example of how to achieve this balance is the concept of multi-stage builds. Since build dependencies like compilers and package managers are rarely required at runtime, we can exclude them from the final build by breaking it into two parts. We can do the heavy lifting in the first part, then use the build artifacts (think Go binaries or jars) in the second. We will then use the container image from the second build in production.

Using this methodology leverages the power of rich base images, while at the same time, results in a significantly smaller container image. The resultant image isn’t carrying additional dependencies that aren’t used during runtime. The multi-stage build concept became popular last year with the release of Docker v17.05, and OpenShift has long had a similar capability with the concept of chaining builds.

OK, multi-stage builds are great, you get it, but to make this work right, the two builds need to be able to copy data between them. Before we tackle this, let’s start with some background.

Buildah background

Buildah was a complete rethink of how container image builds could and should work. It follows the Unix philosophy of small, flexible tools. Multi-stage builds were part of the original design and have been possible since its inception. With the release of Buildah 1.0, users can now take advantage of the simplicity of using multi-stage builds with the Dockerfile format. All of this, with a smaller tool, no daemon, and tons of flexibility during builds (ex. build time volumes).

Below we’ll take a look at how to use Buildah to accomplish multi-stage builds with a Dockerfile and also explore a simpler, yet more sophisticated way to tackle them.

Using Dockerfiles:
$ buildah bud -t [image:tag] .

….and that’s it! Assuming your Dockerfile is written for multi-stage builds and in the directory the command is executed, everything will just work. So if this is all you’re looking for, know that it’s now trivial to accomplish this with Buildah in Red Hat Enterprise Linux 7.5.
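
To make “written for multi-stage builds” concrete, here is a minimal sketch of such a Dockerfile, created and built from the shell. The Go paths and file names are illustrative and loosely echo the href-counter example referenced later in this post:

cat > Dockerfile <<'EOF'
# first stage: build the binary with the Go toolchain
FROM golang:1.7.3 as builder
WORKDIR /go/src/app
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# second stage: copy only the compiled binary into a small runtime image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /go/src/app/app .
CMD ["./app"]
EOF

buildah bud -t multi-stage:latest .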

Now, let’s dig a little deeper and take a look at using Buildah’s native commands to achieve the same outcome and some reasons why this can be a powerful alternative for certain use cases.

For clarity, we’ll start by using Alex Ellis’s blog post that demonstrates the benefits of performing multi-stage builds. We use this example simply to compare and contrast the Dockerfile version with Buildah’s native capabilities; it’s not an endorsement of any underlying technologies such as Alpine Linux or APK. These examples could all be done in Fedora, but that would make the comparison less clear.

Using Buildah Commands

Using his https://github.com/alexellis/href-counter repository, we can convert the included Dockerfile.multi file into a simple script like this:

First Build
#!/bin/bash

# build container

buildcntr1=$(buildah from golang:1.7.3)

buildmnt1=$(buildah mount $buildcntr1)

Using simple variables like these is not required, but it makes the later commands clearer to read, so it’s recommended. Think of buildcntr1 as a handle that represents the container build, while buildmnt1 holds the path of the directory where the container’s filesystem is mounted.

buildah run $buildcntr1 go get -d -v golang.org/x/net/html

This is the first command verbatim from the original Dockerfile. All that’s needed is to change RUN to run and point Buildah at the container we want to execute the command in. Once the command completes, the build container holds a local copy of the golang.org/x/net/html dependency. Next, we need to get our application source, app.go, into the build container. Buildah has a native directive for copying content into a container build:

buildah copy $buildcntr1 app.go  .

Alternatively, we can use the system command to do the same thing by referencing the mount point:

cp app.go $buildmnt1/go

For this example, both of these lines accomplish the same thing. We can use Buildah’s copy command the same way the COPY directive works in a Dockerfile, or we can simply use the host’s cp command to copy the file into the mounted container filesystem. In the rest of this tutorial, we’ll rely on the host’s commands.

Now, let’s build the code:

buildah run $buildcntr1 /bin/sh -c "CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app ." 
Second Build

Now let’s define a separate runtime container that we’ll use to run our application in production:

# runtime container

buildcntr2=$(buildah from alpine:latest)

buildmnt2=$(buildah mount $buildcntr2)

The same tweak applies to the next command: we change RUN to run and execute it in the new container:

buildah run $buildcntr2 apk --no-cache add ca-certificates

#buildah copy $buildcntr2 $buildmnt1/go/app .

Or:

cp $buildmnt1/go/app $buildmnt2

Here we have the same option as above. To bring the compiled application into the second build, we can use the copy command from buildah or the host.

Now, add the default command to the production image.

buildah config --cmd ./app $buildcntr2

Finally, we unmount and commit the image, and optionally clean up the environment:

#unmount & commit the image

buildah unmount $buildcntr1

buildah unmount $buildcntr2

buildah commit $buildcntr2 multi-stage:latest


#clean up build

buildah rm $buildcntr1 $buildcntr2

Don’t forget that Buildah can also push the image to your desired registry using `buildah push`.
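
For example, the committed image could be pushed like this (the registry and repository names below are placeholders):

buildah push multi-stage:latest docker://registry.example.com/demo/multi-stage:latest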

The beauty of Buildah is that we can continue to leverage the simplicity of the Dockerfile format, but we’re no longer bound by its limitations. People do some nasty, nasty things in a Dockerfile to hack everything onto a single line. This can make Dockerfiles hard to read, difficult to maintain, and inelegant.

When you combine the power of being able to manipulate images with native Linux tooling from the build host, you are now free to go beyond the Dockerfile commands! This opens up a ton of new possibilities for the content of container images, the security model involved, and the process for building.

A great example of this was explored in one of Tom Sweeney’s blog posts on creating minimal containers. Tom’s example of leveraging the build host’s package manager is a great one, and means we no longer require something like “yum” to be available in the final image.

On the security side, we no longer require access to the Docker socket, which is a win for performing builds from Kubernetes/OpenShift. In fairness, Buildah currently requires escalated privileges on the host, but soon this will no longer be the case. Finally, on the process side, we can leverage Buildah to augment any existing build process, be it a CI/CD pipeline or building from a Kubernetes cluster, to create simple and production-ready images.

Buildah provides all of the primitives needed to take advantage of the simplicity of Dockerfiles combined with the power of native Linux tooling, and is also paving the way to more secure container builds in OpenShift. If you are running Red Hat Enterprise Linux, or possibly an alternative Linux distribution, I highly recommend taking a look at Buildah and maximizing your container build process for production.


At last year’s International Supercomputing Conference (ISC) we noted the trend of Linux underpinning the vast majority of supercomputers that are being built using sophisticated acceleration and interconnect technologies, effectively redefining the term “commodity” in high performance computing (HPC).

Fast forward to ISC18, and Linux is the de facto standard operating system for all top supercomputers, with Red Hat Enterprise Linux powering some of the largest and most intelligent supercomputers on the planet – Summit and Sierra. Red Hat is looking forward to seeing how these two newest US-based supercomputers have scored on the latest iteration of the Top500 list.

In the past, HPC workloads have had to run on custom-built software stacks and overly-specialized hardware. As HPC customers move toward cloud deployments, Red Hat is bringing open technologies to the supercomputing arena, from the world’s leading enterprise Linux platform tailored for HPC workloads to massively scalable, fully open cloud infrastructure, along with the management and automation technologies needed to keep these deployments running smoothly.

Red Hat technologies are at the heart of this transformation and we will be showcasing our latest solutions for HPC at ISC18. Stop by our booth (H-700) to learn about:

  • Proven HPC infrastructure
    Red Hat Enterprise Linux provides the foundation for many HPC software stacks and is available across multiple hardware architectures. It is at the core of Red Hat Virtualization and Red Hat OpenStack Platform, both of which are part of many HPC environments.
  • Persistent scale-out storage
    With the modernization of HPC applications based on containers and the adoption of hybrid cloud infrastructure, many enterprises and government agencies with HPC workloads are increasingly frustrated with existing storage technologies. Software-defined solutions, like Red Hat Gluster Storage and Red Hat Ceph Storage, provide cost-effective alternatives for scale-out network-attached storage (NAS), containerized applications, and hybrid cloud environments.
  • Emerging technologies for highly scalable environments
    Large supercomputing sites find Red Hat OpenShift Container Platform and Red Hat Ansible Automation compelling for their science work, as these products can provide better application portability, system provisioning, and automation. Modern applications increasingly involve machine learning, artificial intelligence, and other data science workloads that can make use of hardware such as GPUs. NVIDIA and Red Hat are working hard to enable these workloads and to make them usable with Linux containers for deployment simplification, build automation, and scale.

Also in the booth, you will have an opportunity to experience the power and flexibility of the Red Hat portfolio by way of a virtual reality experience. In this interactive encounter, you will create your own compute cluster using multiple hardware architectures, including Arm, x86_64 and IBM POWER, deploy multiple Red Hat products to solve advanced computational problems and visualize the results.

Red Hat’s chief ARM architect, Jon Masters, will be presenting on the effects of the Spectre and Meltdown vulnerabilities on large size clusters during the show. Be sure to catch his presentation at booth N-210 in the exhibit hall on Wednesday, June 27 from 3:30-4:00 pm.

See demos in our booth, discuss trends, challenges, and opportunities you’re facing with our global team, claim your red fedora, and, just in time for the FIFA World Cup, enter to win a soccer-themed barbecue grill that we’re raffling off at the end of each conference day.

For last minute announcements, demo updates and additional information please visit www.red.ht/ISC18. We look forward to seeing you in Frankfurt!


Organizations today are seeking to increase productivity, flexibility and innovation to deliver services faster without sacrificing security, stability and performance. As hybrid IT environments continue to expand and evolve, security must be automated to scale and mitigate risks to achieve compliance and meet the needs of the business.

Why should security and compliance be automated? According to the 2017 Verizon Data Breach Investigations Report, “81% of hacking-related breaches leveraged either stolen and/or weak passwords”. Falling victim to stolen or weak passwords is preventable by defining and implementing strong password policies using automation. Gartner, for its part, predicts that “99% of the vulnerabilities exploited by the end of 2020 will continue to be ones known by security and IT professionals at the time of the incident”. Automation can help enforce security and compliance and help protect against security vulnerabilities and breaches.

Red Hat Enterprise Linux provides security technologies, certifications, and the ongoing support of the Product Security team to combat vulnerabilities, protect your data, and meet regulatory compliance. You can automate regulatory compliance and security configuration remediation across your systems and within containers with OpenSCAP, Red Hat’s National Institute of Standards and Technology (NIST)-certified scanner that checks and remediates against vulnerabilities and configuration security baselines, including against National Checklist content for PCI-DSS, DISA STIG, and more. Additionally, centralize and scale out configuration remediation across your entire hybrid environment with the broader Red Hat management portfolio.

OpenSCAP is a family of open source SCAP tools and content that help users create standard security checklists for enterprise systems. Natively shipping in Red Hat Enterprise Linux and Red Hat Satellite, OpenSCAP provides practical security hardening advice for Red Hat technologies and links to compliance requirements, making deployment activities like certifications and accreditations easier. OpenSCAP allows you to perform both vulnerability and security compliance checks in a fully automated way.

To better meet the varied security needs of hybrid computing, Red Hat Enterprise Linux 7.5 provides enhanced software security automation to mitigate risk through the integration of OpenSCAP with Red Hat Ansible Automation. This enables the creation of Ansible playbooks directly from OpenSCAP scans which can then be used to implement remediations more rapidly and consistently across a hybrid IT environment. The remediations are generated in the form of Ansible playbooks, either based on profiles or based on scan results.

A playbook based on a SCAP Security Guide (SSG) profile contains fixes for all rules, and the system is remediated according to the profile regardless of the state of the machine. On the other hand, playbooks based on scan results contain only fixes for rules that failed during an evaluation.

In Red Hat Enterprise Linux 7.5, Red Hat provides pre-built Ansible playbooks for many compliance profiles. The playbooks are stored in the /usr/share/scap-security-guide/ansible/ directory. You can apply the pre-generated Ansible playbooks provided by the scap-security-guide in this directory on your host.
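
For example, assuming the scap-security-guide package is installed, a run might look roughly like this; the playbook file name varies with the content version, so treat the one below as illustrative:

$ ls /usr/share/scap-security-guide/ansible/
$ ansible-playbook -i inventory.ini /usr/share/scap-security-guide/ansible/ssg-rhel7-role-stig-rhel7-disa.yml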

Alternatively, to generate an Ansible playbook based on a profile (for example, the DISA STIG profile for Red Hat Enterprise Linux 7), enter the following command:

$ oscap xccdf generate fix --fix-type ansible \
--profile xccdf_org.ssgproject.content_profile_stig-rhel7-disa \
--output stig-rhel7-role.yml \
/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

To generate an Ansible playbook based on the results of a scan, enter the following command:

$ oscap xccdf generate fix --fix-type ansible \
--result-id "" \
--output stig-playbook-result.yml \
results.xml

where the results.xml file contains the results of a scan obtained with the --results option, and --result-id specifies the ID of the TestResult component in the results file. In the example above we use an empty result-id; this is a trick to avoid specifying the full result ID.
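
For reference, a results file like this can be produced by first evaluating the system against the data stream; a minimal sketch, reusing the profile and data stream paths from the earlier example:

$ oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig-rhel7-disa \
--results results.xml --report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml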

To apply the Ansible playbook, enter the following command:

$ ansible-playbook -i inventory.ini stig-playbook-result.yml

Note that the ansible-playbook command is provided by the ansible package. See the ansible-playbook(1) man page and the Ansible Tower User Guide for more information.

The atomic scan command enables users to use OpenSCAP scanning capabilities to scan docker-formatted container images and containers on the system. It is possible to scan for known CVE vulnerabilities or for configuration compliance. Additionally, users can remediate docker-formatted container images to the specified policy.
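
For example, a plain vulnerability scan of an image, with no scan type specified so that the default CVE scan is used, looks like this:

$ sudo atomic scan registry.access.redhat.com/rhel7:latest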

The OpenSCAP scanner and SCAP content are provided in a container image that allows for easier updating and deployment of the scanning tools. The `atomic scan` command enables the evaluation of Red Hat Enterprise Linux based container images and running containers against any provided SCAP profile.

For example, here is how to scan the container for configuration compliance to the RHEL 7 DISA STIG profile.

$ sudo atomic scan --scan_type configuration_compliance \
--scanner_args profile=xccdf_org.ssgproject.content_profile_stig-rhel7-disa,report \
registry.access.redhat.com/rhel7:latest

To remediate docker-formatted container images to the specified policy, you need to add the --remediate option to the atomic scan command when scanning for configuration compliance. The following command builds a new remediated container image compliant with the DISA STIG policy from the Red Hat Enterprise Linux 7 container image:

$ sudo atomic scan --remediate --scan_type configuration_compliance \
--scanner_args profile=xccdf_org.ssgproject.content_profile_stig-rhel7-disa,report \
registry.access.redhat.com/rhel7:latest

Finally, in order to automate security and compliance at scale for hybrid environments, you will need an automation strategy that includes products and tools that help you scan and remediate more than a single machine at a time. For example, you can use OpenSCAP with a combination of products from Red Hat’s management portfolio, which includes Red Hat CloudForms, Red Hat Ansible Automation, Red Hat Satellite, and Red Hat Insights. Using OpenSCAP with these Red Hat management portfolio products, you can automate security and compliance at scale for your hybrid environment.

The built-in security automation capabilities of Red Hat Enterprise Linux with the integration of OpenSCAP with Red Hat Ansible Automation gives you the flexibility and ease of automating security compliance. This integration also provides the secure foundation to do security automation at scale by extending these built-in capabilities with Red Hat’s management portfolio.

Learn more in this webcast: Automating Security Compliance with Ease.


Special guest blogger: Ashish Nadkarni, Program Vice President, Worldwide Infrastructure Practice, IDC

Applications are crucial to the functioning of a modern enterprise. Firms that constantly evolve and update their application strategy are the ones that are successful at expanding their competitive differentiation in the digital economy. They infuse their applications portfolio with new-generation applications that run in the cloud, are delivered as microservices, leverage open-source technologies, and are increasingly (infrastructure) platform independent. During application design, the choice of database management systems (DBMS) and operating system environments (OSE) heavily influences the scalability and reliability of the overall stack.

Let us start with the choice of a proven SQL-based DBMS with modern in-memory capabilities for storing structured and semi-structured data. It enables the application to process transactions quickly and reliably. It enables the ingestion of huge and diverse data sets with low latency for large and/or real-time analytical tasks. Such databases support the ability to embed analytic queries in transaction processing, moving from online transaction processing (OLTP) to analytic transaction processing (ATP). And finally, databases make it easier for the application stack to meet security and compliance requirements such as PCI-DSS, GDPR and HIPAA. An example of a widely used SQL-based DBMS for this purpose is Microsoft SQL Server.

Next, let us look at the role played by the OSE. The choice of an appropriate OSE like Linux is essential for consolidating and modernizing the current-generation of applications, while also supporting the development and delivery of new-generation applications. The more versatile the OSE, the easier it is to repackage, replatform and refactor the entire application stack, including its data management and analytics components. A commercial Linux distribution such as Red Hat Enterprise Linux can accelerate database consolidation, application modernization, development and packaging initiatives. Red Hat Enterprise Linux is also Microsoft’s reference Linux platform for SQL Server – which means existing Microsoft and Red Hat customers can take advantage of the consolidation benefits inherent in using Linux without compromising on functionality or service quality.

Linux, especially commercial Linux, has grown in the enterprise. This growth isn’t surprising given the ability of Linux to enable deployment flexibility, development agility and vendor choice. Linux also helps IT organizations to achieve greater ROI through faster release cycles and meet enterprise-wide service level objectives.

Linux is also developer friendly. It enables IT to give developers more control over provisioning and orchestration of infrastructure resources. For example, running applications and databases in containers enables integration with development methodologies like DevOps and continuous integration / continuous delivery (CI/CD). The use of a microservice delivery model creates a smaller and nimbler database footprint and allows for a higher density of database instances when compared with running the same environment in a virtual machine.
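
As a rough illustration of that container-based model, a SQL Server instance can be started with a couple of commands; the image name, tag, and password below are placeholders, so check Microsoft’s documentation for the currently supported image:

$ docker run -d --name mssql \
  -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong!Passw0rd' \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2017-latest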

It is imperative for IT to select an appropriate DBMS and OSE for enterprise-wide consolidation in order to maximize the chances of a successful outcome. A tried and trusted DBMS platform when matched with an equally proven commercial Linux OS can amplify the value proposition of the entire application stack. It brings together industry-leading product development expertise, investments in cloud platforms and services, and the support and services reputation of the respective vendors.

To learn more about data management platform consolidation with enterprise Linux visit https://red.ht/DBMSwithLinux


Identity Management in Red Hat Enterprise Linux (IdM) supports two different integration options with Active Directory: synchronization and trust.

I recently got a question about comparison of the two. I was surprised to find that I haven’t yet covered this topic in my blog. So let us close this gap!

The customer was interested in comparison of the two. Here is the question he asked:

To integrate IdM with AD 2016 I want to use winsync rather than trusts.

  • We would like to be able to manage sudo, SELinux, SSH keys and other options that are not in AD.
  • I understand the advantages and disadvantages of each configuration, and it seems to me that synchronization is the best option to get the maximum functionality out of IdM.
  • But I would like to know the reason why Red Hat does not recommend synchronization.

Red Hat documentation states:

“In some integration scenarios, the user synchronization may be the only available option, but in general, use of the synchronization approach is discouraged in favor of the cross-realm trust-based integration.”

Is there any special reason why Red Hat recommends trusts (although more complicated) vs. winsync?

Thank you for asking!

We in fact do not recommend synchronization, for several reasons that I will lay out below, but we also acknowledge some cases where synchronization might be the only option. So let us dive into the details…

When you have synchronization you really have two accounts: one in AD and one in IdM. These are two different users, so you need to keep the passwords in sync too. Keeping passwords in sync requires installing a password-intercepting plugin, PassSync, on every AD domain controller, because it is never known which domain controller will be used for a password change operation. After you deploy the plugin to the domain controllers, you need to reset the password for every account so that the plugin can intercept the password and store it in the IdM account. So in fact there is a lot of complexity related to synchronization. Let us add that this solution works only for a single domain: if you have more than one domain in a forest, or even several forests, you can’t use sync. The synchronization is also done against a single AD domain controller, so if the connection is down the synchronization stops working, and there is no failover.
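
For reference, this is roughly how a winsync agreement with PassSync is wired up on the IdM side; the host names, bind DN, passwords, and certificate path below are placeholders:

$ ipa-replica-manage connect --winsync \
  --binddn "cn=Administrator,cn=Users,dc=ad,dc=example,dc=com" \
  --bindpw Secret123 --passsync PassSyncSecret \
  --cacert /etc/openldap/cacerts/ad-ca.crt \
  ad-dc1.ad.example.com -v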

Another issue to keep in mind is that with synchronization you have two different places where user authentication happens. For compliance purposes, all your audit tools need to be pointed at yet another environment, and they would have to collect and merge logs from both IdM and AD. This is usually doable, but it is yet another complexity to keep in mind. Another aspect is account-related policies: when you have two different accounts, you need to make sure the policies are the same and do not diverge.

Synchronization also only works for user accounts, not groups. Group structure needs to be created on the IdM side.

Benefits of Trust

With trust there are no duplicate accounts. Users always authenticate against AD. All the audit trails are in a single place. Since there is only one account per user, all the settings that apply to the account (password length, strength, expiration, etc.) are always consistent with the company-wide policy, and you do not need to check and enforce them in more than one place. This makes it easier to pass audits.

Trusts are established at the environment-to-environment level, so there is really no single point of failure.

Trust allows users in all AD domains to access the IdM-managed environment, and since IdM can establish trusts with multiple AD forests if needed, you can cover all the forests in your infrastructure.

With the trust setup, POSIX attributes can be managed in AD via schema extensions (if they are already there), dynamically generated from AD SIDs on the fly by IdM and SSSD, or set on the IdM side as explicit overrides. This capability also allows setting different POSIX attributes for different sets of clients, which is usually needed in complicated environments where the UID and GID namespace has duplicates due to NIS history or mergers.
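
As an illustration of the override mechanism, POSIX attributes for an AD user can be set from the IdM side with an ID view override; the user name and ID values below are placeholders:

$ ipa idoverrideuser-add 'Default Trust View' jdoe@ad.example.com \
  --uid=705600001 --gidnumber=705600001 --shell=/bin/bash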

AD groups are transparently exposed by IdM to the clients without the need to recreate them. IdM groups can be created on top of or in addition to AD groups.

The information above can be summarized in the following table:

So the promise of the trust setup is to provide a more flexible, reliable and feature-rich solution. But this is the promise; this is why I put an asterisk in the table. The reality is more complex. In practice there are challenges with the trust setup too. It turns out the trust setup assumes a well-configured and well-behaved AD environment. In multiple deployments, Red Hat consultants have uncovered misconfigurations of AD, DNS, firewalls and other elements of the infrastructure that made deployments more painful than we would like them to be. Despite these challenges, some of which are covered in the article Discovery and Affinity published last year and some of which will be covered in my talk at Red Hat Summit in May, most current deployments find a way to resolve the deficiencies of the existing infrastructure and get to a stable and reliable environment.

So synchronization might be attractive in the case of a small environment, but even there setting up a trust is not a big complication.

The only case where I would call out synchronization is two-factor authentication (2FA) using one-time password (OTP) tokens. Customers usually want some subset of users to be able to use OTP tokens to log into Linux systems. Since AD does not support 2FA natively, some other system needs to assign a token to an AD user. It can be a third-party solution, if the customer has one, or it can be IdM. In this case, to provide centralized OTP-based authentication for the Linux systems managed by IdM, the accounts that would use OTP need to be created in IdM. This can be done in different ways: by syncing them from AD using winsync, by syncing them from AD using the ipa migrate-ds command, by a script that loads user data from some other source using the IdM CLI or an LDAP operation, or just manually. Once the user is created, a password and token can be assigned in IdM, or the account can be configured to proxy authentication to an existing 2FA solution via RADIUS. IdM allows you to enforce 2FA for a selected set of systems and services; to learn how, please read the Red Hat documentation about authentication indicators. This is the best approach: it allows the general population of users to access systems with their AD password, while a selected set of special users is required to use 2FA on a specific subset of hosts. The only limitation is that this approach works only on Red Hat Enterprise Linux 7 systems; older systems have limitations in their OTP support.
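
A minimal sketch of that flow for a single IdM-managed user follows; the user, token type, and service names are placeholders:

# enable OTP authentication for the user
$ ipa user-mod alice --user-auth-type=otp
# enroll a TOTP token owned by the user
$ ipa otptoken-add --type=totp --owner=alice
# require OTP-backed authentication for a specific service via an authentication indicator
$ ipa service-mod HTTP/app01.example.com --auth-ind=otp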

If all users need OTP tokens to log into the Linux systems, then trust does not make sense and syncing accounts might be a more attractive option.

Thank you for reading! Comments and suggestions are welcome!
