
OpenStack Summit returns to Vancouver, Canada, May 21-24, 2018, and Red Hat will be returning as well with as big a presence as ever. Red Hat will be a headline sponsor of the event, and you’ll have plenty of ways to interact with us during the show.

First, you can hear from our head of engineering and OpenStack Foundation board member, Mark McLoughlin, during the Monday morning Keynote sessions. Mark will be discussing OpenStack’s role in a hybrid cloud world, as well as the importance of OpenStack and Kubernetes integrations. After the keynotes, you’ll want to come by the Red Hat booth in the exhibit hall to score some cool SWAG (it goes quickly), talk with our experts, and check out our product demos. Finally, you’ll have the entire rest of the show to listen to Red Hatters present and co-present on a variety of topics, from specific OpenStack projects, to partner solutions, to OpenStack integrations with Kubernetes, Ansible, Ceph storage and more. These will be delivered via traditional sessions, labs, workshops, and lunch and learns. For a full list of general sessions featuring Red Hatters, see below.

Beyond meeting us at the Red Hat booth or listening to one of us speak in a session or during a keynote, here are the special events we’ll be sponsoring where you can also meet us. If you haven’t registered yet, use our sponsor code: REDHAT10 to get 10% off the list price. And check out the OpenStack Foundation’s great deals on hotels through early May.

Containers, Kubernetes and OpenShift on OpenStack Hands-on Training
Join Red Hat’s OpenShift team for a full day of discussion and hands-on labs to learn how OpenShift can help you deliver apps even faster on OpenStack.
Date: May 20th, 9:00 am-5:00 pm
Location: Vancouver Convention Centre West – Level Two – Room 218-219
RSVP required

Red Hat and Trilio Evening Social
All are invited to join Red Hat and Trilio for an evening of great food, drinks, and waterfront views of Vancouver Harbour.
When: Monday, May 21st, 7:30-10:30 pm
Location: TapShack Coal Harbour
RSVP required 

Women of OpenStack Networking Lunch sponsored by Red Hat
Meet with other women for lunch and discuss important topics affecting women in technology and business.
Guest speaker: Margaret Dawson, Vice President of Product Marketing, Red Hat
Date: Wednesday, May 23 2018, 12:30-1:50 pm
Location: Vancouver Convention Centre West, Level 2, Room 215-216 
More information

Red Hat Training and Certification Lunch and Learn
Topic: Performance Optimization in Red Hat OpenStack Platform
Wednesday, May 23rd, 12:30-1:30 pm
Location: Vancouver Convention Centre West, Level 2, Room 213-214 
RSVP required 

Red Hat Jobs Social

"All Around the World" - A Red Hat music video - YouTube

Connect with Red Hatters and discover why working for the open source leader is a future worth exploring. We’ll have food, drinks, good vibes, and a chance to win some awesome swag.
Date: Wednesday, May 23, 6:00-8:00 pm
Location: Rogue Kitchen and Wetbar
RSVP required

General Sessions Featuring Red Hatters

Monday

Session Speaker Time
OpenStackSDKs – Project Update Monty Taylor 1:30 PM
Docs/i18n – Project Onboarding Stephen Finucane, Frank Kloeker (Deutsche Telekom), Ian Y. Choi (Fusetools Korea) 1:30 PM
Linux Containers Internal Lab Scott McCarty 1:30 PM
The Wonders of NUMA, or Why Your High Performance Application Doesn’t Perform Stephen Finucane 2:10 PM
Glance – Project Update Erno Kuvaja 3:10 PM
Call It Real: Virtual GPUs in Nova Sylvain Bauza, Jianhua Wang (Citrix) 3:10 PM
A Unified Approach to Role-Based Access Control Adam Young 3:10 PM
Unlock Big Data Efficiency with Ceph Data Lake Kyle Bader, Yong Fu (Intel), Jian Zhang (Intel), Yuan Zhuo (INTC) 4:20 PM
Storage for Data Platforms Kyle Bader, Uday Boppana 5:20 PM

Tuesday

Session Speaker Time
OpenStack with IPv6: Now You Can! Dustin Schoenbrun, Tiago Pasqualini (NetApp), Erlon Cruz (NetApp) 9:00 AM
Integrating Keystone with large-scale centralized authentication Ken Holden, Krzysztof Janizewski 9:50 AM
Sahara – Project Onboarding Telles Nobrega 11:00 AM
Lower the Barriers: Or How To Make Hassle-Free Open Source Events Sven Michels 11:40 AM
Glance – Project Onboarding Erno Kuvaja, Brian Rosmaita (Verizon) 11:50 AM
Kuryr – Project Update Daniel Mellado 12:15 PM
Sahara – Project Update Telles Nobrega 1:50 PM
Heat – Project Update Rabi Mishra, Thomas Herve, Rico Lin (EasyStack) 1:50 PM
Superfluidity: One Network To Rule Them All Daniel Mellado, Luis Tomas Bolivar, Irena Berezovsky (Huawei) 3:10 PM
Burnin’ Down the Cloud: Practical Private Cloud Management David Medberry, Steven Travis (Time Warner Cable) 3:30 PM
Infra – Project Onboarding David Moreau-Simard, Clark Boylan (OpenStack Foundation) 3:30 PM
Intro to Kata Containers Components: a Hands-on Lab Sachin Rathee, Sudhir Kethamakka 4:40 PM
Kubernetes Network-policies and Neutron Security Groups – Two Sides of the Same Coin? Daniel Mellado, Eyal Leshem (Huawei) 5:20 PM
How To Survive an OpenStack Cloud Meltdown with Ceph Federico Lucifredi, Sean Cohen, Sebastien Han 5:30 PM
OpenStack Internal Messaging at the Edge: In Depth Evaluation Kenneth Giusti, Matthieu Simonin, Javier Rojas Balderrama 5:30 PM

Wednesday

Session Speaker Time
Oslo – Project Update Ben Nemec 9:50 AM
Kuryr – Project Onboarding Daniel Mellado, Irena Berezovsky (Huawei) 9:50 AM
How To Work with Adjacent Open Source Communities – User, Developer, Vendor, Board Perspective Mark McLoughlin, Anni Lai (Huawei), Davanum Srinivas (Mirantis), Christopher Price (Ericsson), Gnanavelkandan Kathirvel (AT&T) 11:50 AM
Nova – Project Update Melanie Witt 11:50 AM
TripleO – Project Onboarding Alex Schultz, Emilien Macchi, Dan Prince 11:50 AM
Distributed File Storage in Multi-Tenant Clouds using CephFS Tom Barron, Ramana Raja, Patrick Donnelly 12:20 PM
Lunch & Learn – Performance optimization in Red Hat OpenStack Platform Razique Mahroua 12:30 PM
Cinder Thin Provisioning: a Comprehensive Guide Gorka Eguileor, Tiago Pasqualini (NetApp), Erlon Cruz (NetApp) 1:50 PM
Nova – Project Onboarding Melanie Witt 1:50 PM
Glance’s Power of Image Import Plugins Erno Kuvaja 2:30 PM
Mistral – Project Update Dougal Matthews 3:55 PM
Mistral – Project Onboarding Dougal Matthews, Brad Crochet 4:40 PM
Friendly Coexistence of Virtual Machines and Containers on Kubernetes using KubeVirt Stu Gott, Stephen Gordon 5:30 PM

Thursday

Session Speaker Time
Manila – Project Update Tom Barron 9:00 AM
Oslo – Project Onboarding Ben Nemec, Kenneth Giusti, Jay Bryant (Lenovo) 9:00 AM
Walk Through of an Automated OpenStack Deployment Using Triple-O Coupled with OpenContrail – POC Kumythini Ratnasingham, Brent Roskos, Michael Henkel (Juniper Networks) 9:00 AM
Working Remotely in a Worldwide Community Doug Hellmann, Julia Kreger, Flavio Percoco, Kendall Nelson (OpenStack Foundation), Matthew Oliver (SUSE) 9:50 AM
Manila – Project Onboarding Tom Barron 9:50 AM
Centralized Policy Engine To Enable Multiple OpenStack Deployments for Telco/NFV Bertrand Rault, Marc Bailly (Orange), Ruan He (Orange) 11:00 AM
Multi Backend CNI for Building Hybrid Workload Clusters with Kuryr and Kubernetes Daniel Mellado, Irena Berezovsky (Huawei) 11:50 AM
Workshop/Lab: Containerize your Life! Joachim von Thadden 1:50 PM
Root Your OpenStack on a Solid Foundation of Leaf-Spine Architecture! Joe Antkowiak, Ken Holden 2:10 PM
Istio: How To Make Multicloud Applications Real Christian Posta, Chris Hoge (OpenStack Foundation), Steve Drake (Cisco), Lin Sun, Costin Manolache (Google) 2:40 PM
Push Infrastructure to the Edge with Hyperconverged Cloudlets Kevin Jones 3:30 PM
A DevOps State of Mind: Continuous Security with Kubernetes Chris Van Tuin 3:30 PM
OpenStack Upgrades Strategy: the Fast Forward Upgrade Maria Angelica Bracho, Lee Yarwood 4:40 PM
Managing OpenStack with Ansible, a Hands-on Workshop Julio Villarreal Pelegrino, Roger Lopez 4:40 PM

We’re looking forward to seeing you there!


Red Hat Summit is just around the corner, and we’re excited to share all the ways in which you can connect with OpenStack® and learn more about this powerful cloud infrastructure technology. If you’re lucky enough to be headed to the event in San Francisco, May 8-10, we’re looking forward to seeing you. If you can’t go, fear not, there will be ways to see some of what’s going on there remotely. And if you’re undecided, what are you waiting for? Register today

From the time Red Hat Summit begins you can find hands-on labs, general sessions, panel discussions, demos in our partner pavilion (Hybrid Cloud section), and more throughout the week. You’ll also hear from Red Hat OpenStack Platform customers on their successes during some of the keynote presentations. Need an open, massively scalable storage solution for your cloud infrastructure? We’ll also have sessions dedicated to our Red Hat Ceph Storage product.

Red Hat Summit has grown significantly over the years, and this year we’ll be holding activities in both Moscone South and Moscone West. And with all of the OpenStack sessions and labs happening, it may seem daunting to make it to everything, especially if you need to transition from one building to the next. But worry not. Our good friends from Red Hat Virtualization will be sponsoring pedicabs to help transport you between the buildings.

Here’s our list of sessions for OpenStack and Ceph at Red Hat Summit:

Tuesday

Session Speaker Time / Location
Lab – Deploy a containerized HCI IaaS with OpenStack and Ceph Rhys Oxenham, Greg Charot, Sebastien Han, John Fulton 10:00 am / Moscone South, room 156
Ironic, VM operability combined with bare-metal performances Cedric Morandin (Amadeus) 10:30 am / Moscone West, room 2006
Lab – Hands-on with OpenStack and OpenDaylight SDN Rhys Oxenham, Nir Yechiel, Andre Fredette, Tim Rozet 1:00 pm / Moscone South, room 158
Panel – OpenStack use cases: how business succeeds with OpenStack August Simonelli, Pete Pawelski 3:30 pm / Moscone West, room 2007
Lab – Understanding containerized Red Hat OpenStack Platform Ian Pilcher, Greg Charot 4:00 pm / Moscone South, room 153
Red Hat OpenStack Platform: the road ahead Nick Barcet, Mark McLoughlin 4:30 pm / Moscone West, room 2007


Wednesday

Session Speaker Time / Location
Lab – First time hands-on with Red Hat OpenStack Platform Rhys Oxenham, Jacob Liberman 10:00 am / Moscone South, room 158
Red Hat Ceph Storage roadmap: past, present, and future Neil Levine 10:30 am / Moscone West, room 2024
Optimize Ceph object storage for production in multisite clouds Michael Hackett, John Wilkins 11:45 am / Moscone South, room 208
Production-ready NFV at Telecom Italia (TIM) Fabrizio Pezzella, Matteo Bernacchi, Antonio Gianfreda (Telecom Italia) 11:45 am / Moscone West, room 2002
Workload portability using Red Hat CloudForms and Ansible Bill Helgeson, Jason Woods, Marco Berube 11:45 am / Moscone West, room 2009
Delivering Red Hat OpenShift at ease on Red Hat OpenStack Platform and Red Hat Virtualization Francesco Vollero, Natale Vinto 3:30 pm / Moscone South, room 206
The future of storage and how it is shaping our roadmap Sage Weil 3:30 pm / Moscone West, room 2020
Lab – Hands on with Red Hat OpenStack Platform Rhys Oxenham, Jacob Liberman 4:00 pm / Moscone South, room 153
OpenStack and OpenShift networking integration Russell Bryant, Antoni Segura Puimedon, and Jose Maria Ruesta (BBVA) 4:30 pm / Moscone West, room 2011

Thursday

Session Speaker Time / Location
Workshop – OpenStack roadmap in action Rhys Oxenham 10:45 am / Moscone South, room 214
Medical image processing with OpenShift and OpenStack Daniel McPherson, Ata Turk (Boston University), Rudolph Pienaar (Boston Children’s Hospital) 11:15 am / Moscone West, room 2006
Scalable application platform on Ceph, OpenStack, and Ansible Keith Hagberg (Fidelity), Senthivelrajan Lakshmanan (Fidelity), Michael Pagan, Sacha Dubois, Alexander Brovman (Solera Holdings) 1:00 pm / Moscone West, room 2007
Red Hat CloudForms: turbocharge your OpenStack Kevin Jones, Jason Ritenour 2:00 pm / Moscone West, room 201
What’s new in security for Red Hat OpenStack Platform? Nathan Kinder, Keith Basil 2:00 pm / Moscone West, room 2003
Ceph Object Storage for Apache Spark data platforms Kyle Bader, Mengmeng Liu 2:00 pm / Moscone South, room 207
OpenStack on FlexPod – like peanut butter and jelly Guil Barros, Amit Borulkar NetApp 3:00 pm / Moscone West, room 2009

Hope to see you there!


OpenStack momentum continues to grow as an important component of hybrid cloud, particularly among enterprise and telco customers. At Red Hat, we continue to seek ways to make it easier to consume. We offer extensive, industry-leading training, an easy-to-use installation and lifecycle management tool, and the advantage of being able to support the deployment from the app layer down to the OS layer.

One area that some of our customers ask about is the rapid release cycle of OpenStack. And while this speed can be advantageous in getting key features to market faster, it can also be quite challenging to follow for customers looking for stability.

With the release of Red Hat OpenStack Platform 10 in December 2016, we introduced a solution to this challenge – we call it the Long Life release. This type of release includes support for a single OpenStack release for a minimum of three years, plus an option to extend another two full years. We offer this via Extended Life Support (ELS), allowing our customers to remain on a supported, production-grade OpenStack code base for far longer than the usual six-month upstream release cycle. Then, when it’s time to upgrade, they can upgrade in place, and without additional hardware, to the next Long Life release. We aim to designate a Long Life release every third release, starting with Red Hat OpenStack Platform 10 (Newton).

Now, with the upcoming release of Red Hat OpenStack Platform 13 (Queens), we are introducing our second Long Life release. This means we can, finally and with great excitement, introduce the world to our latest new feature: the fast forward upgrade.

The fast forward upgrade takes customers easily between Long Life releases. It is a first for our OpenStack distribution, uses Red Hat OpenStack Platform director (known upstream as TripleO), and aims to change the game for OpenStack customers. With this feature, you can choose to stay on the “fast train” and get all the features from upstream every six months, or remain on a designated release for longer, greatly easing the upgrade treadmill some customers are feeling.

It’s a pretty clever procedure: it consolidates three consecutive upgrades into a single process and therefore greatly reduces the number of steps needed to perform it. In particular, it reduces the number of reboots needed, which in the case of very large deployments makes a huge difference. This capability, combined with the extended support for security vulnerabilities and key backports from future releases, has made Red Hat OpenStack Platform 10 very popular with our customers.

Red Hat OpenStack Life Cycle dates

Under the covers of a Fast Forward Upgrade

The fast forward upgrade starts like any other upgrade, with a Red Hat OpenStack Platform 10 minor update. A minor update may contain everything from security patches to functional enhancements to even backports from newer releases. There is nothing new about the update procedure for Red Hat OpenStack Platform 10; what changes is the set of packages included, such as kernel updates and other OpenStack-specific changes. We’re placing all changes requiring a reboot in this minor update. The update procedure allows for a sequential update of the undercloud, control plane nodes, and compute nodes. It may also include an instance evacuation procedure so that there is no impact to running workloads, even if they reside on a node scheduled for reboot after the update. The resulting Red Hat OpenStack Platform 10 cloud will have the necessary operating system components to operate in Red Hat OpenStack Platform 13 without further node reboots.

The next step is the sequential upgrade of the undercloud from Red Hat OpenStack Platform 10, to 11, to 12, to 13. There is no stopping during these steps; if a rollback of this portion is needed, you must return to Red Hat OpenStack Platform 10.

During the fast forward upgrade there are opportunities to perform backups. These should be performed, since there is no automated rewind; recovery is a restore from these backups.

A lot of things have changed between Red Hat OpenStack Platform 10 and 13. The most notable is the introduction of OpenStack services in containers. But don’t worry! The fast forward upgrade procedure takes the cloud from a non-containerized deployment to a resulting cloud with OpenStack services running in containers, while abstracting and reducing the complexity of upgrading through the middle releases of Red Hat OpenStack Platform 11 and 12.

In the final steps of the fast forward upgrade procedure, the overcloud will move from Red Hat OpenStack Platform 10 to 13 using a procedure that syncs databases, generates templates for 13, and installs 13’s services in containers. While some of the content for these steps may be part of a release which is no longer supported, Red Hat will provide full support for the required code to perform the upgrade.

What’s next …

In order for this procedure to be supported, it needs to be validated with the released code, and carefully tested in many situations. For this reason, it is scheduled to be ready for testing from Red Hat OpenStack Platform 13 general availability (GA); however, we will warn users launching the procedure that it should not be used in production environments. We encourage you to try the procedure on test environments during this period, and report any issues you find via the normal support procedure.  This will greatly help us ensure that we are covering all cases. During this time support cases relating to fast forward upgrades will not be eligible for high priority response times. 

Once we have thoroughly field tested the procedure, fixed bugs, and are confident that it is ready, we will remove this warning and make an announcement on this same blog. After this happens, it will be OK to proceed with fast forward upgrades in production environments. You can follow this progress of validation and testing by following this blog and staying in touch with your local Red Hat account and support teams.

Stay tuned for future fast forward upgrade blogs where we will dig deeper into the details of this procedure and share experiences and use cases that we’ve tested and validated.

Additionally, we will give an in depth presentation on the fast forward upgrade process at this year’s Red Hat Summit May 8-10 in San Francisco and  OpenStack Summit in Vancouver May 21-24. Please come and visit us in San Francisco and Vancouver for exciting Red Hat news, demos, and direct access to Red Hatters from all over the world. See you there!


In our first blog post on the topic of Fernet tokens, we explored what they are and why you should think about enabling them in your OpenStack cloud. In our second post, we looked at the method for enabling them in the overcloud.

Fernet tokens in Keystone are fantastic. Enabling these, instead of UUID or PKI tokens, really does make a difference in your cloud’s performance and overall ease of management. I get asked a lot about how to manage keys on your controller cluster when using Fernet. As you may imagine, this could potentially take your cloud down if you do it wrong. Let’s review what Fernet keys are, as well as how to manage them in your Red Hat OpenStack Platform cloud.

Photo by Freddy Marschall on Unsplash

Prerequisites
  • A Red Hat OpenStack Platform 11 director-based deployment
  • One or more controller nodes
  • Git command-line client
What are Fernet Keys?

Fernet keys are used to encrypt and decrypt Fernet tokens in OpenStack’s Keystone API. These keys are stored on each controller node, and must be available to authenticate and validate users of the various OpenStack components in your cloud.

Any given implementation of keystone can have n keys, based on the max_active_keys setting in /etc/keystone/keystone.conf. This number includes all of the types listed below.
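For reference, the setting lives in the [fernet_tokens] section of keystone.conf; the value shown here is only an example, not a recommendation:

[fernet_tokens]
max_active_keys = 5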

There are essentially three types of keys:

Primary

Primary keys are used for token generation and validation. You can think of this as the active key in your cloud. Any time a user authenticates, or is validated by an OpenStack API, these are the keys that will be used. There can only be one primary key, and it must exist on all nodes (usually controllers) that are running the keystone API. The primary key is always the highest indexed key.

Secondary

Secondary keys are only used for token validation. These keys are rotated out of primary status, and thus are used to validate tokens that may exist after a new primary key has been created. There can be multiple secondary keys, the oldest of which will be deleted based on your max_active_keys setting after each key rotation.

Staged

These keys are always the lowest indexed keys (0). Whenever keys are rotated, this key is promoted to a primary key at the highest index allowable by max_active_keys. These keys exist to allow you to copy them to all nodes in your cluster before they’re promoted to primary status. This avoids the potential issue where keystone fails to validate a token because the key used to encrypt it does not yet exist in /etc/keystone/fernet-keys.

The following example shows the keys that you’d see in /etc/keystone/fernet-keys, with max_active_keys set to 4.

0 (staged: the next primary key)
1 (primary: token generation & validation)

Upon performing a key rotation, our staged key (0) will become the new primary key (2), while our old primary key (1) moves to secondary status.

0 (staged: the next primary key)
1 (secondary: token validation)
2 (primary: token generation & validation)

We have three keys here, so yet another key rotation will produce the following result:

0 (staged: the next primary key)
1 (secondary: token validation)
2 (secondary: token validation)
3 (primary: token generation & validation)

Our staged key (0) now becomes our primary key (3). Our old primary key (2) now becomes a secondary key, and (1) remains a secondary key.

We now have four keys, the number we’ve set in max_active_keys. One final rotation would produce the following:

0 (staged: the next primary key)
1 (deleted)
2 (secondary: token validation)
3 (secondary: token validation)
4 (primary: token generation & validation)

Our oldest secondary key (1) is deleted. Our previously staged key (0) is moved to primary (4) status. A new staged key (0) is created. And finally, our old primary key (3) is moved to secondary status.

If you haven’t noticed by now, rotating keys will always remove the key with the lowest index (excluding 0) once you exceed your max_active_keys. Additionally, note that you must be careful to set max_active_keys to something that makes sense given your token lifetime and how often you plan to rotate your keys.

When to rotate?

Photo by Uroš Jovičić on Unsplash

The answer to this question will probably be different for most organizations. My take on this is simply: if you can do it safely, why not automate it and do it on a regular basis? Your threat model and use case would normally dictate this, or you may need to adhere to certain encryption and key management security controls in a given compliance framework. Whatever the case, I think of regular key rotation as a best-practice security measure. You always want to limit the amount of sensitive data, in this case Fernet tokens, encrypted with a single version of any given encryption key. Rotating your keys on a regular basis creates a smaller exposure surface for your cloud and your users.

How many keys do you need active at one time? This all depends on how often you plan to rotate them, as well as how long your token lifetime is. The answer to this can be expressed in the following equation:

fernet-keys = token-validity(hours) / rotation-time(hours) + 2

Let’s use an example of rotation every 8 hours, with a default token lifetime of 24 hours. This would be:

24 hours / 8 hours + 2 = 5

Five keys on your controllers would ensure that you always have an active set of keys for your cloud. With this in mind, let’s look at a way to rotate your keys using Ansible.

Rotating Fernet keys

So you may be wondering, how does one automate this process? You can imagine that this process can be painful and prone to error if done by hand. While you could use the fernet_rotate command to do this on each node manually, why would you?
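For reference, the manual step the role automates is a single keystone-manage call run on each controller; a minimal sketch (the user and group flags shown match a typical Red Hat OpenStack Platform install) looks like this:

$ sudo keystone-manage fernet_rotate \
    --keystone-user keystone \
    --keystone-group keystone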

Let’s look at how to do this with Ansible, Red Hat’s awesome tool for automation. If you’re new to Ansible, please do yourself a favor and check out this quick-start video.

We’ll be using an Ansible role, created by my fellow Red Hatter Juan Antonio Osorio (Ozz), one of the coolest guys I know. This is just one way of doing this. For a Red Hat OpenStack Platform install you should contact Red Hat support to review your options and support implications. And of course, your results may vary so be sure to test out on a non-production install!

Let’s start by logging into your Red Hat OpenStack director node as the stack user and making the role available in a roles directory under /home/stack.
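A minimal sketch of fetching the role is below; the repository URL shown is an assumption, so substitute wherever you obtain Ozz’s role from:

$ mkdir -p ~/roles
$ git clone https://github.com/JAORMX/tripleo-fernet-keys-rotation.git \
    ~/roles/tripleo-fernet-keys-rotation

With the role in place, create a small playbook, rotate.yml, that applies it to the controllers: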

$ cat << EOF > ~/rotate.yml
- hosts: controller 
  become: true 
  roles: 
    - tripleo-fernet-keys-rotation
EOF

We need to source our stackrc, as we’ll be operating on our controller nodes in the next step:

$ source ~/stackrc

Using a dynamic inventory from /usr/bin/tripleo-ansible-inventory, we’ll run this playbook and rotate the keys on our controllers:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory rotate.yml
Ansible Role Analysis

What happened? Looking at Ansible’s output, you’ll note that several tasks were performed. If you’d like to see these tasks, look no further than /home/stack/roles/tripleo-fernet-keys-rotation/tasks/main.yml:

This task runs a Python script, generate_key_yaml.py, found in the role’s files directory (~/roles/tripleo-fernet-keys-rotation/files), that creates a new Fernet key:

- name: Generate new key
  script: generate_key_yaml.py
  register: new_key_register
  run_once: true

This task takes the output of the previous task, from stdout, and registers it as the new_key fact.

- name: Set new key fact
  set_fact:
    new_key: "{{ new_key_register.stdout }}"

Next, we find the highest key index that currently exists in /etc/keystone/fernet-keys, which is the current primary key:

- name: Get current primary key index
  shell: ls /etc/keystone/fernet-keys | sort -r | head -1
  register: current_key_index_register

Then we set the next primary key index:

- name: Set next key index fact
  set_fact:
    next_key_index: "{{ current_key_index_register.stdout|int + 1 }}"

Now we’ll move the staged key to the new primary index:

- name: Move staged key to new index
  command: mv /etc/keystone/fernet-keys/0 /etc/keystone/fernet-keys/{{ next_key_index }}

Next, we write our new_key out as the new staged key:

- name: Set new key as staged key
  copy:
    content: "{{ new_key }}"
    dest: /etc/keystone/fernet-keys/0
    owner: keystone
    group: keystone
    mode: 0600

Finally, we’ll reload (not restart) httpd on the controller, allowing Keystone to load the new keys:

- name: Reload httpd
  service:
    name: httpd
    state: reloaded
Scheduling

Now that we have a way to automate rotation of our keys, it’s time to schedule this automation. There are several ways you could do this:

Cron

You could, but why?

Systemd Realtime Timers

Let’s create the systemd service that will run our playbook:

$ sudo tee /etc/systemd/system/fernet-rotate.service << EOF
[Unit]
Description=Run an Ansible playbook to rotate fernet keys on the overcloud

[Service]
User=stack
Group=stack
ExecStart=/usr/bin/ansible-playbook \
  -i /usr/bin/tripleo-ansible-inventory /home/stack/rotate.yml
EOF

Now we’ll create a timer with the same name, only with .timer as the suffix, in /etc/systemd/system on the director node:

$ sudo tee /etc/systemd/system/fernet-rotate.timer << EOF
[Unit]
Description=Timer to rotate our Overcloud Fernet Keys weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF
Ansible Tower

I like how you’re thinking! But that’s a topic for another day.

Red Hat OpenStack Platform 12

Red Hat OpenStack Platform 12 provides support for key rotation via Mistral. Learn all about Red Hat OpenStack Platform 12 here.

What about logging?

Ansible to the rescue!

Ansible will use the log_path configuration option from the first ansible.cfg it finds: one in the directory you run the playbook from, $HOME/.ansible.cfg, or /etc/ansible/ansible.cfg. You just need to set this and forget it.
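A minimal example, assuming you want the rotation runs logged under the stack user’s home directory, is a short ansible.cfg in the directory you run the playbook from:

[defaults]
log_path = /home/stack/fernet-rotate.log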

So let’s enable and start the timer, and we’re off to the races:

$ sudo systemctl enable fernet-rotate.timer
$ sudo systemctl start fernet-rotate.timer

Credit: Many thanks to Lance Bragstad and Dolph Mathews for the key rotation methodology.


We are happy to announce that Red Hat OpenStack Platform 12 is now Generally Available (GA).

This is Red Hat OpenStack Platform’s 10th release and is based on the upstream OpenStack release, Pike.

Red Hat OpenStack Platform 12 is focused on the operational aspects of deploying OpenStack. OpenStack has established itself as a solid technology choice, and with this release we are working hard to further improve the usability aspects and bring OpenStack and operators into harmony.

With operationalization in mind, let’s take a quick look at some of the biggest and most exciting features now available.

Containers.

As containers are changing and improving IT operations it only stands to reason that OpenStack operators can also benefit from this important and useful technology concept. In Red Hat OpenStack Platform we have begun the work of containerizing the control plane. This includes some of the main services that run OpenStack, like Nova and Glance, as well as supporting technologies, such as Red Hat Ceph Storage. All these services can be deployed as containerized applications via Red Hat OpenStack Platform’s lifecycle and deployment tool, director.
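Once such an overcloud is deployed, the change is easy to see for yourself. As a rough sketch, listing the running containers on any controller node (the exact container names will vary with your deployment) shows the individual OpenStack services:

$ sudo docker ps --format '{{.Names}}'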

Photo by frank mckenna on Unsplash

Bringing a containerized control plane to OpenStack is important. Through it we can immediately enhance, among other things, stability and security features through isolation. By design, OpenStack services often have complex, overlapping library dependencies that must be accounted for in every upgrade, rollback, and change. For example, if Glance needs a security patch that affects a library shared by Nova, time must be spent to ensure Nova can survive the change; or even more frustratingly, Nova may need to be updated itself. This makes the change effort, and the resulting change window and impact, much more challenging. Simply put, it’s an operational headache.

However, when we isolate those dependencies into a container we are able to work with services with much more granularity and separation. An urgent upgrade to Glance can be done alongside Nova without affecting it in any way. With this granularity, operators can more easily quantify and test the changes helping to get them to production more quickly.

We are working closely with our vendors, partners, and customers to move to this containerized approach in a way that is minimally disruptive. Upgrading from a non-containerized control plane to one with most services containerized is fully managed by Red Hat OpenStack Platform director. Indeed, when upgrading from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12, the entire move to containerized services is handled “under the hood” by director. With just a few simple preparatory steps, director delivers the biggest change to OpenStack in years directly to your running deployment in an almost invisible, simple-to-run upgrade. It’s really cool!

Red Hat Ansible.

Like containers, it’s pretty much impossible to work in operations and not be aware of, or more likely be actively using, Red Hat Ansible. Red Hat Ansible is known to be easier to use for customising and debugging; most operators are more comfortable with it, and it generally provides an overall nicer experience through a straightforward and easy to read format.

Of course, we at Red Hat are excited to include Ansible as a member of our own family. With Red Hat Ansible we are actively integrating this important technology into more and more of our products.

In Red Hat OpenStack Platform 12, Red Hat Ansible takes center stage.

But first, let’s be clear, we have not dropped Heat; there are very real requirements around backward compatibility and operator familiarity that are delivered with the Heat template model.

But we don’t have to compromise because of this requirement. With Ansible we are offering operator and developer access points independent of the Heat templates. We use the same composable services architecture as we had before; the Heat-level flexibility still works the same, we just translate to Ansible under the hood.

Simplistically speaking, before Ansible, our deployments were mostly managed by Heat templates driving Puppet. Now, we use Heat to drive Ansible by default, and then Ansible drives Puppet and other deployment activities as needed. And with the addition of containerized services, we also have positioned Ansible as a key component of the entire container deployment. By adding a thin layer of Ansible, operators can now interact with a deployment in ways they could not previously.

For instance, take the new openstack overcloud config download command. This command allows an operator to generate all the Ansible playbooks being used for a deployment into a local directory for review. And these aren’t mere interpretations of Heat actions, these are the actual, dynamically generated playbooks being run during the deployment. Combine this with Ansible’s cool dynamic inventory feature, which allows an operator to maintain their Ansible inventory file based on a real-time infrastructure query, and you get an incredibly powerful troubleshooting entry point.
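As a quick sketch of what that looks like from the undercloud (the target directory name here is just an example):

$ source ~/stackrc
$ openstack overcloud config download --config-dir ~/config-download
$ ls ~/config-download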

Check out this short (1:50) video showing Red Hat Ansible and this new exciting command and concept:

Ansible and Red Hat OpenStack Platform - YouTube

Network composability.

Another major new addition for operators is the extension of the composability concept into networks.

As a reminder, when we speak about composability we are talking about enabling operators to create detailed solutions by giving them basic, simple, defined components from which they can build for their own unique, complex topologies.

With composable networks, operators are no longer only limited to using the predefined networks provided by director. Instead, they can now create additional networks to suit their specific needs. For instance, they might create a network just for NFS filer traffic, or a dedicated SSH network for security reasons.
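Under the hood this is driven by director’s new network_data.yaml file. A sketch of one such additional network entry is below; the name, VLAN, and addressing are purely illustrative:

- name: StorageNFS
  name_lower: storage_nfs
  vip: true
  vlan: 70
  ip_subnet: '172.17.5.0/24'
  allocation_pools: [{'start': '172.17.5.10', 'end': '172.17.5.250'}]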

Photo by Radek Grzybowski on Unsplash

And as expected, composable networks work with composable roles. Operators can create custom roles and apply multiple, custom networks to them as required. The combinations lead to an incredibly powerful way to build complex enterprise network topologies, including an on-ramp to the popular L3 spine-leaf topology.

And to make it even easier to put together we have added automation in director that verifies that resources and Heat templates for each composable network are automatically generated for all roles. Fewer templates to edit can mean less time to deployment!

Telco speed.

Telcos will be excited to know we are now delivering production-ready virtualized fast data path technologies. This release includes Open vSwitch 2.7 and the Data Plane Development Kit (DPDK) 16.11, along with improvements to Neutron and Nova allowing for robust virtualized deployments that include support for large MTU sizing (i.e. jumbo frames) and multiple queues per interface. OVS+DPDK is now a viable option alongside SR-IOV and PCI passthrough in offering more choice for fast data in Infrastructure-as-a-Service (IaaS) solutions.

Operators will be pleased to see that these new features can be more easily deployed thanks to new capabilities within Ironic, which store environmental parameters during introspection. These values are then available to the overcloud deployment providing an accurate view of hardware for ideal tuning. Indeed, operators can further reduce the complexity around tuning NFV deployments by allowing director to use the collected values to dynamically derive the correct parameters resulting in truly dynamic, optimized tuning.

Serious about security.

Helping operators, and the companies they work for, focus on delivering business value instead of worrying about their infrastructure is core to Red Hat’s thinking. And one way we make sure everyone sleeps better at night with OpenStack is through a dedicated focus on security.

Starting with Red Hat OpenStack Platform 12 we have more internal services using encryption than in any previous release. This is an important step for OpenStack as a community to help increase adoption in enterprise datacenters, and we are proud to be squarely at the center of that effort. For instance, in this release even more services now feature internal TLS encryption.
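Enabling the internal endpoint encryption at deploy time is a matter of including the TLS environment files that ship with director alongside your usual templates. A sketch is below; the paths are the defaults from openstack-tripleo-heat-templates, and note that internal TLS also expects certificate management (for example via FreeIPA and novajoin) to be in place:

$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/enable-internal-tls.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/tls-everywhere-endpoints-dns.yaml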

Let’s be realistic, though, focusing on security extends beyond just technical implementation. Starting with Red Hat OpenStack Platform 12 we are also releasing a comprehensive security guide, which provides best practices as well as conceptual information on how to make an OpenStack cloud more secure. Our security stance is firmly rooted in meeting global standards from top international agencies such as FedRAMP (USA), ETSI (Europe), and ANSSI (France). With this guide, we are excited to share these efforts with the broader community.

Do you even test?

How many times has someone asked an operations person this question? Too many! “Of course we test,” they will say. And with Red Hat OpenStack Platform 12 we’ve decided to make sure the world knows we do, too.

Through the concept of Distributed Continuous Integration (DCI), we place remote agents on site with customers, partners, and vendors that continuously build our releases at all different stages on all different architectures. By engaging outside resources we are not limited by internal resource restrictions; instead, we gain access to hardware and architecture that could never be tested in any one company’s QA department. With DCI we can fully test our releases to see how they work under an ever-increasing set of environments. We are currently partnered with major industry vendors for this program and are very excited about how it helps us make the entire OpenStack ecosystem better for our customers.

So, do we even test? Oh, you bet we do!

Feel the love! Photo by grafxart photo on Unsplash

And this is just a small piece of the latest Red Hat OpenStack Platform 12 release. Whether you are looking to try out a new cloud, or thinking about an upgrade, this release brings a level of operational maturity that will really impress!

Now that OpenStack has proven itself an excellent choice for IaaS, it can focus on making itself a loveable one.

Let Red Hat OpenStack Platform 12 reignite the romance between you and your cloud!

Red Hat OpenStack Platform 12 is designated as a “Standard” release with a one-year support window. Click here for more details on the release lifecycle for Red Hat OpenStack Platform.

Find out more about this release at the Red Hat OpenStack Platform Product page. Or visit our vast online documentation.

And if you’re ready to get started now, check out the free 60-day evaluation available on the Red Hat portal.

Looking for even more? Contact your local Red Hat office today.


As we learned in part one of this blog post, beginning with the OpenStack Kilo release, a new token provider is now available as an alternative to PKI and UUID. Fernet tokens are essentially an implementation of ephemeral tokens in Keystone. What this means is that tokens are no longer persisted and hence do not need to be replicated across clusters or regions.

“In short, OpenStack’s authentication and authorization metadata is neatly bundled into a MessagePacked payload, which is then encrypted and signed as a Fernet token. OpenStack Kilo’s implementation supports a three-phase key rotation model that requires zero downtime in a clustered environment.” (from: http://dolphm.com/openstack-keystone-fernet-tokens/)

In our previous post, I covered the different types of tokens, the benefits of Fernet, and a little bit of the technical details. In this part of our three-part series we provide a method for enabling Fernet tokens on Red Hat OpenStack Platform 10, both pre- and post-deployment of the overcloud stack.

Pre-Overcloud Deployment

Official Red Hat documentation for enabling Fernet tokens in the overcloud can be found here:

Deploy Fernet on the Overcloud

Tools

We’ll be using the Red Hat OpenStack Platform here, so this means we’ll be interacting with the director node and Heat templates. Our primary tool is the keystone-manage command-line utility, provided by the openstack-keystone RPM and used to set up and manage Keystone in the overcloud. Of course, we’ll be using the director-based deployment of Red Hat OpenStack Platform to enable Fernet pre- and/or post-deployment.

Photo by Barn Images on Unsplash

Prepare Fernet keys on the undercloud

This procedure will start with preparation of the Fernet keys, which a default deployment places on each controller in /etc/keystone/fernet-keys. Each controller must have the same keys, as tokens issued on one controller must be able to be validated on all controllers. Stay tuned for part three of this blog for an in-depth explanation of Fernet signing keys.

  1. Source the stackrc file to ensure we are working with the undercloud:
$ source ~/stackrc
  2. From your director, use keystone-manage to generate the Fernet keys as deployment artifacts:
$ sudo keystone-manage fernet_setup \
    --keystone-user keystone \
    --keystone-group keystone
  3. Tar up the keys for upload into a Swift container on the undercloud:
$ sudo tar -zcf keystone-fernet-keys.tar.gz /etc/keystone/fernet-keys
  4. Upload the Fernet keys to the undercloud as Swift artifacts (we assume your templates exist in ~/templates):
$ upload-swift-artifacts -f keystone-fernet-keys.tar.gz \
    --environment ~/templates/deployment-artifacts.yaml
  5. Verify that your artifact exists in the undercloud:
$ swift list overcloud-artifacts
keystone-fernet-keys.tar.gz

NOTE: These keys should be secured, as they can be used to sign and validate tokens that will have access to your cloud.

  6. Let’s verify that deployment-artifacts.yaml exists in ~/templates (NOTE: your URL details will differ from what you see here, as this is a uniquely generated temporary URL):
$ cat ~/templates/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_c9d16242396b4eb1a0f950093fa9464c/overcloud-artifacts/keystone-fernet-keys.tar.gz?temp_url_sig=917bd467e70516581b1db295783205622606e367&temp_url_expires=1520463185'

NOTE: This is the swift URL that your overcloud deployment will use to copy the Fernet keys to your controllers.

  7. Finally, generate the fernet.yaml template to enable Fernet as the default token provider in your overcloud:
$ cat << EOF > ~/templates/fernet.yaml
parameter_defaults:
  controllerExtraConfig:
    keystone::token_provider: 'fernet'
EOF
Deploy and Validate

At this point, you are ready to deploy your overcloud with Fernet enabled as the token provider, and your keys distributed to each controller in /etc/keystone/fernet-keys.

Photo by Glenn Carstens-Peters on Unsplash

NOTE: This is an example deploy command, yours will likely include many more templates. For the purposes of our discussion, it is important that you simply include fernet.yaml as well as deployment-artifacts.yaml.

$ openstack overcloud deploy \
--templates /home/stack/templates \
-e /home/stack/templates/deployment-artifacts.yaml \
-e /home/stack/templates/fernet.yaml \
--control-scale 3 \
--compute-scale 4 \
--control-flavor control \
--compute-flavor compute \
--ntp-server pool.ntp.org
Testing

Once the deployment is done you should validate that your overcloud is indeed using Fernet tokens instead of the default UUID token provider. From the director node:

$ source ~/overcloudrc
$ openstack token issue
+------------+------------------------------------------+
| Field      | Value                                    |
+------------+------------------------------------------+
| expires    | 2017-03-22 19:16:21+00:00                |
| id | gAAAAABY0r91iYvMFQtGiRRqgMvetAF5spEZPTvEzCpFWr3  |
|    | 1IB8T8L1MRgf4NlOB6JsfFhhdxenSFob_0vEEHLTT6rs3Rw  |
|    | q3-Zm8stCF7sTIlmBVms9CUlwANZOQ4lRMSQ6nTfEPM57kX  |
|    | Xw8GBGouWDz8hqDYAeYQCIHtHDWH5BbVs_yC8ICXBk       |
| project_id | f8adc9dea5884d23a30ccbd486fcf4c6         |
| user_id    | 2f6106cef80741c6ae2bfb3f25d70eee         |
+------------+------------------------------------------+

Note the length of this token in the “id” field. This is a Fernet token.

Enabling Fernet Post Overcloud Deployment

Part of the power of the Red Hat OpenStack Platform director deployment methodology lies in its ability to easily upgrade and change a running overcloud. Features such as Fernet, scaling, and complex service management can be managed by running a deployment update directly against a running overcloud.

Updating is really straightforward. If you’ve already deployed your overcloud with UUID tokens, you can change them to Fernet by simply following the pre-deploy example above and running the openstack overcloud deploy command again, with the Heat templates mentioned enabled, against your running deployment. This will change your overcloud token default to Fernet. Be sure to deploy with your original deploy command, as any changes there could affect your overcloud. And of course, standard outage windows apply: production changes should be tested and prepared accordingly.

Conclusion

I hope you’ve enjoyed our discussion on enabling Fernet tokens in the overcloud and that I was able to shed some light on the process. Official documentation on these concepts and on deploying Fernet in the overcloud is available via the link in the Pre-Overcloud Deployment section above.

In our last and final instalment on this topic, we’ll look at some of the many methods for rotating your newly enabled Fernet keys on your controller nodes. We’ll be using Red Hat’s awesome IT automation tool, Red Hat Ansible, to do just that.


Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.

First, some definitions …

What is a token? OpenStack tokens are bearer tokens, used to authenticate and validate users and processes in your OpenStack environment. Pretty much any time anything happens in OpenStack a token is involved. The OpenStack Keystone service is the core service that issues and validates tokens. Using these tokens, users and software clients authenticate with Keystone via its API, receive a token, and then use that token when requesting operations ranging from creating compute resources to allocating storage. Services like Nova or Ceph then validate that token with Keystone and continue on with or deny the requested operation. The following diagram shows a simplified version of this dance.

Courtesy of the author

Token Types

Tokens come in several types, referred to as “token providers” in Keystone parlance. These types can be set at deployment time, or changed post deployment. Ultimately, you’ll have to decide what works best for your environment, given your organization’s workload in the cloud.

The following types of tokens exist in Keystone:

UUID (Universal Unique Identifier)

The default token provider in Keystone is UUID. This is a 32-character bearer token that must be persisted (stored) across controller nodes, along with its associated metadata, in order to be validated.

PKI & PKIZ (public key infrastructure)

This token format is deprecated as of the OpenStack Ocata release, which means it is deprecated in Red Hat OpenStack Platform 11. This format is also persisted across controller nodes. PKI tokens contain catalog information of the user that bears them, and thus can get quite large, depending on how large your cloud is. PKIZ tokens are simply compressed versions of PKI tokens.

Fernet

Fernet tokens (pronounced fehr:NET) are message packed tokens that contain authentication and authorization data. Fernet tokens are signed and encrypted before being handed out to users. Most importantly, however, Fernet tokens are ephemeral. This means they do not need to be persisted across clustered systems in order to successfully be validated.

Fernet was originally a secure messaging format created by Heroku. The OpenStack implementation of this lightweight and more API-friendly format was developed by the OpenStack Keystone core team.

The Problem

As you may have guessed by now, the real problem solved by Fernet tokens is one of persistence. Imagine, if you will, the following scenario:

  1. A user logs into Horizon (the OpenStack Dashboard)
  2. User creates a compute instance
  3. User requests persistent storage upon instance creation
  4. User assigns a floating IP to the instance

While this is a simplified scenario, you can clearly see that there are multiple calls to different core components being made. In even the most basic of examples you see at least one authentication, as well as multiple validations along the way. Not only does this require network bandwidth, but when using persistent token providers such as UUID it also requires a lot of storage in Keystone. Additionally, the token table in the database used by Keystone grows as your cloud gets more usage.

Photo by Eugenio Mazzone on Unsplash

When using UUID tokens, operators must implement a detailed and comprehensive strategy to prune this table at periodic intervals to avoid real trouble down the line. This becomes even more difficult in a clustered environment.

It’s not only backend components which are affected. In fact, all services that are exposed to users require authentication and authorization. This leads to increased bandwidth and storage usage on one of the most critical core components in OpenStack. If Keystone goes down, your users will know it and you no longer have a cloud in any sense of the word.

Now imagine the impact as you scale your cloud; the problems with UUID tokens are dangerously amplified.

Benefits of Fernet tokens

Because Fernet tokens are ephemeral, you have the following immediate benefits:

  • Tokens do not need to be replicated to other instances of Keystone in your controller cluster
  • Storage is not affected, as these tokens are not stored

The end-result offers increased performance overall. This was the design imperative of Fernet tokens, and the OpenStack community has more than delivered.  

Show me the numbers

All of these benefits sound good, but what are the real numbers behind the performance differences between UUID and Fernet? One of the core Keystone developers, Dolph Mathews, created a great post about Fernet benchmarks.

Note that these benchmarks are for OpenStack Kilo, so you’ll most likely see even greater performance numbers in newer releases.

The most important benchmarks in Dolph’s post are the ones comparing the various token formats to each other on a globally-distributed Galera cluster. These show the following results using UUID as a baseline:

Token creation performance

Fernet 50.8 ms (85% faster than UUID) 237.1 (42% faster than UUID)


Token validation performance

Fernet 5.55 ms (8% faster than UUID) 1957.8 (14% faster than UUID)


As you can see, these numbers are quite remarkable. More informal benchmarks can be found at the Cern OpenStack blog, OpenStack in Production.

Security Implications

Photo by Praveesh Palakeel on Unsplash

One important aspect of using Fernet tokens is security. As these tokens are signed and encrypted, they are inherently more secure than plain text UUID tokens. One really great aspect of this is the fact that you can invalidate a large number of tokens, either during normal operations or during a security incident, by simply changing the keys used to validate them. This requires a key rotation strategy, which I’ll get into in the third part of this series.

While there are security advantages to Fernet tokens, it must be said they are only as secure as the keys that created them. Keystone creates the tokens with a set of keys in your Red Hat OpenStack Platform environment. Using advanced technologies like SELinux, Red Hat Enterprise Linux is a trusted partner in this equation. Remember, the OS matters.

Conclusion

While OpenStack functions just fine with its default UUID token format, I hope that this article shows you some of the benefits of Fernet tokens. I also hope that you find the knowledge you’ve gained here useful once you decide to move forward with implementing them.

In our follow-up blog post in this series, we’ll be looking at how to enable Fernet tokens in your OpenStack environment — both pre and post-deploy. Finally, our last post will show you how to automate key rotation using Red Hat Ansible in a production environment. I hope you’ll join me along the way.


We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint, giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!

Monty Taylor (Red Hat), Mark Collier (OpenStack Foundation), and Lauren Sell (OpenStack Foundation) open the Sydney Summit. (Photo: Author)

And much like the varied weather, the Summit really reflected the incredible diversity of both technology and community that we in the OpenStack world are so incredibly proud of. With over 2300 attendees from 54 countries, this Summit was noticeably more intimate but no less dynamic. Often having a smaller group of people allows for a more personal experience and increases the opportunities for deep, important interactions.

To my enjoyment I found that, unlike previous Summits, there wasn’t as much of a singularly dominant technological theme. In Boston it was impossible to turn a corner and not bump into a container talk. While containers were still a strong theme here in Sydney, I felt the general impetus moved away from specific technologies and into use cases and solutions. It feels like the OpenStack community has now matured to the point that it’s able to focus less on each specific technology piece and more on the business value those pieces create when working together.

Jonathan Bryce (OpenStack Foundation) (Photo: Author)

It is exciting to see both Red Hat associates and customers following this solution-based thinking with sessions demonstrating the business value that our amazing technology creates. Consider such sessions as SD-WAN – The open source way, where the complex components required for a solution are reviewed, and then live demoed as a complete solution. Truly exceptional. Or perhaps check out an overview of how the many components to an NFV solution come together to form a successful business story in A Telco Story of OpenStack Success.

At this Summit I felt that while the sessions still contained the expected technical content they rarely lost sight of the end goal: that OpenStack is becoming a key, and necessary, component to enabling true enterprise business value from IT systems.

To this end I was also excited to see over 40 sessions from Red Hat associates and our customers covering a wide range of industry solutions and use cases. From telcos to insurance companies, it is really exciting to see both our associates and our customers, especially those in Australia and New Zealand, sharing their experiences with our solutions with the world.

Mark McLoughlin, Senior Director of Engineering at Red Hat with Paddy Power Betfair’s Steven Armstrong and Thomas Andrew getting ready for a Facebook Live session (Photo: Anna Nathan)

Of course, there were too many sessions to attend in person, and with the wonderfully dynamic and festive air of the Marketplace offering great demos, swag, food, and, most importantly, conversations, I’m grateful for the OpenStack Foundation’s rapid publishing of all session videos. It’s a veritable pirate’s bounty of goodies and I recommend checking it out sooner rather than later on their website.

I was able to attend a few talks from Red Hat customers and associates that really got me thinking and excited. The themes were varied, from the growing world of Edge computing, to virtualizing network operations, to changing company culture; Red Hat and our customers are doing very exciting things.

Digital Transformation

Take for instance Telstra, who are using Red Hat OpenStack Platform as part of a virtual router solution. Two years ago the journey started with a virtualized network component delivered as an internal trial. This took a year to complete and was a big success from both a technological and cultural standpoint. As Senior Technology Specialist Andrew Harris from Telstra pointed out during the Q and A of his session, projects like this are not only about implementing new technology but also about “educating … staff in Linux, OpenStack and IT systems.” It was a great session co-presented with Juniper and Red Hat and really gets into how Telstra are able to deliver key business requirements such as reliability, redundancy, and scale while still meeting strict cost requirements.

Of course this type of digital transformation story is not limited to telcos. The use of OpenStack as a catalyst for company change as well as advanced solutions was seen strongly in two sessions from Australia’s Insurance Australia Group (IAG).

Eddie Satterly, IAG (Photo: Author)

Product Engineering and DataOps Lead Eddie Satterly recounted the journey IAG took to consolidate data for a better customer experience using open source technologies. IAG uses Red Hat OpenStack Platform as the basis for an internal open source revolution that has not only led to significant cost savings but has even resulted in the IAG team open sourcing some of the tools that made it happen. Check out the full story of how they did it and join TechCrunch reporter Frederic Lardinois who chats with Eddie about the entire experience. There’s also a Facebook live chat Eddie did with Mark McLoughlin, Senior Director of Engineering at Red Hat, that further tells their story.

Ops!

An area of excitement for those of us with roots in the operational space is the way that OpenStack continues to become easier to install and maintain. The evolution of TripleO, the upstream project for Red Hat OpenStack Platform’s deployment and lifecycle management tool known as director, has really reached a high point in the Pike cycle. With Pike, TripleO has begun utilizing Ansible as the core “engine” for upgrades, container orchestration, and lifecycle management. Check out Senior Principal Software Engineer Steve Hardy’s deep dive into all the cool things TripleO is doing and learn just how excited the new “openstack overcloud config download” command is going to make you and your Ops team.

Steve Hardy (Red Hat) and Jaromir Coufal (Red Hat) (Photo: Author)

And as a quick companion to Steve’s talk, don’t miss his joint lightning talk with Red Hat Senior Product Manager Jaromir Coufal, lovingly titled OpenStack in Containers: A Deployment Hero’s Story of Love and Hate, for an excellent 10 minute intro to the journey of OpenStack, containers, and deployment.

Want more? Don’t miss these sessions …

Storage and OpenStack:

Containers and OpenStack:

Telcos and OpenStack:

A great event

Although only 3 days long, this Summit really did pack a sizeable amount of content into that time. Being able to have the OpenStack world come to Sydney and enjoy a bit of Australian culture was really wonderful. Whether we were watching the world famous Melbourne Cup horse race with a room full of OpenStack developers and operators, or cruising Sydney’s famous harbour and talking the merits of cloud storage with the community, it really was a unique and exceptional week.

The Melbourne Cup is about to start! (Photo: Author)

The chance to see colleagues from across the globe, immersed in the technical content and environment they love, supporting and learning alongside customers, vendors, and engineers is incredibly exhilarating. In fact, despite the tiredness at the end of each day, I went to bed each night feeling more and more excited about the next day, week, and year in this wonderful community we call OpenStack!


In less than one week the OpenStack Summit is coming to Sydney! For those of us in the Australia/New Zealand (ANZ) region this is a very exciting time as we get to showcase our local OpenStack talents and successes. This summit will feature Australia’s largest banks, telcos, and enterprises and show the world how they have adopted, adapted, and succeeded with Open Source software and OpenStack.

Photo by Frances Gunn on Unsplash

And at Red Hat, we are doubly proud to feature a lineup of local, regional, and global speakers in over 40 exciting sessions. You can stop by and see speakers from Australia, like Brisbane’s very own Andrew Hatfield (Red Hat, Practice Lead – Cloud Storage and Big Data), who has two talks covering everything from CephFS’s impact on OpenStack to how OpenStack and Ceph are evolving to integrate with Linux, Docker, and Kubernetes!

Of course, not only are local Red Hat associates telling their own stories, but so too are our ANZ customers. Australia’s own dynamic telco, Telstra, has worked closely with Red Hat and Juniper on all kinds of cutting-edge NFV work, and you can learn all about it in the joint talk “The Road to Virtualization: Highlighting The Unique Challenges Faced by Telcos,” featuring Red Hat’s Senior Product Manager for Networking Technologies, Anita Tragler, alongside Juniper’s Greg Smith and Telstra’s Senior Technology Specialist extraordinaire Andrew Harris.

On Wednesday at 11:00AM, come see how a 160-year-old Aussie insurance company, IAG, uses Red Hat OpenStack Platform as the foundation for their Open Source Data Pipeline. IAG is leading a dynamic and disruptive change in their industry and bringing important Open Source tools and processes to accelerate innovation and save costs. They were also nominated for a Superuser Award for their efforts, and we are proud to call them Red Hat customers.

We can’t wait to meet all our mates!

Photo by Ewa Gillen on Unsplash

For many of us ANZ-based associates, the opportunity to meet the global OpenStack community in our biggest city is very exciting and one we have been waiting on for years. While we will of course be very busy attending the many sessions, one great place to be sure to meet us all is at Booth B1 in the Marketplace Expo Hall. At the booth we will have live environments and demos showcasing the exciting integration between Red Hat CloudForms, Red Hat OpenStack Platform, Red Hat Ceph Storage, and Red Hat OpenShift Container Platform, accompanied by our very best ANZ, APAC, and global talent. Come and chat with Solution Architects, documentation professionals, engineers, and senior management and find out how we develop products so that they continue to grow and lead the OpenStack and Open Source world.

And of course, there will be some very special, Aussie-themed swag for you to pick up. There are a few once-in-a-lifetime items that we think you won’t want to miss and that will make a truly special souvenir of your visit to our wonderful country! And of course, the latest edition of the RDO ducks will be on hand – get in fast!

There will also be a fun “Marketplace Mixer” on Monday, November 6th from 5:50PM – 7:30PM where you will find great food, conversation, and surprises in the Expo Hall. Our booth will feature yummy food, expert conversations, and more! And don’t miss the very special Melbourne Cup celebration on Tuesday, November 7th from 2:30 – 3:20. There will be a live stream of “the race that stops the nation,” the Melbourne Cup, direct from Flemington Racecourse in Victoria. Prepare your fascinator and come see the event Australia really does stop for!

Credit: Museums Victoria

If you’ve not booked yet you can still save 10% with our exclusive code, REDHAT10.

So, there you go, mate.

World’s best software, world’s best community, world’s best city.

As the (in)famous tourism campaign from 2007 asked in the most Australian of ways: “So where the bloody hell are you?”

Can’t wait to see you in Sydney!


Previously we learned all about the benefits of placing Ceph storage services directly on compute nodes in a co-located fashion. This time, we dive deep into the deployment templates to see how an actual deployment comes together, and then test the results!

Enabling Co-Location

This article assumes the director is installed and configured with nodes already registered. The default Heat deployment templates ship an environment file for enabling Pure HCI. This environment file is:

/usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml

This file does two things:

  1. It redefines the composable service list for the Compute role to include both the Compute and Ceph Storage services. The parameter that stores this list is ComputeServices.

  2. It enables a port on the Storage Management network for Compute nodes using the OS::TripleO::Compute::Ports::StorageMgmtPort resource. The default network isolation disables this port for standard Compute nodes, but for our scenario we must enable it so the Ceph services can communicate over this network. If you are not using network isolation, you can leave the resource set to None to keep it disabled. An abbreviated sketch of this environment file follows below.
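For orientation, here is an abbreviated sketch of the kind of content in that environment file. It is not a verbatim copy of the shipped template: the full Compute service list varies by release, and the relative path to the port definition shown here is an assumption.

resource_registry:
  OS::TripleO::Compute::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml

parameter_defaults:
  ComputeServices:
    - OS::TripleO::Services::CephOSD              # added so Compute nodes also run OSDs
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    # ... the remainder of the standard Compute service list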
Updating Network Templates

As mentioned, the Compute nodes need to be attached to the Storage Management network so Red Hat Ceph Storage can access the OSDs on them. This is not usually required in a standard deployment. To ensure the Compute node receives an IP address on the Storage Management network, you need to modify the NIC templates for your Compute nodes to include it. As a basic example, the following snippet adds the Storage Management network to the Compute node via the OVS bridge supporting multiple VLANs:

    - type: ovs_bridge
      name: br-vlans
      use_dhcp: false
      members:
      - type: interface
        name: nic3
        primary: false
      - type: vlan
        vlan_id:
          get_param: InternalApiNetworkVlanID
        addresses:
        - ip_netmask:
            get_param: InternalApiIpSubnet
      - type: vlan
        vlan_id:
          get_param: StorageNetworkVlanID
        addresses:
        - ip_netmask:
            get_param: StorageIpSubnet
      - type: vlan
        vlan_id:
          get_param: StorageMgmtNetworkVlanID
        addresses:
        - ip_netmask:
            get_param: StorageMgmtIpSubnet
      - type: vlan
        vlan_id:
          get_param: TenantNetworkVlanID
        addresses:
        - ip_netmask:
            get_param: TenantIpSubnet

The VLAN block using StorageMgmtNetworkVlanID and StorageMgmtIpSubnet is the additional interface for the Storage Management network we discussed.

Isolating Resources

We calculate the amount of memory to reserve for the host and Red Hat Ceph Storage services using the formula found in “Reserve CPU and Memory Resources for Compute”. Note that we account for 2 OSDs so that we can potentially scale to an extra OSD on the node in the future.

Our total instances:
32GB / (2GB per instance + 0.5GB per instance for host overhead) = ~12 instances

Total host memory to reserve:
(12 instances * 0.5GB overhead) + (2 OSDs * 3GB per OSD) = 12GB or 12000MB

This means our reserved host memory is 12000MB.
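If you want to reproduce this arithmetic for your own hardware, a short shell sketch like the one below does the same calculation. The values are simply the assumptions from this example (32GB node, 2GB flavors, 0.5GB per-instance overhead, 2 OSDs at 3GB each); note the article rounds the final figure down to 12000MB:

total_gb=32; instance_gb=2; overhead_gb=0.5; osds=2; osd_gb=3

# Number of instances the node can hold, and the host memory to reserve (in MB)
instances=$(awk -v t=$total_gb -v i=$instance_gb -v o=$overhead_gb \
    'BEGIN { printf "%d", t / (i + o) }')
reserved_mb=$(awk -v n=$instances -v o=$overhead_gb -v c=$osds -v g=$osd_gb \
    'BEGIN { printf "%d", (n * o + c * g) * 1024 }')

echo "Instances: ${instances}  NovaReservedHostMemory: ${reserved_mb}"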

We can also define how to isolate the CPU resources in two ways:

  • CPU Allocation Ratio – Estimate the CPU utilization of each instance and set the ratio of instances per CPU while taking into account Ceph service usage. This ensures a certain amount of CPU resources are available for the host and Ceph services. See the “Reserve CPU and Memory Resources for Compute” documentation for more information on calculating this value.
  • CPU Pinning – Define which CPU cores are reserved for instances and use the remaining CPU cores for the host and Ceph services.

This example uses CPU pinning. We are reserving cores 1-7 and 9-15 of our Compute node for our instances. This leaves cores 0 and 8 (both on the same physical core) for the host and Ceph services. This provides one core for the current Ceph OSD and a second core in case we scale the OSDs. Note that we also need to isolate the host to these two cores. This is shown after deploying the overcloud. 

Using the configuration shown, we create an additional environment file that contains the resource isolation parameters defined above:

parameter_defaults:
 NovaReservedHostMemory: 12000
 NovaVcpuPinSet: ['1-7,9-15']

Our example does not use NUMA pinning because our test hardware does not support multiple NUMA nodes. However, if you want to pin the Ceph OSDs to a specific NUMA node, you can do so by following “Configure Ceph NUMA Pinning”.

Deploying the configuration …

This example uses the following environment files in the overcloud deployment:

  • /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml – Enables network isolation for the default roles, including the standard Compute role.
  • /home/stack/templates/network.yaml – Custom file defining network parameters (see Updating Network Templates). This file also sets the OS::TripleO::Compute::Net::SoftwareConfig resource to use our custom NIC Template containing the additional Storage Management VLAN we added to the Compute nodes above.
  • /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml – Redefines the service list for Compute nodes to include the Ceph OSD service. Also adds a Storage Management port for this role. This file is provided with the director’s Heat template collection.
  • /home/stack/templates/hci-resource-isolation.yaml – Custom file with specific settings for resource isolation features such as memory reservation and CPU pinning (see Isolating Resources).

The following command deploys an overcloud with one Controller node and one co-located Compute/Storage node:

$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network.yaml \
    -e /home/stack/templates/storage-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml \
    -e /home/stack/templates/hci-resource-isolation.yaml \
    --ntp-server pool.ntp.org
Configuring Host CPU Isolation

As a final step, this scenario requires isolating the host from using the CPU cores reserved for instances. To do this, log into the Compute node and run the following commands:

$ sudo grubby --update-kernel=ALL --args="isolcpus=1,2,3,4,5,6,7,9,10,11,12,13,14,15"
$ sudo grub2-install /dev/sda

This adds the isolcpus parameter to the kernel arguments, preventing the kernel from scheduling host processes on the cores reserved for instances. The grub2-install command updates the boot record, which resides on /dev/sda in a default installation. If you are using a custom disk layout for your overcloud nodes, this location might be different.

After setting this parameter, we reboot our Compute node:

$ sudo reboot
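Once the node is back up, a quick check of the kernel command line confirms the isolation took effect. This is a simple verification step, not part of the original procedure:

$ cat /proc/cmdline | grep -o 'isolcpus=[0-9,]*'
isolcpus=1,2,3,4,5,6,7,9,10,11,12,13,14,15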
Testing

After the Compute node reboots, we can view the hypervisor details to see the isolated resources from the undercloud:

$ source ~/overcloudrc
$ openstack hypervisor show overcloud-compute-0.localdomain -c vcpus
+-------+-------+
| Field | Value |
+-------+-------+
| vcpus | 14    |
+-------+-------+
$ openstack hypervisor show overcloud-compute-0.localdomain -c free_ram_mb
+-------------+-------+
| Field       | Value |
+-------------+-------+
| free_ram_mb | 20543 |
+-------------+-------+

Two of the 16 CPU cores are reserved for the host and Ceph services, and only 20GB of the node’s 32GB of RAM is available for instances.
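To double-check the co-location itself, you could also log in to the Compute node and confirm that both the Compute and Ceph OSD services are active. The unit names below are typical for this type of deployment but are shown as an illustration rather than taken from the original article (the OSD instance number in particular depends on your cluster):

$ sudo systemctl is-active openstack-nova-compute ceph-osd@0
active
active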

So, let’s see if this really worked. To find out, we will run some Browbeat tests against the overcloud. Browbeat is a performance and analysis tool specifically for OpenStack. It allows you to analyse, tune, and automate the entire process.

For our test we have run a set of Browbeat benchmark tests showing the CPU activity for different cores. The following graph displays the activity for a host/Ceph CPU core (Core 0) during one of the tests:

The green line indicates the system processes and the yellow line indicates the user processes. Notice that the CPU core activity peaks during the beginning and end of the test, which is when the disks for the instances were created and deleted respectively. Also notice the CPU core activity is fairly low as a percentage.

The other available host/Ceph CPU core (Core 8) follows a similar pattern:

The peak activity for this CPU core occurs during instance creation and during three periods of high instance activity (the Browbeat tests). Also notice the activity percentages are significantly higher than the activity on Core 0.

Finally, the following is an unused CPU core (Core 2) during the same test:

As expected, the unused CPU core shows no activity during the test. However, if we create more instances and exceed the ratio of allowable instances on Core 1, then these instances would use another CPU core, such as Core 2.

These graphs indicate our resource isolation configuration works and the Ceph services will not overlap with our Compute services, and vice versa.

Conclusion

Co-locating storage on compute nodes provides a simple method to consolidate storage and compute resources. This can help when you want to maximize the hardware of each node and consolidate your overcloud. By adding tuning and resource isolation, you can allocate dedicated resources to both storage and compute services, preventing either from starving the other of CPU and memory. And by doing this via Red Hat OpenStack Platform director and Red Hat Ceph Storage, you have a solution that is easy to deploy and maintain!
