
The last month has been busy to say the least, which is why we haven’t gotten around to posting a recent Blogpost Roundup, but it looks like you all have been busy as well! Thanks as always for continuing to share your knowledge around RDO and OpenStack. Enjoy!

Lessons from OpenStack Telemetry: Deflation by Julien Danjou

This post is the second and final episode of Lessons from OpenStack Telemetry. If you have missed the first post, you can read it here.

Read more at https://julien.danjou.info/lessons-from-openstack-telemetry-deflation/

Unit tests on RDO package builds by jpena

Unit tests are used to verify that individual units of source code work according to a defined spec. While this may sound complicated to understand, in short it means that we try to verify that each part of our source code works as expected, without having to run the full program they belong to.

Read more at https://blogs.rdoproject.org/2018/04/unit-tests-on-rdo-package-builds/

Red Hatters To Present at More Than 50 OpenStack Summit Vancouver Sessions by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

OpenStack Summit returns to Vancouver, Canada May 21-24, 2018, and Red Hat will be returning as well with as big of a presence as ever. Red Hat will be a headline sponsor of the event, and you’ll have plenty of ways to interact with us during the show.

Read more at https://redhatstackblog.redhat.com/2018/04/13/openstack-summit-vancouver-preview/

Lessons from OpenStack Telemetry: Incubation by Julien Danjou

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I’ve been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

Read more at https://julien.danjou.info/lessons-from-openstack-telemetry-incubation/

Comparing Keystone and Istio RBAC by Adam Young

Continuing my previous investigation into Istio, and the comparison with the comparable parts of OpenStack, I want to dig deeper into how Istio performs RBAC. Specifically, I would love to answer the question: could Istio be used to perform the Role check?

Read more at https://adam.younglogic.com/2018/04/comparing-keystone-and-istio-rbac/

Scaling ARA to a million Ansible playbooks a month by David Moreau Simard

The OpenStack community runs over 300 000 CI jobs with Ansible every month with the help of the awesome Zuul.

Read more at https://dmsimard.com/2018/04/09/scaling-ara-to-a-million-ansible-playbooks-a-month/

Comparing Istio and Keystone Middleware by Adam Young

One way to learn a new technology is to compare it to what you already know. I’ve heard a lot about Istio, and I don’t really grok it yet, so this post is my attempt to get the ideas solid in my own head, and to spur conversations out there.

Read more at https://adam.younglogic.com/2018/04/comparing-istio-and-keystone-middleware/

Heading to Red Hat Summit? Here’s how you can learn more about OpenStack. by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

Red Hat Summit is just around the corner, and we’re excited to share all the ways in which you can connect with OpenStack® and learn more about this powerful cloud infrastructure technology. If you’re lucky enough to be headed to the event in San Francisco, May 8-10, we’re looking forward to seeing you. If you can’t go, fear not, there will be ways to see some of what’s going on there remotely. And if you’re undecided, what are you waiting for? Register today. 

Read more at https://redhatstackblog.redhat.com/2018/03/29/red-hat-summit-2018-openstack-preview/

Multiple 1-Wire Buses on the Raspberry Pi by Lars Kellogg-Stedman

The DS18B20 is a popular temperature sensor that uses the 1-Wire protocol for communication. Recent versions of the Linux kernel include a kernel driver for this protocol, making it relatively convenient to connect one or more of these devices to a Raspberry Pi or similar device.

Read more at https://blog.oddbit.com/2018/03/27/multiple-1-wire-buses-on-the-/

An Introduction to Fast Forward Upgrades in Red Hat OpenStack Platform by Maria Bracho, Principal Product Manager OpenStack

OpenStack momentum continues to grow as an important component of hybrid cloud, particularly among enterprise and telco. At Red Hat, we continue to seek ways to make it easier to consume. We offer extensive, industry-leading training, an easy to use installation and lifecycle management tool, and the advantage of being able to support the deployment from the app layer to the OS layer.

Read more at https://redhatstackblog.redhat.com/2018/03/22/an-introduction-to-fast-forward-upgrades-in-red-hat-openstack-platform/

Ceph integration topics at OpenStack PTG by Giulio Fidente

I wanted to share a short summary of the discussions that happened around the Ceph integration (in TripleO) at the OpenStack PTG.

Read more at http://giuliofidente.com/2018/03/ceph-integration-topics-at-openstack-ptg.html

Generating a list of URL patterns for OpenStack services. by Adam Young

Last year at the Boston OpenStack summit, I presented on an Idea of using URL patterns to enforce RBAC. While this idea is on hold for the time being, a related approach is moving forward building on top of application credentials. In this approach, the set of acceptable URLs is added to the role, so it is an additional check. This is a lower barrier to entry approach.

Read more at https://adam.younglogic.com/2018/03/generating-url-patterns/


Unit tests on RDO package builds by jpena

Unit tests are used to verify that individual units of source code work according to a defined spec. While this may sound complicated to understand, in short it means that we try to verify that each part of our source code works as expected, without having to run the full program they belong to.

All OpenStack projects come with their own set of unit tests; for example, this is the unit test folder for the oslo.config project. Those tests are executed when a new patch is proposed for review, to ensure that existing (or new) functionality is not broken by the new code. For example, if you check this review, you can see that one of the CI jobs executed is “openstack-tox-py27”, which runs unit tests using Python 2.7.

How does this translate into the packaging world? As part of a spec file, we can define a %check section, where we add scripts to test the installed code. While this is not a mandatory section in the Fedora packaging guidelines, it is highly recommended, since it provides a good assurance that the code packaged is correct.

In many cases, RDO packages include this %check section in their specs, and the project’s unit tests are executed when the package is built. This is an example of the unit tests executed for the python-oslo-utils package.
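As an illustrative sketch of what such a section can look like (the exact test runner and options vary per project; stestr is the runner used by most OpenStack projects, and the timeout value here is made up):

```
%check
# Run the project's unit tests against the code being packaged.
# OS_TEST_TIMEOUT bounds the runtime of individual tests; the
# runner and value below are illustrative, not from any real spec.
export OS_TEST_TIMEOUT=60
stestr run
```

When the package is built, any test failure in %check fails the whole build, which is exactly what gives packagers the assurance described above.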

“But why are these tests executed again when packaging?”, you may ask. After all, these same tests are executed by the Zuul gate before being merged. Well, there are quite a few reasons for this:

  • Those unit tests were run with a specific operating system version and a specific package set. Those are probably different from the ones used by RDO, so we need to ensure the project's compatibility with those components.
  • The project dependencies are installed in the OpenStack gate using pip, and some versions may differ. This is because OpenStack projects support a range of versions for each dependency, but usually only test with one version. We have seen cases where a project stated support for version x.0 of a library, but then added code that required version x.1. This change would not be noticed by the OpenStack gate, but it would make unit tests fail while packaging.
  • They also allow us to detect issues before they happen in the upstream gate. OpenStack projects use the requirements project to decide which version of their own libraries should be used by other projects. This allows for some inter-dependency issues, where a change in an Oslo library may uncover a bug in another project, but this is not noticed until the requirements project is updated with a new version of the Oslo library. In the RDO case, we run an RDO Trunk builder using code from the master branch of all projects, which allows us to detect and report such issues in advance, as in this example bug.
  • They give us an early warning when new dependencies have been added to a project, but they are not in the package spec yet. Since unit tests exercise most of the code, any missing dependency should make them fail.

Due to the way unit tests are executed during a package build, there are some details to keep in mind when defining them. If you as a developer follow them, you will make packagers' lives easier:

  • Do not create unit tests that depend on resources available from the Internet. Most packaging environments do not allow Internet access while the package is being built, so a unit test that depends on resolving an IP address via DNS will fail.

  • Try to keep unit test runtime within reasonable limits. If unit tests for a project take 1 hour to complete, it is likely they will not be executed during packaging, such as here.

  • Do not assume that unit tests will always be executed on a machine with 8 fast cores. We have seen cases of unit tests failing when run on a limited environment or when it takes them more than a certain time to finish.

Now that you know the importance of unit tests for RDO packaging, you can go ahead and make sure they are used in every package. Happy hacking!

Hardware burn-in in the CERN datacenter by Tim Bell

During the Ironic sessions at the recent OpenStack Dublin PTG in Spring 2018, there were some discussions on adding a further burn-in step to the OpenStack Bare Metal project (Ironic) state machine. The notes summarising the sessions were reported to the openstack-dev list. This blog covers the CERN burn-in process for the systems delivered to the data centers, as one example of how OpenStack Ironic users could benefit from a set of open source tools to burn in newly delivered servers as a stage within the Ironic workflow.

Read more at http://openstack-in-production.blogspot.com/2018/03/hardware-burn-in-in-cern-datacenter.html

Using Docker macvlan networks by Lars Kellogg-Stedman

A question that crops up regularly on #docker is “How do I attach a container directly to my local network?” One possible answer to that question is the macvlan network type, which lets you create “clones” of a physical interface on your host and use that to attach containers directly to your local network. For the most part it works great, but it does come with some minor caveats and limitations. I would like to explore those here.

Read more at http://blog.oddbit.com/2018/03/12/using-docker-macvlan-networks/

A New Fencing Mechanism (TBD) by Andrew Beekhof

Protecting Database Centric Applications. In the same way that some applications require the ability to persist records to disk, for some applications the loss of access to the database means game over – more so than disconnection from the storage.

Read more at http://blog.clusterlabs.org/blog/2018/tbd-fencing

Generating a Callgraph for Keystone by Adam Young

Once I know a starting point for a call, I want to track the other functions that it calls. pycallgraph will generate an image that shows me that.

Read more at http://adam.younglogic.com/2018/03/callgraph-keystone/

Inspecting Keystone Routes by Adam Young

What Policy is enforced when you call a Keystone API? Right now, there is no definitive way to say. However, with some programmatic help, we might be able to figure it out from the source code. Let's start by getting a complete list of the Keystone routes.

Read more at http://adam.younglogic.com/2018/03/inspecting-keystone-routes/

SnowpenStack by rbowen

I’m heading home from SnowpenStack and it was quite a ride. As Thierry said in our interview at the end of Friday (coming soon to a YouTube channel near you), rather than spoiling things, the freak storm and subsequent closure of the event venue served to create a shared experience and camaraderie that made it even better.

Read more at http://drbacchus.com/snowpenstack/

Expiry of VMs in the CERN cloud by Jose Castro Leon

The CERN cloud resources are used for a variety of purposes from running compute intensive workloads to long running services. The cloud also provides personal projects for each user who is registered to the service. This allows a small quota (5 VMs, 10 cores) where the user can have resources dedicated for their use such as boxes for testing. A typical case would be for the CERN IT Tools training where personal projects are used as sandboxes for trying out tools such as Puppet.

Read more at http://openstack-in-production.blogspot.com/2018/03/expiry-of-vms-in-cern-cloud.html

My 2nd birthday as a Red Hatter by Carlos Camacho

This post is about my experience working on TripleO as a Red Hatter for the last 2 years. On my 2nd anniversary as a Red Hatter, I have learned about many technologies, really a lot… But the most intriguing thing is that here you never stop learning; not just because you want to learn new things, but because of the nature of this project… TripleO…

Read more at https://www.anstack.com/blog/2018/03/01/2nd-birthday-as-a-red-hatter.html

RDO Community Blogs by Mary Thengvall - 1w ago

It’s been a busy few weeks of blogging! Thanks as always to those of you who continue to write great content.

OpenStack Role Assignment Inheritance for CloudForms by Adam Young

Operators expect to use CloudForms to perform administrative tasks. For this reason, the documentation for OpenStack states that the Keystone user must have an ‘admin’ role. We found at least one case, however, where this was not sufficient. Fortunately, we have a better approach, and one that can lead to success in a wider array of deployments.

Read more at http://adam.younglogic.com/2018/02/openstack-hmt-cloudforms/

Listening for connections on all ports/any port by Lars Kellogg-Stedman

On IRC — and other online communities — it is common to use a “pastebin” service to share snippets of code, logs, and other material, rather than pasting them directly into a conversation. These services will typically return a URL that you can share with others so that they can see the content in their browser.

Read more at http://blog.oddbit.com/2018/02/27/listening-for-connections-on-a/

Grouping aggregation queries in Gnocchi 4.0.x by Lars Kellogg-Stedman

In this article, we’re going to ask Gnocchi (the OpenStack telemetry storage service) how much memory was used, on average, over the course of each day by each project in an OpenStack environment.

Read more at http://blog.oddbit.com/2018/02/26/grouping-aggregation-queries-i/

TripleO deep dive session #12 (config-download) by Carlos Camacho

This is the 12th release of the TripleO “Deep Dive” sessions. In this session we will have an update for the TripleO ansible integration called config-download. It’s about applying all the software configuration with Ansible instead of doing it with the Heat agents.

Read more at https://www.anstack.com/blog/2018/02/23/tripleo-deep-dive-session-12.html

Maximizing resource utilization with Preemptible Instances by Theodoros Tsioutsias

The CERN cloud consists of around 8,500 hypervisors providing over 36,000 virtual machines. These provide the compute resources both for the laboratory’s physics program and for the organisation’s administrative operations, such as paying bills and reserving rooms at the hostel.

Read more at http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html

Testing TripleO on own OpenStack deployment by mrunge

For some use cases, it’s quite useful to test TripleO deployments on an OpenStack-powered cloud, rather than using a baremetal system. The following article will show you how to do it.

Read more at http://www.matthias-runge.de/2018/02/16/tripleo-ovb/

A New Thing by Andrew Beekhof

If you’re interested in Kubernetes and/or managing replicated applications, such as Galera, then you might also be interested in an operator that allows this class of applications to be managed natively by Kubernetes.

Read more at http://blog.clusterlabs.org/blog/2018/replication-operator

Two Nodes – The Devil is in the Details by Andrew Beekhof

tl;dr – Many people love 2-node clusters because they seem conceptually simpler and 33% cheaper, but while it’s possible to construct good ones, most will have subtle failure modes.

Read more at http://blog.clusterlabs.org/blog/2018/two-node-problems

RDO Community Blogs by Rich Bowen - 1w ago

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Queens for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Queens is the 17th release from the OpenStack project, which is the work of more than 1600 contributors from around the world (source – http://stackalytics.com/ ).

[Photo: RDO team doing the release at the PTG]

The release is making its way out to the CentOS mirror network, and should be on your favorite mirror site momentarily.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

New and Improved

Interesting things in the Queens release include:

  • Ironic now supports Neutron routed networks with flat networking and introduces support for Nova traits when scheduling
  • RDO now includes rsdclient, an OpenStack client plugin for Rack Scale Design architecture
  • Support for octaviaclient and Octavia Horizon plugin has been added to improve Octavia service deployments.
  • Tap-as-a-Service (TaaS) network extension to the OpenStack network service (Neutron) has been included.
  • The multi-vendor Modular Layer 2 (ML2) driver networking-generic-switch is now available for operators deploying RDO Queens.

Other improvements include:

  • Most of the bundled in-tree Tempest plugins have been moved to their own repositories during the Queens cycle. RDO has adapted its plugin packages to this new model.
  • In an effort to improve quality and reduce delivery time for our users, RDO keeps refining and automating all the processes needed to build, test and publish the packages included in the RDO distribution.

Note that packages for OpenStack projects with cycle-trailing release models (https://releases.openstack.org/reference/release_models.html#cycle-trailing) will be created after a release is delivered, according to the OpenStack Queens schedule.

Contributors

During the Queens cycle, we saw the following new contributors:

  • Aditya Ramteke
  • Jatan Malde
  • Ade Lee
  • James Slagle
  • Alex Schultz
  • Artom Lifshitz
  • Mathieu Bultel
  • Petr Viktorin
  • Radomir Dopieralski
  • Mark Hamzy
  • Sagar Ippalpalli
  • Martin Kopec
  • Victoria Martinez de la Cruz
  • Harald Jensas
  • Kashyap Chamarthy
  • dparalen
  • Thiago da Silva
  • chenxing
  • Johan Guldmyr
  • David J Peacock
  • Sagi Shnaidman
  • Jose Luis Franco Arza

Welcome to all of you, and thank you so much for participating!

But, we wouldn’t want to overlook anyone. Thank you to all 76 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories, and is provided in no particular order:

  • Yatin Karel
  • Aditya Ramteke
  • Javier Pena
  • Alfredo Moralejo
  • Christopher Brown
  • Jon Schlueter
  • Chandan Kumar
  • Haikel Guemar
  • Emilien Macchi
  • Jatan Malde
  • Pradeep Kilambi
  • Luigi Toscano
  • Alan Pevec
  • Eric Harney
  • Ben Nemec
  • Matthias Runge
  • Ade Lee
  • Jakub Libosvar
  • Thierry Vignaud
  • Alex Schultz
  • Juan Antonio Osorio Robles
  • Mohammed Naser
  • James Slagle
  • Jason Joyce
  • Artom Lifshitz
  • Lon Hohberger
  • rabi
  • Dmitry Tantsur
  • Oliver Walsh
  • Mathieu Bultel
  • Steve Baker
  • Daniel Mellado
  • Terry Wilson
  • Tom Barron
  • Jiri Stransky
  • Ricardo Noriega
  • Petr Viktorin
  • Juan Antonio Osorio Robles
  • Eduardo Gonzalez
  • Radomir Dopieralski
  • Mark Hamzy
  • Sagar Ippalpalli
  • Martin Kopec
  • Ihar Hrachyshka
  • Tristan Cacqueray
  • Victoria Martinez de la Cruz
  • Bernard Cafarelli
  • Harald Jensas
  • Assaf Muller
  • Kashyap Chamarthy
  • Jeremy Liu
  • Daniel Alvarez
  • Mehdi Abaakouk
  • dparalen
  • Thiago da Silva
  • Brad P. Crochet
  • chenxing
  • Johan Guldmyr
  • Antoni Segura Puimedon
  • David J Peacock
  • Sagi Shnaidman
  • Jose Luis Franco Arza
  • Julie Pichon
  • David Moreau-Simard
  • Wes Hayutin
  • Attila Darazs
  • Gabriele Cerami
  • John Trowbridge
  • Gonéri Le Bouder
  • Ronelle Landy
  • Matt Young
  • Arx Cruz
  • Joe H. Rahme
  • marios
  • Sofer Athlan-Guyot
  • Paul Belanger

Getting Started

There are two ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we’re there too, and also Google+.


Here’s the latest edition of the community blog round-up. Thanks for your contributions!

Deleting an image on RDO by Adam Young

So I uploaded a qcow image… but did it wrong. It was tagged as raw instead of qcow, and now I want it gone. Only problem… it is stuck.

Read more at http://adam.younglogic.com/2018/02/deleting-an-image-on-rdo/

Keystonerc for RDO cloud by Adam Young

If you are using RDO Cloud and want to do command line Ops, here is the outline of a keystone.rc file you can use to get started.

Read more at http://adam.younglogic.com/2018/02/keystonerc-for-rdo-cloud/

Debugging TripleO revisited – Heat, Ansible & Puppet by Steve Hardy

Some time ago I wrote a post about debugging TripleO heat templates, which contained some details of possible debug workflows when TripleO deployments fail. In recent releases we’ve made some major changes to the TripleO architecture. In this post I’d like to provide a refreshed tutorial on typical debug workflow, primarily focusing on the configuration phase of a typical TripleO deployment, and with particular focus on interfaces which have changed or are new since my original debugging post.

Read more at http://hardysteven.blogspot.com/2018/02/debugging-tripleo-revisited-heat.html

Listing iptables rules with line numbers by Lars Kellogg-Stedman

You can list iptables rules with rule numbers using the --line-numbers option, but this only works in list (-L) mode. I find it much more convenient to view rules using the output from iptables -S or iptables-save.

Read more at http://blog.oddbit.com/2018/02/08/listing-iptables-rules-with-li/

FOSDEM ’18, and the CentOS Brussels Dojo by Rich Bowen

The first weekend in February always finds me in Brussels for FOSDEM and the various associated events, and this year is no exception.

Read more at http://drbacchus.com/fosdem-18-and-the-centos-brussels-dojo/

Matching Create and Teardown in an Ansible Role by Adam Young

Nothing lasts forever. Except some developer setups that no one seems to know who owns, and no one is willing to tear down. I’ve tried to build the code to clean up after myself into my provisioning systems. One pattern I’ve noticed is that the same data is required for building and for cleaning up a cluster. When I built Ossipee, each task had both a create and a teardown stage. I want the same from Ansible. Here is how I’ve made it work thus far.

Read more at http://adam.younglogic.com/2018/01/match-create-teardown/

Deploying an image on OpenStack that is bigger than the available flavors. by Adam Young

Today I tried to use our local OpenStack instance to deploy CloudForms Management Engine (CFME). Our OpenStack deployment has a set of flavors that all are defined with 20 GB Disks. The CFME image is larger than this, and will not deploy on the set of flavors. Here is how I worked around it.

Read more at http://adam.younglogic.com/2018/01/big-image-small-flavors/

Freeing up a Volume from a Nova server that errored by Adam Young

Trial and error. It's a key part of getting work done in my field, and I make my share of errors. Today, I tried to create a virtual machine in Nova using a bad glance image that I had converted to a bootable volume:

Read more at http://adam.younglogic.com/2018/01/free-volume-server-error/


In just a few weeks, the OpenStack Foundation will be holding the PTG – the Project Teams Gathering – in Dublin, Ireland. At this event, the various project teams will discuss what will be implemented in the Rocky release of OpenStack.

This is the third PTG, with the first one being in Atlanta, and the second in Denver. At each PTG, I’ve done video interviews of the various project teams, about what they accomplished in the just-completed cycle, and what they intend to do in the next.

The videos from Atlanta are HERE.

And the videos from Denver are HERE.

If you’ll be at this PTG, please consider doing an interview with your project team. You can sign up in the Google doc. And please take a moment to review the information about what kind of questions I’ll be asking. I’ll be interviewing Tuesday through Friday. I’ll know the specific location where I’ll be set up once I’m on-site on Monday.

RDO Community Blogs by Rich Bowen - 1w ago

Last weekend was FOSDEM, the annual Free and Open Source software convention in Brussels. The OpenStack Foundation had a table at the event, where there were lots of opportunities to talk with people either using OpenStack, or learning about it for the first time.

There was a good crowd that came by the table, and we had great conversations with many users.


The table was staffed entirely by volunteers from the OpenStack developer community, representing several different organizations.

On the day before FOSDEM, the CentOS community held their usual pre-FOSDEM Dojo, with a few members of the RDO community taking part. You can see the videos from that event on the CentOS YouTube channel.

Here’s the latest round-up of RDO- and OpenStack-related blogposts from the community. Thanks to all who continue to produce this great content!

Keep calm and reboot: Patching recent exploits in a production cloud by Tim Bell

At CERN, we have around 8,500 hypervisors running 36,000 guest virtual machines. With the accelerator stopping over the CERN annual closure until mid March, this is a good period to be planning reconfiguration of compute resources such as the migration of our central batch system which schedules the jobs across the central compute resources to a new system based on HTCondor. However, this year we have had an unexpected additional task to deploy the fixes for the Meltdown and Spectre exploits across the centre. Here are the steps we took to upgrade.

Read more at http://openstack-in-production.blogspot.com/2018/01/keep-calm-and-reboot-patching-recent.html

Creating an Ansible Inventory file using Jinja templating by Adam Young

While there are lots of tools in Ansible for generating an inventory file dynamically, in a system like this, you might want to be able to perform additional operations against the same cluster. For example, once the cluster has been running for a few months, you might want to do a Yum update. Eventually, you want to de-provision. Thus, having a remote record of what machines make up a particular cluster can be very useful. Dynamic inventories can be OK, but often it takes time to regenerate the inventory, and that may slow down an already long process, especially during iterated development.

Read more at http://adam.younglogic.com/2018/01/creating-an-ansible-inventory-file-using-jinja-templating/

Getting Shade for the Ansible OpenStack modules by Adam Young

When Monty Taylor and company looked to update the Ansible support for OpenStack, they realized that there was a neat little library waiting to emerge: Shade. Pulling the duplicated code into Shade brought along all of the benefits that a good refactoring can accomplish: fewer cut and paste errors, common things work in common ways, and so on. However, this means that the OpenStack modules are now dependent on a remote library being installed on the managed system. And we do not yet package Shade as part of OSP or the Ansible products. If you do want to use the OpenStack modules for Ansible, here is the “closest to supported” way you can do so.

Read more at http://adam.younglogic.com/2018/01/ansible-osp-shade/

Using JSON home on a Keystone server by Adam Young

Say you have an AUTH_URL… And now you want to do something with it. You might think you can get the info you want from the /v3 url, but it does not tell you much. It turns out, though, that there is data; it just requires the json-home Accept header.

Read more at http://adam.younglogic.com/2018/01/using-json-home-keystone/

Safely restarting an OpenStack server with Ansible by Lars Kellogg-Stedman

The other day on #ansible, someone was looking for a way to safely shut down a Nova server, wait for it to stop, and then start it up again using the openstack cli. The first part seemed easy, but that will actually fail.

Read more at http://blog.oddbit.com/2018/01/24/safely-restarting-an-openstack-server-wi/


The upcoming version of Zuul has many new features that allow one to create powerful continuous integration and continuous deployment pipelines.

This article presents some mechanisms to create such pipelines. As a practical example, I demonstrate the Software Factory project development workflow we use to continuously build, test and deliver rpm packages through code review.

Build job

The first stage of this workflow is to build a new package for each change.

Build job definition

The build job is defined in a zuul.yaml file:

- job:
    name: sf-rpm-build
    description: Build Software Factory rpm package
    run: playbooks/rpmbuild.yaml
    required-projects:
      - software-factory/sfinfo
    nodeset:
      nodes:
        - name: mock-host
          label: centos-7

The required-projects option declares projects that are needed to run the job. In this case, the package metadata, such as the software collection targets, is defined in the sfinfo project. This means that every time this job is executed, the sfinfo project will be copied to the test instance.

Extra required-projects can be added per project; for example, the cauth package requires the cauth-distgit project to build a working package. The cauth pipeline can be defined as:

- project:
    name: software-factory/cauth
    check:
      jobs:
        - sf-rpm-build:
            required-projects:
              - software-factory/cauth-distgit

Most of the job parameters can be modified when added to a project pipeline. In the case of required-projects, the list isn't replaced but extended. This means a change on the cauth project results in the sf-rpm-build job running with both the sfinfo and cauth-distgit projects.

Build job playbook

The build job is an Ansible playbook:

- hosts: mock-host
  vars:
    # Get sfinfo location
    sfinfo_path_query: "[?name=='software-factory/sfinfo'].src_dir"
    sfinfo_path: >
      {{ (zuul.projects.values() | list | json_query(sfinfo_path_query))[0] }}
    # Get workspace path to run zuul_rpm_* commands
    sfnamespace_path: "{{ sfinfo_path | dirname | dirname }}"
  tasks:
    - name: Copy rpm-gpg keys
      become: yes
      command: "rsync -a {{ sfinfo_path }}/rpm-gpg/ /etc/pki/rpm-gpg/"

    - name: Run zuul_rpm_build.py
      command: >
        ./software-factory/sfinfo/zuul_rpm_build.py
            --distro-info ./software-factory/sfinfo/sf-{{ zuul.branch }}.yaml
            --zuulv3
            {% for item in zuul['items'] %}
              --project {{ item.project.name }}
            {% endfor %}
      args:
        chdir: "{{ sfnamespace_path }}"

    - name: Fetch zuul-rpm-build repository
      synchronize:
        src: "{{ sfnamespace_path }}/zuul-rpm-build/"
        dest: "{{ zuul.executor.log_root }}/buildset/"
        mode: pull

First, the variables use a JMESPath query to discover the location of the sfinfo project on the test instance, because the Zuul executor prepares the workspace using relative paths constructed from the connection hostname. For reference, the playbook starts with a zuul.projects variable like the one below:

zuul:
  projects:
    managesf.softwarefactory-project.io/software-factory/sfinfo:
      name: software-factory/sfinfo
      src_dir: src/gerrit.softwarefactory-project.io/software-factory/sfinfo
    ...
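Ansible's json_query filter delegates to JMESPath; applied to `zuul.projects.values() | list`, the query in the playbook is equivalent to this pure-Python filter (values taken from the example entry above):

```python
# zuul.projects as prepared by the executor (one entry shown)
projects = {
    "managesf.softwarefactory-project.io/software-factory/sfinfo": {
        "name": "software-factory/sfinfo",
        "src_dir": "src/gerrit.softwarefactory-project.io/software-factory/sfinfo",
    },
}

# Equivalent of json_query("[?name=='software-factory/sfinfo'].src_dir")
matches = [p["src_dir"] for p in projects.values()
           if p["name"] == "software-factory/sfinfo"]
sfinfo_path = matches[0]
print(sfinfo_path)
```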

Then the job runs the package build command using a loop over Zuul items. This enables Zuul's cross-repository dependencies feature: the job needs to build all the projects that are added as depends-on. Note that the "tox" job does this automatically; see the install_sibling task. For reference, the playbook starts with a zuul.items variable like the one below:

zuul:
  items:
    - branch: master
      change_url: https://softwarefactory-project.io/r/10736
      project:
        name: scl/zuul-jobs-distgit
    - branch: master
      change_url: https://softwarefactory-project.io/r/10599
      project:
        name: software-factory/sf-config
    - branch: master
      change_url: https://softwarefactory-project.io/r/10605
      project:
        name: software-factory/sf-ci

In this example, the depends-on list includes three changes:

  • Pages roles added to zuul-jobs-distgit,
  • Pages jobs configured in sf-config, and
  • Functional tests added to sf-ci.

The sf-rpm-build job will build a new package for each of these changes.
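The `{% for item in zuul['items'] %}` loop in the command template renders one --project flag per change; with the three items above, the expansion looks like this (a pure-Python rendering of the same loop):

```python
# zuul['items'] reduced to the fields the template uses
items = [
    {"project": {"name": "scl/zuul-jobs-distgit"}},
    {"project": {"name": "software-factory/sf-config"}},
    {"project": {"name": "software-factory/sf-ci"}},
]

flags = " ".join("--project {}".format(i["project"]["name"]) for i in items)
print(flags)
```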

The last task fetches the resulting rpm repository to the job logs. Any jobs, playbooks or tasks can synchronize artifacts to the zuul.executor.log_root directory. Having the packages exported with the job logs is convenient for the end users to easily install the packages built in the CI. Moreover, this will also be used by the integration jobs below.

Integration pipeline

The second stage of the workflow is to test the packages built by the sf-rpm-build job.

Share Zuul artifacts between jobs

Child jobs can inherit data produced by a parent job when using the zuul_return Ansible module. The buildset-artifacts-location role automatically sets the artifacts' log URL using this task:

- name: Define buildset artifacts location
  delegate_to: localhost
  zuul_return:
    data:
      buildset_artifacts_url: "{{ zuul_log_url }}/{{ zuul_log_path }}/buildset"

Software Factory configures this role along with the upload-logs role to transparently define this buildset_artifacts_url variable when there is a buildset directory in the logs.

Integration pipeline definition

The integration pipeline is defined in a zuul.yaml file:

- project-template:
    name: sf-jobs
    check:
      jobs:
        - sf-rpm-build
        - sf-ci-functional-minimal:
            dependencies:
              - sf-rpm-build
        - sf-ci-upgrade-minimal:
            dependencies:
              - sf-rpm-build
        - sf-ci-functional-allinone:
            dependencies:
              - sf-rpm-build
        - sf-ci-upgrade-allinone:
            dependencies:
              - sf-rpm-build

The functional and upgrade jobs use the dependencies option to declare that they only run after the rpm-build job is finished. The functional and upgrade jobs install the new packages using the task below:

- name: Add CI packages repository
  yum_repository:
    name: "zuul-built"
    baseurl: "{{ buildset_artifacts_url }}"
    gpgcheck: "0"
  become: yes

Projects definition

The sfinfo project is a config-project in the Zuul configuration. It enables defining all the projects' jobs without requiring a zuul.yaml file in each project. Config-projects are allowed to configure foreign projects' jobs, for example:

- project:
    name: scl/zuul-jobs-distgit
    templates:
      - sf-jobs

A good design for this workflow defines the common jobs in a dedicated repository and the common pipeline definitions in a config-project. Untrusted-projects can still add local jobs if needed and can even add dependencies to the common pipelines. For example, the cauth project extends the required-projects for the sf-rpm-build job.

Deployment pipeline

When a change passes the integration tests, the reviewer can approve it to trigger the deployment pipeline. The first thing to understand is how to use secrets in the deployment job.

Using secrets in jobs

Zuul can securely manage secrets using public key cryptography. Zuul manages a private key for each project and the user can encrypt secrets with the public key to store them in the repository along with the job. That means encryption is a one-way operation for the user and only the Zuul scheduler can decrypt the secret.

To create a new secret the user runs the encrypt_secret tool:

# encrypt_secret.py --infile secret.data <zuul-web-url>/keys/<tenant-name> <project-name>
- secret:
    name: <secret-name>
    data:
      <variable-name>: !encrypted/pkcs1-oaep
        - ENCRYPTED-DATA-HERE

Once a secret is added to a job, the playbook has access to its decrypted content. However, there are a few caveats:

  • The secret and the playbook need to be defined in a single job stored in the same project. Note that this may change in the future.
  • If the secret is defined in an untrusted-project, then the job is automatically converted to post-review. That means jobs using secrets can only run in post, periodic or release pipelines. This prevents speculative job modifications from leaking the secret content.
  • Alternatively, if the secret is defined in a config-project, then the job can be used in any pipeline, because config-projects don't allow speculative execution on new patchsets.

Deployment pipeline definition

In the Software Factory project, the deployment is a koji build and is performed as part of the gate pipeline. That means the change isn’t merged if it is not deployed. Another strategy is to deploy in the post pipeline after the change is merged, or in the release pipeline after a tag is submitted.

The deployment pipeline is defined as below:

- project-template:
    name: sf-jobs
    gate:
      queue: sf
      jobs:
        - sf-rpm-build
        - sf-ci-functional-minimal:
            dependencies:
              - sf-rpm-build
        - sf-ci-upgrade-minimal:
            dependencies:
              - sf-rpm-build
        - sf-ci-functional-allinone:
            dependencies:
              - sf-rpm-build
        - sf-ci-upgrade-allinone:
            dependencies:
              - sf-rpm-build
        - sf-rpm-publish:
            dependencies:
              - sf-ci-functional-minimal
              - sf-ci-upgrade-minimal
              - sf-ci-functional-allinone
              - sf-ci-upgrade-allinone

The deployment pipeline needs to use the queue option to group all the approved changes in dependent order. When multiple changes are approved in parallel, they will all be tested together before being merged, as if they were submitted with a depends-on relationship.

The deployment pipeline is similar to the integration pipeline; it just adds a publish job that only runs if all the integration tests succeed. This ensures that changes are consistently tested against the projects' current state before being deployed.

Deployment job definition

The job is declared in a zuul.yaml file as below:

- job:
    name: sf-rpm-publish
    description: Publish Software Factory rpm to koji
    run: playbooks/rpmpublish.yaml
    hold-following-changes: true
    required-projects:
      - software-factory/sfinfo
    secrets:
      - sf_koji_configuration

This job uses the hold-following-changes setting to ensure that only the top of the gate gets published. If the deployment happens in the post or release pipeline, then this setting can be replaced by a semaphore instead, for example:

- job:
    name: deployment
    semaphore: production-access

- semaphore:
    name: production-access
    max: 1

This prevents concurrency issues when multiple changes are approved in parallel.

Zuul concepts summary

This article covered the following concepts:

  • Project types:
    • config-projects: hold deployment secrets and set projects’ pipelines.
    • untrusted-projects: the projects being tested and deployed.
  • Playbook variables:
    • zuul.projects: the projects installed on the test instance,
    • zuul.items: the list of changes being tested with depends-on,
    • zuul.executor.log_root: the location of job artifacts, and
    • zuul_return: an Ansible module to share data between jobs.
  • Job options:
    • required-projects: the list of projects to copy on the test instance,
    • dependencies: the list of jobs to wait for,
    • secret: the deployment job’s secret,
    • post-review: prevents a job from running speculatively,
    • hold-following-changes: makes dependent pipelines run in serial, and
    • semaphore: prevents concurrent deployment of different changes.
  • Pipeline options:
    • Job settings can be modified per project, and
    • queue makes all the projects depend on each other automatically.
Conclusion

To experiment with Zuul yourself, follow the deployment guide written by Fabien in this previous article.

Zuul can effectively manage complex continuous integration and deployment pipelines with powerful cross-repository dependency management.

This article presented the Software Factory workflow, where rpm packages are continuously built, tested and delivered through code review. A similar workflow can be created for other types of projects, such as Go or container-based software.
