I’m attending PTG this week to conduct project interviews. These interviews have several purposes. Please consider all of the following when thinking about what you might want to say in your interview:

  • Tell the users/customers/press what you’ve been working on in Rocky
  • Give them some idea of what’s (what might be?) coming in Stein
  • Put a human face on the OpenStack project and encourage new participants to join us
  • You’re welcome to promote your company’s involvement in OpenStack but we ask that you avoid any kind of product pitches or job recruitment

In the interview I’ll ask some leading questions and it’ll go easier if you’ve given some thought to them ahead of time:

  • Who are you? (Your name, your employer, and the project(s) on which you are active.)
  • What did you accomplish in Rocky? (Focus on the 2-3 things that will be most interesting to cloud operators)
  • What do you expect to be the focus in Stein? (At the time of your interview, it’s likely that the meetings will not yet have decided anything firm. That’s ok.)
  • Anything further about the project(s) you work on or the OpenStack community in general.

Finally, note that there are only 40 interview slots available, so please consider coordinating with your project to designate the people you want to represent the project, so that we don’t end up with 12 interviews about Neutron, or whatever.

I mean, LOVE me some Neutron, but let’s give some other projects love, too.

It’s fine to have multiple people in one interview – maximum 3, probably.

Interview slots are 30 minutes, in which time we hope to capture somewhere between 10 and 20 minutes of content. It’s fine to run shorter, but 15 minutes is probably an ideal length.


There have only been four articles in the past month? YES! But brace yourself. Release is HERE! Today is the official release day of OpenStack’s latest version, Rocky. And, sure, while we only have four articles for today’s blogroll, we’re about to get a million more posts as everyone installs, administers, uses, reads, inhales, and embraces the latest version of OpenStack. Please enjoy John’s personal system for running TripleO Quickstart at home as well as how to update ceph-ansible in a containerized undercloud, inhale Gonéri’s introduction to distributed CI and InfraRed, a tool to deploy and test OpenStack, and experience Jiří’s instructions to upgrade Ceph and OpenShift Origin with TripleO.

Photo by Anderson Aguirre on Unsplash

PC for tripleo quickstart by John

I built a machine for running TripleO Quickstart at home.

Read more at http://blog.johnlikesopenstack.com/2018/08/pc-for-tripleo-quickstart.html

Distributed-CI and InfraRed by Gonéri Le Bouder

The Red Hat OpenStack QE team maintains a tool to deploy and test OpenStack. This tool can deploy different types of topologies and is very modular; you can extend it to cover new use cases. The tool is called InfraRed; it is free software and is available on GitHub.

Read more at https://blogs.rdoproject.org/2018/08/distributed-ci-and-infrared/

Updating ceph-ansible in a containerized undercloud by John

In Rocky the TripleO undercloud will run containers. If you’re using TripleO to deploy Ceph in Rocky, this means that ceph-ansible shouldn’t be installed on your undercloud server directly because your undercloud server is a container host. Instead, ceph-ansible should be installed in the mistral-executor container because, as per config-download, that is the container which runs Ansible to configure the overcloud.

Read more at http://blog.johnlikesopenstack.com/2018/08/updating-ceph-ansible-in-containerized.html

Upgrading Ceph and OKD (OpenShift Origin) with TripleO by Jiří Stránský

In OpenStack’s Rocky release, TripleO is transitioning towards a method of deployment we call config-download. Basically, instead of using Heat to deploy the overcloud end-to-end, we’ll be using Heat only to manage the hardware resources and Ansible tasks for individual composable services. Execution of software configuration management (which is Ansible on the top level) will no longer go through Heat; it will be done directly. If you want to know the details, I recommend watching James Slagle’s TripleO Deep Dive about config-download.

Read more at https://www.jistr.com/blog/2018-08-15-upgrading-ceph-and-okd-with-tripleo/

Distributed-CI and InfraRed by Gonéri Le Bouder

Introduction

The Red Hat OpenStack QE team maintains a tool to deploy and test OpenStack. This tool can deploy different types of topologies and is very modular; you can extend it to cover new use cases. The tool is called InfraRed; it is free software and is available on GitHub.

The purpose of Distributed-CI (or DCI) is to help OpenStack partners test new Red Hat OpenStack Platform (RHOSP) releases before they are published. This allows them to train on new releases, identify regressions, or prepare new drivers ahead of time. In this article, we will explain how to integrate InfraRed with Distributed-CI.

InfraRed

InfraRed has been designed to be flexible and can address numerous different use cases. In this article, we will use it to prepare a virtual environment and drive a regular Red Hat OpenStack Platform 13 (OSP13) deployment on it.

InfraRed is covered by complete documentation that we won’t copy and paste here. To summarize, once it’s installed, InfraRed exposes a CLI. This CLI lets the user create a workspace that tracks the state of the environment. The user can then trigger all the required steps to ultimately get a running OpenStack. In addition, InfraRed offers extra features through a plug-in system.
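
As a quick, hedged illustration of this workspace and plugin workflow (the sub-commands below are taken from the InfraRed documentation as I remember it and may differ between versions):

$ infrared workspace checkout --create demo
$ infrared workspace list
$ infrared plugin list

Later in this article we only use the plugin side of the CLI.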

Distributed-CI

Partners use DCI to validate OpenStack in their labs. It’s a way to validate that they will still be able to use their gear with the next release. A DCI agent runs the deployment and is in charge of the communication with Red Hat. Partners then have to provide a set of scripts to deploy OpenStack on their lab automatically; these scripts are used during the deployment.

DCI can be summarized with the following list of actions:

  1. Red Hat exposes the latest internal snapshots of the product on DCI
  2. The partner’s DCI agent pulls the latest snapshot and deploys it internally using the local configuration and deployment scripts
  3. The partner’s DCI agent runs the tests and sends the final result back to DCI.

Deployment of the lab

For this article, we will use a libvirt hypervisor to virtualize our lab. The hypervisor can be based on either RHEL 7 or CentOS 7.

The network configuration

In this tutorial, we will rely on the libvirt ‘default’ network. This network uses the 192.168.122.0/24 range, and 192.168.122.1 is our hypervisor. The IP addresses of the other VMs will be dynamic, and InfraRed will create some additional networks for you. We also use the hypervisor’s public IP, which is 192.168.1.40.
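
If you want to double-check this layout before going further, you can inspect the default network and the hypervisor addresses directly. This is a minimal sketch; the addresses used here (192.168.122.1 and 192.168.1.40) are specific to this example environment:

# virsh net-info default
# virsh net-dumpxml default
# ip -4 addr show

The net-dumpxml output should show the 192.168.122.1 address and the DHCP range handed out to the guests, and ip addr should list the hypervisor’s public address on its external interface.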

Installation of the Distributed-CI agent for OpenStack

The installation of the DCI agent is covered by its own documentation. All the steps are rather simple as long as the partner has a host that matches the DCI requirements to run the agent. This host is called the jumpbox in DCI jargon. In this document, the jumpbox is also the hypervisor host.

The rest of this document assumes that you have admin access to a DCI project, that you have created the remoteci on http://www.distributed-ci.io, and that you have deployed the agent on your jumpbox with the help of its installation guide. To validate everything, you should be able to list the remotecis of your tenant with the following command:

# source /etc/dci-ansible-agent/dcirc.sh
# dcictl remoteci-list
+--------------------------------------+--------------+--------+------------------------------------------------------------------+--------+--------------------------------------+--------------------------------------+
|                  id                  |     name     | state  |                            api_secret                            | public |               role_id                |               team_id                |
+--------------------------------------+--------------+--------+------------------------------------------------------------------+--------+--------------------------------------+--------------------------------------+
| e86ab5ba-695c-4437-b163-261e20b20f56 | FutureTown | active | something |  None  | e5e20d68-bbbe-411c-8be4-e9dbe83cc74e | 2517154c-46b4-4db9-a447-1c89623cc00a |
+--------------------------------------+--------------+--------+------------------------------------------------------------------+--------+--------------------------------------+--------------------------------------+

So far so good, we can now start the agent for the very first time with:

# systemctl start dci-ansible-agent --no-block
# journalctl -exf -u dci-ansible-agent

The agent pulls the bits from Red Hat and uses the jumpbox to expose them. Technically speaking, it’s a Yum repository in /var/www/html and an image registry on port 5000. These resources will be consumed during the deployment. Since we don’t have any configuration yet, this first run will fail. It’s time to fix that and prepare our integration with InfraRed.
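
Once a run has started, you can quickly check from the jumpbox that both resources are exposed. This is a hedged example: the repository path matches the --repos-url value used later in this article, and /v2/_catalog is the standard Docker registry v2 API endpoint:

# curl http://192.168.1.40/dci_repo/dci_repo.repo
# curl http://192.168.1.40:5000/v2/_catalog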

One of the crucial requirements is the set of scripts that will be used to deploy OpenStack. Those scripts are maintained by the user and are called by the agent through a couple of Ansible playbooks (a sketch of where these files live follows the list):

  • hooks/pre-run.yml: This playbook is the very first one called on the jumpbox. It’s the place where the partner can, for instance, fetch the latest copy of the configuration.
  • hooks/running.yml: This is the place where the automation will be called. Most of the time, it’s a couple of extra Ansible tasks that will call a script or include another playbook.
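
For reference, here is roughly how those hooks sit next to the rest of the agent configuration used in this article. This is a sketch that assumes the hooks live under /etc/dci-ansible-agent, alongside the other files we reference; adjust it to your own installation:

/etc/dci-ansible-agent/
├── dcirc.sh           <- DCI credentials, sourced before using dcictl
├── cdn_creds.yml      <- RHSM credentials consumed by InfraRed (see below)
└── hooks/
    ├── pre-run.yml    <- bootstrap: install InfraRed and its dependencies
    └── running.yml    <- deployment: virsh, undercloud and overcloud steps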

Preliminary configuration: security, firewall, and SSH keypair

Some services like Apache will be exposed without any restriction. This is why we assume the hypervisor is on a trusted network.

We take the liberty of disabling firewalld to simplify the whole process:

# systemctl stop firewalld
# systemctl disable firewalld

InfraRed interacts with the hypervisor using SSH. As a reminder, in our case the hypervisor is the local machine. To keep the whole setup simple, we share the same SSH key for the root and dci-ansible-agent users, and authorize it for root so that the dci-ansible-agent user can SSH to root@localhost:

# ssh-keygen
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# mkdir -p /var/lib/dci-ansible-agent/.ssh
# cp /root/.ssh/id_rsa* /var/lib/dci-ansible-agent/.ssh/
# chown -R dci-ansible-agent:dci-ansible-agent /var/lib/dci-ansible-agent/.ssh
# chmod 700 /var/lib/dci-ansible-agent/.ssh
# chmod 600 /var/lib/dci-ansible-agent/.ssh/*
# restorecon -R /var/lib/dci-ansible-agent/.ssh

You can validate that everything works fine with:

# su - dci-ansible-agent
$ ssh root@localhost id

Libvirt

We will deploy OpenStack on our libvirt hypervisor with the Virsh provisioner.

# yum install libvirt
# systemctl start libvirtd
# systemctl enable libvirtd

Red Hat Subscription Manager configuration (RHSM)

InfraRed uses RHSM during the deployment to register the nodes and pull the latest RHEL updates. It loads the credentials from a little YAML file that you can store in the /etc/dci-ansible-agent directory with the other files:

# cat /etc/dci-ansible-agent/cdn_creds.yml
username: gleboude@redhat.com
password: 9328878db3ea4519912c36525147a21b
autosubscribe: yes

RHEL guest image

InfraRed needs a RHEL guest image to prepare the nodes. It tries hard to download it by itself (thanks, InfraRed…), but the default location is https://url.corp.redhat.com/rhel-guest-image-7-5-146-x86-64-qcow2, which is unlikely to match your environment. Go to https://access.redhat.com/downloads and download the latest RHEL guest image. The file should be stored on your hypervisor at /var/lib/libvirt/images/rhel-guest-image-7-5-146-x86-64-qcow2. The default image name will probably change in the future; you can list the default values for the driver with the infrared (or ir) command:

# su - dci-ansible-agent
$ source .venv/bin/activate
$ ir virsh --help

Configure the agent for InfraRed

All the configuration files of this example are available on GitHub.

Run bootstrap (pre-run.yml)

First, we want to install InfraRed’s dependencies and prepare a virtual environment. These steps are done in pre-run.yml.

---
- name: Install the RPM that InfraRed depends on
  yum:
    name: '{{ item }}'
    state: present
  with_items:
  - git
  - python-virtualenv
  become: True

We pull InfraRed directly from its Git repository using Ansible’s git module:

- name: Wipe any existing InfraRed checkout
  file:
    path: ~/infrared
    state: absent

- name: Pull the latest InfraRed version
  git:
    repo: https://github.com/openstack-redhat/infrared.git
    dest: /var/lib/dci-ansible-agent/infrared
    version: master

Finally, we prepare a Python virtual environment to preserve the integrity of the system and we install InfraRed in it.

- name: Wipe any existing InfraRed virtualenv
  file:
    path: ~/infrared/.venv
    state: absent

- name: Install InfraRed in a fresh virtualenv
  shell: |
    cd ~/infrared
    virtualenv .venv && source .venv/bin/activate
    pip install --upgrade pip
    pip install --upgrade setuptools
    pip install .

As mentioned above, the agent runs as the dci-ansible-agent user, so we have to ensure everything is done in its home directory.

- name: Enable the InfraRed plugins that we will use during the deployment
  shell: |
    cd ~/infrared
    source .venv/bin/activate
    infrared plugin add plugins/virsh
    infrared plugin add plugins/tripleo-undercloud
    infrared plugin add plugins/tripleo-overcloud

Before we start anything, we do a cleanup of the environment. For that, we rely on InfraRed: its virsh plugin can remove all the existing resources thanks to the --cleanup argument:

- name: Clean the hypervisor
  shell: |
    cd ~/infrared
    source .venv/bin/activate
    infrared virsh \
      --host-address 192.168.122.1 \
      --host-key $HOME/.ssh/id_rsa \
      --cleanup True

Be warned: InfraRed removes all the existing VMs, networks, and storage from your hypervisor.

Hosts deployment (running.yml)

As mentioned before, running.yml is where the deployment is actually done. We ask InfraRed to prepare our hosts:

- name: Prepare the hosts
  shell: |
    cd ~/infrared
    source .venv/bin/activate
    infrared virsh \
      --host-address 192.168.122.1 \
      --host-key $HOME/.ssh/id_rsa \
      --topology-nodes undercloud:1,controller:1,compute:1

Undercloud deployment (running.yml)

We can now deploy the Undercloud:

- name: Install the undercloud
  shell: |
    cd ~/infrared
    source .venv/bin/activate
    infrared tripleo-undercloud \
      --version 13 \
      --images-task rpm \
      --cdn /etc/dci-ansible-agent/cdn_creds.yml \
      --repos-skip-release True \
      --repos-url http://192.168.1.40/dci_repo/dci_repo.repo

At this stage, our libvirt virtual machines are ready and one of them hosts the undercloud. All these machines have a floating IP. InfraRed keeps the machine names up to date in /etc/hosts, and we rely on that to get the undercloud IP address:

- name: Register InfraRed's undercloud-0 IP
  set_fact: undercloud_ip="{{ lookup('pipe', 'getent hosts undercloud-0').split()[0]}}"

You can also use InfraRed to interact with all these dynamically addressed hosts:

# su - dci-ansible-agent
$ cd ~/infrared
$ source .venv/bin/activate
$ ir ssh undercloud-0

Here ir is an alias for the infrared command. In both cases it’s pretty cool: InfraRed did all the voodoo for us.

Overcloud deployment (running.yml)

It’s time to run the final step of our deployment.

- name: Deploy the overcloud
  shell: |
    cd ~/infrared
    source .venv/bin/activate
    infrared tripleo-overcloud \
      --deployment-files virt \
      --version 13 \
      --introspect yes \
      --tagging yes \
      --deploy yes \
      --post yes \
      --containers yes \
      --registry-skip-puddle yes \
      --registry-undercloud-skip yes \
      --registry-mirror 192.168.1.40:5000 \
      --registry-tag latest \
      --registry-namespace rhosp13 \
      --registry-prefix openstack- \
      --vbmc-host undercloud \
      --ntp-server 0.rhel.pool.ntp.org

Here we pass some extra arguments to accommodate InfraRed:

  • --registry-mirror: we don’t want to use the images from Red Hat. Instead, we will pick the ones delivered by DCI. Here 192.168.1.40 is the first IP address of our jumpbox; it’s the one the agent uses when it deploys the image registry. Use the following command to validate that you use the correct address: cat /etc/docker-distribution/registry/config.yml | grep addr
  • --registry-namespace and --registry-prefix: our image names start with /rhosp13/openstack-.
  • --vbmc-host undercloud: during the overcloud installation, TripleO uses Ironic for node provisioning. Ironic interacts with the nodes through a Virtual BMC server. By default InfraRed installs it on the hypervisor; in our case we prefer to keep the hypervisor clean, which is why we target the undercloud instead.

The virtual BMC instances will look like this on the undercloud:

[stack@undercloud-0 ~]$ ps aux|grep bmc
stack     4315  0.0  0.0 426544 15956 ?        Sl   13:19   0:00 /usr/bin/python2 /usr/bin/vbmc start controller-2
stack     4383  0.0  0.0 426544 15952 ?        Sl   13:19   0:00 /usr/bin/python2 /usr/bin/vbmc start controller-1
stack     4451  0.0  0.0 426544 15952 ?        Sl   13:19   0:00 /usr/bin/python2 /usr/bin/vbmc start controller-0
stack     4520  0.0  0.0 426544 15936 ?        Sl   13:19   0:00 /usr/bin/python2 /usr/bin/vbmc start compute-1
stack     4590  0.0  0.0 426544 15948 ?        Sl   13:19   0:00 /usr/bin/python2 /usr/bin/vbmc start compute-0
stack    10068  0.0  0.0 112708   980 pts/0    S+   13:33   0:00 grep --color=auto bmc

DCI lives: let’s start the beast!

Ok, at this stage, we can start the agent. The standard way to trigger a DCI run is through systemd:

# systemctl start dci-ansible-agent --no-block

A full run takes more than 2 hours; the --no-block argument above tells systemctl to give control back to the shell even if the unit’s start-up is not yet complete.

You can follow the progress of your deployment either on the web interface (https://www.distributed-ci.io/) or with journalctl:

# journalctl -exf -u dci-ansible-agent

The CLI

DCI also comes with a CLI that you can use directly on the hypervisor.

# source /etc/dci-ansible-agent/dcirc.sh
# dcictl job-list

This command can also give you output in JSON format, which is handy when you want to reuse the DCI results in a script:

# dcictl --format json job-list --limit 1 | jq .jobs[].status
running

To conclude

I hope you enjoyed the article and that it will help you prepare your own configuration. Please don’t hesitate to contact me if you have any questions.

I would like to thank François Charlier and the InfraRed team. François started the DCI InfraRed integration several months ago, and he did a great job resolving all the issues one by one with the help of the InfraRed team.


One last happy birthday to OpenStack before we get ready to wrap up Rocky and prep for OpenStack PTG in Denver Colorado. Mary makes us drool over cupcakes, Carlos asks for our vote for his TripleO presentations, and Assaf dives into tenant, provider, and external neutron networks!

Happy Birthday OpenStack from SF Bay Area Open Infra by Mary Thengvall

I love birthday celebrations! They’re so full of joy and reminiscing of years gone by. Discussions of “I knew her when” or “Remember when he… ?” They have a tendency to bring separate communities of people together in unique and fun ways. And we all know how passionate I am about communities…

Read more at https://blogs.rdoproject.org/2018/07/happy-birthday-openstack-from-sf-bay-area-open-infra/

Vote for the OpenStack Berlin Summit presentations! by Carlos Camacho

I pushed some presentations for this year’s OpenStack Summit in Berlin; the presentations are related to updates, upgrades, backups, failures, and restores.

Read more at https://www.anstack.com/blog/2018/07/24/openstack-berlin-summit-vote-for-presentations.html

Tenant, Provider and External Neutron Networks by assafmuller

To this day I see confusion surrounding the terms: Tenant, provider and external networks. No doubt countless words have been spent trying to tease apart these concepts, so I thought that it’d be a good use of my time to write 470 more.

Read more at https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/


I love birthday celebrations! They’re so full of joy and reminiscing of years gone by. Discussions of “I knew her when” or “Remember when he… ?” They have a tendency to bring separate communities of people together in unique and fun ways. And we all know how passionate I am about communities…

So when Rain Leander suggested that I attend the SF Bay Area celebration of OpenStack’s 8th birthday as one of my last tasks as interim community lead for RDO, I jumped at the chance! Celebrating a birthday AND getting to know this community better, as well as reuniting with friends I already knew? Sign me up!

I arrived at the event in time to listen to a thought-provoking panel led by Lisa-Marie Namphy, Developer Advocate, community architect, and open source software specialist at Portworx. She spoke with Michael Richmond (NIO), Tony Campbell (Red Hat) and Robert Starmer (Kumulus Tech) about Kubernetes in the Real World.

Lew Tucker, CTO of Cisco, spoke next, and said one of my favorite quotes of the night:

Cloud computing has won… and it’s multiple clouds.

My brain instantly jumped to wondering about the impact that community has had on the fact that it’s not a particular company that has won in this new stage of technology, but a concept.

Dinner and mingling with OpenStack community friends new and old was up next, followed by an awesome recap of how the OpenStack community has grown over the last 8 years.

While 8 years in the grand scheme of history doesn’t seem like much, in the Open Source world, it’s a big accomplishment. The fact that OpenStack is up to 89,000+ community members in 182 countries and supported by 672 organizations is a huge feat and one that deserves to be celebrated!

Speaking of celebrating… we at RDO express our appreciation and love for community through sharing food (Rocky Road ice cream, anyone?) and this celebration was no exception. We provided the best (and cutest) mini cupcakes that I’ve ever had. The Oreo cupcake with cookie frosting gets two thumbs up in my book!

The night ended with smiles and promises of another great year to come.

Here’s to the next 8 years of fostering and building communities, moving the industry forward, and enjoying the general awesomeness that is OpenStack.


We’ve got three posts this week related to OpenStack – Adam Young’s insight on how reviewers can verify whether a patch has test coverage, Zane Bitter’s look at OpenStack’s multiple layers of services, and Nir Yechiel’s introduction to the five things we need to know about networking in Red Hat OpenStack Platform 13. As always, if you know of an article not included in this round up, please comment below or track down leanderthal (that’s me! Rain Leander!) on Freenode IRC #rdo.

Testing if a patch has test coverage by Adam Young

When a user requests a code review, the reviewer is responsible for making sure that the code is tested. While the quality of the tests is a subjective matter, their presence is not; either they are there or they are not there. If they are not there, it is on the developer to explain why or why not.

Read more at https://adam.younglogic.com/2018/07/testing-patch-has-test/

Limitations of the Layered Model of OpenStack by Zane Bitter

One model that many people have used for making sense of the multiple services in OpenStack is that of a series of layers, with the ‘compute starter kit’ projects forming the base. Jay Pipes recently wrote what may prove to be the canonical distillation (this post is an edited version of my response):

Read more at https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations

Red Hat OpenStack Platform 13: five things you need to know about networking by Nir Yechiel, Principal Product Manager, Red Hat

Red Hat OpenStack Platform 13, based on the upstream Queens release, is now Generally Available. Of course this version brings in many improvements and enhancements across the stack, but in this blog post I’m going to focus on the five biggest and most exciting networking features found in this latest release.

Read more at https://redhatstackblog.redhat.com/2018/07/12/red-hat-openstack-platform-13-five-things-you-need-to-know-about-networking/


I know what you’re thinking – another blog round up SO SOON?!? Is it MY BIRTHDAY?!? Maybe! But it’s definitely OpenStack’s birthday this month – eight years old – and there are an absolute TON of blog posts as a result. Well, maybe not a ton, but definitely a lot to write about and therefore, there are a lot more community blog round ups. Expect more of the same as content allows! So, sit back and enjoy the latest RDO community blog round-up while you eat a piece of cake and wish a very happy birthday to OpenStack.

Virtualize your OpenStack control plane with Red Hat Virtualization and Red Hat OpenStack Platform 13 by Ramon Acedo Rodriguez, Product Manager, OpenStack

With the release of Red Hat OpenStack Platform 13 (Queens) we’ve added support to Red Hat OpenStack Platform director to deploy the overcloud controllers as virtual machines in a Red Hat Virtualization cluster. This allows you to have your controllers, along with other supporting services such as Red Hat Satellite, Red Hat CloudForms, Red Hat Ansible Tower, DNS servers, monitoring servers, and of course, the undercloud node (which hosts director), all within a Red Hat Virtualization cluster. This can reduce the physical server footprint of your architecture and provide an extra layer of availability.

Read more at https://redhatstackblog.redhat.com/2018/07/10/virtualize-your-openstack-control-plane-with-red-hat-virtualization-and-red-hat-openstack-platform-13/

Red Hat OpenStack Platform: Making innovation accessible for production by Maria Bracho, Principal Product Manager OpenStack

An OpenStack®️-based cloud environment can help you digitally transform to succeed in fast-paced, competitive markets. However, for many organizations, deploying open source software supported only by the community can be intimidating. Red Hat®️ OpenStack Platform combines community-powered innovation with enterprise-grade features and support to help your organization build a production-ready private cloud.

Read more at https://redhatstackblog.redhat.com/2018/07/09/red-hat-openstack-platform-making-innovation-accessible-for-production/

Converting policy.yaml to a list of dictionaries by Adam Young

The policy.yaml file generated from oslo is not very useful for anything other than feeding to oslo-policy to enforce. If you want to use these values for anything else, it would be much more useful to have each rule as a dictionary, and all of the rules in a list. Here is a little bit of awk to help out:

Read more at https://adam.younglogic.com/2018/07/policy-yaml-dictionary/

A Git Style change management for a Database driven app. by Adam Young

The Policy management tool I’m working on really needs revision and change management. Since I’ve spent so much time with Git, it affects my thinking about change management things. So, here is my attempt to lay out my current thinking for implementing a git-like scheme for managing policy rules.

Read more at https://adam.younglogic.com/2018/07/a-git-style-change-management-for-a-database-driven-app/


So much happened over the past month that it’s definitely time to set off the fireworks! To start, Steve Hardy shares his tips and tricks for TripleO containerized deployments, then Zane Bitter discusses the ever-expanding OpenStack Foundation, while Maria Bracho introduces us to Red Hat OpenStack Platform’s fast forward upgrades in a step-by-step overview, and so very much more. Obviously, prep the barbecue: it’s time for the Fourth of July community blog round-up!

Red Hat OpenStack Platform: Two life-cycle choices to fit your organization by Maria Bracho, Principal Product Manager OpenStack

OpenStack®️ is a powerful platform for building private cloud environments that support modern, digital business operations. However, the OpenStack community’s six-month release cadence can pose challenges for enterprise organizations that want to deploy OpenStack in production. Red Hat can help.

Read more at https://redhatstackblog.redhat.com/2018/07/02/red-hat-openstack-platform-two-life-cycle-choices-to-fit-your-organization/

CPU model configuration for QEMU/KVM on x86 hosts by Daniel Berrange

With the various CPU hardware vulnerabilities reported this year, guest CPU configuration is now a security critical task. This blog post contains content I’ve written that is on its way to become part of the QEMU documentation.

Read more at https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/

Requirements for an OpenStack Access Control Policy Management Tool by Adam Young

“We need a read only role.”

Read more at https://adam.younglogic.com/2018/06/requirements-for-an-openstack-access-control-policy-management-tool/

Red Hat OpenStack Platform 13 is here! by Rosa Guntrip

Accelerate. Innovate. Empower. In the digital economy, IT organizations can be expected to deliver services anytime, anywhere, and to any device. IT speed, agility, and innovation can be critical to help stay ahead of your competition. Red Hat OpenStack Platform lets you build an on-premise cloud environment designed to accelerate your business, innovate faster, and empower your IT teams.

Read more at https://redhatstackblog.redhat.com/2018/06/27/red-hat-openstack-platform-13-is-here/

Red Hat Certified Cloud Architect – An OpenStack Perspective – Part Two by Chris Janiszewski – Senior OpenStack Solutions Architect – Red Hat Tiger Team

Previously we learned about what the Red Hat Certified Architect certification is and what exams are included in the “OpenStack-focused” version of the certification. This week we want to focus on personal experience and benefits from achieving this milestone.

Read more at https://redhatstackblog.redhat.com/2018/06/24/red-hat-certified-cloud-architect-an-openstack-perspective-part-two/

Red Hat OpenStack Platform fast forward upgrades: A step-by-step overview by Maria Bracho, Principal Product Manager OpenStack

New in Red Hat®️ OpenStack®️ Platform 13, the fast forward upgrade feature lets you easily move between long-life releases, without the need to upgrade to each in-between release. Fast forward upgrades fully containerize Red Hat OpenStack Platform deployment to simplify and speed the upgrade process while reducing interruptions and eliminating the need for additional hardware. Today, we’ll take a look at what the fast forward upgrade process from Red Hat OpenStack Platform 10 to Red Hat OpenStack Platform 13 looks like in practice.

Read more at https://redhatstackblog.redhat.com/2018/06/22/red-hat-openstack-platform-fast-forward-upgrades-a-step-by-step-overview/

Red Hat Certified Cloud Architect – An OpenStack Perspective – Part One by Chris Janiszewski – Senior OpenStack Solutions Architect – Red Hat Tiger Team

The Red Hat Certified Architect (RHCA) is the highest certification provided by Red Hat. To many, it can be looked at as a “holy grail” of sorts in open source software certifications. It’s not easy to get. In order to receive it, you not only need to already be a Red Hat Certified Engineer (RHCE) for Red Hat Enterprise Linux (with the Red Hat Certified System Administrator (RHCSA) as a prerequisite) but also pass additional exams from various technology categories.

Read more at https://redhatstackblog.redhat.com/2018/06/21/red-hat-certified-cloud-architect-an-openstack-perspective-part-one/

Tips on searching ceph-install-workflow.log on TripleO by John

  1. Only look at the logs relevant to the last run

Read more at http://blog.johnlikesopenstack.com/2018/06/tips-on-searching-ceph-install.html

TripleO Ceph Integration on the Road in June by John

The first week of June I went to an upstream TripleO workshop in Brno. The labs we used are at https://github.com/redhat-openstack/tripleo-workshop

Read more at http://blog.johnlikesopenstack.com/2018/06/tripleo-ceph-integration-on-road-in-june.html

The Expanding OpenStack Foundation by Zane Bitter

The OpenStack Foundation has begun the process of becoming an umbrella organisation for open source projects adjacent to but outside of OpenStack itself. However, there is no clear roadmap for the transformation, which has resulted in some confusion. After attending the joint leadership meeting with the Foundation Board of Directors and various Forum sessions that included some members of the board at the (2018) OpenStack Summit in Vancouver, I believe I can help shed some light on the situation. (Of course this is my subjective take on the topic, and I am not speaking for the Technical Committee.)

Read more at https://www.zerobanana.com/archive/2018/06/14#osf-expansion

Configuring a static address for wlan0 on Raspbian Stretch by Lars Kellogg-Stedman

Recent releases of Raspbian have adopted the use of dhcpcd to manage both dynamic and static interface configuration. If you would prefer to use the traditional /etc/network/interfaces mechanism instead, follow these steps.

Read more at https://blog.oddbit.com/2018/06/14/configuring-a-static-address-f/

Configuring collectd plugins with TripleO by mrunge

A way of deploying OpenStack is to use TripleO. This takes the approach of deploying a small OpenStack environment, and then using the OpenStack-provided infrastructure and tools to deploy the actual production environment.

Read more at http://www.matthias-runge.de/2018/06/08/tripleo-collectd/

TripleO Containerized deployments, debugging basics by Steve Hardy

Since the Pike release, TripleO has supported deployments with OpenStack services running in containers. Currently we use docker to run images based on those maintained by the Kolla project. We already have some tips and tricks for container deployment debugging in tripleo-docs, but below are some more notes on my typical debug workflows.

Read more at https://hardysteven.blogspot.com/2018/06/tripleo-containerized-deployments.html


Who’s up for a rematch? Rocky Milestone 2 is here and we’re ready to rumble! Join us on June 14 & 15 (next Thursday and Friday) for an awesome time of taking down bugs and fighting errors in the most recent release. We won’t be pulling any punches.

Want to get in on the action? We’re looking for developers, users, operators, quality engineers, writers, and, yes, YOU. If you’re reading this, we think you’re a champion and we want your help!

Here’s the plan:
We’ll have packages for the following platforms:
* RHEL 7
* CentOS 7

You’ll want a fresh install with the latest updates installed so that there are no hard-to-reproduce interactions with other things.

We’ll be collecting feedback, writing up tickets, filing bugs, and answering questions.

Even if you only have a few hours to spare, we’d love your help taking this new version for a spin to work out any kinks. Not only will this help identify issues early in the development process, but you can be one of the first to cut your teeth on the latest versions of your favorite deployment methods like TripleO, PackStack, and Kolla.

Interested? We’ll be gathering on #rdo (on Freenode IRC) for any associated questions/discussion, and working through the “Does it work?” tests.

As Rocky said, “The world ain’t all sunshine and rainbows,” but with your help, we can keep moving forward and make the RDO world better for those around us. Hope to see you on the 14th & 15th!


I’m in a bit of shock that it’s already June… anyone else share that feeling? With summer around the corner for those of us in the northern hemisphere (or Juneuary as we call it in San Francisco), there’s a promise of vacations ahead. Be sure to take us along on your various adventures — sharing about your new favorite hacks, the projects you’re working on, and the conferences you’re traveling to. We love hearing what you’re up to! Speaking of which… here’s what you’ve been blogging about recently:

TripleO deep dive session #13 (Containerized Undercloud) by Carlos Camacho

This is the 13th release of the TripleO “Deep Dive” sessions. Thanks to Dan Prince & Emilien Macchi for this deep dive session about the next step of the TripleO Undercloud evolution. In this session, they explain in detail the effort to re-architect the Undercloud to move towards containers in order to reuse the containerized Overcloud ecosystem.

Read more at https://www.anstack.com/blog/2018/05/31/tripleo-deep-dive-session-13.html

Tracking Quota by Adam Young

This OpenStack Summit marks the third that I have attended where we’ve discussed the algorithms to try and record quota in Keystone but not update it on each resource allocation and free.

Read more at https://adam.younglogic.com/2018/05/tracking-quota/

Don’t rewrite your driver. 80 storage drivers for containers rolled into one! by geguileo

Do you work with containers but your storage doesn’t support your Container Orchestration system? Have you or your company already developed an Openstack/Cinder storage driver and now you have to do it again for containers? Are you having trouble deciding how to balance your engineering force between storage driver development in OpenStack, Containers, Ansible, etc? Then read on, as your life may be about to get better.

Read more at https://gorka.eguileor.com/cinderlib-csi/

Ansible Storage Role: automating your storage solutions by geguileo

Were you in the middle of writing your Ansible playbooks to automate your software provisioning, configuration, and application deployment when you realized you had to manage your storage as well? And it turns out that each of your storage solutions has a completely different Ansible module. Now you have to figure out how each module works to create ad-hoc tasks for each one. What a pain! If this has happened to you, or if you are interested in automating your storage solutions, you may be interested in the new Ansible Storage Role.

Read more at https://gorka.eguileor.com/ansible-role-storage/

Cinderlib: Every storage driver on a single Python library by geguileo

Wouldn’t it be great if we could manage any storage array using a single Python library that provided the right storage management abstraction? Well, this is no longer a beautiful dream, it has become a reality! Keep reading to find out how.

Read more at https://gorka.eguileor.com/cinderlib/

“Ultimate Private Cloud” Demo, Under The Hood! by Steven Hardy, Senior Principal Software Engineer

At the recent Red Hat Summit in San Francisco, and more recently the OpenStack Summit in Vancouver, the OpenStack engineering team worked on some interesting demos for the keynote talks. I’ve been directly involved with the deployment of Red Hat OpenShift Platform on bare metal using the Red Hat OpenStack Platform director deployment/management tool, integrated with openshift-ansible. I’ll give some details of this demo, the upstream TripleO features related to this work, and insight around the potential use-cases.

Read more at https://redhatstackblog.redhat.com/2018/05/22/ultimate-private-cloud-demo-under-the-hood/

Testing Undercloud backup and restore using Ansible by Carlos Camacho

Testing the Undercloud backup and restore: it is possible to test how the Undercloud backup and restore should be performed using Ansible.

Read more at https://www.anstack.com/blog/2018/05/18/testing-undercloud-backup-and-restore-using-ansible.html

Your LEGO® Order Has Been Shipped by rainsdance

In preparation for the Red Hat Summit this week and OpenStack Summit in a week, I put together a hardware demo to sit in the RDO booth.

Read more at http://groningenrain.nl/your-lego-order-has-been-shipped/

Introducing GPUs to the CERN Cloud by Konstantinos Samaras-Tsakiris

High-energy physics workloads can benefit from massive parallelism — and as a matter of fact, the domain faces an increasing adoption of deep learning solutions. Take for example the newly-announced TrackML challenge [7], already running in Kaggle! This context motivates CERN to consider GPU provisioning in our OpenStack cloud, as computation accelerators, promising access to powerful GPU computing resources to developers and batch processing alike.

Read more at https://openstack-in-production.blogspot.com/2018/05/introducing-gpus-to-cern-cloud.html

A modern hybrid cloud platform for innovation: Containers on Cloud with Openshift on OpenStack by Stephane Lefrere

Market trends show that due to long application life-cycles and the high cost of change, enterprises will be dealing with a mix of bare-metal, virtualized, and containerized applications for many years to come. This is true even as greenfield investment moves to a more container-focused approach.

Read more at https://redhatstackblog.redhat.com/2018/05/08/containers-on-cloud/

Using a TM1637 LED module with CircuitPython by Lars Kellogg-Stedman

CircuitPython is “an education friendly open source derivative of MicroPython”. MicroPython is a port of Python to microcontroller environments; it can run on boards with very few resources such as the ESP8266. I’ve recently started experimenting with CircuitPython on a Wemos D1 mini, which is a small form-factor ESP8266 board.

Read more at https://blog.oddbit.com/2018/05/03/using-a-tm-led-module-with-cir/

ARA Records Ansible 0.15 has been released by DM Simard

I was recently writing that ARA was open to limited development for the stable release in order to improve the performance for larger scale users.

Read more at https://dmsimard.com/2018/05/03/ara-records-ansible-0.15-has-been-released/

Highlights from the OpenStack Rocky Project Teams Gathering (PTG) in Dublin by Rich Bowen

Last month in Dublin, OpenStack engineers gathered from dozens of countries and companies to discuss the next release of OpenStack. This is always my favorite OpenStack event, because I get to do interviews with the various teams, to talk about what they did in the just-released version (Queens, in this case) and what they have planned for the next one (Rocky).

Read more at https://redhatstackblog.redhat.com/2018/04/26/highlights-from-the-openstack-rocky-project-teams-gathering-ptg-in-dublin/

Red Hat Summit 2018: HCI Lab by John

I will be at Red Hat Summit in SFO on May 8th jointly hosting the lab Deploy a containerized HCI IaaS with OpenStack and Ceph.

Read more at http://blog.johnlikesopenstack.com/2018/04/red-hat-summit-2018-hci-lab.html
