OpenStack Superuser is a publication built to chronicle the work of superusers and their many accomplishments personally, professionally and organizationally. The emphasis is on a blend of original journalism and user-generated content, ranging from technical to business-level issues with feature stories, case studies, tips and videos for OpenStack cloud architects and administrators.
No matter how you stack up the projections, containers will only grow in popularity.
Olaph Wagoner launched into his talk about micro-services by citing research that projects container revenue will approach $3.5 billion in 2021, up from a projected $1.5 billion.
Wagoner, a software engineer and developer advocate at IBM, isn’t 100 percent sure about that prediction, but he is certain that containers are here to stay.
In a talk at the recent OpenInfra Days Vietnam, he offered an overview of micro-services, Kubernetes and Istio.
There is no industry consensus yet on the properties of micro-services, he says, though defining characteristics include being independently deployable and easy to replace.
As for these services being actually small, that’s up for debate. “If you have a hello world application, that might be considered small if all it does is print to the console,” he explains. “A database server that runs your entire application could still be considered a micro-service, but I don’t think anybody would call that small.”
Kubernetes defines itself as a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. “Simply put, it’s a way to manage a bunch of containers or services whatever you want to call them,” he says.
Istio is an open-source service mesh that layers transparently onto existing distributed applications, allowing you to connect, secure, control and observe services. And one last definition: service mesh is the network of micro-services that make up these distributed applications and the interactions between them.
What does this mean for users?
“Istio expands on all the things you can do with the Kubernetes cluster,” Wagoner says. And that’s a long list:
Automatic load balancing for HTTP, gRPC, WebSocket and TCP traffic
Fine-grained control of traffic behavior with rich routing rules, retries, failovers and fault injection
A pluggable policy layer and configuration API supporting access controls, rate limits and quotas
Secure service-to-service authentication with strong identity assertions between services in a cluster
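To give a flavor of those routing rules, here is a hedged sketch of an Istio VirtualService that splits traffic 90/10 between two versions of a service. The service name “reviews” and the v1/v2 subsets are hypothetical stand-ins, not anything from the talk.

```shell
# Write a weighted-routing rule to a file; "reviews" and the
# v1/v2 subsets are illustrative names only.
cat > reviews-route.yaml <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF
# On a cluster with Istio installed, apply it with:
# kubectl apply -f reviews-route.yaml
```

This is the same mechanism behind the retries, failovers and fault injection mentioned above: they are all declared on the VirtualService rather than coded into the application.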
“The coolest things are the metrics, logs and traces,” he says. “All of a sudden you can read logs to your heart’s content and know exactly which services are talking to each other, how often and who’s mad at who.”
At the 13:07 mark, Wagoner goes into detail about how Istio works with OpenStack and why it’s worthwhile.
Once you’ve got Kubernetes running with Istio installed on top, the K8s admin can create a cluster using the OpenStack API. The K8s user in OpenStack can simply use the cluster; they never see the APIs doing the heavy lifting behind the scenes. All of this is possible thanks to the Kubernetes OpenStack cloud provider. “What does this bad boy do?” Wagoner asks. “The Kubernetes services sometimes need stuff from the underlying cloud, services, endpoints, etc., and that’s the goal.” Ingress (part of Mixer) is a perfect example, he says: it relies on the OpenStack cloud provider for load balancing and for adding endpoints.
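A minimal sketch of the cloud provider at work, assuming a hypothetical app called “hello”: declaring a Service of type LoadBalancer is what triggers Kubernetes to call back into the OpenStack API, create a load balancer there and register the cluster nodes as members.

```shell
# A Service manifest that asks the underlying cloud for a load
# balancer; the name "hello" and ports are illustrative.
cat > hello-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
EOF
# Against a cluster with the OpenStack cloud provider enabled:
# kubectl apply -f hello-lb.yaml
# kubectl get service hello   # EXTERNAL-IP is allocated by OpenStack
```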
“This is my favorite part of the whole idea of why you would want to make OpenStack run Kubernetes in the first place: the idea of mesh expansion. You’ve got Kubernetes running on your OpenStack cloud and it’s telling you everything your cluster is doing. You can expand that service mesh to include not only virtual machines from your OpenStack cloud but also bare metal instances.”
Catch the full 27-minute talk here or download the slides.
Over the past two years, the software defined infrastructure (SDI) team at Linaro has worked to successfully deliver a cloud running on Armv8-A AArch64 (Arm64) hardware that’s interoperable with any OpenStack cloud.
To measure interoperability with other OpenStack clouds, we use the OpenStack interop guidelines as a benchmark. Since 2016, we’ve run tests against different OpenStack releases: Newton, Queens and most recently Rocky. With Rocky, OpenStack on Arm64 hardware passed 100 percent of the tests in the 2018.02 guidelines, with enough projects enabled that Linaro’s deployment with Kolla and kolla-ansible is now compliant. This is a big achievement: Linaro is now able to offer a cloud that is, from a user perspective, fully interoperable while running on Arm64 hardware.
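For the curious, running a guideline looks roughly like this: refstack-client executes the Tempest tests named in the guideline’s required-test list against the cloud described by a Tempest configuration file. The path and URL parameters below are illustrative, not our exact invocation.

```shell
# Run the 2018.02 interop guideline's required platform tests
# (requires refstack-client installed and a valid tempest.conf
# pointing at the cloud under test).
refstack-client test -c ~/tempest.conf \
    --test-list "https://refstack.openstack.org/api/v1/guidelines/2018.02/tests?target=platform&type=required"
```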
So what have we done so far towards making Arm a first-class citizen in OpenStack?
The Linaro Reference Architecture: A simple OpenStack multi-node deployment
We started with a handful of Arm64 servers, installed Debian/CentOS on them and tried to use distro packages to create a very basic OpenStack multi-node deployment. At the time, Mitaka was the current release and we didn’t get very far. We kept finding and fixing bugs, trying to contribute them upstream and to Linaro’s setup simultaneously, with all the backporting problems that entails. In the end, we decided to build our own packages from OpenStack master (which would later become Newton) and start testing/fixing issues as we found them.
The deployment was a very simple three-node control plane with N compute nodes, plus Ceph for storage of virtual machines and volumes. It was called the Linaro Reference Architecture to ensure that all Linaro engineers conducting testing remotely generated comparable results and were able to accurately reproduce failures.
The Linaro Developer Cloud: building data centers
In 2016, Arm hardware was very scarce and challenging to manage (with a culture of boards sitting on engineers’ desks). The team therefore built three data centers (in the United States, China and the United Kingdom) so that Linaro member engineers would find it easier to share hardware and automate workloads.
Linaro co-location at London data center (five racks)
During Newton, Linaro cabled servers, installed operating systems (almost by hand from a local PXE server) and tried to install/run OpenStack master with Ansible scripts that we wrote to install our packages. A cloud was installed in the UK with this rudimentary tooling and a few months later we were at the OpenStack Summit Barcelona, demoing it during the OpenStack Interoperability Challenge 2016.
These were early days for Linaro’s cloud offering and the workload was very simple (LAMP), but it spun up five VMs and ran on a multi-node Newton Arm64 cloud with a Ceph backend — successfully and fully automated without any architecture-specific changes. Linaro’s clouds are built on hardware donated by Linaro members and capacity from the clouds is contributed to specific open-source projects for Arm64 enablement under the umbrella of the Linaro Developer Cloud.
After that Interoperability Challenge, a team of fewer than five people spent significant time working on the Newton release, fixing bugs on the entire stack (kernel, qemu, libvirt, OpenStack) and keeping up with new OpenStack features. For every new release we used, the interop bar was raised: we were testing against a moving target, the interop guidelines and OpenStack itself.
Going upstream: Kolla
During Pike we decided to move to containers with Kolla, rather than building our own. Working with an upstream project meant our containers would be consumable by others and they would be production-ready from the start. With this objective in mind, we joined the Kolla team and started building Arm64 containers alongside the ones already being built. Our goal was to fix the scripts to be multi-architecture aware and ensure we could build as many containers as necessary to run a production cloud. Kolla builds a lot of containers that we don’t really use or test on our basic setup, so we only enabled a subset of them. We agreed with the Kolla team that our containers would be Debian-based, so we added Debian support back into Kolla; it was at risk of being deprecated because no one was responsible for maintaining it at the time.
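As a rough sketch of that build step (the image list and tag are examples, not our exact pipeline): kolla-build is pointed at a Debian base and a subset of images, and when run on an Arm64 host it produces Arm64 containers natively.

```shell
# Build a subset of Debian-based Kolla images on an Arm64 builder
# (requires Docker; the trailing names are regex filters selecting
# which images to build).
pip install kolla
kolla-build --base debian --tag rocky-arm64 \
    keystone glance nova neutron cinder
```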
Queens was the first release that we could install with kolla-ansible; Rocky is the first one that’s truly interoperable with other OpenStack deployments. For comparison, during Pike we didn’t have object storage support due to a lack of manpower to test it. This support was added during Queens and enabled as part of the services of the UK Developer Cloud.
Once Linaro had working containers and working kolla-ansible playbooks to deploy them, we started migrating the Linaro Developer Cloud away from the self-built Newton packages to a Kolla-based deployment.
Being part of upstream Kolla efforts also meant committing to test the code we wrote. This is something we’ve started doing, but there’s still more ground to cover. As a first step, Linaro contributed capacity from its China cloud to OpenStack-Infra, and the infra team was most helpful in bringing up all their tooling on Arm64. This cloud has connectivity challenges when talking to the rest of the OpenStack Foundation infrastructure that still need resolution; in the meantime, Linaro has given OpenStack-Infra access to the UK cloud.
The UK Linaro Developer Cloud was upgraded to Rocky before ever going into production with Queens. This means it will be Linaro’s first availability zone that is fully interoperable with other OpenStack clouds. The other zones will shortly be upgraded to a Kolla-based Rocky deployment.
We’ve also been contributing changes that weren’t necessarily architecture-related but that matter for the day-to-day operation of the Linaro Developer Cloud. For example, we’ve added changes to Kolla that make monitoring easier to deploy. For the Rocky cycle, Linaro is the fifth-largest contributor to the Kolla project, according to Stackalytics data.
Once the Linaro Developer Cloud is fully functional on OpenStack-Infra, we’ll be able to add gate jobs for the Arm64 images and deployments to the Kolla gates. This is currently a work in progress. The agreement with Infra is that any project that wants to have a go at running on Arm64 can also run tests on the Linaro Developer Cloud, if desired. This enables anyone in the OpenStack community to run tests on Arm64. Linaro is still working on adding enough capacity to make this a reality during peak testing times; in the meantime, experimental runs can be added to get ready for it.
It’s been an interesting journey, particularly when asked by engineers if we were running OpenStack on Raspberry Pi! Our response has always been: “We run OpenStack on real servers with IPMI, multiple hard drives, reasonable amounts of memory and Arm64 CPUs!”
We’re actively using servers with processors from Cavium, HiSilicon, Qualcomm and others. We’ve also found and fixed bugs in the kernel, in the server firmware and in libvirt, and added features like a guest console. Libvirt made multi-architecture improvements when it reached version 3; we’ve been eagerly keeping up with libvirt over the past releases, especially when Arm64 improvements came along. There are still gaps in libvirt’s ability to fully cope with Arm64 servers and hardware configurations, and we’re looking into all the missing pieces on the stack needed for live migration to work across different vendors.
As with any first deployment, we found issues in Ceph and OpenStack when running the Linaro Developer Cloud in production since running tests on a test cloud is hardly equivalent to having a long standing cloud with VMs that survive host reboots and upgrades. Subsequently, we’ve had to improve maintainability and debuggability on Arm64. In our 18.06 release (we produce an Enterprise Reference Platform that gives interested stakeholders a preview of the latest packages), we added a few patches to the kernel that allow us to get crashdumps when things go wrong.
We’re currently starting to work with the OpenStack-Helm and LOCI teams to see if we can deploy Kubernetes smoothly on Arm64.
If you are interested in running your projects on Arm64, get in touch with us!
About the author
Gema Gomez, technical lead of the SDI team at Linaro Ltd, joined the OpenStack community in 2014.
To this day I see confusion surrounding the terms tenant, provider and external networks. No doubt countless words have been spent trying to tease apart these concepts, so I thought that it’d be a good use of my time to write 470 more.
At a glance
A closer look
Tenant networks are created by users and Neutron is configured to automatically select a network segmentation type like VXLAN or VLAN. The user cannot select the segmentation type.
Provider networks are created by administrators, who can set one or more of the following attributes:
Segmentation type (flat, VLAN, Geneve, VXLAN, GRE)
Segmentation ID (VLAN ID, tunnel ID)
Physical network tag
Any attributes not specified will be filled in by Neutron.
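Sketched with the openstack client, an administrator pre-creating a VLAN provider network with all three attributes pinned might look like this (the names, VLAN ID and subnet range are illustrative):

```shell
# Admin-only: create a provider network mapped to VLAN 101 on the
# physical network tagged "physnet1".
openstack network create datacentre \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 101
# Give it a subnet that matches the pre-existing data center network.
openstack subnet create datacentre-subnet --network datacentre \
    --subnet-range 203.0.113.0/24 --gateway 203.0.113.1
```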
OpenStack Neutron supports self-service networking – the notion that a user in a project can articulate their own networking topology, completely isolated from other projects in the same cloud, via the support of overlapping IPs and other technologies. A user can create their own network and subnets without the need to open a support ticket or the involvement of an administrator. The user creates a Neutron router, connects it to the internal and external networks (defined below) and off they go. Using the built-in ML2/OVS solution, this implies using the L3 agent, tunnel networks, floating IPs and liberal use of NAT techniques.
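The self-service flow just described, sketched with the openstack client (all names are illustrative, and “public” is assumed to be a pre-existing external network):

```shell
# A user builds their own isolated topology, no admin required.
openstack network create mynet
openstack subnet create mysubnet --network mynet \
    --subnet-range 192.0.2.0/24
openstack router create myrouter
openstack router add subnet myrouter mysubnet
openstack router set myrouter --external-gateway public
# Allocate a floating IP for NAT to a server on mynet.
openstack floating ip create public
```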
Provider networks (read: pre-created networks) are an entirely different networking architecture for your cloud. You forgo the L3 agent, tunneling, floating IPs and NAT. Instead, the administrator creates one or more provider networks, typically using VLANs, shares them with users of the cloud and disables the ability of users to create networks, routers and floating IPs. When a new user signs up for the cloud, the pre-created networks are already there for them to use. In this model, the provider networks are typically routable: they are advertised to the public internet via physical routers using BGP. Therefore, provider networks are often said to be mapped to pre-existing data center networks, both in terms of VLAN IDs and subnet properties.
External networks are a subset of provider networks with an extra flag enabled (aptly named ‘external’). The ‘external’ attribute of a network signals that virtual routers can connect their external-facing interface to the network. When you use the UI to give your router external connectivity, only external networks will show up in the list.
To summarize, I think that the confusion is due to a naming issue. Had the network types been called: self-service networks, data center networks and external networks, this blog post would not have been necessary and the world would have been even more exquisite.
About the author
Assaf Muller manages the OpenStack network engineering team at Red Hat. This post first appeared on his blog.
Superuser is always looking for tutorials and opinion pieces about open infrastructure; get in touch at editorATopenstack.org
Over the years, the term network functions virtualization may have deviated a little bit from its original definition. This post tries to get back to basics and to start a discussion around the possible deviations.
Let’s start by going back to 2012 and the question: What is NFV trying to achieve?
If you read the original white paper, you’ll see that it was written to address common concerns of the telco industry, brought on by the increasing variety of proprietary hardware appliances, including:
Capital expenditure challenges
Space and power to accommodate boxes
Scarcity of skills necessary to design, integrate and operate them
Procure-design-integrate-deploy cycles repeated with little or no revenue benefit due to accelerated end-of-life
Hardware life cycles becoming shorter
You’ll also see that NFV was defined to address these problems by “…leveraging standard IT virtualization technology to consolidate many network equipment types onto industry standard high-volume servers, switches and storage, which could be located in data centers, network nodes and in the end-user premises…”
In other words, to implement our network functions and services in a virtualized way, using commodity hardware — but is this really happening?
Almost six years later, we’re still at the early stages of NFV, even when the technology is ready and multiple carriers are experimenting with different vendors and integrators with the vision of achieving the NFV promise. So, is the vision becoming real?
Some initiatives that are clearly following that vision are AT&T’s Flexware and Telefonica’s UNICA projects, where the operators are deploying vendor agnostic NFVI and VIM solutions, creating the foundations for their VNFs and Network Services.
However, most NFV implementations around the world are still not led by the operators, but by the same vendors and integrators who participated in the original root cause of the problem (see: “increasing variety of proprietary hardware appliances.”) This is not inherently bad, because they’re all evolving, but the risk lies in the fact that most of them still rely on a strong business based on proprietary boxes.
The result is that “vertical” NFV deployments, comprised of NFV full stacks, from servers to VNFs, are all provided, supported (and understood) by just a single vendor. There are even instances of some carriers deploying the whole NFV stack multiple times, sometimes even once per VNF (!), so we’re back to appliances with a VNF tag on them.
We could accept this is part of a natural evolution, where operators feel more comfortable working with the trusted vendor or integrator, while trying to start getting familiar with the new technologies as a first step. However, this approach might be creating a distortion in the market, making NFV architectures look more expensive than traditional legacy solutions, when, in fact, people are expecting the opposite.
But in the future, is this the kind of NFV that vendors and integrators really want to recommend to their customers? Is this the kind of NFV deployments operators should accept in their networks, with duplicated components and disparate technologies all over the place?
To comply with the NFV vision, operators should lead their NFV deployments and shift gradually from the big appliances world (in all of its forms, including big NFV full stacks) towards “horizontal” NFV deployments, where a common telco cloud (NFVI/VIM) and management (MANO) infrastructure is shared by all the VNFs, with operators having complete control and knowledge of that new telco infrastructure.
Most of the industry believes in that vision, it is technically possible and it’s even more cost-effective, so what is playing against it?
I think we need to admit that the same vendors and integrators will take time to evolve their business models to match this vision. In the meantime, many operators have started to realize that they need to invest in understanding new technologies, in creating new roles to take ownership of their infrastructure and even in opening the door to a new breed of integrators: those born with the vision of software-defined, virtualized network functions, with commodity hardware and open-source technologies in their DNA.
Three key elements are not only recommended, but necessary for any horizontal NFV deployment to be feasible:
Embrace the NFV concept, with both technology and business outcomes, ensuring the move away from appliances and/or full single-vendor proprietary stacks towards a commodity infrastructure.
Get complete control of that single infrastructure and its management and orchestration stack, which should provide life cycle management for all the network services at different abstraction levels.
Maximize the usage of components based on open-source technologies, as a mechanism to (1) accelerate innovation by using solutions built by and for the whole industry, and (2) to decrease dependency on a reduced set of vendors and their proprietary architectures.
Regarding the latter, OpenStack is playing a key role by providing software validated by the whole industry for managing a telco cloud as an NFV VIM, while other open-source projects like Open Source MANO and ONAP are starting to provide the management software needed at higher layers of abstraction to control the life cycle of virtualized network services.
Gianpietro Lavado is a network solutions architect interested in the latest software technologies to achieve efficient and innovative network operations. He currently works at Whitestack, a company whose mission is to promote SDN, NFV, cloud and related deployments all around the world, including a new OpenStack distribution.
If you’ve been to a major open-source conference in, say, the last five years, you’ve either spotted Nithya Ruff’s warm smile in the hallway track or heard her speak.
Currently charged with founding and growing an open-source practice at Comcast, Ruff was also founding director of the open source strategy and engagement office for SanDisk and chaired the SanDisk Open Source Working Group. Her open-source credentials stretch back to 1998 while working at the venerable SGI and include projects like Tripwire, Wind River Linux, Yocto Project, Tizen Automotive, Ceph and OpenStack.
Few of us know how she got started, though. The super advocate shares her origin story in a recent interview for the documentary series “Chasing Grace.” The series was created to share the stories of women who faced adversity and became change agents in the tech workplace. Inspired by pioneering computer scientist Grace Hopper, early sponsors include the Linux Foundation and the Cloud Foundry Foundation.
Growing up in Bangalore, India, Ruff saw a path forward thanks to her engineer father, who not only hired women in technical positions but also included her when colleagues visited their home. And while her mother advocated for an early arranged marriage, he nixed the idea in favor of his daughter making it on her own first.
“He showed me that there were very strong careers that women could have in business,” she says. “He really involved me in those conversations, treated me as an equal.” He insisted that she finish her education before settling down and when one of his colleagues planted the idea of studying in the United States, choosing computer science seemed natural.
“It’s important to have people around who push us to do more than we believe we can,” she says. These days Ruff hopes to inspire the next generation as an active participant in cross-community initiatives, including the recent Diversity Empowerment Summit and the Women of OpenStack group.
Stay tuned for details on how you can catch “Chasing Grace” screenings at upcoming OpenStack Summits.
Superuser would love to hear how you got started in open source, drop us a line at editorATopenstack.org
Zuul drives continuous integration, delivery and deployment systems with a focus on project gating and interrelated projects. In a series of interviews, Superuser asks users about why they chose it and how they’re using it.
Here Superuser talks to the Software Factory team: David Moreau Simard, Fabien Boucher, Nicolas Hicher, Matthieu Huin and Tristan Cacqueray.
Software Factory (SF) is a collection of services that provides a powerful platform for building software. Designed to be deployed in anyone’s infrastructure, the project started four-and-a-half years ago and is heavily influenced by the OpenStack project infrastructure. The team operates an instance at https://softwarefactory-project.io where the project is being developed. RDO’s CI at https://review.rdoproject.org is also based on an SF deployment.
One of your blog posts explains that “SF is for Zuul what OpenShift is for Kubernetes.” What are the advantages of Software Factory for users?
SF is a distribution that integrates all the components as CentOS packages, with an installer/operator named sfconfig to manage service configuration, backup, recovery and upgrades. The main advantage for our users is simplicity of use. There’s a single configuration file to customize the settings and all the services are configured with working defaults, so it’s usable out of the box. For example, Zuul is set up with default pipelines and the base job automatically publishes job artifacts to a log server.
Another advantage is how customizable deployments can be: whether you need a whole software development pipeline from scratch, from a code review system to collaborative pads, or if you just want to deploy a minimal Zuul gating system running on containers, or if you need an OpenStack third party CI quickly up and running, SF has got you covered.
SF also sets up CI/CD jobs for the configuration repository, similar to the openstack-infra/project-config repository. For example, SF enables users to submit project creation requests through code review and a config-update post job automatically applies the new configuration when approved.
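To give a flavor of the kind of change that flows through such a configuration repository, here is a minimal Zuul project configuration; the job and playbook names are hypothetical.

```shell
# A project's .zuul.yaml: define one job and attach it to the
# check pipeline, so it runs on every proposed change.
cat > .zuul.yaml <<'EOF'
- job:
    name: mytests
    parent: base
    run: playbooks/mytests.yaml

- project:
    check:
      jobs:
        - mytests
EOF
```

In an SF deployment, a change like this goes through code review and takes effect automatically once approved, via the config-update job described above.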
The RDO project has been using SF successfully for about three years now. The goal is to keep the same user experience from upstream to downstream and to allow them to share configurations, jobs, pipelines and easily run third party CI jobs.
With the addition of GitHub support to Zuul, SF can now be used without Gerrit and people are looking into running their own gating CI/CD for GitHub organizations with SF to use the log processing features.
As mentioned earlier, we dogfood the platform since we do our CI on SF, but we also have a release pipeline for the developers team’s blog: the CI pre-renders the tentative articles and once they’re approved they are automatically published.
The latest Software Factory release adds support for tenant deployment and Zuul configuration management from the resources. What does this mean for users and why is it important to ship now?
While Zuul supports multi-tenancy, the rest of the services such as Gerrit or log processors do not. The latest SF release enables tenants to deploy these services on a dedicated and isolated instance. This is important for us because we can now consolidate the RDO deployment to share the same Zuul instance used by softwarefactory-project.io, so that we can rationalize the use of our shared resources.
There are a lot of features for possible Zuul future contributions, which are highest priority?
The highest priority is to provide a seamless experience for Jenkins users. Similar to the OpenStack project infrastructure, SF used to rely on Jenkins for job execution. To improve the user experience, we contributed features such as the jobs and builds web interfaces in Zuul. However, we still lack the capacity to abort and manually trigger jobs through the REST API and we’d like to see that sooner rather than later.
Another goal is to be able to run jobs on OpenShift and we would like to see Nodepool be able to do that too.
Generally speaking, a modular, plugin-based architecture would allow third parties like us to experiment with Zuul without burdening the community with contributions that might be of lower priority to them.
What are you hoping the Zuul community will focus on / deliver?
As packagers of Zuul for CentOS, it would make our lives easier if Zuul followed a release schedule. It may sound at odds with the continuous-deployment trend, but when packaging and consuming RPMs, stability and reliability are of the utmost importance. We rely on Zuul’s continuous integration and testing capabilities to ensure that.
The user experience can really make or break Zuul’s acceptance outside of the OpenStack community. As users and operators who have helped the RDO community migrate from Jenkins to Zuul 3 and Ansible, we realized how important it is to educate users about what Zuul is (the concept of gating system itself being very novel) and what it does (for instance, it does not aim to be a one-to-one replacement of Jenkins).
We hope the community will work on spreading the word, help decrease the entry costs and learning curves and generally improve Zuul’s user experience. Of course we will help toward this goal as well!
The days of annual, monthly or even weekly releases are long gone. How is CI/CD defining new ways to develop and manage software within open infrastructure?
As hinted above, we have a more nuanced opinion regarding continuous deployment: it’s great for some workflows but overkill for some others, so it really depends on your production environment and needs. We are, however, firm believers in continuous integration in general and code gating in particular and we hope to see more development teams adopting the “OpenStack way” of doing CI, possibly after trying out SF!
Continuous delivery is a big challenge for packaging RPMs, especially when following an upstream source and especially when this upstream source is OpenStack where hundreds of changes get merged every day. The RDO community came up with a “master chaser” called DLRN which, combined with Software Factory, allows them to ensure that OpenStack packages are always healthy, or at least notify packagers as fast as possible when a change introduces a problem. This new way of managing software is what allows RDO to deliver working OpenStack packages for CentOS just mere hours after an upstream release.
Generally speaking, CI/CD becomes incredibly powerful when paired with the concept of everything as code. Virtualization, containerization, provisioning technologies turn whole infrastructures into code repositories on which you can apply CI and CD just like you would on “ordinary” code. We actually toyed a while ago with a proof-of-concept where SF would be used to test and deploy configuration changes on an OpenStack infrastructure with OSP Director. Imagination is the limit!
It’s a show of strength to run a new software release on day one. Even more so if you’re using it to power a brand new region.
Vexxhost did just that, getting into the ring with OpenStack Rocky on the first day of its release.
The Canadian company launched a new region in Santa Clara, California, in the heart of Silicon Valley, becoming the first cloud provider to run Rocky. Founded back in 2006, Vexxhost started out as a web hosting provider offering services from shared hosting to VPS. It adopted OpenStack software in 2011 to provide infrastructure-as-a-service public cloud, private cloud and hybrid cloud solutions to companies of varying sizes around the world.
The new region offers users the latest features and bug fixes right out of the gate. Also on offer are 40Gbps internal networking and nested virtualization, allowing users to take advantage of technology like Kata Containers, an open-source project building lightweight virtual machines that seamlessly plug into the containers ecosystem. Each virtual machine has access to the 10Gbps public internet and the new data center is also equipped with high-performance triple-replicated and distributed SSD storage.
For the Rocky release, Vexxhost can also now deploy across three operating systems: openSUSE, Ubuntu and CentOS, letting users pick the one they’re most comfortable with.
The confidence to take on this challenge comes in part from long experience with OpenStack. CEO Mohammed Naser has been involved since 2011, including as an elected member of the Technical Committee and the current project team lead of OpenStack Ansible as well as a core contributor to OpenStack Puppet.
“The really really cool thing is that the cloud is already being used by the upstream OpenStack infrastructure team,” says Naser. “And it’s always awesome to see the community work together and get all these things done.”
Check out the case study on the community webinar starting at the 13:25 mark or the press release here.
Heat is the core project in the OpenStack orchestration program.
It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. A native Heat template format is evolving, but Heat also aims to provide compatibility with the AWS CloudFormation template format, so that many existing CloudFormation templates can be launched on OpenStack. Heat provides both an OpenStack-native REST API and a CloudFormation-compatible Query API.
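To give a feel for the native format, here is a minimal sketch of a Heat Orchestration Template (HOT) that boots a single server; the resource name and flavor are illustrative, and the image is passed in as a parameter:

```yaml
heat_template_version: 2018-08-31  # the Rocky-era template version

description: Minimal HOT sketch that launches one Nova server.

parameters:
  image:
    type: string
    description: Name or ID of the image to boot

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small   # illustrative; any flavor available in the cloud works

outputs:
  server_ip:
    description: First address assigned to the server
    value: { get_attr: [my_server, first_address] }
```

A template like this is launched as a “stack” with `openstack stack create -t template.yaml --parameter image=<image> my-stack`, and updating the file and re-running `stack update` is what makes infrastructure behave like code.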
Rico Lin offered this tutorial on how to auto-scale a self-healing cluster with Heat at the recent OpenInfra Days in Vietnam. Lin has been the project team lead for Heat in the Pike, Queens and Rocky cycles as well as a Heat core contributor since the Liberty release. He’s currently a software engineer at EasyStack.
Here he walks you through how to configure Heat and set up Heat container agents, before discussing options for auto-scaling, choosing your structure and then launching a self-healing cluster.
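The auto-scaling piece of the demo rests on two Heat resource types: `OS::Heat::AutoScalingGroup`, which manages a variable-size pool of identical resources, and `OS::Heat::ScalingPolicy`, which exposes a webhook URL that an alarm service can hit to grow or shrink the group. A minimal sketch, assuming a `cirros` image and `m1.small` flavor exist in the cloud:

```yaml
heat_template_version: 2018-08-31

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros      # illustrative image name
          flavor: m1.small   # illustrative flavor

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: asg }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1    # add one server per trigger
      cooldown: 60             # seconds to wait between scaling actions

outputs:
  scale_out_url:
    description: Webhook an alarm can call to trigger a scale-out
    value: { get_attr: [scale_out_policy, alarm_url] }
```

Wiring the `alarm_url` output to a telemetry alarm (or even hitting it manually with `curl`) is what turns the group into something that scales, and heals, without operator intervention.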
For the full 14-minute demo see below and check out his slides here.
Video: “Demo: Auto-scale a self-healing cluster in OpenStack with Heat” (YouTube)
Check out the Heat self-healing special interest group (SIG). The auto-scaling templates for Heat can be found at GitHub.
The theme for this one is exploring distributed infrastructure, so participants are advised to “get out your toys and apply this idea to a sample cloud with Raspberry Pis, edge routers, system-on-a-chip board designs and other gadgets” that you like. If you can’t pack your favorite devices, organizers will also have some on hand. Mentors will also be available to help get you up to speed with the systems and technology. If you’re an OpenStack pro, check out the event page for info on how to volunteer and help out.
For team members of all roles––app developers, devops, UX designers, sysadmins or network engineers––hackathons are a great way to learn quickly in a fun and competitive environment. There will be prizes for the best technical solution, the most complete project, the most inclusive approach, the most clever design and more.
OpenStack Hackathons offer participants a chance to learn more about developing applications for OpenStack clouds from experts and put their skills to use by building applications. Launched in 2016, they’re designed as a fast-and-furious weekend of work that also rewards the best projects with fantastic prizes.
New to hackathons? Check out this post about how to survive and thrive over the weekend.
Google’s motto of “Don’t be evil” hasn’t made it any easier for the tech giant to achieve better diversity.
The most recent statistics available show a need for improvement. For starters, despite the company’s efforts to get more women on board, in its tech corridors the percentage has risen about 25 percent since 2014, to 21 percent total. The global percentage of women working in any department at Google is 30 percent, and it hasn’t budged for four years. Then there were the infamous James Damore memo, the harassment of Danielle Brown, VP of diversity, and a Wired investigation declaring a “dirty war” surrounding these issues at the company.
None of that is stopping Valeisha Butterfield-Jones. As the global head of women and black community engagement, she says it’s high time for disruption.
The greatest challenge? “Decoding what the real barriers to entry are, for people of color and for women,” she says in a profile at Harper’s Bazaar. One of these efforts is Google’s Decoding Race series, organized as the first step of a longer-term strategy intended to inform and empower Googlers to have open and constructive conversations on race. One of the more provocative discussions, called “Programming and Prejudice: Can Computers Be Racist?” and moderated by Van Jones, is available on YouTube. Butterfield-Jones is also tackling the pipeline problem with a scholarship program aimed at historically black colleges.
“We’re trying to break the system, to rebuild it and to make it better. It is hard work,” she says. “Having good intentions isn’t enough. You have to actually do the work. I’m committed, and I know we are, to doing the work.”
The intro on the company’s diversity website states that “Google should be a place where people from different backgrounds and experiences come to do their best work. That’s why we continue to support efforts that fuel our commitments to progress.”