
The 2015 OpenStack Summit Vancouver was one of my favorites. Besides the obvious beauty of the water and mountains, there’s more to this “Canadian Manhattan.” Much like the diversity of the OpenStack community, Vancouver reflects its rich past with influences from India, China, Japan as well as its sibling states in the U.S. Pacific Northwest. The result is a veritable cornucopia of food and lifestyle options in the city — I like to think of Vancouver as one of those cities that personify the OpenStack project and community.

Vancouver was also my very first North American OpenStack Summit. I remember being apprehensive about meeting the people that I’d been rubbing virtual elbows with online. Even though I’d been working with OpenStack for over a year, I had never been involved with the larger OpenStack community. Suddenly, here I was in Vancouver, ready to take the plunge into a like-minded gathering of thousands of OpenStack aficionados.

Even with a year of OpenStack under my belt, I still felt like an amateur. So I sought some guidance to help me navigate the event. Some of the resources I used were published right here on Superuser (surprise!). Articles like “Making your list and checking it twice for the OpenStack Vancouver Summit,” which outlined planning, packing and arrival information, were invaluable in preparing me for the logistics of traveling to Vancouver. Today, just like in 2015, articles are already being published on Superuser about specific topics, like “What’s new at the Vancouver Summit” and “Must-see sessions at the Vancouver Summit,” that give readers good suggestions about the general direction and even specific sessions to see.

“Wearing your 6-inch stilettos or funky pointed-toe Florsheim’s might leave you in flip flops by Tuesday”

Even after reading everything I could get my hands on about the summit in 2015, I remember still feeling overwhelmed by all the sessions and the registration process and deadlines. So let me give you some suggestions to help you relax and get the most out of your OpenStack Summit in Vancouver.

General tips
  • Register early. Typically there’s plenty of time to register prior to the Monday morning rush. If you’re like me, you don’t like standing in lines. Tear yourself away from snapping pics of the beautiful vistas and get that OpenStack badge early!
  • Pick your sessions in advance and adjust at the conference. Go to the Summit Schedule and choose the sessions you really want to see first, then schedule the ones you’re just curious about next. Get to sessions early; some fill up fast.
  • Don’t schedule every single day with back-to-back sessions. Listen, I know you’re excited and don’t want to miss a single session, I get it. However, if you don’t schedule some downtime during the day (and no, lunch is “a” break, not “the” break), your brain will feel like butterscotch pudding by the last session and you’re not going to retain anything. Be nice to your brain.
  • Hydrate. Hydrate. Hydrate. Most conference centers do a great job of having water available in strategic areas, but a lot of times they aren’t exactly close-by. I recommend bringing a water bottle of your own and keeping it filled throughout the day. The brain needs water to function and I guarantee you’re going to be using the ol’ noodle a lot.
  • Wear comfy shoes. You’ll be walking a lot at the OpenStack Summit and afterwards. I routinely do 12,000-20,000 steps each day of the Summit. So if you look at your Fitbit right now and you’re clocking 3,000 steps a day, you’ll understand why wearing your 6-inch stilettos or funky pointed-toe Florsheims might leave you in flip flops by Tuesday.
  • Get out of your hotel room. There are plenty of parties and gatherings in the evening at every OpenStack Summit and Vancouver will be no different. Attending some of these events is a great way to see the city, plus it’s an excuse for a walk down to historic Gastown, an area chock full of eating and people-watching opportunities.
Tips for beginners
  • In the Summit Schedule, use the filter button that allows you to see only beginner sessions. I remember looking at them when I was a rookie and wondering if everyone had a different definition of “beginner.” Find some sessions that appeal to you and give them a try. If they’re too far over your head, use your mobile device to find another session and head over there. No one expects you to be an expert; the Summits are for everyone. As a primer on OpenStack, check out my Sydney session on YouTube entitled “OpenStack for Absolute Amateur”; it will get you well on your way to understanding the other sessions.
  • Talk to lots of people. I made the mistake at my first Summits of not getting involved in the hallway conversations. They’re the best source of what’s really going on out there and what people are doing with OpenStack.
  • Ask questions. There are some really knowledgeable people at the Summit and it’s one of the few opportunities where you’ll get them all in one place. Plus, there are lots of operators, engineers and developers in attendance so no matter what the question, someone will have the answer or will be able to point you in the right direction.
  • Get involved. There are so many ways to get involved, including contributing code. There will be a lot of OpenStack Ambassadors available, as well as folks from the User Group community who would love to talk to you about what they’re doing in your city or town. Check out the Summit schedule for community events where you’ll run into them.
  • Is AWS your thing? No problem. If you’re coming to OpenStack from AWS or from any other cloud provider, you might enjoy my session on OpenStack for AWS Architects. It will present a Rosetta Stone approach to translating AWS products to OpenStack projects and how the two stack up.

    A view of Vancouver. Photo: Ben Silverman.

Intermediate and advanced tips
  • Expand your mind. Now that OpenStack has matured, the OpenStack Foundation is moving into other interrelated areas, such as edge computing, Kata containers and other open-source container technologies. I recommend branching out to some of the sessions in these new areas; they are very exciting and on the edge (pun intended) of cloud technology.
  • Bigger and better. OpenStack at scale has always been a favorite topic at the summit. But what happens when you start combining it with hybrid and multi-cloud architectures? What about cells now that they are a mandatory part of Nova? How do container technologies play a part in this? Good news for you, there are sessions for all of those questions. Get them on your schedule now!
  • Calling all telco/NFV fans. If you’re a service provider and telco/NFV OpenStack fanboy like me, you’re going to have a great week: there are over 40 telco/NFV sessions scheduled. You can hear sessions about everything from 5G, edge and autoscaling to VNF service chain orchestration and Ceph in telco clouds.

Regardless of your experience level, you must do one thing: enjoy yourself in Vancouver and relax. Admittedly my last summit in Vancouver was anything but relaxing — and I’ve always regretted my high-strung activities in such a beautiful place.

Therefore, this year, I plan on relaxing before and after my session. My wish for all attendees is that they have fun, learn as much as they can (comfortably!) and get some time to unwind and process what they’ve learned throughout the week. I’ll be there all week, so if you see me staring out the enormous windows of the convention center overlooking the water, please come over and say hello. It might just remind us both to relax and take in the beauty of this City of Glass.

About the author

Ben Silverman is the Chief Cloud Officer for OnX. He considers himself an international cloud activist, and he’s also co-author of the book “OpenStack for Architects.” Silverman started his OpenStack career in 2013 by designing and delivering American Express’ first OpenStack environment, worked for Mirantis as a senior architect and has been a contributing member of the OpenStack Documentation team since 2014. He’s also written for Superuser on how to get a job with OpenStack.

Superuser is always interested in community content; get in touch at editorATopenstack.org

Cover Photo // CC BY NC

The post Return to the City of Glass: A guide to the Vancouver OpenStack Summit appeared first on Superuser.


OpenDev is an annual event focused on the intersection of composable open infrastructure and modern applications. The first edition centered on edge computing, while the 2018 collaborative two-day event focuses on continuous integration, deployment and delivery (CI/CD). This time it’s co-located with the OpenStack Summit, enabling Summit attendees to dive into the content, too.

Here we’re highlighting some of the sessions you’ll want to add to your schedule. Check out the whole program here, and if you’re attending the OSF Summit you can add these sessions here.

Deploy from CI/CD to production in one click

With complex deployments such as AT&T’s, it’s always a challenge to deploy artifacts frequently to production environments and to prevent human errors in manual deployments. In this intermediate session, Jerome Brette will talk about how his team has designed updates, upgrades and greenfields to be deployed without any human intervention. The result bridges the gap between development activities and deployment activities. Details here.

CI/CD build and pipelines for OpenStack at scale

Oath Inc. deploys a heavily customized version of OpenStack across many different regions and clusters. The company needed a way to build each OpenStack component, with its customizations, into a self-contained package that’s deployable in data centers. They also required fine-grained control over deployments in order to swiftly deploy fixes and enhancements to single components in a single cluster with minimal downtime. Oath’s Ryan Bridges will talk about their journey and the solutions they’re using now in this intermediate session. Details here.

CI/CD in ETSI NFV environment

CI/CD and DevOps are common practices, but they require the automatic management of software components on live systems and the capability of sending feedback from live systems to developers. ETSI NFV defined a multi-layer MANO framework to manage the different layers of telecom services. OpenStack is a key enabler of DevOps in the NFV architecture, thanks to its capacity to dynamically manage workloads. There’s a push in the NFV community to introduce capabilities that allow DevOps and CI/CD into this framework. In this beginner-level presentation, Nokia’s Gergely Csatari and Ixia Solutions Group’s Pierre Lynch will describe the challenges of introducing DevOps practices in the telecom environment and discuss planned solutions for these challenges. Details here.

We came together, now what?

As various initiatives across open source communities address the challenges to solve common problems, it’s clear that how well these components work together is increasingly critical. In this beginner-level session, Fatih Degirmenci of Ericsson, Melvin Hillsman of Huawei and Robyn Bergeron of Red Hat will talk about how these efforts are shaping infrastructure, CI/CD, integration and testing, as well as how communities can keep the conversation going. Details here.

Open CD for open infrastructure: Hybrid and multi-cloud deployments with Spinnaker

Spinnaker is an open-source, multi-cloud continuous delivery platform built by Netflix, Google, Microsoft and others. At Netflix, Spinnaker powers over 4,000 deployments a day. In this talk, Spinnaker’s Andrew Phillips will introduce Spinnaker and how it enables continuous delivery automation, from basic first steps through to advanced pipelines incorporating deployment safeguards, canary analysis, out-of-the-box deployment strategies, sophisticated health checks, sharing Golden Path pipelines and more. The session will also cover multi-cloud and Kubernetes support and talk about patterns for microservice delivery, before wrapping up with a quick tour of Spinnaker in action. Details here.

See you at OpenDev May 22-23, 2018!  Register here.

Cover Photo // CC BY NC

The post What’s next in CI/CD appeared first on Superuser.


In a world where metrics are everything, some numbers are still eye-popping. Take AT&T, for example: One of the world’s largest telecom multinationals has been talking for years about “exploding” wireless traffic. How much are we talking about? On a typical business day, roughly 200 petabytes of data now flow across their wireless networks.

That number was center stage at the recent Open Networking Summit (ONS), organized by the Linux Foundation, where Andre Fuetsch, AT&T’s CTO, keynoted. It represents nearly a 50 percent increase in just 12 months, he says, adding that “we kind of stopped doing the math on how many Libraries of Congress it equals, just because the numbers were just getting too ridiculous.”

Where are the new demands coming from? Streaming, video, augmented reality and gaming, he says. It all adds up to network demand that Fuetsch sees only accelerating in the future. To keep pace, AT&T plans to launch its mobile 5G network in a dozen U.S. cities before the end of 2018.

And what’s helping keep the lights on, Fuetsch says, is a strong open-source strategy.

The company plans to deploy over 60,000 white-box routers in macro- and small-cell mobile infrastructure, forming the core of AT&T’s 5G build. It’s an open-hardware design whose specs will be made available to the community later this year.

“Open source really helps us on two fronts: one is cost and the other speed,” Fuetsch says in an interview with TIA Now from ONS.  “If we were to approach software in a closed manner, we’d get saddled with all the costs — the life cycle costs, from birth to death… but if we’re able to put it into open source and show how others can take advantage and use it, we actually share the costs. Where the speed comes in is where we get others to help collaborate and advance that software not just for their needs but for our needs as well.”

To make 5G a reality, AT&T needed to devise a software stack at the edge, Fuetsch says. They decided to build it with open-source projects including OpenStack, Kubernetes and ONAP.

“That way we don’t start building all these one-off islands. We actually can work off a standard implementation that’s part of an open-source community that can evolve. We think it’s pretty powerful.”

Check out the entire AT&T keynote here or the TIA interview here.

What’s next

At the upcoming Vancouver Summit, you can hear more on 5G, including a case study from China Unicom and sessions on network slicing, edge and autoscaling. AT&T, a platinum member of the OpenStack Foundation, will be out in force, too, heading up about 20 workshops, lightning talks and sessions covering everything from high-performance Ceph to OpenContrail-Helm and Kubernetes.

Cover Photo // CC BY NC

The post Flipping the switch to 5G with open source appeared first on Superuser.


The OpenStack community runs over 300,000 continuous integration jobs with Ansible every month with the help of the awesome Zuul.

It even provides ARA reports for ARA’s integration test jobs in a sort-of nested way. Zuul’s Ansible ends up installing Ansible and ARA. It makes my brain hurt sometimes… but in an awesome way.

As a core contributor of the infrastructure team there, I get to witness issues and get a lot of feedback directly from the users.

Static HTML report generation in ARA is simple but didn’t scale very well for us. One day, I was randomly chatting with Ian Wienand and he pointed out an attempt at a WSGI middleware that would serve extracted logs.

That inspired me to write something similar but for dynamically loading ARA sqlite databases instead… This resulted in an awesome feature that I had not yet taken the time to explain very well…until now.

An excerpt from the documentation:

To put this use case into perspective, it was “benchmarked” against a single job from the OpenStack-Ansible project:

  • 4 playbooks
  • 4,647 tasks
  • 4,760 results
  • 53 hosts, of which 39 had gathered host facts
  • 416 saved files

Generating a static report from that database takes ~1min30s on an average machine. The result contains 5321 files and 5243 directories for an aggregate size of 63MB (or 27MB recursively gzipped).
This middleware allows you to host the exact same report on your web server just by storing the sqlite database which is just one file and weighs 5.6MB.

This middleware can be useful if you’re not interested in aggregating data in a central database server like MySQL or PostgreSQL.

The OpenStack CI use case is decentralized: each of the >300,000 Zuul CI jobs has its own sqlite database, uploaded as part of the log and artifact collection.

There are a lot of benefits to doing things this way:

  • There’s no network latency to a remote database server: the first bottleneck is your local disk speed.
    • Even if it’s a 5ms round trip, this adds up over hundreds of hosts and thousands of tasks.
    • Oh, and contrary to popular belief, sqlite is pretty damn fast.
  • There’s no risk of a network interruption or central database server crash which would make ARA (and your sysadmins) panic.
  • Instead of one large database with lots of rows, you have more databases (“shards”) with fewer rows.
  • Instead of generating thousands of files and directories, you’re dealing with one small sqlite file.
  • There’s no database cluster to maintain, just standard file servers with a web server in front.
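The “shards” point is worth a tiny illustration: because each job’s data lives in its own small file, ad-hoc aggregation across jobs is still possible without any central server. The `results` table name below is hypothetical, not ARA’s actual schema.

```python
import sqlite3


def result_count(db_path):
    """Count rows in one per-job database (hypothetical 'results' table)."""
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM results").fetchone()
    finally:
        conn.close()
    return count


def total_results(shard_paths):
    """Sum results across many per-job sqlite shards, one file at a time."""
    return sum(result_count(path) for path in shard_paths)
```

Each query touches only one small database, so there is no single hot table with hundreds of millions of rows to index and maintain.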

Another benefit is that you can easily have as many individual reports as you’d like; all you have to do is configure ARA to use a custom database location.

When I announced that we’d be switching to the sqlite middleware on openstack-dev, I mentioned that projects could leverage this within their jobs and OpenStack-Ansible was the first to take a stab at it: https://review.openstack.org/#/c/557921/.

Their job’s logs now look like this:

ara-report/ansible.sqlite   # ARA report for this Zuul job
logs/                       # Job's logs
└── ara-report/             # ARA report for this OpenStack-Ansible deployment
    └── ansible.sqlite      # Database for this OpenStack-Ansible deployment

The performance improvements for the OpenStack community at large are significant.

Even if each job only spends one minute generating and transferring thousands of HTML files, that’s >300,000 minutes worth of compute that could be spent running other jobs.

How expensive are 300,000 minutes (or 208 days!) of compute? What about bandwidth and storage?
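For the record, the arithmetic behind that parenthetical is simply:

```python
jobs_per_month = 300_000   # Zuul CI jobs running ARA each month
minutes = jobs_per_month   # roughly one minute of report generation per job
days = minutes / 60 / 24   # minutes -> hours -> days
print(f"{minutes:,} minutes is about {days:.0f} days of compute")
```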

Unfreezing ARA’s stable release for development

The latest version of ARA is currently 0.14.6, and ARA has been more or less in feature-freeze mode while all the work was focused on the next major release, “1.0”.

However, there is a growing number of large-scale users (me included!) who are really pushing the current limitations of ARA, and 1.0 (or 2.0!) won’t be ready for a while yet.

I couldn’t afford to let performance issues and memory leaks ruin the experience of a tool that would otherwise be very useful to them.

These improvement opportunities have convinced me that there will be a 0.15.0 release of ARA.

Stay tuned for the 0.15.0 release notes and another update about 2.0 in the near future.

Get involved

If you’d like to learn more, David Moreau Simard will lead a session on Infra Project Onboarding along with the OSF’s Clark Boylan at the upcoming Vancouver Summit. Continuous integration is also one of the main themes of the Summit; check out all of those sessions here.

This post first ran on David Moreau Simard’s blog. Superuser is always interested in community content; get in touch at editorATopenstack.org

Cover Photo // CC BY NC

The post A million playbooks a month: Scaling Ansible Run Analysis appeared first on Superuser.


At Catalyst Cloud, we’re planning to deploy Magnum in our OpenStack-based public cloud in New Zealand.

Magnum is an OpenStack service offering container clusters as a service, with support for Docker Swarm, Kubernetes, Mesos and DC/OS (Catalyst Cloud will support Kubernetes as the first step). Users of the service can deploy clusters of thousands of nodes in minutes and access them securely using their native APIs.

One of the feature requests coming from our existing customers is integration with OpenStack Keystone for both authentication and authorization, so that existing users within a tenant can access a Kubernetes cluster created by the tenant administrator in Magnum without too much extra user-management configuration inside the Kubernetes cluster.

The development team at Catalyst Cloud tested the integration between Keystone and Kubernetes and, after some bug fixes and improvements, it works perfectly!

Before I walk you through it, you might want to look at a blog post about Keystone authentication in Kubernetes clusters by Saverio Proto from SWITCH. In this tutorial, we’ll cover both authentication and authorization; the authorization configuration is very easy and flexible, especially after this pull request was merged. One last note before starting: we’d also like to give huge thanks to OpenLab, since all of our tests were performed on infrastructure they provided.

Prerequisites
  • An OpenStack deployment. I’m using Devstack to set up an OpenStack Queens cloud for the test.
  • We’re assuming that users for this tutorial already use OpenStack.
    In the real world, there are several types of users:
    • Cloud operators responsible for cloud operation and maintenance
    • Kubernetes cluster admin users who can create/delete Kubernetes cluster in OpenStack cloud and who are also administrators of the Kubernetes cluster
    • Kubernetes cluster normal users who can use the cluster resources, possibly with different users having different permissions
  • Kubernetes version >= v1.9.3. For testing purposes, I won’t use Magnum in this post; the Kubernetes cluster admin can create the cluster in VMs using kubeadm.
  • kubectl version >= v1.8.0. As Saverio Proto said in his blog post, kubectl has been capable of reading the OpenStack env variables since v1.8.0.
Basic workflow

From the perspective of an OpenStack cloud operator

As a cloud operator, you have some tasks to perform before exposing the Keystone authentication and authorization functionality to the Kubernetes cluster admin:

  • Necessary Keystone roles for Kubernetes cluster operations need to be created for different users, e.g. k8s-admin, k8s-editor, k8s-viewer
    • The k8s-admin role can create/update/delete the Kubernetes cluster and can also associate roles with other normal users within the tenant
    • k8s-editor can create/update/delete/watch Kubernetes cluster resources
    • k8s-viewer has read-only access to Kubernetes cluster resources
source ~/openstack_admin_credentials
for role in "k8s-admin" "k8s-viewer" "k8s-editor"; do openstack role create $role; done
openstack role add --user demo --project demo k8s-viewer
openstack user create demo_editor --project demo --password password
openstack role add --user demo_editor --project demo k8s-editor
openstack user create demo_admin --project demo --password password
openstack role add --user demo_admin --project demo k8s-admin
From the perspective of a Kubernetes cluster admin

In this example, the demo_admin user is a Kubernetes cluster admin in the demo tenant. demo_admin is responsible for creating the Kubernetes cluster inside the VMs; this can be done easily with Saverio Proto’s repo, and I also have an ugly Ansible script repo here.

After the Kubernetes cluster is up and running, the cluster admin needs to do some configuration for Keystone authentication and authorization to work.

  1. Define Keystone authorization policy file.
    cat << EOF > /etc/kubernetes/pki/webhookpolicy.json
    [
      {
        "resource": {
          "verbs": ["get", "list", "watch"],
          "resources": ["pods"],
          "version": "*",
          "namespace": "default"
        },
        "match": [
          {
            "type": "role",
            "values": ["k8s-admin", "k8s-viewer", "k8s-editor"]
          },
          {
            "type": "project",
            "values": ["demo"]
          }
        ]
      },
      {
        "resource": {
          "verbs": ["create", "update", "delete"],
          "resources": ["pods"],
          "version": "*",
          "namespace": "default"
        },
        "match": [
          {
            "type": "role",
            "values": ["k8s-admin", "k8s-editor"]
          },
          {
            "type": "project",
            "values": ["demo"]
          }
        ]
      }
    ]
    EOF
    

    As an example, the above policy file definition is pretty straightforward. The Kubernetes cluster can only be accessed by users in the demo tenant; users with the k8s-admin or k8s-editor role have both write and read permission on the pod resource, while users with the k8s-viewer role can only get/list/watch pods.

  2. Deploy the k8s-keystone-auth service. The implementation of the k8s-keystone-auth service lives in the OpenStack cloud provider repo for Kubernetes. It runs as a static pod (managed by kubelet) in the Kubernetes cluster.
    cat << EOF > /etc/kubernetes/manifests/k8s-keystone-auth.yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        component: k8s-keystone-auth
        tier: control-plane
      name: k8s-keystone-auth
      namespace: kube-system
    spec:
      containers:
        - name: k8s-keystone-auth
          image: lingxiankong/k8s-keystone-auth:authorization-improved
          imagePullPolicy: Always
          args:
            - ./bin/k8s-keystone-auth
            - --tls-cert-file
            - /etc/kubernetes/pki/apiserver.crt
            - --tls-private-key-file
            - /etc/kubernetes/pki/apiserver.key
            - --keystone-policy-file
            - /etc/kubernetes/pki/webhookpolicy.json
            - --keystone-url
            - http://10.0.19.138/identity/v3
          volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
          resources:
            requests:
              cpu: 200m
          ports:
            - containerPort: 8443
              hostPort: 8443
              name: https
              protocol: TCP
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
    status: {}
    EOF
    

    The image is built using the Dockerfile in the cloud-provider-openstack repo; you can build your own image if needed. Please replace the keystone-url parameter so that it points to your own OpenStack cloud.

  3. Configure the authentication and authorization webhooks for the Kubernetes API server, then wait for the API server to come back up.
    cat << EOF > /etc/kubernetes/pki/webhookconfig.yaml
    ---
    apiVersion: v1
    kind: Config
    preferences: {}
    clusters:
      - cluster:
          insecure-skip-tls-verify: true
          server: https://localhost:8443/webhook
        name: webhook
    users:
      - name: webhook
    contexts:
      - context:
          cluster: webhook
          user: webhook
        name: webhook
    current-context: webhook
    EOF
    sed -i "/authorization-mode/c \ \ \ \ - --authorization-mode=Node,Webhook,RBAC" /etc/kubernetes/manifests/kube-apiserver.yaml
    sed -i '/image:/ i \ \ \ \ - --authentication-token-webhook-config-file=/etc/kubernetes/pki/webhookconfig.yaml' /etc/kubernetes/manifests/kube-apiserver.yaml
    sed -i '/image:/ i \ \ \ \ - --authorization-webhook-config-file=/etc/kubernetes/pki/webhookconfig.yaml' /etc/kubernetes/manifests/kube-apiserver.yaml
    

Now the Kubernetes cluster is ready for use by users within the demo tenant in OpenStack.
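To summarize what the webhook does with that policy file, here is a minimal sketch of the matching logic: a request is authorized when at least one policy entry matches the verb, resource and namespace, and every match clause (role, project) is satisfied by the user. This is an illustration of the semantics described in step 1, not the actual k8s-keystone-auth code; the function name and signature are made up for the example.

```python
def is_authorized(policy, verb, resource, namespace, roles, project):
    """Simplified sketch of a webhook-style policy check."""
    for entry in policy:
        res = entry["resource"]
        # The entry must cover this verb, resource and namespace.
        if verb not in res["verbs"] or resource not in res["resources"]:
            continue
        if res["namespace"] not in ("*", namespace):
            continue
        # Every match clause must be satisfied by the user's attributes.
        satisfied = True
        for clause in entry["match"]:
            if clause["type"] == "role":
                satisfied = satisfied and any(
                    r in clause["values"] for r in roles)
            elif clause["type"] == "project":
                satisfied = satisfied and project in clause["values"]
        if satisfied:
            return True
    return False
```

With the policy file from step 1, a demo-tenant user holding only the k8s-viewer role would pass this check for get/list/watch on pods and fail for create, which matches the kubectl behavior shown in the next section.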

From the perspective of a Kubernetes cluster user

Cluster users only need to configure kubectl to work with the OpenStack environment variables.

kubectl config set-credentials openstackuser --auth-provider=openstack
kubectl config set-context --cluster=kubernetes --user=openstackuser openstackuser@kubernetes --namespace=default
kubectl config use-context openstackuser@kubernetes
  1. The demo user can list pods but cannot create them.
    $ source ~/openrc_demo
    $ kubectl get pods
    No resources found.
    $ cat << EOF > ~/test_pod.yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-test
      namespace: default
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
          - sleep
          - "3600"
        imagePullPolicy: IfNotPresent
      restartPolicy: Never
    EOF
    $ kubectl create -f ~/test_pod.yaml
    Error from server (Forbidden): error when creating "test_pod.yaml": pods is forbidden: User "demo" cannot create pods in the namespace "default"
    
  2. demo_editor user can create and list pods.
    $ source ~/openrc_demoeditor
    $ kubectl create -f ~/test_pod.yaml
    pod "busybox-test" created
    $ kubectl get pods
    NAME           READY     STATUS    RESTARTS   AGE
    busybox-test   1/1       Running   0          3s
    
  3. Users from other tenants don’t have access to the Kubernetes cluster.
    $ source ~/openrc_alt_demo
    $ kubectl get pods
    Error from server (Forbidden): pods is forbidden: User "alt_demo" cannot list pods in the namespace "default"
    

Future work

At Catalyst Cloud, we’re working on automating all of these manual steps in Magnum, so in the near future the Kubernetes cluster admin will only need to run a single command to create the cluster and Magnum will perform the rest automatically. We’ll also continue to work with the sig-openstack group to keep improving the stability and performance of the integration.

Author Lingxian Kong will be heading up four sessions at the Vancouver Summit, check them out here.

The post Kubernetes and Keystone: An integration test passed with flying colors appeared first on Superuser.


The Superuser Awards are open — you’ve got until April 15 to nominate your team for the Vancouver Summit.

Over the years, we’ve had a truly stellar group of winners (AT&T, CERN, China Mobile, Comcast, NTT Group) and finalists (GoDaddy, FICO, Walmart, Workday among many others). While only one wins the title, all of the finalists get a shoutout with an overview of what makes them super from the keynote stage at the Vancouver Summit.

When evaluating winners for the Superuser Award, nominees are judged on the unique nature of use case(s), as well as integrations and applications of OpenStack performed by a particular team.

Here are a few things that will make your application stand out:

  • Take a look at what works.  You can browse the applications of previous finalists here and the winners here.
  • Take your time. The application seems short — eight questions — but the majority of those questions cover a lot of ground.
  • Boil it down. It’s a balancing act: you’ll want to include significant milestones and contributions but stay within character count. (The space allotted for each answer is 800 characters, roughly 130 words.) Offers of libations in Vancouver will not persuade the judging committee or Foundation staff to accept supplemental materials!
  • Remember that you’re talking to your peers. Your nomination is first taken into consideration by the larger OpenStack community and then by the Superuser Editorial Advisory Board which makes the final call. Most of them are technically-minded folks who tend to be more impressed with metrics than marketing speak.
  • Lead with your most impressive accomplishments. All of the nominee applications are published on Superuser for community voting. Most voters will see a mention on social media, scan through your post, then click to vote. Make sure they see the best bits first.
  • Proofread and fact check with your team before submitting. The Superuser editorial staff goes through every finalist application and edits them for grammar and house style, but do keep in mind that the information you submit in the application will be published.

We’re looking forward to hearing more about your accomplishments with open infrastructure. The deadline for the Vancouver Summit Superuser Awards is April 15. Submit your Superuser Awards application.

The post What makes a Superuser appeared first on Superuser.


Join the people building and operating open infrastructure at the OpenStack Summit Vancouver in May. The Summit schedule features over 300 sessions organized by use case, including AI and machine learning, HPC, edge computing, NFV, container infrastructure and public, private and multi-cloud strategies. For the container curious, note that 60 of those sessions feature Kubernetes and another 25 feature Docker.

Here we’re highlighting some of the sessions you’ll want to add to your schedule about container infrastructure. Check out all the sessions, workshops and lightning talks focusing on container infrastructure here.

CERN experience with multi-cloud, federated Kubernetes

Using public cloud resources to cover peak workloads is a practical and economical alternative to overprovisioning on-premises resources. This is the case in environments like CERN, where the large internal computing infrastructure usually suffices, but periods prior to big international conferences or large event-reconstruction campaigns see a significant spike in the number of workloads submitted. CERN’s Ricardo Rocha and Clenimar Filemon of Universidade Federal de Campina Grande will share their experiences in this intermediate-level talk. Details here.

OpenStack-Helm hands-on workshop: Deploy and upgrade OpenStack on Kubernetes

OpenStack-Helm is a Helm chart library that allows simple customization and deployment of OpenStack containers across Kubernetes clusters, from laptop scale to data-center scale. Bring your own laptop for this beginner workshop led by AT&T’s Pete Birley and SK Telecom’s Jawon Choo and Siri Kim. Participants will deploy OpenStack on top of Kubernetes; deploy logging, monitoring and alerting tools; and perform an OpenStack version upgrade on the running cluster. Details here.

Kata Containers: The way to run virtualized containers

Kata Containers is a new open source project merging two hypervisor-based container runtime efforts: Hyper’s runV and Intel’s Clear Containers. Providing an OCI and CRI compatible runtime, it seamlessly integrates with OpenStack containers services. Each container, or each sandbox as defined by Zun, is hypervisor-isolated and runs inside a dedicated Linux VM. Intel’s Sebastien Boeuf will demo how Kata Containers can be as fast as a namespace-based container runtime while being run in a VM in this intermediate-level session. Details here.

Bonus session: If you’d like to go hands-on, check out the Kata workshop from Red Hat’s Sachin Rathee and Sudhir Kethamakka.

On the way to cloud native: Working with containers in a hybrid environment

Nokia’s Amy Fredj, Liat Pele and Gideon Agmon will showcase an example of a VNF implementation based on VMs and containers. Their approach allows VNFs to be developed for hybrid environments, where installation is OS-based and networking uses Calico BGP to distribute Neutron networks to the container domain. In this beginner-level talk, they’ll address lifecycle management, including installation, management, networking and root-cause analysis in the hybrid world. Details here.

Intro to container security

Application containerization solves numerous problems, allows for incredible application density, and can really increase flexibility and responsiveness. But container security is a lot more than what application is in the container. In this session, Red Hat’s Thomas Cameron will talk about the basic components of container security including kernel namespaces, Security Enhanced Linux, Linux control groups, the Docker daemon, etc. He’ll also talk about tips and tricks for planning a secure container environment, describe some “gotchas” about containers and debunk some of the security myths about containers. Details here.

See you at the OSF Summit in Vancouver, May 21-24, 2018! Register here.

Cover Photo // CC BY NC

The post Inside container infrastructure: Must-see sessions at the Vancouver Summit appeared first on Superuser.


Verizon’s Beth Cohen is on the front lines of edge. In her role as cloud networking product manager, she focuses on developing new cloud-based networking products and services.

She recently sat down with Ericsson’s Chris Price and the OpenStack Foundation’s Ildiko Vancsa to talk all things edge with TIA Now. It’s a lively conversation that the trio will expand on at the upcoming Vancouver Summit in a panel session along with Ericsson’s Alison Randal.

“It’s really important to understand that edge is not just one thing, it’s a lot of different things — it’s containers but not just containers…” Cohen says. “It’s a new way of looking at networks. Verizon has been deploying hundreds of thousands of devices for 50 years, but it’s the challenge of understanding the disaggregation between the hardware and the software and the VNFs and the underlying management.”

The tool sets developed over long periods of time aren’t really designed with that disaggregation in mind, she adds. An example? “The alarming systems have to take into consideration that there’s this underlying hardware… If a VNF or a network service gets alarmed, the underlying hardware might be just perfectly fine.”

Price agrees. “Containers are the key technology to allow you to get the applications deployed — get simple lifecycle management, get scale at the edge — but there’s a lot more that goes into actually enabling that.”

Culture plays a big part in the transition. “Just having the conversations with the operations people has been a challenge,” Cohen says. “I’ve spent hours and hours literally training them about how to work with virtual systems and virtual services and virtual applications because they don’t really think that way, that’s not in their DNA.”

While Price maintains that the industry is still inching toward edge use cases, Cohen sees a fairly straightforward path. “Tech refresh is a huge use case for us and I think that’s across the board,” she says, but adds that there are some other, more “far-out use cases” that will come about once companies have invested in the technology. She cites the internet of things and augmented reality as the “kind of sexy things” that will come out of it, but maintains they’re not going to be the “driving force.”

Speaking of driving, “self-driving cars is actually the worst” use case for edge, she says. “If you’re sending all the stuff back to the data center and it’s coming back, you really need very low latency and guaranteed connections. You don’t want to send off a request and not get a response if you’re trying to turn the corner.”

Vancsa underlines that there are commonalities, no matter the use case. “It’s also important to mention that even if we are looking into telco or the whole global industry, basically the requirements that it all boils down to are really similar in all these cases.”

They also talk about open source versus commercial edge, what’s next in tooling and current challenges in the 16-minute interview.

You can check out all the Vancouver sessions, workshops and lightning talks focusing on edge here. And, if you’re interested in learning more about edge computing, read the new whitepaper created by the Edge Computing Group, available at openstack.org/edge.

Cover Photo // CC BY NC

The post The last mile: Where edge meets containers appeared first on Superuser.


You prepared, you submitted, you were accepted; congratulations! The OpenStack community is intelligent and engaged, so expectations are always high. Whether this is your first or 50th talk at an OpenStack Summit, here are five ways to make sure your talk is a success.

Focus on the non-obvious

Assume your audience is smart and that they’ve heard a talk about your subject before. Even if it’s a 101 talk where your goal is educating about the basics, what can you say that will be unique to your presentation? What could they not find out by Googling your topic? Make sure to present something new and unexpected.

A good presentation sells better than a sales pitch

Unfortunately, the quickest way to empty a room—particularly in the OpenStack community—is to use talk time to push a service or product. This might conflict with company expectations: someone probably wants to see an ROI on your talk and maybe even sent over talking points. Instead, create interest in your company or product by being an outstanding representative and demonstrating smarts, innovation and the ability to overcome the inevitable challenges. The real “sales pitch” is not what you say about a product; it’s you and how you present.

Shorten your career path story

It’s very common for talks to begin with “first, a little about me,” which often sounds like reading a resume. While this can create an audience connection, it eats up valuable presentation time and takes the focus off the topic. Instead, share only the relevant pieces of your career to set up your expertise and the audience’s expectations.

Take a look at the difference between these examples:

Frequently done: “My name is Anne and I’m currently a marketing coordinator at the OpenStack Foundation. I started off in renewable energy, focusing on national energy policy and community engagement; then I became a content writer for a major footwear brand; then worked at an international e-commerce startup; and now I’m here! In my free time I race bicycles and like riding motorcycles.”

The audience has learned a lot about me (probably too much!), but it doesn’t give them a single area of expertise to focus on. It distracts the audience from the topic of my talk.

Alternative: “My name is Anne and as the marketing coordinator at the OpenStack Foundation, I work on our social media team.”

I’ve established my professional connection to the topic, explained why they should listen and foreshadowed that we’ll be talking about social media marketing.

Conversation, not recitation

Memorizing a script and keeping it in front of you (on a phone, say) is a common device for soothing presentation nerves. Ironically, this makes your presentation more difficult and less enjoyable for the audience. When you trip up on a word (and we all do!), it can cause you to lose the entire paragraph that follows. Reading off a device also makes your presentation sound artificial.

Instead, rehearse your presentation but use slide graphics or brief bullets to keep you on message. Pretend you’re having a conversation with the audience; just a cup of coffee over a very large table.

P.S. Make sure you budget time for conversation with your audience, and bring a few thought-provoking questions of your own to get the discussion started.

Humor doesn’t always work in international audiences

OpenStack has a wonderfully international community, which means that many people in your audience may not be native speakers of, or fluent in, the language you’re presenting in. Idioms, turns of phrase and plays on words can be particularly difficult to understand. Instead of leaning on humor, tell a story about how something came to be, or about a critical error that we can all see the humor in.

Looking forward to the incredible talks slated for the upcoming Summit; good luck, presenters!

Cover Photo // CC BY NC

The post How to make your OpenStack Summit talk a big success appeared first on Superuser.


Open like the sky. Open like possibility. Open to a future where anything might happen. Data centers are increasingly opting for systems built on open infrastructure for public and private clouds.

“Open allows for collaboration between public and private cloud, creating needed flexibility,” Johan Christenson, CEO of City Network Hosting, said in a recent interview with Markets Media. “It allows for the selection of several vendors working well together, not only positioning for your business at the right price. Open vs. closed should be a simple choice for any enterprise today, as open enables the solution of tomorrow and accelerates innovation.”

The Sweden-based company should know. A two-time Superuser Award nominee, City Network operates multiple data centers in Europe, the U.S., Asia and the United Arab Emirates, as well as providing a clear strategy and implementation for data protection and regulatory requirements across regions.

Their public OpenStack-based cloud operates eight regions across three continents; all of their data centers are interconnected through private networks. In addition to the public cloud, they are also responsible for a pan-European cloud for a finance vertical that’s tasked with solving a vast number of regulatory challenges. Over 2,000 users of City Network’s infrastructure-as-a-service (IaaS) solutions run more than 10,000 cores in production.

You can hear more about what’s next in public cloud at the upcoming Vancouver Summit, where City Network’s Tobias Rydberg, along with Huawei’s Zhipeng Huang, will appear in a session titled “The largest global public cloud footprint – Passport Program Phase II.”
The Passport Program is an initiative from the Public Cloud Working Group to provide a unified way for users to access free trial accounts from OpenStack public cloud providers around the world, which allows them to experience the freedom, performance and interoperability of open source infrastructure. 

Full story on how data centers are increasingly open to innovation over on Markets Media.

The post Why open is the future of public and private cloud appeared first on Superuser.
