
How to create a Spinnaker pipeline

Previously on the Mirantis blog, we gave you a quick and dirty guide to installing Spinnaker. That's great, but it would help if we knew how to do things with it. Ultimately, we're going to use Spinnaker for our whole CI/CD lifecycle management, right on up to creating intelligent continuous delivery pipelines, but that involves a lot more configuration and integration, so let's start by creating a simple pipeline to get our feet wet.

In this article, we’re going to create a Spinnaker application that lets us resize a cluster based on feedback from an external system.  We’ll do this in 5 steps:

  1. Create the application
  2. Create a server group
  3. Create a simple pipeline that checks the size of the server group
  4. Create a webhook for Spinnaker to call
  5. Create a pipeline that resizes the cluster if the webhook says it should

Let’s start by creating the application and server group.  (If you haven’t already installed Spinnaker, go back and do that now.)

Create a Spinnaker application and server group

A Spinnaker application groups together all of the resources that you’re using, such as server groups, load balancers, and security groups. A server group is a group of instances that are managed together, with a cluster being a grouping of server groups.

Start by creating an application.

  1. In the upper-right-hand corner of the Spinnaker interface, you’ll see an Actions pulldown; choose Create Application.
  2. Specify a name and email for the application; for the moment, you can leave everything else blank.  I’m going to call mine sample. Click Create to create the application. 
  3. You’ll find yourself on the Clusters page for the sample application. Click the Create Server Group button.
  4. Next we’ll configure the server group. We’re going to create Kubernetes resources, so we’ll specify the Kubernetes account we created when we were deploying Spinnaker (my-k8s-account). We’ll specify the default namespace for the pods we’re going to deploy.
  5. We’re going to specify that this server group is part of the dev stack, and we’ll give it a detail of tutorial.  Note that these names are completely arbitrary, but they help to make up the name of the cluster the group goes into.
  6. Finally, we’ll need to specify containers to deploy into this server group; we’re not actually going to do anything with these pods in this exercise, so we’ll just specify that we want Nginx; if you start typing “nginx” in the Containers field, autocomplete will give you your available choices.
  7. We’ve got a number of other options we can set at this point, such as volumes, replicas, and the minimum and desired number of replicas to start out with, but for now just accept the default, which will give us one instance. Click Create.
  8. You can monitor the creation from here…
  9. Or click Close and check on it from the Tasks tab at any time. 
  10. Once the server group is created, you will see it in the Clusters tab.  Note the single green rectangle; that’s our single instance.  This page is handy because you can see the status of each of your instances.
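
If you'd like to double-check what Spinnaker just created from the Kubernetes side, you can look for the underlying resources with kubectl. This is an optional sanity check and only a sketch: it assumes the server group landed in the default namespace and follows Spinnaker's usual app-stack-detail naming, so the exact names on your cluster may differ.

# The server group should appear as a ReplicaSet (something like
# sample-dev-tutorial-v000) with a single nginx pod behind it.
kubectl get rs -n default
kubectl get pods -n default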

Now we’re ready to create the pipeline.

Create a simple pipeline

A Spinnaker pipeline is a sequence of deployment actions called stages, used for managing your deployments. You can use them to create complex sequences that involve triggers, delays, decisions, and even human intervention.

We’re not going to do that right now.

No, right now we’re just going to create a simple pipeline that checks to make sure our server group isn’t getting too big before we do anything else to it.

  1. Start by clicking the Pipelines tab. As you can see, we don’t have any pipelines already created, so go ahead and click Configure a new pipeline.
  2. Choose a type of Pipeline and name it basic, then click Create.
  3. Now we want to add a Stage, which is a unit of action within a pipeline.  So click Add Stage to create the new stage, and choose Check Precondition as the type.
  4. Now we need to specify the actual precondition we’re checking for. Ultimately, we’ll be scaling up the server if our webhook says to, so we want to make sure it doesn’t get too big.  To make that happen, we’ll create a condition that checks the size of the server group, and stops the pipeline if it’s too big (that is, if the check fails). Click Add Precondition.
  5. Now we need to create the precondition. Specify that we want to Check the Cluster Size, and then specify the Kubernetes account the cluster lives on.  (In our case, that was my-k8s-account.) You can either type in the cluster name, or click Toggle for list of existing clusters and click sample-dev-tutorial.
  6. Now we need to specify the actual condition; we want to make sure that the cluster itself — note that's ALL server groups, not just the one we just created — is no larger than 10 instances. So specify less than or equal to (<=), and enter 10 as the expected size.
  7. Finally, we want the pipeline to fail if this condition is not true, so check the Fail Pipeline box. Click Update to add the Precondition.
  8. Now click the Save Changes button at the bottom right of the window.
  9. Once you save your changes, you'll be on the main Pipelines page. Pipelines can be triggered in a number of ways, such as by a Jenkins job or another external trigger; in this case, though, we're going to click Start Manual Execution to test the pipeline. You'll have the option to be notified when the pipeline completes via email, SMS, or HipChat, but since we've got a very short pipeline, we won't bother with that. Click Run to start the execution.
  10. You should see the success of the pipeline fairly quickly, and if you click the Details link on the left, you can see more information about the process.

If you were to change the server group so that it had more than 10 instances and ran the pipeline again, you'd see the pipeline fail, and any subsequent stages wouldn't execute.
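
If you'd like to try that failure case without clicking through the UI, one rough way to do it (a sketch that assumes the server group is backed by a ReplicaSet named sample-dev-tutorial-v000 in the default namespace, which may not match your cluster exactly) is to scale the underlying ReplicaSet past the limit and re-run the pipeline:

# Push the cluster past the 10-instance precondition, then start another
# manual execution and watch the Check Preconditions stage fail.
kubectl scale rs sample-dev-tutorial-v000 --replicas=11 -n default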

Now let’s go ahead and make our pipeline a little more sophisticated.

Add a webhook to a Spinnaker pipeline

In this step, we’re going to enhance our pipeline so that it calls a webhook.  (In the next section, we’ll make decisions based on what we get back from the webhook, but for now we just want to call the webhook from the pipeline.)

A webhook is a way to make a call out to an external service; normally you’ll be using webhooks for services such as Github, but in our case we’re just going to create a simple PHP script that writes to a file so that we know the call happened.  On a web server reachable from your Spinnaker instance (any public web server will likely do) create a new PHP file with the following script:

<?php
   // Append the current date/time and the caller's IP address to a log file.
   $file = "spinnakerpipeline.txt";
   $f = fopen($file, 'a');
   fwrite($f, date('Y-m-d H:i:s') . ': ' . $_SERVER['REMOTE_ADDR'] . "\n");
   fclose($f);
?>

(If you prefer some other language, go right ahead; the actual contents aren’t important in this case.)

In this case we’re simply opening a file and appending the current time and date and IP address for the server accessing the script.

So if I were to call the script by pointing my browser at

http://techformeremortals.com/webhook.php

there would be no visible output in the browser, but if I then looked at the file:

http://techformeremortals.com/spinnakerpipeline.txt

I’d see a single line of text:

2018-04-08 13:11:25: 24.74.161.174
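
If you'd rather test from a terminal than a browser, a quick curl against the same URL (mine is shown here; substitute your own) prints just the HTTP status code, which is what Spinnaker will care about later:

# Call the webhook and print only the status; any 2xx code counts as success
# for the webhook stage we'll add in the next section.
curl -s -o /dev/null -w "%{http_code}\n" http://techformeremortals.com/webhook.php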

OK, so that part works.  Now let’s go ahead and add it to the pipeline.

  1. First click the Configure link for the basic pipeline we created in the previous section to get back to the configuration page, then click Add Stage.
  2. This time, we want to add a stage of type Webhook, and we only want it to execute if the Check Preconditions stage passed, so make sure the new stage Depends On Check Preconditions. 
  3. Now let’s configure the stage itself. For the Webhook URL, put the URL where you are hosting the PHP script.  (This will be the URL where you tested it in the previous step.) To make things simple, leave the method as GET; we’re not going to pass in any information, we just want to call the script. We’re also not going to wait for completion; as long as the script gives a 2xx response — that is, as long as it doesn’t return an error — we’re going to assume that everything’s finished, and that everything is fine.  If it does return an error, we’ll simply halt the pipeline.
  4. Save the changes. This should bring you back to the Pipelines page.
  5. Now let’s test out our changes.  Click Start Manual Execution and Run.
  6. Once again, you should see a fairly quick success, but this time you can see that there are two stages (and in fact, if you watch the pipeline, you can see each stage execute in turn). 
  7. You can see the status of a particular stage by clicking on that stage in the pipeline.  For example, if we click on the second stage, we can see the results of the Webhook execution.

Now if we pull up the text file again by pointing the browser at

http://techformeremortals.com/spinnakerpipeline.txt

we can see that there are now two lines: the first one from when we tested the webhook, and a second with the IP of the Spinnaker server. (No, that's not the real IP of my Spinnaker server. Remember that note about being careful if you open up a hole like that?)

2018-04-08 13:11:25: 24.74.161.174 
2018-04-08 13:26:11: 35.207.235.17

Great, so we know that works. You can execute the Spinnaker pipeline as many times as you like; each time, you'll see a new line appended to the file.

Now let’s add some intelligence to this process.

Creating a Spinnaker pipeline to make a decision

Now that we know how to create a Spinnaker pipeline and have it call a Webhook, we’re going to get a little fancy. What we want to do is create a webhook that tells a pipeline whether or not to increase the size of our server group.

A few words about webhooks

Now, if you’re just getting into this, you might be surprised to find that calling a webhook is more than just calling a URL and getting a response, as we did in the last step.  That’s because webhooks are used for asynchronous access to long(er) running processes. That means you kick it off and when it’s done, you get an answer.

For Spinnaker, that means that you’re “polling”, or repeatedly calling a URL until you..


Hope is not a strategy: Continuous delivery and real world business

Your company isn’t small, and it isn’t simple. But that doesn’t mean that you don’t want things to go smoothly.  And they should. After all, your people all know what they should be doing, and the best way to handle security and compliance issues.

Right?

RIGHT???

The reality is that once your company gets beyond a certain size, ensuring that development and deployment are handled properly can be a challenge at best, and a nightmare at worst. Even if you’ve taken the next step into CI/CD, you still have to standardize your process, which for most companies means an amalgam of scripts and processes that are all over the place. You can hope things will work out, but hope is not a strategy.

We’ve been thinking a lot about that here at Mirantis, where we’ve been working on our cloud-native continuous delivery platform based on Netflix’s Spinnaker and aimed at helping companies achieve cloud ROI at scale. You see, we know that building software and releasing it to production can be complicated; we hear it from our clients every day.

So how do you ensure that your developers aren’t unknowingly setting you up for a catastrophe — without getting in their way?

Security

Of primary concern for most companies today is the issue of security. While it’s easy to think about security as protecting yourself from bad actors on the outside — as in cyberattacks — it’s unfortunately not that simple.

Even developers with the best intentions can end up exposing your systems — and therefore your company — to enormous risk.  One study of over 6000 container images in Docker Hub showed that official Docker images had an average of 16 vulnerabilities each, including those as old and well-known as Heartbleed and Shellshock. These older vulnerabilities are particularly dangerous because they’re well-known and exploits are readily available. Community images were even less secure, averaging 40 vulnerabilities each.

That’s not to say that all images are vulnerable, of course, but even when starting with a clean and non-vulnerable state, there’s still the issue of configuration. Developers aren’t trained in hardening IT systems — nor should they necessarily be, as long as it gets taken care of.

You can solve this problem with standard operating procedures, of course, and even with scripts that perform the necessary tasks. But how do you ensure that everyone is following those steps, or even that they’re able to?

One way is through the use of golden images, which include standard software and are preloaded with security fixes and pre-configured appropriately. For example, part of our platform includes determining what images you need, baking them, and making them available to development teams.

That leads to the next challenge: compliance.

Compliance

Even if an application is functioning perfectly and has no security issues, it can still get you into trouble — especially these days. You're probably aware of Europe's General Data Protection Regulation (GDPR), which comes into effect on May 25 and affects any company that has data on any European citizen — no matter where that company exists. But it's not as though that's the first regulation to affect a company's operations. Long before GDPR there were the Sarbanes-Oxley Act (SOX), the Health Insurance Portability and Accountability Act (HIPAA), and plenty of other regulations that require a company to keep careful control of its data.

The problem with many of these regulations is that even if your developers want to follow all of the rules, they might not even know what they are, much less how to ensure that what they do isn’t going to have regulators breathing down your neck.

In order to prevent problems, you need to be able to ensure that you have control over:

  • What is running? Is it approved software, free of vulnerabilities, and configured properly?
  • Where is it running? Are there geographic restrictions on what you’re doing? Are you exporting personal data between countries? Is your technology subject to export limitations regarding specific countries?
  • Who approved it to run? If there’s a step in your process that requires human verification, do you know who did that verification? What specifics were they verifying?

Again, hoping that everything is working as planned and that everyone is following procedure is not a viable way of doing business.

Instead, you need specific, approved pipelines that provide guardrails enabling your developers to do their jobs while still knowing they’re not going to accidentally put your company in legal jeopardy. For example, we provide both templated pipelines and best practices appropriate to your individual situation.

Even with these safeguards in place, however, there’s still one more thing to take into account: your actual business objectives.

Coordination

Now that we’ve made sure that your application is running properly and isn’t going to expose you to any legal jeopardy, you don’t have anything to worry about, right?  Well, sure — if you don’t care whether the application is actually accomplishing anything

You’re probably already aware that you need to ensure that your applications are aligned with your business goals, but what about their deployment and maintenance? You need to answer many of the same questions there, as well:

  • Who needs this application, and who’s affected by it? In other words, who do you need to involve in any potentially manual approvals? What about automatic notifications?
  • What does the application need to do, and are you sure it’s doing it? Are you monitoring it? Do you have any automatic monitoring in place that can take steps if there’s a problem?
  • Where does it need to run? This is partly a compliance question, as we discussed earlier, and partly a performance question. Do you need to move the application closer to the data? Or vice-versa?
  • When do you need to involve a human for verification, and how often? What kind of ongoing monitoring do you need?
  • How does all of this get done? And how do you know it’s getting done that way?

But all of this is to get to the most important question, which so often gets glossed over: why are you doing all this? Everything you do must be tied to some business objective, or it’s just so much noise.

That’s why coordination is so important. It’s not enough to understand your business, or to understand continuous integration and continuous delivery. It’s crucial to create a situation considering both together, so that your application development pipelines truly work with your business. For example, our platform includes services that help analyze your business needs and ensure that your pipelines and procedures are set up properly, and consistently, so that you can scale effectively.

All of this comes down to the same thing: you need to ensure that you’re considering security, compliance, and coordination in a very complicated environment. Make sure that you are not leaving it to chance. Remember: hope is not a strategy!


How to deploy Spinnaker on Kubernetes: a quick and dirty guide

It would be nice to think that open source applications are as easy to use as they are to get, but unfortunately, that’s not always true. This is particularly the case when a technology is very new, with little idiosyncrasies that aren’t always well documented. In this article I’m going to give you all the steps necessary to install Spinnaker, including the “magic” steps that aren’t always clear in the docs.

In general, we’re going to take the following steps:

  1. Create a Kubernetes cluster. (We’ll use a Google Kubernetes Engine cluster, but any cluster that meets the requirements should work.)
  2. Create the Kubernetes objects Spinnaker will need to run properly.
  3. Create a single pod that will be used to coordinate the deployment of Spinnaker itself.
  4. Configure the Spinnaker deployment.
  5. Deploy Spinnaker.

Let’s get started.

Create a Kubernetes cluster

You can deploy Spinnaker in a number of different environments, including on OpenStack and on your local machine, but for the sake of simplicity (and because a local deployment of Spinnaker is a bit of a hefty beast) we’re going to do a distributed deployment on a Kubernetes cluster.

In our case, we’re going to use a Kubernetes cluster spun up on Google Kubernetes Engine, but the only requirement is that your cluster has:

  • at least 2 vCPU available
  • approximately 13GB of RAM available (the default of 7.5GB isn’t quite enough)
  • at least one schedulable (as in untainted) node
  • functional networking (so you can reach the outside world from within your pod)

You can quickly spin up such a cluster by following these steps:

  1. Create an account on http://cloud.google.com and make sure you have billing enabled.
  2. Configure the Google Cloud SDK on the machine you’ll be working with to control your cluster.
  3. Go to the Console and scroll the left panel down to Compute->Kubernetes Engine->Kubernetes Clusters.
  4. Click Create Cluster.
  5. Choose an appropriate name.  (You can keep the default.)
  6. Under Machine Type, click Customize.
  7. Allocate at least 2 vCPU and 10GB of RAM.
  8. Change the cluster size to 1.
  9. Keep the rest of the defaults and click Create.
  10. After a minute or two, you’ll see your new cluster ready to go.

Now let’s go ahead and create the objects Spinnaker is going to need.

Create the Kubernetes objects Spinnaker needs

In order for your deployment to go smoothly, it will help to prepare the way by creating some objects ahead of time. These include namespaces, accounts, and services that you'll use later to access the Spinnaker UI.

  1. Start by configuring kubectl to access your cluster.  How you do this will depend on your setup; to configure kubectl for a GKE cluster, click Connect on the Kubernetes clusters page then click the Copy icon to copy the command to your clipboard.
  2. Paste the command into a command line window:
    gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project nick-chase
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for cluster-2.
  3. Next we’re going to create the accounts that Halyard, Spinnaker’s deployment tool, will use.  First create a text file called spinacct.yaml and add the following to it:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: spinnaker-service-account
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: spinnaker-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - namespace: default
      kind: ServiceAccount
      name: spinnaker-service-account

    This file creates an account called spinnaker-service-account, then assigns it the cluster-admin role. You will, of course, want to tailor this approach to your own security situation.

    Save and close the file.

  4. Create the account by running the script with kubectl:
    kubectl create -f spinacct.yaml
    serviceaccount "spinnaker-service-account" created
    clusterrolebinding "spinnaker-role-binding" created
  5. We can also create accounts from the command line.  For example, use these commands to create the account we’ll need later for Helm:
    kubectl -n kube-system create sa tiller
    serviceaccount "tiller" created
    kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    clusterrolebinding "tiller" created
  6. In order to access Spinnaker, you have two choices. You can either use SSH tunnelling, or you can expose your installation to the outside world. BE VERY CAREFUL IF YOU'RE GOING TO DO THIS, as Spinnaker doesn't have any authentication attached to it; anybody who has the URL can do whatever your Spinnaker user can do, and remember, we made that user cluster-admin. For the sake of simplicity, and because this is a "quick and dirty" guide, we're going to go ahead and create two services, one for the front end of the UI, and one for the scripting that takes place behind the scenes. First, create the spinnaker namespace:
    kubectl create namespace spinnaker
    namespace "spinnaker" created
  7. Now you can go ahead and create the services. Create a new text file called spinsvcs.yaml and add the following to it:
    apiVersion: v1
    kind: Service
    metadata:
      namespace: spinnaker
      labels:
        app: spin
        stack: gate
      name: spin-gate-np
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 8084
        protocol: TCP
      selector:
        load-balancer-spin-gate: "true"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: spinnaker
      labels:
        app: spin
        stack: deck
      name: spin-deck-np
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 9000
        protocol: TCP
      selector:
        load-balancer-spin-deck: "true"

    Here we’re creating two load balancers, one on port 9000 and one on port 8084; if your cluster doesn’t support load balancers, you will need to adjust accordingly or just use SSH tunneling.

  8. Create the services:
    kubectl create -f spinsvcs.yaml
    service "spin-gate-np" created
    service "spin-deck-np" created

While the services are created and IPs are allocated, let’s go ahead and configure the deployment.

Prepare to configure the Spinnaker deployment

Spinnaker is configured and deployed through a configuration management tool called Halyard.  Fortunately, Halyard is easy to get: it's available as a prebuilt image.

  1. Create a deployment to host Halyard:
    kubectl create deployment hal --image gcr.io/spinnaker-marketplace/halyard:stable
    deployment "hal" created
  2. It will take a minute or two for Kubernetes to download the image and instantiate the pod; in the meantime, you can edit the hal deployment to use the new spinnaker account. First execute the edit command:
    kubectl edit deploy hal
  3. Depending on the operating system of your kubectl client, you’ll either see the configuration in the command window, or a text editor will pop up.  Either way, you want to add the serviceAccountName to the spec just above the containers:
    ...
        spec:
          serviceAccountName: spinnaker-service-account
          containers:
          - image: gcr.io/spinnaker-marketplace/halyard:stable
            imagePullPolicy: IfNotPresent
            name: halyard
            resources: {}
    ...
  4. Save and close the file; Kubernetes will automatically edit the deployment and start a new pod with the new credentials.
    deployment "hal" edited
  5. Get the name of the pod by executing:
    kubectl get pods
    NAME                   READY STATUS       RESTARTS AGE
    hal-65fdf47fb7-tq4r8   0/1 ContainerCreating   0 23s

    Notice that the container isn’t actually running yet; wait until it is before you move on.

    kubectl get pods
    NAME                   READY STATUS RESTARTS   AGE
    hal-65fdf47fb7-tq4r8   1/1 Running 0        4m
  6. Connect to bash within the container:
    kubectl exec -it <CONTAINER-NAME> bash

    So in my case, it would be
    kubectl exec -it hal-65fdf47fb7-tq4r8 bash

    This will put you into the command line of the container.  Change to the root user’s home directory:

    root@hal-65fdf47fb7-tq4r8:/workdir# cd
    root@hal-65fdf47fb7-tq4r8:~# 
  7. We’ll need to interact with Kubernetes, but fortunately kubectl is already installed; we just have to configure it:
    root@hal-65fdf47fb7-tq4r8:~# kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    root@hal-65fdf47fb7-tq4r8:~# kubectl config set-context default --cluster=default
    root@hal-65fdf47fb7-tq4r8:~# token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    root@hal-65fdf47fb7-tq4r8:~# kubectl config set-credentials user --token=$token
    root@hal-65fdf47fb7-tq4r8:~# kubectl config set-context default --user=user
    root@hal-65fdf47fb7-tq4r8:~# kubectl config use-context default
  8. Another tool we’re going to need is Helm; fortunately that’s also exceedingly straightforward to install:
    root@hal-65fdf47fb7-tq4r8:~# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
      % Total    % Received % Xferd  Average Speed Time    Time Time Current
                                     Dload Upload Total Spent Left  Speed
    100  6689 100  6689 0   0 58819 0 --:--:-- --:--:-- --:--:-- 59194
    root@hal-65fdf47fb7-tq4r8:~# chmod 700 get_helm.sh
    root@hal-65fdf47fb7-tq4r8:~# ./get_helm.sh
    Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.8.2-linux-amd64.tar.gz
    Preparing to install into /usr/local/bin
    helm installed into /usr/local/bin/helm
    Run 'helm init' to configure helm.

    9)  Next we’ll have to run it against the actual cluster. We want to make sure we use the tiller account we created earlier, and that we upgrade to the latest version:

    helm init --service-account tiller --upgrade
    Creating /root/.helm
    Creating /root/.helm/repository
    Creating /root/.helm/repository/cache
    Creating /root/.helm/repository/local
    Creating /root/.helm/plugins
    Creating /root/.helm/starters
    Creating /root/.helm/cache/archive
    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /root/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes
    Cluster.
    
    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    For more information on securing your installation see: https://docs.helm.sh/usi
    ng_helm/#securing-your-helm-installation
    Happy Helming!

OK!  Now we’re ready to do the actual configuration.

Configure the Spinnaker deployment

Deploying Spinnaker involves defining the various choices you’re going to make, such as the Docker repos you want to access or the persistent storage you want to use, then telling Halyard to go ahead and do the deployment.  In our case, we’re going to define the following choices:

  • Distributed installation on Kubernetes
  • Basic Docker repos
  • Minio (an AWS S3-compatible project) for storage
  • Access to Kubernetes
  • Version 1.6.0 of Spinnaker itself
  • UI accessible from outside the cluster

Let’s get started.

  1. We’ll start by setting up the Docker registry. In this example, we’re using Docker Hub; you can find instructions on using other registries here. In addition, we’re specifying just one public repo, library/nginx. From inside the halyard container, execute the following commands:
    ADDRESS=index.docker.io
    REPOSITORIES=library/nginx 
    hal config provider docker-registry enable
    hal config provider docker-registry account add my-docker-registry \
       --address $ADDRESS \
       --repositories $REPOSITORIES

    As you can see, we’re enabling the docker-registry provider, then configuring it using the environment variables we set.

    When you execute these commands, Halyard goes through a validation step and will call out things that you’re going to inevitably fix down the line, so don’t stress out about its messages too much:

    + Get current deployment
      Success
    + Add the my-docker-registry account
      Success
    Problems in default:
    - WARNING You have not yet selected a version of Spinnaker to
      deploy.
    ? Options include: 
      - 1.4.2
      - 1.5.4
      - 1.6.0
    
    + Successfully added account my-docker-registry for provider
      dockerRegistry.
  2. Now we need to set up storage. The first thing that we need to do is set up Minio, the storage provider.  We’ll do that by first pointing at the Mirantis Helm chart repo, where we have a custom Minio chart:
    helm repo add mirantisworkloads https://mirantisworkloads.storage.googleapis.com
    "mirantisworkloads" has been added to your repositories
  3. Next you need to actually install Minio:
    helm install mirantisworkloads/minio
    NAME:   eating-tiger
    LAST DEPLOYED: Sun Mar 25 07:16:47 2018
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/StatefulSet
    NAME                DESIRED CURRENT AGE
    minio-eating-tiger  4 1 0s
    
    ==> v1/Pod(related)
    NAME                  READY STATUS    RESTARTS AGE
    minio-eating-tiger-0  0/1 ContainerCreating  0 0s
    
    ==> v1/Secret
    NAME                TYPE DATA AGE
    minio-eating-tiger  Opaque 2 0s
    
    ==> v1/ConfigMap
    NAME                DATA AGE
    minio-eating-tiger  1 0s
    
    ==> v1/Service
    NAME                    TYPE CLUSTER-IP EXTERNAL-IP  PORT(S) AGE
    minio-svc-eating-tiger  ClusterIP None <none>       9000/TCP 0s
    minio-eating-tiger      NodePort 10.7.253.69 <none>       9000:31235/TCP 0s
    
    NOTES:
    Minio chart has been deployed.
    
    Internal URL:
        minio: minio-eating-tiger:9000
    
    External URL:
    Get the Minio URL by running these commands:
        export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services minio-eating-tiger)
        export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
        echo http://$NODE_IP:$NODE_PORT

    Make note of the internal URL; we’re going to need it in a moment.

  4. Set the endpoint using the internal URL you saved a moment ago, adding the namespace (default) after the service name.  For example, my internal URL was:
    minio: minio-eating-tiger:9000

    So I’d set my endpoint as follows:

    ENDPOINT=http://minio-eating-tiger.default:9000
  5. Set the access key and secret key, then configure Halyard with your storage choices:
    MINIO_ACCESS_KEY=miniokey
    MINIO_SECRET_KEY=miniosecret
    echo $MINIO_SECRET_KEY | hal config storage s3 edit --endpoint $ENDPOINT \
       --access-key-id $MINIO_ACCESS_KEY \
       --secret-access-key
    hal config storage edit --type s3
  6. Now we’re ready to set it to use Kubernetes:
    hal config provider kubernetes enable
    hal config provider kubernetes account add my-k8s-account --docker-registries my-docker-registry
    hal config deploy edit --type distributed --account-name my-k8s-account
  7. The last standard parameter we need to define is the version:
    hal config version edit --version 1.6.0
    + Get current deployment
      Success
    + Edit Spinnaker version
      Success
    + Spinnaker has been configured to update/install version "1.6.0".
      Deploy this version of Spinnaker with `hal deploy apply`.
  8. At this point we could go ahead and deploy, but if we did, we'd have to use SSH tunnelling.  Instead, let's configure Spinnaker to use those services we created way back at the beginning.  First, we'll need to find out what IP addresses they've been assigned:
    kubectl get svc -n spinnaker
    NAME           CLUSTER-IP EXTERNAL-IP      PORT(S) AGE
    spin-deck-np   10.7.254.29 35.184.29.246    9000:30296/TCP   35m
    spin-gate-np   10.7.244.251 35.193.195.231   8084:30747/TCP   35m
  9. We want to set the UI to the EXTERNAL-IP for port 9000, and the API to the EXTERNAL-IP for port 8084, so for me it would be:
    hal config security ui edit --override-base-url http://35.184.29.246:9000 
    hal config security api edit --override-base-url http://35.193.195.231:8084

OK!  Now we are finally ready to actually deploy Spinnaker.

Deploy Spinnaker

Now that we’ve done all of our configuration, deployment is paradoxically easy:

hal deploy apply

Once you execute this command, Halyard will begin cranking away for quite some time. You can watch the console to see how it’s getting along, but you can also check in on the pods themselves by opening a second console window and looking at the pods in the spinnaker namespace:

kubectl get pods -n spinnaker

This will give you a running look at what’s happening.  For example:

kubectl get pods -n spinnaker
NAME                                    READY STATUS RESTARTS AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1 Running 0 1m
spin-orca-bootstrap-v000-xkhhh          0/1 Running 0 36s
spin-redis-bootstrap-v000-798wm         1/1 Running 0 2m

kubectl get pods -n spinnaker
NAME                                    READY STATUS RESTARTS AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1 Running 0 2m
spin-orca-bootstrap-v000-xkhhh          1/1 Running 0 49s
spin-redis-bootstrap-v000-798wm         1/1 Running 0 2m
spin-redis-v000-q9wzj                   1/1 Running 0 7s

kubectl get pods -n spinnaker
NAME                                    READY STATUS RESTARTS AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1 Running 0 2m
spin-orca-bootstrap-v000-xkhhh          1/1 Running 0 54s
spin-redis-bootstrap-v000-798wm         1/1 Running 0 2m
spin-redis-v000-q9wzj                   1/1 Running 0 12s

kubectl get pods -n spinnaker
NAME                                    READY STATUS RESTARTS AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1 Running 0 2m
spin-clouddriver-v000-jswbg             0/1 ContainerCreating 0 3s
spin-deck-v000-nw629                    0/1 ContainerCreating 0 5s
spin-echo-v000-m5drt                    0/1 ContainerCreating 0 4s
spin-front50-v000-qcpfh                 0/1 ContainerCreating 0 3s
spin-gate-v000-8jk8d                    0/1 ContainerCreating 0 4s
spin-igor-v000-xbfvh                    0/1 ContainerCreating 0 4s
spin-orca-bootstrap-v000-xkhhh          1/1 Running 0 1m
spin-orca-v000-9452p                    0/1 ContainerCreating 0 4s
spin-redis-bootstrap-v000-798wm         1/1 Running 0 2m
spin-redis-v000-q9wzj                   1/1 Running 0 18s
spin-rosco-v000-zd6wj                   0/1 Pending 0 2s

As you can see, the pods come up as Halyard gets to them.  The entire process can take half an hour or more, but eventually, you will see that all pods are running and ready.

NAME                                    READY STATUS RESTARTS AGE
 spin-clouddriver-bootstrap-v000-pdgqr   1/1 Running 0 8m
 spin-clouddriver-v000-jswbg             1/1 Running 0 6m
 spin-deck-v000-nw629                    1/1 Running 0 6m
 spin-echo-v000-m5drt                    1/1 Running 0 6m
 spin-front50-v000-qcpfh                 1/1 Running 1 6m
 spin-gate-v000-8jk8d                    1/1 Running 0 6m
 spin-igor-v000-xbfvh                    1/1 Running 0 6m
 spin-orca-bootstrap-v000-xkhhh          1/1 Running 0 7m
 spin-orca-v000-9452p                    1/1 Running 0 6m
 spin-redis-bootstrap-v000-798wm         1/1 Running 0 8m
 spin-redis-v000-q9wzj                   1/1 Running 0 6m
 spin-rosco-v000-zd6wj                   1/1 Running 0 6m

When that happens, point your browser to the UI URL you configured in the last section; it’s the address for port 9000. For example, in my case it is:

http://35.184.29.246:9000 

You should see the Spinnaker “Recently Viewed” page, which will be blank because you haven’t done anything yet.



OpenStack embraces the future with GPU, edge computing support

It wasn’t that long ago that OpenStack was the hot new kid on the infrastructure block. Lately, though, other technologies have been vying for that spot, making the open source cloud platform look downright stodgy in comparison. That just might change with the latest release of OpenStack, code-named Queens.

The Queens release makes it abundantly clear that the OpenStack community, far from resting on its laurels or burying its collective head in the digital sand, has been paying attention to what’s going on in the cloud space and adjusting its efforts accordingly. Queens includes capabilities that wouldn’t even have been possible when the OpenStack project started, let alone considered, such as GPU support (handy for scientific and machine learning/AI workloads) and a focus on Edge Computing that makes use of the current new kid on the block, Kubernetes.

Optimization

While OpenStack users have been able to utilize GPUs for scientific and machine learning purposes for some time, it has typically been through the use of either PCI passthrough or by using Ironic to manage an entire server as a single instance — neither of which was particularly convenient. Queens now makes it possible to provision virtual GPUs (vGPUs) using specific flavors, just as you would provision traditional vCPUs.
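
To make that concrete, here's a rough sketch of what provisioning a vGPU flavor looks like. It's based on the Queens-era Nova documentation rather than this article, and the flavor name, sizes, image, and network are all placeholders:

# Create a flavor that requests one virtual GPU alongside normal vCPU and RAM.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 vgpu-small
openstack flavor set vgpu-small --property resources:VGPU=1
# Boot an instance from it as usual.
openstack server create --flavor vgpu-small --image ubuntu-16.04 \
   --network private vgpu-instance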

Queens also includes the debut of the Cyborg project, which provides a management framework for different types of accelerators such as GPUs, FPGA, NVMe/NOF, SSDs, DPDK, and so on. This capability is important not just for GPU-related use cases, but also for situations such as NFV.

High Availability

As OpenStack becomes more of an essential tool and less of a science project, the need for high availability has grown. The OpenStack Queens release addresses this need in several different ways.

The OpenStack Instances High Availability Service, or Masakari, provides an API to manage the automated rescue mechanism that recovers instances when a VM process goes down, the provisioning process goes down, or the nova-compute host fails.

While Masakari currently supports KVM-based VMs, Ironic bare metal nodes have always been more difficult to recover. Queens debuts the Ironic Rescue Mode (one of our favorite feature names of all time), which makes it possible to recover an Ironic node that has gone down.

Another way OpenStack Queens provides HA capabilities is through Cinder’s new volume multi-attach feature. The OpenStack Block Storage Service’s new capability makes it possible to attach a single volume to multiple VMs, so if one of those instances fails, traffic can be routed to an identical instance that is using the same storage.
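
As a hedged sketch of how you might exercise multi-attach from the CLI (the property name follows the Cinder documentation, and whether it actually works depends on your storage backend supporting the feature; the server and volume names are made up):

# Create a volume type that allows multi-attach, create a volume of that type,
# and attach the same volume to two different servers.
openstack volume type create multiattach
openstack volume type set --property multiattach="<is> True" multiattach
openstack volume create --type multiattach --size 10 shared-data
openstack server add volume vm-a shared-data
openstack server add volume vm-b shared-data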

Edge Computing

What’s become more than obvious, though, is that OpenStack has realized that the future doesn’t lay in just a few concentrated datacenters, but rather that workloads will be in a variety of diverse locations. Specifically, Edge Computing, in which we will see multiple smaller clouds closer to the user rather than a single centralized cloud, is coming into its own as service providers and others realize its importance.

To that end, OpenStack has been focused on several projects to adapt itself to that kind of environment, including LOCI and OpenStack-Helm.

OpenStack LOCI provides Lightweight OCI compatible images of OpenStack services so that they can be deployed by a container orchestration tool such as Kubernetes. As of the Queens release, images are available for Cinder, Glance, Heat, Horizon, Ironic, Keystone, Neutron and Nova.

And of course since orchestrating a containerized deployment of OpenStack isn’t necessarily any easier than deploying a non-containerized version, there’s OpenStack-Helm, a collection of Helm charts that install the various OpenStack services on a Kubernetes cluster.

Other container-related advances

If it seems like there’s a focus on integrating with container-based services, you’re right. Another way OpenStack has integrated with Kubernetes is through the Kuryr CNI plugin. The Container Network Interface (CNI) is a CNCF project that standardizes container networking operations, and the Kuryr CNI plugin makes it possible to use OpenStack Neutron within your Kubernetes cluster.

Also, if your container needs are more modest — maybe you don’t need an actual cluster, you just want the containers — the new Zun project makes it possible to run application containers on their own.

Coming up next

As always, it’s impossible to sum up 6 months of OpenStack work in a single blog post, but the general idea is that the OpenStack community is clearly thinking about the long term future and planning accordingly. While this release focused on making it possible to run OpenStack at the Edge, the next, code-named Rocky, will see a focus on NFV-related functionality such as minimum bandwidth requirements to ensure service quality.

What’s more, the community is also working on “mutable configuration across services”, which means that as we move into Intelligent Continuous Delivery (ICD) and potentially ever-changing and morphing infrastructure, we’ll be able to change service configurations without having to restart services.

You can find the full OpenStack Queens release notes here.


First beta version of Kubernetes 1.10 is here: Your chance to provide feedback

(If you’d like a good look at the new features and changes in Kubernetes 1.10, please join us on March 14, 2018 for What’s New in Kubernetes 1.10. This article first appeared on the Kubernetes.io blog.)
 

The Kubernetes community has released the first beta version of Kubernetes 1.10, which means you can now try out some of the new features and give your feedback to the release team ahead of the official release. The release, currently scheduled for March 21, 2018, is targeting the inclusion of more than a dozen brand new alpha features and more mature versions of more than two dozen more.

Specifically, Kubernetes 1.10 will include production-ready versions of Kubelet TLS Bootstrapping, API aggregation, and more detailed storage metrics.

Some of these features will look familiar because they emerged at earlier stages in previous releases. Each stage has specific meanings:

  • stable: The same as “generally available”,  features in this stage have been thoroughly tested and can be used in production environments.
  • beta: The feature has been around long enough that the team is confident that the feature itself is on track to be included as a stable feature, and any API calls aren’t going to change. You can use and test these features, but including them in mission-critical production environments is not advised because they are not completely hardened.
  • alpha: New features generally come in at this stage. These features are still being explored. APIs and options may change in future versions, or the feature itself may disappear. Definitely not for production environments.

You can download the latest release of Kubernetes 1.10 from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md. To give feedback to the development community, create an issue in the Kubernetes 1.10 milestone and tag the appropriate SIG before March 9.

Here’s what to look for, though you should remember that while this is the current plan as of this writing, there’s always a possibility that one or more features may be held for a future release. We’ll start with authentication.

Authentication (SIG-Auth)
  1. Kubelet TLS Bootstrap (stable): Kubelet TLS bootstrapping is probably the “headliner” of the Kubernetes 1.10 release as it becomes available for production environments. It provides the ability for a new kubelet to create a certificate signing request, which enables you to add new nodes to your cluster without having to either manually add security certificates or use self-signed certificates that eliminate many of the benefits of having certificates in the first place. (There’s a short kubectl sketch of the approval step just after this list.)
  2. Pod Security Policy moves to its own API group (beta): The beta release of the Pod Security Policy lets administrators decide what contexts pods can run in. In other words, you have the ability to prevent unprivileged users from creating privileged pods — that is, pods that can perform actions such as writing files or accessing Secrets — in particular namespaces.
  3. Limit node access to API (beta): Also in beta, you now have the ability to limit calls to the API on a node to just that specific node, and to ensure that a node is only calling its own API, and not those on other nodes.
  4. External client-go credential providers (alpha): client-go is the Go language client for accessing the Kubernetes API. This feature adds the ability to add external credential providers. For example, Amazon might want to create its own authenticator to validate interaction with EKS clusters; this feature enables them to do that without having to include their authenticator in the Kubernetes codebase.
  5. TokenRequest API (alpha): The TokenRequest API provides the groundwork for much needed improvements to service account tokens; this feature enables creation of tokens that aren’t persisted in the Secrets API, that are targeted for specific audiences (such as external secret stores), have configurable expiries, and are bindable to specific pods.
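
As mentioned in item 1, here's a minimal sketch of the administrator's side of kubelet TLS bootstrapping: the new kubelet submits a certificate signing request, and you (or an auto-approver) sign off on it. The CSR name below is hypothetical.

# List pending certificate signing requests from bootstrapping kubelets.
kubectl get csr
# Approve one so the new node can join the cluster (the name is made up).
kubectl certificate approve node-csr-a1b2c3d4
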
Networking (SIG-Network)
  1. Support configurable pod resolv.conf (beta): You now have the ability to specifically control DNS for a single pod, rather than relying on the overall cluster DNS.
  2. Although the feature is called Switch default DNS plugin to CoreDNS (beta), that’s not actually what will happen in this cycle. The community has been working on the switch from kube-dns, which includes dnsmasq, to CoreDNS, another CNCF project with fewer moving parts, for several releases. In Kubernetes 1.10, the default will still be kube-dns, but when CoreDNS reaches feature parity with kube-dns, the team will look at making it the default.
  3. Topology aware routing of services (alpha): The ability to distribute workloads is one of the advantages of Kubernetes, but one thing that has been missing until now is the ability to keep workloads and services geographically close together for latency purposes. Topology aware routing will help with this problem. (This functionality may be delayed until Kubernetes 1.11.)
  4. Make NodePort IP address configurable (alpha): Not having to specify IP addresses in a Kubernetes cluster is great — until you actually need to know what one of those addresses is ahead of time, such as for setting up database replication or other tasks. You will now have the ability to specifically configure NodePort IP addresses to solve this problem. (This functionality may be delayed until Kubernetes 1.11.)
Kubernetes APIs (SIG-API-machinery)
  1. API Aggregation (stable): Kubernetes makes it possible to extend its API by creating your own functionality and registering your functions so that they can be served alongside the core K8s functionality. This capability will be upgraded to “stable” in Kubernetes 1.10, so you can use it in production. Additionally, SIG-CLI is adding a feature called kubectl get and describe should work well with extensions (alpha) to make the server, rather than the client, return this information for a smoother user experience.
  2. Support for self-hosting authorizer webhook (alpha): Earlier versions of Kubernetes brought us the authorizer webhooks, which make it possible to customize the enforcement of permissions before commands are executed. Those webhooks, however, have to live somewhere, and this new feature makes it possible to host them in the cluster itself.
Storage (SIG-Storage)
  1. Detailed storage metrics of internal state (stable): With a distributed system such as Kubernetes, it’s particularly important to know what’s going on inside the system at any given time, either for troubleshooting purposes or simply for automation. This release brings to general availability detailed metrics of what’s going on inside the storage systems, including metrics such as mount and unmount time, the number of volumes in a particular state, and the number of orphaned pod directories. You can find a full list in this design document.
  2. Mount namespace propagation (beta): This feature allows a container to mount a volume as rslave so that host mounts can be seen inside the container, or as rshared so that any mounts from inside the container are reflected in the host’s mount namespace. The default for this feature is rslave.
  3. Local Ephemeral Storage Capacity Isolation (beta): Without this feature in place, every pod on a node that is using ephemeral storage is pulling from the same pool, and allocating storage is on a “best-effort” basis; in other words, a pod never knows for sure how much space it has available. This function provides the ability for a pod to reserve its own storage.
  4. Out-of-tree CSI Volume Plugins (beta): Kubernetes 1.9 announced the release of the Container Storage Interface, which provides a standard way for vendors to provide storage to Kubernetes. This function makes it possible for them to create drivers that live “out-of-tree”, or out of the normal Kubernetes core. This means that vendors can control their own plugins and don’t have to rely on the community for code reviews and approvals.
  5. Local Persistent Storage (beta): This feature enables PersistentVolumes to be created with locally attached disks, and not just network volumes.
  6. Prevent deletion of Persistent Volume Claims that are used by a pod (beta) and Prevent deletion of a Persistent Volume that is bound to a Persistent Volume Claim (beta): In previous versions of Kubernetes it was possible to delete storage that is in use by a pod, causing massive problems for the pod. These features provide validation that prevents that from happening.
  7. Running out of storage space on your Persistent Volume? If you are, you can use Add support for online resizing of PVs (alpha) to enlarge the underlying volume without disrupting existing data. This also works in conjunction with the new Add resize support for FlexVolume (alpha); FlexVolumes are vendor-supported volumes implemented through FlexVolume plugins.
  8. Topology Aware Volume Scheduling (beta): This feature enables you to specify topology constraints on PersistentVolumes and have those constraints evaluated by the scheduler. It also delays the initial PersistentVolumeClaim binding until the Pod has been scheduled so that the volume binding decision is smarter and considers all Pod scheduling constraints as well. At the moment, this feature is most useful for local persistent volumes, but support for dynamic provisioning is under development.
Node management (SIG-Node)
  1. Dynamic Kubelet Configuration (beta): Kubernetes makes it easy to make changes to existing clusters, such as increasing the number of replicas or making a service available over the network. This feature makes it possible to change Kubernetes itself (or rather, the Kubelet that runs Kubernetes behind the scenes) without bringing down the node on which Kubelet is running.
  2. CRI validation test suite (beta): The Container Runtime Interface (CRI) makes it possible to run containers other than Docker (such as Rkt containers or even virtual machines using Virtlet) on Kubernetes. This feature provides a suite of validation tests to make certain that these CRI implementations are compliant, enabling developers to more easily find problems.
  3. Configurable Pod Process Namespace Sharing (alpha): Although pods can easily share the Kubernetes namespace, the process, or PID namespace has been a more difficult issue due to lack of support in Docker. This feature enables you to set a parameter on the pod to determine whether containers get their own operating system processes or share a single process.
  4. Add support for Windows Container Configuration in CRI (alpha): The Container Runtime Interface was originally designed with Linux-based containers in mind, and it was impossible to implement support for Windows-based containers using CRI. This feature solves that problem, making it possible to specify a WindowsContainerConfig.
  5. Debug Containers (alpha): It’s easy to debug a container if you have the appropriate utilities. But what if you don’t? This feature makes it possible to run debugging tools on a container even if those tools weren’t included in the original container image.
Other changes:
  1. Deployment (SIG-Cluster Lifecycle): Support out-of-process and out-of-tree cloud providers (beta): As Kubernetes gains acceptance, more and more cloud providers will want to make it available. To do that more easily, the community is working on extracting provider-specific binaries so that they can be more easily replaced.
  2. Kubernetes on Azure (SIG-Azure): Kubernetes has a cluster-autoscaler that automatically adds nodes to your cluster if you’re running too many workloads, but until now it wasn’t available on Azure. The Add Azure support to cluster-autoscaler (alpha) feature aims to fix that. Closely related, the Add support for Azure Virtual Machine Scale Sets (alpha) feature makes use of Azure’s own autoscaling capabilities to make resources available.

You can download the Kubernetes 1.10 beta from https://github.com/kubernetes/kubernetes/releases. Again, if you’ve got feedback (and the community hopes you do) please add an issue to the 1.10 milestone and tag the relevant SIG before March 9.

(Many thanks to community members Michelle Au, Jan Šafránek, Eric Chiang, Michał Nasiadka, Radosław Pieczonka, Xing Yang, Daniel Smith, sylvain boily, Leo Sunmo, Michal Masłowski, Fernando Ripoll, ayodele abejide, Brett Kochendorfer, Andrew Randall, Casey Davenport, Duffie Cooley, Bryan Venteicher, Mark Ayers, Christopher Luciano, and Sandor Szuecs for their invaluable help in reviewing this article for accuracy.)

(If you’d like a good look at the new features and changes in Kubernetes 1.10, please join us on March 14, 2018 for What’s New in Kubernetes 1.10.)

The post First beta version of Kubernetes 1.10 is here: Your chance to provide feedback appeared first on Mirantis | Pure Play Open Cloud.

The post Using Kubernetes Helm to install applications: A quick and dirty guide appeared first on Mirantis | Pure Play Open Cloud.

After reading this introduction to Kubernetes Helm, you will know how to:

  • Install Helm
  • Configure Helm
  • Use Helm to determine available packages
  • Use Helm to install a software package
  • Retrieve a Kubernetes Secret
  • Use Helm to delete an application
  • Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes. But even managing Kubernetes applications looks difficult compared to, say, “apt-get install mysql”. Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.

Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources.” Helm enables you to easily install packages, make revisions, and even roll back complex changes.

Last year, when my colleague Maciej Kwiek gave a talk at Kubecon about Boosting Helm with AppController, we thought this might be a good time to give you an introduction to what it is and how it works. Now, as we get ready to talk about Helm and other applications, we thought we’d revisit and update those instructions.

Let’s take a quick look at how to install, configure, and utilize Helm.

Install Helm

Installing Helm is actually pretty straightforward. Follow these steps:

  1. Download the latest version of Helm from https://github.com/kubernetes/helm/releases. (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
  2. Unpack the archive:
    $ gunzip helm-v2.8.1-linux-amd64.tar.gz
    $ tar -xvf helm-v2.8.1-linux-amd64.tar
    x linux-amd64/
    x linux-amd64/helm
    x linux-amd64/LICENSE
    x linux-amd64/README.md
  3. Next move the helm executable to your path:
    $ sudo mv l*/helm /usr/local/bin/.
  4. Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster. (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
    $ helm init
    Creating /home/nick/.helm 
    Creating /home/nick/.helm/repository 
    Creating /home/nick/.helm/repository/cache 
    Creating /home/nick/.helm/repository/local 
    Creating /home/nick/.helm/plugins 
    Creating /home/nick/.helm/starters 
    Creating /home/nick/.helm/cache/archive 
    Creating /home/nick/.helm/repository/repositories.yaml 
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
    Adding local repo with URL: http://127.0.0.1:8879/charts 
    $HELM_HOME has been configured at /home/nick/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
    Happy Helming!
    

Note that you can also upgrade the Tiller component using:

helm init --upgrade

Finally, you need to give Tiller the appropriate Kubernetes credentials by creating a service account for it:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule \
   --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy \
   -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

That’s all it takes to install Helm itself; now let’s look at using it to install an application.

Install an application with Helm

One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:

$ helm search
NAME                              CHART VERSION    APP VERSION      DESCRIPTION                                       
stable/acs-engine-autoscaler      2.1.3            2.1.1            Scales worker nodes within agent pools            
stable/aerospike                  0.1.7            v3.14.1.2        A Helm chart for Aerospike in Kubernetes          
stable/anchore-engine             0.1.3            0.1.6            Anchore container analysis and policy evaluatio...
stable/artifactory                7.0.3            5.8.4            Universal Repository Manager supporting all maj...
...
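If the full list is too long to wade through, you can also filter the search by keyword, and you can inspect a chart’s default values before you install it; for example:

# Only show charts whose name or description matches "mysql"
$ helm search mysql

# Show the configurable values the stable/mysql chart accepts
$ helm inspect values stable/mysql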

In our case, we’re going to install MySQL from the stable/mysql chart. Follow these steps:

  1. First update the repo, just as you’d do with apt-get update:
    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
    ...Successfully got an update from the "stable" chart repository
    Update Complete. ⎈ Happy Helming!⎈ 
  2. Next, we’ll do the actual install:
    $ helm install stable/mysql

    This command produces a lot of output, so let’s take it one step at a time. First, we get information about the release that’s been deployed:

    NAME:   inky-manta
    LAST DEPLOYED: Thu Mar  1 03:10:58 2018
    NAMESPACE: default
    STATUS: DEPLOYED
    

    As you can see, it’s called inky-manta, and it’s been successfully DEPLOYED.

    Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:

    RESOURCES:
    ==> v1beta1/Deployment
    NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    inky-manta-mysql  1        1        1           0          1s
    
    ==> v1/Pod(related)
    NAME                               READY  STATUS   RESTARTS  AGE
    inky-manta-mysql-588bf547d6-4vvqk  0/1    Pending  0         0s
    
    ==> v1/Secret
    NAME              TYPE    DATA  AGE
    inky-manta-mysql  Opaque  2     1s
    
    ==> v1/PersistentVolumeClaim
    NAME              STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    inky-manta-mysql  Pending  1s
    
    ==> v1/Service
    NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
    inky-manta-mysql  ClusterIP  10.102.240.155  <none>       3306/TCP  1s
    

    This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume claim (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).

    The chart also enables the developer to add notes:

    NOTES:
    MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
    inky-manta-mysql.default.svc.cluster.local
    
    To get your root password run:
    
        MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default inky-manta-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
    
    To connect to your database:
    
    1. Run an Ubuntu pod that you can use as a client:
    
        kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
    
    2. Install the mysql client:
    
        $ apt-get update && apt-get install mysql-client -y
    
    3. Connect using the mysql cli, then provide your password:
        $ mysql -h inky-manta-mysql -p
    
    To connect to your database directly from outside the K8s cluster:
        MYSQL_HOST=127.0.0.1
        MYSQL_PORT=3306
    
        # Execute the following commands to route the connection:
        export POD_NAME=$(kubectl get pods --namespace default -l "app=inky-manta-mysql" -o jsonpath="{.items[0].metadata.name}")
        kubectl port-forward $POD_NAME 3306:3306
    
        mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
    
    

These notes are the basic documentation a user needs to work with the actual application. Now let’s see how we put it all to use.
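Incidentally, you don’t have to accept a generated release name or the chart’s defaults. As a hedged example, an install along the following lines picks a release name and overrides a couple of values (the value names here are taken from the chart’s values.yaml, so confirm them with helm inspect values stable/mysql first):

# Choose our own release name and override a couple of chart values
$ helm install stable/mysql --name my-mysql \
    --set mysqlRootPassword=MySecretPassword,mysqlDatabase=mydb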

Connect to mysql

The first lines of the notes make it seem deceptively simple to connect to MySQL:

MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
inky-manta-mysql.default.svc.cluster.local

Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.

Get the mysql password

Most of the time, you’ll be able to get the root password by simply executing the code the developer has left you:

$ kubectl get secret --namespace default inky-manta-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
DBTzmbAikO

Some systems — notably MacOS — will give you an error:

$ kubectl get secret --namespace default inky-manta-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.

This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually. Basically, we’re going to execute the same steps as this line of code, but one at a time.

Start by looking at the Secrets that Kubernetes is managing:

$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy      kubernetes.io/service-account-token   3         145d
inky-manta-mysql   Opaque                                2         20m

It’s the second, inky-manta-mysql that we’re interested in. Let’s look at the information it contains:

$ kubectl get secret inky-manta-mysql -o yaml
apiVersion: v1
data:
  mysql-password: a1p1THdRcTVrNg==
  mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
  creationTimestamp: 2017-03-16T20:13:50Z
  labels:
    app: inky-manta-mysql
    chart: mysql-0.2.5
    heritage: Tiller
    release: inky-manta
  name: inky-manta-mysql
  namespace: default
  resourceVersion: "43613"
  selfLink: /api/v1/namespaces/default/secrets/inky-manta-mysql
  uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque

You probably already figured out where to look, but the developer’s instructions told us the raw password data was here:

jsonpath="{.data.mysql-root-password}"

So we’re looking for this:

apiVersion: v1
data:
  mysql-password: a1p1THdRcTVrNg==
  mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
...

Now we just have to go ahead and decode it:

$ echo "REJUem1iQWlrTw==" | base64 --decode
DBTzmbAikO

Finally! So let’s go ahead and connect to the database.

Create the mysql client

Now we have the password, but if we try to just connect with the mysql client on any old machine, we’ll find that there’s no connectivity outside of the cluster. For example, if I try to connect with my local mysql client, I get an error:

$ ./mysql -h inky-manta-mysql.default.svc.cluster.local -p
Enter password: 
ERROR 2005 (HY000): Unknown MySQL server host 'inky-manta-mysql.default.svc.cluster.local' (0)

So what we need to do is create a pod on which we can run the client. Start by creating a new pod using the ubuntu:16.04 image:

$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never 

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
inky-manta-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                    1/1       Running            0          25s

When it’s running, go ahead and attach to it:

$ kubectl attach ubuntu -i -t

Hit enter for command prompt

Next install the mysql client:

root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
...
Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) ...
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...

Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.

root@ubuntu2:/# mysql -h inky-manta-mysql -p
Enter password: 

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Of course you can do what you want here, but for now we’ll go ahead and exit both the database and the container:

mysql> exit
Bye
root@ubuntu2:/# exit
logout

So we’ve successfully installed an application (in this case, MySQL) using Helm. But what else can Helm do?

Working with revisions

So now that you’ve seen Helm in action, let’s take a quick look at what you can actually do with it. Helm is designed to let you install, upgrade, delete, and roll back releases. We’ll get into more details about upgrades in a later article on creating charts, but let’s quickly look at deleting and rolling back revisions:

First off, each time you make a change with Helm, you’re creating a Release with a specific Revision number. By deploying MySQL, we created a Release, which we can see in this list:

$ helm list
NAME             REVISION    UPDATED                     STATUS      CHART           NAMESPACE
inky-manta       1           Thu Mar  1 03:10:58 2018    DEPLOYED    mysql-0.3.4     default  
volted-bronco    1           Thu Mar  1 03:06:03 2018    DEPLOYED    redis-1.1.13    default   

As you can see, we created a release called inky-manta, currently at revision 1. It’s based on the mysql-0.3.4 chart, and its status is DEPLOYED.
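If you want to see every revision of a given release, rather than just the latest, Helm also keeps a per-release history:

# One row per revision, with its status, chart, and description
$ helm history inky-manta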

We could also get back the information we got when it was first deployed by getting the status of the revision:

$ helm status inky-manta
LAST DEPLOYED: Thu Mar  1 03:10:58 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME              STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
inky-manta-mysql  Pending  5h

==> v1/Service
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
inky-manta-mysql  ClusterIP  10.102.240.155  <none>       3306/TCP  5h

==> v1beta1/Deployment
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
inky-manta-mysql  1        1        1           0          5h

==> v1/Pod(related)
NAME                               READY  STATUS    RESTARTS  AGE
inky-manta-mysql-588bf547d6-4vvqk  0/1    Init:0/1  0         5h

==> v1/Secret
NAME              TYPE    DATA  AGE
inky-manta-mysql  Opaque  2     5h


NOTES:
MySQL can be accessed via port 3306 on the 
...

Now, if we wanted to, we could go ahead and delete the revision:

$ helm delete inky-manta

Now if you list all of the active releases, it’ll be gone.

$ helm ls

However, even though the release is gone, you can still see its status:

$  helm status inky-manta
LAST DEPLOYED: Thu Mar  1 03:10:58 2018
NAMESPACE: default
STATUS: DELETED

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
inky-manta-mysql.default.svc.cluster.local
...
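This works because Tiller keeps the record of a deleted release around until you purge it (with helm delete --purge), which also means you can list deleted releases explicitly:

# List releases that have been deleted but not purged
$ helm ls --deleted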

OK, so what if we decide that we’ve changed our mind, and we want to roll back that deletion? Fortunately, Helm is designed for that. We can specify that we want to roll back our application to a specific revision (in this case, 1).

$ helm rollback inky-manta 1
Rollback was a success! Happy Helming!

We can see that the application is back, and the revision number has been incremented:

$ helm ls
NAME             REVISION    UPDATED                     STATUS      CHART           NAMESPACE
inky-manta       2           Thu Mar  1 08:53:05 2018    DEPLOYED    mysql-0.3.4     default  
volted-bronco    1           Thu Mar  1 03:06:03 2018    DEPLOYED    redis-1.1.13    default  

We can also check the status:

$ helm status inky-manta
LAST DEPLOYED: Thu Mar  1 08:53:05 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
inky-manta-mysql  1        1        1           0          48s
...

Next time, we’ll talk about how to use custom charts for Helm. Meanwhile, don’t forget to join us on March 14, 2018 for What’s New in Kubernetes 1.10.

The post What is Hyperconverged Infrastructure (HCI) and when should I use it? Pros and cons of HCI and traditional architecture appeared first on Mirantis | Pure Play Open Cloud.

In a traditional cloud environment, four node types are common: Controllers, compute nodes, storage nodes, and network nodes. This affords the design some flexibility, but on the surface it looks more complex than a hyperconverged design, where compute nodes also provide storage and networking services. In other words, in a Hyperconverged Infrastructure, we slap Nova Compute, Ceph, and some type of distributed virtual routing all onto a single node.

I will leave the networking piece to a future post, but just looking at storage and compute, you can begin to see where issues can emerge.

Why go Hyperconverged?

The most common reason people choose a hyperconverged infrastructure is the cost and space savings that arise from using fewer types of hardware and a smaller number of servers. This idea is supplemented by the notion that ‘just putting some storage on computes’ shouldn’t make much of a difference in complexity or performance. After all, hard drives are slow and shouldn’t need much in terms of resources, right? Besides, our cloud is not running at 100% anyway, so we are making some of those free resources work for us.

Not so fast…

While this design looks tempting on the surface, there are a number of things to consider.

Scalability

Scalability is touted as a strength of hyperconverged, and that is true if the required scale-out ratio for storage and compute happens to match the original design expectations. Unfortunately, that is rarely the case. Furthermore, this ratio needs to take into account not just capacity, but also performance on the storage side.

Let’s look at an actual example here. Let’s say we were going to build a cloud with 20 compute nodes with only boot drives, and 10 storage nodes with 20 drives each. If we were to convert this to a hyperconverged infrastructure, we could simply install 10 drives into each compute node instead of adding the storage nodes. If we do this, however, we now are locked into the “10 drives per compute node” ratio.

If it turns out that the storage capacity is sufficient, but we need to double our compute capacity, we have two choices. We can stick with HCI, but are going to end up with a lot of extra storage capacity. On the other hand, we can always ditch the HCI paradigm and add 20 more compute nodes.

Congratulations, we have just opened an entirely new can of worms by adding dissimilar infrastructure nodes.

Let’s make it even more fun: A new project comes along, which all of sudden needs our storage to be scaled out to 4x capacity. Now we have the rather unappealing options of

  • Adding drives to the 20 non-converged nodes we added earlier, plus another 40 nodes of unneeded compute capacity, just to satisfy the storage requirement and end up with an HCI design again. This is a rather costly option, and it requires reconfiguring existing, active compute nodes.
  • Adding drives to the 20 compute nodes we added earlier, and adding 20 storage-only nodes. Now our HCI design is broken the other way, with standalone storage nodes.
  • Adding 30 storage-only nodes, thus breaking the HCI design even more, as we now have compute-storage, compute-only, and storage-only nodes. In other words, in this case we’d have been better off with separate compute and storage in the first place.

Of course, your environment might grow just the right way. Or it might not grow at all. Or you might migrate off it if you ever have to scale out. Either way, I recommend considering this factor very thoroughly before committing to HCI.

Cost

We can rephrase this as “HCI uses fewer servers, so it must be cheaper.”

Again, not so fast.

In order to make HCI work, you must dedicate additional resources on each node to the storage infrastructure. This means you have less compute capacity, which you can mitigate either by adding more compute nodes, or by adding more CPU and memory to the existing nodes.

For example, say your cloud with 20 compute nodes is designed for 400 instances. Now you are adding disks to these nodes, which eat up 20% of the compute capacity of each node. Thus you must either spec processors with 20% more cores or add 20% more nodes.

As you were diligent and specified the CPUs with the best cost/performance ratio, adding ‘hotter’ CPUs is going to cost quite a bit more than the 20% extra. Adding 20% more compute nodes also comes with added cost. In many cases, storage chassis with low-spec CPUs turn out to be cheaper than either option.

Flexibility

Now imagine you are operating more than one cloud. Or a cloud and a container environment. Or bare metal. Having a hyperconverged infrastructure won’t stop you from sharing the storage infrastructure of your existing cloud with the newcomers, but you are adding a layer of vulnerability here. If a compute node with added storage (and accessibility from the outside) is compromised, so is that storage, even if the storage is being used by a separate cloud. This doesn’t happen in a traditional design, where the separate storage nodes would have to be specifically compromised.

Also, using storage inside of one cloud to provide resources to another cloud is not a very clean or easy-to-operate design. You can achieve the same goal, with proper separation and a cleaner design, by using a separate storage network and a storage cluster that sits outside of all environments.

So when should you go Hyperconverged?

That’s not to say that an HCI environment is never appropriate.  For example, some situations where you’d want to seriously consider hyperconverged infrastructure include those where:

  • You’re subject to space constraints, especially in satellite locations.
  • Given your specific requirements, HCI actually does turn out to be cheaper, and you can live with the scalability and flexibility drawbacks.
  • You only need a very small storage cluster, and the number of storage nodes required to build a stable Ceph cluster would significantly add to the cost and would far exceed the storage capacity required.

So … what do we learn from all this?

The most important thing to remember is that you need to examine your use cases closely. Don’t fall for hype, but don’t reject hyperconverged infrastructure just because it is new, either. Make comparable models, ensure you understand the implications, and select the appropriate design.

In other words, resist the pressure from outside telling you to do one or the other ‘because it is clearly the better way.’

The post KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager appeared first on Mirantis | Pure Play Open Cloud.

Kubernetes got a lot of traction in 2017, and I’m one of those people who believes that in 2 years the Kubernetes API could become the standard interface for consuming infrastructure. In other words, the same way that today we get an IP address and ssh key for our virtual machine on AWS, we might get a Kubernetes API endpoint and kubeconfig from our cluster. With the recent AWS announcement about EKS bringing this reality even closer, let me give you my perspective on k8s installation and trends that we see at Mirantis.

Existing tools

Recently I did some personal research, and I discovered the following numbers around the Kubernetes community:

  • ~22 k8s distribution support providers
  • ~10 k8s community deployment tools
  • ~20 CaaS vendors

There are lot of companies that provide Kubernetes installation and management, including Stackpoint.io, Kubermatic (Loodse), AppsCode, Giant Swarm, Huawei Container Engine, CloudStack Container Service, Eldarion Cloud (Eldarion), Google Container Engine,  Hasura Platform, Hypernetes (HyperHQ), KCluster, VMware Photon Platform, OpenShift (Red Hat), Platform9 Managed Kubernetes, and so on.

All of those vendor solutions focus more or less on “their way” of k8s cluster deployment, which usually means a specific deployment procedure with defined packages, binaries, and artifacts. Moreover, while some of these installers are available as open source packages, they’re not intended to be modified, and when delivered by a vendor, there’s often no opportunity for customer-centric modification.

There are reasons, however, why this approach is not enough for the enterprise customer use case. Let me go through them.

Customization: Our real Kubernetes deployments and operations have demonstrated that we cannot just deploy a cluster on a custom OS with binaries. Enterprise customers have various security policies, configuration management databases, and specific tools, all of which are required to be installed on the OS. A very good example of this situation is a customer from the financial sector: the first time they started their golden OS image at AWS, it took 45 minutes to boot. This makes it impossible for some customers to run the native managed k8s offering at public cloud providers.

Multi-cloud: Most existing vendors don’t solve the question of how to manage clusters in multiple regions, let alone at multiple providers. Enterprise customers want to run distributed workloads in private and public clouds. Even in the case of on-premise baremetal deployment, people don’t build a single huge cluster for the whole company. Instead, they separate resources based on QA/testing/production, application-specific, or team-specific clusters, which often causes complications with existing solutions. For example, OpenShift manages a single Kubernetes cluster instance. One of our customers wound up with an official design where they planned to run 5 independent OpenShift instances without central visibility or any way to manage deployment. Another good example is CoreOS Tectonic, which provides a great UI for RBAC management and cluster workloads, but has the same problem — it only manages a single cluster, and as I said, nobody stays with a single cluster.

most existing vendors do not solve the question of how to manage clusters in multiple locations

“My k8s cluster is better than yours” syndrome: In the OpenStack world, where we originally came from, we’re used to complexity. OpenStack was very complex, and Mirantis was very successful because we could install it the most quickly, easily, and correctly. Contrast this with the current Kubernetes world; with multiple vendors, it is very difficult to differentiate on k8s installation. My view is that k8s provisioning is a commodity with very low added value, which makes k8s installation more of a vendor checkbox feature than a decision-making point or unique capability. At the moment, however, let me borrow my favourite statement from a Kubernetes community leader: “Yes, there are lot of k8s installers, but very few deploy k8s 100% correctly.”

Moreover, all public cloud providers will eventually offer their own managed k8s offering, which will put various k8s SaaS providers out of business. After all, there is no point paying for managed k8s on AWS to a third-party company if AWS provides EKS.

K8s provisioning is a commodity, with very low added value.

Visibility & Audit: Lastly, but most importantly, deployment is just the beginning. Users need to have visibility, with information on what runs where and in what setup. It’s not just about central monitoring, logging, and alerting; it’s also about audit. Users need audit features such as “all docker images used in all k8s clusters” or “versions of all k8s binaries”. Today, if you do find such a tool, it usually has gaps at the multi-cluster level, providing information only for single clusters.

To summarize, I don’t currently see any existing Kubernetes tool that provides all of those features.

KQueen as Open Cluster Manager

Based on all of these points, we at Mirantis decided to build a provisioner-agnostic Kubernetes cluster manager to deploy, manage, and operate various Kubernetes clusters on various public/private cloud providers. Internally, we have called the project KQueen, and it follows several design principles:

  • Kubernetes as a Service environment deployment: Provide a multi-tenant self-service portal for k8s cluster provisioning.
  • Operations: Focus on the audit, visibility, and security of Kubernetes clusters, in addition to actual operations.
  • Update and Upgrade: Automate updating and upgrading of clusters through specific provisioners.
  • Multi-Cloud Orchestration: Support the same abstraction layer for any public, private, or bare metal provider.
  • Platform Agnostic Deployment (of any Kubernetes cluster): Enable provisioning of a Kubernetes cluster by various community installers/provisioners, including those with customizations, rather than a black box with a strict installation procedure.
  • Open, Zero Lock-in Deployment: Provide a pure-play open source solution without any closed source.
  • Easy integration: Provide a documented REST API for managing Kubernetes clusters and integrating this management interface into existing systems.

We have one central backend service called queen. This service listens for user requests (via the API) and can orchestrate and operate clusters.

KQueen supplies the backend API for provider-agnostic cluster management. It enables access from the UI, CLI, or API, and manages provisioning of Kubernetes clusters. It uses the following workflow:

  1. Trigger deployment on the provisioner, enabling KQueen to use various provisioners (AKS, GKE, Jenkins) for Kubernetes clusters. For example, you can use the Jenkins provisioner to trigger installation of Kubernetes based on a particular job.
  2. The provisioner installs the Kubernetes cluster using the specific provider.
  3. The provisioner returns the Kubernetes kubeconfig and API endpoint. This config is stored in the KQueen backend (etcd).
  4. KQueen manages, operates, monitors, and audits the Kubernetes clusters. It reads all information from the API and displays it as a simple overview visualization. KQueen can also be extended by adding other audit components.

KQueen in action

The KQueen project can help define enterprise-scale Kubernetes offerings across departments and give them freedom to make specific customizations. If you’d like to see it in action, you can watch a generic KQueen demo showing the architecture design and management of a cluster from a single place, as well as a demo based on Azure AKS. In addition, watch this space for a tutorial on how to set up and use KQueen for yourself. We’d love your feedback!

The post The Intelligent Delivery Manifesto appeared first on Mirantis | Pure Play Open Cloud.

It sat there in his inbox, staring at him.

Carl Delacour looked at the email from BigCo’s public cloud provider, Ganges Web Services. He knew he’d have to open it sooner or later.

It wasn’t as if there would be any surprises in it — or at least, he hoped not. For the last several months he’d been watching BigCo’s monthly cloud bills rising, seemingly with no end in sight. He’d only gotten through 2017 by re-adjusting budget priorities, and he knew he couldn’t spend another year like this.

He opened Slack and pinged Adam Pantera. “Got a sec?”

A moment later a notification popped up on his screen.  “For you, boss?  Always.”

“What’s it going to take,” Carl typed, “for us to bring our cloud workloads back on premise?”

There was a pause.

A long pause.

Such a long pause, in fact, that Carl wondered if Adam had wandered away from the keyboard.  “YT?”

“Yeah, I’m here,” he saw.  “I’m just … I don’t think we can do that the way everything is structured.  We built all of our automation on the provider API. It’d take months, at best, maybe a year.”

Carl felt a cold lump in the center of his chest as the reality of the situation sank in. It wasn’t just the GWS bill that was adding up in his head; the new year would bring new regulatory constraints as well.   It was his job to deal with this sort of thing, and he didn’t seem to have any options. These workloads were critical to BigCo’s daily business. He couldn’t just turn them off, but he couldn’t let things go on as they were, either, without serious consequences.  “Isn’t this stuff supposed to be cloud native?” he asked.

“It IS cloud native,” Adam replied. “But it’s all built for our current cloud provider. If you want us to be able to move between clouds, we’ll have to restructure for a multi-cloud environment.”

Carl’s mouse hovered over the monthly cloud bill, his finger suddenly stabbing the button and opening the document.

“DO IT,” he told Adam.

Carl wasn’t being unreasonable. He should be able to move workloads between clouds. He should also be able to make changes to the overall infrastructure. And he should be able to do it all without causing a blip in the reliability of the system.

Fortunately, it can be done. We’re calling it Intelligent Delivery, and it’s time to talk about what that’s going to take.

Intelligent Delivery is a way to combine technologies that already exist into an architecture that gives you the freedom to move workloads around without fear of lock-in, the confidence that stability of your applications and infrastructure isn’t in doubt, and ultimate control over all of your resources and cost structures.

It’s the next step beyond Continuous Delivery, but applied to both applications and the infrastructure they run on.

How do we get to Intelligent Delivery?

Providing someone like Carl with the flexibility he needs involves two steps: 1) making software deployment smarter and applying those smarts to the actual infrastructure, and 2) building in monitoring that ensures nothing relevant escapes your notice.

Making software deployment as intelligent as possible

It’s true that software deployment is much more efficient than it used to be, from CI/CD environments to container orchestration platforms such as Kubernetes. But we still have a long way to go to make it as efficient as it could be. We are just beginning to move into the multi-cloud age; we need to get to the point where the actual cloud on which the software is deployed is irrelevant not only to us, but also to the application.

The deployment process should be able to choose the best of all possible environments based on performance, location, cost, or other factors. And who chooses those factors? Sometimes it will be the developer, sometimes the user. Intelligent Delivery needs to be flexible enough to make either option possible.

For now, applications can run on public or private clouds. In the future, these choices may include spare capacity literally anywhere, from servers or virtual machines in your datacenter to wearable devices with spare capacity halfway around the world — you should be able to decide how to implement this scalability.

We already have built-in schedulers that make rudimentary choices in orchestrators such as Kubernetes, but there’s nothing stopping us from building applications and clouds that use complex artificial intelligence or machine-learning routines to take advantage of patterns we can’t see.

Taking Infrastructure as Code to its logical conclusion

Carl got up and headed to the break room for some chocolate, pinching his eyes together. Truth be told, Carl’s command wasn’t a surprise. He’d been worried that this day would come since they’d begun building their products on the public cloud. But they had complex orchestration requirements, and it had been only natural for them to play to the strengths of the GWS API.

Now Adam had to find a way to try and shunt some of those workloads back to their on-premises systems. But could those systems handle it? Only one way to find out.

He took a deep breath and headed for Bernice Gordon’s desk, rounding the corner into her domain. Bernie sat, as she usually did, in a rolling chair, dancing between monitors as she checked logs and tweaked systems, catching tickets as they came in.

“What?” she said, as he broached her space.

“And hello to you, too,” Adam said, smiling.

Bernie didn’t look up.  “Cory is out sick and Dan is on paternity leave, so I’m a little busy.  What do you need, and why haven’t you filed a ticket?”

“I have a question.  Carl wants to repatriate some of our workloads from the cloud.”

Bernie stopped cold and spun around to face him. He could have sworn her glare burned right through his forehead. “And how are we supposed to do that with our current load?”

“That’s why I’m here,” he said. “Can we do it?”

She was quiet for a moment. “You know what?” She turned back to her screens, clicking furiously at a network schema until a red box filled half the screen. “You want to add additional workloads, you’ve got to fix this VNF I’ve been nagging you about to get rid of that memory leak.”

He grimaced.  The fact was that he’d fixed it weeks ago. “I did, I just haven’t been able to get it certified. Ticket IT-48829, requesting a staging environment.”

Her fingers flew over the keyboard for a moment. “And it’s in progress.  But there are three certifications ahead of you.” She checked another screen.  “I’m going to bump you up the list. We can get you in a week from tomorrow.”

So far we’ve been talking about orchestrating workloads, but there’s one piece of the puzzle that has, until now, been missing: with Infrastructure as Code, the infrastructure IS a workload; all of the intelligence we apply to deploying applications applies to the infrastructure itself.

We have long since passed the point where one person like Bernie, or even a team of operators, could manually deploy servers and keep track of what’s going on within an enterprise infrastructure environment. That’s why we have Infrastructure as Code, where traditional hardware configurations such as servers and networking are handled not by a person entering commands at the command line, but by configuration management scripting such as Puppet, Chef, and Salt.

That means that when someone like Bernie is tasked with certifying a new piece of software, instead of scrambling, she can create a test environment that’s not just similar to the production environment, it’s absolutely identical, so she knows that once the software is promoted to production, it’ll behave as it did in the testing phase.

Unfortunately, while organizations use these capabilities in the ways you’d expect, enabling version control and even creating devops environments where developers can take some of the load off operators, for the most part these are fairly static deployments.

On the other hand, by treating these deployments more like actual software and adding more intelligence, we can get a much smarter infrastructure environment, from predicting bad deployments to improving efficiency to enabling self-healing.

Coherent and comprehensive monitoring

Bernie Gordon quietly closed her bedroom door; regression and performance testing on the new version of Andy’s VNF had gone well, but had taken much longer than expected. Now it was after midnight as she got ready for bed, and there was something that was still bothering her about the cutover to production. Nothing she could put her finger on, but she was worried.

Her husband snored quietly and she gave him a gentle kiss before turning out the light.

Then the text came in. She grabbed her phone and pushed the first button her fingers found to cut off the sound so it wouldn’t wake Frank, but she already knew what the text would tell her.

The production system was failing.

Before she could even get her laptop out of her bag to check on it, her phone rang.  Carl’s avatar stared up at her from the screen.

Frank shot upright. “Who died?” he asked, heart racing and eyes wide.

“Nobody,” she said. “Yet. Go back to sleep.”  She answered the call.  “I got the text and I’m on my way back in,” she said without waiting.

With Intelligent Delivery, nobody should be getting woken up in the middle of the night, because with sufficient monitoring and analysis of that monitoring, the system should be able to predict most issues before they turn into problems.

Knowing how fast a disk is filling up is easy.  Knowing whether a particular traffic pattern shows a cyberattack is more complicated. In both cases, though, an Intelligent Delivery system should be able to either recommend actions to prevent problems, or even take action autonomously.

What’s more, monitoring is about more than just preventing problems; it can provide the intelligence you need to optimize workload placement, and can even feed back into your business to provide you with insights you didn’t know you were missing.

Intelligent Delivery requires comprehensive, coherent monitoring in order to provide a complete picture.

Of course, Intelligent Delivery isn’t something we can do overnight. The benefits are substantial, but so are the requirements.

What does Intelligent Delivery involve?

Intelligent Delivery, when done right, has the following advantages and requirements:

  1. Defined architecture: You must always be able to analyze and duplicate your infrastructure at a moment’s notice. You can accomplish this using declarative infrastructure and Infrastructure as Code.
  2. Flexible but controllable infrastructure: By defining your infrastructure, you get the ability to define how and where your workloads run. This makes it possible for you to opportunistically consume resources, moving your workloads to the most appropriate hardware — or the most cost-effective — at a moment’s notice.
  3. Intelligent oversight: It’s impossible to keep up with everything that affects an infrastructure, from operational issues to changing costs to cyberattacks. Your infrastructure must be intelligent enough to adapt to changing conditions while still providing visibility and control.
  4. Secure footing: Finally, Intelligent Delivery means that infrastructure stays secure using a combination of these capabilities:
    1. Defined architecture enables you to constantly consume the most up-to-date operating system and application images without losing control of updates.
    2. Flexible but controllable infrastructure enables you to immediately move workloads out of problem areas.
    3. Intelligent oversight enables you to detect threats before they become problems.

What technologies do we need for Intelligent Delivery?

All of the technologies we need for Intelligent Delivery already exist; we just need to start putting them together in such a way that they do what we need.  Let’s take a good hard look at the technologies involved:

  • Virtualization and containers:

Of course the first step in cloud is some sort of virtualization, whether that consists of virtual machines provided by OpenStack or VMware, or containers and orchestration provided by Docker and/or Kubernetes.

  • Multi-cloud:

Intelligent Delivery requires the ability to move workloads between clouds, not just preventing vendor lock-in but also increasing robustness. These clouds will typically consist of either OpenStack or Kubernetes nodes, usually with federation, which enables multiple clusters to appear as one to an application.

  • Infrastructure as Code:

In order for Intelligent Delivery to be feasible, you must deploy servers, networks, and other infrastructure using a repeatable process. Infrastructure as Code makes it possible to not only audit the system but also to reliably, repeatedly perform the necessary deployment actions so you can duplicate your environment when necessary.

  • Continuous Delivery tools:

CI/CD is not a new concept; Jenkins pipelines are well understood, and now software such as the Spinnaker project is making it more accessible, as well as more powerful.

  • Monitoring:

In order for a system to be intelligent, it needs to know what’s going on in the environment, and the only way for that to happen is to have extensive monitoring systems such as Grafana, which can feed data into the algorithms used to determine scheduling and predict issues.

  • Microservices:

To truly take advantage of a cloud-native environment, applications should use a microservices architecture, which decomposes functions into individual units you can deploy in different locations and call over the network.

  • Service orchestration:

A number of technologies are emerging to handle the orchestration of services and service requests. These include service mesh capabilities from projects such as Istio, to the Open Service Broker project to broker requests, to the Open Policy Agent project to help determine where a request should, or even can, go. Some projects, such as Grafeas, are trying to standardize this process.

  • Serverless:

Even as containers seemingly trounce virtual machines (though that’s up for debate), so-called serverless technology makes it possible to make a request without knowing or caring where the service actually resides. As infrastructure becomes increasingly “provider agnostic” this will become a more important technology.

  • Network Functions Virtualization:

While today NFV is confined mostly to telcos and other Communication Service Providers, it can provide the kind of control and flexibility required for the Intelligent Delivery environment.

  • IoT:

As software gets broken down into smaller and smaller pieces, physical components can take on a larger role; for example, rather than having a sensor take readings and send them to a server that then feeds them to a service, the device can become an integral part of the application infrastructure, communicating directly where possible.

  • Machine Learning and AI:

Eventually we will build the infrastructure to the point where we’ve made it as efficient as we can, and we can start to add additional intelligence by applying machine learning. For example, machine learning and other AI techniques can predict hardware or software failures based on event streams so they can be prevented, or they can choose optimal workload placement based on traffic patterns.

Carl glanced at the collection of public cloud bills in his inbox. All together, he knew, they were a fraction of what BigCo had been paying when they’d been locked into GWS. More than that, though, he knew he had options, and he liked that.

He looked through the glass wall of his office. Off in the corner he could see Bernie. She was still a bundle of activity — you couldn’t slow her down — but she seemed more relaxed these days, and happier as she worked on new plans for what their infrastructure could do going forward instead of just keeping on top of tickets all day.

On the other side of the floor, Andy and his team stared intently at a single monitor. They held that pose for a moment, then cheered.

A Slack notification popped up on his monitor.  “The new service is certified, live, and ready for customers,” Andy told him, “and one day before GoldCo even announces theirs.”

Carl smiled. “Good job,” he replied, and started on plans for next quarter.

The post Is 2018 when the machines take over? Predictions for machine learning, the data center, and beyond appeared first on Mirantis | Pure Play Open Cloud.

To learn more on this topic, join us February 6 for a webinar, “Machine Learning and AI in the Datacenter,” hosted by the Cloud Native Computing Foundation.

It’s hard to believe, but 2017 is almost over and 2018 is in sight. This year has seen a groundswell of technology simmering just under the surface; if you look a bit more closely, it’s all there, waiting to be noticed.

Here are the seeds being sown in 2017 that you can expect to bloom in 2018 and beyond.

Machine Learning
Our co-founder, Boris Renski, also gave another view of 2018 here.

Machine learning takes many different forms, but the important thing to understand about it is that it enables a program to react to a situation that was not explicitly anticipated by its developers.

It’s easy to think that robots and self-learning machines are the stuff of science fiction — unless you’ve been paying attention to the technical news.  Not a day goes by without at least a few stories about some advance in machine learning and other forms of artificial intelligence.  Companies and products based on it launch daily. Your smartphone increasingly uses it. So what does that mean for the future?

Although today’s machine learning algorithms have already surpassed anything that we thought possible even a few years ago, it is still a pretty nascent field.  The important thing that’s happening right now is that Machine Learning has now reached the point where it’s accessible to non-PhDs through toolkits such as Tensorflow and Scikit-Learn — and that is going to make all the difference in the next 18-24 months.

Here’s what you can expect to see in the reasonably near future.

Hardware vendors jump into machine learning

Although machine learning generally works better and faster on Graphics Processing Units (GPUs) — the same chips used for blockchain mining — one of the advances that’s made it accessible is the fact that software such as Tensorflow and Scikit-Learn can run on normal CPUs. But that doesn’t mean that hardware vendors aren’t trying to take things to the next level.

These efforts run from Nvidia’s focus on GPUs to Intel’s Nervana Neural Network Processor (NNP) to Google’s Tensor Processing Unit (TPU). Google, Intel and IBM are also working on quantum computers, which use completely different architecture from traditional digital chips, and are particularly well suited to machine learning tasks.  IBM has even announced that it will make a 20 qubit version of its quantum computer available through its cloud. It’s likely that 2018 will see these quantum computers reach the level of “quantum supremacy”, meaning that they can solve problems that can’t be solved on traditional hardware. That doesn’t mean they’ll be generally accessible the way general machine learning is now — the technical and physical requirements are still quite complex — but they’ll be on their way.

Machine learning in the data center

Data center operations are already reaching a point where manually managing hardware and software is difficult, if not impossible. The solution has been using devops, or scripting operations to create “Infrastructure as Code“, providing a way to create verifiable, testable, repeatable operations. Look for this process to add machine learning to improve operational outcomes.

IoT bringing additional intelligence into operations

Machine learning is at its best when it has enough data to make intelligent decisions, so look for the multitude of data that comes from IoT devices to be used to help improve operations.  This applies to both consumer devices, which will improve understanding of and interaction with consumers, and industrial devices, which will improve manufacturing operations.

Ethics and transparency

As we increasingly rely on machine learning for decisions being made in our lives, the fact that most people don’t know how those decisions are made — and have no way of knowing — can lead to major injustices. Think it’s not possible? Machine learning is used for mortgage lending decisions, which, while important, aren’t life or death. But these algorithms are also used for things like criminal sentencing and parole decisions. And it’s still early.

One good illustration of this “tyranny of the algorithm” involves two people up for a promotion. One is a man, one is a woman. To prevent the appearance of bias, the company uses a machine learning algorithm to determine which candidate will be more successful in the new position. The algorithm chooses the man. Why? Because it has more examples of successful men in the role. But it doesn’t take into account that there are simply fewer women who have been promoted.

This kind of unintentional bias can be hard to spot, but companies and governments are going to have to begin looking at greater transparency as to how decisions are made.

The changing focus of datacenter infrastructures

All of this added intelligence is going to have massive effects on datacenter infrastructures.

For years now, the focus has been on virtualizing hardware, moving from physical servers to virtual ones, enabling a single physical host to serve as multiple “computers”.  The next step from here was cloud computing in which workloads didn’t know or care where in the cloud they resided; they just specified what they needed, and the cloud would provide it.  The rise of containers accelerated this trend; containers are self-contained units, making them even easier to schedule in connected infrastructure using tools such as Kubernetes.

The natural progression from here is the de-emphasis on the cloud itself.  Workloads will run wherever needed, and whereas before you didn’t worry about where in the cloud that wound up being, now you won’t even worry about what cloud you’re using, and eventually, the architecture behind that cloud will become irrelevant to you as an end user.  All of this will be facilitated by changes in philosophy.

APIs make architecture irrelevant

We can’t call microservices new for 2018, but the march to decompose monolithic applications into multiple microservices will continue and accelerate in 2018 as developers and businesses try to gain the flexibility that this architecture provides. Multiple APIs will exist for many common features, and we’ll see “API brokers” that provide a common interface for similar functions.

This reliance on APIs will mean that developers will worry less about actual architectures. After all, when you’re a developer making an API call, do you care what the server is running on?  Probably not.

The application might be running on a VM, or in containers, or even in a so-called serverless environment. As developers lean more heavily on composing applications out of APIs, they’ll reach the point where the architecture of the server is irrelevant to them.

That doesn’t mean that the providers of those APIs won’t have to worry about it, of course.

Multi-cloud infrastructures

Server application developers such as API providers will have to think about architecture, but increasingly they will host their applications in multi-cloud environments, where workloads run where it’s most efficient — and most cost-effective. Like their users, they will be building against APIs — in this case, cloud platform APIs — and functionality is all that will matter; the specific cloud will be irrelevant.

Intelligent cloud orchestration

In order to achieve this flexibility, application designers will need to be able to do more than simply spread their applications among multiple clouds. In 2018 look for the maturation of systems that enable application developers and operators to easily deploy workloads to the most advantageous cloud system.

All of this will become possible because of the ubiquity of open source systems and orchestrators such as Kubernetes. Amazon and other systems that thrive on vendor lock-in will hold on for a bit longer, but the tide will begin to turn and even they will start to compete on other merits so that developers are more willing to include them as deployment options.

Again, this is also a place where machine learning and artificial intelligence will begin to make themselves known as a way to optimize workload placement.

Continuous Delivery becomes crucial as tech can’t keep up

Remember when you bought software and used it for years without doing an update?  Your kids won’t.

Even Microsoft has admitted that it’s impossible to keep up with advances in technology by doing specific releases of software.  Instead, new releases are pushed to Windows 10 machines on a regular basis.

Continuous Delivery (CD) will become the de facto standard for keeping software up to date as it becomes impossible to keep up with the rate of development in any other way.  As such, companies will learn to build workflows that take advantage of this new software without giving up human control over what’s going on in their production environment.

At a more tactical level, technologies to watch are:

  • Service meshes such as Istio, which abstract away many of the complexities of working with multiple services
  • Serverless/event-driven programming, which reduces an API to its most basic form of call-response
  • Policy agents such as the Open Policy Agent (OPA), which will enable developers to easily control access to and behavior of their applications in a manageable, repeatable, and granular way
  • Cloud service brokers such as Open Service Broker (OSB), which provide a way for enterprises to curate and provide access to additional services their developers may need in working with the cloud.
  • Workflow management tools such as Spinnaker, which make it possible to create and manage repeatable workflows, particularly for the purposes of intelligent continuous delivery.
  • Identity services such as SPIFFE and SPIRE, which make it possible to uniquely identify workloads so that they can be provided the proper access and workflow.

Beyond the datacenter

None of this happens in a vacuum, of course; in addition to these technical changes, we’ll also see the rise of social issues they create, such as privacy concerns, strain on human infrastructure when dealing with the accelerating rate of development, and perhaps most important, the potential for cyber-war.

But when it comes to indirect effects of the changes we’re talking about, perhaps the biggest is the question of what our reliance on fault-tolerant programming will create.  Will it lead to architectures that are essentially foolproof, or such an increased level of sloppiness that eventually, the entire house of cards will come crashing down?

Either outcome is possible; make sure you know which side you’re on.

Getting ready for 2018 and beyond

The important thing is to realize that whether we like it or not, the world is changing, but we don’t have to be held hostage by it. Here at Mirantis we have big plans, and we’re looking forward to talking more about them in the new year!
