Cloud {Native} | Swapnil Kulkarni's Blog

I was recently interviewed by Dmitry Filippov, Product Marketing Manager at JetBrains, about OpenStack development with PyCharm. Here is the link to the interview.


Introduction

Following the steps listed at Using kubeadm to Create a Cluster for setting up a Kubernetes cluster with kubeadm, I have been working on automating the cluster setup. The result is kubeadm-vagrant, a GitHub project with simple steps to set up your Kubernetes cluster, with more control, on Vagrant-based virtual machines.

Installation
  • Clone the kubeadm-vagrant repo

git clone https://github.com/coolsvap/kubeadm-vagrant

  • Choose your distribution (CentOS or Ubuntu) and move to the corresponding directory.
  • Configure the cluster parameters in the Vagrantfile. Refer to the configuration options below for details.

vi Vagrantfile

  • Spin up the cluster

vagrant up

  • This will spin up a new Kubernetes cluster. You can check the status of the cluster with the following commands:

sudo su

kubectl get pods --all-namespaces

Cluster Configuration Options
  1. You need to generate a KUBETOKEN of your choice to be used while creating the cluster. You will need to install the kubeadm package on your host to create the token with the following command:

# kubeadm token generate

148a37.736fd53655b767b7

  2. BOX_IMAGE currently defaults to the "coolsvap/centos-k8s" box, a custom box created with the basic dependencies for a Kubernetes node.
  3. Set SETUP_MASTER to true if you want to set up the master node. This is true by default for spawning a new cluster. You can skip it when adding new minions.
  4. Set SETUP_NODES to true/false depending on whether you are setting up minions in the cluster.
  5. Specify NODE_COUNT as the number of minions in the cluster.
  6. Specify MASTER_IP as a static IP which can be referenced in the rest of the cluster configuration.
  7. Specify NODE_IP_NW as the network IP used for assigning dynamic IPs to cluster nodes from the same network as the master.
  8. Specify a custom POD_NW_CIDR of your choice.
  9. Setting up the Kubernetes dashboard is still a WIP with the K8S_DASHBOARD option.

With reference to the Kata Containers Developers Guide, I set up the development environment. At the same time, I created a little automation to recreate the environment with Vagrant.

The primary code to create the environment is pushed at vagrant-kata-dev.

To set it up, you will need:

  • VirtualBox (currently the only tested provider)
  • Vagrant with the following plugins:
    • vagrant-vbguest
    • vagrant-hostmanager
    • vagrant-share

To install the plugins, use the following command:

$ vagrant plugin install <plugin-name>

The setup instructions are simple. Once you have installed the prerequisites, clone the repo:

$ git clone https://github.com/coolsvap/vagrant-kata-dev

Edit the Vagrantfile to update the following details:

  1. Update the bridge interface so the box gets an IP address from your local network via DHCP. If you do not update it, you will be asked for the interface name when you start the machine.
  2. Update the Golang version; it is currently 1.9.3.

Create the Vagrant box with the following command:

$ vagrant up

Once the box is started, log in to it using the following command:

$ vagrant ssh

Switch to the root user, move to the Vagrant shared directory, and run the setup script:

$ sudo su

# cd /vagrant

# ./setup-kata-dev.sh

It will perform the steps required to set up the dev environment. Verify the setup completed correctly with the following command:

# docker info | grep Runtime
WARNING: No swap limit support
Runtimes: kata-runtime runc
Default Runtime: runc

I hope this helps new developers get started with Kata development. This is just the first version of the automation, so please help me improve it with your inputs.


Health Checks

Health checks are becoming an essential part of a modern microservices setup. Every service is expected to expose a health check endpoint which can be accessed by a server monitoring tool. Health checks are important because they allow the process responsible for running the application to restart or kill it when it starts to misbehave or fail. The design does need care, however: the checks should not be so aggressive that they consume a significant share of the service's own cycles.

What to record in a health check is entirely your choice. However, some common recommendations are:

– Data store connection status (general connection state, connection pool status)

– Current response time (rolling average)

– Current connections

– Bad requests (running average)

How to determine what constitutes an unhealthy state needs to be part of the discussion during the design of the service. For example, no connectivity to the database means the service is completely inoperable; it would report unhealthy and allow the orchestrator to recycle the container. An exhausted connection pool, on the other hand, could just mean that the service is under high load; while it is not completely inoperable, it could be suffering degraded performance and should just serve a warning.

The same goes for the current response time: when you load test your service once it has been deployed to production, you can build up a picture of its thresholds of operating health. These numbers can be stored in the config and used by the health check. For example, if you know that your service serves an average request with 50 milliseconds of latency for 4,000 concurrent users, but at 5,000 users this grows to 500 milliseconds because the connection pool is exhausted, you could set your SLA upper boundary to 100 milliseconds and start reporting degraded performance from your health check once it is crossed. This should, however, be a rolling average based on the normal distribution. It is always possible for one or two requests to fall well outside the standard deviation of normal operation, and you do not want that to skew your average and cause the service to report unhealthy when the slow response was actually due to the upstream service having slow network connectivity, not your internal state.

When discussing health checks, the handshake pattern usually comes up: each client sends a handshake request to the downstream service before connecting, to check whether it is capable of receiving the request. Under normal operating conditions this adds an enormous amount of chatter to your application and is overkill most of the time. It also implies that you are using client-side load balancing, since with a server-side approach you have no guarantee that the service you handshake with is the one you connect to. The concept of the downstream service deciding that it can or cannot handle a request is a valid one, however. Why not instead call your internal health check as the first operation before processing a request? This way you can fail immediately and give the client the opportunity to attempt another endpoint in the cluster. The call adds almost no overhead to your processing time, because all you are doing is reading the state from the health endpoint, not processing any data.
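
To make this concrete, here is a minimal sketch in Go of the idea; all names, routes, and thresholds are illustrative assumptions, not a prescribed implementation. It exposes a health endpoint backed by a rolling average of request durations, and the business handler consults the same state as its first operation so it can fail fast.

package main

import (
    "net/http"
    "sync"
    "time"
)

// health tracks a rolling average of recent request durations.
// maxLatency would come from the service configuration (the SLA boundary).
type health struct {
    mu         sync.Mutex
    rolling    time.Duration // simple exponentially weighted moving average
    maxLatency time.Duration
}

// Observe folds a new request duration into the rolling average.
func (h *health) Observe(d time.Duration) {
    h.mu.Lock()
    defer h.mu.Unlock()
    if h.rolling == 0 {
        h.rolling = d
        return
    }
    h.rolling = (h.rolling*9 + d) / 10
}

// Healthy reports whether the rolling average is inside the SLA boundary.
func (h *health) Healthy() bool {
    h.mu.Lock()
    defer h.mu.Unlock()
    return h.rolling <= h.maxLatency
}

func main() {
    h := &health{maxLatency: 100 * time.Millisecond}

    // Health endpoint polled by the orchestrator or monitoring tool.
    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        if !h.Healthy() {
            http.Error(w, "degraded", http.StatusServiceUnavailable)
            return
        }
        w.Write([]byte("ok"))
    })

    // Business handler: consult the internal health state first and fail fast,
    // giving the client a chance to try another endpoint in the cluster.
    http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
        if !h.Healthy() {
            http.Error(w, "try another instance", http.StatusServiceUnavailable)
            return
        }
        start := time.Now()
        // ... do the real work here ...
        h.Observe(time.Since(start))
        w.Write([]byte("done"))
    })

    http.ListenAndServe(":8080", nil)
}

A real service would track more than latency (connection pool state, bad request rate) and use a windowed average rather than this simple moving average, but the shape is the same.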

Load balancing

When we discussed service discovery, we examined the concepts of server-side and client-side discovery. For many years server-side discovery was the only option, and there was also a preference for doing SSL termination on the load balancer because of the performance cost of encryption; even so, it is a good idea to use TLS-secured connections internally. But what about sophisticated traffic distribution? That can only be achieved if you have a central source of knowledge. There can also be a benefit in sending only a certain number of connections to a particular host, but then how do you measure health? You can inspect traffic at layer 6 or 7, but as we have seen with smart health checks, a service that is too busy can simply reject a connection. To implement multiple strategies for the load balancer, such as round-robin, random, or more sophisticated strategies built on statistics distributed across multiple instances, you can define your own strategy on the client side.
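
As an illustration of that last point, here is a minimal Go sketch, under my own assumptions rather than any particular library, of a pluggable strategy interface with round-robin and random implementations that a client-side load balancer could choose between.

package main

import (
    "fmt"
    "math/rand"
    "sync"
)

// Strategy picks the next endpoint from a list of healthy endpoints.
type Strategy interface {
    Next(endpoints []string) string
}

// RoundRobin cycles through the endpoints in order.
type RoundRobin struct {
    mu sync.Mutex
    i  int
}

func (r *RoundRobin) Next(endpoints []string) string {
    r.mu.Lock()
    defer r.mu.Unlock()
    e := endpoints[r.i%len(endpoints)]
    r.i++
    return e
}

// Random picks an endpoint at random.
type Random struct{}

func (Random) Next(endpoints []string) string {
    return endpoints[rand.Intn(len(endpoints))]
}

func main() {
    endpoints := []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}
    var s Strategy = &RoundRobin{} // swap in Random{} or your own strategy
    for i := 0; i < 4; i++ {
        fmt.Println(s.Next(endpoints))
    }
}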

Caching

One way you can improve the performance of a service is by caching results from databases and other downstream calls in an in-memory cache or a side cache like Redis, rather than hitting the database every time. Caches are designed to deliver massive throughput by storing precompiled objects in a fast-access data store, frequently based around the concept of a hash key. We know from looking at algorithm performance that a hash table has an average performance of O(1); that is as fast as it gets. Without going too deep into Big O notation, this means it takes one iteration to find the item you want in the collection. The consequence is that not only can you reduce the load on the database, you can also reduce your infrastructure costs. Typically, a database is limited by the amount of data that can be read from and written to disk and the time it takes the CPU to process this information. With an in-memory cache, this limitation is removed by using pre-aggregated data stored in fast memory, not on a stateful device like a disk. This comes at the cost of consistency, because you cannot guarantee that all clients will have the same information at the same time.

Caching strategies can be chosen based on your requirements for this consistency. In theory, the longer the cache expiry, the greater the cost saving and the faster the system, at the expense of reduced consistency. So when planning a feature, you should talk about consistency and its tradeoffs with performance and cost, and document the decision; doing so will greatly help create a more successful implementation.

You have probably heard the phrase premature optimization, so does that mean you should not implement caching until you need it? No; it means you should attempt to predict, at design time, the initial load your system will be under and the growth in capacity over time, as you consider the application lifecycle. When creating this design you will be putting this data together, and you will not be able to reliably predict the speed at which the service will run. However, you do know that a cache will be cheaper to operate than a data store; so, if possible, you should design for the smallest and cheapest data store possible, and make provision to extend your service by introducing caching at a later date. This way you only do the work necessary to get the service out of the door, but you have done the design up front to be able to extend the service when it needs to scale.

The cache will normally have an expiry on it. However, if you implement the cache in such a way that your code decides when to invalidate it, you can potentially avoid problems when a downstream service or database disappears. Again, this comes back to thinking about failure states and asking what is better: the user seeing slightly out-of-date information, or an error page? If your cache has expired and the call to the downstream service fails, you can always decide to serve the stale cache back to the calling client. In some instances, this will be better than returning a 50x error.
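
Here is a minimal sketch in Go of that idea, using assumed names and a toy in-memory cache rather than Redis: entries carry a TTL, but a stale entry is kept around and served as a fallback when the downstream call fails.

package main

import (
    "errors"
    "fmt"
    "sync"
    "time"
)

type entry struct {
    value   string
    expires time.Time
}

// Cache is a minimal in-memory side cache with a TTL per entry.
type Cache struct {
    mu  sync.RWMutex
    ttl time.Duration
    m   map[string]entry
}

func NewCache(ttl time.Duration) *Cache {
    return &Cache{ttl: ttl, m: make(map[string]entry)}
}

func (c *Cache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.m[key] = entry{value: value, expires: time.Now().Add(c.ttl)}
}

// Get returns the cached value, whether it is still fresh, and whether it exists.
// A stale value is still returned so the caller can use it as a fallback.
func (c *Cache) Get(key string) (value string, fresh, found bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    e, ok := c.m[key]
    if !ok {
        return "", false, false
    }
    return e.value, time.Now().Before(e.expires), true
}

// fetch checks the cache first, calls downstream on a miss or stale entry,
// and serves the stale value if the downstream call fails.
func fetch(c *Cache, key string, downstream func(string) (string, error)) (string, error) {
    v, fresh, found := c.Get(key)
    if found && fresh {
        return v, nil
    }
    latest, err := downstream(key)
    if err != nil {
        if found {
            return v, nil // serve slightly out-of-date data rather than a 50x error
        }
        return "", errors.New("downstream unavailable and no cached value for " + key)
    }
    c.Set(key, latest)
    return latest, nil
}

func main() {
    c := NewCache(30 * time.Second)
    v, err := fetch(c, "user:42", func(key string) (string, error) {
        return "profile for " + key, nil // stand-in for a real downstream call
    })
    fmt.Println(v, err)
}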


Before we delve deeper into the circuit breaking pattern, let us look at a couple of patterns which will help us understand it better.

Pattern – Timeouts

A timeout is an incredibly useful pattern when communicating with other services or data stores. The idea is that you set a limit on how long you will wait for a response from a server and, if you do not receive a response in the given time, you write business logic to deal with this failure, such as retrying or sending a failure message back to the upstream service. A timeout could be the only way of detecting a fault in a downstream service. However, no reply does not necessarily mean the server has not received and processed the message, or that the server does not exist. The key feature of a timeout is to fail fast and to notify the caller of this failure.

There are many reasons why this is good practice, not only from the perspective of returning early to the client and not keeping them waiting indefinitely, but also from the point of view of load and capacity. Timeouts are an effective hygiene factor in large distributed systems, where many small instances of a service are often clustered to achieve high throughput and redundancy. If one of these instances is malfunctioning and you, unfortunately, connect to it, it can block an otherwise entirely functional service. The correct approach is to wait for a response for a set time and then, if there is no response in that period, cancel the call and try the next service in the list. The question of what duration your timeouts should be set to does not have a simple answer. We also need to consider the different types of timeout which can occur in a network request; for example, you have:

Connection Timeout – The time it takes to open a network connection to the server

Request Timeout – The time it takes for a server to process a request

The request timeout is almost always going to be the longer of the two, and I recommend defining the timeout in the configuration of the service. While you might initially set it to an arbitrary value of, say, 10 seconds, you can modify it after the system has been running in production and you have a decent data set of transaction times to look at.
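
A minimal sketch in Go of both timeouts, assuming a hypothetical downstream URL; in a real service the values would be read from configuration and tuned against production data.

package main

import (
    "fmt"
    "net"
    "net/http"
    "time"
)

func main() {
    // Connection timeout: how long we wait to open the TCP connection.
    dialer := &net.Dialer{Timeout: 2 * time.Second}

    client := &http.Client{
        Transport: &http.Transport{DialContext: dialer.DialContext},
        // Request timeout: upper bound on the whole request/response exchange,
        // normally read from the service configuration.
        Timeout: 10 * time.Second,
    }

    resp, err := client.Get("http://downstream.service.local/recommendations")
    if err != nil {
        // Fail fast: notify the caller (or try the next instance) instead of
        // keeping the upstream waiting indefinitely.
        fmt.Println("downstream call failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}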

Pattern – Back off

Typically, once a connection has failed, you do not want to retry immediately, to avoid flooding the network or the server with requests. To allow this, it is necessary to implement a back-off approach in your retry strategy. A back-off algorithm waits for a set period before retrying after the first failure; this delay then increases with subsequent failures, up to a maximum duration.

Using this strategy inside a client-called API might not be desirable, as it contravenes the requirement to fail fast. However, if we have a worker process that is only processing a queue of messages, this could be exactly the right strategy to add a little protection to your system.
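
A minimal sketch of such a retry loop in Go; the operation, delays, and attempt limit are illustrative. The delay doubles after every failure and is capped at a maximum.

package main

import (
    "errors"
    "fmt"
    "time"
)

// retryWithBackoff retries op, doubling the wait after each failure
// up to a maximum delay, and gives up after maxAttempts.
func retryWithBackoff(op func() error, maxAttempts int, base, max time.Duration) error {
    delay := base
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        err := op()
        if err == nil {
            return nil
        }
        if attempt == maxAttempts {
            return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
        }
        time.Sleep(delay)
        delay *= 2
        if delay > max {
            delay = max
        }
    }
    return nil
}

func main() {
    calls := 0
    err := retryWithBackoff(func() error {
        calls++
        if calls < 3 {
            return errors.New("connection refused") // simulated transient failure
        }
        return nil
    }, 5, 100*time.Millisecond, 2*time.Second)
    fmt.Println("result:", err, "after", calls, "calls")
}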

Pattern – Circuit breaking

We have looked at patterns like timeouts and back-offs, which help protect our systems from cascading failure in the event of an outage. Now it is time to introduce another pattern which is complementary to this duo. Circuit breaking is all about failing fast; it is a way to automatically degrade functionality when the system is under stress.

Let us consider the example of a frontend web application that depends on a downstream service to provide recommendations for services the user can use. Because this call is synchronous with the main page load, the web server will not return the page until it has successfully retrieved the recommendations. You have designed for failure and introduced a timeout of five seconds for this call. However, because there is an issue with the recommendations system, a call which would ordinarily take 20 milliseconds is now taking 5,000 milliseconds to fail. Every user who looks at the services page is waiting five seconds longer than usual; your application is not processing requests and releasing resources as quickly as normal, and its capacity is significantly reduced. In addition, the number of concurrent connections to the main website has increased because of the time it now takes to process a single page request; this adds load to the frontend, which starts to slow down. The net effect is that, if the recommendations service does not start responding, the whole site is headed for an outage.

There is a simple solution to this: you should stop attempting to call the recommendations service, return the website to normal operating speed, and slightly degrade the functionality of the services page. This has three consequences:

– You restore the browsing experience to other users on the site.

– You slightly degrade the experience in one area.

– You need to have a conversation with your stakeholders before you implement this feature, as it has a direct impact on the business.

Now, in this instance, it should be a relatively easy sell. Let's assume that recommendations increase conversion by 1%, while slow page loads reduce it by 90%; isn't it better to lose 1% rather than 90%? This example is clear cut, but what if the downstream service were a stock checking system: should you accept an order if there is a chance you do not have the stock to fulfill it?

So how does it work?

Under normal operation, like a circuit breaker in your electrical switch box, the breaker is closed and traffic flows normally. However, once the predetermined error threshold has been exceeded, the breaker enters the open state, and all requests immediately fail without even being attempted. After a period, a further request is allowed through and the circuit enters a half-open state; in this state, a failure immediately returns the breaker to the open state, regardless of the error threshold. Once some requests have been processed without any error, the circuit returns to the closed state, and only if the number of failures exceeds the error threshold will the circuit open again.
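
Here is a minimal sketch of that state machine in Go. It is an illustrative simplification, with all names and thresholds assumed: for instance, a single successful trial closes the circuit here, whereas the description above (and most real implementations) waits for several successes.

package main

import (
    "errors"
    "fmt"
    "sync"
    "time"
)

type state int

const (
    closed state = iota
    open
    halfOpen
)

// ErrOpen is returned while the circuit is open and calls are failing fast.
var ErrOpen = errors.New("circuit open: failing fast")

// Breaker opens after maxFailures consecutive errors and allows a trial
// request through once the cooldown period has elapsed.
type Breaker struct {
    mu          sync.Mutex
    state       state
    failures    int
    maxFailures int
    cooldown    time.Duration
    openedAt    time.Time
}

func (b *Breaker) Call(op func() error) error {
    b.mu.Lock()
    if b.state == open {
        if time.Since(b.openedAt) < b.cooldown {
            b.mu.Unlock()
            return ErrOpen // fail immediately, without attempting the call
        }
        b.state = halfOpen // let a trial request through
    }
    b.mu.Unlock()

    err := op()

    b.mu.Lock()
    defer b.mu.Unlock()
    if err != nil {
        b.failures++
        // In the half-open state any failure reopens the circuit immediately.
        if b.state == halfOpen || b.failures >= b.maxFailures {
            b.state = open
            b.openedAt = time.Now()
        }
        return err
    }
    // A successful call closes the circuit and resets the failure count.
    b.state = closed
    b.failures = 0
    return nil
}

func main() {
    b := &Breaker{maxFailures: 3, cooldown: 5 * time.Second}
    for i := 0; i < 5; i++ {
        err := b.Call(func() error {
            return errors.New("recommendations service timed out") // simulated outage
        })
        fmt.Println("attempt", i, "->", err)
    }
}

After the third failure the breaker opens, and the remaining calls fail fast with ErrOpen instead of waiting on the broken downstream service.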

Error behaviour is not a question that software engineering can answer on its own; business stakeholders need to be involved in this decision. When you are planning the design of your systems, talk about failure as part of your non-functional requirements and decide ahead of time what you will do when a downstream service fails.


With monolithic applications, services invoke one another through language-level method or procedure calls, which is relatively straightforward and predictable behavior. As application complexity increased, we realized that monolithic applications were not suitable for the scale and demand of modern software, so we moved towards SOA, or service-oriented architecture. The monoliths were broken into smaller chunks that each typically served a particular purpose. But SOA brought its own caveats to the picture with inter-service calls: SOA services ran at well-known, fixed locations, which resulted in static service locations and IP addresses, and reconfiguration issues on deployments, to name a few.

Microservices are easy; building microservice systems is hard

With microservices all this changes: the application typically runs in a virtualized or containerized environment where the number of instances of a service and their locations can change dynamically, minute by minute. This gives us the ability to scale our application according to the forces dynamically applied to it, but this flexibility does not come without its own share of problems. One of the main ones is knowing where your services are so you can contact them. Without the right patterns it can be almost impossible, and one of the first problems you will most likely stumble upon, even before you get your service into production, is service discovery.

With service discovery, services register with a dynamic service registry upon startup and, in addition to the IP address and port they are running on, often provide metadata, like the service version or other environmental parameters, that can be used by a client when querying the registry. Popular examples of service registries are Consul and etcd. These systems are highly scalable and have strongly consistent methods for storing the location of your services. In addition, Consul has the capability to perform health checks on each service to ensure its availability. If a service fails a health check, it is marked as unavailable in the registry and will not be returned by any queries.

There are two main patterns for service discovery:

Server-side service discovery

Server-side service discovery is, for inter-service calls within the same application, a microservice antipattern. It is the method we used to call services in an SOA environment. Typically, there is a reverse proxy which acts as a gateway to your services; it contacts the dynamic service registry and forwards your request on to the backend services. The client accesses the backend services through a known URI, using either a subdomain or a path as a differentiator.

Server-side discovery eventually runs into some well-known issues, one of them being the reverse proxy becoming a bottleneck. The backend services can be scaled quickly enough, but the proxy itself requires monitoring. It also introduces latency, which increases the cost of running and maintaining the application.

Server-side discovery also potentially multiplies the failure patterns across downstream calls, internal services, and external services. With server-side discovery you also need centralized failure logic on the server side, which abstracts most of the API knowledge away from the client, handles failures internally, keeps retrying internally, and keeps the client completely out of the loop until there is a success or a catastrophic failure.

Client-side service discovery

While server-side service discovery might be an acceptable choice for your public APIs, for internal inter-service communication I prefer the client-side pattern. This gives you greater control over what happens when a failure occurs. You can implement the business logic for retrying a failure on a case-by-case basis, and this will also protect you against cascading failure.

This pattern is similar to its server-side counterpart; however, the client is responsible for the service discovery and load balancing. You still hook into a dynamic service registry to get the information for the services you are going to call, but this logic is localized in each client, so it is possible to handle the failure logic on a case-by-case basis.
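
The shape of the client-side pattern, as a minimal Go sketch; the Registry interface, the names, and the static stand-in registry are assumptions for illustration, not the API of Consul or etcd.

package main

import (
    "errors"
    "fmt"
)

// Registry is whatever backs your dynamic service registry (Consul, etcd, ...).
type Registry interface {
    // Lookup returns the healthy instances registered for a named service.
    Lookup(service string) ([]string, error)
}

// callService resolves the service through the registry and then tries each
// instance in turn, keeping the failure logic local to the client.
func callService(r Registry, name string, call func(addr string) error) error {
    instances, err := r.Lookup(name)
    if err != nil || len(instances) == 0 {
        return errors.New("no instances available for " + name)
    }
    var lastErr error
    for _, addr := range instances {
        if lastErr = call(addr); lastErr == nil {
            return nil
        }
        // On failure, fall through and try the next instance; the retry and
        // back-off policy can be decided per call site.
    }
    return fmt.Errorf("all instances of %s failed: %w", name, lastErr)
}

// staticRegistry is a stand-in for a real registry client.
type staticRegistry map[string][]string

func (s staticRegistry) Lookup(service string) ([]string, error) {
    return s[service], nil
}

func main() {
    reg := staticRegistry{"recommendations": {"10.0.0.5:8080", "10.0.0.6:8080"}}
    err := callService(reg, "recommendations", func(addr string) error {
        fmt.Println("calling", addr)
        return nil // a real client would make the HTTP/gRPC call here
    })
    fmt.Println("err:", err)
}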


Event processing is a model which allows you to decouple your microservices by using a message queue. Rather than connecting directly to a service which may or may not be at a known location, you broadcast and listen to events which exist on a queue, such as Redis, Amazon SQS, RabbitMQ, Apache Kafka, or a whole host of other options.

The message queue is a highly distributed and scalable system, and it should be capable of processing millions of messages, so we do not need to worry about it being unavailable. At the other end of the queue there will be a worker listening for new messages pertaining to it. When it receives such a message, it processes the message and then removes it from the queue.

Due to the asynchronous nature of the event processing pattern, failures need to be handled in a programmatic way.

Event processing with at least once delivery

One of the first and most basic mechanisms is to ask for confirmation of delivery: we add the message to the queue and then wait for an ACK from the queue to let us know that the message has been received. Of course, we do not know whether the message will ultimately be delivered to the receiving service, but receiving the ACK should be enough for us to notify the user and proceed. There is always the possibility that the receiving service cannot process the message, which could be due to a direct failure or bug in the receiving service, or because the message added to the queue is not in a format the receiving service can read. We need to deal with both of these issues independently; handling errors is discussed next.

Handling Errors

It is not uncommon for things to go wrong in distributed systems, and dealing with failure is an essential factor in microservice-based software design. In the scenario above, if a valid message cannot be processed, one standard approach is to retry processing it, normally with a delay. It is important to append the error to the message every time we fail to process it, as this gives us a history of what went wrong; it also tells us how many times we have tried to process the message, because once we exceed a threshold we do not want to continue retrying and instead move the message to a second queue, a dead letter queue, which we discuss next.
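
A minimal Go sketch of that flow, with hypothetical names and in-memory stand-ins for the queues: the worker appends the error to the message, requeues it until a threshold is reached, and then moves it to the dead letter queue.

package main

import (
    "fmt"
)

// Message carries its own error history so we can see what went wrong.
type Message struct {
    ID     string
    Body   string
    Errors []string // appended on every failed processing attempt
}

const maxAttempts = 5

// handle processes one message; on failure it either requeues the message
// with the error appended or moves it to the dead letter queue.
func handle(msg Message, process func(Message) error, requeue, deadLetter func(Message)) {
    if err := process(msg); err != nil {
        msg.Errors = append(msg.Errors, err.Error())
        if len(msg.Errors) >= maxAttempts {
            deadLetter(msg) // keep it around for debugging, stop retrying
            return
        }
        requeue(msg) // normally with a back-off delay
        return
    }
    // Success: the message is simply acknowledged and removed from the queue.
}

func main() {
    msg := Message{ID: "42", Body: "order created"}
    handle(msg,
        func(m Message) error { return fmt.Errorf("schema mismatch") },
        func(m Message) { fmt.Println("requeued", m.ID, "errors so far:", len(m.Errors)) },
        func(m Message) { fmt.Println("dead-lettered", m.ID, m.Errors) },
    )
}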

Debugging the failures with Dead Letter Queue

It is common practice to remove a message from the queue once it has been processed. The purpose of the dead letter queue is that we can examine the failed messages on it to help debug the system. Since we append the error details to the message body, we know what the error is and we have the history if we need it.

Working with idempotent transactions

While many message queues nowadays offer At Most Once Delivery in addition to At Least Once, the latter is still the best option for a large throughput of messages. To deal with the fact that the receiving service may receive a message twice, it needs to be able to handle this in its own logic. One of the common methods for ensuring that a message is not processed twice is to log the message ID in a transactions table; if the message has already been processed, it is discarded.

Working with the ordering of messages

One of the common issues when handling failures with retries is receiving a message out of sequence or in an incorrect order, which can leave inconsistent data in the database. One potential way to avoid this is to again leverage the transactions table and store the message dispatch_date in addition to the ID. When the receiving service receives a message, it can then check not only whether the current message has been processed, but also whether it is the most recent message, and if not, discard it.
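
A minimal Go sketch of that check, using an in-memory map as a stand-in for the transactions table (the names and structure are illustrative): a message is processed only if it has not been seen before and its dispatch date is newer than the last one recorded.

package main

import (
    "fmt"
    "time"
)

type Message struct {
    ID           string
    DispatchDate time.Time
    Body         string
}

// seen plays the role of the transactions table: for each message ID it
// records the dispatch date of the last message that was processed.
var seen = map[string]time.Time{}

// shouldProcess returns false for duplicates and for messages that arrive
// out of order (older than the one already processed).
func shouldProcess(m Message) bool {
    last, ok := seen[m.ID]
    if ok && !m.DispatchDate.After(last) {
        return false // duplicate or stale message: discard it
    }
    seen[m.ID] = m.DispatchDate
    return true
}

func main() {
    now := time.Now()
    fresh := Message{ID: "order-42", DispatchDate: now, Body: "status=paid"}
    stale := Message{ID: "order-42", DispatchDate: now.Add(-time.Minute), Body: "status=pending"}

    fmt.Println(shouldProcess(fresh)) // true: first time we see this message
    fmt.Println(shouldProcess(stale)) // false: older dispatch date, discarded
    fmt.Println(shouldProcess(fresh)) // false: already processed
}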

Working with atomic transactions

This is a common issue found when moving legacy systems to microservices. When storing data, a database can be atomic: that is, all operations occur, or none do. Distributed transactions do not give us the same kind of guarantee we find in a database, where, if part of a transaction fails, we can roll back the other parts. With this pattern we instead only remove the message from the queue if processing succeeded, so when something fails, we keep retrying. This gives us a kind of eventually consistent transaction.

Unfortunately, there is no one-size-fits-all solution with messaging; we need to tailor the solution to the operating conditions of the service.


Summarizing the information for setting up the development environment for my first project in Kata Containers: I have set up the dev environment for the proxy project.

First Things First

Install Golang as a prerequisite for development. Ensure you follow the complete steps to create the required directory structure and to test the installation.

Get The Source

This guide assumes you have already forked the proxy project. If not, please fork the repo first. Once you have successfully forked the repo, clone it on your computer:

git clone https://github.com/<your-username>/proxy.git $GOPATH/src/github.com/<your-username>/proxy

Add the upstream proxy project as a remote to the local clone to fetch updates:

$ cd proxy
$ git remote add upstream https://github.com/kata-containers/proxy.git

The proxy project requires the following dependencies to be installed prior to building. Use the following commands to install them:

$ go get github.com/hashicorp/yamux
$ go get github.com/sirupsen/logrus

Do the first build. This will create the executable file kata-proxy in the proxy directory.

$ make
go build -o kata-proxy -ldflags "-X main.version=0.0.1-02a5863f1165b1ee474b41151189c2e1b66f1c40"

To run the unit tests, run:

$ make test
go test -v -race -coverprofile=coverage.txt -covermode=atomic
=== RUN TestUnixAddrParsing
--- PASS: TestUnixAddrParsing (0.00s)
=== RUN TestProxy
--- PASS: TestProxy (0.05s)
PASS
coverage: 44.6% of statements
ok github.com/coolsvap/proxy 1.064s

To remove all generated output files, run:

$ make clean
rm -f kata-proxy

That is it for this time. I am working on setting up the development environment with the GoLand IDE. I will keep you posted.


Expert Talks 2017 was my first time participating in the Expert Talks conference, held in Pune. The conference started a couple of years ago as an elevated form of the Expert Talks meetup series by Equal Experts, and this year's edition had a very good mix of content. It included talks on a variety of topics including blockchain, containers, IoT, and security, to name a few. This was the first edition of the conference with a formal CFP, which attracted 50+ submissions from different parts of the country, out of which 9 talks were selected. This year the conference was held at the Novotel Hotel, Pune.

The conference started at the registration desk, which was well organized so that everyone registered could pick up their kit. Even for a conference scheduled on a Saturday, the attendance was quite noticeable. The event started with a welcome speech to all participants and speakers.

The first session, delivered by Dr. Pandurang Kamat on demystifying blockchain, was a very good start to the event, covering a much-anticipated and much-discussed topic. He covered the ecosystem around blockchain in enough detail for everyone to understand the most popular blockchain application, Bitcoin. He also gave an overview of open source frameworks for blockchain implementations, like Project Hyperledger.

The following session, Doveryai, no proveryai – an introduction to TLA+, delivered by Sandeep Joshi, was well received by the audience, as the topic was pretty unique in name as well as content. The session started a bit slowly, with the audience taking in the details of TLA+ and PlusCal. It was well scoped, with some basic details and a hands-on demo. The model checker use case was well received after looking at real-world applications, and we had the first coffee break of the day after it.

Mr. Lalit Bhatt started well with his session Data Science – An Engineering Implementation Perspective, which discussed the mathematical models used for building real-world data science applications and explained the current use cases in his organization.

Swapnil Dubey and Sunil Manikani from Schlumberger gave good insight into their microservices strategy with containers, using building blocks like Kubernetes, Docker, and GKE. They also presented how they are using GCE capabilities to effectively reduce operational expenses.

Alicja Gilderdale from Equal Experts presented some history of container technologies and how they evaluated different container technologies for one of their projects. She also shared some insights into the challenges and lessons learned throughout their journey. The end of this session led the participants into the lunch break.

Neha Datt, from Equal Experts, showcased the importance of the product owner in the overall business cycle in today's changing infrastructure world. She provided some critical thinking points on bridging the gap between the business, architecture, and development teams, and on how a product manager can be the glue between them.

Piyush Verma took the Data Science – An Engineering Implementation Perspective discussion forward with his thoughts on distributed data processing. He showcased typical architectures and deployments in distributed data processing by splitting the system into layers and defining the relevance, need, and behavior of each. One of the core attractions of the session was the hand-drawn diagrams incorporated into his presentation, which he had prepared as homework for the talk.

After the second official coffee break of the day, Akash Mahajan enlightened everyone on the most crucial requirement of today's distributed workloads living on public clouds: security. He walked everyone through the different requirements for managing secrets, using a HashiCorp Vault example, while explaining the advantages and caveats of the approach.

The IoT, Smart Cities, and Digital Manufacturing discussion was well placed, applying most of the concepts learned throughout the day to real-world problems. Subodh Gajare provided details on the IoT architecture and its foundation, with requirements related to mobility, analytics, big data, cloud, and security. He gave very useful insights into upcoming protocol advances and the usage of fog and edge computing in the smart city application of IoT.

It was a day well spent with some known faces and an opportunity to connect with many enthusiastic IT professionals in Pune.


OpenStackers,

I am Swapnil Kulkarni (coolsvap). I have been an ATC since Icehouse, and I wish to take this opportunity to throw my hat in the ring for election to the OpenStack Technical Committee this election cycle. I started contributing to OpenStack after an introduction at a community event, and since then I have always used every opportunity I had to contribute to OpenStack. I am a core reviewer in the kolla and requirements groups. I have also been active in efforts to improve overall participation in OpenStack, through meetups, mentorship, and outreach to educational institutions, to name a few.

My focus during my TC term would be to make it easier for people to get involved in, participate in, and contribute to OpenStack, to build the community. I have had a few hiccups in the recent past with community engagement and contribution activities, but my current employment gives me the flexibility every ATC needs, and I would like to take full advantage of it and increase my level of contribution.

Please consider this my application and thank you for your consideration.

[1] https://www.openstack.org/community/members/profile/7314/swapnil-kulkarni
[2] http://stackalytics.com/report/users/coolsvap
[3] https://review.openstack.org/510402
