Central Office Re-architected as a Datacenter (CORD) is a general-purpose service delivery platform. It can be configured to host services supporting residential use cases (Residential CORD), enterprise use cases (Enterprise CORD), mobile technologies (M-CORD), cross-cutting capabilities (Analytics for CORD), and emerging edge applications (IoT, gaming, VR).
M-CORD, the open reference solution for a service-driven 5G architecture, includes end-to-end open source slicing from the programmable Radio Access Network (RAN) to the disaggregated and virtualized Evolved Packet Core (EPC), along with M-CORD Mini. These open source building blocks will facilitate the transition from the traditional Central Office to a virtualized data center. In this blog, we discuss and compare two approaches to M-CORD deployment, top-down vs. bottom-up, and put forth the pros and cons of each.
Top-down approach for M-CORD Deployment
M-CORD uses XOS as its orchestration framework, and there is growing traction toward using ONAP.
XOS brings an Everything-as-a-Service (XaaS) approach to CORD. It is a service orchestration layer built on top of OpenStack and ONOS that manages scalable services running in a CORD deployment. XOS unifies the management of SDN-based control applications, NFV-based data plane functions, and traditional cloud services.
The Open Network Automation Platform (ONAP) provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, enabling IT providers and developers to rapidly automate new services. Both XOS and ONAP take a top-down approach: an orchestration framework is chosen first, offering many built-in, production-ready features such as redundancy, infrastructure monitoring, automated deployment, and scalability. The user then builds the required components/services and deploys them.
Overall this looks promising, given the ready-to-use built-in features. However, when it comes down to deployment, we might see performance challenges in the components deployed through the framework (specifically data plane NFV) with respect to networking, which is the core requirement in the telecom world. If we hit such an issue, it will surface at a very late stage; we might end up spending considerable time investigating and bottoming out the problem, which could lead to critical changes to the heart of the entire framework.
Bottom-up approach for M-CORD Deployment
Considering the cons of the top-down approach, we decided to explore the bottom-up approach, where we focus on system performance as the foremost acceptance criterion. While researching the roadblocks to system performance, we discovered that we might require system-level architecture or networking changes. We first carried out performance analysis for a DPDK-based data plane node on bare metal, and then with KVM virtualization. We also tried implementing a microservice platform bottom-up using Docker instead of Kubernetes/Mesos, choosing the networking option that gives us optimum performance. This approach helped us understand the performance bottlenecks at the lower level. However, we now need to develop the deployment framework from the ground up.
After carefully analyzing both approaches, we realized that they complement each other, and we are working toward converging them into an optimal solution. We at Great Software Laboratory are weighing the benefits provided by the built-in open-source M-CORD framework against the performance bottlenecks at the ground level.
The A/B testing feature gives you an effective way to test new website content, processes, workflows, etc. by routing production traffic into multiple slots. At a very high level, you split your website’s traffic between two deployments of code and measure the success of each version of the site based on your needs. Normally this simple-looking (but actually much more complex) feature requires deploying third-party tools. However, Azure App Service lets us set it up very quickly with the help of the “Testing in Production” feature and “Deployment Slots”.
Setup & configuration
To start, we need at least 2 deployment slots (A & B) for the Web App. I’ve created a “To-Do list creation” Java app following this tutorial and deployed it to the default (production) slot. To clearly differentiate, I added a “Production” label on my home page as below.
As our production slot is ready, we need slot “B” for A/B testing. Typically, in a real-world scenario, this would be the slot containing the new features/enhancements for which you want to gauge the end user’s experience.
As we already explained in the blog here, I created a slot called “staging” and deployed an updated build of my app with some minor changes: adding a creation-date column to the task list and changing the site label to “Staging” to quickly recognize the difference.
Now we have our 2 slots (production & staging) ready for A/B testing. To configure the traffic routing rule, navigate to the Web App blade of your production slot > Testing in Production. This displays the existing traffic routing setup for the slots. As we can see, by default all traffic is routed to the production slot.
You can change the traffic percentages as you wish and save the configuration. Your site is now configured for A/B testing: production traffic will be split, and users will experience the new features based on the rule settings.
Testing this feature can be done in a number of ways, from automated tools to simply hitting refresh in the browser. For this tutorial I created a quick and simple console application that hits the production site, counts the number of times each version comes up, and calculates the percentages for me. The code is very simple: it uses HttpClient to hit the production slot URL, checks the response content, and prints the percentage of hits for the prod and staging slots on the console.
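A minimal sketch of such a console app follows. The “Production”/“Staging” label text used to classify each response comes from the setup above; the production URL and the HttpClient fetch loop are only indicated in comments so that the sketch runs offline.

```java
import java.util.Map;

public class AbTrafficCounter {
    // Classify a response body by the page label added earlier
    // ("Production" vs. "Staging" text on the home page).
    static String classify(String body) {
        return body.contains("Staging") ? "staging" : "prod";
    }

    // Percentage of hits that landed on each slot.
    static Map<String, Double> percentages(int prodHits, int stagingHits) {
        double total = prodHits + stagingHits;
        return Map.of("prod", 100.0 * prodHits / total,
                      "staging", 100.0 * stagingHits / total);
    }

    public static void main(String[] args) {
        // In the real app, a java.net.http.HttpClient loop GETs the
        // production URL (e.g. https://mysite.azurewebsites.net) N times
        // and feeds each response body to classify(). Simulated here:
        String[] bodies = {"... Production ...", "... Staging ...",
                           "... Production ...", "... Staging ..."};
        int prod = 0, staging = 0;
        for (String body : bodies) {
            if (classify(body).equals("prod")) prod++; else staging++;
        }
        System.out.printf("No of Hits : %d, Prod hit %d times, Staging hit %d times%n",
                bodies.length, prod, staging);
    }
}
```

Only the classification and percentage logic matters here; swapping the simulated bodies for real HTTP responses reproduces the output shown below.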
Running this app displays all the information I need to ensure the Traffic Routing is properly configured and the percentages are in line with my setup.
############################# Result ##################
No of Hits : 50, Prod hit 25 times, Staging hit 25 times
############################# End #####################
In actual cases, testing methods would go much deeper and can leverage App Service features such as “Application Insights” to capture metrics on user behavior.
In this post, we saw how you can very quickly and simply set up A/B testing on any website hosted on Azure App Services. A/B testing provides site owners an amazing way to test new features, layouts, and more. In the next blog we will explore the setup and usage of “Application Insights” extension in Web App.
We discussed creating a web app on Azure using Visual Studio here and through Eclipse here. In this blog, we’ll go through Kudu, a management and monitoring service available for every App Service on Azure.
Web Apps Dashboard on Azure
Once the Web App is created successfully, we can see the Web App management dashboard as below.
Fig. 1: Web App Dashboard
It lists the App URL, App Service Plan, and FTP server details. Azure deploys the binaries to the server and provides very limited access for updating the binaries and log files through FTP.
Web Apps Management & Troubleshooting through “Kudu”
Every Azure Web App includes a “hidden” or “background” service site called Kudu. It is useful for capturing memory dumps, looking at deployment logs, viewing configuration parameters, and much more.
We can access the Kudu service through the portal by navigating to Web App dashboard > Advanced Tools > Click on Go.
Fig. 2: Kudu Access through Azure Portal
Another, more direct method is to modify your web app’s URL. Specifically, you insert the site control manager (“scm”) token into the host name: for example, https://mysite.azurewebsites.net becomes https://mysite.scm.azurewebsites.net.
If you’ve mapped your own public DNS name to your web app, then you’ll still need to use the original *.azurewebsites.net DNS name to access Kudu.
Kudu Console General Navigation
The following screenshot shows an Azure web app’s Kudu dashboard with the Debug console and Tools menus simultaneously exposed.
Fig. 3: Kudu Dashboard
The Environment page gives Azure website administrators several pieces of valuable information, including system information, app settings, connection strings, and environment variables.
This data is enormously helpful to have, especially when you recall that Azure web apps use a platform-as-a-service (PaaS) model in which we have limited direct control of the underlying Hyper-V virtual machine(s).
Of course, that data (especially raw connection strings) is sensitive, so accessing the Kudu console requires authenticating yourself as an Azure administrator.
Let’s review some other useful Kudu-based web app administrative tasks.
Retrieve Diagnostic Dump
Azure PaaS web apps run on Windows Server VMs with Internet Information Services (IIS), which offers verbose logging options. In Kudu, fetch the diagnostic logs by clicking Tools > Diagnostic Dump. This yields a .zip file containing the log data, current as of the time it was generated.
View running processes
Click Process Explorer on the Kudu top navigation bar to see a stripped-down, web-based version of Windows Task Manager. This is a read-only view of your PaaS VM’s running processes, mainly useful for spotting processes that consume too many resources in your web app.
Launch a Debug console
Kudu provides powerful command-line access for monitoring logs and the various folders on our VM. Because we don’t have full-stack access to the underlying VM the way we do in the Azure infrastructure-as-a-service (IaaS) scenario, we can use the Debug console to view and modify data with PowerShell and CMD.
Fig. 4: Kudu Debug Console
Site extensions are utilities provided by Azure for Web Apps; they can be viewed and managed through Kudu’s Site Extensions tab. The best example of such a utility is “Application Insights”, which provides complete monitoring of the deployed Web App. Additional extensions can be browsed through the Gallery.
REST API endpoints
The Kudu dashboard has a REST API section that lists various service data endpoints for your web app. For instance, I can use a REST API endpoint to update the WAR file for my Java web application.
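As an illustration, Kudu exposes a /api/wardeploy endpoint for WAR deployments. Here is a hedged sketch of building such a request with Java’s HttpClient; the site name and basic-auth credentials are placeholders, taken in practice from your app’s publish profile.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class WarDeploy {
    // Build the Kudu wardeploy request for a given site. A real deployment
    // would send this with java.net.http.HttpClient.send(...).
    static HttpRequest buildRequest(String siteName, String user, String pass,
                                    Path war) throws IOException {
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes());
        return HttpRequest.newBuilder(
                URI.create("https://" + siteName + ".scm.azurewebsites.net/api/wardeploy"))
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofFile(war))
                .build();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder site name and credentials; the WAR is a temp file here.
        Path war = Files.createTempFile("app", ".war");
        HttpRequest req = buildRequest("mysite", "$user", "$pass", war);
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The same call can of course be made from curl or a CI pipeline; the request shape (POST of the raw WAR bytes with basic auth) is what matters.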
The Kudu site always connects to a single instance, even when the Web App is deployed on multiple instances. For example, if the site is hosted in an App Service plan scaled out to 3 instances, Kudu will still connect to only one instance at a time. However, there is a way to access Kudu on a specific instance using the ARRAffinity cookie.
The ARRAffinity cookie is unique and bound to an instance; with multiple instances there is a corresponding cookie value for each. So we can capture the ARRAffinity cookie from an HTTP response and send it back in subsequent requests to force Kudu to connect to a specific instance, provided the cookie is enabled in the Web App settings.
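A sketch of how a client might pin Kudu to one instance follows; the host name and instance id below are hypothetical placeholders, with the real value copied from the Set-Cookie header of a prior response.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class KuduInstanceRequest {
    // Build a request that pins Kudu to a specific instance by replaying
    // the ARRAffinity cookie captured from an earlier response.
    static HttpRequest forInstance(String kuduUrl, String instanceId) {
        return HttpRequest.newBuilder(URI.create(kuduUrl))
                .header("Cookie", "ARRAffinity=" + instanceId)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Hypothetical Kudu endpoint and instance id.
        HttpRequest req = forInstance(
                "https://mysite.scm.azurewebsites.net/api/processes",
                "abc123");
        System.out.println(req.headers().firstValue("Cookie").orElse(""));
    }
}
```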
We have explored various capabilities of the Kudu service for managing and monitoring a Web App in Azure App Service. Keep following our blog series, where we will further explore Azure Web Apps features like authentication, deployment slots, and A/B testing.
Imagine coming to office from a long commute, being greeted with a warm hello, having the coffee maker brew your favorite coffee, turning on the lights and the AC in your cabin to your preferred settings and having the entire day’s schedule chalked out in front of you, at your command! Who wouldn’t want such a positive start for their day?
Wake up! This is no longer a distant dream but the near future of the workplace, soon to be a reality with smart assistants like Alexa and Google Home. The concept of smart homes and devices has been around for quite some time now. We have been using smart assistants to order our meals, play our favorite songs, make shopping lists, and navigate to our favorite destinations. In fact, real estate agents these days are integrating virtual assistants with homes and courting potential home buyers with home automation and security solutions. These smart solutions work because they cater to two crucial aspects of living for buyers: safety and security, and comfort. It is a well-known fact that we spend almost one-third of our lives at work, so why not leverage technology to add safety, comfort, and convenience to our workspace? For millennials who have grown up with technology, adept at using Amazon, Swiggy, Flipkart, and Uber at their fingertips, workplace automation is the next logical step in improving their lives.
While the future of work is one of the hottest topics of discussion, it is often debated. There are many speculations about how artificial intelligence (AI) and robotics will affect our jobs, skills, and income. Will technology take away our jobs or make us more productive? If you look at history, there were similar speculations during the industrial revolution, when people were concerned about machines taking over their jobs. Back then, machines helped humans perform strenuous or mechanical tasks like digging wells, transporting heavy goods, or assembling products. However, machines today, equipped with newer technologies like Machine Learning (ML) and Artificial Intelligence (AI), are far smarter. They no longer merely automate routine tasks; they seem to be doing wholly different things and opening up new possibilities. Machines today can perform cognitive tasks using tacit knowledge, tasks that don’t necessarily involve a fixed set of instructions performed in sequence. They are actually learning: they can discover patterns and find innovative solutions on their own. So what exactly will change?
The future workspace will no longer be confined to a physical space, and working hours won’t be restricted to the nine-to-five relic. A smart office would be more of a virtual construct: a collaborative, well-connected, comfortable co-working space located literally anywhere in the world. Work hours will be spaced out to accommodate workers’ lives. Smart devices like intelligent sensors and network edge devices will be used to create more personalized and less interruptive workspaces. To accommodate a variable number of co-workers in a shared office space, configuring workplace settings like lighting, AC, and desktop preferences for each user will be of utmost importance. Future offices will have cloud connectivity built into devices like ACs, coffee machines, printers, elevators, and lighting systems to make configuration changes on the fly, reducing wait times and optimizing working conditions.
According to Gartner research, there are six assertions about how we will work in 2028.
Stay tuned for our next blog where we uncover some use cases on workspace automation.
With technological advances in healthcare delivery, clinicians and healthcare delivery center staff now have access to an overwhelming number of databases capturing vital clinical information: the patient’s health record, administrative information, and medical providers’ orders guiding the process of patient care. Yet, based on our interactions with healthcare delivery centers, only a few resources can adequately describe how to use existing data from available sources for enhanced care delivery.
Data by itself is of little use unless the data sources can be easily integrated, accessed, and shared with the right stakeholders. Research reveals that 65 percent of sentinel events (a euphemism for patient deaths) are caused by communication failures. Today, there is a huge communication gap between clinicians, care delivery staff, patients, and labs, generating a need to integrate data from monitoring and therapy devices with lab results, entries in electronic medical records, and clinical information systems across the healthcare delivery center.
Although capturing, accessing, analyzing, and sharing information are essential to healthcare delivery, the Indian healthcare sector as a whole has historically lagged most other industries in the use of information and communication technologies. Moreover, most healthcare-related IT investments concentrate on the administrative side of the business rather than on clinical care.
Need for Connectivity and Remote Access
One of the greatest benefits of connected systems in acute care is supporting patient safety with efficient workflows for timely decision-making. As noted above, failure to communicate with the right stakeholders in time accounts for 65 percent of sentinel events in the healthcare industry.
Integrating patient data that is currently siloed across multiple systems and departments will help care delivery centers provide continuity of care to patients. It would also help reduce the expenses associated with manually correlating databases that collect and store patient care information from diverse data sources, and with inadvertently duplicated tests and diagnoses.
Over the years, technology companies have helped healthcare delivery centers make comprehensive clinical data available at the point of care, using HL7 and ASTM interfaces to facilitate the exchange of patient information between the network and other hospital systems. The imported data can be pulled into custom-built applications, empowering clinicians and care delivery staff with patient data (monitoring-system data, patient history, lab reports) at the click of a button for timely decision-making.
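To make the HL7 side concrete, here is a minimal, hypothetical sketch of reading an HL7 v2 message: segments are separated by carriage returns and fields by the pipe character. A real interface engine must also handle components, repetitions, and escape sequences; the sample message below is invented.

```java
import java.util.HashMap;
import java.util.Map;

public class Hl7Reader {
    // Split an HL7 v2 message into segments (by carriage return) and
    // fields (by '|'), keyed by segment name. A sketch only: repeated
    // segments of the same name would overwrite one another here.
    static Map<String, String[]> parse(String message) {
        Map<String, String[]> segments = new HashMap<>();
        for (String line : message.split("\r")) {
            String[] fields = line.split("\\|", -1);
            segments.put(fields[0], fields);
        }
        return segments;
    }

    public static void main(String[] args) {
        // Hypothetical two-segment message: header plus patient identification.
        String msg = "MSH|^~\\&|LAB|HOSP|||20240101||ORU^R01|1|P|2.3\r"
                   + "PID|1||12345||Doe^John";
        Map<String, String[]> m = parse(msg);
        System.out.println(m.get("PID")[5]); // patient name field (PID-5): Doe^John
    }
}
```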
The tagline ‘Speeding kills’ might be acceptable on the road, but the lack of speed in your internet connection can make you want to kill your Internet Service Provider (ISP). In today’s fast-paced business world, everyone is looking for speed: faster browsing, faster connections, faster responses, faster downloads, and faster uploads. Enterprises today are investing in the fastest internet connections they can afford. In this blog, let’s explore why we need a faster internet connection.
Better connectivity with more devices: We live in a connected world, using more smart devices and apps than ever before. As more of these devices connect to the internet, they consume more bandwidth and require higher internet speeds to exchange data. Quick response time is a measure of reliability, and faster internet connections ensure that businesses deliver the right information at the right time.
Better collaboration and productivity: Good connectivity fosters collaboration and in turn increases productivity. With faster internet, employees can perform the same tasks more efficiently, meet deadlines, back up data more frequently, work remotely, and help organizations maintain business continuity.
Embracing newer technologies: More and more businesses today are shifting to the cloud to scale their apps. However, poor internet connectivity increases response times from these hosted apps and decreases productivity. Enterprises are also preferring VoIP (Voice over Internet Protocol) over landlines for seamless and cost-effective communication. While this is good for business sustenance, VoIP connections require high internet speeds: roughly 100 kbps of bandwidth for each VoIP conversation.
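Given the roughly 100 kbps-per-call figure above, sizing a link for concurrent VoIP calls is simple arithmetic; a quick sketch (the 50-call office is a made-up example):

```java
public class VoipBandwidth {
    // Bandwidth needed for N concurrent calls, given a per-call rate in kbps.
    static double requiredMbps(int concurrentCalls, double kbpsPerCall) {
        return concurrentCalls * kbpsPerCall / 1000.0;
    }

    public static void main(String[] args) {
        // e.g. an office running 50 simultaneous VoIP calls at ~100 kbps each
        System.out.println(requiredMbps(50, 100.0) + " Mbps"); // 5.0 Mbps
    }
}
```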
Data-rich content: Information exchange is becoming more visual these days, with resource-heavy images, graphics, audio, and video files. Users prefer watching videos over reading content, and remote employees prefer video conferencing over telephonic meetings. While data-rich content enhances user experience, it also consumes huge bandwidth and hence requires a faster internet connection.
Basically, fast internet is like bread and butter: what was considered fast a couple of years ago is slow today, and the need for speed grows day by day. While most companies invest in the fastest internet connections they can afford, software giants like Google are finding innovative ways to better utilize the available bandwidth and maximize ROI. Even with high-speed internet, customers complain about slow websites or poor-quality videos due to network congestion. Recently, Google launched a new network congestion control algorithm, called BBR (for Bottleneck Bandwidth and Round-trip propagation time), to improve network speeds. BBR helps a sender detect impending congestion and slow down data transfers accordingly.
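BBR itself lives in the Linux TCP stack, but the quantity it models, the bandwidth-delay product (BDP) of the path, is easy to illustrate. A small sketch follows; the 10 Mbps / 40 ms path is a hypothetical example, not anything from Google’s implementation.

```java
public class BdpEstimate {
    // BBR characterizes a path by its bottleneck bandwidth and minimum
    // round-trip time; their product (the BDP) is roughly how much data
    // can be in flight without building a queue at the bottleneck.
    static long bdpBytes(double bottleneckMbps, double minRttMs) {
        return (long) (bottleneckMbps * 1_000_000 / 8 * minRttMs / 1000.0);
    }

    public static void main(String[] args) {
        // e.g. a 10 Mbps bottleneck with a 40 ms minimum RTT
        System.out.println(bdpBytes(10, 40) + " bytes in flight"); // 50000 bytes
    }
}
```

Loss-based algorithms keep sending until the bottleneck queue overflows; pacing near this BDP instead is what lets BBR keep throughput high without the queuing delay.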
Click here to read more about Google’s BBR. Also, check out our blog on network congestion control.