The multiple interdependent proprietary layers in the SAP environment complicate troubleshooting SAP application issues. Monitoring SAP to capture the end user’s perspective is easier said than done. With Riverbed SteelCentral Aternity, IT teams can ensure an excellent end user experience for SAP.
Enterprises rely on SAP Solution Manager Diagnostics (SMD), Computing Center Management System (CCMS), System Guard for SAP®, or third-party SAP monitoring tools for checking the availability and response time of the data center components and cloud infrastructure supporting SAP. But IT requires additional SAP monitoring capabilities to understand the actual end user experience of the workforce as they use SAP business applications in the course of their jobs.
Change management in SAP is complex, whether it is transitioning from ECC to S/4HANA or upgrading to the latest enhancement pack of SAP ERP or version of SAP Fiori. SAP monitoring tools should give IT visibility into end user experience to ensure these changes actually result in better service. After all, what counts is the user experience your finance, manufacturing, sales, or support employees see when they conduct SAP transactions.
Monitoring SAP with Riverbed End User Experience Monitoring
Aternity monitors end user experience for any type of enterprise app running on laptops, PCs, VDI, or mobile devices
Riverbed SteelCentral Aternity augments the SAP monitoring capabilities available from SAP itself and third-party vendors. It automatically monitors and correlates the three streams of data that constitute true user experience: user productivity, device health and performance, and application performance.
Domain-specific IT management products monitor performance and availability of a portion of the IT infrastructure. But SteelCentral Aternity monitors the end user experience of SAP, and every other business critical application, from the perspective of the end user’s device. With SteelCentral, you can address a broad set of IT Service Management challenges for the entire IT organization and the line of business.
Troubleshooting SAP business applications from the user’s perspective
SteelCentral augments SAP monitoring products, like SMD, CCMS, or System Guard, by monitoring the “click to render” time of the major steps within SAP Transaction Code processes. It notifies IT operations when response time exceeds automatically generated baselines or manually established thresholds. With SteelCentral Aternity, you get the following capabilities for troubleshooting SAP.
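The baseline idea can be illustrated with a short sketch. This is hypothetical code, not Aternity’s actual algorithm: a rolling mean and standard deviation over recent response times define a dynamic threshold, and any “click to render” sample above it triggers an alert.

```python
from collections import deque
from statistics import mean, stdev

class ResponseBaseline:
    """Flag response times that exceed a rolling baseline.

    Illustrative only -- Aternity's real baselining logic is proprietary.
    """
    def __init__(self, window=20, sigmas=3.0):
        self.samples = deque(maxlen=window)  # recent response times (ms)
        self.sigmas = sigmas                 # how far above normal counts as slow

    def record(self, response_ms):
        """Return True if this sample breaches the current baseline."""
        breach = False
        if len(self.samples) >= 5:  # wait for a few samples before judging
            mu, sd = mean(self.samples), stdev(self.samples)
            breach = response_ms > mu + self.sigmas * sd
        self.samples.append(response_ms)
        return breach

baseline = ResponseBaseline()
for t in [210, 195, 205, 200, 198, 202, 207]:
    baseline.record(t)           # normal "Search Account" timings
slow = baseline.record(950)      # a sudden spike breaches the baseline
```

A manually established threshold would simply replace the `mu + sigmas * sd` expression with a fixed value.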
Troubleshoot slow response in SAP business applications by monitoring the response time of the key steps within the SAP Transaction Code process, isolating the source of delay, and displaying information by department, location, device type, or backend. In this case, the network is the primary source of delay for the Search Account transaction.
The dashboard below shows how you can isolate the likely cause of end user problems by analyzing the characteristics shared by affected users. Then you can drill down into the details of the application or device to investigate issues.
Monitor the end user experience of any local, cloud, or enterprise mobile app
SteelCentral enables End User Services teams to troubleshoot end user issues quickly by correlating application performance and health (including key steps within an SAP Transaction Code process), as seen by the end user, to the performance and health of the user’s device. End User Services teams can use SteelCentral for troubleshooting and monitoring SAP in the following ways.
Validate user complaints automatically. No need for excessive user interrogation or stopwatch timing.
Isolate problems to the user’s device, the network, or the backend to reduce finger-pointing.
Resolve issues quickly by drilling into device details to pinpoint device components causing the problem.
Review all of a user’s applications running on any device. Identify every business activity performed, including major steps of SAP Transaction Code processes, and track response time vs. baseline. Use color-coded status to immediately validate complaints of poor application performance. In this case, the majority of delay in the “Search Account” transaction is due to the S/4HANA backend.
Validate the impact of IT change on workforce productivity and customer service
With SteelCentral, business and IT executives can measure the impact on end user experience of strategic IT projects, like cloud, mobility, and data center transformation. They can do the same for more routine, tactical changes like upgrading to the latest version of SAP ERP and Fiori or making changes to the S/4HANA backend.
Validate the impact of change by analyzing end user experience before and after a change to infrastructure, applications, or devices, to ensure the desired results are achieved.
Quantify the financial impact of app performance on workforce productivity by analyzing every transaction made on business critical apps, including key steps within SAP Transaction Code processes.
Determine where investment is needed most by analyzing Transaction Code process step performance relative to SLAs, by department or geography.
Analyze trends in app adoption across the enterprise to track the effectiveness of key strategic initiatives like cloud, mobile, and virtualization.
Validate the impact of a change to applications like SAP ERP or Fiori by comparing the response time of key SAP Transaction Code process steps before and after the change. In this case, the change results in slower response for the Search Account and Create Account transactions.
Start monitoring SAP today
SteelCentral Aternity helps you ensure the reliability of SAP and every other business-critical application, running on mobile, virtual, and physical devices. With SteelCentral, you have options for monitoring SAP. In addition to on-premises deployment, SteelCentral Aternity can be run in the cloud, enabling customers to get up and running fast, with no major capital investment, hardware provisioning, or server deployment.
An amazing 92% of organizations are running services in the public cloud.1 This growth is expected to continue at an impressive rate and by 2020, enterprises plan to run as much as 83% of their workloads in the cloud.2
Currently, AWS offers the largest global footprint in the public cloud market. No other cloud provider offers as many regions with multiple Availability Zones. Among enterprises, AWS has an impressive 68% share of adoption.3
Unfortunately, IT frequently falls short of these expectations. They are often unable to detect problems in cloud environments before they impact users due to the distributed, transitory nature of cloud ecosystems and the lack of cloud visibility. In fact, there’s a 60% chance that end users will notice issues before IT in the cloud, while there’s only a 38% chance on-premises.4
Organizations across the board report that managing cloud service providers is harder than managing on-prem. This is, in part, tied to a lack of visibility into the physical cloud infrastructure and difficulty pinpointing where the problem lies.
That’s about to change…
Riverbed and AWS collaborate to solve cloud performance problems
At re:Inforce in Boston today, AWS is introducing Amazon VPC traffic mirroring, allowing customers to gain native insight and access to the network traffic across their VPC infrastructure for performance and threat monitoring. With this feature, customers can copy network traffic at any Elastic Network Interface (ENI) in their VPC, and send it to SteelCentral AppResponse Cloud for network and application analysis in order to monitor and troubleshoot performance issues.
Traffic mirroring provides a native AWS solution that simplifies operations by allowing customers to mirror their VPC traffic without packet-forwarding agents. Today, customers have to install and manage third-party agents on EC2 instances to capture and mirror VPC traffic, which poses scaling challenges and increases operational burden. Traffic mirroring also improves security by capturing packets at the Elastic Network Interface level rather than from user space within the instance, which cannot be trusted if the instance is already compromised.
SteelCentral AppResponse Cloud provides rich, unparalleled network and application visibility into the AWS Cloud. It enables IT Operations to quickly pinpoint performance degradations and high latency in cloud and hybrid networks. AppResponse Cloud automatically identifies more than 2,000 applications for detailed application analysis and helps identify and troubleshoot network issues faster.
SteelCentral AppResponse Cloud can be deployed stand-alone or in combination with physical and virtual SteelCentral AppResponse appliances to provide seamless, end-to-end network and application analysis across on-premises, virtual, and cloud environments.
Support for Amazon VPC traffic mirroring is part of our upcoming release of AppResponse Cloud, currently tracking for a September release. SteelCentral AppResponse Cloud currently supports SteelCentral Agent, Cisco CSR 1000v, Big Switch Big Monitoring Fabric, Gigamon GigaSECURE Cloud, and Keysight/IXIA CloudLens for packet capture in the AWS Cloud environment.
Things got hot at Velocity 2019, and I don’t just mean the unseasonably hot weather that topped 100 degrees or the power outage at the SJ Marriott. If you weren’t able to make it, enjoy the air conditioning and read on to learn how companies are embracing cloud-native technologies to boost customer and employee satisfaction. As attendee and SRE Liz Fong-Jones tweeted, “There are only two things your organization should be spending time on: making your customers happy, or making your people happy.” Below we share six buzzwords that we heard everywhere at Velocity 2019 and think you’ll be hearing a lot more of, too: cloud-native application development, observability, SRE, AIOps, DevSecOps, and hyperscale.
1. Cloud-native application development
Without a doubt, the Velocity 2019 conference validated that to succeed in the digital economy, companies in every industry are revisiting the way they design, build, and use applications to take advantage of newer cloud technologies. Cloud-native application development–which leverages containers, microservices, and orchestration–accelerates release cycles and drives competitive advantage and user satisfaction.
2. Observability
Developers build observability into the code to externalize internal contexts for debugging and code optimization. Unfortunately, observability does not scale the way monitoring can, since developers must configure spans while writing the code itself. Monitoring complements observability because it provides a broad view of performance as measured externally, whether that is time-series metrics or call stack execution. You can learn more about observability vs monitoring in Amena’s excellent blog.
3. Site Reliability Engineer (aka SRE)
Site Reliability Engineering is still emerging as a practice despite its origins more than a decade ago. SREs ensure applications and services are reliable and define what reliable means in terms of service levels. For example, a login process can be available, but if performance is unacceptably slow, it is not meeting service levels. SREs were well-represented at Velocity–a testimony to the role’s growing importance.
4. AIOps
We’ve spoken about AIOps previously: Artificial Intelligence for IT Operations (AIOps) refers to the application of big data, machine learning, analytics, and automation to IT operations in order to make sense of large quantities of mostly structured, specialized, cross-domain IT data. These insights can be used to identify and resolve issues before they impact customers and employees, and to identify usage trends in applications to prioritize development efforts and boost competitive advantage.
5. DevSecOps
As the name implies, DevSecOps recognizes the importance of security in the software development life cycle (SDLC). The term has been around for a while, but at Velocity 2019, Sebastien Deleersnyder took it to the next level with hands-on training on how to use threat modeling to integrate security into the DevOps workflow, introduce threat modeling as code, and build a security culture in your organization.
6. Hyperscale
Massive scale. As the name implies, hyperscale infrastructures are designed for the high levels of performance required by big data and cloud computing. As the amount of data collected for performance analysis grows exponentially–18x according to DEJ–application performance monitoring tools must scale as well.
More from Velocity 2019
APM Consultant Jeremy Tupa and Senior IT Manager Marcelo Soares from Dell join Riverbed’s Jon Hodgson on stage
At Velocity 2019, Dell shared its performance journey in the featured presentation: How Dell manages application performance at scale (sponsored by Riverbed). APM Consultant Jeremy Tupa and Senior IT Manager Marcelo Soares from Dell detailed how a small team of performance engineers has developed a strategy and culture to manage the performance of thousands of legacy and cloud-native applications. We’ll post the video of their presentation as soon as it’s available.
SteelCentral AppResponse 11.7.0 is very exciting. It’s jam-packed with great new features, several of which I think you’ll be enthusiastic about. We call them the “big rocks.”
All IP traffic
Previously, AppResponse calculated only TCP metrics. But we know how important it is for you to see all the data, so we enhanced our engine. Now you can see all IP protocols in the metadata–both TCP and UDP traffic–without having to go to the packets.
View all IP Protocols in metrics – both TCP and UDP
This has led to some changes in how data is displayed. The availability of this finer-grained detail is indicated by the HD (high definition) tag in the Navigator. Also, we removed the Advanced section in the ASA module. Now you drill into the HD links for a faster experience with more details.
There are several new integrations, such as with ServiceNow, Syslogs, and Integration Links. Let’s break them down individually:
AppResponse and SteelCentral Packet Analyzer Plus now integrate with ServiceNow for consolidated management of IT trouble tickets, support requests, etc. Trouble tickets from AppResponse, Packet Analyzer Plus, Aternity, and AppInternals can now all be stored in one place. This consolidation helps IT Operations prioritize and route tasks, improving IT productivity. ServiceNow customers can find the SteelCentral apps in the ServiceNow App Store by searching on Riverbed.
AppResponse and Packet Analyzer Plus can also send Syslog alerts to remote syslog recipients, like Splunk.
There are two types of integration links: Riverbed SteelCentral and External links. Riverbed SteelCentral links let you integrate with other SteelCentral products, such as Aternity, AppInternals and NetIM.
External Links allow you to integrate with third-party tools, such as Geotool, Traceroute, ARIN WHOIS Search, Malware Domain List, etc. These links are contextual and only show up when the data is relevant to the link. You can add your own external links.
Use external integration links to gain more information about the network.
We also did a lot of work around policies and notifications. The most notable addition is Built-in Policies, a first step toward self-alerting. Built-in Policies bubble up interesting events happening on your network. They are pre-configured out of the box to highlight potential problems in any customer environment while avoiding false positives. They produce live notifications, which appear as a red arrow in the lower right corner of any page; the arrow takes you to the policy insight.
Below is the list of Built-in Policies. All policies are enabled by default, except network packet loss. Some of the other policies, like Web Server Errors, Poor VOIP and Video Call Quality, and Database Availability depend on the license you purchased.
Built-in Policies are the first step toward self-alerting
Data encryption at rest
Data encryption at rest is an exciting and necessary feature for government users who need to conform to certain data security standards, as well as for enterprise users who need to comply with GDPR, PCI-DSS, or HIPAA regulations. Encryption at rest allows AppResponse users to encrypt data while it is on the SCAN-SU-10, the 120TB Storage Unit for the AppResponse 8180, preventing unauthorized access to the data while it is on disk. Encryption on SCAN-SU-10 Storage Units uses transparent disk self-encryption to encrypt the data without performance loss.
There are “small rocks,” too. Small rocks are features such as business-hour policies, alert “all clear” notifications, and the ability to assign tags to IP addresses and IP conversations, just to name a few.
It’s no secret that application environments are not getting any simpler.
From the complexity of the mix of applications to manage (there is indeed an app for everything!) to the complexity of building and deploying those apps (microservices, containers and the burgeoning DevOps/cloud-native ecosystem) to the complexity of the delivery infrastructure (mobile, virtual, cloud, SaaS), IT is faced with more apps, systems and platforms than ever to keep up and running in peak condition.
With greater complexity driving the need for greater visibility, the processes, tools and methods of monitoring are also growing ever more diverse. One of the ways in which this is happening is through the DevOpsification of monitoring a.k.a. observability.
What is observability?
Observability data types
Traditionally, monitoring has been the domain of ops—sort of an outside looking in view for someone who doesn’t actually build the code. Observability, as a culture and practice, represents a shift left for monitoring as developers take on the task of building applications able to externalize their internal state. A number of open-source technologies have made this inside-out view possible, including:
fluentd—unified log data collection
Zipkin, Jaeger, OpenTracing/OpenTelemetry—distributed tracing systems and standards supporting polyglot languages like Go, PHP, Python, C/C++
Semantic logging—structured logs based on strongly typed events
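As a hedged illustration of the “strongly typed events” idea (the event name and fields below are hypothetical, not tied to any of the libraries above), structured logging emits machine-parseable records instead of free-form strings:

```python
import json
import logging
from dataclasses import dataclass, asdict

# A strongly typed event: the schema is fixed in code, so downstream
# consumers (search and analytics backends) can index fields reliably.
@dataclass
class CheckoutCompleted:
    user_id: str
    cart_total_cents: int
    duration_ms: float

def format_event(event):
    """Serialize a typed event as a single JSON log line."""
    return json.dumps({"event": type(event).__name__, **asdict(event)})

logging.basicConfig(level=logging.INFO, format="%(message)s")
line = format_event(CheckoutCompleted(user_id="u42",
                                      cart_total_cents=1999,
                                      duration_ms=87.5))
logging.getLogger("shop").info(line)
```

Because every `CheckoutCompleted` record carries the same typed fields, a query like “average `duration_ms` by day” needs no string parsing.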
Monitoring vs observability: a false dichotomy
Monitoring and observability have different purposes. Monitoring aims to provide a broad view of anything that can be generically measured based on what you can specify in external configuration, whether it is time-series metrics or call stack execution. However, it lacks information as to the developer’s intent. Observability is built into the code by that very developer to provide insight into the system based on the intended behavior. Systems that incorporate observability can be effectively debugged with context rather than conjecture.
However, observability does not scale the way monitoring can. Spans are configured while writing code, and for it to be failsafe would require the developer to mark up every span of interest—and a developer is only human! Monitoring, on the other hand, will simply capture every single method call and parameter (or whatever you specify based on runtime characteristics).
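A minimal sketch of the developer-configured span idea (a toy tracer, not the OpenTracing/OpenTelemetry API):

```python
import time
from contextlib import contextmanager

SPANS = []  # collected (name, duration_seconds) pairs

@contextmanager
def span(name):
    """Record how long the wrapped block took. The developer must mark up
    each span of interest by hand, which is why this instrumentation does
    not scale automatically the way external monitoring does."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, time.perf_counter() - start))

with span("load_user"):
    time.sleep(0.01)        # stand-in for a database call
with span("render_page"):
    time.sleep(0.005)       # stand-in for template rendering
```

Any code path the developer forgets to wrap in a `span(...)` block simply produces no data, whereas a monitoring agent capturing every method call would still see it.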
Cloud native adoption
Observability, along with cloud-native development, has become a cornerstone of modern application and systems design, and a wide variety of open-source monitoring, search, analytics, and visualization platforms have emerged to consume these new data sources. You’ve probably heard of Prometheus, Elastic, Grafana, etc. A quick glance at the CNCF landscape illustrates the sheer number of options. The free availability of these options has resulted in widespread adoption among developers and DevOps teams.
Concurrently, there has been a staggering increase in the amount of data generated by modern apps. Last year’s DEJ study noted an 18x increase in data collected, on average, from monitoring components and dependencies in container-based vs. more traditional monolith environments. And not only has there been an explosion in the volume and velocity of data, but the variety of data types and sources continues to grow as well.
This has created a fresh set of problems.
More data, more problems
While having all this data empowers developers to shift left and fix bugs early in the app lifecycle, relying on multiple commercial and open-source tools is impractical for production environments where there is greater urgency to respond to user issues fast and fix them before business is impacted. Manually correlating data across disparate tools and silos to isolate and triage a user issue is slow and near impossible unless you already know what you are looking for. And while many of the free and low-cost options may appear attractive at first, it quickly becomes apparent they cannot scale in enterprise environments where there may be as many as tens of thousands of containerized components to track and monitor, and where the number of executed transactions can number in the billions every day.
As a result, forward-looking organizations recognize a strong need for:
Unified visibility across multiple IT functions, including app, network and cloud infrastructure
Big data technology capable of capturing, streaming and storing the volume, velocity and variety of data without undue stress on the underlying system or storage infrastructure
AIOps approaches for analyzing and visualizing the resulting big data to extract meaningful, even predictive, information about the IT environment
This is where an enterprise-scale monitoring solution providing a single-pane-of-glass view can be immensely valuable.
Combining application performance monitoring (APM) with observability data, you can have the best of both worlds: insight into the intent of the code, as well as complete end-to-end traces and transaction flow mapping, unified views and drill-downs across multiple IT functions, all with rich metadata context.
If you’re interested in learning how to implement observability with open tracing, check out this online course on Distributed Tracing.
The recent announcement of the Riverbed SaaS Accelerator service gives us pause to consider where we have been, where we are today in a digital age that’s only just starting to rev its engine, and the opportunities we have together as you, our customers, shift away from old-style technologies toward modern cloud infrastructures and SaaS applications to drive your businesses.
Think about it: distributed enterprise architectures and budgets used to be all about centralized data centers and the applications housed there. Early WAN Optimization was about keeping bandwidth costs for MPLS networks manageable while making sure far-away branch workers could take advantage of apps that were managed and maintained in a central data center.
But enterprise IT has evolved dramatically since those times. In today’s digital enterprise, we have many more choices available to offset the bottomless pit of expense required to manage and maintain massive data center architectures and on-premises applications at scale. Today we have multi-cloud environments deployed to address different workloads. We’re also facing a massive influx of SaaS applications driving the same business functions that were previously served by traditional on-premises apps.
We’ve also evolved when it comes to how our workforces execute their jobs, and how, where and when each employee accesses the apps they use. It used to be more about sitting at a desk, working in front of a computer screen and going home. Today’s employees are more available and dynamic, accessing SaaS apps in the office, and then opening their laptop to work from home, a coffee shop, an airport, or a client site on the other side of the world. I myself work from all of those places and a ferry in a given work day. No matter where we are, because of the advances in technology over the last several years, we’re expected to be available and responsive, from any device, anywhere we happen to be. Today’s digital business never slows down.
The potential benefits of SaaS applications are undeniable. For starters, they’re easy to deploy and scale, and the business no longer assumes the financial burden of owning and maintaining the hosting infrastructure. However, there are serious challenges. One of the most significant of these challenges is around application performance.
42% of enterprises report that at least half of distributed or international workers suffer consistently poor experience of the SaaS apps they use to get their jobs done. (ESG Enterprise SaaS Survey, March 2019)
And when application performance suffers, the ability for an enterprise to stay ‘on its toes’—to be competitive, to be able to transact or quickly act on an opportunity, to build client relationships, to build pipeline—that ultimately suffers too.
So how does this tie back to the foundational principles that made WAN Optimization so critical 15 years ago? It’s absolutely connected, and not dissimilar; it has simply evolved. For those who understand the principles that made centralized applications perform over distance, there is an opportunity to apply them to the complexity we face today as we rationalize cloud workloads and SaaS application performance. For many, it ties directly to a company’s ability to get ahead.
Let’s look at the reasons why SaaS performance may suffer.
When exploring why SaaS performance may suffer, it helps to consider the various ways companies deliver SaaS apps to their employees—variability that can lead to wildly inconsistent user experiences.
Here are a few typical SaaS performance scenarios:
As you might expect, sometimes our users have direct-to-net access and are doing their work from a branch or headquarters located very close to a cloud point of presence (PoP). In those cases, most of the time, SaaS should run fine.
Other parts of the business, though, are more remote. What is the global nature of your enterprise? Where are your offices located on a global scale? Are there far-away branch offices in remote locations where latency will be higher? Higher latency causes SaaS performance slow-downs and a less productive user experience.
In addition, in some parts of the world bandwidth is not cheap, so the cloud traffic going through the available pipes slows down. How will available bandwidth be impacted when applications such as those in Office 365 move from an on-premises environment to the cloud? Even in areas where bandwidth is relatively inexpensive, a change to SaaS at scale can be cause for alarm, since pipes will need to be significantly beefed up to maintain application performance.
Next, there are still many enterprise companies these days that backhaul SaaS traffic through a data center because of the need to comply with a firm security posture—and while there may be a plan to evolve from this, it won’t happen overnight. Distance is distance and the speed of light doesn’t get any faster. Backhauling creates more distance and causes longer delays, and that greatly hinders performance.
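The physics of backhaul is easy to quantify. A rough sketch, using illustrative (assumed) distances and the rule of thumb that light in fiber travels at roughly two-thirds of c:

```python
# Estimate the extra round-trip time added by backhauling SaaS traffic
# through a distant data center. Distances below are assumptions for
# illustration, not measurements.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 the speed of light in a vacuum

def rtt_ms(distance_km):
    """Round-trip propagation delay over fiber, ignoring queuing and processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

direct = rtt_ms(500)                      # branch -> nearby cloud PoP
backhauled = rtt_ms(500) + rtt_ms(4000)   # branch -> HQ data center -> SaaS
extra = backhauled - direct               # delay added purely by the detour
```

Here the detour alone adds 40 ms of round-trip propagation delay per request, before any queuing, TLS handshakes, or application chattiness multiply it.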
And last but not least, as suggested earlier, now we have dynamic and highly mobile workforces logging on and accessing SaaS applications from so many different places and networks as they move through their workdays, many of those places out of IT control. This makes predicting performance much more challenging. And this particular scenario is only growing in scope and intensity.
Of course, many of us have a combination of these scenarios that impact SaaS performance—and therefore, workforce productivity.
So how do we get in front of this?
Well, we want you as IT leaders to be able to maintain that ease and scale that comes from SaaS, but we also want to make sure you’re in the driver’s seat. We want you to be able to control how your users are experiencing the apps that are taking such a significant role in advancing your digital business. So first, we want to arm you with visibility on a global scale to be able to understand user experience anywhere they might be, and we want to give you an easy way to proactively accelerate the performance of the SaaS applications they’re using so that they can always stay productive.
If we understand the scenarios above, it’s actually not hard to do. Riverbed has taken its founding principles from accelerating old-school data center applications and eliminating costly bandwidth and latency restrictions, and applied them to cloud workloads and SaaS applications. The result is the world’s first cloud-based SaaS Accelerator service, easy to spin up in minutes and deploy anywhere.
Combined with Riverbed end-user experience monitoring (EUEM) technology, and its proven end point offerings for branch offices and laptops, you have the ability to incorporate a whole SaaS Performance Management system into your SaaS strategy. Today’s Riverbed has leveraged years of learning and foundational concepts about how to deliver performance over distance and has made it very easy for our customers in the age of SaaS. Now you can continue to experience the ease and scalability of SaaS, without the loss of control and questionable performance that all too often comes with it.
And in today’s fast-paced, dynamic and competitive world where seconds and minutes count and can make the difference in millions of dollars, reputation, client confidence, and more, why wouldn’t everyone want to take advantage of that?
Why visit San Jose, CA in June? You might be thinking great weather, a side trip to the beach, or … Velocity. We’re proud to be back sponsoring Velocity 2019 and co-presenting with Dell on 25 Billion Transactions and Counting: How Dell Manages Application Performance at Scale. As you know, modern systems pose a number of thorny challenges. They’re inherently complex, span multiple technologies, groups, and sometimes even different organizations altogether. Below I tackle 5 questions that DevOps and SRE teams face as you struggle to manage performance across the full cloud-native application stack.
Question 1: Are microservices the new DevOps standard?
93% of organizations use or are planning to use microservices. Yes, microservices and containers are quickly becoming the new normal for application development along with cloud-native platforms such as Kubernetes, OpenShift, Pivotal Cloud Foundry, Azure, AWS, and Google. But the era of open source and collaboration creates an explosion of options which can overwhelm DevOps teams. Riverbed’s application performance monitoring (APM) solution simplifies cloud-native monitoring with unified visibility across user, trace, log, wire, and systems data—all in one.
Question 2: How important is performance, really?
1 in 3 say that they’ll walk away from a brand they love after just one bad experience. In other words: Experience is EVERYTHING. But, it’s increasingly difficult to ensure the performance of modern applications. They’re frequently updated, span multiple technologies, have components that are spun up and down, and deployed across multiple geographies. It’s also tricky to measure what your customers and employees are actually experiencing—how they perceive performance—unless you measure the end user experience. With Riverbed, you can trace every user transaction—all the way from the user’s device through the backend. You can then understand crucial transaction flows and dependencies and ensure exceptional user experience.
Question 3: How well do traditional APM tools meet the needs of modern applications?
54% of IT professionals said that managing containers and microservices with legacy tools is simply not effective. Legacy tools were designed to manage static monolithic applications. But, monitoring dynamic cloud-native apps with legacy tools creates a lot of blind spots if you don’t monitor at high frequencies. Even if there are triggers in place to collect more data when an issue is detected, you can miss the historical context necessary for problem resolution. This lack of data is exacerbated when diagnosing intermittent issues in modern, complex systems.
Question 4: How do you scale to analyze big data generated by modern apps?
18x more data is generated by cloud-native apps than by monolithic apps. That’s a lot of data. Because the rate of change and the volume and velocity of data generated by containers and microservices are exponentially greater, the new generation of APM tools needs to support big data. With big data, you can effectively optimize performance—manage service levels, troubleshoot complex issues, and understand usage trends to better align development efforts with business priorities. Riverbed’s highly scalable architecture can collect and store full details for billions of transactions per day, so you always have all the evidence on hand to reconstruct and diagnose any issue.
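The storage implications of that scale are easy to sanity-check with arithmetic. A minimal back-of-envelope sketch, using the 25-billion-transactions/day figure from the Dell session and a purely illustrative average of 1 KiB of stored detail per transaction (the per-record size is my assumption, not a vendor figure):

```python
# Rough daily storage for transaction-level APM data.
TRANSACTIONS_PER_DAY = 25_000_000_000  # the Dell figure quoted above
AVG_RECORD_BYTES = 1_024               # assumed size of one stored transaction record

bytes_per_day = TRANSACTIONS_PER_DAY * AVG_RECORD_BYTES
terabytes_per_day = bytes_per_day / 1024**4  # convert bytes to TiB

print(f"~{terabytes_per_day:.1f} TiB of transaction detail per day")
```

Even at a modest 1 KiB per record, this works out to tens of tebibytes per day, which is why a scalable big-data backend is a prerequisite rather than a nice-to-have.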
One final question: are you interested in learning more at Velocity?
Join us in San Jose at Velocity 2019, June 10–13, at Booth 406, and for the Dell session on Wednesday, June 12 at 1:25pm.
We are excited to announce volume 2 of our Riverbed APM User Conference for the SteelCentral AppInternals and Aternity community, to be held on August 29–30 in the Washington, DC area. Save the date!
If you are interested in connecting peer-to-peer with other experts, sharing knowledge and expanding your performance engineering skills, you won’t want to miss this.
This will not be your typical vendor user conference. We’re changing up the format based on what we’ve seen you want: more interaction with each other and less marketing from us, more technical hands-on education, and less PowerPoint. This will be a PEER-DRIVEN event where we encourage YOU, our community of experts, to drive the agenda and the content. We’ll be examining topics ranging from monitoring containers in OpenShift or Pivotal, to key metrics in AWS or Azure public cloud, to best practices in monitoring user experience for SaaS-based applications and Office 365.
As this is a conference for practitioners, by practitioners, we encourage you to contribute “hands-on” and “how-to” content. The call for presentations (CFP) is now open for you to submit your proposals. There will be three types of sessions:
Ignite talks: This will be our kickoff event to help break the ice! A series of short (and fun) five-minute presentations on any topic of your choice. A perfect opportunity for you to share an anecdote or story, or even practice your performance-related standup routine.
User technical sessions: These will be 30–45 minute presentations led by you, our community experts.
Open spaces: Anyone at the event can pitch an impromptu topic and have interested parties join the discussion in our open spaces. This will give you the opportunity to form a quick focus group on any topic that’s on your mind.
In addition to the user conference on August 29-30, we are also looking into facilitating an optional day of APM and EUEM training at the same venue on August 28. We’ll keep you posted as more details on classes become available.
Seating is limited and registration is on a first-come, first-served basis, so please be on the lookout for the formal invite and registration notice. You’ll want to reserve your spot early, as we’ll likely run out of seats fast.
Our professional environment is massively different from the one our parents worked in. I still remember my parents’ busy work schedules. They stayed late at the office and often drove in on weekends to ensure that an important report was completed.
When was the last time you had to drive into the office on a Saturday to complete a report? Are we working less nowadays? Are we lazier than our parents’ generation, or is our productivity dropping? Clearly not. The simple fact is that we work very differently than we used to. Instead of heading to the office on Saturday, we flip on the laptop in the living room and log on. Many of us work remotely and across highly distributed organizations. We are connecting from more places, often using a variety of device types, and to a wider array of application types. Furthermore, the rapid move to cloud and SaaS has transformed companies’ ability to create new digital revenue streams and improve employee productivity. No need to spend millions deploying and maintaining Microsoft Exchange servers on premises when you can get email off the shelf as part of the Office 365 SaaS environment. SaaS application use within the enterprise is growing dramatically.
Losing control of app performance
So two things are changing:
Employees’ increasing use of mobile technology, accessing dozens of SaaS apps to do their work
SaaS-based apps are outside the control of the IT team
On the surface, this is great for your business, but handing control of app performance to the SaaS vendor has created problems for your employees.
How big of a problem is it? At the same time that employees use an average of eight SaaS applications at work, 73% of organizations report poor SaaS performance on a monthly basis (ESG, April 2019).
This issue is significantly worse when the employee is mobile or located remotely, or when traffic is backhauled due to corporate security policy. In addition, you no longer have visibility into application performance and lack the data to hold your SaaS vendors accountable. Your employees are constantly complaining, but you are not in a position to (1) verify whether performance is actually poor or (2) do anything to improve it.
Instead, you are wasting time in long support calls with your SaaS provider, unable to prove the problem as you have no data.
The bottom line is that our infrastructure needs to keep up with the demands of a more flexible workplace. But how do we move our network infrastructure forward to meet them?
Time to gain control of SaaS apps and satisfy your employees
Today, Riverbed announced several key enhancements that will help our customers do just that. We introduced the first and only complete SaaS Performance Management solution. Organizations can monitor performance of SaaS-based applications like O365, Salesforce, ServiceNow, and Box, and rapidly accelerate application performance when network issues are degrading user experience. When network latency is the cause of the performance problem, your company can leverage Riverbed’s SaaS Accelerator to dramatically improve performance, especially in distributed workforce scenarios. With SaaS Accelerator, your organization does not have to compromise its security posture: it can continue to backhaul SaaS traffic through the corporate network while still delivering great SaaS app performance to all employees, wherever they are located.
We also announced enhancements to our SD-WAN solution, SteelConnect, which provides a single, unified orchestration and connectivity fabric across the entire distributed enterprise network with embedded security, optimization, and visibility. This release boosts agility, security, and operational efficiency with advances in multi-cloud network automation, enhanced service-chaining for on-premises and cloud-based security services, and increased flexibility to deploy and manage SD-WAN in complex global environments.
Also announced were advancements to our Digital Experience Management solution that enables companies to improve digital service continuity and reduce business disruptions. It features a new “active” page that gives customers a heads-up display of important performance information upon login. It also introduces end user remediation that helps support organizations resolve common user device issues. These scripts can be deployed by support agents to improve resolution times or leveraged on the end user’s device to proactively resolve issues before they are escalated to support. The result is better user satisfaction, lower IT costs, and improved stability.
In my previous post, I discussed one of my favorite topics: The Heisenberg Principle of Security vs. Privacy. There is another law of physics I typically use that has an analogue in security: the speed of light. The closer you get to lightspeed, the more energy you need to go faster; consequently, no object with mass can actually reach it. Similarly, spending money on security improves your security, but as you get more secure, you will spend increasingly more money for decreasing amounts of additional security. The equivalent of mass here, by the way, is the information: perfect security cannot be achieved unless you have no information to protect.
To understand why there are diminishing returns in security spending, consider the various side-channel attacks that have recently been in the news, such as Spectre, Meltdown, and RowHammer. Most of these attacks rely on extensive repetitive actions to slowly leak information. RowHammer is a great example—by repeatedly hitting rows in memory, you might be able to flip a bit in an adjacent row. The numbers on this are staggering: 140,000 row activations give the attacker a 1 in 1,700 chance of flipping a bit in an adjacent row on DDR3 memory. Yes, it can be done—but how much money and effort should we expend to protect ourselves from that remote possibility?
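The RowHammer odds quoted above imply a staggering expected effort. A quick calculation from the two figures in the text (140,000 activations per attempt, a 1-in-1,700 success chance per attempt):

```python
# Expected effort for one RowHammer bit flip, from the figures quoted above.
ACTIVATIONS_PER_ATTEMPT = 140_000
P_FLIP_PER_ATTEMPT = 1 / 1_700

# Attempts until the first success follow a geometric distribution,
# so the expected number of attempts is 1 / p.
expected_attempts = 1 / P_FLIP_PER_ATTEMPT
expected_activations = expected_attempts * ACTIVATIONS_PER_ATTEMPT

print(f"~{expected_activations:,.0f} row activations per expected bit flip")
```

Roughly 238 million activations for a single expected flip, and that flip still has to land somewhere useful to an attacker.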
What are the chances this bitflip is going to be “productive” from a security perspective? After all, one needs access to a memory row adjacent to a row of security interest, likely something in huge swaths of kernel memory. Similarly, the Meltdown/Spectre class of vulnerabilities takes a tremendous effort to usefully leak memory. Examples show leaks of 1 KB per second are theoretically possible, but computers have quite a bit of memory, and mitigating the vulnerability means a 30% performance hit on some servers. Put in other terms, where you previously spent $10,000 on data center costs, you now will need to spend $13,000. How much security is “enough”?
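To put those numbers in perspective, here is a back-of-envelope sketch: how long a 1 KB/s leak takes against an assumed 16 GiB of server memory (the memory size is my illustrative assumption), and what the quoted 30% overhead does to the $10,000 data center bill:

```python
# 1) Time to exfiltrate memory at the ~1 KB/s rate quoted for Spectre/Meltdown PoCs.
LEAK_RATE_BYTES_PER_SEC = 1_024
RAM_BYTES = 16 * 1024**3  # assumed: a modest 16 GiB server, for illustration only

days_to_leak_all = RAM_BYTES / LEAK_RATE_BYTES_PER_SEC / 86_400  # 86,400 s per day

# 2) Cost of the mitigation: the article's 30% performance hit on a $10,000 spend.
original_spend = 10_000
new_spend = original_spend * 1.30

print(f"~{days_to_leak_all:.0f} days to leak 16 GiB at 1 KiB/s")
print(f"${new_spend:,.0f} to deliver what ${original_spend:,} used to")
```

Around six months of uninterrupted leaking to dump one modest server's RAM, versus a permanent 30% cost increase: this is exactly the diminishing-returns trade-off the post is describing.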
Maximize your security spending
The trick of security spending, naturally, is to get maximum value: the “most” security for the dollars spent. Moreover, since security depends on the value of the data (i.e., what you are trying to protect), this is never one-size-fits-all. Common practices should certainly be followed: access control, separation of privileges, encryption of data at rest. But even for these, there are implementation decisions that are very organization-specific. To get started, I’m sharing three guiding thoughts:
Defend in depth: firewalls and network access control are so affordable that you can slice and dice access down to the minimal set needed. And please remember to do egress filtering on subnets, DNS, NTP, etc.
Make better use of your topology; it is a natural defense. By “topology,” I mean your understanding of your attack surface. Complex public-facing services are natural entry points for bad guys, so spend time and money defending those. However, seal off remote corners of your network from unneeded access (in and out). This does not need to cost a lot.
Seek shared tooling: the NOC and NSOC typically look at the same data feeds but do different things with the data. Tooling that can be shared by both teams saves money in procurement and lifecycle management, and, often overlooked, in time lost to communication breakdowns between teams.
Ultimately, the “best security for the money” comes down to deciding what you need to protect and identifying the lowest-hanging fruit. Pluck those first and move up the tree from there. You don’t need to go the speed of light; you just have to move faster than the bad guys.