As more businesses require 24/7 uptime of their networks, they can’t afford to “put all their eggs in one basket.” Even MPLS, with its vaunted “five nines” SLA, has struggled with last-mile availability. SD-WAN offers a way forward that significantly improves last-mile uptime without appreciably increasing costs.

Early Attempts To Solve The Problem

Initial efforts to solve the problems and limitations of the last mile had limited success. To improve overall site availability, network managers would pair an MPLS connection with a backup Internet connection, effectively wasting the capacity of the Internet backup. A failover also meant all current sessions would be lost, and the failover process and timeframe were typically less than ideal.

Another early attempt was link-bonding, which aggregates multiple last-mile transport services. This improved last-mile bandwidth and redundancy but did nothing for middle-mile bandwidth. Functioning at the link layer, link-bonding is not itself software-defined networking, but the concept of combining multiple transports paved the way for SD-WAN, which has since proven itself a solution for today’s digital transformation.

How The Problem is Solved Today

Building on link-bonding’s concept of combining multiple transports and transport types, SD-WAN improves on it by moving the functionality up the stack. SD-WAN aggregates last-mile services and presents them to the application as a single pipe. The SD-WAN is responsible for compensating for differences in line quality, prioritizing access to the services, and addressing the other issues that arise when aggregating different types of lines.

With Cato, we optimize the last mile using several techniques, such as policy-based routing, hybrid WAN support, active/active links, packet loss mitigation, and QoS (upstream and downstream). Cato optimizes traffic not only on the last mile but also on the middle mile, providing end-to-end optimization that maximizes throughput across the entire path. High availability, high bandwidth, and strong performance are achieved by enabling customers to prioritize traffic by application type and link quality, and to dynamically assign the most appropriate link to each application.
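To make the link-handling idea concrete, below is a minimal sketch of application-aware link selection. The metrics, thresholds, and weights are invented for illustration; this is not Cato’s algorithm, just the general technique of scoring last-mile links per application.

```python
# A minimal sketch of application-aware link selection. Link metrics, policy
# thresholds, and scoring weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured round-trip latency
    loss_pct: float     # measured packet loss
    free_mbps: float    # unused capacity

# Hypothetical per-application policy: VoIP is latency/loss sensitive,
# backup traffic mostly wants spare bandwidth.
APP_POLICY = {
    "voip":   {"max_latency_ms": 150, "max_loss_pct": 1.0, "weight_bw": 0.01},
    "backup": {"max_latency_ms": 500, "max_loss_pct": 5.0, "weight_bw": 1.0},
}

def pick_link(app, links):
    policy = APP_POLICY[app]
    eligible = [l for l in links
                if l.latency_ms <= policy["max_latency_ms"]
                and l.loss_pct <= policy["max_loss_pct"]]
    candidates = eligible or links  # degrade gracefully if nothing qualifies
    # Lower score is better: penalize latency and loss, reward spare bandwidth.
    return min(candidates,
               key=lambda l: l.latency_ms + 10 * l.loss_pct
                             - policy["weight_bw"] * l.free_mbps)

links = [Link("fiber", 12, 0.1, 80), Link("cable", 25, 0.5, 200), Link("lte", 60, 2.0, 30)]
print(pick_link("voip", links).name)    # fiber (best latency/loss)
print(pick_link("backup", links).name)  # cable (most spare capacity)
```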

The Cato Socket is a zero-touch SD-WAN device deployed at physical locations. It uses multiple Internet links in an active/active configuration to maximize capacity, supports a 4G/LTE link for failover, and applies the appropriate traffic optimization and packet-loss elimination algorithms.

Willem-Jan Herckenrath, Manager ICT for Alewijnse, describes how Cato Cloud addressed his company’s network requirements with a single platform: “We successfully replaced our MPLS last-mile links with Internet links while maintaining the quality of our high definition video conferencing system and our Citrix platform for 2D and 3D CAD across the company.”

SD-WAN Leads The Way

The features and capabilities of Cato Cloud empower organizations to break free from the last-mile constraints of MPLS and Internet-based connectivity, opening up possibilities for improved availability, agility, security, and visibility. Bandwidth-hungry applications and migrations to the cloud have created a WAN transformation revolution, with SD-WAN leading the way.

The post How SD-WAN Overcomes Last Mile Constraints appeared first on .

Retailers, financial services firms, and other kinds of companies want to become more agile in their branch strategies: be able to light up, move, and shut down branches quickly and easily. One sticking point has always been the branch network stack: deploying, configuring, managing, and retrieving the router, firewall, WAN optimizer, etc., as branches come and go. And everyone struggles with deploying new functionality at all their branches quickly: multi-year phased deployments are not unusual in larger networks.

Network as a Service (NaaS) has arisen as a solution: use a router in the branch to connect to a provider point of presence, usually over the Internet, with the rest of the WAN’s functionality delivered there.

In-net SD-WAN is an extension of the NaaS idea: SD-WAN—centralized, policy-based management of the WAN delivering the key functions of WAN virtualization and application-aware security, routing, and prioritization—delivered in the provider’s cloud across a curated middle mile.

In-net SD-WAN allows maximum service delivery with minimum customer premises equipment (CPE) because most functionality is delivered in the service provider cloud, anywhere from edge to core. We’ve discussed the benefits of this kind of simplification to the branch stack. It offers a host of other benefits as well, based on the ability to dedicate resources to SD-WAN work as needed and to perform that work wherever it is most effective and economical. Some jobs are best handled in carrier points of presence (their network edge), such as packet replication or dropping, or traffic compression. Others may be best executed in public clouds or the provider’s core, such as traffic and security analytics and intelligent route management.
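As an illustration of that edge-versus-core split (the mapping below is hypothetical, not a published provider design), the placement decision can be modeled as a simple lookup from function to the tier where it runs most effectively:

```python
# Hypothetical placement map for in-net SD-WAN functions: per-packet,
# latency-sensitive work stays at the provider edge (PoP), while heavier
# analysis runs deeper in the provider core or a public cloud.
PLACEMENT = {
    "packet_replication":  "edge_pop",
    "packet_drop":         "edge_pop",
    "traffic_compression": "edge_pop",
    "traffic_analytics":   "provider_core",
    "security_analytics":  "provider_core",
    "route_management":    "provider_core",
}

def place(function: str) -> str:
    """Return the tier where a given network function would run in this sketch."""
    return PLACEMENT.get(function, "edge_pop")  # default: keep it near the customer

for fn in ("traffic_compression", "security_analytics"):
    print(f"{fn} -> {place(fn)}")
```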

Cloud Stack Benefits the Enterprise: Freedom and Agility

People want a lot out of their SD-WAN solution: routing, firewalling, and WAN optimization, for example. (Please see figure 1.)

Figure 1: Many Roles for SD-WAN

Enterprises working with in-net SD-WAN are freer to use resource-intensive functions without bumping into the limits of the hardware at each site. They can try new functions more frequently and deploy them more broadly without needing additional or upgraded hardware. This allows a much more exact fit between services needed and services used, since no up-front investment is required simply to test an added function.

Enterprises can also deploy more rapidly. After trying new functions at select sites and deciding to proceed with broader deployment, IT can snap-deploy them to the rest. When lighting up a new site, all standard services—as well as any needed uniquely at that site—can come online immediately, anywhere.

Cloud Stack Benefits: Security and Evolution

The provider, working with a software-defined service cloud, can spin up new service offerings in a fraction of the time required when functions depend on specialized hardware. The rapid evolution of services, as well as the addition of new ones, makes it easier for an enterprise to keep current and to get the benefits of great new ideas.

And using elastic cloud resources for WAN security functions decreases the load on networks and on data center security appliances. Packets dropped in the provider cloud for security reasons don’t consume branch or data center link capacity, don’t consume firewall capacity, and don’t threaten enterprise resources. This reduces risk for the enterprise overall.

Getting to Zero

A true zero-footprint stack is not possible, of course. Functions like link load balancing and encryption have to happen on premises, so there always has to be some CPE (however generic a white box it may be) on site. But the less that box has to do, and the more the heavy lifting can be handled elsewhere, the more the enterprise can take advantage of all these benefits of in-net SD-WAN.

The post Pursuing the Zero Stack Solution: Migrating to a Branch Stack in the Cloud appeared first on .

Like many other telecommunications companies that provide networking services, the Canadian national telco Telus has ambitious goals for network functions virtualization (NFV) and digital transformation. However, at the Digital World Transformation 2018 event last year, Telus CTO Ibrahim Gedeon voiced his opinion that NFV had yet to live up to the original expectations and that exorbitant software licensing costs are undermining the NFV business case.

NFV was supposed to revolutionize the telecom business, allowing operators to separate hardware from software and become more efficient companies. What Telus has learned, according to Gedeon, is that the anticipated cost savings of NFV aren’t there.

He says the high software licensing costs and maintenance charges eat into the expected cost savings. What’s more, NFV has increased the complexity of the Telus network, and the company had to increase the size of its operations team to support both the virtualized environment and the legacy appliances. Complexity can stem from having to integrate disparate technologies within the new NFV framework, much as in the old model.

Bryce Mitchell, Director of the NFV, Cloud & National Innovation Labs at Telus, echoed Gedeon’s comments at Light Reading’s NFV and carrier SDN conference. In a speech, Mitchell pointed out that network service providers are spending too much time and effort testing, validating, and deploying third-party VNFs, and that none of those tasks are really automatable. He also cited problems integrating the process of spinning up VNFs with the telco’s back-end billing and provisioning systems and with the company’s OSS management systems. Mitchell believes the full value of NFV won’t be achieved until these services are developed in an API-driven, cloud-native fashion.

The VNF approach is fundamentally flawed

Telus’s experiences aren’t unique. Numerous implementers and industry experts are realizing the limitations of NFV. (For a complete list of NFV problems, see here.) The approach is fundamentally flawed because NFV simply repackages the same paradigm it was trying to displace: we’re still thinking about managing complex services as appliances, albeit software rather than hardware appliances.

Thus, despite the industry hype, NFV will largely look like the managed or hosted firewalls and other devices of the past, with some incremental benefits from using virtual instead of physical appliances. Customers will end up paying for all the appliance licenses they use, and they will still need to size their environment so they don’t over- or under-budget for their planned traffic growth.

From a managed service perspective, offering to support every single VNF vendor’s proprietary management is an operational nightmare and a costly endeavor. One thing that’s lacking is an effective orchestration framework that manages the deployment of the network functions. As the Telus people acknowledged, more, not fewer, people are needed to simultaneously support the complexity of virtualization along with the legacy technologies.

Ultimately, if NFV doesn’t allow network service providers to reduce their infrastructure, management, and licensing costs, customers will not improve their total cost of ownership (TCO), and adoption will be slow.

Bust the paradigm with cloudification of the functions

How do we bust the appliance paradigm? By hosting the services that have traditionally been delivered as appliances as Network Cloud Functions (NCFs), forming a cloud-native software stack.

Unlike VNFs, NCFs are natively built for cloud delivery. These may be any network function, such as SD-WAN, firewalls, IPS/IDS, secure web gateways and routers. Instead of separate “black box” VNF appliances, the functions are converged into a multi-tenant cloud-based software stack. Rather than having separate VNFs for each customer, the NCFs support multiple customers; for example, one firewall for all customers on the cloud, rather than a separate firewall for each customer. However, NCFs are configurable for each customer, either on a self-service basis or as a managed service, through a single cloud-based console.  

The Network Cloud Functions approach is much more manageable than the Network Functions Virtualization approach. When a function like a firewall needs to be updated, it is updated once for the entire network and it’s done. When a firewall is deployed as a separate VNF on numerous customers’ networks, each one needs to be updated individually. This greatly reduces the operational challenges of NFV that are proving to bog down the network service providers.
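To see why the shared engine matters operationally, here is a toy model, with hypothetical classes rather than any vendor’s actual code: a single multi-tenant firewall engine holds one inspection codebase plus a per-tenant policy table, so upgrading the engine once upgrades every customer.

```python
# Toy model of a multi-tenant Network Cloud Function (NCF) firewall.
# One shared engine serves all tenants; only the policy table is per-tenant.
# Illustrative only; not an actual provider implementation.
class MultiTenantFirewall:
    ENGINE_VERSION = "1.0"      # upgrading this once upgrades every tenant

    def __init__(self):
        self.policies = {}      # tenant_id -> list of (action, dst_port)

    def set_policy(self, tenant_id, rules):
        self.policies[tenant_id] = rules

    def allow(self, tenant_id, dst_port):
        for action, port in self.policies.get(tenant_id, []):
            if port == dst_port:
                return action == "allow"
        return False            # default deny

fw = MultiTenantFirewall()
fw.set_policy("tenant_a", [("allow", 443), ("deny", 23)])
fw.set_policy("tenant_b", [("allow", 443)])
print(fw.allow("tenant_a", 443))   # True
print(fw.allow("tenant_a", 23))    # False
# Contrast with per-customer VNFs: a separate firewall VM per tenant would
# need separate software upgrades and separately sized capacity.
```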

NCFs promise simplification, speed, and cost reduction. In some cases, these benefits come at the cost of reduced vendor choice. It’s for the enterprise to decide whether the benefits of NCFs outweigh the cost, complexity, and skills needed to sustain NFV-based or on-premises networking and security infrastructure.

The post NFV is Out of Sync with the Cloud-Native Movement. Here’s a Solution appeared first on .

What’s it like to transition to SD-WAN? Ask Nick Dell. The IT manager at a leading automotive components manufacturer recently shared his experience transitioning his company from MPLS to Cato SD-WAN. During the webinar, we spoke about the reasons behind the decision, the differences between carrier-managed SD-WAN services and cloud-based SD-WAN, and insights he gained from his experience.

Dell’s company has been in business for over 60 years and employs 2,000 people located across nine locations. Manufacturing plants needed non-stop network connectivity to ensure delivery to Ford, Toyota, GM, FCA, Tesla, and Volkswagen. Critical applications included cloud ERP and VoIP.

Before moving to SD-WAN, the company used an MPLS provider that managed everything. The carrier provided a comprehensive solution to address the critical uptime requirements by having three cloud firewalls at each datacenter, and an LTE wireless backup at each location. When they signed the agreement with the MPLS provider, the solution seemed to be exactly what they needed to support their applications and uptime requirements. However, they quickly discovered problems with the MPLS solution that were impacting the business.

The Catalyst to Make a Change

Dell noticed a few challenges with the MPLS service:

#1 Bandwidth — Usage would peak at certain times, and the provider’s QoS configuration didn’t work properly. Nick wanted to add bandwidth, but for some sites, the MPLS provider offered only limited or no fiber connections. For example, the MPLS provider would say fiber wasn’t available at a certain site, even though the local LEC delivered the T1s over fiber.

#2 Internet Configuration Failures — The company also wanted to give OEM partners access to the cloud ERP system, but the MPLS provider was unable to successfully configure Internet-based VPNs for the partners. Internet failover also did not work as promised: when sites failed, not all components would switch over properly, creating failures in application delivery.

#3 Authentication Failures — The user authentication functionality provided by the MPLS provider was supposed to help when users moved their laptops or other endpoints from wired to wireless connections. However, the authentication process often failed, leaving users without Internet access. Only after two years did the provider propose a solution: software that would cost $5,000 and require installing agents on all the laptops.

These issues manifested themselves in day-to-day operations. Someone sending an email with a large attachment would cause the ERP system to be slow to respond, which in turn caused delays in getting shipments out.

Dell and other leaders knew it was time for a change. They needed high-availability Internet with more bandwidth that worked as designed. Moreover, they wanted a provider that would act as a partner and could deliver 100% Internet uptime, fiber to all locations, a lower-cost solution, and all-in-one security.

The SD-WAN Options on the Table

Dell investigated three SD-WAN scenarios to replace the MPLS network.

  • Carrier Managed SD-WAN
  • Appliance-based SD-WAN
  • Cloud-based SD-WAN

    Moving to SD-WAN with the same carrier they were using for MPLS seemed like an easy move, but Dell was not inclined to keep dealing with the same issues of poor service and a “ticket-taker” attitude rather than problem-solving. The carrier also couldn’t guarantee a 4-hour replacement window for the SD-WAN hardware.

    The appliance-based SD-WAN solution would free them from the carrier issues, but ownership and management of the solution would fall to Dell and his team. The upfront costs were high, and security was not built into the solution.

    Dell also looked into other cloud-based SD-WAN providers, but because of the company’s size, the provider wanted to hand them off to an MSP for which SD-WAN is not a core business. The solution didn’t provide full security, so they would need to buy additional security appliances. The provider also could not guarantee a 4-hour response time to replace failed hardware.

Why Cato

With the Cato Cloud solution, Dell can choose any ISP available at each location and now has fiber at all locations with 5-20x more bandwidth than before. This has given them more redundancy to the Internet and High Availability (HA) – with both lines and appliances – at every location. The bandwidth constraints are gone, and QoS actually works. When there is downtime, the failover process works as expected.

Describing the deployment experience as fast and easy, Dell only needed a 30-minute lunch break to cut over one location that previously was one of the most troublesome with outages and backup issues.

One of the driving factors that convinced Dell to go with Cato was the support, which he describes as transparent and quick to resolve issues. “They really listen to us, they really want to solve our problems,” says Dell. He was also pleasantly surprised that Cato was the only vendor of all the solutions they investigated that didn’t try to cash in on an HA solution with a recurring fee.

Dell demonstrated his ROI on the Cato solution in a few ways. Bandwidth has increased significantly, the increased network visibility lets him troubleshoot faster, security is integrated, and, at the same time, overall costs have decreased by 25%. User frustration is also down. Users are less frustrated because they’re no longer “being blocked from websites,” he says. As for IT, well, they’re also less frustrated because dealing with support and opening tickets is, as Nick put it, “…so easy now.”

The post How SD-WAN Provided an Alternative to MPLS – A Case Study appeared first on .

Even though the enterprise network is considered the lifeline of an organization, certain challenges have limited its efficiency. Malware threats, limited data replication performance, network availability, sluggish network connectivity — all are challenges that can have an immediate impact on the business. Here’s how to address them.

Ransomware, Malware, and BYOD

Enterprise networks face several types of security challenges. The usual culprits include ransomware, malware, ill-considered BYOD (Bring Your Own Device) strategies, and vulnerable protocols. Ransomware predominantly relies on backdoor entry, compromising network security as well as data security. With small branch offices often lax in their security policies, they become a favorite entry point for all too many attackers.

Personal mobile devices are another critical entry point. The adoption of BYOD practices means IT needs to take care when allowing personal devices onto the network. Otherwise, malware brought into the organization, perhaps unknowingly, could move laterally across the network and infect computers in other locations. Apart from this, certain network protocols are vulnerable to attack. Communication protocols like SSH, RDP, and HTTP are good targets through which an attacker can gain access to the network. Take SSH as an example: a typical large enterprise with 10,000+ servers could have more than one million SSH keys. A lack of proper key management affects how employees rotate or redistribute their keys, which on its own is a security risk. Moreover, SSH keys embedded directly into code are rarely rotated, which can open backdoors for hackers if a vulnerability exists.
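As one concrete, purely illustrative mitigation, an operations team can inventory authorized SSH keys and flag files that have not changed within a rotation window. The search paths, the 90-day threshold, and the use of file modification time as a proxy for rotation are all assumptions of this sketch, not a prescribed standard.

```python
# Illustrative SSH key hygiene check: flag authorized_keys files that have not
# changed within a rotation window. The search roots, the 90-day threshold, and
# the use of file modification time as a proxy for key rotation are assumptions
# of this sketch, not a prescribed standard.
import os
import time

ROTATION_DAYS = 90
SEARCH_ROOTS = ["/home", "/root"]   # assumed locations of user home directories

def stale_key_files(max_age_days=ROTATION_DAYS):
    cutoff = time.time() - max_age_days * 86400
    findings = []
    for root in SEARCH_ROOTS:
        for dirpath, _dirnames, filenames in os.walk(root):
            if os.path.basename(dirpath) != ".ssh":
                continue
            for name in filenames:
                if name == "authorized_keys":
                    path = os.path.join(dirpath, name)
                    if os.path.getmtime(path) < cutoff:
                        findings.append(path)
    return findings

if __name__ == "__main__":
    for path in stale_key_files():
        print(f"authorized_keys unchanged for {ROTATION_DAYS}+ days: {path}")
```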

RDP has had a history of vulnerabilities since it was released. Since at least 2002 there have been 20 Microsoft security updates specifically related to RDP and at least 24 separate CVEs.

Enterprise Data Replication & Bandwidth Utilization

Data replication is an important aspect of data storage, ensuring data security. Modern enterprise architecture also includes multi-level tiered storage for creating a redundant and reliable backup. However, data replication consumes significant network bandwidth. As large chunks of data are transferred over the network for replication, they take up a major share of the available bandwidth, ultimately causing a network bottleneck. This can severely impact network performance.

Network Performance

Network performance is critical for any enterprise. It can be broken down into network speed and network reliability, both of which are key performance parameters for an enterprise network.

If an enterprise network becomes unstable, with higher downtime, it will impact the overall performance of the enterprise. Moreover, in the case of an unscheduled outage, the break-fix solution might involve replacing legacy or failed devices. This costs both time and resources, and it impacts productivity as well.

WAN outages are among the top contributors to lost productivity on enterprise networks.

Complexity and Connectivity to Cloud

Today, the majority of organizations have connected their enterprise networks to the cloud, and often to multiple clouds. However, multi-cloud architectures pose certain challenges for the enterprise network. It is difficult to manage the different providers and apply an integrated security standard across all of them. At the same time, it is hard to strike a proper balance between on- and off-premises environments.

This includes the challenge of finding the right model for connecting on-premises datacenters to the cloud. An enterprise network delivers better performance and reliability when the on-premises and off-premises environments are properly balanced, which should be defined by the organization’s cloud strategy.

Software Defined WAN Solution

Most of the challenges faced by the enterprise network could be effectively solved with the implementation of software-defined WAN (SD-WAN), based on software-defined networking (SDN) concepts.

SD-WAN for enhanced network security

SD-WAN presents new security features with service chaining that can work with the existing security infrastructure. Cato has integrated foundational security policies to curb issues pertaining to malware, ransomware, and vulnerable protocols. Security policies can also be set for the entire network from Cato’s management console, making updating and enforcing security that much easier. Enterprises that require higher security measures can use the advanced security and network optimization functions that run within the Cato Cloud.

SD-WAN for enhanced network performance

SD-WAN uses the Internet to create secure, high-performance connections, eliminating most of the obstacles associated with MPLS networks. SD-WAN can work alongside WAN optimization techniques to deliver MPLS-like latency while routing data across the network, resulting in better performance. Cato, for instance, offers a unique multi-segment optimization that addresses performance issues at a fraction of the cost of MPLS and traditional WAN optimization.

The performance benefits offered by SD-WAN include WAN Virtualization and Network-as-a-Service. Network-as-a-Service allows the organization to use internet connections for optimized bandwidth usage.

SD-WAN for data replication and disaster recovery

With SD-WAN in place, enterprises have more choices for data replication and disaster recovery. Rather than tape-based backup, datacenters can move to WAN-based data transfer and replication. The usual WAN challenges, such as high latency, packet loss, bandwidth limitations, and congestion, can be addressed by SD-WAN, an affordable MPLS alternative that offers fast and reliable data transfer between datacenters.

In this post, we’ve covered some of the real-world challenges that are common in enterprise networking, including problems with security, performance, replication, and connectivity to the cloud. With the help of SD-WAN and related technologies, modern businesses can make their networks more efficient, reliable, and secure without having to rely on expensive MPLS optimizations.

The post 4 Real World Challenges in Enterprise Networking & How SD-WAN Can Solve Them appeared first on .

It’s no secret that CIOs want their networks to be more agile, better able to accommodate new requirements of the digital business. SD-WAN has made significant advancements in that regard. And, yet, it’s also equally clear that SD-WAN alone cannot futureproof enterprise networks.

Mobile users, cloud resources, security services — all are critical to the digital business and yet none are native to SD-WAN. Companies must invest in additional infrastructure for those capabilities. Skilled security and networking talent are still needed to run those networks, expertise that’s often in short supply. Operational costs, headaches, and delays are incurred when upgrading and maintaining security and networking appliances.

Outsourcing networking to a telco managed network service does not solve the problem. Capital, staffing, and operational costs continue to exist, only now marked-up and charged back to the customer. And, to make matters worse, enterprises lose visibility into and control over the traffic traversing the managed network services.

How then can you prepare your network for the digital business of today — and tomorrow? Cloud-native networks offer a way forward.

Like cloud-native application development, cloud-native networks run the bulk of their route calculation, policy enforcement, and security inspection — the guts of the network — on a purpose-built software platform designed to take advantage of the cloud’s attributes. The software platform is multitenant by design, operating on off-the-shelf servers capable of breakthrough performance previously possible only with custom hardware. Eliminating proprietary appliances changes the technical, operational, and fiscal characteristics of enterprise networks.

5 Attributes of Cloud-Native Network Services

To better understand their impact, consider the five attributes a provider’s software and networking platform must meet to be considered cloud-native: multitenancy, scalability, velocity, efficiency, and ubiquity.

Multitenancy

With cloud-native networks, customers share the underlying infrastructure with the necessary abstraction to provide each with a private network experience. The provider is responsible for maintaining and scaling the underlying infrastructure. Like cloud compute and storage, cloud-native networks have no idle appliances; multitenancy allows providers to maximize their underlying infrastructure.

Scalability

As cloud services, cloud-native networks carry no practical scaling limitation. The platform accommodates new traffic loads or new requirements. The software stack can instantly take advantage of additional compute, storage, memory, or networking resources. As such, enabling compute-intensive features, such as SSL decryption, does not impact service functionality.

Velocity

By developing their own software platforms, cloud-native network providers can rapidly innovate, making new features and capabilities instantly available. All customers across all regions benefit from the most current feature set. Troubleshooting takes less time since the support and platform development teams work closely together. And because the core functionality is in software, cloud-native networks can expand to new regions in hours or days, not months.

Efficiency

Cloud-native network design promotes efficiencies that lead to higher network quality at lower cost. Platform ownership reduces third-party license fees and keeps support costs nominal. Leveraging the massive build-out of IP infrastructure avoids the costs telcos incurred constructing and maintaining physical transmission networks. A smart software overlay monitors the underlying network providers and selects the optimum one for each packet. The result: a carrier-grade network at an unmatched price/performance ratio.

Ubiquity

Like today’s digital business, the enterprise network must be available everywhere, accessible from many edges, and able to support physical, cloud, and mobile resources. Feature parity across regions is critical for maximum efficiency. Access to the cloud-native network should be possible through physical and virtual appliances, mobile clients, and third-party IPsec-compatible edges. This way, one network can truly connect any resource, anywhere.

A Revolutionary, Not Evolutionary, Shift in Networking

By meeting all five criteria, cloud-native networks avoid the cost overhead and stagnant processes of traditional service providers. Such benefits cannot be gained by merely porting software or hosting an appliance in the cloud; the network must be built from scratch with the DNA of a cloud service. In this sense, cloud-native networks are a revolution in network architecture and design.

The post The Cloud-Native Network: What It Means and Why It Matters appeared first on .

It’s always great to see a winning customer implementation; it’s even better when others see it too. We just announced that a customer of ours, Standard Insurance Co., has won an ICMG Architecture Excellence Award for its digital transformation initiative. Kudos to the entire Standard Insurance IT team.

“The cost of the total solution Cato is providing us – including the centralized management, cloud-based monitoring, and reports – matches the cost of the firewall appliances alone. But with appliances, we would still need to add the cost of appliance management, the advanced protection,  and other firewall components,” says Alf Dela Cruz, head of infrastructure and cybersecurity at Standard Insurance.

Standard Insurance’s digital transformation was so effective it won an ICMG award for architectural excellence

The ICMG Architecture Excellence Awards is a vendor-independent, global competition benchmarking enterprise and IT architecture capabilities. Nominations are submitted by IT teams worldwide and evaluated by a select group of judges. Past winners include companies such as Credit Suisse, L’Oreal, and Unisys.

Back in 2016, Standard Insurance’s CEO initiated a multiyear digital transformation initiative emphasizing the importance of online insurance selling. As part of that effort, the company needed to upgrade its backend infrastructure, changing its core insurance software and migrating from a private datacenter to AWS.

Standard Insurance needed an enterprise network optimized for the hybrid cloud, with strong protection against Internet-borne threats. After two ransomware incidents, the CEO demanded a dramatically improved security posture.

Cato connected the company’s 60 branches, the headquarters in Makati, Philippines, and the company’s AWS instance into Cato Cloud. Branch firewall appliances were replaced with Cato Security Services, a tightly integrated suite of cloud-native services built into Cato Cloud that includes next-generation firewall (NGFW), secure web gateway (SWG), URL filtering, and malware prevention.

With Cato, Standard Insurance eliminated branch firewalls and connected 60 branches and AWS into one, seamless network

So effective was the implementation that Dela Cruz now encourages others to migrate to Cato. “We are recommending Cato to our business partners,” says Dela Cruz. “We love that the solution is cloud-based, easy to manage, and less expensive than other options.”

To read more about Standard Insurance’s implementation click here.

The post Standard Insurance Transforms WAN with Cato Cloud to Win ICMG Award For Best IT Infrastructure Architecture appeared first on .

In 2014, Gartner analysts wrote a Foundational Report (G00260732, Communication Hubs Improve WAN Performance) providing guidance to customers on deploying communication hubs, or cloud-based network hubs, outside the enterprise data center. Five years later, that recommendation is more important than ever, as current enterprise computing strategies dictate the need for a modern WAN architecture.

What is a communication hub?

A communication hub is essentially a datacenter in the cloud, with an emphasis on connectivity to other communication hubs, cloud data centers, and cloud applications. Hubs house racks of switching equipment in major colocation datacenters around the world, and together they form a series of regional Points of Presence (PoPs). These PoPs are interconnected with high-capacity, low-latency circuits that create a high-performance core network. Communication hubs also have peering relationships with public cloud data centers such as those from Amazon, Microsoft and Google, and major cloud applications from Microsoft, NetSuite, Salesforce and more. This helps deliver predictable network performance.

At the edge of this network, customers can connect their branch locations, corporate data centers, mobile and remote users to the core network via their preferred carrier services (MPLS, broadband, LTE, etc.) using secure tunnels. Each entity connects to the communication hub nearest them to reduce latency.
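A simplified sketch of how an edge might pick its communication hub appears below; the probing method and PoP hostnames are placeholders, not any vendor’s actual discovery logic.

```python
# Simplified sketch: pick the lowest-latency communication hub (PoP) by timing
# a TCP handshake to each candidate. Hostnames are placeholders; real products
# use their own discovery and measurement logic.
import socket
import time

CANDIDATE_POPS = ["pop-ams.example.net", "pop-fra.example.net", "pop-lon.example.net"]

def probe_ms(host, port=443, timeout=2.0):
    """Rough round-trip estimate: time to complete a TCP handshake."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")     # unreachable candidates are never preferred
    return (time.monotonic() - start) * 1000

def nearest_pop(pops=CANDIDATE_POPS):
    return min(pops, key=probe_ms)

if __name__ == "__main__":
    print("Connecting to:", nearest_pop())
```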

Communication hubs also host regionalized security stacks so that traffic going to/coming from the Internet and external clouds can be inspected thoroughly for threats. This eliminates or vastly reduces the need for customer locations to host security appliances of their own.

The need for communication hubs, and the benefits they provide

According to the Gartner report, the primary reasons for developing a WAN architecture based on communication hubs are the same reasons Cato has been articulating for years:

  • Cloud services are responsible for moving more applications out of the corporate datacenter and onto IaaS and SaaS platforms. This need to send traffic directly into the cloud requires the core WAN backbone based on the hubs to become the new corporate LAN.
  • An increasing number of mobile users needing access to enterprise applications want a high-quality user experience, without the latency of backhauling their traffic to a corporate data center.
  • Voice and video traffic is on the rise, and it requires high bandwidth, low latency transport. Also, companies need the ability to prioritize certain types of traffic across the WAN.

We would add to this list the need to distribute security to regional locations close to where the users are, without having to deploy hardware appliances in the branches.

The Gartner report notes that creating a WAN backbone architecture based on communication hubs connected with high-speed links provides many benefits to the enterprise, including:

  • Minimize Network Latency — This type of architecture ensures the fastest network path between an enterprise’s strategic sites, which include data centers, branch locations, cloud providers and a large population of the enterprise’s customer base.
  • Keep Traffic Regionalized — Minimize the backhauling of traffic into a corporate datacenter when it has to go from the enterprise network to the Internet, or for audio/Web/video collaboration.
  • Utilize Ethernet for Cloud Connectivity — Cloud services can be accessed via private Ethernet and MPLS connectivity, providing more predictable performance.
  • Provide On-Demand Flexibility — Easily and quickly modify bandwidth as business needs change by provisioning new circuits within days via self-service.

Cato Cloud is the ultimate network of communication hubs

From the very beginning, Cato’s unique vision has been very similar to the WAN architecture described in Gartner’s report. Cato has built a global network of PoPs – our term for “communication hubs” – where each PoP runs an integrated network and security stack. At this writing, there are more than 40 PoPs covering virtually all regions of the world. Our goal is to place a PoP within 25 milliseconds of wherever businesses work.

The PoPs are interconnected with multiple tier-1 carriers that provide SLAs around long-haul latency and packet loss, forming a speedy and robust core network. The PoP software selects the best route for each packet across those carriers, ensuring maximum uptime and best end-to-end performance. The design offers an immediate improvement in network quality over the unpredictable Internet links at a significant cost reduction over MPLS.
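As a rough illustration of that carrier selection (not Cato’s routing algorithm), a PoP could track an exponentially weighted moving average of each carrier’s measured latency and loss and steer traffic over the best-scoring provider:

```python
# Rough illustration of carrier selection between PoPs: keep an exponentially
# weighted moving average (EWMA) of each carrier's measured latency and loss,
# and steer traffic over the best-scoring provider. Carrier names and weights
# are placeholders; this is not Cato's actual routing algorithm.
class CarrierScore:
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.latency_ms = 100.0   # pessimistic starting estimates
        self.loss_pct = 1.0

    def update(self, latency_ms, loss_pct):
        a = self.alpha
        self.latency_ms = a * latency_ms + (1 - a) * self.latency_ms
        self.loss_pct = a * loss_pct + (1 - a) * self.loss_pct

    def cost(self):
        return self.latency_ms + 50 * self.loss_pct   # weight loss more heavily

carriers = {"carrier_a": CarrierScore(), "carrier_b": CarrierScore()}
# Feed in periodic probe results (values are made up).
carriers["carrier_a"].update(80, 0.0)
carriers["carrier_b"].update(70, 0.8)

best = min(carriers, key=lambda name: carriers[name].cost())
print("Send the next flow over:", best)
```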

All customer entities connect to the Cato Cloud backbone using secure tunnels, which can be established in a couple of ways. Cato can establish an IPsec tunnel from customers’ existing equipment, such as a firewall in a datacenter or branch location. A second way is to use a Cato Socket, a zero-touch SD-WAN device that manages traffic across the last mile from a branch office. Mobile users connect via a Cato Client on their device. Thus, every customer location and user can connect easily and securely to the WAN.

Cato applies a layer of optimization in the cloud, for both cloud data centers and cloud applications. For cloud applications, Cato can set egress points on its global network so that Internet traffic for specific apps exits at the Cato PoP closest to the customer’s instance of that app; for example, for Office 365. For cloud data centers, the Cato PoPs are collocated in data centers directly connected to the same Internet exchange points as the leading IaaS providers, such as AWS and Azure. Cato drops the traffic right in the cloud provider’s data center, the same way a premium connection like Direct Connect or ExpressRoute would, so these services are no longer needed when using Cato Cloud.

In short, Cato’s unique multi-segment acceleration combines both edge and global backbone and allows Cato to maximize throughput end-to-end to both WAN and cloud destinations. This is the crux of the argument for communication hubs.

Security is an integral component of Cato’s global network. Convergence of the networking and security pillars into a single platform enables Cato to collapse multiple security solutions such as a next-generation firewall, secure web gateway, anti-malware, and IPS into a cloud service that enforces a unified policy across all corporate locations, users and data. Cato’s holistic approach to security is found everywhere throughout the Cato Cloud platform.

Communication hubs provide a flexible WAN architecture with significant benefits. Companies can choose to build their own network of hubs at great expense, or they can plug into the Cato Cloud and enjoy all the benefits of a modern WAN from day one.

The post How To Best Design Your WAN for Accessing AWS, Azure, and the Cloud appeared first on .

Unified Communications-as-a-Service (UCaaS) is increasingly attractive to organizations looking to eliminate the cost of operating on-premises platforms and PSTN access contracts. However, those looking to adopt UCaaS to save money may be in for a nasty surprise.

UCaaS offerings move unified communications capabilities — integrated calling, conferencing, and messaging applications — normally delivered from on-premises servers, into the cloud. The idea, like so many cloud services, is that UCaaS will lower the adoption barrier by eliminating capital expenses to procure new applications, while also reducing UC implementation and operational costs – and to an extent that’s true.

Our research also shows, though, that many enterprises experience an increase in WAN costs to support connectivity to the cloud. Approximately 38% of companies benchmarked by Nemertes Research in 2018 saw their WAN costs rise as a result of their adoption of UCaaS, with a mean increase in spend of 23.5%. More than a third cited rising network costs as the biggest contributor to increased UC operating spend in their first year of moving to the cloud.

What’s driving these network cost increases?  Two factors in particular:
  1. The need to increase bandwidth between the organization and the Internet to support connectivity to the UCaaS provider
  2. The need to add bandwidth between locations to support new features commonly available from UCaaS providers, like video conferencing.

Those seeing rising network costs typically purchase additional MPLS bandwidth from their existing WAN supplier(s).  They have not yet begun to deploy SD-WAN to add bandwidth, support real-time applications, and reduce WAN spend.

SD-WAN reduces WAN expense by virtualizing network access services, allowing organizations to replace or reduce expensive MPLS access links with lower cost Internet services while maintaining necessary performance and reliability to support voice and video communications.  Emerging SD-WAN service providers further build upon the benefits of SD-WAN by offering guaranteed end-to-end performance across the globe, as well as direct network connectivity to many UCaaS providers, enabling efficient call flows.

Additional cost reductions result from collapsing the branch stack, replacing dedicated firewalls, WAN optimizers, session border controllers, and routers with converged functions that run features as virtual instances on a virtual customer-premises equipment (vCPE) or are provided by the SD-WAN.  Nemertes also finds that network management costs decline on average by 20% for those organizations who have converted at least 90% of their WAN to SD-WAN.

Consider an example of real-world potential savings. In this scenario, a 200-site organization using MPLS spends $3.476 million per year on network costs. Shifting to 100% SD-WAN reduces those costs to $2.154 million, a net savings of $1.321 million per year.
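A quick back-of-the-envelope check of those figures; the per-site breakdown is derived here and is not part of the Nemertes report:

```python
# Back-of-the-envelope check of the scenario figures quoted above.
sites = 200
mpls_annual = 3_476_000      # $ per year, all-MPLS
sdwan_annual = 2_154_000     # $ per year, 100% SD-WAN

savings = mpls_annual - sdwan_annual
print(f"Annual savings: ${savings:,}")                 # $1,322,000 with these rounded inputs
print(f"Savings rate:   {savings / mpls_annual:.0%}")  # roughly 38%
print(f"Per site, per month: ${mpls_annual / sites / 12:,.0f} -> ${sdwan_annual / sites / 12:,.0f}")
```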

SD-WAN adoption brings further demonstrable benefits, including improved resiliency from adding secondary network connections to branch offices, faster turn-up of new branch offices, and the ability to increase branch office bandwidth more rapidly.

Those considering or adopting UCaaS would be wise to evaluate the impact that UCaaS adoption will have on their network, particularly with regard to demands for additional bandwidth to support video conferencing and the need for highly resilient, low-latency, low-jitter network performance. Evaluate SD-WAN as a means of meeting the performance and reliability needs of UCaaS while reducing WAN spend.

The post Reducing WAN Spend when Adopting UCaaS appeared first on .

For global companies still operating with a legacy WAN architecture, WAN modernization is mandatory today for a variety of reasons. For example, digital transformation is based on business speed, and the lack of network agility can hold an organization back. A company that has to wait months to install networking equipment in order to open a new location might miss a fleeting business opportunity.

Many businesses have spent millions of dollars increasing their level of application and compute agility through the use of cloud resources, and now it’s time to update the network with a software-defined WAN. When it comes to modern cloud-based applications, a poor network results in a poor experience.

“SD-WAN” is a very broad category. Architectures can vary greatly from one vendor to another and from one service provider to another. CPE (customer premises equipment), broadband transport, security, and other factors can be quite different from one provider to another. If a company chooses the wrong SD-WAN, it can be detrimental to the business.

Global companies have unique networking needs. Workers across far-flung locations around the world often need to communicate and collaborate. For example, product developers in the U.S. need to confer in real-time with managers in manufacturing plants in Asia. Architects in Europe need to send blueprints to builders in South America. These routine work activities place special demands on the network pertaining to bandwidth, response times and data security.

We asked Zeus Kerravala, Principal Analyst with ZK Research, to outline his set of SD-WAN considerations for global companies. According to Kerravala, the choice of network is critically important for companies with locations across the globe. He explains the importance of considering Internet transport for global connections, managing CPE, and securing branch offices.

WAN transport considerations

Many SD-WAN vendors are big proponents of augmenting or replacing MPLS circuits with broadband connectivity, says Kerravala. “Broadband Internet transport is fine for short distances but it can add significant latency in global connections.” He pointed to a chart drawn from his research that shows sample response times over these longer distances using the Internet versus a private network.

Sample Average Response Times

Route                   Internet (seconds)   Private Network (seconds)
Dubai to Dallas         1.185                0.375
Dubai to London         4.24                 0.19
Frankfurt to Shanghai   1.99                 0.2
San Jose to Shanghai    3.97                 0.306
San Jose to Chicago     0.194                0.158

“A lot of these response times have to do with how Internet traffic works. ‘The Internet’ is really a collection of interconnected networks, and it’s difficult to know how your traffic moves around on this system of networks,” says Kerravala. “Various factors can affect Internet response time, such as the time of day, but it’s easy to see that the differences are staggering compared to using a private network. You might look at some of these figures and think that the difference isn’t very much, but if you are moving large packets of data, say for data center replication, it might actually make a difference in how long it takes to perform an activity.” Latency can affect important applications like voice and video.
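To see why those response-time gaps matter for bulk transfers such as datacenter replication, consider the classic TCP rule of thumb of at most one window in flight per round trip, treating the response times above as a proxy for round-trip time. The 256 KB window below is an assumption; real transfers vary with tuning, parallel streams, and loss.

```python
# Rough TCP throughput ceiling: at most one window in flight per round trip.
# The 256 KB window is an assumed example; real transfers vary widely with
# tuning, parallel streams, and loss.
def tcp_ceiling_mbps(rtt_seconds, window_bytes=256 * 1024):
    return (window_bytes * 8) / rtt_seconds / 1_000_000

for route, internet_rtt, private_rtt in [
    ("Dubai to London",      4.24, 0.19),
    ("San Jose to Shanghai", 3.97, 0.306),
]:
    print(f"{route}: {tcp_ceiling_mbps(internet_rtt):.2f} Mbps over the Internet "
          f"vs {tcp_ceiling_mbps(private_rtt):.2f} Mbps over a private backbone")
```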

Kerravala points out that there are a lot of SD-WAN vendors, and many of them target different kinds of customers. “The service providers that have their own private backbone are a better fit for global companies because they leverage the benefit of broadband as an on-ramp but it doesn’t become the transport network.”

Managing CPE

Many SD-WANs require significant CPE and managing them globally is an issue. “It’s expensive and time-consuming for an engineer to visit branch locations around the globe to install firewalls and routers. The process can hold up opening new offices,” says Kerravala. “The traditional model of having the networking equipment on premises is actually getting in the way of businesses. Digital transformation is about agility. If a company is trying to take advantage of some sort of market transition and open up a new office but now they have to wait a couple of months in order to get a box shipped to a certain location and have an engineer hop on a plane, that’s a problem. How you manage the CPE is as important as how you manage the transport.”

There’s been a lot of chatter in the industry about NFV (network functions virtualization) or virtual CPE and the ability to take a network function and run it as a virtual workload on some kind of shared appliance. Conceptually, putting a WAN optimizer or a router on some sort of white box server sounds great. “I can take multiple appliances, consolidate them down to one and all of a sudden I have a better model,” says Kerravala. “On the upside, it does lower the cost of hardware. The problem is, it doesn’t really address many of the operational issues. I have replaced physical with virtual and maybe I can deploy it faster because I can remotely install it but operationally, I’m still managing these things independently.”

A company that has 100 global offices might have 100 virtual firewalls instead of 100 physical ones, but they still need to be managed independently. Administrators need to worry about firewall rule sets, configuration updates, and software updates. Moreover, the company doesn’t get the same kind of elastic scale that it would get from the cloud. So, the company has addressed half the problem in that its hardware costs are less but they have introduced some new operational challenges. Kerravala calls the lack of hardware scaling capabilities “the dark side of vCPE” that doesn’t get talked about much.

He recommends that global companies shift their networking equipment to the cloud to get better scalability and to eliminate the need to maintain equipment locally. “There’s no reason today to not leverage the cloud for as much as possible. We do it in the computing world and the application world and we should do it for the network environment as well,” says Kerravala.

“If I’m going to move to this virtualized overlay type of network or some sort of SD-WAN, then a better model is to take my vCPE and push it into the cloud. And so, the functions now exist in your cloud provider and they inherit all the benefits of the cloud—the concept of pay per use and elastic scaling, the ability to spin services up and spin services down as needed. If I want to open a new office, I know I need routing capabilities and a firewall and maybe a VPN. I can just pick those from a menu and then have them turned up almost immediately. So, there’s no infrastructure management needed, there are no firmware updates, there are no software updates. The cloud provider handles all of that. I have a lot more assurance that when I request a change, it is going to propagate across my network at once. I don’t have to manage these things node by node. It can significantly change the operational model.”

Security considerations

Along with CPE and transport, global companies have to think about security implications as well. For example, securing branch offices independently is complicated and error-prone.

Traditional CPE-based security is very rigid and inflexible, and in an era when companies want to do things quickly, it can be a challenge to manage security solutions from multiple vendors. The process of keeping rules and policies up to date is complicated because not all vendors use the same syntax or follow the same rules. Even with just two vendors, the process is so complicated that it’s hardly worth the effort.

Say a company has 100 offices and not all of them have been upgraded to the same level of firewall software. The company wants to put in a new security patch, but it might not be possible until all the firewalls have been upgraded. Anyone involved in networking knows that configurations get out of alignment with each other very quickly. vCPE offers some benefits but it really doesn’t change that model.

Kerravala explains that the middle mile is not all that secure. “You can protect the edges but that middle mile is where a lot of the threats come from, and so you get inconsistent protection across the organization. This is where thinking about changing the security paradigm by moving a lot of these functions into the cloud makes a lot more sense because now security is almost intrinsic across the entire network. You can protect the edges but you can also protect that middle mile where a lot of the breaches happen today,” he says.

In summary

Because of the unique needs that global organizations have, they must thoroughly evaluate the architectures of various SD-WANs. Kerravala recommends implementing much of the SD-WAN infrastructure in the cloud to simplify management and operations and to improve security.

For more information on this topic, watch the recorded webinar The Practical Blueprint for MPLS to SD-WAN Migration.

The post SD-Wan Consideration Factors for Global Companies appeared first on .
