Packet Design Blog by Don Jacob, Technical Marketing Spec.. - 1y ago

We are happy to announce the release of the latest version of our Explorer Suite – version 18.1. Our latest release introduces two features that help network engineers make more informed decisions for network planning and maintenance: (i) capacity planning and (ii) comparison reports.

We have also added support for streaming telemetry, a new mechanism for collecting network performance data without the scalability limitations of today’s “pull-based” methods. In this blog, we provide a summary of the three major features in Explorer Suite version 18.1.

Capacity Planning

When managing networks, it is important to ensure that sufficient bandwidth is available for application and data delivery. Lack of sufficient bandwidth can result in network downtime, data delivery failures, congestion, degraded application performance, or high latency, any of which can lead to financial and productivity losses for the organization. For this reason, network engineers and planners cannot afford to guess when bandwidth consumption will reach a particular value, or defer equipment upgrades until it’s too late and the available link bandwidth is saturated. It is important to be able to forecast utilization trends so that informed decisions can be made for upgrading bandwidth capacity or rerouting traffic over links.

Using the Capacity Planning feature in Explorer Suite version 18.1, network planners can determine how bandwidth utilization on a link will grow between now and a future date. The growth can be calculated using exponential, linear or user-defined growth rates, based on historical daily 95th percentile, average or maximum values.
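
As a rough illustration of the linear case, a planner could fit a trend line to the daily 95th-percentile samples and project when the link saturates. This is a minimal sketch of the idea, not the product's actual algorithm; the function name and sample values are hypothetical:

```python
def forecast_saturation(daily_p95_mbps, link_capacity_mbps):
    """Fit a least-squares trend to daily 95th-percentile samples (oldest
    first) and return the number of days until the link saturates, or None
    if utilization is flat or declining."""
    n = len(daily_p95_mbps)
    if n < 2:
        raise ValueError("need at least two samples")
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_p95_mbps) / n
    # Slope of utilization (Mbps) per day.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_p95_mbps)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    days = (link_capacity_mbps - daily_p95_mbps[-1]) / slope
    return max(0, round(days))

# A link growing 10 Mbps/day from 140 Mbps saturates a 200 Mbps link in ~6 days.
days_left = forecast_saturation([100, 110, 120, 130, 140], 200)
```

An exponential model would instead fit the logarithm of the samples; a user-defined rate skips the fit entirely and just compounds the chosen percentage forward.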

Capacity Planning Report

This information provides network engineers with actionable data and helps eliminate the guesswork when planning bandwidth upgrades or provisioning new services in the network.

Comparison Reports

Numerous industry reports have cited network changes as the number one cause of downtime. To ensure that the state of the network is as expected after a configuration change, software upgrade or other scheduled maintenance, network engineers must be able to compare the state of network devices at different times. This capability is the second major feature we have added to the Explorer Suite, and it will help network engineers assure network uptime.

Network engineers simply select the devices whose state is to be compared and specify the before and after times. The new Comparison Report highlights the differences.

Comparison Report

From the Comparison Report, it is easy to determine whether the state of devices, links, IGP prefixes or Traffic Engineering tunnels has changed from up to down or vice versa. Engineers can click and drill down into any of the changed states for more detail. For example, IGP prefix changes will show the before and after state and metric. In the case of changes to TE tunnels, drilling down can show whether tunnel attributes have changed and whether paths or FRR protection has been added or removed.
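
Conceptually, the report is a diff of two state snapshots. A minimal sketch of that comparison follows; the object names and state values are illustrative, not the product's data model:

```python
def compare_states(before, after):
    """Return state transitions between two snapshots.

    before/after: dicts mapping an object name (device, link, IGP prefix,
    TE tunnel) to its operational state, e.g. "up" or "down". Objects
    present in only one snapshot are reported as transitions from/to
    "absent".
    """
    changes = {}
    for name in set(before) | set(after):
        old = before.get(name, "absent")
        new = after.get(name, "absent")
        if old != new:
            changes[name] = (old, new)
    return changes
```

Drill-downs would then attach per-object detail (metrics, tunnel attributes, FRR protection) to each reported transition.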

Comparison Report – Drill Down Into Changes

Using this report, network engineers can quickly determine what changed after network maintenance or what change led to network downtime or rerouting of traffic.

Streaming Telemetry

Streaming telemetry is a new “push-based” mechanism used for collecting network performance data. It is a more scalable method for gathering performance data without any of the limitations of “pull-based” mechanisms, such as SNMP. We covered the basics of streaming telemetry previously and Explorer Suite version 18.1 now supports collection of streaming telemetry from Juniper and Cisco devices.
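
Conceptually, the collector passively consumes records that the device pushes on its own schedule, instead of issuing a request per poll. A toy sketch of the consuming side, assuming self-describing JSON records with gNMI-style paths (purely illustrative; real deployments use the gRPC or UDP transports and encodings defined by each vendor):

```python
import json

class TelemetrySubscriber:
    """Toy push-based collector: the device streams samples at its own
    cadence and the collector simply consumes them."""

    def __init__(self):
        self.samples = []

    def on_message(self, raw):
        # Each pushed message is assumed to be a self-describing JSON
        # record: a sensor path, a timestamp, and a counter value.
        self.samples.append(json.loads(raw))

sub = TelemetrySubscriber()
sub.on_message('{"path": "/interfaces/interface/state/counters/in-octets", '
               '"timestamp": 1, "value": 100}')
sub.on_message('{"path": "/interfaces/interface/state/counters/in-octets", '
               '"timestamp": 2, "value": 160}')

# Rate between two consecutive pushes, in octets per second.
rate = (sub.samples[1]["value"] - sub.samples[0]["value"]) / (
    sub.samples[1]["timestamp"] - sub.samples[0]["timestamp"])
```

Because the device chooses when to send, the collector's cost no longer grows with the number of polled objects, which is the scalability advantage over SNMP-style pull.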

In addition to these major features, there are also other enhancements:

  • Top link in and out dashboard controls
  • Link context pages that display mini-maps with interface, description, TE metric, bandwidth and SRLG information
  • Baselines of BGP routes, which show baseline BGP routes by next hop, second next hop AS, origin AS and neighbor AS, along with the deviation from the baseline

We hope you find the new capabilities of Explorer Suite exciting. And as always, a big thank you to all our users whose ideas and feedback help us continue to enhance our products in ways that really help network teams manage their networks better.

Interested to know more? Request a Personalized Demo

The post Explorer Suite Version 18.1 – What Is New? appeared first on Packet Design.

Packet Design Blog by Don Jacob, Technical Marketing Spec.. - 1y ago

With the advent of multi-layer, multi-service networks, monitoring and management in service provider networks is more complex than ever before. While network operators can gain insights into the health of the IP network – including routing, traffic, and performance using a mix of packet, SNMP, and flow-based monitoring tools – consolidating and correlating this point-specific data into actionable knowledge is challenging.

Add multi-layer and multi-service to this mix, and network engineers are limited by the lack of a comprehensive and meaningful view of network performance. For example, without visibility into the underlying optical layer combined with multi-layer analytics, operators cannot perform optimal traffic engineering that exploits the performance, protection, and cost characteristics of each layer.

The Monitoring Challenges in Multi-Layer, Multi-Service Networks

Many service provider networks are multi-layered. These consist of the IP/MPLS layer, which supports constraint-based destination routing, and the optical layer. With its high speeds and superior bandwidth utilization capabilities, the optical layer allows the transport of large traffic volumes over long distance point-to-point fiber links.

However, when it comes to monitoring multi-layer networks, there are no unified network monitoring tools that can collect and correlate network performance data from Layer 0 to 3. This lack of end-to-end, correlated visibility means that network engineers managing the optical network are not aware of any possible performance issues at the IP layer and vice-versa. This in turn affects network planning, traffic optimization, and troubleshooting of service-delivery issues.

For instance, the failure of an optical fiber that carries multiple IP/MPLS links can result in traffic being dynamically rerouted over another IP link. But this new link can end up being over-utilized due to the sudden shift in traffic, thereby impacting overall network performance and data delivery.

Multi-service networks present another set of challenges. In the past, service providers had networks dedicated to different customers and their applications. That has changed with service providers using the same underlying network to deliver different applications, ranging from banking and trading data to live voice and video streaming. Running multiple applications over a converged network is complex since each has unique performance requirements, growth rates, and fault-tolerance characteristics.

For example, congestion on one link can result in dynamic rerouting of traffic over a longer, high-latency link that is not suitable to low-latency applications such as banking or trading. With only static performance metrics collected using SNMP or flow analytics, network engineers are left blind to these changes that can result in failure to meet SLAs and customer dissatisfaction.

Network Analytics Solution

To run effective multi-layer, multi-service networks, service providers and network operators require visibility into routing, traffic, and network performance along with insights into the performance of the underlying optical layer. But in large, complex networks, this data is useful only if the network engineer can determine how a performance issue is affecting routing or data delivery in another part of the network.

This can be achieved by using a network analytics tool that collects and processes routing, traffic, and performance metrics and consolidates them to provide actionable intelligence. For instance, correlation of routing and performance measurements can help a network engineer understand how device errors notified by SNMP are causing traffic to be rerouted over a longer, high-latency path, resulting in connectivity issues.

Another capability needed is visibility into both the optical and IP/MPLS layers, which not only aids faster troubleshooting of network issues, but also helps network operators achieve benefits such as assuring truly diverse paths or data sovereignty.

We have previously talked about how network analytics with visibility into both the optical and IP/MPLS layers can help monitor and manage multi-layer, multi-service networks in these blogs:

https://www.packetdesign.com/blog/self-driving-networks-have-arrived/

https://www.packetdesign.com/blog/network-automation-use-case-multi-layer-optimization/

The correlation of network analytics data from different layers helps network engineers understand present and past network behavior, detect potential issues, and predict outages even before end users have had a chance to complain.

The combination of Ciena Blue Planet software with Packet Design Explorer Suite provides the network analytics and intelligence that can enable service providers to overcome these challenges. Learn more about how Packet Design Explorer Suite with Ciena Blue Planet can help with Layer 0 to 3 visibility here:

https://www.packetdesign.com/blog/rush-multi-layer-orchestration/

Request a Personalized Demo

The post Monitoring and Managing Multi-Layer, Multi-Service Networks appeared first on Packet Design.


It’s the “age of the customer” according to Forrester Research: “Empowered customers are shaping business strategy. Simply put, customers expect consistent and high-value in-person and digital experiences. They don’t care if building these experiences is hard or requires a complex, multifunction approach from across your business. They want immediate value and will go elsewhere if you can’t provide it.”

Service providers know this all too well. Satisfying their customers requires building agile, automated networks that can rapidly provision services and help them achieve greater ROI. So how are they doing? Automation is the “latest and probably greatest collective challenge ever put to the sector,” write Heavy Reading analysts:

“While telecom operators have generally done a good job of automating their customer-facing activities through self-service portals, there is often a long lead time for services to be activated, due to the presence of multiple manual processes between initial order and final delivery. To compete successfully with Webscale insurgents [Amazon, Facebook, Google], network operators need to extend automation beyond service fulfillment and provisioning to include performance monitoring and service assurance.”

Automation Across the Service Lifecycle

At Packet Design, we understand how challenging automation is for service providers. To deliver what their customers are demanding, they must optimize their networks for numerous services. These applications have differing, often conflicting performance requirements, growth rates, and fault-tolerance characteristics.

On top of this, service providers must also process a higher volume and rate of requests for network resources that have to be provisioned and de-provisioned rapidly, often within seconds. A few examples include:

  • Adding a new customer to the network
  • Supporting a time-sensitive trading application for a financial services enterprise
  • Temporarily increasing bandwidth between two sites

An SDN controller can automate these requests by issuing commands to the network devices without human intervention. However, as the Heavy Reading analysts state, automation alone is not enough. SDN presents many management challenges for operators, including loss of visibility into changes taking place in the network and the need to capture engineering expertise in SDN applications. SDN controllers lack the management intelligence needed for autonomous networking.

Referring back to our examples, how do operators know if adding a new client to the network will impact performance for other customers? For the time-sensitive trading application, how can network engineers compute short delay paths, segregate the application traffic from other traffic, and fully protect it from link and router failures? To ensure adequate bandwidth between two sites, how can they calculate the optimum path and prevent other services from being affected?

Fortunately, these challenges can be addressed by SDN analytics, fed by network topology, traffic, and performance telemetry as well as projections and algorithms. Telemetry is simply the data; analytics provide actionable conclusions, guiding service providers on what to do with the data gathered from the network.

With its telemetry, analytics, and a policy-driven path computation and optimization engine, the Packet Design Explorer SDN Platform uniquely provides the SDN management intelligence service providers need. Using it, their networks can provision network resources dynamically to accommodate different service types, variable demands, and failures. It’s built on the company’s ability to provide real-time monitoring, back-in-time forensic analysis, and interactive network event and demand modeling.

For instance, the Platform’s predictive analytics give operators accurate impact assessments of application requests for network resources – including whether or not the requested changes will adversely affect other services – and the best way to provision them. Historical traffic matrices (by time of day, day of week) make it possible to determine if network load is likely to change significantly after an application request is satisfied (for example, the predictable increase in market data and trading traffic that occurs when stock markets open).
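
One way to picture such historical traffic matrices is as per-(day-of-week, hour) buckets of demand samples that a planner can query before admitting a new request. This is a simplified sketch with hypothetical names, not the Platform's internal representation:

```python
from collections import defaultdict

class TrafficMatrixHistory:
    """Bucket demand samples by (day-of-week, hour) so a planner can ask,
    e.g., what the r1->r2 demand typically looks like on Mondays at 09:00,
    when stock markets open."""

    def __init__(self):
        # (src, dst, weekday, hour) -> list of observed demands in Mbps
        self._buckets = defaultdict(list)

    def record(self, src, dst, weekday, hour, mbps):
        self._buckets[(src, dst, weekday, hour)].append(mbps)

    def typical(self, src, dst, weekday, hour):
        """Mean demand for this bucket, or None if never observed."""
        samples = self._buckets.get((src, dst, weekday, hour))
        return sum(samples) / len(samples) if samples else None
```

An impact assessment would compare the requested bandwidth against the typical load for the time window in question rather than against a single instantaneous reading.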

With this real-time telemetry, analytics, optimization, and policy, the Explorer SDN Platform is making it possible for service providers to reap the benefits of automation. They can build more resilient networks, accelerate service activation, improve operational efficiency, enhance customer satisfaction, and better use infrastructure and human resources.

More Network Automation Use-Cases

The post Network Automation Use Case: Rapid Service Provisioning appeared first on Packet Design.

Packet Design Blog by Hariharan Ananthakrishnan, Distingu.. - 1y ago

This blog post discusses different YANG models for describing network topologies. There are multiple applications for a topology data model. For example, nodes within the network can use a data model to capture their understanding of the overall network topology and expose it to a network controller. A network controller can then use the instantiated topology data to compare and reconcile its own view of the network topology with that of the network elements it controls. A network controller might even use the data model to expose its view of the topology it controls to applications via northbound APIs.

YANG Model Use Cases

The Architecture for the Interface to the Routing System allows for a mechanism whereby the distributed control plane can be augmented by an outside control plane through an open, accessible interface. The IETF’s I2RS working group summarized the use cases in this document. One of the use cases is to (a) provide a coherent view of the network topology from the collected data, (b) present the topology view to applications that need to understand the topology, and (c) use topology information to improve application-specific mechanisms, such as path selection, resource reservation, etc.

Just before the IETF 101 meeting in London, some important RFCs were published by the IETF in the Network Management domain. Here, we review three of them.

  • RFC 8342, Network Management Datastore Architecture (NMDA)
  • RFC 8345, A YANG Data Model for Network Topologies
  • RFC 8346, A YANG Data Model for Layer 3 Topologies
Network Management Datastore Architecture (NMDA)

Datastores are a fundamental concept binding data models written in the YANG data modeling language to network management protocols, such as the Network Configuration Protocol (NETCONF) and RESTCONF. RFC 8342 defines an architectural framework for datastores that addresses requirements that were not well supported in an earlier, simpler model.

Agreement on a common architectural model for datastores ensures that data models can be written in a way that is agnostic to the encoding used by network management protocols. The architectural framework identifies a set of conceptual datastores, but it does not mandate that all network management protocols expose all these conceptual datastores.

A YANG Data Model for Network Topologies

RFC 8345 defines an abstract (base) YANG data model to represent networks and topologies. The data model is divided into two parts:

The first part enables the definition of network hierarchies or network stacks (i.e., networks that are layered on top of each other) and maintenance of an inventory of nodes contained in a network.

The second part of the data model augments the basic network data model with information to describe topology. Specifically, it adds the concepts of “links” and “termination points” to describe how nodes in a network are connected to each other.

The data model also introduces vertical layering relationships between networks that can be augmented to cover both network inventories and network/service topologies.
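
For a concrete feel for the model, an instance document following RFC 8345's module structure (JSON encoding in the style of RFC 7951) might look like the following, here built and queried in Python. The node, link and termination-point names are invented for illustration:

```python
# A two-node topology expressed with RFC 8345's "networks"/"node" containers,
# augmented by the ietf-network-topology "link" and "termination-point" lists.
topology = {
    "ietf-network:networks": {
        "network": [
            {
                "network-id": "l3-example",
                "node": [
                    {"node-id": "r1",
                     "ietf-network-topology:termination-point": [
                         {"tp-id": "r1-eth0"}]},
                    {"node-id": "r2",
                     "ietf-network-topology:termination-point": [
                         {"tp-id": "r2-eth0"}]},
                ],
                "ietf-network-topology:link": [
                    {"link-id": "r1-r2",
                     "source": {"source-node": "r1", "source-tp": "r1-eth0"},
                     "destination": {"dest-node": "r2", "dest-tp": "r2-eth0"}}
                ],
            }
        ]
    }
}

# A controller consuming this data can derive an adjacency view directly.
network = topology["ietf-network:networks"]["network"][0]
adjacency = {(link["source"]["source-node"], link["destination"]["dest-node"])
             for link in network["ietf-network-topology:link"]}
```

Layer 3 specifics (prefixes, router IDs, IGP attributes) come from the RFC 8346 augmentations layered on top of this base structure.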

A YANG Data Model for Layer 3 Topologies

RFC 8346 introduces a YANG data model for Layer 3 (L3) network topologies, specifically L3 Unicast. The model gives applications a holistic view of the topology of an L3 network, all contained in a single conceptual YANG datastore. The data model builds on and augments the data model for network topologies defined in RFC 8345.

The document also defines an example model that covers OSPF.

As a co-author of YANG models for network topology and L3 topologies, Packet Design is enabling and embracing the Software Defined Networking revolution. Some of our customers have already embraced SDN in their production networks.

The post YANG Models for Network Topologies appeared first on Packet Design.

Packet Design Blog by Don Jacob, Technical Marketing Spec.. - 1y ago

In a previous blog, we covered the basics of multicast routing. Multicast is preferred over unicast and broadcast by content and service provider networks for IPTV, video content distribution, stock market live feeds, and in data centers to synchronize data between servers. Large enterprises also use multicast for video surveillance and teleconferencing. Because multicast is used for these critical and high-volume applications, any downtime can lead to major loss of revenue and customer churn.

In this blog post, we will look at the challenges of running multicast applications and how you can monitor and manage multicast networks.

Multicast versus Unicast

Though multicast requires considerably fewer resources and reduces delay compared to unicast, it does not provide the same level of visibility. With unicast, a single source sends to a single destination and the IP address of each participating device is known. But with multicast, a single source sends to a group of devices – a host group – that can be dynamic. This makes managing and monitoring network performance more complex, and many network operators struggle to know what multicast addresses are used in their network and what they are used for. For example, a financial services organization we talked to estimated there are sixteen thousand multicast groups in their network. While they would like to reduce that number, they can’t because they do not know what each group is used for.

Challenges with Multicast Routing

There are other challenges associated with running a multicast network. Here are four common issues:

  • If multicast is not enabled on an upstream interface that connects to the Rendezvous Point (RP), it can result in what is known as an RPF (Reverse Path Forwarding) failure, which will break multicast routing.
  • To enable multicast on an interface, PIM must be enabled on all the routers connected to that interface. But if it is a low capacity interface, the high volume of multicast traffic reaching it may result in congestion. In some cases, this can cause downstream receivers to request missing packets. This in turn increases the volume of multicast traffic and can possibly result in a total network meltdown.
  • Problems occur when multicast group to RP mappings are misconfigured. Because these mappings are statically configured, an incorrect or different configuration on routers can result in some receivers never receiving the data.
  • Changes made to the underlying unicast routing that affect the choice of the upstream multicast router can result in a branch of the multicast tree being grafted onto another router, leading to multicast routing failure. Diagnosing such changes requires visibility not only into multicast, but also into the underlying unicast routing.
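
The RPF rule behind the first of these issues can be sketched as follows. This is a simplification: a real router performs a longest-prefix-match lookup toward the source rather than the exact-address lookup shown here:

```python
def rpf_check(packet_in_iface, source, unicast_routes):
    """Reverse Path Forwarding: a multicast packet is accepted only if it
    arrived on the interface that the unicast routing table would use to
    reach the packet's *source*. Anything else is an RPF failure and the
    packet is dropped, which is what breaks forwarding when multicast is
    not enabled on the correct upstream interface.

    unicast_routes: dict mapping a source address (simplified from a
    prefix lookup) to the outgoing interface toward it.
    """
    expected_iface = unicast_routes.get(source)
    return expected_iface is not None and expected_iface == packet_in_iface
```

The last issue in the list follows directly from this rule: if unicast routing changes, the expected upstream interface changes with it, and previously healthy tree branches can suddenly fail the check.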
Monitoring Multicast Networks

Traditional network and traffic monitoring tools are insufficient for monitoring multicast routing. Information from SNMP MIBs provides no visibility into multicast logical constructs and these tools often lack information about the underlying IGP and BGP control plane.

Enterprises, content and network service providers who operate multicast networks require specialized tools that collect multicast routing topology (groups, trees, sources and receivers), establish baselines, detect changes and correlate them with underlying IGP and BGP routing changes. With this information, when a link goes down, they can be alerted that all multicast trees using the link will have to re-converge.

Monitoring tools must also track multicast-specific issues, such as RP mismatches, interfaces without PIM enabled, PIM interfaces with no router connection, groups with no receivers, etc., and issue alerts so that preventive actions can be taken quickly.

We hope to have highlighted some of the complexities of managing and monitoring multicast networks. Have you faced any of these challenges in your multicast network?

If you have, or if you would simply like to learn how to monitor and manage multicast better, download our whitepaper, which covers all this and more in detail:

Understanding & Managing Multicast Routing

The post Complexities of Monitoring Multicast Networks appeared first on Packet Design.

