Enterprise infrastructure has become increasingly dependent on the cloud, which is naturally leading to growing concerns about data security.

While the cloud in many ways offers better security than the traditional on-premises data center, this should not obscure the fact that complex architectures tend to have more vulnerabilities than simple ones, and the cloud is nothing if not complex.

One of the key challenges is securing data as it navigates between the cloud and the user. Whether this journey runs across town or halfway around the world, data is likely to transition between multiple network providers and other handlers even if the entire wide area infrastructure is harnessed under a single WAN or SD-WAN solution. Not only does this introduce gaps in security that can be exploited, it also often leads to the deployment of multiple encryption and other security mechanisms, all of which act to hamper performance and diminish the kind of visibility the enterprise needs to manage traffic flows.

Full network protection

This is why many organizations are turning to network-level encryption for their entire cloud ecosystem. By bringing all data communications under a single solution, organizations are finding that they can quickly fulfill the requirements of emerging regulatory regimes like GDPR and PCI DSS across their distributed data footprints, while at the same time cutting down on the management headaches of having to oversee countless service- or provider-based solutions.

According to Markets and Markets, the network encryption market is set to expand from $2.9 billion today to $4.6 billion by 2023, a compound annual growth rate of 9.8%. This is, in fact, one of the few areas of the IT stack that is expected to be dominated by hardware rather than software in the coming years. With a solid hardware foundation, network security benefits from top performance in high-speed, low-latency environments, and a single platform can provide robust security across all endpoints, networks and applications.
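
The arithmetic behind those figures checks out, assuming the $2.9 billion baseline refers to 2018 and five compounding years to 2023 (both assumptions taken from the quoted report summary). A quick sketch to verify:

```python
# Sanity check of the quoted market growth: $2.9B compounding at 9.8% for five years.
start_value = 2.9   # market size in $ billions, assumed 2018 baseline
cagr = 0.098        # 9.8% compound annual growth rate
years = 5           # 2018 -> 2023

end_value = start_value * (1 + cagr) ** years
print(f"Projected 2023 market size: ${end_value:.2f}B")   # ~ $4.63B
```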

This trend can be seen in companies like Colt Technology Services, which recently began providing an Ethernet Line Encryption service using the ADVA FSP 150 appliance running the ConnectGuard security system. In this way, the company provides end-to-end data protection on low-latency infrastructure up to 10Gbit/s, all while meeting the stringent regulatory environments of Europe, North America and Asia.

Encryption in a box

The FSP 150 is a Layer 2/3 service demarcation solution that provides forwarding, filtering and other advanced services for IP traffic. When combined with the ConnectGuard™ system, the device provides L2 MACsec encryption at line rate on a per-EVC basis, as well as robust AES encryption and a key distribution mechanism based on the IEEE 802.1X standard coupled with a dynamic key exchange process using the Diffie-Hellman algorithm.
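
The dynamic key exchange mentioned above rests on the same principle as any Diffie-Hellman negotiation: both endpoints derive an identical secret without that secret ever crossing the network. Below is a minimal sketch with toy numbers; real deployments use large standardized groups or elliptic-curve variants, and this illustrates the algorithm only, not the FSP 150's implementation.

```python
# Toy Diffie-Hellman exchange: both sides derive the same secret without transmitting it.
# Illustrative values only; production key exchange uses 2048-bit-plus standardized groups.
import secrets

p = 4294967291   # a small prime, for illustration only
g = 5            # public generator

a = secrets.randbelow(p - 2) + 2   # endpoint A's private value
b = secrets.randbelow(p - 2) + 2   # endpoint B's private value

A = pow(g, a, p)   # A's public value, sent to B in the clear
B = pow(g, b, p)   # B's public value, sent to A in the clear

shared_a = pow(B, a, p)   # A computes the shared secret
shared_b = pow(A, b, p)   # B computes the same secret
assert shared_a == shared_b   # session keys are then derived from this value
```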

All of these tools can be implemented with only microseconds of added latency and virtually no impact on throughput, something that cannot be said of competing solutions like IPSec. 

Going forward, we can expect the vast majority of network traffic to be encrypted, which means vital services will slow to a crawl (or fail to emerge at all, as in the case of rising IoT infrastructure) unless there is a performance penalty-free way of protecting data on the network level.

The key challenge for enterprises migrating to the cloud is to implement network encryption before workloads become scattered across multiple providers and platforms each with their own solution. A distributed data architecture is already complicated enough without having to go back and reconfigure something as fundamental as data encryption – particularly when diminished, or even disrupted, service will have such detrimental consequences to the business model.

In last month’s column, “Artificial intelligence: I think therefore I am?,” I felt I only scratched the surface of what we understand to be artificial intelligence (AI) and, in this month’s post, I want to expand my thoughts a little further.

How to build technology for the future

So, last month, I suggested that what we understand today as “AI” is nothing more than clever programming and smart technology and, I dare say, that’s largely true, despite others suggesting otherwise. We don’t have thinking machines, since software engineers have programmed our technology to behave in a pre-determined manner, along with predefined behaviors and outcomes.

You may recall, over a bottle of red, I presented the philosophical conjecture of René Descartes and the Scottish philosopher George Campbell regarding the separation of the mind and body. I firmly believe that their “dualism” perspective is exactly what we need to achieve “full” AI or artificial general intelligence (AGI). With this in mind (no pun intended), if we look at the possible constitution of a robot, of some kind, we expect the mechanical gubbins to ultimately be controlled by a brain (or mind). I’m using the human physiological belief and the Cartesian view here as to how we might build such technology in the future, although I do still question this possibility.

How to create an intelligent entity

I even scoffed, last month, at existing technology, where the so-called walking robots and dogs don’t offer the fluidity that we humans have – it’s so seamless and effortless for us to lift ourselves up and walk, yet I do acknowledge the complexity in replicating or mimicking human physiology. I also suggested that playing chess, driving cars and so on, for me, isn’t what’s understood to be AI and that having a conversation with Alexa is often strained and incredibly frustrating because human language is so complex. We add a level of uncertainty that only humans can decipher and interpret correctly – well, sometimes, we do! In fact, I touched on this in a previous column, “Introducing the holographic virtual assistant,” in January 2018, which might indeed become a precursor to the full-on humanized robot.

So, I can only imagine the sheer ingenuity that’s needed to create something that’s mechanically, humanly perfect and, alas, as I mentioned previously, we are generations away from this becoming a reality. Now, this human mind thing – our brain! I see the brain and the mind as two separate things: First, the brain is an organ, which operates our complex nervous system; whereas the mind is, in essence, our consciousness; our unique ability to perceive, imagine, think, reason and so on. While we might understand, to some extent, its biological makeup, we have yet to truly understand how the mind exists!

Using terms to explain things away

Indeed, we might have structure in our thinking, akin to a machine; following rules and procedures with expected outcomes and whatnot, but how do we create, and where does that spontaneity come from? You know, that Eureka moment of solving complex problems, creating music or writing a bestselling book. The minds of Einstein, Hawking and the like created their own legacies, which will be remembered forever. We also have the wired brain of Ted Bundy, for example, which continues to baffle both psychologists and neurologists, since I’m sure they wonder what kind of synaptic schematic made him do the things he did.

The human mind is a complex entity and, of course, there are some who suggest that the mind is in fact the human soul, but often we can all be guilty of providing such conjecture when we clearly don’t understand something and use such terms to explain things away. I still don’t understand what we really want from AI and I’m confident that, as Stephen Hawking said in his interview, “The development of full artificial intelligence could spell the end of the human race.”

Ending the human race

We build robots and similar mechanical devices to take away the human-mundaneness from factory lines, where such technology provides better accuracy and reliability; we develop smart sensors to let us know when might be the best time to plant our seeds and, likewise, we have sensors that predict weather, which allow us to warn the public in advance about heatwaves or floods. We’ve also developed medical devices that can detect breast cancer in patients with a greater success rate than a practitioner examining the same images (Fighting breast cancer with AI early detection, Dr. Sarah-Jayne Gratton). Industry mistakenly calls this AI, yet I call it clever programming and smart technology!

I do believe it’s possible to use Cartesian “dualism,” for example, as a template of sorts, where on one hand you have the mechanically engineered humanoid, along with synthetics and whatnot, and then you introduce or “upload” the mind (Transcendence-esque). I’m not entirely confident that this can be done at this time; however, from a theoretical standpoint, it sounds plausible, since there’s existing research surrounding what’s called transhumanism or humanity+ (H+)! It’s a philosophy that promotes the transformation of the human condition with technologies which, in turn, aim to enhance both human intellect and physiology.

Until next time …

In all honesty, I’m not sure if developing thinking humanoids can ever be achieved in the way that’s portrayed in so many movies. The portrayal of an independent-thinking machine or robot that has the autonomy to make decisions for itself – I actually don’t think it’s possible! After all, do we want to rob humanity of its uniqueness? Perhaps, we should forego our mission and just revel in the science fiction fantasy of movies since, as I again echo from last month’s column, “Be careful what you wish for!”

I offer one last throw-away thought: If we ever conceivably create an independent thinking machine that can not only assert itself but is also capable of questioning its own existence, is it then still AI?

So, this is where your “AI psychologist” Dr. G signs off.

The enterprise has long struggled with the intricacies of connecting branch offices and other remote sites to centralized data facilities. Not only does navigating the WAN require a host of third-party providers, but the terminal equipment at individual sites is often complex, expensive and difficult to maintain.

But with the advent of the cloud as a means to support production workloads, not to mention even more disparate resources on the IoT edge, the need to simplify wide area networking is no longer just a helpful development but a competitive necessity.

Networks gone virtual

This is the primary driver behind SDN, NFV and other forms of virtual networking on the WAN. By converting to an abstract, software-defined architecture, long-haul networking becomes much more flexible and much easier to troubleshoot and manage. But even these advanced technologies are only marginally helpful if networked sites are still relying on complex, platform-dependent hardware. In the modern era, nothing less than a low-cost, universal networking device is warranted.

Enter the new field of universal customer premises equipment (uCPE). This new class of network device is described as the jack-of-all-trades for service providers and the enterprise. It is a converged solution that puts compute, storage and networking on a commodity, off-the-shelf server, allowing it to provide NFV, SD-WAN, virtual WAN and a host of other services, including security, to virtually any site on an extended network.
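
To make the idea concrete, a uCPE node is essentially a small commodity server whose job is to host an ordered chain of virtual network functions within its compute and memory budget. Here is a minimal sketch of how such a site might be modeled; the class names, functions and resource figures are hypothetical, not any vendor’s API.

```python
# Illustrative model of a uCPE site hosting a chain of VNFs on one commodity server.
# Class names and resource figures are hypothetical, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class UCPESite:
    location: str
    vcpus: int = 8
    ram_gb: int = 32
    service_chain: list = field(default_factory=list)

    def deploy(self, vnf: VNF) -> None:
        used_cpu = sum(v.vcpus for v in self.service_chain)
        used_ram = sum(v.ram_gb for v in self.service_chain)
        if used_cpu + vnf.vcpus > self.vcpus or used_ram + vnf.ram_gb > self.ram_gb:
            raise RuntimeError(f"{self.location}: not enough headroom for {vnf.name}")
        self.service_chain.append(vnf)   # traffic traverses the VNFs in this order

branch = UCPESite("branch-office-berlin")
for vnf in (VNF("sd-wan", 2, 4), VNF("firewall", 2, 4), VNF("wan-optimizer", 1, 2)):
    branch.deploy(vnf)
print([v.name for v in branch.service_chain])   # ['sd-wan', 'firewall', 'wan-optimizer']
```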

According to IHS Markit, the uCPE market is expected to top $1 billion in 2022, which is remarkable considering that it commanded barely $7.7 million as late as 2017. Its advantages over existing network devices are compelling:

  • It streamlines network architecture by incorporating functions that were previously handled by dedicated systems, including application delivery, resource optimization and firewall management.
  • It cuts capital costs by replacing proprietary equipment with generic hardware.
  • It simplifies network management by allowing local or remote provisioning, monitoring, service upgrades and other tasks.
  • It supports multiple VPNs on a single, streamlined hardware footprint.

While this is likely to improve networking performance in the data center, it is expected to really shine on the network edge, says Nemertes Research’s John Burke. The ability to deploy services faster and at locations that are closer to end users should unleash a wave of innovation influencing everything from recreation and ecommerce to healthcare and finance. This should also streamline network infrastructure across the board because organizations will be able to field a single, integrated virtual network architecture from the data center right to the customer’s home.

Tomorrow’s technology today

We can already see this environment taking shape. ADVA’s recent collaboration with Dell EMC to unite the Ensemble Connector with the Virtual Edge platform delivers a ready-made uCPE solution that employs a consolidated management stack and a full suite of more than 50 VNF configurations. Already, this system is being used by Verizon to push a number of software-based services to its customers.

As the data universe continues to spread across multiple platforms and geographic locations, networking will become the determining factor in maintaining performance and delivering robust user experiences. Today’s patchwork of switches, routers and individualized management environments was designed to accommodate the data loads of yesteryear. For the future digital services market, networks must become more streamlined, interoperable and easier to deploy and maintain.

If you haven’t yet encountered your first uCPE deployment, you will soon.

As networks evolve and grow, many customers are looking to gain key insights and actionable events from the data those networks produce. Currently, the market is mostly using telemetry data and basic machine learning (ML) to analyze that data. Leading-edge cloud providers have gone a step further and are using artificial intelligence (AI) to better predict and adapt their networks. The advantage is clear; AI allows networks to operate more efficiently and with higher utilization. I expect to see AI rapidly moving into enterprise and service provider (SP) networks as operators adopt it to enhance their networks and the user experience.

SP networks are about to embrace AI in a big way. The number of devices, led by the internet of things (IoT) and edge computing, is about to explode, and applying AI to those networks will allow human operators to scale. Over the next several years, the number of IoT devices on the network will surpass the number of human-connected devices (smartphones, tablets, etc.). There will be too much data from too many unique sources for humans to be involved much in operating networks going forward. This is especially true with devices that aren’t operating correctly (either hijacked and used for nefarious purposes or simply broken).

Enhancing reliability

As an example, several vendors today are working with SPs to analyze data flows and system configurations. AI and ML are allowing SPs to see potential issues before pushing new configurations or before an outage occurs. This type of technology is helping make the SP network more robust and reliable, as well as enabling automation. As AI and ML become more robust, SPs can automatically implement the changes.

In the long term, AI will automatically change the network, isolate devices and flows, and heal the network without human involvement. This will allow operators to apply human resources more effectively and reduce the cost of running their networks. Imagine a scenario where AI could predict that user experience would degrade in a certain region because of a new latency-sensitive app like augmented reality, and the SP was able to spin up more edge compute resources in that area and reroute the content so the user never saw that degraded experience. This is just the tip of the iceberg of what AI can do, and it will only get more effective with time as it gets more data to learn and train on.
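
The prediction step usually starts simply: flag telemetry that drifts away from its recent baseline before the drift becomes an outage. A minimal sketch of that idea follows – a rolling z-score over latency samples – purely illustrative of the principle, not any operator’s or vendor’s system.

```python
# Minimal anomaly flagging on link-latency telemetry using a rolling z-score.
# Illustrative only; production systems use far richer models and many more signals.
from collections import deque
from statistics import mean, stdev

def detect(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples that deviate strongly from the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value   # candidate event to investigate or auto-remediate
        history.append(value)

latency_ms = [10.1, 9.8, 10.3] * 10 + [34.0]   # steady baseline, then a sudden spike
print(list(detect(latency_ms)))                # [(30, 34.0)]
```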

AI opportunities

In the short term, AI will improve operator efficiency and network performance with a human remaining in the loop to verify or approve the network changes. But operators will quickly learn and rely less on the human.

SPs embracing AI in their networks is an opportunity for the supply chain and will lead to a better consumer experience.

The enterprise is quickly expanding its cloud environment to a hybrid multi-cloud architecture. Where once a single provider enabled little more than bulk storage and processing offload, today’s cloud encompasses numerous providers offering services as diverse as disaster recovery, advanced analytics and quantum computing.

According to Gartner, more than 80% of enterprises are already working with at least two providers, and by 2023 the top 10 cloud providers will control only half of the total marketplace as users seek to avoid vendor lock-in and craft customized solutions for individual use cases.

But this is leading to a number of challenges on the wide area network. Not only must today’s WAN provide the same flexibility as the traditional LAN, it must adopt new forms of security to accommodate the multiple platforms and application loads prevalent in this diverse new environment.

Time for a security update

At the moment, the two predominant means of securing privacy and data integrity across multiple clouds are IPSec encryption and application-level security solutions like SSL. However, neither of these is adequate for today’s production workloads.

While IPSec does have the advantages of user invisibility, application independence and full traffic monitoring, it also suffers from high CPU overhead, lack of full support among software developers and the fact that some of its algorithms have already been compromised. Meanwhile, SSL is very effective at maintaining encrypted communications across the internet but its elaborate data exchange mechanism to establish and authenticate connections can severely hamper performance in high-scale environments. 
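
That connection-setup cost is easy to observe for yourself: the TCP connection takes one round trip, and the TLS negotiation on top of it adds more before any application data can flow. The rough measurement sketch below is illustrative only; the figures it prints depend heavily on network distance, TLS version and session resumption.

```python
# Rough comparison of TCP connect time vs. the additional TLS handshake time.
# Results vary with distance, TLS version and resumption; illustrative only.
import socket
import ssl
import time

host = "example.com"
ctx = ssl.create_default_context()

t0 = time.perf_counter()
sock = socket.create_connection((host, 443), timeout=5)
t1 = time.perf_counter()                             # TCP three-way handshake done

tls = ctx.wrap_socket(sock, server_hostname=host)    # performs the TLS handshake
t2 = time.perf_counter()
tls.close()

print(f"TCP connect:   {(t1 - t0) * 1000:.1f} ms")
print(f"TLS handshake: {(t2 - t1) * 1000:.1f} ms extra")
```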

What is needed is a highly flexible solution that maintains robust security and protection in high-scale, highly complex environments. As well, it should be easy to implement at a low price point and should lend itself to the kinds of automated traffic management and network provisioning operations that are coming to define the modern data universe.

Guarding connections

One approach that ADVA has adopted for the multi-cloud adaptation of its ConnectGuard™ platform is to utilize low-cost universal customer premises equipment (uCPE) solutions to push encrypted content to remote workers and branch offices. In this way, the enterprise can leverage both a hosted cloud deployment and uCPE clients to encrypt traffic not just at Layers 2 and 3, but on the Layer 4 host-to-host transport as well.

As a result, organizations maintain robust security between switches, routers and terminal equipment (Layers 2 and 3) while also protecting segmented virtual networks, which would otherwise be vulnerable to attack should hackers gain control of the access control list at the networking layer.
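
Whatever the layer, the core operation is the same: wrap the payload in an authenticated cipher before it leaves the site, so it can neither be read nor silently modified in transit. Here is a minimal sketch using AES-256-GCM via the widely used `cryptography` package; it illustrates the primitive only, not ConnectGuard™’s actual implementation or key handling.

```python
# Authenticated encryption of an arbitrary payload with AES-256-GCM.
# Illustrates the primitive only; not ConnectGuard's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice derived via key exchange
aesgcm = AESGCM(key)

frame = b"payload leaving the branch site"
nonce = os.urandom(12)                      # must never repeat for the same key
header = b"flow-id:42"                      # authenticated but sent in the clear

ciphertext = aesgcm.encrypt(nonce, frame, header)
plaintext = aesgcm.decrypt(nonce, ciphertext, header)   # raises if tampered with
assert plaintext == frame
```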

What’s more, as a full software solution, ConnectGuard™ can provide hybrid and multi-cloud protection on any COTS server deployment under a simple subscription or perpetual license basis. And as a transport independent and FIPS-compliant solution, it provides robust end-to-end encryption at a far lower price point than current appliance-based offerings. 

The enterprise can even take advantage of the system’s NFV capabilities to simplify day-to-day operations, integrate multiple cloud and other service-level platforms and streamline hardware footprints in centralized and remote locations. Using zero-touch provisioning, the system can be set up to automatically optimize network environments for individual applications based on the type of resources available at a given site – all within minutes. 

In this age of digital services and high customer demand for personalization and robust performance, data has become the most valuable commodity in the enterprise business model. But collecting and storing this data is of little value unless it can be shared and utilized properly.

A multi-cloud environment offers the best way to ensure peak performance and solid data protection, but only if the network between these disparate sites is outfitted with top-flight security. And as with any data infrastructure, the best time to ensure your security is adequate is before your data has been compromised, not after.

The data center market is about to go through a significant upgrade cycle in technology that has implications for how data center buildout designs evolve. Several new technologies are all going to hit the market at the same time. The industry is moving towards 7nm technology across most of the semiconductor space. At the same time, chiplets are moving front and center as an important building block, and 112Gbit/s SerDes, custom-built artificial intelligence (AI) chips and PCIe Gen4 are all about to be adopted.

All these technologies are a catalyst for more traffic in the data center and higher-speed networks within the data center and across regions. We’ve seen robust merger activity in the semiconductor space as vendors position themselves for the new market opportunities these technologies enable. NVIDIA is purchasing Mellanox to gain access to a portfolio of high-speed networking and server accelerators to push their AI strategy forward. Xilinx is purchasing Solarflare to accelerate smart and programmable NICs. Intel just announced its intent to purchase Barefoot, increasing its presence in networking silicon.

Booming ecosystem

AI will continue to see massive investment over the next several years and will move from the mostly consumer technology it is today to a mainstream technology for enterprises. Vendors are preparing themselves for this shift by making significant investments internally or through acquisitions as enterprises, both directly and through cloud providers, look to make better use of the data they collect. Most AI training is likely to occur in cloud providers, but many customers are also expected to experiment with edge-based AI for both training and inference. Colocation providers will be a prime location for this type of edge computing, as it lets developers build on a cloud provider and then push workloads toward the edge. Having colocation providers involved in the edge is another tool and opportunity as customers and providers define and evolve their edge strategies.

During the next 12 to 18 months, cloud providers will begin deploying these next-generation technologies in significant volumes, which will require networks to grow significantly to support the increased traffic that AI generates. I’d expect to see a significant increase in the traffic each server is able to generate and, as 400Gbit/s becomes more widely available, an increase in the amount of equipment getting deployed for data center interconnect (DCI). DCI adoption will increase even more as 112Gbit/s SerDes enters the market; this will allow the market to begin adopting 800Gbit/s.

Software-defined wide area networking (SD-WAN) is the host application driving transformation at the edge of the network. Enterprises and telcos are replacing closed SD-WAN and other appliances with virtualized software implementations running on universal customer premises equipment (uCPE), which often takes the form of standard servers from leading manufacturers.

The question now facing these enterprises and telcos is: What’s the best way to implement virtualized SD-WAN? And how do you best combine it with other applications such as firewalls and internet of things (IoT)?

Starting with bare metal is a dead end

Many initial deployments of SD-WAN were implemented as a software virtual network function (VNF) running directly on the uCPE server. This approach is known as bare metal, because there is no virtualization layer. Telcos adopted this approach because it was simpler than a hosted approach. Some SD-WAN suppliers assert that they can also host other VNFs. Even so, their portfolio of VNFs is small and doesn’t include other SD-WAN suppliers. 

In taking this path, users lost many of the benefits of uCPE. But we have seen a shift in the market. Now, operators who initially deployed SD-WAN on bare metal are moving to an open hosting platform. We’re also seeing demand from enterprises for a cloud-centric or hosted model.

Benefits of uCPE and a virtualization layer

uCPE by itself helps in breaking open the closed appliance model. But, by combining uCPE with an open virtualized hosting layer (as shown below), users can get all of the benefits of the cloud:

  • A true multi-vendor system with separate suppliers of servers, hosting software, and VNFs. This approach enables selection of best-of-breed at each layer, eliminating lock-in and powering innovation. 
  • Separation of hosting from network functions enables dynamic service deployment and eliminates the vendor lock-in seen when VNFs also provide hosting.
  • De-risking SD-WAN selection. There will continue to be consolidation in the SD-WAN space, with some suppliers going away. Separating the SD-WAN function from the hosting layer provides a recovery path for replacing SD-WAN VNFs due to technical or commercial reasons. 
  • Optimization at each layer. An open hosting layer enables innovation to occur separately at each layer. You can add a new server with lower cost or higher performance, while keeping the VNFs unchanged. Likewise, VNFs can be combined in a service chain to deliver innovative features and to meet customer requirements for particular VNF suppliers.
  • Networking and operational features. An optimized hosting layer goes beyond just providing a home for VNFs. It also enables advanced networking and operational features, such as zero-touch provisioning, advanced networking, fault and alarm management, and security.
  • Support for a wide range of COTS servers. A dedicated hosting layer will support a much broader range of hardware platforms and features than will an SD-WAN VNF that also provides hosting.

Result: open, neutral and cloud-centric hosting

With an open hosting layer, users of uCPE can get all of the benefits they expect. They can use best-of-breed components to provide features and reduce risk. They can build dynamic services, changing them as needed without changing the network. And with the large number of SD-WAN suppliers in the market, they can reduce the risk of picking the wrong one. Leading telcos like Verizon, Colt and others are taking this path for SD-WAN. Shouldn’t you?

It’s no surprise to see disaggregated optical networks being widely adopted by data center providers.  It’s an approach that fulfils their needs perfectly. Disaggregation triggers innovation, increases flexibility to mix and match best-of-breed networking components, avoids implementation lock-in and integrates into open source and commercial orchestration. What’s more, it accommodates the different lifecycles of various networking components. Line terminals, for example, usually follow a much shorter innovation cycle than optical line systems. 

It’s this key advantage that is now pushing disaggregation into the carrier space. Carriers can benefit from the latest and greatest technology without overhauling the entire infrastructure. They only need to replace the pieces that have undergone an innovation step. Typically that’s the line terminals, which have the shortest innovation cycle of about two to three years, driven by innovation in optical components, processors, new or enhanced modulation formats, components integration, etc.

Of course, carriers don’t usually build infrastructure from scratch. In most cases, they already have an installed base of various networking components. First of all, there’s the fiber, which defines parameters such as attenuation, dispersion and optical signal-to-noise ratio, and as such potentially limits the maximum reach. Being the asset with the longest lifetime, fiber is not supposed to be replaced. A “bad fiber” can significantly impact reach.

With an average lifetime of about three to five years in the DCI space, and even more in a telecom environment, optical line systems remain in the network about twice as long as the terminals. Line systems deployed some years ago might still use fixed grids of, for example, 50GHz, which can potentially limit the channel baud rate.

Coherent channel bit rates can be realized by adjusting the spectrum and/or the modulation format, depending on the available spectrum and distance. The picture below shows the options for 300Gbit/s.

Increasing the modulation order (higher QAM values) goes hand-in-hand with an increased noise penalty, limiting the distance. Higher baud rates might not match existing filters.

Usually, these channel bit rates are achieved in discrete steps, as highlighted by the dots in the graph. There are significant performance steps between QAMs, which impacts reach, and significant bandwidth steps between QAMs, which impacts filtering. 

The figure below extends this explanation to multiple channel bit rates from 100Gbit/s to 600Gbit/s, displayed in steps of 50Gbit/s. Each dot represents an option to realize a given channel bit rate. The higher the channel rate you want to achieve, the more limited you become with distance and spectrum: While distance and spectrum might not impact lower bit rates, higher bit rates might not be able to run over longer distances and through filters using “thinner” grids.

With our FSP 3000 TeraFlex™, we’re softening these discrete steps, so the required channel bandwidth can be realized through a flexible combination of fractional QAM capabilities (which allow a flexible mix of two adjacent modulation formats, e.g., from 0% 16QAM and 100% 8QAM to 100% 16QAM and 0% 8QAM) and continuous bandwidth setting for close adaptation to filter passbands (so the signal fits into any spectrum).

This allows us to make use of each and every option along the channel rate curves above, which gives far more choices, especially in areas where the curves show a steeper shape.
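
A rough way to see the arithmetic: the channel bit rate is set by the baud rate, the bits carried per symbol (the modulation order) and dual-polarization transmission, and fractional QAM lets the bits-per-symbol figure sit anywhere between two integer modulation orders rather than jumping between them. The sketch below uses simplified, illustrative numbers (a fixed baud rate, with FEC and other overhead ignored) and is not a statement of actual TeraFlex™ specifications.

```python
# Illustrative fractional-QAM arithmetic: what mix of two adjacent modulation
# orders delivers a target channel rate at a fixed baud rate? Dual polarization
# assumed, FEC/overhead ignored - simplified numbers, not TeraFlex specs.
def fractional_qam(target_gbit_s: float, baud_gbd: float):
    bits_per_symbol = target_gbit_s / (2 * baud_gbd)   # factor 2 = dual polarization
    low = int(bits_per_symbol)                          # lower adjacent order, e.g. 3 bits -> 8QAM
    high = low + 1                                      # higher adjacent order, e.g. 4 bits -> 16QAM
    share_high = bits_per_symbol - low                  # fraction of symbols at the higher order
    return bits_per_symbol, low, high, share_high

bps, low, high, share = fractional_qam(target_gbit_s=300, baud_gbd=43)
print(f"{bps:.2f} bits/symbol -> {share:.0%} at {2**high}QAM, {1 - share:.0%} at {2**low}QAM")
# 3.49 bits/symbol -> 49% at 16QAM, 51% at 8QAM
```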

This translates into the following benefits: 

  • In a given environment, where the reach and filters are defined and cannot be changed, we can realize higher channel bit rates (as shown below). With discrete modulation steps and baud rates, a 300Gbit/s channel cannot be realized in the given environment, limited by reach and filter. With fractional QAM enabling ultra-flexible modulation, there are several options for a 300Gbit/s channel to run through the given environment. (E.g., a 50GHz grid and a given distance might only allow for 200Gbit/s channels if we use discrete steps. However, with ultra-flexible modulation we can use 300Gbit/s channels as well.) Of course, 300Gbit/s is just an example. The same principle applies for higher channel rates all the way up to 600Gbit/s.

This can create a significant increase in the total capacity transported over a given network, lowering the cost per Gbit mile in brownfield scenarios. A fixed grid structure is no longer a barrier to growth.

  • For a given channel bit rate we can increase the reach (by moving from left to right along the curves). So we’re able to bridge longer links, going from metro to regional to long-haul to even the distances required for submarine links – depending, of course, on the channel bit rate. 

In summary, our TeraFlex™ is not only a preferred option in greenfield and data center interconnect scenarios, it also enables existing deployments to be optimized in terms of capacity and distance, significantly reducing total cost of ownership.

Artificial Intelligence (AI) has been bandied around for the last few decades or so and, I’m sure, most of us are still wondering: What exactly is this?

What’s the point of humans?

So, will we be faced with an army of Terminator-like humanoids who will rain terror across the world? Nah, we’re already effortlessly doing that all by ourselves! Will we witness “I, Robot”-like humanoids attending to our homecare needs – you know, washing, ironing, cooking and the like? Nah, I already have a wife that’s dutifully doing that. Okay, stop – I know I shouldn’t go there – just one small footnote though: My wife, Sarah, isn’t remotely domesticated, although she does cook once in a while!

Seriously though, I do fear that’s where most, if not all, of us are with our thoughts regarding AI – we dread the thought that robots or humanoids will ultimately replace us all – if so, what’s the point of humans, right? Unfortunately, it’s an inescapable thought surrounding AI but, for me, it’s largely been taken out of context and, in true industry fashion, the concept has endured an unimaginable amount of hyperbole. Likewise, Hollywood is also partially responsible for portraying the many thinking machines and exaggerating their capabilities!

Be careful what you wish for …

So, with this in mind, let me initially quash the misplaced paranoia: Firstly, I echo Professor Stephen Hawking’s sentiment in a BBC interview, which took place in December 2014. He said, “The development of full artificial intelligence could spell the end of the human race.” However, there’s one prediction that I can confidently make right now and that is the Terminators and “I, Robots” are fortunately a long way off and I will certainly not witness such technology in my lifetime nor, dare I say, will my daughter’s children’s children! 

We often observe in various news stories or on TV the walking robots or robot dogs that awkwardly respond to their pre-set conditions; we have the ‘Sophia robot’ that apparently can hold a so-called conversation with you, which I find is akin to a frustrated dialogue with Amazon’s Alexa. In short, this technology is nowhere near what Hawking was alluding to when he described “full” AI, which is nowadays referred to as artificial general intelligence (AGI). AGI has been described as “the intelligence of a machine that could successfully perform any intellectual task that a human being can.”

For me, AGI does not refer to the ability to play chess, autonomously drive a car, see and/or identify objects, understand human speech or walk – I would consider this nothing more than the “AI effect.” In fact, I would argue that AGI (full AI) specifically refers to the ability for a machine to think, reason and assert itself. As such, I’m so, so confident that we are generations away from such a reality and Hawking’s statement in his interview is nothing more than “be careful what you wish for.”

It’s just clever programming and smart technology

So, I want to address what I think is a more accurate reality of what is touted today and arguably misunderstood to be AI – more so, machines don’t think!

With advances in technology and the promise of how it will ease what might be considered “the mundane,” software and hardware have both come to define what we call smart and intelligent. We have many engineers who develop software and hardware to address real-world issues. These engineers develop code that follows procedural rules and identifies patterns from known events, which in turn trigger predetermined actions or results. This is not necessarily intelligence. Moreover, it’s rather straightforward for a software engineer to devise and develop algorithms that look for behavioural patterns based on data that’s sourced from a sensor, for example – we call it predictive this and that – it’s just clever programming and smart technology.
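
That distinction is easy to illustrate: much of what gets labelled “predictive” is a hand-written rule evaluated against sensor readings. A toy sketch of the kind of logic described above, with hypothetical field names and thresholds:

```python
# Toy "predictive" rule of the kind described above: no learning, no reasoning,
# just a hand-written threshold check over sensor readings.
# Field names and thresholds are hypothetical.
def predict_failure(reading: dict) -> bool:
    return (
        reading["vibration_mm_s"] > 7.1
        or reading["bearing_temp_c"] > 95
        or reading["current_draw_a"] > 1.2 * reading["rated_current_a"]
    )

sample = {"vibration_mm_s": 8.4, "bearing_temp_c": 71,
          "current_draw_a": 10.2, "rated_current_a": 9.5}
print(predict_failure(sample))   # True - the machine "predicted" a failure
```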

To perform intellectual tasks that human beings can

It is this clever programming and smart technology that’s currently regarded as AI and perhaps this is largely a misnomer, since there’s no actual intelligent entity, as such, behind the software and hardware making the decisions. The decision-flow process has been programmed to take actions based on certain events. Society is, however, wrapped up in the sci-fi fantasy of machines that make their own decisions, just like humans. There are some of us, in fact, who may even yearn for this to materialise (be careful what you wish for – remember!).

You see, as a former software engineer (25 years or so), I developed some wonderful things and provided some clever software which, in turn, did some damn ingenious stuff and in true software engineering arrogance, I considered myself “God-like.” Behind my software was a “mindset” of sorts to resolve specific problems, which I achieved through a series of instructions, patterns and numerous algorithms, accompanied with predetermined decisions, outcomes and behaviours. Yet, placing my arrogance aside, I struggle as to how to instantiate or create and bestow an intelligent entity, a “mind,” if you will, within software and hardware that will allow a machine to think and make decisions for itself. 

One last terrifying thought …

I want to now enter the philosophical realms of Cartesianism, broaching both René Descartes and the Scottish philosopher George Campbell’s work surrounding their rationale regarding the separation of the mind and body. I touch upon this subject, as I think it draws similar parallels as to how we might conceive a “mind” within a machine. I’m now off to crack open a bottle of red wine before I continue …

Descartes’ “dualism” theory sought to investigate the connection between the mind and body and, ultimately, how they both interacted. His most notable quote was “Cogito, ergo sum” or “I think, therefore I am.” Descartes regarded this “as proof of the reality of one’s own mind”; the ability to doubt your existence was testimony of a thinking entity. I dare say such a philosophy can be considered a testbed or used as a benchmark for a machine to assert its own existence – I’m alive!

Now that’s a terrifying thought!

Until next time …

Realistically, artificial intelligence is nothing more than clever software programming and smart technology. The only intellect behind this technology belongs to the engineers who design, develop and build it; these engineers bestow some kind of intelligence, but this is still a far cry from the robotic stereotypes of Hollywood!

So, this is where your “AI psychologist” Dr. G signs off. 

ADVA and Intel recently announced that Ensemble Connector is now a software option for Intel® Select Solutions for uCPE. This means that users can buy an integrated and verified hardware-software solution that is ready to deploy for universal CPE (uCPE) applications. What does that mean for service providers, integrators, and enterprises? It means they can now buy a solution that they know will work in the real world.

Intel® Select Solutions for uCPE

Created as integrated and optimized data-centric products for a range of industries, Intel® Select Solutions include the universal customer premises equipment (uCPE) configuration. Intel® Select Solutions for uCPE, based on the Intel® Xeon® D processor, deliver agile service provisioning to the edge of service provider networks, and include models from Supermicro, Advantech, Lanner, Silicom, Premier, Nexcom, and others. These servers are optimized to deliver the performance needed for demanding uCPE applications.

The addition of Ensemble Connector to the Intel® Select Solution for uCPE brings a new, truly carrier-grade software layer to the picture.

Our award-winning Ensemble Connector is a highly scalable and performance-optimized virtualization platform for hosting multi-vendor VNFs. It enables pure-play virtualization: open software running on open commercial off-the-shelf servers, like those that are part of Intel® Select Solutions for uCPE. This eliminates vendor lock-in so that users are free to mix and match best-of-breed software and hardware. Pure-play virtualization means total choice – choice of software, hardware and where to locate functionality to best suit the business needs of the provider.

The power of Ensemble Connector – networking, operations and choice

Ensemble Connector brings the power of the cloud to the demanding world of uCPE with its key features:

Networking

  • Fast and light forwarding engine to support L2 or L3 VPN over anything
  • Modular datapath supports protocol extensibility
  • Platform security and datapath encryption

Operations

  • Zero touch provisioning and multi-WAN
  • All cloud deployment models
  • Fault, performance and troubleshooting
  • Upgrade management for uCPE

Choice

  • Largest ecosystem of VNFs
  • Largest set of supported compute platforms
  • Open and neutral platform – no lock-in!

A solution for today … and tomorrow

Service providers, integrators and enterprises are now deploying NFV and uCPE to offer basic managed services such as SD-WAN and firewall, with the added benefits of choice in selection of hardware and software. That’s great, but the opportunities for uCPE are much bigger than that. With the open and cloud-native Ensemble Connector, operators can move from today’s services to using Connector as a platform for innovation, enabling adoption of advanced 5G, IoT, and other applications. In addition, Ensemble Connector enables hosting of end-user applications in a micro-cloud model. Service providers can now deliver a combined networking and cloud package hosted on a single device, enabling a “store-in-a-box” model for small businesses and retail locations.

ADVA and Intel Select Solutions for uCPE – a powerful combination

Talk to us at ADVA to learn how ADVA Ensemble and Intel® Select Solutions for uCPE can help you realize the benefits of uCPE in your own applications.
