
OpenStack got its start at NASA, but open source communities have given it life. OpenStack’s architecture is made up of numerous open source projects steered by IT professionals around the world. Red Hat takes part in many of those projects and continues to be a top corporate contributor to the OpenStack community. So, which projects within the OpenStack community matter most to telecommunications service providers?

This Q&A with Thomas Nadeau, Red Hat’s technical director for NFV, explores the OpenStack community and telco-centric projects, the work the community is doing now, and how to get involved.

What does Red Hat’s “upstream first” philosophy mean for Red Hat’s customers?

This philosophy means that the code used to create our enterprise-class, fully supported downstream products, whether direct distributions of upstream projects or aggregates of multiple projects combined into a single supported product, is shared with and developed alongside the work of the full OpenStack community.

This is quite important because it puts all the cards on the table, so to speak, when it comes to agendas and features. There are many advantages to this, including, but not limited to, building and developing solutions from these distributions within a larger ecosystem that includes our partners and customers. It also means that, along with our upstream partners, we can build solutions that are bigger (and we believe better) than any single corporation could hope to build and maintain. Kubernetes, which we use to create our downstream product Red Hat OpenShift Container Platform, and Red Hat OpenStack Platform (OSP) are perfect examples. (Here’s a great description of Red Hat’s upstream philosophy.)

Why is the community so critical to OpenStack development and adoption, and how involved is Red Hat in the OpenStack community?

The upstream community is critical for many reasons. First, as I explained earlier, it would be very difficult for any one organization to develop and maintain such a massive project on its own. Not only would it need sizable engineering resources to work on the code, but also other supporting functions, such as sales, support, and testing.

Secondly, a community that builds projects like this often builds them in ways that suit a wider set of end customer use cases simply because of the diversity of the contributors’ downstream customers. This makes the project more broadly applicable and allows the project to enjoy a certain economy of scale. To illustrate this, imagine starting two OpenStack projects with similar outcome goals. Then consider the impact the combined teams would have and the overhead eliminated with a single, cohesive project.  

Finally, the code for these projects is often high quality. Attracting the best and brightest developers is tough on such a large project, but communities often solve this by drawing developers and contributors from a diverse pool of backgrounds and companies.

How does the community work? What’s the process for getting involved?

It is first important to understand that any open source project’s community has many components beyond the obvious software developers. There are project management, marketing, CI/CD, testing, and documentation components that are all critical to the success of these projects. Individuals can get involved in any of these areas. To get started, visit the OpenStack Community welcome page to find the sections that best suit your interests. From there you can dive into the subproject or area of your choice.

How are new projects created, and how does the work done in the projects progress into documentation and implementation?

New open source projects are created in different ways depending on the umbrella organization shepherding the project (i.e., The Apache Software Foundation, The Linux Foundation, etc.). That said, a project typically starts with the creation of a sub-organization that has some form of governance rules and a charter. Then, the primary project creators often make an initial code drop.

For OpenStack, projects are created in a few ways. One way is informal: as the code grows in functionality, the community reaches a critical mass around it. At that point, a blueprint for the project is created and proposed to the Technical Steering Committee (TSC) for formal inclusion. Alternatively, the TSC can create a project by invoking its blueprint as part of the formal planning process ahead of each major release. Those blueprints are vetted and voted on by the TSC.

There are probably more community projects important to telco than we have time to discuss, but what are some of the key projects today’s service providers (SPs) should be aware of? What problems are these projects aiming to solve for SPs and their customers?

We have significant investment and efforts in a wide range of open source networking projects that pertain to telecommunications service providers. They include our downstream products as well as some more forward-looking ones that do not have downstream products at this time. Many can be found in the Linux Foundation’s Networking (LFN) umbrella area.

The importance of any project to SPs really depends on each provider’s specific goals. I would say there is no single most important project, but some of the key projects we at Red Hat are working on are the OpenDaylight Project, Open vSwitch (OVS), FD.io, Open Platform for Network Functions Virtualization (OPNFV), Open Network Automation Platform (ONAP), Akraino Edge Stack, and the Data Plane Development Kit (DPDK). We also have significant efforts in the Cloud Native Computing Foundation (CNCF).

Why should SPs be involved in the OpenStack community?

SPs should definitely get involved because it is a way for them to be part of the community and drive what functions and features are added to projects. Since those projects form downstream products offered by Red Hat and others, SPs can influence the products they consume by bringing requirements to a project, helping with project testing, or directly contributing source code to the projects themselves.

The latest OpenStack release – Rocky – came out this past August. What can you tell us about the benefits and advantages it will bring SPs?

Rocky provides important operational enhancements for ease of use and adds better support for “bare metal” deployments, “remote” clusters, and containers. Look for further enhancements to these functions as we move to the next release.

Looking ahead, what are some of the most exciting developments underway in the OpenStack community?

I have to say that I am most excited about the support for container functions, as well as the networking enhancements to support them. As a project matures, new functionality for it often wanes. These days the rapid rate of technology evolution means that any pause in development that lasts too long could quickly make a platform irrelevant. The OpenStack community realized that support for containers was “the next big thing,” rapidly moved to add it and is continuing to enhance it.

As traditional cable, satellite, and terrestrial TV are challenged by over-the-top (OTT), internet protocol television (IPTV), and video-on-demand services — made even more challenging by higher-bandwidth, last-mile fiber, and 5G mobile network extensions — service providers are transforming their technology infrastructures to decrease lag time for viewers. Werner Gold, Red Hat’s emerging solutions evangelist for the telecommunications industry, discussed the issue with TelecomTV at 2018’s Mobile World Congress in Barcelona, Spain.

Increased OTT and IPTV traffic creates more congestion on aggregation networks, Gold explained. “We need to have more powerful, flexible content delivery networks that are also being virtualized, and new caches can be built at the network edge to host the content there to offload this kind of traffic from the aggregation networks,” he said. “But also the OTT traffic is competing with live linear TV… where we need to bring the content very fast to the customers as well.”

A new infrastructure is needed for a new world where the telecommunications and media industries are melding together. Moving functions like content delivery networks (CDNs) closer to customers goes hand-in-hand with increasing investments in network functions virtualization (NFV). More customers are also looking closely at containerizing workloads, which Gold said he expects to follow this infrastructure change.

That’s particularly true for live linear TV, which Gold said is one of the most challenging workloads as it requires taking “care of the time lags between, for instance, satellite deployment and internet deployment.” Such lags are a problem especially when it comes to sports events. As Gold explains it: No one wants to hear a neighbor watching the same game shout “goal!” when they are dealing with a 30-second lag. Multicast playout is key for live linear TV. The secret for removing delay is to have a “common infrastructure in between the origin servers and the consumers, and so you can use multicast for playout and this way you can get rid of the time lag,” he said.

For this evolving environment, Red Hat provides technology such as Red Hat OpenStack Platform for resource virtualization and Red Hat OpenShift Container Platform. “We are providing infrastructure for the new playout channels and delivery formats within the internet, and this was where we long had expertise,” Gold said.

Red Hat has recently expanded its Partner Program to support the specific needs of the telco industry. The Global Network Ecosystem Partner (NEP) initiative is designed to accelerate a vibrant ecosystem of technology companies whose solutions run on or integrate with Red Hat products. We have expanded and enhanced the core certification activities that are the foundation of Red Hat’s offerings for telco service providers, including our NFV and VNF certifications.

This expansion may help service providers re-architect and transform their networks from traditional, physical network functions to virtualized network functions (VNFs) using open source software and white-box servers. It may also help service providers explore network edge use cases, such as virtual central office (VCO), virtualized customer premise equipment (vCPE), head end virtualization in streaming services, and the delivery of other mobile services.

By migrating to VNFs and network edge use cases, providers should be able to dynamically allocate resources for optimal efficiency, deploy new services more quickly and closer to customers, and automate network deployments and operations. They look to gain simplified networks, reduced expenses, and increased speeds.

To hear more from Werner Gold on this topic, watch his full TelecomTV interviews on unlocking SP virtualization and getting content closer to customers. And take a look at this white paper, written for Red Hat by Roz Roseboro, a principal analyst with Heavy Reading: “Open Source: A Framework for Digital Services Modernization.”

As service providers (SPs) continue preparations for 5G, building a common platform with an agile process for delivering new services to support customer needs remains a key goal. That’s according to Ian Hood, Red Hat’s chief technologist for global service providers, as he explained in an interview with TelecomTV at last year’s Mobile World Congress in Barcelona, Spain.

In the interview, Ian points to multi-access edge computing (MEC) as one solution worth considering. This moves task processing to the region of the operator network that is closest to the user, thus improving application delivery and performance as well as reducing network congestion. Ian cites five key use cases for mobile services that network edge computing can deliver, demonstrating its value: virtual radio access network (vRAN), business services, the internet of things (IoT) everywhere, virtualized video, and enhanced consumer services.

Artificial intelligence (AI), machine learning (ML), and blockchain were also discussed, and the interview delves into the impact of these technologies on SPs. According to Ian, AI/ML are tools for SPs to manage data and solve service delivery hurdles, not just user-focused tools. For example, AI/ML can help SPs automate their infrastructures, manage IoT data from the network, and deliver new applications with AI and video imaging.

During the interview, Ian also shared his thoughts on how blockchain can benefit SPs, although he admits it’s a more complex answer than a snapshot view can provide. Ian explains that with the evolution of eSIMs, blockchain can enable SPs to bill back for international roaming in a distributed way, serve as a discovery protocol for network switching, and provide a way to bill for IoT services.

There’s a lot of possibility down the path to 5G, but challenges exist too. “The technologies available to us to virtualize the architecture haven’t quite lived up to the promise we expected yet … and similarly some of the applications customers want to use … need to be modified and modernized to actually deliver this new cloud-native thing,” he says.

But Ian encourages SPs to embrace these possibilities. As he puts it, “If you are not feeling some pain, you are not driving fast enough.”  

To hear more from Ian Hood on this topic, watch his full TelecomTV interview, “Not feeling pain? Drive faster!”

Organizations are implementing edge infrastructure to run important applications closer to where they are used, while still administering everything through a single interface from a central location. You can do this using Red Hat Virtualization.

Planning

The first step in implementing an edge site is determining the specifications of the site. In this post we’ll walk through the things you should consider when planning an edge site, and how to manage and scale the site as your needs evolve.

By specifications, we mean how you define your edge site:

  • What will be the size of the site?
  • Can the edge site have its own local storage?
  • Can you afford to waste CPU cycles on the edge site for any control processes? Or do you wish to allocate all of your CPU cycles to your workloads?
  • Do you wish to contain your edge resources (storage, network, CPU) only within the edge site?
  • How often will you scale your edge site?

Wouldn’t it be nice if scaling your edge site were just a matter of powering on and stacking physical servers, with the rest of the configuration (automatic discovery, IP assignment, loading of the operating system, and loading of your chosen hypervisor) handled automatically from a centralized place?

It sounds like it could be complicated and difficult to implement, maintain, and scale. The good news, however, is that you can achieve all of the above with the help of Red Hat Virtualization.

Red Hat Virtualization is an enterprise-grade virtualization platform built on Red Hat Enterprise Linux. Virtualization allows users to provision new virtual servers and workstations more easily and can provide more efficient use of physical server resources. With Red Hat Virtualization, you can manage your virtual infrastructure – including hosts, virtual machines, networks, storage, and users – from a centralized graphical user interface or RESTful API.
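
Since the Manager exposes a RESTful API, routine administration can also be scripted from a central location. Here is a minimal sketch using the oVirt/RHV Python SDK (ovirtsdk4); the engine URL and credentials are placeholders for your own environment:

    import ovirtsdk4 as sdk

    # Connect to the central Red Hat Virtualization Manager.
    connection = sdk.Connection(
        url='https://rhv-central.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redacted',
        ca_file='ca.pem',
    )

    # One call against the central Manager enumerates every host it
    # knows about, regardless of which edge site the host lives in.
    for host in connection.system_service().hosts_service().list():
        print(host.name, host.status)

    connection.close()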

This post details one way to implement edge infrastructure in an easier and more scalable fashion than managing systems individually. The implementation addresses most of the edge characteristics highlighted above, but there are many ways you can implement your edge infrastructure, based on the decisions you make within the framework detailed above.

Assumptions

The edge implementation discussed in this post assumes that:

  • This is one of several possible implementation options for edge use case deployment using Red Hat Virtualization.
  • There is shared storage available at the centralized region where Red Hat Virtualization Manager will be hosted.
  • There is local storage available at each of the edge sites, preferably in the form of Red Hat Gluster Storage; an NFS server or locally attached disk are also acceptable.
  • Storage networking can be implemented using IP or Fibre Channel and supports Network File System (NFS), Internet Small Computer System Interface (iSCSI), GlusterFS, Fibre Channel Protocol (FCP), or any POSIX-compliant networked filesystem.
  • Providing high availability for management requires two physical servers at the region.
  • Connectivity between the region and the edge sites must be adequate; if latency is too high or throughput too low, Red Hat Virtualization Manager may have trouble managing the nodes.
  • In order to keep the edge sites isolated from each other, we create a Red Hat Virtualization data center in each edge site so that resources like network and storage are contained within the site (see the sketch below).
  • This implementation assumes there is no requirement to live migrate VMs from one edge site to another. Hence, edge sites are assumed to be independent of each other.

In this implementation, we are defining “region” as a centralized place from where an administrator would like to monitor and maintain their edge sites. In other words, the “region” is a place where our Red Hat Virtualization Manager lives. This could be at the central office, or at a regional data center.
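
To illustrate the per-site isolation assumption above, here is a minimal sketch, again using the ovirtsdk4 SDK, of creating a dedicated data center and cluster for one edge site; the names and CPU type are illustrative placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhv-central.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redacted',
        ca_file='ca.pem',
    )

    # A dedicated data center per edge site keeps that site's storage
    # and networks contained within the site. local=True marks it as a
    # local-storage data center (the single-host edge case).
    dc = connection.system_service().data_centers_service().add(
        types.DataCenter(name='east-data-center', local=True)
    )

    # A cluster inside that data center groups the site's host(s).
    connection.system_service().clusters_service().add(
        types.Cluster(
            name='east-data-center',
            data_center=types.DataCenter(id=dc.id),
            cpu=types.Cpu(architecture=types.Architecture.X86_64,
                          type='Intel Haswell Family'),  # illustrative
        )
    )

    connection.close()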

Red Hat Virtualization Manager and edge design

So, what is Red Hat Virtualization Manager?

The Red Hat Virtualization Manager provides centralized management for a virtualized environment. It can be accessed through a number of different interfaces, including graphical interfaces and an Application Programming Interface (API), each facilitating access to the virtualized environment in a different manner.

Red Hat Virtualization Manager will reside in the central region and will be hosted on two physical hosts for high availability (HA) purposes. Recommended hardware requirements can be found in the Red Hat Virtualization Manager installation guide.

For the edge site, if the operator wishes to have only one host, then local storage (attached to the host) can be used. The caveat to using local storage is that live migration cannot occur, which also means there is no high availability. However, if the operator wishes to have more than one host in the edge site, then it is advisable to use external storage like Gluster. This will allow for HA and live migration within the edge site. Configuring external storage at the edge site is out of scope for this discussion.
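
As a sketch of the single-host, local-storage case (same SDK, with a placeholder host name and path), a local storage domain can be created on the edge host like this:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhv-central.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redacted',
        ca_file='ca.pem',
    )

    # A data domain backed by a directory on the edge host's locally
    # attached disk. LOCALFS ties the domain to that single host, which
    # is why live migration is not possible in this configuration.
    connection.system_service().storage_domains_service().add(
        types.StorageDomain(
            name='east-data-center-storage',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='edge-host-east-1'),  # placeholder
            storage=types.HostStorage(
                type=types.StorageType.LOCALFS,
                path='/data/images',                   # placeholder
            ),
        )
    )

    connection.close()

For a multi-host edge site, pointing the domain at Gluster or NFS storage instead would allow HA and live migration within the site.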

This screenshot shows the storage type as “Shared,” which is allocated to the region. Note it is marked as “Default.”

The next screenshot demonstrates storage used at an edge site. It is marked as “east-data-center” and storage type is marked as “Local.”

In this example we’re only demonstrating a single host at the edge site and using its local attached disk for storage.

Next, we have a screenshot of Red Hat Virtualization Manager (hosted engine) which is in the region. In this example we are using a single host in the region, which is also hosting Red Hat Virtualization Manager. For high availability purposes we recommend having two physical hosts and shared storage, but this demonstrates you can work with a single host. Hosted engine is in the form of a virtual machine running on host “rhv-central.example.com” and is attached to the data center named “Default.”

Finally, let’s talk about networking. As an administrator, you must confirm that there is enough network bandwidth available between the region and the edge sites to host the ovirtmgmt network. Note that ovirtmgmt can be used to carry network traffic like management, display, and/or migration (if applicable).

By default, one logical network is defined during the installation of the Red Hat Virtualization Manager: the ovirtmgmt management network.

This screenshot shows the ovirtmgmt network running via “Default” Data Center which is “region” in our case.

Note that in this example I’m not using bonding on the interfaces. It is, however, advisable to bond two interfaces together if possible. Bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface. They may provide greater transmission speed than a single network interface card, and because a bonded interface remains up unless all network interface cards in the bond fail, bonding also provides better fault tolerance.

One limitation is that the network interface cards that form a bonded network interface must be of the same make and model so that all network interface cards in the bond support the same options and modes.

There will be three types of networks flowing through each of the edge sites (a sketch of defining a logical network via the API follows this list):

  • The ovirtmgmt management network
  • Logical networks (as many as workload requirements dictate)
  • A storage network (IP-based)
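
Here is a sketch of defining one of those workload logical networks through the SDK; the network name and VLAN ID are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhv-central.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redacted',
        ca_file='ca.pem',
    )

    # Define a VLAN-tagged logical network for VM traffic, scoped to the
    # edge site's data center so it stays contained within that site.
    connection.system_service().networks_service().add(
        types.Network(
            name='east-workload-net',                       # placeholder
            data_center=types.DataCenter(name='east-data-center'),
            vlan=types.Vlan(id=100),                        # placeholder VLAN
            usages=[types.NetworkUsage.VM],                 # carries VM traffic
        )
    )

    connection.close()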

This screenshot captures the logical network and ovirtmgmt network flowing through the data center “east-data-center.”

You can see the networks are bound to different NICs. All of this may be customized to the workloads’ requirements.

Overall, this is what the implementation will look like:

And when an administrator spins up a VM from the centralized region, this is how it looks:

In this case, “east-vm1” is the name of the VM running in the data center “east-data-center” and the cluster “east-data-center.”
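
Spinning up that VM from the centralized region can likewise be scripted. A minimal sketch with the same SDK and placeholder names:

    import time

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhv-central.example.com/ovirt-engine/api',
        username='admin@internal',
        password='redacted',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()

    # Create the VM in the edge site's cluster from the built-in Blank
    # template. The Manager in the region drives the operation; the VM
    # itself runs on the edge host.
    vm = vms_service.add(
        types.Vm(
            name='east-vm1',
            cluster=types.Cluster(name='east-data-center'),
            template=types.Template(name='Blank'),
        )
    )

    # Wait for the new VM's disks to unlock before powering it on.
    vm_service = vms_service.vm_service(vm.id)
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(1)
    vm_service.start()

    connection.close()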

What is “the Edge”?

Mobile Edge Computing, or Multi-Access Edge Computing (MEC), is defined in many ways. In fact, the definition of MEC varies widely with the context in which you consider it, and sometimes with the audience. In this post we will explain the nomenclature and concepts that define telecommunications service providers’ network edge and its use in the delivery of mobile, business, and residential services. First, let’s take a look at the broader term, edge. What is it?

The edge, in the traditional usage, has referred to the point where a customer connects to the provider, the provider being the organization delivering a service. Largely, this was one of three situations:

  1. An enterprise customer connecting to a service provider’s (SP) edge for network services.
  2. A retail customer connecting to mobility services.
  3. A home user connecting to broadband services.

As we know, today we live in a world focused on cloud service providers (CSPs). CSPs are not primarily concerned with network services, but rather with providing a place to easily run various workloads at scale. This includes compute, storage, network, AI/ML, IoT, databases, etc. So what is the “edge” in this context?

This brings us to the latest usage of the term “edge.” In this context, the edge is focused on “where the workload is located.” Do you see what just happened? Two things changed. First, we made a pivot from network-centric services to workload-centric services.

Considering that a portion of many organizations’ workloads have migrated to CSPs, it’s not surprising that focus has shifted away from the network to application workloads. Second, the industry is exploring the geography, or proximity, of workloads to their users. There may be significant advantages when workloads are placed closer to the user or to where the data is generated (more examples to come).

Here is the newer definition of edge given the context above:

The edge is a new model of bringing technology resources, including compute and related infrastructure, closer to its end-consumer in order to increase performance and expand technical capabilities.

What is “MEC”?

Now that we’ve taken a closer look at the edge, let’s look at a special use case for service providers. Just when you thought all the cool leading-edge toys were only for the cloud companies, we’re right back to talking about service providers again.

Today, for example, when a 4G-connected device attaches to an SP’s mobile network, most of the mobile applications, or “the core” (also called the Evolved Packet Core, or EPC), are centrally located in large mobile data centers, and therefore far away from the end user. When an SP moves mobile workloads closer to the user to increase throughput and reduce latency, we end up with a new mobile architecture called MEC.

According to ETSI, MEC is defined as follows:

“MEC or Mobile Edge Computing provides an IT service environment and cloud-computing capabilities at the edge of the mobile network, within the Radio Access Network (RAN) and in close proximity to mobile subscribers.”

Depending on where the MEC nodes are placed, there are two flavors of MEC: the first is when the MEC node resides inside the service provider’s domain; the second is when it resides on the customer’s premises.

  1. Telco Mobile Core Network: This is where the MEC concept was originally established. This use case is known as Mobile Edge Computing or Multi-Access Edge Computing (MEC). It spans use cases from CS/Passive Optical Network (PON)/Mobile Telephone Switching Office (MTSO), virtual central offices (VCO), Video Hub Office (VHO), Soft Handover (SHO), and others.
  2. Telco Customer Premises: This concept points to an architectural design with a smart edge device providing the traditional terminations (i.e., NTU, NTD, ONT, STBs, etc.). At the same time, these new edge devices are capable of running value-added or third-party specialized services. Examples of these third-party specialized services include cloud-native workloads, IoT gateway services, or even extensions of CSP services to the customer’s premises.
What is “fog computing”?

Edge computing is considered to be a subset of another overarching concept: “fog computing.” Fog computing is the superset that defines the functional characteristics of edge computing; more importantly, it works to improve the efficiency of the data transported between the edge device and the cloud (private or public) for further processing, analysis, and long-term retention. Because of this, many terms and concepts across fog computing, edge computing, and MEC are used interchangeably.

We can visualize the relationship among these terms with this conceptual diagram.

What does edge computing look like for different use cases? Here are three example use cases for the edge in the enterprise.

  1. Enterprise/Consumer Edge: This concept points to an architectural design where the edge device provides the network termination functions while also being capable of delivering enterprise NFV, on-premises CSP functionality, IoT services, software-defined wide area network (SD-WAN) functions like universal Customer Premises Equipment (uCPE), or even the ability for the organization to deploy its remote workloads as cloud-native workloads. This includes domain-specific specialized features for retail, healthcare, financial services, and others.
  2. Oil & Gas/Mining Edge: The concept behind this edge design tends to be related to use cases combining IoT sensor data and actionable results. In these scenarios, some big data analytics and certain AI functionality need to happen in real time or near real time at the remote location. Examples of these remote locations are oil rigs, oil platforms, or equipment in remote mines. In these setups, network interruptions and edge isolation are expected, but the workloads must still be executed to determine immediate actions (see the sketch after this list). Eventually, some or all of the data processed by these edge nodes is synced back to the organization’s private or public cloud services.
  3. Aerospace and Defense Industry Edge: For these particular industries, the edge use cases span both the enterprise and telco use cases. A unique characteristic of these industries is that an edge site can be disconnected for months or years (think submarines or autonomous satellites) before being able to send its data back to the organization’s cloud services. Even at these extremes, these edge sites (which could be multi-node clusters) follow common edge computing characteristics.
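
To make the disconnected-operation pattern above concrete, here is a minimal, hypothetical sketch in plain Python (the endpoint, field names, and threshold are all invented for illustration) of an edge loop that acts on sensor data locally and syncs results back only when the uplink is available:

    import json
    import urllib.request
    from collections import deque

    CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # placeholder URL

    pending = deque()  # results awaiting upload while the site is isolated

    def handle_reading(reading):
        """Act locally in (near) real time, then queue the result for sync."""
        if reading["pressure_kpa"] > 950:  # illustrative threshold
            print("local action: shutting valve on", reading["sensor_id"])
        pending.append(reading)

    def sync_to_cloud():
        """Drain the queue when connectivity returns; keep data otherwise."""
        while pending:
            payload = json.dumps(pending[0]).encode()
            request = urllib.request.Request(
                CLOUD_ENDPOINT, data=payload,
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(request, timeout=5)
                pending.popleft()  # discard only after a successful upload
            except OSError:        # uplink still down; retry on the next cycle
                break

The key property is that local action never waits on the wide-area network, and queued data survives arbitrarily long isolation before being drained.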

As we can see, the functionality of an edge varies among use cases. Even so, edge computing has a common set of characteristics, attributes, and capabilities, and the use-case-specific functionality is developed on top of that common set. This is what makes it imperative to revisit assumptions developed in the past and consider modern, cloud-native-friendly designs.

To learn more about the network edge and Red Hat’s related products and solutions, please visit https://www.redhat.com/vco, https://www.redhat.com/nfv and https://www.redhat.com/telco.

As customer demands and competitive pressures have grown, telecommunications service providers (SPs) around the world are continuing their march towards digital transformation. This includes taking advantage of private, public, and hybrid clouds, along with technologies that allow SP customers’ business-critical applications to be portable across all the different clouds they manage. In a recent interview with TelecomTV, Ranga Rangachari, Red Hat’s VP & GM of storage and hyperconverged infrastructure, discussed the need for portability and scale-out storage in this hybrid cloud environment, and how Red Hat is helping SPs tackle these challenges with software-defined infrastructure (SDI).

According to Ranga, the hybrid cloud elevates the importance of SDI.  Telco SPs need to “provide their customers a unified way on how the applications can be portable across the private cloud as well as public clouds,” Ranga explained in the interview. “They are trying to move from an asset-heavy to asset-light model. To their customers, they need to provide time to value. How quickly can they get infrastructure up and running, and more importantly, does the vendor or partner they work with have a strong ecosystem of partners to help them have a complete solution?” These are key ingredients for SPs, according to Ranga.

Many SPs also have large volumes of rich media to deliver to their constituents, especially as the lines between telcos and cable companies blur, and this makes storage important. As Ranga noted, SDI is made up of software-defined compute, software-defined networking, and software-defined storage—which correlate with processing, moving, and storing data. “Having a true SDI is important for providers to be successful in this journey,” he said. Hyper-converged infrastructure allows SPs to have both compute and storage on a single platform, which can reduce costs and the overall footprint, and improve manageability.

Ranga pointed out just how important storage, and the management of that storage, has become to SPs. “[They] are not talking gigabytes or terabytes, they are talking about hundreds of petabytes or even exabytes of data,” he said.

“They need one single backbone of storage where it all gets stored and delivered…. It really calls out why it’s very important to have a scale-out storage. Because these CSPs are not talking about gigabytes or terabytes of storage, they’re talking about hundreds of terabytes or even petabytes of storage and the only way they can tame that storage beast is to have a true software-defined, but a scale-out, storage implementation. That’s what Red Hat brings to the table.”

We encourage you to watch Ranga’s complete interview on how the hybrid cloud elevates the importance of true software-defined infrastructure.

“This standardization means better interoperability between our systems and helps us avoid the incompatibility issues we tend to see with proprietary software.” — Pablo Recepter, CIO at BCCL

As industry and regulatory environments in Argentina have evolved, Banco Credicoop Cooperativo Limitado (BCCL) needed to upgrade its core banking platform as part of its digital transformation initiative. To take advantage of this new platform, BCCL, a large cooperative bank that handles 2.65 million customer accounts and up to 4 million transactions a day in 267 branches and 24 service centers across the country, also needed to upgrade its software to create a robust IT foundation for its core banking systems and Temenos software.

Keeping pace with change

According to Pablo Recepter, CIO at BCCL, the project’s goal was to implement the necessary technology to ensure the bank’s transactions could function reliably and efficiently in an increasingly complex industry, as well as meet changing regulatory requirements. Cost containment was another major consideration.   

After evaluating several solutions, BCCL selected open source solutions from Red Hat, a Temenos certified partner. Already, the open source Red Hat platform has met key imperatives of BCCL’s operations, enabling the bank to offer the kinds of innovative banking services it wants to deliver to its customers, operate effectively and safely in a complex regulatory environment, and scale to meet its higher processing requirements.

Improved reliability for customers

The project has delivered several key benefits for BCCL and its customers:

  • Greater system reliability—With Red Hat software in place, BCCL has been able to avoid unscheduled downtime for its critical banking systems. Without this level of reliability, the bank would be unable to offer core retail banking services, such as customer access to account details or fund transfers. “Red Hat JBoss EAP is robust, with no outages to date, and it saves time in daily maintenance,” said BCCL’s CTO Ricardo Soto in the case study. “One of the capabilities we found most useful is the online live updating feature that eliminates any downtime for updates.”
  • Open source innovation—The bank has made open source a priority and uses it whenever possible, largely because of all the documentation and community support. “The variety of options helps us to meet our requirements, including keeping costs low and complying with increasingly complex regulation, with software that uses standard formats and specifications,” Recepter noted. “This standardization means better interoperability between our systems and helps us avoid the incompatibility issues we tend to see with proprietary software.”  
  • Faster processing—Faster workflows were necessary to support the bank’s daily 4 million transactions. Red Hat Process Automation Manager seamlessly handles dozens of processes and thousands of daily procedures across the bank’s departments, from human resources to foreign trade, allowing employees to devote attention to value-creating work instead of rote processes.
Scaling for digital transformation

The bank was familiar with Red Hat Enterprise Linux from previous projects, and based on these positive experiences, selected it to support its Temenos T24 solution, as well as its Oracle 11g database. The bank deployed Red Hat JBoss Enterprise Application Platform (EAP) as an application server in cluster mode for its Temenos software, as well as Red Hat Process Automation Manager and Red Hat Enterprise Linux High-Availability Add-On for its network file system (NFS).

Implementation occurred in multiple phases, and even during the implementation, modifications were required to keep pace with regulatory changes. More than 25,000 test cases were undertaken during the testing process, and the final system was implemented in one location before being rolled out to all branches. As of August 2018 when the case study was completed, BCCL was running nearly 100 servers on Red Hat Enterprise Linux, 32 JBoss EAP cores, and 16 Process Automation Manager cores.

Working with Red Hat, Temenos, and other local partners on its core banking project has helped BCCL implement new capabilities to keep pace and scale with market demands while avoiding the cost and vendor lock-in of proprietary software and other vendor solutions. Learn more about how Red Hat works with its partners to create robust IT environments for the financial services industry in the complete BCCL case study.

With an aim to separate hype from reality on Day 4 at Sibos, I was on a mission to understand the existing and near-term applications of Artificial Intelligence (AI) in banking. With machine learning now described as “table stakes,” Richard Harris of Feedzai suggested during The Ethical Side of AI panel that the closest we can get to understanding the impact AI will have is to look at the internet twenty years ago: we knew the internet would change everything, but we didn’t know how. That, he said, describes the state of AI today.

Risk mitigation appears to be an active area for current AI application. For example, with the worldwide impact of money laundering estimated at 2% to 5% of global GDP (upwards of $2 trillion USD), Heike Riel of IBM (in the session Sensemaker: The interconnectedness of everything and advanced AI) cited a case where using AI/ML to help discover the undefined unknowns in the data cut false positives from 95% to 50%, along with a 27% reduction in manual effort. Using AI to help triage fraud for human interpretation and action is considered “narrow” AI: the application of AI to one particular task.

Broadening the scope of AI beyond a single task may be on the horizon. In the future, I can see a time when an AI becomes a new hire at the bank, employed to derive new, company-wide insights that improve processes, identify efficiencies, or enhance the customer experience.

As Ayesha Khanna (ADDO AI) mentioned in her breakfast keynote, we will need to be able to accept the insights from AI for this to be successful, and not dismiss them simply because we never thought of them before.

For now, AI use is openly described for risk mitigation and advisory applications, with a general expectation that this is only the beginning. And although AI begins with a use case, with a defined goal and data to learn from, ultimately the application of AI needs to create value. Currently, value is focused on generating efficiencies, improving operations, and cutting costs. But in the broader applications of “true AI,” we will likely need to reconsider how to measure value.

As Genevieve Bell put it during the closing plenary, we will need to question the metrics upon which we assess value, especially when considering autonomous applications of AI. Harking back to previous industrial revolutions that created entirely new disciplines (like computer science during the third industrial revolution), she pointed out that this fourth industrial revolution, powered by data, AI, sensors, and other advances, will likely give rise to entirely new disciplines as well.

Perhaps by then we’ll also have new metrics to ascribe value to AI, like measuring its transparency or trustworthiness. The human doesn’t leave the AI equation (for labelling data, for example), but we may need to redefine how we treat it: possibly more, as Bell termed it during her session, as a colleague than as an algorithm. To learn more about some of the people we are working with in AI, and their stories, don’t miss “The People behind OpenAI” from our Open Source Stories series.

An increasing number of industries seem to be dipping their toes into the blockchain arena. According to a recent report from PwC, 84 percent of respondents said their company is involved with blockchain in some capacity, whether that be testing new capabilities in a lab setting, building proofs of concept or running full-scale deployments. The World Bank and the United Nations have introduced blockchain initiatives, as has Red Hat customer, the Australian Securities Exchange (ASX), following the lead of companies in communications and media, retail, energy and utility, healthcare, and other industries.

Blockchain seems to be everywhere. However, according to the PwC survey, there’s still one industry that’s seen as leading the pack: financial services. That makes sense, given that blockchain started out as a way to record transactions for Bitcoin, a digital currency that operates independently of a central bank. And while financial services was the initial leading industry for blockchain applications, it’s clear that the technology has moved well beyond this, and financial services organizations are exploring blockchain in new ways.

Bank of America, the second-largest bank in the United States, alone had filed more than 50 patents related to blockchain technology as of June 2018. And a host of other industries are expected to follow suit, according to a recent report from CB Insights. We explored several of these use cases more deeply in an earlier blog post.

Despite the enthusiasm for blockchain, and the fact the discussion and implementation has moved beyond bitcoin, there are still hurdles to overcome in all industries, including financial services. According to the PwC report, there is still confusion about what blockchain is and what it can do for businesses.

“Blockchain’s role as a dual-pronged change agent — as a new form of infrastructure and as a new way to digitize assets through tokens, including cryptocurrency — is not easy to explain,” the report says. In addition, trust remains an issue. Is it reliable and secure enough? Will it dull application performance? Will it scale? A lack of standards and interoperability issues add to the list of concerns, according to the PwC report.

Red Hat’s collaborations with BlockApps and Clovyr are seeking to make building distributed ledger applications easier.

Join Red Hat’s telecommunications team at OpenStack Summit Berlin, November 13-15, to learn about virtual central office (VCO), open platform for network functions virtualization (OPNFV), smart OpenStack cloud, Kubernetes, Red Hat Ceph Storage, and more. With more than 200 sessions and a number of extra events there’s a lot happening this year! To give your summit schedule some focus, keep reading for a highlight of key sessions, lightning talks, and events we recommend.

On Tuesday, November 13, Nick Barcet, Red Hat’s director of product management for OpenStack, will deliver a keynote at 9:25 a.m., in level 1, hall A4-6. After that, join us at 11:00 a.m., for the Red Hat-sponsored track in hall 7, level 2, room 7.2a Helsinki 1, where we’ll share the following sessions:

Migration Without the Migraines: Enterprise Use Cases, presented by Ian Pilcher, principal product manager for OpenStack and virtualization at Red Hat, and Robert Rother, solution architect for EMEA at Trilio Data Inc. The session will provide an understanding of the migration journey to help attendees avoid vendor lock-in, explore hybrid and multi-cloud approaches, and understand the importance of a reliable backup and recovery solution. This session will be held on Tuesday, November 13 from 11:00 – 11:40 a.m.

Managing data analytics in a hybrid cloud, presented by Karan Singh, senior architect at Red Hat, and Daniel Gilfix, storage and HCI marketing manager at Red Hat. The presenters will review how infrastructure teams can now provide self-service, workload-isolated computer environments, either through Red Hat OpenStack Platform or by containerizing analytic tools on Red Hat OpenShift Container Platform, to achieve TCO and analytic performance improvements. This session will be held on Tuesday, November 13 from 11:50 a.m. – 12:30 p.m.

The Fast Forward Upgrades Show, presented by Darin Sorrentino, cloud solutions architect at Red Hat, Maria Angelica Bracho, principal product manager at Red Hat, and Chris Janiszewski, OpenStack architect at Red Hat. Upgrading a single OpenStack deployment through three releases of OpenStack with minimal-to-no impact to downtime takes planning and time. The presenters will cover the three overarching steps of the upgrade in a cooking show format and discuss the finer details around the decisions and experiences some of our customers went through when deciding to leverage the fast forward upgrade process in their own data center, including workload impact, operational flow, and how to recover if you run into issues. This session will be held on Tuesday, November 13 from 1:40 – 2:20 p.m.

Automated Migration of Workloads, presented by James Labocki, product owner for Red Hat Cloud Suite as part of Red Hat’s Integrated Solutions Business Unit. This session will provide technical details about Red Hat’s infrastructure migration solution that accelerates the adoption of OpenStack by migrating workloads from your traditional hypervisor. This session will be held on Tuesday, November 13 from 2:30 – 3:10 p.m.

Distributed HyperConvergence: Pushing Openstack and Ceph to the Edge, presented by Sean Cohen, Red Hat senior product manager, Sébastien Han, principal software engineer and storage architect at Red Hat, and Giulio Fidente, software engineer at Red Hat. Data gravity and the arrival of 5G networks are driving stateful and specialized workloads to the edge, driving infrastructure to iterate at the speed of software. This session will showcase OpenStack hyperconverged infrastructure (HCI) features enabling edge functionalities, including resource control with containers, OpenStack TripleO features for supporting distributed network functions virtualization (NFV), and more. This session will be held on Tuesday, November 13 from 3:20 – 4:00 p.m.

On Wednesday, November 14, join Red Hat’s global telco solutions manager, Hanen Garcia, for his presentation on VCO: Prepare for edge services by virtualizing your central office. VCO is a common platform from which service providers can deliver value-added services to mobile subscribers, enterprises, and residential customers as well as services that support over-the-top applications. Attendees will learn how VCO can deliver mobile enterprise and residential services using Red Hat software-defined infrastructure products and Red Hat’s broad telecommunications partner ecosystem. This session will be held on Wednesday, November 14 from 11:00 – 11:40 a.m. in hall 7, level 2, room 7.2a Helsinki 1.

In between sessions, make sure to visit us in Booth B1 in the Expo Hall for product demonstrations, to speak with Red Hat Distribution of OpenStack (RDO) and other community leaders about upstream projects, and to snag some of our giveaways including Red Hat OpenStack Platform anniversary t-shirts and RDO ducks (while supplies last!).

On Tuesday evening, November 13, join Red Hat and Trilio for a ‘90s party at The Matrix, one of the biggest clubs in Berlin. The party starts at 8:30 p.m. Make sure to RSVP as we expect to fill our capacity for the event.

We hope to see you at OpenStack Summit Berlin later this month.

Not able to attend OpenStack Summit Berlin? Watch this blog for a report on OpenStack Summit and recaps of Red Hat’s sessions at the event. 
