
Executives who lead in applying IT infrastructure are emphatic: business technology is key to their strategic vision wherever the economics support a compelling business case. These trailblazers are already investing, with bold expectations of gaining agility, driving innovation and capturing new revenue growth, process efficiencies or improved customer experiences.

The next wave of business technology, known as cognitive computing or artificial intelligence (AI), is already helping early adopters in North America by providing intelligent systems able to adapt and learn. Moreover, by expanding digital automation exponentially, cognitive technologies can deepen and broaden human capabilities.

Build your business case for AI

A successful AI strategy starts with a well-defined business challenge. Think about a specific use case that’s important to your organization. Begin by fully understanding the problem you want to solve and define the commercial benefits. Be ambitious–don’t fear the big challenges that need solving. Defining a goal and the desired business outcome for your AI application will help you build a business case for your project.

An essential component of all AI solutions is access to relevant data for your use case. The quality and amount of data available should therefore guide your selection process.

Harness your untapped data assets

Many businesses have realized that their unique data assets are key to competitive success, and now they want to utilize that data to work with AI. To scale, your IT team needs to adopt new tools and techniques that will allow them to get better results and deliver more insights to your business stakeholders.

Our solution is an automated machine learning (ML) platform that gives you an experienced “data scientist in a box,” enabling you to quickly create ML-driven products and services that transform your business and sharpen your competitive edge.

AI solutions for agile business demands

Increasing the business impact of AI by solving a wider variety of business problems is a key goal of every successful data science team. H2O Driverless AI is optimized to run with GPU acceleration and automates key portions of the data science process–including feature engineering and parameter tuning–to dramatically reduce the time needed to produce accurate models.

Model deployment remains one of the most common challenges for data scientists. Models can take weeks or even months to reach production and may be modified to work with production systems. H2O Driverless AI creates ultra-low latency automatic scoring pipelines for easy deployment. In addition, this solution supports training, testing and model versioning so that data science and business teams can work together to bring models from conception to production in minutes, not months.
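To make the model-versioning idea concrete, here is a toy Python sketch of a model registry that lets a team promote a new model version to production without touching scoring clients. This models the concept only; it is not Driverless AI's actual API, and the model names and scoring rules are invented for the example:

```python
class ModelRegistry:
    """Minimal registry: register versioned models, promote one to production."""

    def __init__(self):
        self._models = {}      # (name, version) -> callable scorer
        self._production = {}  # name -> version promoted to production

    def register(self, name, version, scorer):
        self._models[(name, version)] = scorer

    def promote(self, name, version):
        if (name, version) not in self._models:
            raise KeyError(f"{name} v{version} is not registered")
        self._production[name] = version

    def score(self, name, features):
        # Always score with whichever version is currently in production.
        return self._models[(name, self._production[name])](features)


# Two toy versions of a churn model; v2 also weighs customer tenure.
registry = ModelRegistry()
registry.register("churn", 1, lambda f: 0.8 if f["calls"] > 5 else 0.2)
registry.register("churn", 2,
                  lambda f: 0.9 if f["calls"] > 5 and f["tenure"] < 12 else 0.1)
registry.promote("churn", 2)

print(registry.score("churn", {"calls": 7, "tenure": 6}))  # 0.9
```

Because scoring always resolves through the production pointer, rolling a model forward (or back) is a one-line change rather than a redeployment.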

Accelerate your path to AI innovation

You too can be on a rapid path to business results from proven, practical AI applications. Act now and reach out to an IBM representative or Business Partner for your free consultation and get an immediate assessment for a quick-start AI project.

Call us at 1-866-883-8901 | Priority code: IBM Systems

The post Quick-start AI–your rapid path to business results appeared first on IBM IT Infrastructure Blog.



These days, the business world is changing rapidly. Data-driven enterprises need IT solutions that are evolving and improving as quickly as the companies themselves. Recent announcements from IBM Storage demonstrate that IBM is continuing to provide leading-edge data management and storage innovation across the entire information lifecycle, from creation to archive.

IBM Spectrum Scale, a member of the market-leading[1] IBM Spectrum Storage family of software-defined storage solutions, provides a superb example of this accelerating cadence of IBM innovation. IBM Spectrum Scale is a highly scalable, high-performance data and file management solution that can provide simplified data management and integrated information lifecycle tools capable of managing billions of files and petabytes of unstructured data… and unstructured data is growing explosively, in both volume and value.[2]

IBM Spectrum Scale is rapidly innovating to meet the needs of new workloads that manage and gain insight from all this data. Today it delivers a comprehensive set of file data management tools including advanced storage virtualization, multicloud capabilities, automated tiered storage management, and performance enhancements to help manage the many types of data being used today. IBM Spectrum Scale and IBM Cloud Object Storage integrate with the new IBM Spectrum Discover to make identifying data for analysis easier than ever. Recently, IBM announced IBM Spectrum Scale Data Access Edition to replace the previous IBM Spectrum Scale Standard Edition. The new edition provides a more straightforward and predictable cost structure, with pricing per terabyte instead of per socket, making it much simpler to understand and plan for future growth.
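For a flavor of what automated tiered storage management looks like in practice, here is a sketch of an information lifecycle management rule in the Spectrum Scale policy language. The pool names and the 30-day threshold are illustrative assumptions, not recommendations:

```
/* Move files not accessed for 30 days from the 'system' (flash) pool
   to a 'nearline' (disk) pool. Pool names here are examples only. */
RULE 'cool_down' MIGRATE
  FROM POOL 'system'
  TO POOL 'nearline'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```

A policy file like this would typically be evaluated against a file system with the `mmapplypolicy` command, letting the system tier billions of files without manual intervention.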

Because it’s a true software-defined storage solution, IBM Spectrum Scale can be deployed in countless ways, but one of the most powerful and flexible is in an IBM Elastic Storage Server (ESS) configuration. IBM has just announced the latest ESS version, which offers improved storage capacity and economy, upgrades without disruptions and even higher performance connectivity. New ESS GLxC models improve efficiency by using denser enclosures to provide up to 26 percent more capacity in 17 percent less rack space.[3] In addition, non-disruptive upgrade capability eases adding storage capacity to installed systems. And a new integrated 100Gbps Ethernet switch provides a high-performance industry standard interconnect alternative to InfiniBand.

IBM Storage Solutions

IBM Storage Solutions for IBM Cloud Private bring together the elements needed for building powerful and agile private cloud environments. Each combines IBM storage systems, IBM Cloud Private software and a pre-tested, easily deployed blueprint. IBM Storage Solutions for IBM Cloud Private are designed to help enterprises leverage the benefits and advantages of private cloud while helping reduce complexity, deployment errors and cost.

There’s a new IBM Storage Solution for IBM Cloud Private that enables the use of IBM block storage for object storage with the Minio object storage server. This solution supports application development and testing or other elastic workloads with IBM Cloud Private.

IBM Storage Solution for SAP is designed specifically to provide enhanced data protection for SAP HANA environments, leveraging the capabilities of IBM Storage, IBM Spectrum Protect and IBM Spectrum Copy Data Management. This solution is enhanced with SAP TDI certification of FlashSystem 9100, the new IBM Storwize V7000 and ESS.

IBM and SAP have been working closely together for many years. In fact, earlier this year SAP selected IBM for the prestigious SAP Global Partner of the Year – Infrastructure award, which recognizes the SAP partner “who has demonstrated a commitment to deliver innovative solutions that meet a multitude of customer deployment scenarios, including on-premises, cloud, and hybrid, as well as virtualized architectures.”[4]

IBM Storage Solutions for Healthcare are enhanced with IBM FlashSystem 9100 certification for electronic healthcare records with Epic software. A similar certification is in progress for Meditech software, planned to be available in 2019.[5]

To help organizations that are developing, monitoring and delivering autonomous vehicles, IBM Storage has outlined best practices for data management in our IBM Storage Solutions for ADAS and Autonomous Driving. The solution has been developed with multiple organizations tackling the unique data and performance requirements of developing the next generation of vehicles.

Supporting our clients’ adoption of blockchain, IBM plans to extend IBM Storage Solutions to include solutions supporting blockchain in 2019.[5]

For more complete information about IBM’s latest storage enhancements, read our blog post or view our webcast.

[1] IBM Press Release: IBM Ranked # 1 in Worldwide Software-Defined Storage Software Market, April 2017 (https://www-03.ibm.com/press/us/en/pressrelease/52189.wss)

[2] Forbes: The Big (Unstructured) Data Problem, June 2017 (https://www.forbes.com/sites/forbestechcouncil/2017/06/05/the-big-unstructured-data-problem/#4f589aea493a), IDC White Paper: Using Tape to Optimize Data Protection Costs and Mitigate the Risk of Ransomware for Data-Centric Organizations, April 2018 IDC #US43710518

[3] Model GL6C compared to GL6S; varies by model

[4] SAP Partners: SAP Pinnacle Awards 2018: Winners and Finalists (https://www.sap.com/partner/find.awardwinning.html)

[5] Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

The post Innovating Deployment for Critical Applications appeared first on IBM IT Infrastructure Blog.



High availability is becoming an increasingly popular term due to advances in infrastructure components such as networking, compute and storage. It’s also a necessity today for businesses in the banking, financial and insurance sectors.

IBM Spectrum Virtualize HyperSwap is a dual-site solution that provides continuous availability of data during planned and unplanned outages. If storage at either site goes offline, HyperSwap will automatically fail over storage access to the system at the surviving site.

In this blog post, I want to share my experience deploying HyperSwap for an insurance company based in India and show how HyperSwap made a difference in the lives of the company’s five million customers.

What to do about slow replication and recovery

This client had its Oracle database and applications running on heterogeneous storage systems. The workload hosted on these systems was protected by an application level asynchronous replication to a remote site that was lagging an hour behind the primary site.

Following a failure of the storage system at the primary site, it took the client nearly three hours to resume business at the remote site, a process involving various manual recovery procedures at the application level by several teams. This had a direct impact on the services available to the insurance company’s users.

IBM Systems Lab Services helped the client with a comprehensive solution to migrate and consolidate its heterogeneous storage infrastructure to an IBM Spectrum Virtualize-based high-availability HyperSwap solution at its primary data center without affecting its existing ability to recover at the remote site.

Delivering a HyperSwap solution

The solution consists of migrating the company’s data from existing storage systems to new storage hardware and technology. Three IBM Storwize V7000 storage systems were deployed in a HyperSwap topology to achieve high availability against storage hardware failure at the primary site. The solution uses IP quorums as a tie breaker during a split-brain situation in the HyperSwap cluster configuration. IP quorum eliminates the need for an expensive Fibre Channel quorum located outside the V7000 cluster system. The client already had an application cluster in place to sustain a failure of its server hardware.
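The tie-breaker role of the IP quorum can be illustrated with a toy Python sketch. This models the concept only, not the actual IP quorum application: when the two sites lose sight of each other, whichever half of the cluster reaches the quorum first keeps serving I/O, and the other suspends:

```python
import threading


class IPQuorum:
    """Toy tie-breaker: the first site to claim the quorum wins the split."""

    def __init__(self):
        self._lock = threading.Lock()
        self.winner = None

    def claim(self, site):
        # Atomically record the first claimant; later claimants lose.
        with self._lock:
            if self.winner is None:
                self.winner = site
            return self.winner == site


quorum = IPQuorum()
# Simulate a split-brain: both halves race to the quorum app at a third site.
print(quorum.claim("site_A"))  # True  -> site A continues serving I/O
print(quorum.claim("site_B"))  # False -> site B must suspend I/O
```

Because exactly one claimant can ever win, the two halves of a partitioned cluster can never both continue writing, which is the property the real quorum provides.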

I was involved in the design and build phase, and after the deployment, we did extensive testing to demonstrate the setup — including testing V7000 controller failure, quorum failure and the split-brain situation. During tests, there was no interruption to I/O workload and the application sustained the various failure scenarios.

Business benefits of HyperSwap

IBM HyperSwap, in conjunction with the application cluster, enabled the insurer to perform automatic failover of storage access across failure domains with no change in its existing host infrastructure. This enabled the company to ensure continuous availability of services to its customers during an unforeseen outage at either site.

The client used IBM Spectrum Virtualize data efficiency features to reduce the amount of storage needed to maintain two copies across two failure domains.

IBM flash drives with data reduction pools, plus environment-specific tunables set by IBM Systems Lab Services, helped the client respond to the business even faster, with consistent sub-millisecond response times even with data efficiency features in place.

The solution was designed in such a way that the workload was balanced on both failure domains so that none of the storage controllers are idle during regular operations.

Why is IBM HyperSwap a preferred solution for high availability?

IBM HyperSwap is built into IBM Spectrum Virtualize software and doesn’t require any additional software or hardware components, such as additional multipathing software on the host or SAS-FC bridges for the communication between two sites. It only requires traditional Fibre Channel (FC or FCIP) connectivity between the sites and native multipathing drivers on the host. Organizations can also take advantage of the proven storage virtualization features of IBM Spectrum Virtualize to consolidate and unify storage replication requirements at each site.

IBM Spectrum Virtualize also offers advanced data efficiency features such as thin provisioning, compression and deduplication that can be used along with HyperSwap to minimize your total cost of ownership while maintaining two copies across sites.

HyperSwap is available on all IBM Spectrum Virtualize–based hardware that supports more than one node pair in a cluster, such as Storwize V5000, V7000 and V9000.

Where you can find support

IBM HyperSwap provides continuous availability of data during planned and unplanned outages. If you are looking for a zero-downtime solution for mission-critical workloads, consider IBM Systems Lab Services Storage and Software Defined Infrastructure, which has helped clients around the world efficiently design, plan, deliver and protect data with superior performance.

Reach out to IBM Systems Lab Services today.

The post IBM HyperSwap: A modern data protection solution appeared first on IBM IT Infrastructure Blog.



For the third year in a row, IBM has been recognized as a Leader in the Gartner Magic Quadrant for Distributed File Systems and Object Storage for both its completeness of vision and ability to execute in the data-driven scale-out storage market for unstructured data. Gartner’s report evaluated two solutions in IBM’s suite of software-defined storage, IBM Cloud Object Storage and IBM Spectrum Scale.

Both IBM products can scale to massive capacities and help customers change the economics of storage by leveraging the flexibility of their software-defined architectures. IBM Cloud Object Storage and IBM Spectrum Scale are both part of the IBM Spectrum Storage Suite of software-defined storage products.

The release of this report comes soon after the recent announcement that IBM is expanding the capabilities of IBM Cloud Object Storage for more concurrent use cases and intends to release an NVMe flash-based object storage solution in 2019. IBM has over 600 patents for its object storage platform and continues to invest in expanding beyond archive and backup, penetrating deeper into large content repositories for media, analytics, AI and IoT workloads. With individual customers exceeding 1,000PB of capacity, IBM is leading the way with a growing number of proven, massive, petabyte-scale data repositories.

IBM Spectrum Scale has made advances since last year’s Gartner Magic Quadrant report, continuing and consolidating its position as a leader for software-defined file storage management.

The latest version of IBM Spectrum Scale delivers improvements in performance, space efficiency, ease of installation and management, security and compliance. IBM Spectrum Scale was selected to meet the world-beating storage capacity and performance demands of the world’s smartest, most powerful supercomputer, the Oak Ridge National Laboratory’s Summit system.

IBM Cloud Object Storage

IBM Cloud Object Storage breaks down barriers associated with storing massive amounts of data, creating a modern and transformational approach to storage using IBM’s patented technologies. It combines scalability, availability, reliability, efficiency and security into a platform used in a number of large-scale, mission-critical environments. IBM is relied upon by some of the largest data repositories in the world, holding an exabyte or more of data with millions or even billions of files or objects.

IBM’s flexibility allows customers to start with terabytes of storage but grow online to petabytes or even exabytes of data.

While some competitors offer limited fixed configurations with efficiency parameters or throughput configurations, IBM offers both fixed and custom configurations along with the ability to grow capacity or throughput with system or software only configurations. Our proven solutions turn storage challenges into business advantages.

IBM Spectrum Scale

IBM Spectrum Scale is an enterprise-grade parallel file system that delivers highly scalable software-defined storage capacity and supreme performance for large, demanding, mission-critical data analytics, artificial intelligence, machine learning and technical computing workloads.

IBM Spectrum Scale multi-protocol file access is designed to allow clients to transparently unify storage silos within a single namespace, including support for Hadoop environments and HDFS without requiring any changes to applications. The solution, also available embedded in IBM’s Elastic Storage Server, provides automated policy-based placement and migration of data across flash, disk, tape and cloud storage, allowing clients to balance performance and cost.

IBM Spectrum Scale has been addressing client storage management needs, with a focus on reliability, for twenty years, and IBM continues to invest in maintaining and extending its position as an industry leader.

You can read a copy of the Gartner report here.  

Disclaimer:
Gartner Magic Quadrant for Distributed File Systems and Object Storage, Julia Palmer, Raj Bala, John McArthur October 18, 2018.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post IBM continues as a Leader in 2018 Magic Quadrant for Distributed File and Object Storage appeared first on IBM IT Infrastructure Blog.



The cost of managing storage infrastructure can be a major component of the total lifetime cost of storage.[1] This is one of the primary reasons companies often have only 50 percent storage utilization rates[2], since it can be easier to throw more capacity at a storage problem than it is to troubleshoot and fix issues around poor storage utilization. But IBM is changing that with the announcement of sophisticated and comprehensive new data management enhancements and solutions that use artificial intelligence (AI) to help understand and optimize storage. This will help data-driven enterprises to gain greater value from their vast data assets.

With the addition of IBM Spectrum Discover to the market-leading IBM Spectrum Storage family, IBM can address the unique challenges of the rapid growth in unstructured data for both file and object deployments.

System metadata alone—the information provided by filesystems about unstructured file and object data—doesn’t necessarily provide storage administrators with the view of storage consumption and data quality needed for effective storage optimization. Such basic system-level metadata can also be inadequate for data scientists, business analysts, and knowledge workers who spend much of their time searching for the right data or struggling to identify files and objects that contain confidential or sensitive data.

To overcome these challenges, enterprises are turning to metadata management solutions that offer exceptional data visibility by enhancing system metadata with custom metadata that brings semantic context to massive unstructured data stores.[3] Once organizations have a clearer understanding of their unstructured data aligned to their business taxonomies and operational goals, they can optimize storage systems, ensure that data meets governance and compliance policies, and more efficiently harness the value of their oceans of data for competitive advantage and critical data insights.

IBM Spectrum Discover is a sophisticated metadata management solution that provides data insight for exabyte-scale unstructured storage, file and object. It connects to IBM Cloud Object Storage and IBM Spectrum Scale (with plans to add Dell-EMC Isilon in 2019[4]) to rapidly ingest, consolidate and index metadata for billions of files and objects, providing a rich metadata layer that enables storage administrators, data stewards and data scientists to efficiently manage, classify and gain insights from massive amounts of unstructured data. The insights gained accelerate large-scale analytics, improve storage economics and help with risk mitigation to create competitive advantage and accelerate critical research.
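To make the idea of enriching system metadata with custom tags concrete, here is a small, self-contained Python sketch. The paths, tag names and search function are invented for illustration and are not Spectrum Discover's API:

```python
# Each record pairs system metadata (path, size) with custom, semantic tags.
records = [
    {"path": "/scans/p001.dcm", "size": 52_428_800,
     "tags": {"project": "oncology", "phi": True}},
    {"path": "/scans/cal_01.dcm", "size": 1_048_576,
     "tags": {"project": "calibration", "phi": False}},
    {"path": "/logs/run.txt", "size": 2_048,
     "tags": {"project": "oncology", "phi": False}},
]


def search(index, **criteria):
    """Return paths whose custom tags match every given criterion."""
    return [r["path"] for r in index
            if all(r["tags"].get(k) == v for k, v in criteria.items())]


# Find files in the oncology project that contain protected health information.
print(search(records, project="oncology", phi=True))  # ['/scans/p001.dcm']
```

System metadata alone could not answer a question like "which files hold sensitive oncology data"; the custom tag layer is what makes that query possible, which is the point of a metadata management product like Spectrum Discover.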

IBM Business Partners understand the advantages that deeper insight into their unstructured data can provide to clients:

“The simultaneous explosion of unstructured data and rapid rise of open AI and analytics frameworks have presented an unprecedented opportunity for enterprises and institutions looking to harness the power of data to discover new insights and enable amazing user experiences in the era of data-driven digital transformation,” notes Stan Wysocki, Vice President of IBM Platinum Business Partner Mark III Systems.  “As an enterprise full stack partner, we’re excited to have IBM Spectrum Discover, with its ability to provide useful and actionable insights into unstructured data at exabyte scale, in the hands of our data scientists, developers, and engineers to work with our clients to solve difficult challenges, open up new operating models, and innovate their organizations like never before.”

IBM Spectrum Discover can be easily deployed and is extensible using an SDK for integration with many IBM and non-IBM tools to orchestrate big data, analytics, AI, machine learning (ML) and other data management workflows.

“The digital transformation underway in most enterprises today is shifting them towards data-centric IT strategies that are presenting many new challenges, and in particular is significantly increasing the amount of unstructured data they must handle,” said Eric Burgener, Research Vice President, Server and Storage Infrastructure Group, IDC. “To effectively analyze all this data to turn it into insights that can help drive better, more efficient business operations while meeting IT governance guidelines, metadata fueled file- and object-based data management platforms like that provided by IBM’s Spectrum Discover support the large-scale analytics, risk mitigation, and data optimization use cases IT organizations need to achieve these goals.”

IBM Storage Insights is a powerful storage management tool available to IBM customers that uses a range of AI technologies and management techniques to assess storage environments, help maximize performance and availability, optimize storage costs and streamline support. As a cloud-based service, it deploys in only a few minutes. From a single pane of glass, Storage Insights monitors health, capacity and performance for all IBM block storage and external storage under management, helping IBM customers understand and plan storage capacity and performance. It provides proactive best practices and helps identify potential issues before they become problems, then speeds resolution when support is needed.

Thanks to its AI roots and ongoing innovation by IBM, Storage Insights is constantly expanding its repertoire of insights and capabilities. IBM Storage Insights offers new AI technology to help diagnose and address the issues around SAN “gridlock”. In some cases, applications request data from storage systems, which send it faster than the server can accept it. Data backs up in the storage network fabric, consuming switch buffer credits, which blocks inter-switch links. Eventually, SAN performance is significantly affected by a problem that appears to be storage related when, in fact, it is caused by servers. The new release of Storage Insights uses its cloud-based AI technology to detect these situations. It then alerts IBM storage technicians who review the SAN status and contact storage administrators to help address the issue.

Additionally, Storage Insights now supports enhanced, customized reporting options that enable administrators to create individualized reports and schedule automated report generation and delivery. These new capabilities are also available on-premises with IBM Spectrum Control.

Sophisticated and comprehensive data management tools and solutions such as new IBM Spectrum Discover and Storage Insights help data-driven enterprises more effectively and efficiently monitor, analyze, manage and optimize data and storage infrastructure to derive even more value from data assets. Discover how these new solutions and enhancements recently introduced by IBM can help your enterprise thrive in the 21st century.

For more complete information about IBM’s latest storage enhancements, read our blog post or view our webcast.

[1] Three estimates of in-house storage costs, https://blog.hubstor.net/3-estimates-on-in-house-storage-costs#

[2] Average of individual customer Analysis Engine Reports produced by IBM Butterfly Software

[3] Gartner, Market Guide for File Analysis Software, https://www.gartner.com/doc/3869701/market-guide-file-analysis-software

[4] Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

The post Sophisticated and comprehensive data and storage management appeared first on IBM IT Infrastructure Blog.



Just a few months ago, IBM flash storage with NVMe was recognized as the most innovative in the marketplace.[1] You might think that to win such recognition, we had placed all of our bets on the table. But the recent announcements of new solutions, enhancements, and upgrades across our flash portfolio demonstrate that IBM continues to sit at the top of our innovation game.

One of the best examples of IBM flash storage innovation is our rapid implementation of the Non-Volatile Memory Express (NVMe) protocol portfolio-wide. Only a little more than a year ago, IBM announced our NVMe strategy based on optimizing the entire storage system stack. Since then, we have maintained a brisk pace of NVMe implementation, with InfiniBand NVMe over Fabrics (NVMe-oF) support on FlashSystem 900 in February and the all-flash NVMe-based FlashSystem 9100 in July.

Now, IBM is introducing a new model of Storwize V7000. This new model brings NVMe to the industry-acclaimed Storwize family, starting with NVMe-based IBM FlashCore Modules and industry-standard NVMe SSDs, with expansion options including SAS flash drives and hybrid options. The new Storwize V7000 model delivers 2.7x the maximum throughput when using compression.[2] It also helps lower both CAPEX and OPEX by leveraging existing storage or by adding SAS expansion enclosures, and uses the extensive AI-based storage resource management, predictive analytics, streamlined support and automated data placement provided by IBM Spectrum Virtualize and its Easy Tier functionality.

The new Storwize V7000 model helps protect and extend your storage investments with the ability to cluster with new or existing Storwize V7000 systems to support a maximum of 32PB of flash capacity. The new Storwize V7000 can deliver comprehensive enterprise-class data services across more than 440 different storage systems from IBM and others. When it comes time to dispose of older storage, the new Storwize V7000 supports nondisruptive, transparent data migration in the background while keeping your servers and applications running.

“I believe compression with FlashCore Modules is the most impactful change with the new Storwize V7000,” says Andy Davis, Storage Architect at Drillinginfo. “Storwize V7000 performance has always been great but having FCM compression included in the price–considering how great the ratios are–basically turns a database environment physical footprint from 100TB to 30TB. This makes it even more affordable to put data on flash to meet our demanding application requirements.”

Delivering on our previously stated plans, IBM is announcing major upgrades in our NVMe-oF capabilities across the portfolio, enabling NVMe-oF for Fibre Channel with systems that are built with IBM Spectrum Virtualize: many models of IBM FlashSystem 9100 and V9000, Storwize V7000F/V7000, SAN Volume Controller, VersaStack solutions that leverage those storage systems, and IBM Spectrum Virtualize software-defined configurations. This new capability is available as a nondisruptive software upgrade, supporting installed systems from September 2016 onwards. For FlashSystem 900, a new 16Gbps adapter also supports Fibre Channel NVMe-oF.

We are also restating our plans to deliver NVMe over Ethernet in 2019 for systems built with IBM Spectrum Virtualize and also IBM Spectrum Accelerate (FlashSystem A9000/R).[3] Rounding out the portfolio support for NVMe, IBM plans to support NVMe flash drives with IBM Cloud Object Storage software in SDS configurations in 2019.[3]

For IBM FlashSystem A9000R, a new, more affordable entry configuration offers a 40 percent lower list price[4] with no compromise on scalability. With data availability top of mind for many businesses, IBM is adding the ability to use both high-availability HyperSwap and disaster recovery capabilities at the same time. Finally, capacity planning with deduplication can be difficult: it can be hard to know the savings from deleting a volume or moving data from one system to another. IBM plans to enhance FlashSystem A9000/R in 2019 with AI technologies from IBM Research to simplify this process.[3]
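Why deduplication makes this planning hard can be shown with a small Python sketch: deleting a volume only frees the chunks that no other volume references, so raw volume size overstates the reclaimable space. The chunk size and volume layout here are invented for the example:

```python
from collections import Counter


def reclaimable_bytes(volume_chunks, volume, chunk_size=8192):
    """Bytes freed by deleting `volume`: only chunks no other volume shares."""
    # Count how many volumes reference each distinct chunk.
    refs = Counter(c for chunks in volume_chunks.values() for c in set(chunks))
    unique = [c for c in set(volume_chunks[volume]) if refs[c] == 1]
    return len(unique) * chunk_size


vols = {
    "vol_a": ["c1", "c2", "c3"],
    "vol_b": ["c2", "c4"],
}
# c1 and c3 are unique to vol_a; c2 is shared with vol_b, so deleting
# vol_a frees only two of its three chunks.
print(reclaimable_bytes(vols, "vol_a"))  # 16384
```

Answering this question on a real array means walking the deduplication reference counts, which is exactly the kind of estimate the planned AI-assisted tooling aims to simplify.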

We are also moving forward with incorporation of iSER, a network protocol that extends the iSCSI protocol to use Remote Direct Memory Access (RDMA), essentially the same type of technology as NVMe. iSER is now available for server connection and clustering for many systems built with IBM Spectrum Virtualize. Thanks to its RDMA foundation and its ability to reduce SCSI overhead, iSER can achieve ultra-low latencies across standard Ethernet–a great benefit for service providers and other enterprises around the world with substantial Ethernet investments.

IBM Spectrum Virtualize software operates on premises in systems such as Storwize V7000 and FlashSystem 9100, but it can also be deployed in the cloud with IBM Spectrum Virtualize for Public Cloud for use cases such as data mobility between on-premises and public cloud infrastructure, facilitating disaster recovery or workload mobility. To help enable our clients’ multicloud deployments, IBM plans to support IBM Spectrum Virtualize for Public Cloud on Amazon Web Services (AWS) in 2019.[3]

“The recent announcements from IBM Storage demonstrate that the company is continuing to innovate across the entire portfolio,” commented Randy Kerns, Senior Strategist, Evaluator Group. “This accelerated cadence of innovation around NVMe technology now includes IBM Storwize family solutions. IBM is launching a generational update of Storwize V7000, bringing the advantages of NVMe and IBM’s award-winning FlashCore technology to customers. The Storwize V7000 also delivers data reduction with both inline compression and high-performance deduplication, adding to the economic value of its performance and capacity improvements.”

When it comes to accelerating the performance of IBM storage, NVMe and iSER are only part of the story. We also announced upgrades to our zHyperLink technology, which can make our mainframe solutions even faster and more productive. zHyperLink is a short distance mainframe attachment link to market-leading[5] IBM DS8880 data systems designed for up to 10x lower latency than High Performance FICON. It’s the first new mainframe input/output (I/O) channel link technology since FICON.[6] The original zHyperLink implementation provided ultra-low latencies for reads, and now we have extended the benefits to writes as well. Low I/O latencies help deliver value through improved mainframe workload elapsed times, faster transactional responses, and lower scaling costs. zHyperLink accelerated writes dramatically speed Db2 for z/OS transaction processing and improve active log throughput.[7] The zHyperLink write capability supports IBM Metro Mirror replication and IBM HyperSwap as well.

Though performance is crucial to data-driven enterprises these days, the recent IBM Storage announcements are about more than speed; increasing storage capacity and efficiency have also been important focuses of IBM innovation. We are introducing new 15.36TB custom flash that can double the storage density and raw capacity for IBM DS8880F systems.[8] Larger storage capacities, such as those enabled by the new flash, can offer multiple benefits. Not only do they enable more data to be stored in the same physical space—for example, DS8888F systems can now reach eight petabytes of capacity in a single system—they also allow the consolidation of multiple workloads, including big data analytics, technical computing, media streaming, blockchain, and machine learning. And thanks to the DS8880F systems that will leverage the new flash, these benefits hold true for IBM Z, IBM LinuxONE, IBM Power Systems, and other supported systems.

It’s important to note that the new high-capacity flash will be used in IBM DS8880F systems that also leverage the advantages of IBM Easy Tier functionality. Easy Tier uses AI technology to automate data placement across all capacity in a storage pool. With Easy Tier, data can be moved between high performance and high-density storage as needed to increase efficiency and performance while helping reduce storage costs.
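The tiering idea can be illustrated with a toy model. To be clear, this is an illustrative sketch, not Easy Tier's actual algorithm; the extent names, threshold, and heuristics below are invented for the example:

```python
# Toy model of automated tiering: promote frequently accessed extents to
# fast storage and demote cold ones. Illustrative only; not Easy Tier's
# real heuristics, which are more sophisticated than a single counter.

def place_extents(access_counts, fast_slots, hot_threshold=100):
    """Return (fast_tier, dense_tier) extent lists.

    access_counts: dict mapping extent id -> recent I/O count
    fast_slots: how many extents fit on the fast tier
    """
    # Rank extents hottest-first
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    fast = [e for e in ranked[:fast_slots] if access_counts[e] >= hot_threshold]
    dense = [e for e in ranked if e not in fast]
    return fast, dense

counts = {"e1": 500, "e2": 20, "e3": 250, "e4": 5}
fast, dense = place_extents(counts, fast_slots=2)
# e1 and e3 are hot enough for the fast tier; e2 and e4 stay on dense storage
```

The essential point the toy captures is that placement follows observed use patterns rather than manual assignment, so the mix on each tier changes as workloads change.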

“NVMe is one of the hottest technologies in the marketplace,” states Hugh Hayes, EVP of IBM Business Partner Alliance Technology Group. “Last year, IBM announced their commitment to implementing NVMe portfolio-wide. Since then, we’ve seen a constant stream of NVMe announcements coming from IBM Storage. One of the most important is the recent release of the new Storwize V7000, which brings NVMe and support for NVMe over Fabrics to the Storwize lineup. Now we have a powerful new solution to offer our many customers where cost-efficiency is a crucial purchasing factor. This is the NVMe solution the market has been waiting for.”

Increased capacity is also being delivered with IBM FlashSystem 900, the flagship of IBM FlashCore technology. In typical fashion, IBM FlashSystem 900 takes an innovative approach: increasing the DRAM in its high-capacity module, which has the effect of doubling the effective capacity to nearly 44TB, depending on the compressibility of the stored data.[9] The latest IBM FlashSystem 900 model also sports an updated user interface, faster rebuilds, NVMe-oF connectivity and improved capacity reporting, among a number of other improvements.
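The effective-capacity figure is simple arithmetic: raw module capacity multiplied by the assumed compression ratio. Using the 2.4:1 ratio from footnote [9] and an 18TB raw module (the figure cited in the related announcement's footnotes), the result lands at the "nearly 44TB" stated above; actual savings always depend on the data:

```python
def effective_capacity_tb(raw_tb, compression_ratio):
    """Estimated usable capacity after compression: raw size times ratio."""
    return raw_tb * compression_ratio

# 18 TB raw at a 2.4:1 compression ratio -> 43.2 TB, i.e. "nearly 44 TB"
print(round(effective_capacity_tb(18, 2.4), 1))  # 43.2
```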

If you intend to infuse your business with a constant stream of the latest data systems technology, look no further than award-winning IBM flash. The flood of announcements from IBM Storage demonstrates that no matter how fast your business world is changing, IBM innovation is moving even faster.

For more complete information about IBM’s latest storage enhancements, read our blog post or view our webcast.

[1] IBM IT Infrastructure Blog: Flash Memory Summit Award: Most Innovative Flash Memory Technology — IBM FlashSystem 9100, August 2018 (https://www.ibm.com/blogs/systems/flash-memory-summit-award-innovative-flash-memory-technology/)

[2] Storwize V7000 Gen2+ with 24 flash drives and software compression compared with Storwize V7000 Gen3 with 24 FlashCore Modules using hardware compression

[3] Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

[4] New entry configuration of one flash enclosure and three grid controllers compared to previous two flash/four grid controller entry configuration.

[5] Based on IBM analysis of IDC Worldwide Quarterly Enterprise Storage Systems Tracker data, March 1, 2018, https://www.idc.com/tracker/showproductinfo.jsp?prod_id=5#

[6] IBM developerWorks: IBM DS8880 zHyperLinks gives low latency access to storage, April 2017 (https://developer.ibm.com/storage/2017/04/15/ibm-ds8880-zhyperlinks-gives-low-latency-access-storage/)

[7] Based on IBM lab measurements comparing DS8886 attached using zHyperlink with same system using zHPF.

[8] DS8882F: 737.3TB compared to previous 368.6TB; improvement varies by model.

[9] 44TB effective capacity based on 2.4:1 compression.

The post Flash innovation at the speed of business appeared first on IBM IT Infrastructure Blog.


“Today, enterprises must innovate quickly and constantly–or risk losing competitive advantage,” states Senior Analyst Mark Peters of Enterprise Strategy Group (ESG). IBM lives this observation, constantly innovating at breakneck speed to deliver the best possible solutions to the market. To further our commitment to our customers, today IBM is announcing several new solutions in the hottest spaces of the storage market, as well as significant enhancements and upgrades across our industry-acclaimed storage and storage software portfolios, driving solutions that span the lifecycle of data from creation to archive.

This stream of innovation is moving along multiple pathways–a new NVMe enabled all-flash storage system, flash storage and NVMe networking enhancements, modern data protection and security improvements, new storage solutions and certifications, and an entirely new offering that is designed to enable data-driven enterprises to derive even more value from their oceans of data.

We’re introducing enhancements that provide end-to-end NVMe technology, faster performance, and increased capacity across nearly our entire flash portfolio:

  • A new Storwize V7000 system that delivers 2.7x maximum throughput[1] for data-driven workloads when using compression and extends end-to-end NVMe capability into the IBM Storwize family. The new Storwize V7000 system starts as all-flash and offers significant expansion capabilities. Easy Tier AI-based management automatically moves data to the most appropriate media tier based on usage patterns.
  • Major expansion of lower-latency, higher-throughput Non-Volatile Memory Express (NVMe) fabric support across our storage portfolio. Updates to IBM Spectrum Virtualize enable NVMe over Fabrics (NVMe-oF) support for Fibre Channel through a simple nondisruptive software upgrade for FlashSystem 9100 and many installed FlashSystem V9000, Storwize V7000F/V7000, SAN Volume Controller systems, and VersaStack solutions that use those storage systems. A new 16Gbps adapter adds Fibre Channel NVMe-oF support for FlashSystem 900, which has supported InfiniBand NVMe-oF since February 2018. IBM is also outlining plans to add NVMe capability to IBM Cloud Object Storage software in SDS configurations in 2019.[2]
  • Up to double the maximum storage density and effective flash capacity (after compression) for FlashSystem 900, which can help to reduce costs and further simplify storage solutions.[3]
  • For DS8880F, the number one family of storage systems supporting mainframe-based IT infrastructure,[4] new custom flash provides up to double maximum flash capacity[5] in the same footprint. An update to our zHyperLink solutions helps speed application performance by significantly reducing both write and read latency.
  • A new entry-level configuration is now available for IBM FlashSystem A9000R, delivering a 40 percent lower entry list price with no compromise on scalability. IBM has added the ability to use both high-availability HyperSwap and disaster recovery capabilities at the same time to enhance data availability. Finally, IBM plans to enhance FlashSystem A9000/R in 2019 with AI technologies from IBM Research to simplify capacity management with deduplication.[2]

The new storage system capabilities have caught the eye of IBM customers:

“I believe compression with FlashCore Modules is the most impactful change with the new Storwize V7000,” says Andy Davis, Storage Architect at Drillinginfo. “Storwize V7000 performance has always been great but having FCM compression included in the price–considering how great the ratios are–basically turns a database environment physical footprint from 100TB to 30TB. This makes it even more affordable to put data on flash to meet our demanding application requirements.”

Data-driven business leaders need to identify the right data to drive their analytics, and they need to keep that data moving. Our flash and NVMe enhancements help, and new software solutions complement them:

  • IBM Spectrum Discover can help you better understand your oceans of data, which, in turn, can improve and accelerate large-scale analytics, ease data governance, and improve storage economics. By characterizing data for analytics, IBM Spectrum Discover can help improve competitive advantage and speed critical research. Based on technology from IBM Research designed to provide insight into unstructured data for analytics, governance and optimization, IBM Spectrum Discover automatically enhances and then leverages metadata—data about your data—to provide these capabilities. At a time when data is growing at 30 percent per year,[6] finding the right data for analytics and AI can be harder than finding a needle in a haystack. IBM Spectrum Discover rapidly ingests, consolidates, and indexes metadata for billions of files and objects from your data ocean, enabling you to more easily gain insights from massive amounts of unstructured data. IBM Spectrum Discover supports unstructured data in IBM Cloud Object Storage and IBM Spectrum Scale, with plans to also support Dell EMC Isilon in 2019.[2]
  • IBM Storage Insights, cloud-based management for IBM block storage, leverages new AI technology to identify storage network “gridlock,” which can slow applications to a crawl across your SAN. Gridlock is very difficult to diagnose, but the embedded AI technology in Storage Insights provides an ideal way to solve this perplexing problem. When Storage Insights detects gridlock, it alerts IBM support staff, who proactively call clients.
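The metadata-cataloging idea behind a tool like IBM Spectrum Discover can be sketched in a few lines. This is an illustrative toy, not the product's API; the record fields, tags, and query function are all invented for the example:

```python
# Toy metadata catalog: ingest per-file records, then answer the kind of
# question a data-governance tool needs ("which files are tagged PII and
# at least 1 GB in size?"). Illustrative only; field names are invented.

records = [
    {"path": "/ocean/a.csv", "size_gb": 4.0, "tags": {"pii"}},
    {"path": "/ocean/b.log", "size_gb": 0.2, "tags": set()},
    {"path": "/ocean/c.parquet", "size_gb": 9.5, "tags": {"pii", "finance"}},
]

def query(catalog, tag, min_size_gb):
    """Return paths of files carrying `tag` and at least `min_size_gb` in size."""
    return [r["path"] for r in catalog
            if tag in r["tags"] and r["size_gb"] >= min_size_gb]

print(query(records, "pii", 1.0))  # ['/ocean/a.csv', '/ocean/c.parquet']
```

The point of indexing metadata rather than the data itself is scale: a catalog of billions of small records can answer governance and analytics questions without touching the underlying files or objects.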

IBM’s Business Partners are extremely enthusiastic about the wide spectrum of storage innovation announced today:

“NVMe is one of the hottest technologies in the marketplace,” states Hugh Hayes, EVP of IBM Business Partner Alliance Technology Group. “Last year, IBM announced their commitment to implementing NVMe portfolio-wide. Since then, we’ve seen a constant stream of NVMe announcements coming from IBM Storage. One of the most important is the recent release of the new Storwize V7000, which brings NVMe and support for NVMe over Fabrics to the Storwize lineup. Now we have a powerful new solution to offer our many customers where cost-efficiency is a crucial purchasing factor. This is the NVMe solution the market has been waiting for.”

Requirements for data protection and security never stand still. IBM is also announcing new capabilities to help protect data and enhance data security:

  • Regulatory requirements are critical for many businesses; IBM Cloud Object Storage small Concentrated Dispersal (CD) mode configurations support policy-based WORM and lockable vaults that can help meet these requirements even for smaller companies. IBM Cloud Object Storage also supports more distinct use cases with a substantial increase in the number of vaults, now supporting 1,500 unique environments.
  • In today’s world of cybercrime and ransomware, tape media plays a vital role in creating an “air gap” between servers and backup data copies. New TS1160 enterprise tape drives support doubled native capacity[7] (20TB) and improved performance. These new drives are supported in IBM TS3500 and TS4500 libraries, enhancing those existing investments with greater capacity.
  • Cost concerns are never far from mind, even when considering high availability. IBM Storage Utility Offering provides cloud-like storage pricing for on-premises storage based on actual monthly usage, turning your storage budget from CAPEX into OPEX. Now, IBM Storage Utility Offering enables the acquisition of HA configurations (two systems) at a starting monthly rate only 20 percent more than leasing a single system.[8] IBM Storage Utility Offering has also been extended to include IBM TS7760 virtual tape libraries.

IBM customers recognize the advantages of relentless innovation:

“ESG’s research clearly shows that data security is the number one IT priority for organizations of all types and sizes,” said Mark Peters, Principal Analyst and Practice Director, ESG. “Technology solution providers that want to maintain market leadership positions, or hope to capture market share, must constantly innovate within the data security domain. These latest announcements from IBM confirm that it is continuing to aggressively follow this approach.”

In addition to all of the innovation noted above, IBM is enhancing a number of sophisticated new multicloud storage and data management solutions for next-generation applications.

This is an impressive list of new solutions and substantial enhancements and updates to announce all at once. But at IBM Storage, the pace of innovation is incredibly brisk. For our customers and Business Partners, this long list means more opportunity, greater advantage and increased benefits.

You can learn more about the innovations described above by watching our webcast or reading blog posts from other IBM executives.

[1] Storwize V7000 Gen2+ with 24 flash drives and software compression compared with Storwize V7000 Gen3 with 24 FlashCore Modules using hardware compression

[2] Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

[3] New high-capacity 18TB module supports up to 44TB effective capacity after compression, compared with 22TB for previous module.

[4] Based on IBM analysis of IDC Worldwide Quarterly Enterprise Storage Systems Tracker data, March 1, 2018, https://www.idc.com/tracker/showproductinfo.jsp?prod_id=5#

[5] DS8882F: 737.3TB compared to previous 368.6TB; improvement varies by model.

[6] IDC White Paper: Using Tape to Optimize Data Protection Costs and Mitigate the Risk of Ransomware for Data-Centric Organizations, April 2018 IDC #US43710518

[7] Compared with IBM TS1150

[8] Two IBM FlashSystem 9150 with twenty-four 19.2TB FlashCore Modules with 36-month SUO base commitment compared with typical 36-month lease for one similar system.

[9] IBM IT Infrastructure Blog: Flash Memory Summit Award: Most Innovative Flash Memory Technology — IBM FlashSystem 9100, August 2018 (https://www.ibm.com/blogs/systems/flash-memory-summit-award-innovative-flash-memory-technology/)

The post IBM Storage innovation from data creation to archive appeared first on IBM IT Infrastructure Blog.


In today’s digital world, data is the primary asset for most organizations. Access to data, and changes to that data, must be restricted to authorized persons, devices or processes. IT devices also need to be protected from the execution of unauthorized code and from denial-of-service attacks.

The cost of data breaches is increasing year over year. In a study conducted by the Ponemon Institute and sponsored by IBM, “2018 Cost of Data Breach Study: Impact of Business Continuity Management,” the global average cost of a data breach is $3.86 million, up 6.4 percent from the previous year’s result.

Security is critical. With weak security, companies not only risk their core assets, reputation and customer confidence; they also face a direct financial burden, whether from the incident itself or from the growing risk of fines under regulations like GDPR when personal client data is lost.

Applications and IT systems in critical domains often need multiple levels of security, which means that information with different security classification levels can be processed within the same system.

The concept of Multiple Independent Levels of Security/Safety (MILS) was published in 2005. It is designed to ensure that security mechanisms are non-bypassable, evaluatable, always invoked and tamper-proof. A MILS system enforces security policies by authorizing information flow only between components in the same security domain or through trustworthy security monitors.
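The flow-authorization rule just stated can be modeled minimally: a flow is allowed only when source and destination share a security domain, or when a component designated as a trusted monitor mediates it. The sketch below is a toy model of that rule, not any certified implementation; the component names and domain labels are invented:

```python
# Minimal model of a MILS information-flow check: components carry a
# security domain label; flows are permitted only within a domain or
# through a trusted security monitor. Toy model, invented names.

DOMAINS = {"radar": "secret", "display": "secret", "maintenance": "public"}
TRUSTED_MONITORS = {"guard"}  # components allowed to mediate cross-domain flow

def flow_allowed(src, dst):
    if src in TRUSTED_MONITORS or dst in TRUSTED_MONITORS:
        return True  # flow is mediated by a trustworthy security monitor
    return DOMAINS[src] == DOMAINS[dst]  # otherwise, same-domain flows only

assert flow_allowed("radar", "display")          # same domain: allowed
assert not flow_allowed("radar", "maintenance")  # direct cross-domain: denied
assert flow_allowed("guard", "maintenance")      # via monitor: allowed
```

A real MILS separation kernel enforces this kind of policy below the applications, so that no component can reach data in another domain except through the monitored paths.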

The MILS concept was applied in Europe through the EURO-MILS project, which aimed to develop secure European virtualization technology for applications in critical domains.

The EURO-MILS project has published a protection profile for Operating Systems, which states that it is conformant to the “Assurance package EAL5 augmented with AVA_VAN.5” as defined in the Common Criteria for Information Technology Security Evaluation, Part 3.

The Common Criteria portal[1] lists all published certifications for operating systems. Only versions of PR/SM (the hypervisor of IBM Z and LinuxONE) have ever been certified at EAL5 augmented with AVA_VAN.5. That certification level makes IBM LinuxONE relevant for critical systems implementations in Europe.

LinuxONE systems provide hardware assisted pervasive encryption. Cryptographic co-processors integrated in every LinuxONE microprocessor core can fully encrypt all data in-flight and at rest. With pervasive encryption, the risk that hackers can access or modify the data is minimized.

An IBM LinuxONE system with Crypto Express accelerators meets Federal Information Processing Standards (FIPS) 140-2 Level 4.[2] At Level 4, the system provides full protection of the cryptographic module, detecting and responding to unauthorized attempts at physical access.

A top concern for many organizations is the protection of encryption keys. Hackers often target encryption keys when they are exposed in memory while being used. LinuxONE can help to protect these keys in tamper-resistant hardware that allows the invalidation of keys in case of a detected intrusion.

Furthermore, an IBM LinuxONE system can establish a secured operating environment to help protect against insider threats from privileged users. The Secure Service Container is a secured deployment environment for software appliances. Secure Service Containers on LinuxONE deliver protection against internal and external threats by encrypting all data without changing the application. In the x86 world, Software Guard eXtensions (SGX) requires a specific application design that distinguishes between small encrypted, trusted components and large unencrypted ones.[3] SGX enclaves are already known to be vulnerable.

Since many applications in a cloud environment (public or private) tend to become critical to their users, the standards for security should be as high as possible and affordable. IBM LinuxONE plays in its own league, offering scalability, performance, fast response times, uptime and the highest certifications, along with cost advantages in consolidation scenarios. Because of these characteristics, IBM chose to run its IBM Blockchain Platform Enterprise Plan and IBM Hyper Protect services in IBM Cloud on LinuxONE.[4]

[1] List of Common Criteria certified Operating Systems Products: https://www.commoncriteriaportal.org/products/#OS

[2] The Security Requirements for Cryptographic Modules – FIPS 140-2 – are published here: https://csrc.nist.gov/publications/detail/fips/140/2/final. Detailed specifications about IBM Crypto Cards are available here: https://www-03.ibm.com/security/cryptocards/index.shtml

[3] Application Design Consideration for applications to use Software Guard eXtensions can be found here:   https://software.intel.com/en-us/sgx-sdk/details

[4] See https://www-03.ibm.com/press/us/en/pressrelease/50169.wss and  https://www.ibm.com/cloud/hyper-protect-services

The post Security considerations for critical environments appeared first on IBM IT Infrastructure Blog.


Information is at the core of today’s digital business operations. As the value of your organization’s information grows, it can become the primary target for a breach. A data breach occurs when someone reads data without being authorized to access it.

Once hackers can read your data, they can steal or modify it. Sensitive and confidential information that interests hackers may include personal health information, personal identity information, trade secrets and intellectual property. Stolen data can be sold to other parties, incurring further damage to the business and to the clients the data concerns.

The consequences for businesses that experience a data breach can be severe: destruction or corruption of a database, disruption of normal business operations, loss of data, revenue and reputation, damaged intellectual property and even lawsuits from stakeholders. The 2017 Cost of Data Breach Study from the Ponemon Institute puts the global average cost at $3.6 million, or $141 per data record. The average cost of a data breach in the US is even higher, at $7.3 million.

With the damage and consequences of breaches and data loss in mind, how sure are you that your company’s data is secure from unauthorized access? And how would you proceed now to identify and fix the weak points before hackers can take advantage of them?

To answer these questions, you have to look into how the system is accessed and how your data is protected. A secure system is a relative concept. A system can be as secure as it is intended to be only if security controls and features are properly configured and implemented.

Protect your data with SAP HANA

SAP HANA is an in-memory database management system for business processes with built-in support for business intelligence and analytics. HANA is certified to run on Red Hat Enterprise Linux and SUSE Linux Enterprise Server. Supported hardware platforms today are Intel-based servers and IBM Power Systems.

Just like other databases, your SAP HANA system stores and processes important data that may be critical to your business operations. You must take measures to ensure that the SAP HANA environment and its data are well protected from unauthorized access. SAP HANA provides a comprehensive set of security capabilities for addressing security and regulatory requirements not only for the database layer but also for other data engines and its integrated application server. The security capabilities cover user and identity management, authentication, authorization, encryption and so forth. These capabilities should be studied and implemented following your company and industry regulations and according to your own specific SAP HANA usage scenario.

Protect your database with a solid OS foundation

Now that you have SAP HANA set up and running and you implemented security measures following SAP HANA security guidelines, is your SAP HANA environment immune from security exposures? Not really, and here’s why.

SAP HANA runs on servers running either Red Hat Enterprise Linux or SUSE Linux Enterprise Server. These servers should provide the first line of defense against any unauthorized access to the system.

It’s axiomatic to say that good security is like an onion: layered security provides the best protection because it doesn’t rely solely on the integrity of any single element. SAP HANA is highly dependent on the operating system (OS) it runs on for security services. While IBM Power Systems has security built in at every layer, from processor to hypervisor, OS security misconfiguration can leave more attack surface open to hackers for accessing the system, and thus the SAP HANA environment.

As a result, it is extremely important for businesses to identify threats and reduce their exposure at the base OS layer, providing a solid, secure OS foundation for the SAP HANA database, applications and data.
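As an illustration of what a base-OS assessment does, the sketch below compares a host's observed settings against a hardening baseline and reports deviations. The baseline entries are common Linux hardening examples chosen for illustration; they are assumptions, not the actual checklist any assessment service uses:

```python
# Illustrative OS-hardening check: compare observed settings against a
# baseline and report deviations. The baseline entries are common Linux
# hardening examples, not an actual assessment checklist.

BASELINE = {
    "PermitRootLogin": "no",            # sshd_config: no direct root SSH
    "net.ipv4.ip_forward": "0",         # sysctl: no unintended routing
    "kernel.randomize_va_space": "2",   # sysctl: ASLR fully enabled
}

def assess(observed):
    """Return {setting: (expected, found)} for every deviation from baseline."""
    return {k: (v, observed.get(k, "<unset>"))
            for k, v in BASELINE.items() if observed.get(k) != v}

# A hypothetical host: root login enabled, ASLR setting never configured
host = {"PermitRootLogin": "yes", "net.ipv4.ip_forward": "0"}
findings = assess(host)
# Flags PermitRootLogin ("yes" vs "no") and kernel.randomize_va_space (unset)
```

In practice the "observed" side would be gathered from the running system (sshd configuration, sysctl output and so on), and each finding would come with a remediation step.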

How IBM Systems Lab Services can help

I’m part of an IBM Systems Lab Services consultant team specializing in Power Systems security. Our security consultants have proven expertise in helping companies assess their environments for security risks, identify key vulnerability areas and take remediation actions. We also provide one-on-one workshops for our Power Systems clients on security technology education and security capability enablement.

One of our popular services is the OS security assessment service specifically designed for the SAP HANA environment. This service was developed based on SUSE Linux Enterprise Server’s security hardening guide for SAP HANA as well as US National Security Agency recommendations for Linux. Reach out to Lab Services today to get our help in securing the servers that support your SAP HANA database.

The post System security assessment for servers that support SAP HANA appeared first on IBM IT Infrastructure Blog.


HOUSTON – October 18, 2018 – BMC, a global leader in IT solutions for the digital enterprise, today announced BMC AMI, automated mainframe intelligence solutions that will deliver higher performing, self-managing mainframe environments to meet the g…

The post BMC Modernizes the Mainframe with Automated Mainframe Intelligence appeared first on Mainframe News.
