April 2018 marked the 54th anniversary of IBM’s mainframes. It is hard to imagine anyone predicting such a long and successful life for them. A quick historical sketch: IBM launched the first modern mainframe, the System/360, on April 7, 1964. Capable of 229,000 calculations per second, it was the mainframe that played a key role in taking mankind to the moon. Fast forward to the current era: banks, the healthcare industry, and the insurance and government sectors still rely on mainframes. Mainframes may not be trending as the hottest technology of the decade, but they remain the backbone of these and various other industries thanks to their reliability, availability, security, and transaction processing speed.
IBM defines a mainframe as “a large computer, in particular one to which other computers can be connected so that they can share facilities the mainframe provides. The term usually refers to hardware only, namely, main storage, execution circuitry and peripheral units.”
In many ways the mainframe closely resembles the present-day “cloud” in terms of huge infrastructure, high computing power, and storage capability. Of course there are significant differences (the cloud is distributed while mainframes are centralized, for example), but rather than having endless debates about the cloud killing or surpassing mainframes, perhaps we can bring them together in order to best utilize both. We can leverage our current investments in mainframes while adopting the cloud model. We can, for example, plan to back up mainframe data to the cloud, or use the cloud as a disaster recovery solution for mainframes; the possibilities are endless.
In this article, we are going to explain some key points which can help mainframe administrators and organizations currently using mainframes to get started and progress on their cloud journey. We will explain the difference between the mainframe and the cloud-operating model. Also, we will highlight some of the use cases which can help in making a smooth transition to the hybrid model, taking advantage of both mainframes and the cloud.
Essential Knowledge Before Setting Off on Your Cloud Journey
If you are a mainframe administrator or an organization planning to head off on your cloud journey, you probably have many doubts and questions you’d like answered. With the help of the framework below, we will try to explain and provide you with the right approach to help your transition go smoothly:
Let’s do a basic training course – Before embarking on any new journey, it is always good to do some homework. As an organization, you should plan and arrange a basic cloud administration training course for the core team so that they can learn the jargon and the important technical terms. If you think it will be easier to hire external resources and get started, keep in mind that external hires may need considerable time to learn the organizational culture, which slows down the implementation cycle.
Consider it a lab experiment – Up to a defined level, there is no up-front cost involved in using public cloud resources. Companies should encourage experiments and treat failures as learning opportunities.
Link up with a good partner – As an organization, you can plan to link up with an external partner who can help you in setting up the framework. Later on, once the team is mature enough, the show can be run in-house, without external support.
Be cautious; start low and slow – It is always good to start with a low-criticality use case. Migrating the database of a tier-one application to the cloud is probably not the right first step, and a failure there can have a huge impact on the overall journey to the cloud. Archiving data to the cloud, backing up data to the cloud, or standing up a disaster recovery site in the cloud are all good starting points.
Cloud vs. Traditional IT: Life Is not Going to Be the Same for an Administrator
There is a complete paradigm shift when we talk about a cloud operating model as compared with the traditional way we have been running our IT shops, and mainframe administrators should take note of it. We will touch upon some of the key differences:
Elasticity and Scalability – The cloud is elastic, meaning that resources such as compute and storage are added to and removed from servers on the fly as demand rises and falls. As an administrator, you no longer need to worry about adding compute and memory to servers to tackle month-end peak loads, nor adding storage to keep more data; the cloud takes care of it automatically.
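As a rough illustration of the scaling decision the cloud makes on your behalf, here is a minimal sketch of a target-utilization rule. The target, step limits, and figures are all invented for illustration and do not reflect any particular provider’s autoscaling API:

```python
# Illustrative sketch of an autoscaler's sizing rule: keep average CPU
# utilization near a target by growing or shrinking the fleet.
# Thresholds and bounds are hypothetical, not any provider's defaults.

def desired_capacity(current_instances, cpu_utilization, target=0.60,
                     min_instances=2, max_instances=20):
    """Return the instance count that moves average utilization toward target.

    cpu_utilization is the current fleet-wide average (0.0 - 1.0).
    """
    if cpu_utilization <= 0:
        return min_instances
    # Proportional rule: total demand is roughly constant, so
    # new_count * target ~= current_instances * cpu_utilization.
    needed = round(current_instances * cpu_utilization / target)
    return max(min_instances, min(max_instances, needed))

# Month-end peak: utilization spikes to 90%, so the fleet grows.
print(desired_capacity(10, 0.90))
# Quiet period: utilization drops to 15%, so the fleet shrinks.
print(desired_capacity(10, 0.15))
```

The point of the sketch is simply that this loop, which an administrator once ran by hand with purchase orders and rack space, now runs continuously in software.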
Resilient – Although there have been multiple cloud-related outages, the cloud is still one of the most resilient architectures ever built. Applications and data are distributed across data centers in different countries and continents. It is next to impossible to replicate this resiliency in your corporate data centers at the prices the various cloud players offer.
OpEx model – The cloud operates on an operating-expenses (OpEx) model: you pay at regular intervals for the resources you actually use, and there is no single monthly peak four-hour interval that sets the pricing for the whole month. There is also no need to order storage boxes, compute chassis, and other infrastructure components, track shipments and match bills of materials, or rack and stack servers; massive infrastructure is available at the click of a button.
Shared responsibility – Information security in the cloud follows a shared-responsibility model, and administrators should clearly understand their role. Your cloud service provider is responsible for physical security: the data center, the physical servers, and the storage and network devices operating in the data center. The customer, however, has to take care of protecting customer data through proper configuration of the various infrastructure components, encryption, and user and access management.
Compliance – Compliance is one of the key areas where the role of admins will change drastically; they will have to be more vigilant from now on. Which data sets can be moved to the cloud in line with regulatory and compliance guidelines, and which must stay in-house, are some of the important decisions they will now be tasked with. Additionally, some of the key questions you should be ready to answer are: In which countries can the data reside? Is it a shared cloud infrastructure, and how are on-premises controls enabled in the cloud? Having the answers to these questions at the ready will save you a lot of time.
How to Get Started on Your Cloud Journey
In light of the advantages that the cloud offers such as elasticity, high availability, and agility, it is pretty clear that you cannot neglect it, or else you will lag behind your competitors. In this section, we will explain some of the use cases which are the best starting points when setting out on a cloud journey:
Backup – It is critical that you back up your critical data so that, in the event of data corruption or an outage, recovery can be initiated quickly. Many regulatory authorities also mandate a proper backup and recovery plan before you go live with a new application. Cloud-based backup is one of the easiest use cases to get started with. If you are running your critical workloads on mainframes, you can opt for cloud-based backups and thereby eliminate tape architecture and other expensive solutions.
Disaster recovery (DR) – A secondary site really comes in handy when there is a natural disaster or power failure. The cloud can act as your DR data site by storing data at different data centers within the same country or on two different continents, according to your needs. Predefined service level agreements guarantee a quick recovery, saving your company both time and money.
Archive – The cloud is ideal for storing your infrequently accessed data. Heavily regulated sectors such as the financial and healthcare industries are bound to store data for long durations, often 10 years or more. Keeping this infrequently accessed data in your on-premises data center is not a forward-thinking strategy; instead, offload it cheaply to the cloud and restore it whenever needed. This not only saves you the effort of buying disk or tape hardware, it is also the most cost-effective solution.
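The heart of an archive policy like the one described above is deciding what counts as “infrequently accessed.” A minimal sketch, assuming a simple age-based rule (the dataset names, dates, and cutoff are all made up for illustration):

```python
# Illustrative age-based archive policy: anything not accessed in
# `cutoff_days` days becomes a candidate for cheap cloud archive storage.
from datetime import date, timedelta

def archive_candidates(datasets, today, cutoff_days=365):
    """Return names of datasets whose last access is older than the cutoff."""
    cutoff = today - timedelta(days=cutoff_days)
    return [name for name, last_access in datasets if last_access < cutoff]

inventory = [
    ("CLAIMS.2015.Q1", date(2016, 1, 10)),   # cold: candidate for archive
    ("LEDGER.CURRENT", date(2018, 3, 30)),   # hot: stays on primary storage
]
print(archive_candidates(inventory, today=date(2018, 4, 1)))
```

Real policies layer on retention rules and legal holds, but the selection logic is this simple at its core.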
Test setup – If you are planning to set up a lab environment where your teams will play around with different technologies, the cloud is the way to go. Instead of waiting for a long provisioning cycle, the cloud offers the infrastructure in no time, and it can be decommissioned in the blink of an eye. Time to market is the key differentiating factor, and the cloud guarantees your company this expediency.
Mainframes have been the most stable and mature platform for the last 50+ years, hands down. However, these days it is all about collaboration, innovation, and new economics. The business landscape is changing at a lightning pace and requires that you dig deep to climb new heights. Hence, as a company you should make full use of your current investments in mainframes while leveraging the technological advancements of the last decade, with the cloud being one of the biggest breakthroughs. This will require adapting to a new working style, as the cloud is quite different from the traditional way of working; however, the results will speak for themselves in the rewards you will reap in huge cost savings and reduced time to market.
Today’s modern data centers are composed of many different systems that work both independently and in conjunction with one another to deliver business services. The hardware, software, network resources, and services required to operate and manage an enterprise IT environment are commonly referred to as your IT infrastructure.
The Challenge of Managing IT Infrastructure
It can be challenging for IT architects and executives to keep up with the burgeoning IT infrastructure. Homogeneous systems were common in the early days of computing, but most medium to large organizations today have adopted heterogeneous systems. For example, it is not uncommon for Linux, Unix and Windows servers to be deployed throughout a modern IT infrastructure. And for larger shops, add in mainframes, too.
And when multiple types of servers are deployed, that impacts everything else. Different operating systems, software, and services are required for each type of server. Data, quite frequently, must be shared between the disparate applications running on different servers, which requires additional software, networking, and services to be deployed. The modern computing infrastructure is inherently complex.
Furthermore, technology is always changing – hopefully advancing – but definitely different than it was even just last year. Consider the database management system (DBMS). Most organizations have anywhere from three to ten different DBMSes. Just a decade ago it was a safe bet that most of them were SQL/relational, but with big data and mobile requirements many NoSQL database systems are being deployed. And because NoSQL does not rely on an underlying model the way relational does, every NoSQL DBMS is different from every other NoSQL DBMS. And let’s not forget Hadoop, which is not a DBMS but can be used as a data persistence layer for unstructured data of all types and is frequently used to deploy data lakes.
Additionally, consider the impact of cloud computing – the storing and accessing of data and programs over the internet instead of on your own servers. As organizations adopt cloud strategies, components of their IT infrastructure will move from on-premises to the cloud. What used to reside in-house (whether at your organization or at an outsourced location) now requires a combination of in-house and external computing resources. This is a significant change.
Application delivery has changed significantly as well. Agile development methodologies combined with continuous delivery and DevOps enable application development to produce software in short cycles, with quicker turnaround times than ever before. With microservices and APIs, software components developed by independent teams can be combined to deliver business services more reliably and quickly than with traditional methodologies, such as the waterfall model. This means that not just the procured components of your IT infrastructure are changing, but your in-house developed applications are changing more rapidly than ever before, too.
And there is no end in sight as technology marches forward and your IT infrastructure morphs to adopt new and useful components. The end result is a modern, but more complex environment that is more difficult to understand, track, and manage. Nevertheless, few would dispute that it is imperative to keep up with modern developments to ensure that your company is achieving the best possible return on its IT investment.
But keeping track of it all can be daunting. It is easy to miss systems and components of your infrastructure as you work to understand and manage the cost and value of your technology assets. And, you cannot accurately understand the cost of your IT infrastructure, let alone be sure that you are protecting and optimizing it appropriately, if you do not know everything that you are using. In other words, without transparency there is anarchy and confusion.
IT Cost Transparency
The goal must be to run IT like a business, instead of as a cost center. It is often the case that senior executives view IT as a black box; they know it requires capital outlays but have no solid understanding of where the money goes or how expenditures enable IT to deliver business value. On the other hand, it is not uncommon for senior IT managers to view company expectations as unrealistic given budget and human resource constraints.
The problem is that there has been no automated, accurate method for managing and providing financial visibility into IT activities. Budget pressure and IT complexity, as discussed earlier, have conspired to make it more and more difficult to provide IT cost transparency.
But a new category of software is emerging that delivers cost transparency for IT organizations. The software offers automatic discovery of IT assets with the ability to provide cost details for each asset. By applying analytics to the IT infrastructure and cost data, the software can offer a clear picture of the cost of providing applications and services to your enterprise. This useful insight enables CIOs and other IT managers to make faster, fact-based decisions about provisioning and purchases.
The capabilities of such software can vary by vendor and solution, but capabilities to look for include:
Automated collection of IT asset details
Tracking of operational metrics and usage
Cost modeling capabilities
Executive and CIO dashboard
Custom reporting and analysis
Forecasting and budgeting
Chargeback reporting and billing capabilities
ROI analysis for IT projects
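To make the chargeback capability in the list above concrete, here is a minimal sketch of the underlying arithmetic: splitting a shared monthly bill across departments in proportion to metered usage. The department names, usage figures, and total are hypothetical:

```python
# Illustrative chargeback calculation: allocate a shared monthly
# infrastructure bill proportionally to each department's metered usage.

def chargeback(total_cost, usage_by_dept):
    """Return each department's share of total_cost, proportional to usage."""
    total_usage = sum(usage_by_dept.values())
    return {dept: round(total_cost * used / total_usage, 2)
            for dept, used in usage_by_dept.items()}

# A $100,000 bill split across three (hypothetical) departments.
bill = chargeback(100_000.00, {"Claims": 500, "Payments": 300, "Reporting": 200})
print(bill)
```

Real cost-transparency tools add fixed-cost allocation, tiered rates, and showback-versus-chargeback options on top, but proportional allocation is the common starting point.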
Armed with the facts provided by IT cost transparency, CIOs can accurately discuss budget allocations with business executives. When you understand your IT infrastructure and know what you spend on IT resources and applications, your company can make informed decisions because they know where the money is spent.
But IT cost transparency is not just a solution for improved communication. By using IT cost transparency software to model and track the total cost to deliver and maintain IT software and services, better decisions can be made. For example, infrastructure components like servers and storage arrays are frequently deployed with more power or capacity than is needed. Such over-provisioning, whether of CPU, memory, storage, or any other IT asset, costs money and wastes resources. Over-provisioning is a problem for both mainframe and distributed systems. With mainframes you may have more MSU capacity or DASD than you currently need; both can be costly in terms of software licensing fees and administration. For distributed systems you may have many servers that are not running at peak capacity, meaning hardware that you paid for but are not using. And again, there are administration and management costs to factor in.
With an accurate view of what is being used, how it is being used, and what it costs, it becomes possible to provision capacity as needed. Not provisioning until necessary can significantly reduce costs, both by delaying spending and by taking advantage of Moore’s Law: the tendency for capacity and performance to increase while cost decreases over time. For mainframe systems, you can use soft-capping to decrease MSU usage, raising the cap as your capacity needs increase (or automate soft-capping with management software).
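The soft-capping idea above can be sketched as a simple policy loop: hold the cap low, and raise it in steps only when sustained usage eats into a headroom margin. The step size, threshold, and figures here are illustrative, not any vendor’s algorithm:

```python
# Hedged sketch of a soft-capping policy: keep the MSU cap as low as
# workloads allow, raising it stepwise when the recent peak nears the cap.
# Headroom, step size, and MSU figures are invented for illustration.

def adjust_cap(current_cap, recent_msu_peak, headroom=0.10, step=50):
    """Raise the cap by one step when the recent peak enters the headroom."""
    if recent_msu_peak >= current_cap * (1 - headroom):
        return current_cap + step
    return current_cap

cap = 500
cap = adjust_cap(cap, recent_msu_peak=480)  # within 10% of the cap: raise it
print(cap)
cap = adjust_cap(cap, recent_msu_peak=400)  # plenty of headroom: hold steady
print(cap)
```

Production capping software also lowers caps, balances capacity across LPARs, and respects service-level targets; this sketch only shows the raise-on-demand half of the loop.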
With an accurate view into your IT infrastructure and its cost it becomes easier to keep your IT and business initiatives on track. Armed with accurate data, your business and IT teams can better align investment with goals and therefore better manage budgets and spending. Cost transparency solutions provide decision-makers with knowledge of where money is actually being spent throughout the business. IT leaders can use this information to make accurate decisions about current allocations and future investments.
Chargeback for IT services has traditionally been troublesome for organizations with a complex IT infrastructure. How can you accurately charge for IT services when you may not know all of the services being delivered and on what components, let alone the actual cost of those services? IT cost transparency can be used to drive service-level agreements (SLAs) based on actual costs and requirements.
IT cost transparency can be a significant help for outsourced IT departments. An actual accounting of the real IT costs can help you to negotiate – or re-negotiate – your outsourcing contract and bill.
The Bottom Line
Attempting to understand the cost of IT is too frequently a one-off effort conducted to address a crisis such as an audit, a contract negotiation or to define a new budget. A better approach is to understand your IT infrastructure and the cost of IT services and software with a proactive approach using automated IT cost transparency software. Let’s face it, we’ve been living with anarchy for too long and an informed, automated, analytical approach to managing IT costs is long overdue.
It’s been a while since my last overly alarmist IT security rant, and with the holiday season just past—when millions of IoT devices flooded our homes and offices—I figure what better time to scare the crap out of the general public.
But, in this instance, it’s not actually about scaring people. It’s more about acknowledging and accepting the human condition for what it is, and working to mitigate risk against it. Case in point, the far-too-many aspects of the IT security arms race.
The human factor plays into almost all of it
After all, it’s humans who build the networks, it’s humans who write the code to attack the networks, and it’s humans who in turn work to defend said networks. Simply put, technology has little to do with technological warfare … sort of.
The best way to approach this on a human level is to look closely at all those involved in preparing for and fighting against cyberattacks. It introduces a new perspective: how we all need to work together, and who we need to work with.
The magical IT Department can do anything
Firstly, there is IT—the department within any organization that is generally responsible for making the company function on a day-to-day basis. It’s this department that takes the brunt of the abuse, from nefarious outside cyber attackers to the crushing expectations of those who live inside the four walls—those who think IT people can and will do absolutely any task, no matter the complexity.

A tough place to be, right? History dictates a view of IT that falls into expectations that can only be described as magical: an all-seeing, all-knowing group of people who will keep us safe, warm, and business-ready at all times. And when new technology is introduced, that very magic is wielded in such a way that everyone in IT is instantly endowed with all knowledge, as though they were plugged into The Matrix like Keanu Reeves.

And, as an IT person myself, I think I speak for all of us when I say, “NOT FAIR!”—especially when it concerns IT security. Knowing the time and effort it takes to become an expert in just one aspect of technology, people need to understand that IT departments need help. Engaging with outside IT security specialists is not admitting defeat—it’s admitting that you are smart enough to hire the right people to help defend the masses.
Partners keeping would-be attackers at bay
Which, of course, brings us to the next human element: the partners. Engaging with IT security partners chips away at human vulnerability. It does away with hubris, as well as a lack of confidence, creating a level playing field whereby two entities come together for the common good to establish the right technology, the right infrastructure, the right protocols, and the right habits all designed as an interconnected ecosystem of smart people doing smart things to keep would-be attackers at bay.
Folks outside IT are sometimes the attacker’s best friend
Then we have the folks outside IT. The people you smile at in the halls and at company events; the people you work with on inter-departmental projects; the people you perhaps bowl with. So, what do they have to do with IT security? A hell of a lot!

It’s the people outside of the know who can sometimes be, unwittingly, the attacker’s best friend, and the worst enemy of those who employ them. Why? Again, the human condition. Without knowing the seedy underbelly of IT and all things bad in the world, they can let the Trojan horse through the proverbial gates. Not because they are in any way involved with the so-called attackers, but because they love horses. Not understanding cybersecurity is a human issue that revolves around constant education. Giving real-world examples of how they can be compromised can transform them from unknowing would-be accomplices into heavily armed guards at the cyber gate.

Lifting the veil to teach them the do’s and don’ts, along with letting them know they shouldn’t be embarrassed if and when they become a victim, will help lock down the fortress that much more. Again, addressing the human factor without getting too embroiled in technology.
Humans, shore up your defences against cyberattacks or else
As our world continues to evolve, there will be more attacks on the horizon. From DDoS attacks leveraging IoT devices, to mobile intrusions, to ransomware, and more—no one can prevent attacks like us humans.
And if we should ever fall to the rise of the machines like some post-apocalyptic Christian Bale movie, I for one welcome our new machine overlords and hope for a job that doesn’t involve a wireless shock collar—although as an IoT device it may be easy to hack and use to defeat the robots … only time and human history will tell.
Outsourcing has been the rage for years now, and the outsourcing of computing resources to the cloud leads the way. And why not? It allows you to increase efficiency (focus on core business activities instead of running a datacenter), cut costs (you don’t have physical datacenter costs), reduce risk (or pass it on to the third party), and scale when needed with (hopefully) minimal cost and fuss.
This is why many companies opt for outsourced computing services, but does that mean you no longer need to think about your datacenter? As in, out of sight, out of mind? You have third-party outsourcers, and they take care of you, right? But do they have your best interests at heart? Possibly not. It’s interesting to note that, according to a recent survey, 32 percent of companies don’t evaluate their third-party vendors. That should raise some red flags; at least I hope it does. Because let’s face it, what motivation does your third-party outsourcer have to help you control your outsourcing costs? Think about that for a minute.
There are ways to do this, but typically they involve extra work for accountants and IT staff who are on your payroll now and who are already pretty busy doing what they do. So do you hire more bodies to do it? That’s no fun. Hiring full-time staff dedicated to a project that you may or may not stick with long term is not smart HR management. Alternatively, you could hire temporary staff or contractors, but that means extra cost and possibly losing expertise you need if and when you decide to maintain the activity and the personnel move on. So where does that leave you?
Well, there is some good news – there are tools out there that can help you keep tabs on your outsourced server costs. There are also some that will help you keep tabs on both your outsourced servers and your in-house servers – using the same toolset. One or two toolsets out there will even help you keep tabs on all your servers and your mainframe systems, if you have those, as well. So if you want IT cost transparency, you can get that. And really, what responsible CIO – or CFO for that matter – doesn’t want that?
The truth is that you can save a tremendous amount of money just by right-sizing your servers, both onsite and outsourced. That means reconfiguring your servers so that they are provisioned with the cores and memory needed to handle their expected workloads (with a suitable buffer). If you were to discover that most of your servers were over-provisioned by 50% or worse, that could translate into a serious amount of money over a month, or a year. Datacenters will typically charge you a fixed base price per server, plus a price for each core, GB of RAM, GB of disk, and so on. Now think about several thousand servers over-provisioned by 50% or more.
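The arithmetic behind that claim is straightforward. A minimal sketch, using invented per-core and per-GB monthly rates under the base-plus-components pricing model just described:

```python
# Back-of-the-envelope right-sizing math: the monthly cost of cores and
# RAM that are provisioned but not needed. All rates are hypothetical.

def monthly_waste(servers, core_rate=20.0, ram_rate=5.0):
    """Sum the monthly cost of provisioned-but-unneeded cores and GB of RAM."""
    waste = 0.0
    for cores, cores_needed, ram_gb, ram_needed in servers:
        waste += (cores - cores_needed) * core_rate
        waste += (ram_gb - ram_needed) * ram_rate
    return waste

# Two servers, each provisioned at roughly double what the workload needs.
fleet = [
    (16, 8, 64, 32),    # (cores, cores needed, GB RAM, GB RAM needed)
    (32, 16, 128, 64),
]
print(monthly_waste(fleet))  # multiply by thousands of servers to see the scale
```

Even at these made-up rates, two over-provisioned servers waste hundreds of dollars a month; scale that to a few thousand servers and the annual figure speaks for itself.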
The situation on the mainframe side of the datacenter is somewhat different, and more complex. For one thing, mainframe costs are typically based on MSUs/MIPS and CPU usage and, more importantly, they are tallied as a monthly total, some variety of monthly peak, a peak rolling four-hour average, or another capacity-related metric. Transparency into cost information, particularly in outsourcing scenarios, is therefore critical. IT organizations often feel almost blind when dealing with mainframe monthly billing, and that goes double for those who outsource their mainframe usage.
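To see why mainframe billing feels opaque, consider the peak rolling four-hour average, a common capacity metric: the bill follows the smoothed peak, not total usage. A minimal sketch computed from hourly MSU samples (the sample data is made up):

```python
# Sketch of a peak rolling four-hour average (R4HA) of MSU consumption,
# a common capacity-based billing metric, computed from hourly samples.

def r4ha_peak(hourly_msu, window=4):
    """Return the peak of the rolling `window`-hour average of MSU samples."""
    if len(hourly_msu) < window:
        raise ValueError("need at least one full window of samples")
    averages = [sum(hourly_msu[i:i + window]) / window
                for i in range(len(hourly_msu) - window + 1)]
    return max(averages)

# A quiet day with one sustained batch-window spike: billing tracks the
# smoothed peak (405 MSU here), not the 420 MSU instantaneous maximum.
samples = [100, 100, 100, 100, 400, 420, 410, 390, 100, 100]
print(r4ha_peak(samples))
```

This is why shifting a batch workload by a few hours, so spikes don’t stack inside one window, can lower the bill without reducing total work done.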
Finding ways to control mainframe costs is important whether you’re outsourcing or not. Transparency into your mainframe data can help you eliminate usage glitches, shift workloads, adjust available capacity, control application resource usage, and much more. Adding business data (company name, department name, resource cost values, etc.) will greatly enhance your ability to stay on top of both outsourced and internal costs.
So who’s going to look out for you? This is up to you, and you need the right tools and expertise to do it. The tools are available; you just need to invest some time to discover just how much money you’re leaving on your IT table.
The need for modern mobile solutions for mainframe data centers
Today most CIOs, CTOs, and IT experts are aware of the impact of digital transformation (DX) on their respective businesses and IT organizations. Disruptive technologies like Big Data, business analytics, cloud computing and storage, digital payments, and more recently the algorithmic economy and the Internet of Things (IoT) have made, and are still making, their impact felt. However, the most impactful of them all may very well be mobile.
Mobile – the biggest digital transformation
Mobile technology has been thrust upon businesses by a demanding and changing customer dynamic, as millennials make their presence felt as consumers of goods and services. As you probably already know, most banks today offer at least some measure of mobile access to their customers. Any bank refusing to comply with customer demands would surely lose a portion of its customer base to its competitors.
More and more retailers are following suit – think of an online experience that includes only a sample of products offered, compared to the Walmart or Costco online experience. The difference is very much apparent, and the retail laggards suffer. Even some government departments are getting into the game.
Any business that deals directly with customers and is seen as somehow deficient in providing acceptable mobile access is probably leaving money on the table, and more than likely risking ongoing loss of market share to the competition.
A digital strategy
A recent Forrester study concludes that most companies believe they have a digital strategy to address their customers’ mobile demands, but in reality they do not have the processes in place to execute it. This can result in approaching a critically important business objective, like the company’s mobile presence, in a limited and cautious way, resulting in delay and almost certainly low customer satisfaction. In essence, failure.
The customer has spoken
A recent Gartner study identified actual customer reactions to unacceptable mobile experiences: 52% said that a bad mobile experience made them less likely to engage with a company; 55% said that a frustrating mobile experience hurt their opinion of the company; and a full 66% said they were disappointed by a bad mobile experience from a company, even if they liked the brand.
So why aren’t you providing good mobile access to your customers?
If you’re like other organizations running mainframe systems in their data centers, there is one major reason for this seemingly unacceptable business practice. You can only offer limited mobile access because your business intellectual property (IP) – the means through which you interact with your customers – is locked away in your legacy mainframe applications.
Making matters worse, the expertise that helped build these (largely COBOL) applications over the past two, three, or even four decades, is in short supply. In extreme cases, some businesses have none of this expertise on staff, and have been living with this business risk for many years.
What to do about it
This situation can lead to knee-jerk reactions that are not well thought out, or are made without gathering all of the information available. First, if you’re running out of COBOL expertise, does that mean you should migrate all business processing off of the mainframe? Remember that your mainframe is responsible for perhaps 75% of your revenue. Migrating off your primary revenue system might not be the most prudent action, and it may not be necessary at all.
Second, if you migrate off the mainframe, doesn’t that mean you’ll have to re-engineer decades’ worth of IP from the ground up? (The answer to that question is yes; some vendors will tell you that you won’t, but unless you’re running small, simple mainframe applications, you will.) What would it cost to recreate decades’ worth of IP, especially if you have difficulty extracting it from your legacy COBOL applications? The answer is often in the hundreds of millions of dollars or more; in fact, there are many failed-migration horror stories quietly circulating out there.
Finally, to obtain the security, reliability (five nines), performance, and throughput capacity designed into mainframe systems, you’ll have to carefully build all of that into any project that uses another platform. You lose the “cost advantage” you thought you were getting by moving to a “cheaper platform.” You must then ask yourself just how much time, capital expense, and risk you’re willing to invest in a migration.
The good news is that this is a solvable problem. And it doesn’t involve a forklift, nor does it involve a mega-reverse engineering project to rediscover your own IP.
Getting to where you need to be
Take the shortest (and most cost-effective) path to providing the mobile access to your services that your customers demand. You have decades’ worth of IP wrapped up in your legacy COBOL applications; why not leverage it as much as you can? There are solutions available today that allow you to do just that.
Optimize what you have now, and then build on what you have now – to get to where you need to be quickly, and for as little cost as possible.
In many IT organizations with multi-platform data centers, the mainframe has been in a silo for some time – no new hires, and upgrades only when transaction processing workloads demand them – where cost control is paramount. For the most part, the mainframe is not part of any DevOps initiative, and new development, or new spending of any kind, is discouraged; new development is focused on other platforms.
The truth is, however, that the mainframe still accounts for the lion’s share of corporate revenue. If you want to implement mobile accessibility that leverages the mainframe, you’re going to need to optimize what you have now. And that’s not really a bad thing.
Optimization means increasing efficiency, using fewer system resources to do the same or more processing, and bringing operations costs down as much as possible – you’re not going to get an argument from the CFO on that one. New mobile apps are going to be used by your customers, and that will mean more transactions to process, so you need to start with a solid foundation. Optimization will do that for you.
Within this context, optimization needs to be cost-effective, it must either improve performance or at least not make things worse, and above all, it needs to be low-risk and relatively quick to implement. Ideally, it should also come with a positive ROI and a fast payback. Optimization solutions like this are available, but you have to shop around. Generally, smaller vendors will have the best targeted solutions, while the larger vendors will try to sell you big-dollar suites containing a lot of “shelf-ware”.
Then build on what you have
The first key to building mobile apps quickly is to leverage the existing business logic currently running in your mainframe COBOL applications. The second key is a modern hybrid solution that will leverage your existing mainframe applications and databases virtually unchanged.
Such a solution will add no extra burden to whatever COBOL expertise you have now, and you’ll be able to create new mobile apps or new application logic (or anything else) using millennial programmers and the toolsets that they feel most comfortable using. The new apps can run anywhere – on mid-range servers, mobile devices, or even on the mainframe on LinuxONE.
There is no need to re-invent the wheel.
How the hybrid solution works
Your legacy batch programs, data storage, transaction programs and interfaces are unchanged. The only change is an integration layer on the mainframe that can be used to define and deliver customized APIs that drive the new mobile interfaces. Applications developed on distributed systems and running on mobile devices (or the web, or anywhere else) access mainframe assets using TCP/IP or MQ, through the new integration layer. Legacy CICS applications respond by accessing mainframe data as required, and returning results to the requesting mobile/distributed application. In this way, new mobile applications can be written and brought online as fast as they are ready. There are no infrastructure change costs, no new COBOL projects, and no application rewrites – just a modest cost for an integration solution that allows seamless mobile access to existing mainframe assets.
The solution allows your millennial programmers to leverage your existing assets, without reinventing the wheel. Modern skill sets (JS, Java, .NET, etc.) can be used to build new apps on mobile systems that will access existing mainframe applications. Tight but powerful APIs will allow new applications to communicate with the legacy mainframe assets. New applications can also be written (running on distributed servers) to augment the legacy mainframe applications.
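As a rough sketch of what such a call might look like from the distributed side (the gateway host, URL shape, and the “BALQ” transaction code here are all hypothetical – the point is that the request is ordinary JSON over HTTP, while the COBOL/CICS program behind the integration layer is untouched):

```python
import json
import urllib.request

# Hypothetical gateway exposed by the mainframe integration layer.
API_BASE = "https://mainframe-gw.example.com/api/v1"

def build_balance_request(account_id: str) -> urllib.request.Request:
    """Package a balance inquiry for the integration layer, which maps
    it onto an existing CICS transaction (imagined here as 'BALQ')."""
    payload = json.dumps({"transaction": "BALQ", "account": account_id}).encode()
    return urllib.request.Request(
        f"{API_BASE}/accounts/{account_id}/balance",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A mobile back end would send this with urllib.request.urlopen(req)
# and parse the JSON reply assembled by the legacy application.
req = build_balance_request("0012345678")
```

The mobile developer never sees COBOL, copybooks, or 3270 screens – just a REST-style API – which is precisely why no extra burden lands on the remaining mainframe staff.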
Results using a modern hybrid development solution
The solution removes the need to re-invent the wheel – the cost of a reverse engineering project is avoided, as is the cost of a forklift infrastructure swap-out. The difference in development effort is remarkable: using contemporary thinking and a modern hybrid development solution, it is orders of magnitude less.
Equally impressive is the time needed to implement a mobile project – weeks or months using the modern low-risk hybrid development solution, versus years using high-risk migration alternatives. And no less important is the improved customer satisfaction – you’re delivering the user experience that your customers demand.
A modern hybrid development solution is the quickest, least risky and most cost-effective way to provide your customers with the user experience they expect: mobile access to your services. You leverage your own IP without re-inventing the wheel. The technique involves optimizing your current assets, and then building on what you have now. And your millennial programmers do most of the work.
A few years ago, I invited a group of CIOs to become an Advisory Board for my company. Their role was to advise us on the direction the company should go, and how we should position our products and services most effectively from the perspective of the Fortune 500 CIO. To this day, we still engage our Advisory Board on many subjects.
Our Board members have helped us both strategically and tactically, and have made a big difference. A while back, we talked to them about an initiative that our internal marketing management team came up with: controlling the cost of business transactions. Seemed like an obvious thing to us, but our CIO Advisory Board—to a man—dismissed the idea, and convinced us to drop it. They just didn’t see how it was important.
Generally, we go with the recommendations of our Board members, but in this case, this year we’re going to proceed with the cost-per-transaction concept. The reason is that, two years later, it still makes sense to us. Now our challenge is to make the normally tight-fisted banking CFOs see that reducing the cost of their transactions can make a difference to them and their businesses.
And that brings me to an unpopular subject for everyone who has money in the bank—and that means everyone reading this, and most people on the planet—banking service charges. Some of you may be old enough to remember that at one time there were far fewer service charges. Now, of course, we pay service charges on every account, and pretty much every transaction. Think about it: you pay for the privilege of having a bank account (!); you pay to use an ATM; you pay to get monthly statements; you pay to write a check, to order new checks, and for overdraft protection; and you pay to transfer money from one of your accounts to another, and so on—the list never really ends.
The point is that banks understand the idea that if you charge a small amount of money on all or most transactions, it adds up to a tremendous amount of money. It shouldn’t be much of a stretch to see that saving a small amount of money on every transaction could make a big difference to the bottom line as well—every bit as much as any new service charge can. The difference is that saving money on transactions doesn’t adversely affect the bank’s customer base in any way, unlike a new service charge.
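The arithmetic is worth making concrete. Suppose, purely hypothetically, a bank clears 200 million transactions a day and an optimization shaves a twentieth of a cent off each one:

```python
# Hypothetical figures, for illustration only.
transactions_per_day = 200_000_000
saving_per_transaction = 0.0005  # dollars: one twentieth of a cent

daily_saving = transactions_per_day * saving_per_transaction
annual_saving = daily_saving * 365
```

Small per-transaction savings multiplied across a bank’s volume behave exactly like service charges in reverse – roughly $100,000 a day in this made-up case, with nobody’s account fees going up.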
For the most part, banks—at least the larger banks—use mainframe systems at the heart of their transaction processing infrastructure. They do this for several reasons. Firstly, their transaction processing applications were originally designed to run on mainframe systems: they are optimized to run on the mainframe, on the z/OS mainframe operating system. Secondly, the banks have been developing these applications for years, adding to them, improving them, and updating them as they make changes to their customer product offerings—these applications are under continuous development. And thirdly, the platform is perfect for transaction processing.
The mainframe has grown and improved over the years, right along with banks’ growing and changing needs. It is the most secure platform—pretty important for banking. It has a superior transaction throughput capacity when compared to any other platform because that is exactly what it was designed to do. It also happens to be the most cost-effective platform when running the types of workloads typical of large banks, credit card processors, financial services organizations, etc.
A segue on cost effectiveness: mainframe systems, when configured for five 9s reliability and redundancy, when configured for extremely high transaction throughput, high levels of security, etc., cost less to procure and to run than comparably configured non-mainframe platform networks. Mainframe systems use less power, require far less data center real estate and, most importantly, require far fewer of the costly technical personnel needed to support them. End segue.
So how do you optimize transactions that are running on the best transaction processing platform on the planet? Well, without getting too technical, here are just a few ways:
High-performance in-memory technology – Faster than caching or database buffering, and faster than in-memory databases, it bypasses database overhead for small amounts of data that are accessed thousands of times more often than 99 percent of system data. This technology has been in use for almost 30 years, and is the best-kept secret in mainframe computing. Faster computing increases transaction processing capacity and reduces the cost per transaction.
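A toy illustration of the principle (not any particular vendor’s product): if a handful of reference rows are read millions of times a day, pinning them in an in-process structure means the hot path pays for a memory access instead of a database call:

```python
import time

# Hypothetical hot reference data: a few rows, read constantly.
CURRENCY_RATES = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}

def db_lookup(code: str) -> float:
    """Stand-in for a database round trip, buffer pool and all."""
    time.sleep(0.001)  # simulated I/O and database path length
    return {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}[code]

def in_memory_lookup(code: str) -> float:
    """Plain dictionary access: no I/O, no database overhead."""
    return CURRENCY_RATES[code]
```

Per call the difference is milliseconds versus nanoseconds; multiplied across millions of transactions, that is where the capacity gain – and the cost-per-transaction reduction – comes from.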
Automated resource sharing – Mainframe systems are often divided into logical partitions that help delineate use of mainframe computing resources within an organization. Automated resource sharing allows high-priority processing to borrow resources between logical partitions. This ensures that business-critical processing can run faster without the need for temporary (and costly!) processing capacity increases, and without having high-priority processing held up while lower-priority processing completes in another partition.
Smart, high-performance data mobility – While the mainframe lies at the heart of most large banks, all have large distributed computing networks as well, and mainframe data is needed on these systems (think Big Data, analytics, IoT, etc.). While there are plenty of enterprise-level data replication solutions out there, few are optimized for providing mainframe data for other platforms in real time. Better data mobility indirectly reduces the cost per transaction.
Driving down transaction costs
These types of solutions, and many more, can help to bring down the cost of each transaction that a bank runs – every single one – from a customer’s ATM session, to customer debit card transactions, to foreign exchange transactions, to the millions of reconciliation batch transactions that are run at 3 AM. The cost per transaction can be driven down further by combining these solutions.
The truth is that every cent—or hundredth of a cent—that can be saved on a transaction passes directly to the bottom line, just as certainly as new income does for any new service charge dreamed up by an up-and-coming banking executive. As banks hoard their cash—as most have been doing since 2008—does it really matter where this cash comes from? And does it make any sense at all to leave some of that cash on the table?
Once a month we like to pick articles from other blogs that we feel are interesting enough to talk about on the Planet Mainframe blog. Here are this month’s picks:
Data Lineage Matters More Than Ever
Very interesting article by Denise Kalm in a recent edition of the DestinationZ online publication: “Data Lineage Matters More Than Ever”. Data lineage can be thought of as the lifecycle of your data. Where does it come from? Where is any given piece of data moving to? Where does it end up? It’s all about understanding how data is used and where it’s being used. In many cases, the path is quite complex and will vary based on inputs to the transaction and other state data.
Hybrid Cloud to Streamline IBM Z
Another very interesting article by Allan Radding on his Dancing Dinosaur blog: “Hybrid Cloud to Streamline IBM Z”. In it, he tells us that the IBM Z Hybrid Cloud Architecture can be the basis for a complete corporate cloud strategy that encompasses the mainframe, private cloud, and various public cloud offerings, consistent with the message that IBM has always spouted – center the core of your business on the mainframe and then build around it.
The Mainframe is DEAD; Long live the mainframe!
Interesting article by Marcel den Hartog recently on LinkedIn Pulse: “The Mainframe is DEAD; Long live the mainframe!” He is correct that mainframe shops need the help of the unique innovations contributed by the mainframe software provider ecosystem – and his company, CA, is definitely one of them – but don’t forget BMC, DataKinetics, Compuware, Rocket Software, just to name a few others!
Ten Agreements That Can Save You Hundreds Of Thousands Of Dollars
Very interesting article by Cheryl Watson recently on the CMG blog: “Ten Agreements That Can Save You Hundreds Of Thousands Of Dollars”. In it she tells us about how to stay on top of your outsourcers by communicating with them, and putting in place agreements that promote understanding in both directions.
Timing, timekeeping, and time synchronization – more specifically, accurate synchronization – are key requirements for modern information technology systems. This is especially true for industries involved in transaction processing, such as the financial industry. As such, the IBM Z sysplex needs highly accurate timekeeping and synchronization technology to ensure data integrity, and to provide the ability to reconstruct a database from its logs. There have been three phases in the evolution of time synchronization on IBM Z.
Phase 1: The 9037 External Time Reference (ETR), aka the Sysplex Timer, was introduced with Parallel Sysplex. An ETR network comprised one or two 9037 devices, which synchronized time among themselves. The ETRs provided time services for the zSeries CECs. Support for the 9037 ended with the z10 mainframes.
Phase 2 began with Server Time Protocol (STP). STP is a message-based protocol, similar to the industry-standard Network Time Protocol (NTP). STP allowed a collection of IBM mainframes to maintain time synchronization with each other using a time value known as Coordinated Server Time (CST). The network of servers is known as a Coordinated Timing Network (CTN). STP improved time synchronization compared to the ETR, and it also scaled much better over longer distances of up to 200 km. STP could co-exist with an ETR network (known as a mixed CTN), or it could be an STP-only CTN. The mainframe’s Hardware Management Console (HMC) plays a critical role with STP CTNs: the HMC can initialize CST manually or initialize CST to an external time source. The HMC also sets the time zone, daylight savings time, and leap seconds offsets, and performs time adjustments when needed.
STP currently has two external time source options. One is an external NTP server, which provides up to 100 milliseconds (ms) accuracy. The second external time source option, which is what I like to term phase 3, is the use of the NTP server with pulse per second (PPS). PPS provides 10 microseconds (μs) of accuracy, and requires a connection directly to the oscillator.
Recent changes in U.S. and European regulations have brought increasing attention to time synchronization accuracy. The US-based Financial Industry Regulatory Authority (FINRA) is a not-for-profit entity that is responsible for overseeing US brokerage firms and works closely with the SEC. In August 2016, the Securities and Exchange Commission (SEC) approved a rule change to require synchronization to within 50 ms of Coordinated Universal Time (UTC), as well as requiring audit log capability to prove compliance. At the same time, the European Securities and Markets Authority (ESMA) came out with a revised directive known as the Markets in Financial Instruments Directive II (MiFID II). MiFID II applies to any organization dealing in European stocks, commodities, and bonds. MiFID II requires time synchronization to within 100 μs of UTC.
These regulations have spurred a renewed interest in another time synchronization protocol, initially introduced in 2002. IEEE 1588 Precision Time Protocol (PTP) enables heterogeneous systems that include clocks of various inherent precision, resolution, and stability to synchronize to a grandmaster clock. PTP is a message-based time transfer protocol that enables synchronization accuracy and precision in the sub-microsecond range for packet-based networked systems, with minimal network bandwidth and local clock computing resources. The standard was first released in 2002 as Version 1 and then revised in 2008 as Version 2; the IEEE Standards Association is currently leading an initiative to update the standard again (Version 3). PTP has played an important role in power transmission, telecommunications, and laboratories, and it is starting to find its way into computer networking, with increasing support in hardware such as switches. Will PTP become increasingly adopted in the financial sector as a way to better comply with the new regulations?
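The heart of both PTP and NTP-style synchronization is a two-way timestamp exchange: from four timestamps, a clock can solve for both the network path delay and its own offset from the reference clock. A minimal sketch of that arithmetic, assuming an idealized symmetric path:

```python
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: master sends a sync message; t2: slave receives it.
    t3: slave sends its request back; t4: master receives it.
    Assumes the forward and return paths take equal time."""
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    return offset, delay

# Example: a slave clock running 150 µs ahead of the master
# across a 400 µs one-way network path.
offset, delay = offset_and_delay(0.0, 0.000550, 0.001000, 0.001250)
```

Any asymmetry between the two path directions lands directly in the offset estimate, which is why PPS feeds and PTP-aware network hardware matter once the target is tens of microseconds or better.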
Mainframe files/datasets change quite often. After all, people are using those files, so they are bound to be modified and updated. So how can you tell whether a change to a file was legitimate, or whether the file was hacked? How can you verify that your z/OS files have remained intact and your data is secure?
The answer seems to be to use File Integrity Monitoring (FIM) software. FIM software is quite prominent on non-mainframe platforms and is only now beginning to be utilized on z/OS. The software can monitor files and configuration items for unexpected changes, including changes to:
Privileges and security settings
Core attributes and size
File Integrity Monitoring makes sense on non-mainframe platforms, which don’t have the incredible level of security that comes out of the box with a z14 processor – so why use it on a mainframe? One big reason, not surprisingly, is compliance with regulations. These regulations include: PCI-DSS (Payment Card Industry Data Security Standard), GDPR (General Data Protection Regulation), SOX (Sarbanes-Oxley Act), NERC CIP, FISMA (Federal Information Security Management Act), HIPAA (Health Insurance Portability and Accountability Act), NIST (National Institute of Standards and Technology), and SANS Critical Security Controls. As an example, PCI DSS Requirement 11.5 requires that a FIM solution be in place on all platforms, including mainframes.
To comply with these regulations, mainframes need to be able to verify that z/OS files remain intact and data is secured. They also need to be able to continually scan and verify mainframe files, and log the results for auditing. That way they can produce verifiable proof that file integrity has been maintained. In fact, the FIM software installed should make it possible for either an internal or external compliance auditor to validate data integrity using an on-demand audit function.
Looking at that in a bit more detail: first, the file integrity monitoring software needs to be able to scan files on a schedule or on demand. Using the information from the scan, it needs to swiftly identify even minor changes (and that means within seconds), and then send an alert to an existing SAF interface (System Authorization Facility, e.g. RACF) or a SIEM (Security Information and Event Management) system. Second, if nothing has changed in the files, the FIM software needs to be able to provide immediate and conclusive evidence that the mainframe environment is unaltered. And third, it needs to create a full audit trail so that there’s plenty of documented evidence to help prove compliance with the ever-increasing data security standards that apply to mainframe operations.
So what exactly should your mainframe’s file integrity monitoring software be watching for unauthorized or unrecognized changes to? The answer would need to include:
Executable programs and libraries
JCL, HTML, panels, scripts, rate tables
Configuration and control members
Log files such as SMF, Db2, IMS
Other sequential files and GDGs.
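At its core, the scan such software performs can be sketched in a few lines: hash every monitored file, compare against a trusted baseline, and raise an alert on any mismatch. This is a simplified illustration of the idea, not any particular FIM product:

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 fingerprint of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(monitored: list, baseline: dict) -> list:
    """Compare current state against a trusted baseline; return alert
    strings (a real product would route these to the SAF/SIEM and
    write them to the audit log)."""
    alerts = []
    for path in monitored:
        if not path.exists():
            alerts.append(f"MISSING: {path}")
        elif str(path) not in baseline:
            alerts.append(f"NEW: {path}")
        elif hash_file(path) != baseline[str(path)]:
            alerts.append(f"CHANGED: {path}")
    return alerts
```

An empty result is itself valuable: it is exactly the immediate, conclusive evidence of an unaltered environment that auditors ask for.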
One problem with traditional reporting tools on a mainframe is that they tend to run overnight, with the reports looked at the following morning. In a world where mainframes are still less likely to be hacked than other platforms, that might be OK. But what’s to stop someone with a legitimate reason for accessing the mainframe from making illegal changes to files? The answer is probably nothing, because they won’t be suspected until the next day. And at many sites, the person who looks through those nightly reports is likely to be the very person with the skillset to hack the mainframe’s files – the fox in charge of the hen house.
Installing file integrity monitoring software brings with it two benefits. It gives you peace of mind that your files have not been maliciously attacked and your data is secure. And it means that you are compliant with the regulations affecting your industry.
I recently read an interesting article on a vendor website (I got there after one of their spam emails caught my eye): “8 Signs You Need to Replace Your Legacy Enterprise Software”. I had to read that! I know a little bit about legacy software, and I was really curious about what they had to say. I have to admit that I was a little disappointed.
The 8 identified signs were:
It Doesn’t Do Everything You Ask It to Do
Performance Problems
The Technology Fix Isn’t In
You Want to Stay Competitive
Lack of External Support
Lack of Internal Support
User Unfriendliness
Lack of Mobility
First sign, “It Doesn’t Do Everything You Ask It to Do” – definitely a fair statement. How many of your favorite legacy applications do that? Not many. But how many of your desktop business applications do that? For that matter, how many of your fave smartphone apps do everything you ask of them? Microsoft Word gets me riled at least once a week – am I going to look for a replacement? Probably not, since most people I deal with also use MSW, and as much as I dislike it, MSW does give me a ubiquitous format that makes document sharing pretty straightforward. So I’m not moving on from it. My fave smartphone sports app doesn’t give me all the Canadian scores. Will I switch? Sure, if another app gives me everything it does PLUS the Canadian scores. Haven’t found one yet. Not switching. So your legacy software doesn’t do everything you want? That’s not a shocker. But I’m going to recommend you look for the best option in the long run; I don’t recommend just dumping it and going through the pain, cost, and risk without making a well-informed decision.
Second sign was “Performance Problems” – which is also fair. Many things can cause performance problems in uber-complex IT systems, and they aren’t always related to your software being ‘legacy’. More often than not, performance problems are curable with the right amount of attention to detail. Certainly new technology solutions have been hampered by performance problems – it’s an ongoing concern on legacy systems, contemporary systems, and the newest systems. Heck, it’s even a concern on my brand new smartphone.
Third, “The Technology Fix Isn’t In” was a little weak to say the least, especially with this statement: “you can either migrate to a new system that can provide the needed capabilities, or simply forego adopting the new technology.” Um. No. There are always options. Hybrid solutions, anyone? Elegant interfaces between new technology and legacy systems? Your bank’s mainframe systems didn’t have to go away when everyone wanted mobile access to their bank accounts. When you pay a bill or make a purchase using your smartphone, it is actually accessing some bank’s legacy mainframe – every time.
Number four, “You Want to Stay Competitive” – that’s fair. Customers want to track the status of an order online without having to pick up the phone. Front-end interfaces to legacy systems have been doing that for years.
Five, “Lack of External Support” is a real concern. However, it is probably a solvable problem. The skills gap represents the need for a plan, not necessarily the need for a technology swap-out. This article speaks well to the point: “Making a Modern Mainframer”. The main point being: what, new programmers aren’t smart enough to learn COBOL?
Six, “Lack of Internal Support” – similar problem, similar solutions. Not to belittle these challenges – they’re serious enough – but the need is to formulate a good plan and execute. This is a problem that can be solved in many ways, and a migration from legacy systems is just one of those solutions.
Seven, “User Unfriendliness” – this is a challenge, but far from an insurmountable one. It is being solved now, and has been by many IT organizations in recent years. Customers’ mobile needs are a serious disruptive force that must be taken seriously – and they have been, while using legacy systems. There are solutions available now from IBM and other vendors that allow millennial programmers to create new mobile apps that leverage legacy applications without altering them – in effect, providing user-friendly interfaces for legacy applications at a fraction of the cost, time, and risk of developing brand new replacement systems from the ground up.
Eight, “Lack of Mobility” – see above.
To be fair to this company, they do state that they have no stake in whether an organization running legacy systems migrates or not. However, the entire article does a pretty good job of convincing someone to do just that. That is, someone highly motivated to migrate from legacy systems, or someone unaware that there are many options to consider besides a migration project.