Overview: Forward-thinking technology and business leaders joined DevOps gurus and thought leaders for an immersive multi-day smörgåsbord of learning opportunities, otherwise known as DOES 2019 London, and some interesting trends emerged.

If I had to describe the gold standard for a professional development event, it would go something like this: high-energy immersive education, inspiring keynote presentations, boundary-pushing thought leadership, good old-fashioned fun and, importantly, unicorn socks. That’s right, unicorn socks. I’m describing, of course, the recent DevOps Enterprise Summit 2019 (DOES) in London, which brought together forward-thinking technology and business leaders from around the world for an incredibly worthwhile and memorable event.

In going through my notes, some distinct trends bubbled up to the top that are worth recapping.

The Days of Bimodal IT Are Over

The Theory of Constraints says that a chain is only as strong as its weakest link. It should come as no surprise that many of the speakers, DevOps ambassadors and evangelists that they are, had the same message: anything that is slowing you down needs attention.

It’s almost hard to believe that not so long ago, top-tier analyst firms like Gartner were heavily promoting the ill-conceived two-speed—aka Bimodal IT—approach. But thankfully those days are (mostly) gone. Companies like Lloyds and American Airlines are clearly employing one-speed IT to drive their digital transformations. These trailblazers did a fantastic job of convincing the audience that mainframe-inclusive DevOps is not only doable but something they are already exploiting.

American Airlines’ mainframe delivery pipeline here at #DOES19. Powerful combo of @xebialabs release orchestration and @compuware

— Lisa K. Wells (@ProductPrincipl) June 27, 2019

American Airlines “DevOpsifyed” it’s #mainframe with Compuware ISPW which integrated with Xebialabs and leverages operational intelligence from @compuware zAdviser analytics #does19

— Sam Knutson (@samknutson) June 27, 2019

Employee Engagement is Key

Compuware CEO Chris O’Malley and CFO Joe Aho sat down with Gene Kim for a memorable Fireside Chat to discuss selling senior leadership on the value of DevOps. They discussed Compuware’s transformation from a declining Waterfall-entrenched company to a vibrant Agile organization that releases net new innovations and updates to classic offerings every 90 days. Chris stated that if you are going to change you need:

  • True grit—the courage to take on setbacks
  • Passion—talk about the good, the bad and the ugly
  • Perseverance—in the face of setbacks

If you’re going to be a part of the change you need 1. true grit – courage to take on setbacks 2. passion – talk about the good, the bad and the ugly 3. Perseverance in the face of setbacks – @chris_t_omalley | #DOES19

— #DOES19 London (@DOES_EUR) June 25, 2019

Employee engagement is also necessary to drive change. And once you’re where you want to be, employee engagement is your competitive differentiator. Chris further emphasized that giving Compuware’s employees a mission and purpose, solving customers’ most important problems, gave rise to a culture that drives innovation through measured risk-taking, iteration, feedback and transparency about successes as well as failures. Notably, after 19 quarters of innovation, the same people who were there at the start of the company’s journey are still working for Compuware. Joe Aho aptly noted that if you can get your employee engagement up, you will increase customer satisfaction, and cash flow will take care of itself.

Watch: Fireside chat with Chris O’Malley and Joe Aho of Compuware (YouTube)

Stop Calling Them Kids

I wish this wasn’t a theme, but I repeatedly heard some large companies refer to their millennial IT staff as “kids.” Personally, I find this very insulting. Next year the eldest of these next-gen pros will be turning forty. That’s 4-0, not 1-4. They have an established career, most likely a house and a family—and they are delivering the technology that provides your company the competitive edge it needs. And, if they aren’t already, these so-called “kids” are the ones who are going to be driving your digital transformation, whether by execution, sourcing new ideas, or assuming leadership roles where they will most certainly make decisions that affect the future of your organization. They deserve respect starting right now.

Companies are Looking to Identify Next Steps and Best Practices

Where previous conferences have focused on more elementary information about DevOps (what it is, how it works, etc.), the general themes of this event were more mature.

Many early adopters of enterprise DevOps have progressed on their journeys and are stepping forward to share their best practices as well as their failures. It’s incredibly encouraging to see the community speak so openly about the good and the bad in hopes that others will learn from their mistakes. It was also refreshing to see each of the companies presenting tell the audience where they needed help, with the hope that audience members would be able to help them solve that challenge so they could move on to the next step of their journey.

Working for a company that went from 40+ years of waterfall to Agile, I can tell you the learning never stops. And while hearing about others’ journeys is critical, organizations must forge their own paths. Resources like the DevOps Guidebook Series are very helpful.

It comes down to this: in business you’ve got to keep the main thing, the main thing. And what is the main thing? Your customers. If you can’t inspire employee engagement by emboldening your teams with a worthy mission to always be innovating on behalf of customers—and equipping them with the means to create that innovation—then you’ll ultimately lose. Conferences like the DevOps Enterprise Summit both inspire and educate organizations about the value of DevOps, which is so critical to the development and delivery of those customer innovations.

A big thanks goes out to Gene Kim and his team for delivering another excellent conference!

See you next year!

The post Reflections on DOES 2019: What Happened in London Shouldn’t Stay in London appeared first on Compuware.

Overview: Leading companies got together in Detroit recently for the Compuware Customer Advisory Council. The feedback was fantastic, but the real magic happened when the technical leads opened up and began sharing.

Compuware recently held its Customer Advisory Council at the brand new Shinola Hotel in Detroit. With more than 60 technical leads representing over 20 companies, it was a rare opportunity for us to present and get feedback on our future directions. But, as is often the case, the most interesting conversations occurred when the 60 technical leads started talking among themselves.

Mainframe discussions tend towards the existential. With the mainframe about as far from being the shiny new thing as possible, mainframe technologists have to fight for a seat at the table. This requires a lot of hard bark, since the recent history of the mainframe at many of these organizations involves benign neglect and sunk-cost attitudes. But the tide is turning.

Prevailing Headwinds

Some common threads emerged. One shared challenge is that their industries are in disruption. Companies with mainframes tend to be in very traditional businesses: banking, finance, retail, insurance and government. These industries represent the lion’s share of worldwide GDP and are prime candidates for disruption. New companies, both startups and established players, are coming into these industries with expertise steeped in software. These new companies are betting they can learn the ins and outs of the industry before the traditional companies can learn the ins and outs of rapid software delivery. They view innovation as their competitive advantage.

Another shared challenge involves evangelizing to the C-level. Most executives are aware when their industry is being challenged by new competitors, but often the reaction is to double down on cost cutting, even if the result is sacrificing some market space. This is exactly the opening these new companies are looking for, best explained by Clay Christensen and his philosophy of disruptive innovation. But fighting this attitude requires constant vigilance.

Finally, a more grass-roots challenge is getting and growing a mainframe talent base. The demographics of COBOL programmers are daunting, and schools are not rushing in to fill that vacuum. Instead, companies must use non-traditional strategies to fill these positions.

Unique Advantages

Despite these headwinds, traditional mainframe-based companies are not without their advantages, if they can leverage them. First and foremost, they have a strong understanding of their customers. And customer needs are stable over time: customers will never stop wanting experiences that amaze and delight, regardless of industry (read what the world’s biggest disrupter Jeff Bezos recently said about customer needs). Winners will emerge, and the primary differentiator will be obsession with customer satisfaction.

Another advantage is data. In today’s world data is king, and these companies have unmatched historical data around their customers. They know what initiatives worked with their customers and what initiatives didn’t; they know where customers spend their time on their sites; they have accrued history on their customers. This is their opportunity to tap into that historical data, unavailable to newcomers to the industry, to maximize the value returned to their customers.

Finally, COBOL provides an interesting advantage. The language is esoteric (although not difficult), but it is also highly connected to IBM mainframes. Possibly more than any other high-level programming language, COBOL is specifically tuned to be highly performant on IBM Z hardware. This is not an advantage to be taken lightly; quick response time is one of those customer requirements that remain stable over time.

New Reality; New Opportunities

Disruption is the new reality; there will be winners and losers. The crux of this competition will be how well the existing mainframe companies use their advantages to thwart the disruption challenges from emerging companies. These decisions are up to each individual company as it builds its specific go-forward strategy. The companies that joined us at the Compuware Customer Advisory Council are in the “what” business: deciding the strategy that best serves their customers and defends and grows their business.

Compuware is not in the “what” business; we’re in the “how” business. We’re intensely focused on providing these mainframe companies the capabilities to innovate quickly while reducing risk, to entice the best and brightest to the mainframe, and to reinvent and reinvigorate the mainframe.

But I think both Compuware and our customers walked away from this customer council with a common understanding, with the same confidence that this can be accomplished, and the same go-forward attitude. Let’s do this!

The post Talking Mainframe at the Compuware Customer Advisory Council appeared first on Compuware.

Overview: Companies that give their developers the right tools, put the right processes in place and measure for continuous improvement can speed and modernize mainframe software delivery.

In this Age of Software, there should be no room for doubt that continuous improvement in software delivery velocity, quality, and efficiency is critical to your organization’s ability to respond to disruptive threats from increasingly digital-enabled business models and to delight your customers with exceptional digital experiences.

With the likes of Amazon, Apple and Tesla entering markets traditionally owned by the world’s largest shipping, insurance, financial, manufacturing and healthcare organizations, comfortably conducting business within the confines of “business as usual” is no longer tenable. In fact, it’s a roadmap to systemic decline.

It’s because of this shift that today’s developers hold in their hands the ability to provide innovations your customers didn’t even know they needed. They hold the key to enabling bold business models and customer satisfaction in perpetuity.

For mainframe-heavy organizations that fail to comprehend the critical importance of software delivery, the consequences could be dire. Hope is no longer a strategy. Large enterprises must disrupt not just the work developers do, but how the work gets done.

The good news is that businesses that treat their developers like high-performance athletes—giving them modern tools, providing continuous, supportive feedback and focusing on improving KPIs around velocity, quality and efficiency—and that embrace mainframe Agile and DevOps methodologies are the ones driving unique, competitive value for their business and customers. These companies don’t see themselves as “stuck with legacy code”; rather, they see this code as containing decades of refined business logic, a precious asset to the business.

It’s through commitments like these that organizations can expect results similar to those of a large UK bank that, after embracing DevOps throughout its enterprise:

  • Reduced the time needed to test a large application from 2 weeks to 5 minutes
  • Cut project delivery timeframes by one-third
  • Increased overall developer productivity by 400%
  • Virtually eliminated defect leakage

Today, Compuware is delivering several advancements that help companies give their developers the right tools and put the right processes in place, ultimately speeding and modernizing mainframe software delivery. Specifically, we’re advancing cross-platform CI/CD with advanced machine learning and expanding our Git integration.

Advanced Machine Learning for Data-driven Decision-making

To continuously improve software delivery, feedback is absolutely essential. How do you truly know whether software delivery outcomes are improving, deteriorating, or staying the same? And how do you prioritize and resolve technical debt when it can feel like such an overwhelming and arduous task?

Compuware zAdviser is a service that leverages machine learning to help answer these questions. With the proper means to measure, assess and challenge in a continuous improvement journey, organizations can understand how they can become high performers.

New zAdviser analytics, when leveraged with Compuware ISPW, the leading mainframe CI/CD solution, shine a spotlight on the zones that should be prioritized for reducing technical debt and on the development practices that are helping or hindering productivity. The analysis answers questions such as:

  • Where are the constraints, such as technical debt and manual testing, that, if improved, could have the biggest positive impact on future flow?
  • What modules are contributing to our most problematic technical debt?
  • How much total developer time is actually spent developing and testing code?
  • How are regressions and fallbacks affecting development flow?

So, when you need to decide where to apply resources and how best to improve quality, velocity and efficiency, you can make decisions that are less “emotional” and more data-driven.
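
To make this concrete, here’s a minimal sketch of the kind of measurement involved, assuming a hypothetical CSV export of change-set lifecycle events. The file name, field names and event types are invented for illustration; zAdviser’s actual data model and machine-learning analysis are not shown.

```python
import csv
from collections import defaultdict
from datetime import datetime

def load_events(path):
    """Group lifecycle events (start/promote/regress/deploy) by change set."""
    events = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            events[row["change_id"]].append(
                (row["event"], datetime.fromisoformat(row["timestamp"])))
    return events

def delivery_kpis(events):
    """Compute mean lead time (days) and regression rate across change sets."""
    lead_times, regressions = [], 0
    for evts in events.values():
        started = [t for e, t in evts if e == "start"]
        deployed = [t for e, t in evts if e == "deploy"]
        if started and deployed:
            days = (max(deployed) - min(started)).total_seconds() / 86400
            lead_times.append(days)
        regressions += sum(1 for e, _ in evts if e == "regress")
    return {
        "mean_lead_time_days": sum(lead_times) / max(len(lead_times), 1),
        "regressions_per_change": regressions / max(len(events), 1),
    }

print(delivery_kpis(load_events("ispw_events.csv")))  # hypothetical export
```

Even toy KPIs like these replace gut feel with numbers; the real analytics correlate far more signals than this.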

Expanding Cross-platform CI/CD with ISPW Advancements and Integration with Git

Compuware’s mission is to mainstream the mainframe so that you can leverage your mainframe with ease and effectiveness. To do that, your developers with little mainframe experience must be able to confidently build, test and deploy mainframe source code and you must be able to integrate mainframe-focused tools into your multi-vendor, cross-platform DevOps toolchain of choice.

In keeping with this mission, we are expanding ISPW integration with Git version control software commonly used by distributed development teams.  By designing a developer experience focused on continuously improving software delivery velocity, quality and efficiency, and smartly integrating the virtues of best-in-class tools for all code across all platforms, you will gain consistent visibility into your diverse codebases and ease the process of managing code through the CI/CD pipeline.

Additionally, we are offering an expanded Promotion Analysis feature in ISPW that automatically identifies dependencies so that vast numbers of components can be deployed confidently. ISPW is earning the hearts and minds of customers because of continuous innovations such as these, that offer a preferred developer experience and new ways to ensure your mainframe code pipelines are secure, stable and streamlined throughout the DevOps lifecycle.

A Roadmap to Modernization

This is the 19th consecutive quarter that Compuware is delivering urgently needed innovation that empowers you to mainstream your mainframe for the digital age. And we are committed to continuously improving our ability to turn ideas that matter to you into deliverables that make a difference.

Instead of building a roadmap to systemic decline, let us help you build a roadmap for:

  • Continuous integration and continuous delivery
  • Automation and integration throughout your DevOps toolchain
  • In-depth application and operational analysis
  • Ongoing measurement towards improvement

No partner offers you more ways to get there—and safely put your most critical applications into the hands of today’s developers.

The post Why Developers Hold the Key to Customer Satisfaction appeared first on Compuware.

Overview: If you haven’t thought about Compuware Abend-AID in a while, you might be surprised about its enhancements. Learn about Abend-AID’s hidden treasure chest of capabilities.

When we’ve used a product for a long time, we can get a little complacent. The solution works and does what we want; we forget to carefully review the release updates, especially if we never have problems. Compuware Abend-AID is one of those tools. We’ve all used it for years and relied on it to solve application issues. For example, Abend-AID greatly improved my original dump-reading process, in which we sat around tables in the war room, reading boxes of printouts and making convoluted notes.

What’s Getting in Our Way?

As the years pass, larger mainframes are managed by fewer SysProgs, and more complex applications are managed by fewer application programmers. Who has the time to read all the documentation that comes with every release? You might check it out if you’re looking for a specific fix or feature you requested. But otherwise, it’s hard enough to find the cycles to install the update, let alone enable new functionality. Working as designed and doing exactly what we need it to do seems to be enough. Often, it doesn’t even occur to us that new features might have been added. If a product is older, stable and already full-featured, it’s easy to assume that nothing new is needed.

What Have We Been Missing?

Perhaps you weren’t aware of all the integrations provided with Abend-AID. Abend-AID provides access to numerous integration points with Xpediter, File-AID, SDSF and third-party products like Splunk. In the latest release, you can now collect Abend-AID usage data and provide these records to Enterprise Common Components to upload for use and analysis by zAdviser, which can then measure the quality, velocity and efficiency of mainframe development. And if you’re looking to reduce the footprint of running Abend-AID, you’ll find that reports with large numbers of modules or discrete control blocks now run faster and with less CPU time.

It’s easy to find a quick overview of the latest updates. Look for the product on the Compuware website, select “Tech-Doc” at the top of the page, click on your product, then read the release notes on the next page.

What Else is New?

Ever have an issue with invalid data? Are you sure that someone or something modified a variable, but can’t find it in the code? Do you have concerns that some heavily modified code segments have Perform statements out of order, but can’t follow the complex logic in your head? Are you aware of the integrations between Abend-AID and File-AID?

Solutions to these issues will be presented in an upcoming webinar. You’ll also learn how to create the exact report format you want and how to get value from your File-AID/Abend-AID integration.

What Should You Do Next?

If you’ve got a half hour, register for the “Abend-AID’s Hidden Treasure Chest of Capabilities” webinar, which will take place on Thursday, July 25th at 11 a.m. EDT. Product gurus will provide details on some of the best, recently-added features AND show you how to use them. It will be a worthwhile 30 minutes.

The post Dig Deeper: What Else Can Abend-AID Do for You? appeared first on Compuware.

Overview: Learn how Societe Generale is using Compuware Topaz to transform its mainframe into a modern and agile DevOps platform.

For Societe Generale, a leading financial services group in Europe, digital transformation is an ongoing strategic initiative as they continually strive to meet customers’ increasingly high expectations for proximity, instantaneity, customization and security in retail banking.

Societe Generale chose Topaz to help it increase agility, improve execution efficiency and increase responsiveness to customers.

Today, the mainframe developers use the intuitive Topaz interface to foster cross-platform collaboration. With the help of Topaz, they increased mainframe development productivity within the first month.

According to Gatien Dupré, Head of Societe Generale’s zDevOps IT initiative for French Retail Banking, in less than two years, more than 90% of the IT employees are now engaged in mainframe DevOps.

Read how Topaz is also helping the organization to:

  • Enable developers of varying skill levels to understand and manage mainframe code
  • On-board next-gen developers
  • Detect bugs earlier

“By including the mainframe in a culture of agility, collaboration and DevOps, we disprove anti-mainframe pundits every day.”

Get all the details in the Customer Case Study.

The post Bank Drives Mainframe DevOps with Compuware Topaz appeared first on Compuware.

Overview: Learn how Topaz for Enterprise Data can resolve real-world issues organizations face when masking sensitive data.

One of the most underappreciated aspects of data masking is having a solution generic enough to address records containing sensitive identification numbers that are prone to format changes. These can cover a wide spectrum, ranging from account IDs and national IDs to credit card numbers and Medicare numbers.

When these numbers undergo a format change, more often than not, it triggers major modifications to the current masking definitions. Needless to say, over time, this results in solutions becoming complicated and a nightmare to maintain.

Topaz for Enterprise Data, a feature install of Topaz—Compuware’s Eclipse-based development, testing and data management platform that integrates into any DevOps toolchain—addresses this problem by providing an easy-to-use interface to create and group privatization rules and their individual actions into meaningful Test Data Privacy projects. These are easy to debug, manage and port across environments. Once a project is defined, products across the Compuware File-AID portfolio can invoke it and privatize data across a wide range of storage, such as:

  • RDBMS tables
  • z/OS files
  • IMS databases
  • Excel
  • Access
  • XML

In this example, I’ll show an approach to masking Medicare numbers, which underwent a format change some time ago that was a concern for one of our customers. Look for an associated article on the Compuware Support Center, available soon, that lists step-by-step instructions for defining the Test Data Privacy project and addresses each restriction and complexity highlighted below as the solution is developed.

Sample Data, Restrictions and Complexities

The original Medicare numbers were known as HICN (Health Insurance Claim Number), which were based on the SSNs of people with Medicare.

The new format is the MBI (Medicare Beneficiary Identifier) number, which is a mix of numbers (0-9) and uppercase letters, excluding S, L, O, I, B and Z; the exclusions are intended to make the whole combination easier to read.

An example of an MBI number is: 1EG4-TE5-MK73. The restrictions on the MBI format are highlighted below:
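
Since the original table of restrictions isn’t reproduced here, the sketch below encodes them as commonly published for the MBI: 11 characters once separators are ignored, the first a digit 1-9, letters drawn only from the 20 valid uppercase letters, and digits and letters in fixed positions. Treat the exact position pattern as an assumption; this is an illustration, not the project’s actual rule logic.

```python
import re

# Letters that may appear in an MBI: uppercase A-Z minus S, L, O, I, B, Z
ALPHA = "ACDEFGHJKMNPQRTUVWXY"

# Assumed pattern after separators are stripped: position 1 is a digit 1-9;
# positions 2, 5, 8, 9 are letters; 3 and 6 are letter-or-digit; the rest digits
MBI = re.compile(
    rf"[1-9][{ALPHA}][{ALPHA}0-9]\d[{ALPHA}][{ALPHA}0-9]\d[{ALPHA}]{{2}}\d{{2}}")

# Simplified HICN shape: a 9-digit SSN plus a short beneficiary suffix
HICN = re.compile(r"\d{9}[A-Z][A-Z0-9]?")

def classify(value: str) -> str:
    """Identify a Medicare number's format, ignoring hyphens and spaces."""
    raw = re.sub(r"[-\s]", "", value).upper()
    if MBI.fullmatch(raw):
        return "MBI"
    if HICN.fullmatch(raw):
        return "HICN"
    return "UNKNOWN"

assert classify("1EG4-TE5-MK73") == "MBI"
assert classify("123-45-6789-A") == "HICN"
assert classify("1EG4 TE5 MK7Z") == "UNKNOWN"  # Z is not a valid MBI letter
```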

I’ll address these restrictions for the MBI as the solution is developed, ensuring that the replacement values generated are compliant with them, so as not to cause other failures in downstream systems.

As mentioned above, the struggle for organizations is that the fields containing Medicare data across different system records could be a mix of the old HICN format and the new MBI format, making the process of defining and maintaining privatization rules complicated and difficult to manage over time.

Another layer of complexity that is quite common is the presence of special characters such as hyphens and spaces, which are irrelevant to the actual data itself and should not have an impact on the masking results. Consider the simple table highlighted in the image below:

Another example of the data comes from a QSAM file (a sequential data file found on the mainframe), which I’ve opened in Compuware File-AID’s Data Editor, as shown below:

Notice that in addition to the complexity added by the special characters (dashes and spaces), the HICN-NOS column is right-aligned, while the MBI-NOS column is left-aligned. Needless to say, post-execution, the results would be expected to match the alignment too.
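
As a minimal sketch of how that can be handled (my illustration, not File-AID’s actual implementation), separators and alignment can be peeled off before masking and reapplied afterward, so they never influence the masked value:

```python
def normalize(field: str):
    """Peel off padding and separators so masking sees only the bare value."""
    stripped = field.strip()
    pads = (len(field) - len(field.lstrip()), len(field) - len(field.rstrip()))
    seps = [(i, c) for i, c in enumerate(stripped) if c in "- "]
    bare = "".join(c for c in stripped if c not in "- ")
    return bare, (pads, seps)

def denormalize(bare: str, fmt) -> str:
    """Re-apply the original separators and alignment to the masked value."""
    (pad_left, pad_right), seps = fmt
    chars = list(bare)
    for i, c in seps:            # ascending index order restores each separator
        chars.insert(i, c)
    return " " * pad_left + "".join(chars) + " " * pad_right

bare, fmt = normalize("  1EG4-TE5-MK73")   # right-aligned, hyphenated
masked = bare[::-1]                        # stand-in for a real rule action
print(repr(denormalize(masked, fmt)))      # '  37KM-5ET-4GE1': same shape
```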

Test Data Privacy Project Definition

Detailed instructions for the definition of the test data privacy project to mask Medicare numbers and address each restriction and complexity are available on the Compuware Support Center. A brief overview can be found below.

As seen in the screenshot, multiple tabs (Data Elements, Rules, Coverage, Composites and Extensions) are available, allowing the user to manage and fine-tune the project. To give a brief overview:

Data Elements Tab

Helps define generic placeholders for data items or fields and the normalization that should occur for them during pre- and post-privatization. It provides the flexibility to add additional field or column names and expect the same processing. In the screenshot outlined below, if a new column containing HICN data is identified, simply adding it to the list of Source Data Identifiers will ensure it is masked identically to the existing fields.

Rules Tab

This helps to define the various Rules and their specific Rule Actions. This is where we either analyze the field to identify its origin (Overloaded Rule Action) or define the masking conditions (Encryption/Translation Rule Actions).

Coverage Tab

This helps preview the results of data element identification and rule assignment. It’s a useful mechanism to make sure everything looks good as per the intended design of the Test Data Privacy project.

Encryption Sets

Encryption Sets are a mechanism for identifying a range of characters or Unicode values that form the set from which search and replacement encryption values are chosen. They are particularly useful when there is a need to define a specific Code Page for encryption purposes but the Code Page itself is too large.

The screenshot below shows the series of Encryption Sets defined to address the restrictions associated with Medicare numbers, and their usage in the Rule Actions, where a Field Mask is applied to restrict the Encryption Set to only certain positions.
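
As a from-scratch illustration of the concept (not Compuware’s algorithm), the sketch below draws each replacement character deterministically from a position-restricted set, so identical inputs always mask to identical outputs. Note that this is one-way pseudonymization rather than reversible encryption:

```python
import hashlib
import hmac

DIGITS = "0123456789"
LETTERS = "ACDEFGHJKMNPQRTUVWXY"   # the valid MBI letters
KEY = b"project-secret"            # hypothetical key held by the project

def substitute(ch: str, pos: int, charset: str) -> str:
    """Deterministically replace ch with a member of its encryption set."""
    digest = hmac.new(KEY, f"{pos}:{ch}".encode(), hashlib.sha256).digest()
    return charset[digest[0] % len(charset)]

def mask_medicare(bare: str) -> str:
    out = []
    for pos, ch in enumerate(bare):
        if pos == 0 and ch.isdigit():
            out.append(substitute(ch, pos, "123456789"))  # first MBI char: 1-9
        elif ch.isdigit():
            out.append(substitute(ch, pos, DIGITS))       # digits stay digits
        else:
            out.append(substitute(ch, pos, LETTERS))      # letters stay valid
    return "".join(out)

# The same input always yields the same masked value, so a mainframe job and
# a distributed job produce consistent results for shared data.
assert mask_medicare("1EG4TE5MK73") == mask_medicare("1EG4TE5MK73")
```

Because each replacement depends only on the key, the position and the original character, the same value masks consistently wherever it appears, which is the cross-platform consistency demonstrated in the executions below.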

Execution

Once the Test Data Privacy project is defined and ready to be used, we can execute a couple of jobs, which invoke the project in order to privatize the data prior to writing it out to the target.

Distributed Job

The first job would be a ConverterPro job (part of Topaz for Enterprise Data), which acts on the previously listed RDBMS Source Table and writes to a Target Table.

ConverterPro Specification:

On execution, a comparison of the Source and Target results shows that the numbers are consistently masked across the MBI and HICN columns, and the formatting of the numbers is maintained. In addition to this, the MEDICARE column numbers were categorized and encrypted correctly. The invalid entry 1EG4 TE5 MK7Z has also been encrypted, albeit differently, since we can’t be sure of its origin.

A look at the ConverterPro execution log shows the warning messages listed by the Rule Logic.

Mainframe Job

The second job is a File-AID Data Solutions JCL job, which simply reads and writes QSAM files while utilizing the same Test Data Privacy project.

Similar to the execution in the distributed world, a comparison of the Source and Target results shows that the numbers are consistently masked across all the columns.

This demonstrates how consistent privatization results can be achieved across the mainframe and distributed platforms.

Summary

This was a quick example to demonstrate the power of Topaz for Enterprise Data in resolving real-world issues customers face when managing data. There are numerous other features across Topaz for Enterprise Data that you can leverage to discover, visualize and privatize both mainframe and non-mainframe data in a common, intuitive manner.

The post Masking of Sensitive Data: A Case Study of Medicare Numbers appeared first on Compuware.

Overview: CEO Chris O’Malley discusses Agile and DevOps on the mainframe and previews what he hopes GSE Nordic Conference attendees will take away from his keynote address.

It’s challenging work to serve as the CEO of a global enterprise software company dedicated to changing the conversation about the mainframe—a conversation essential to transforming its culture, process and tools and imperative to effectively competing in the Age of Software. This position also provides unique opportunities to engage a broad spectrum of experts—from CIOs and other CEOs to IT analysts and the press to development and operations artisans—across a variety of industries.

By constantly crossing paths with these experts—whether they’re customers or fellow technologists and business leaders—Compuware CEO Chris O’Malley is uniquely positioned to learn from and educate others, and ultimately bring that knowledge and experience back to our customers in how we strategically shape the innovative future of the Compuware tools they leverage.

That’s why it’s so important and exciting that Chris is heading to Europe this June to engage with folks in Denmark, Germany, Paris and London. While in Europe, Chris will keynote the GSE Nordic 2019 event on the 11th of June as well as visit with customers and sit down with the press about his and Compuware’s vision for the mainframe in a digital world.

In preparation for his visit, we asked Chris to share some insights in relation to the conversations he’s looking forward to having.

How have customer conversations changed since you became CEO of Compuware in 2014?

In 2014, mainframe teams were loath to abandon their sacrosanct waterfall development methods—the drawn-out, siloed processes; the archaic, clunky tools. Agile and DevOps were still cynically viewed as fads that in time would prove to be no better than existing practices and impossible to achieve when working with mainframe code and data.

At the same time, other enterprise business and IT leaders couldn’t see the strategic value of the mainframe in this disruptive Age of Software. They had either convinced themselves to get off the platform or were searching for ways to work around it. The mainframe was left to languish in a dark corner of the data center while distributed teams churned out mobile apps. But that mainframe apathy proved unsustainable. A report we commissioned from Forrester Consulting found all or most customer-facing applications rely on the mainframe at 72% of companies surveyed.

In this digital age, and for all the right reasons, the mainframe isn’t leaving. IT leaders are quickly realizing that they can only be as fast as their slowest digital asset. We’ve begun to see a change in mindset now as more mainframe-powered enterprises realize they can in fact leverage Agile and DevOps on the mainframe if they have the right culture and tools in place to support it. And the examples of success are growing. Countless Compuware customers are now driving unprecedented improvements in software delivery quality, velocity and efficiency on the mainframe, while the organizations that have tried to get off the platform have driven themselves into a competitive disadvantage.

Mainframe DevOps is real. Mainframe Agile development is real. But it’s only now that more large enterprises are beginning to believe it and start their mainframe-inclusive DevOps and Agile journeys. Those that don’t believe are quickly becoming outliers. If you aren’t investing in ways to accelerate mainframe software development and delivery, you’ll begin to fall behind competitors that are.

Is this change reflected in the media coverage on Enterprise DevOps and Agile methods?

I think we’re seeing the media pivot on this subject. Where once the majority of journalists in IT were being heavily influenced by the “Cloud First” and anti-mainframe mantras of pundits and organizations that wish to witness the demise of the platform, you’re seeing many of the most credible industry voices—DevOps.com, InformationWeek, SD Times—and even mainstream publications like Forbes align their reporting with the reality of mainframe DevOps and Agile.

Again, you’re going to be an outlier if you’re still reporting on the demise of the mainframe come 2020 and beyond. There are just too many success stories giving credence to mainframe DevOps and Agile, where some of the largest enterprises in the world are accomplishing high performance feats on the mainframe in ways that matter to their customers.

What is the biggest misconception about Enterprise DevOps and Agile methods that still prevails?

Even within enterprises that believe in mainframe DevOps and Agile development, too many try to implement these practices in small groups. Agile and DevOps are philosophies, mindsets that must pervade an entire organization. Too many mainframe organizations are afraid to implement DevOps at scale, or believe they don’t need to.

The reality is, doing sporadic DevOps with a few people here, a team there, leaves cracks in the foundation for old habits and poor practices to leak back in. If some of your teams are still doing waterfall, they’re going to become bottlenecks for the teams on a DevOps journey. Not only is this going to impact your quality, velocity and efficiency, it’s going to cause infighting as new cultures form in opposition to one another.

You have to get everyone on the same page. You have to be willing to make people uncomfortable and push them to change how they think and work, because the work that must be done has changed in our digital economy, and Agile and DevOps are the only feasible approaches to accomplishing that work with the software delivery quality, velocity and efficiency required to give always beautifully, wonderfully dissatisfied customers the innovation they demand.

What will you be talking about at the GSE Nordic Conference?

On June 11th, I’m going to be talking about what’s required to effectively compete in the digital age. While many organizations understand the importance of Agile and DevOps, and while some are even beginning or well into a DevOps journey, most still have little concept of how to build a mainframe-integrated DevOps toolchain and follow through on that journey to the point where they’re continuously improving mainframe software delivery quality, velocity and efficiency.

So, in my keynote at GSE, I’ll be sharing actionable steps to make Agile and DevOps work across your enterprise, including personal and objective insights based on what we’ve experienced at Compuware and what we’ve observed in the accomplishments of our most forward-thinking customers.

What is your ultimate goal for the week?

I want people to come away with a newfound confidence and unshakable faith that Agile and DevOps on the mainframe work—and that these philosophies are necessary for organizations to transform their mainframe culture, processes and tools. Mainframe teams will need to adapt their mindsets to accomplish this, and that’s going to be uncomfortable at first. But it’s absolutely necessary, and the results far outweigh any discomfort experienced along the way. It’s a journey, one that Compuware can help you begin or improve.

Chris wants to meet you

If you’re attending GSE Nordic on June 11th, you’ll have the perfect opportunity to hear Chris’s vision for bringing Agile and DevOps to the mainframe as well as meet with him to discuss this critical aspect of a business transformation.

If you can’t catch Chris in Denmark, Germany (June 12th) or Paris (June 13th), he’ll be at DevOps Enterprise Summit in London, June 25-27, where he’ll participate in a compelling and motivating DevOps Fireside Chat with Gene Kim. Otherwise, Chris is always available for questions and comments on LinkedIn, where you can easily connect with him.

The post Q&A with Compuware CEO Chris O’Malley appeared first on Compuware.

Overview: Just like the Florence Cathedral’s dome that waited 140 years for the right technology before being built, mainframe teams can use Agile development to break big projects into minimum viable products and have faith that the process will produce the desired outcomes, even when they can’t be realized in the moment.

Work on the Florence Cathedral (Cattedrale di Santa Maria del Fiore) started in 1296. At the time, the plans included a magnificent dome on the basilica even though the technology that would allow for supporting the necessary weight didn’t yet exist. The working theory was that, by the time they reached that point in the construction, technology would exist that would allow for the dome.

And they were right. In 1418 they introduced a competition to come up with the necessary architectural design. The winner was selected, and the cathedral was completed in 1436.

In IT, this situation is as relevant today as it was in 1296. Time frames are greatly compressed, but IT teams are often pressed to work on rapidly evolving systems with clear objectives but fuzzy paths to achieving them. The future may be fraught with missteps, rework and frustration. It’s no place for the faint of heart.

Agile “Cathedral Thinking”

That’s why adopting an Agile mindset toward development and operations work is critical. Agile breaks big projects into manageable pieces called minimum viable products. This makes it possible to fail fast and pivot quickly when you hit a roadblock. It makes it possible to plan for building a magnificent dome before you know how you’re going to do it.

But when you’re coming off years or decades of linearly planning, analyzing, designing, developing, testing and maintaining—rather than doing these things iteratively in, say, two-week sprints—it’s naturally going to be tough to put your trust in a methodology like Agile that embraces unpredictability and requires a fail-fast mindset. But, in reality, this “cathedral thinking” is what will enable your teams and organization to rapidly innovate and accomplish architectural feats in software.

How Committed Are You?

This is the true test of your commitment to Agile:

  • Is your organization committed to responding to change rather than following a plan?
  • Is your organization truly willing to accept the risk that comes from experimentation to receive the rewards accrued from it?
  • Are your team(s) confident that they can overcome the hurdles as they arise to continuously deliver value to your customers?

Innovation is the gold of the realm in today’s digital world, and fear of the unknown is its sworn enemy. Once your organization and your teams can embrace this challenge, you can start to reap the rewards. As Bertrand Russell said,

“To conquer fear is the beginning of wisdom.”

So, when you’re faced with something that feels insurmountable, roll up your sleeves and get to work. You can’t get there if you never head there!

The post ‘Cathedral Thinking’ with Agile Development, 1296 AD to Now appeared first on Compuware.

Overview: Learn how one of Compuware’s expert application developers discovered the benefits of an agile approach to projects and the risks of waterfall early in his career.

Having been a mainframe programmer for over 50 years, I’ve naturally been party to the siloed, top-down practices of waterfall. But I’ve also practiced what we call Agile development and DevOps today. In fact, the teams I’ve worked on over the past five decades have sometimes hovered closer to the best practices of these modern philosophies than to waterfall, though not always completely.

My earliest big project is a good example. Back then, we didn’t know about waterfall or Agile or DevOps; we just tried to get the job done. Nevertheless, we began this project in a very waterfall fashion, and quickly ran into problems. It wasn’t until we began using more agile methods of work that things began to improve.

If you’re skeptical of Agile development and DevOps and are going to take anything from this story, I hope it’s this:

Don’t balk at these modern methodologies—which are necessary in a digital economy—just because you’ve been doing mainframe application development one way for decades.

Regardless of your background, you can adapt your experience to Agile and DevOps and actually leverage them to improve as a mainframe professional with years of expertise—because, chances are, you’ve leveraged some of these practices before without realizing it.

The Project in Waterfall

The project was for a manufacturing plant in a small Midwest town. I worked first on a manufacturing bill of materials system so that we could more accurately acquire labor standards and calculate manufacturing costs.

This led to building a revamped order entry system/material requirements planning system that was used to feed work to the plant floor. It was 1980 or so, when online systems were very expensive and beyond our budget; when waterfall thrived.

There were primarily two of us working on the project, with one or two others who would temporarily assist. At any time, there were two or three people working on design and coding.

I distinctly remember being eager to start coding, but everyone else wanted things mapped out first. I don’t remember how much time we spent designing the system, but let’s say it was three to four months.

This was waterfall for sure, in that most of the design was done before coding started. But it was not a highly detailed design. A good number of details were left to judgment at coding time.

Still, we worked on development for probably four months and extended it by at least another two as we found more work to do near the end. Things weren’t going according to the plan we had spent months creating.

Shifting Speeds to Agile/DevOps

It was at this point that we began a natural shift to more agile work. We kept finding things that we hadn’t thought of and had to iterate on-the-fly.

On-the-Fly Iteration

For example, it was always assumed that we would do the maximum amount of data validation feasible. (I pictured myself as very particular about data validation, not just checking that numeric fields were numeric. I was always on the lookout for more aspects of the input we could validate.)

But when we tried to run sample data through the new system, bugs reared their ugly heads. There was always a need to make edit routines better. If data was rejected, it was done in a way that did not destroy integrity.

New Responsibilities

Somewhere in this time, the lead programmer left, and I was put in charge. I worked on debugging and training data entry people (keypunchers) and users (the input data had to follow different rules). Finally, though there were still rough edges, I decided that a full parallel operation was needed.

Due to the number of remaining problems, I became the computer operator and switched to working a third shift after the normal evening work was complete. I would run the new system as far as it could go. If there were fatal errors, I had to diagnose the data, correct it if necessary and rerun it.

In many cases, code had to be changed to either correct the logic or to add validation logic so that bad data got rejected. In some cases, redesign of some routines was required. I spent considerable time talking with data entry people and end users to get control of data errors and improve quality of the data.

Cross-Team Collaboration

My time as operator and instant fixer was painful in some ways, but invaluable in getting things worked out in a reasonable time frame through quick response. I had to spend time every day coordinating with the day shift operations and with other programmers and with users and with keypunchers. This was kind of like a scrum.

I remember times I dreaded having to talk to the order entry supervisor, who had a bit of a reputation for being hard-nosed and wanted no delays in getting information to the plant floor. But things usually turned out well, even when I had to apologize for what seemed like too many errors.

I remember having to go to the production control manager who had a real reputation for being tough. I was shocked at his level of understanding when I pointed out the problems and the fixes that I was working on.

After four months or so, progress was being made and at a much faster pace. Through these efforts, it all worked out, and rather well, with the system becoming the backbone of at least three additional plants.

What About Waterfall?

In theory, waterfall may seem logical, but only if it’s supporting the kind of work that doesn’t require fast-paced iteration and innovation. You can indeed get predictable results—but results that take forever to complete and that leave users unhappy in today’s digital economy. And by the time software changes are delivered, the users’ needs have changed.

It is not that you blindly start on a journey without any idea of the destination, as many who come from waterfall development assume of Agile and DevOps. You might even want a rough route map. But you do need to adjust to things you encounter as you encounter them. Having a rigid plan can spoil a trip.

Quitting the waterfall mentality is pretty much like quitting smoking.

There are some serious challenges. There are habits to break. But quitting is worth it, because waterfall is not nearly as logical as many still assume. Even early in my career, before the dawn of the digital economy, waterfall failed us. It was an agile approach that saved the project.

There are so many benefits to be gained from achieving true mainframe agility. But understand that achieving those benefits is the result of a journey you commit to. We’ve mapped this out for you with our Achieving DevOps Guidebook Series, walking you through the steps your dev and ops teams should take to increase agility, as well as what your teams can do to measure their progress and continuously improve.

The post Lessons Learned from Waterfall and My First Taste of Agile appeared first on Compuware.

Overview: Just like the mainframe, batch workloads aren’t disappearing—they’re growing. Managing this change requires more than experience. Having a mature batch strategy with adequate automation is essential. Compuware’s new batch maturity model can help.

In my over 30 years of working with mainframes, batch has always been the ugly stepchild. In my first mainframe job, we took a test: Those who did well became the “cool kids,” writing macro CICS ALC code for the online order processing system. If you didn’t do as well, you wrote COBOL for batch.

Years passed with many believing batch could be eliminated. When UNIX became popular, no one even talked about batch, but it was work that still needed to be done. Even when automation became available, software budgets still went to the flashier online transaction processing (OLTP) workloads.

The Challenge for Mainframe Batch

How is it that in most cases, nightly batch keeps chugging along? The answer: Expert mainframe professionals who use their years of knowledge and experience to manually manage batch to SLAs.

But guess what? Those experts are retiring fast, right as mainframe workloads are growing and expectations are rising for increased velocity and cost savings.

Where once we printed our batch window on large sheets pasted to the walls of the data center, now no one person can possibly understand or remember the complexity and interconnectedness of the many batch workloads that have to be managed.

Batch is critical, and it’s time for it to get the attention it deserves.

Addressing Mainframe Batch Maturity

The first step is understanding how mature your batch processing strategy is, relative to your goals, by evaluating your practices against a batch maturity model. Until recently, there wasn’t one, but Compuware experts have studied the matter and devised a model with the goal of decreasing human intervention.

This batch maturity model focuses on automation, as this is the only way to manage the increasing batch demand as experts retire. The goal is an elevated role for Operations, allowing those teams to focus on end-user satisfaction with tools that enable the fine-tuning of batch goals and policies and minimize/eliminate fire drills.

Using the Batch Maturity Model

While few will find themselves at the reactive level, it’s helpful to delve into the details to ensure you’ve completed every automation task for each level before proceeding. Print a copy of the article defining each level so you can more fully assess your processes. It’s likely that many will have achieved parts of some levels described, but missed some automation opportunities that add value.

The good news is you don’t have to write your automation tools as so many of us did in the past. Compuware ThruPut Manager can help you achieve many of the tasks defined in the batch maturity model.

Compuware ThruPut Manager

Not only does ThruPut Manager’s automation relieve operators of cumbersome tasks, freeing them to look at the workloads more holistically; the tool also improves SLA compliance, reduces impact on OLTP workloads and, best of all, can reduce MLC costs. Implementing ThruPut Manager can help you achieve the following important goals:

1. Automated Service Level Management

ThruPut Manager takes automated action when problems arise and delivers real-time information on trends and compliance. As workloads grow and batch elongates, you get the heads-up you need to make adjustments.

2. Workload Balancing and Throughput

Stop manually managing your initiators and let automation ensure they are optimally initiated at all times. Performance is measured every 10 seconds, allowing rapid course correction.

3. Workload Prioritization and Escalation

Ensure your most important work gets into execution sooner by letting the tool do it for you.

4. Automatic Rule Retention

As you develop business rules to manage batch, they will be retained and applied. Make decisions in advance, not under pressure.

5. MSU Reduction

Don’t find out too late about added MLC costs when you run your monthly report. Instead, empower ThruPut Manager to manage this for you by reducing the CPU available to less critical workloads, keeping your peak rolling four-hour average (R4HA) under control (see the sketch after this list).

6. Dataset Contention Management

Eliminate this key problem from slowing batch and ensure your highest priority batch jobs get the data first.

7. Job Analysis, Job Routing and Data Center Standards Management

This grab-bag of features speaks to a number of levels of the maturity model. By doing an in-depth analysis of job requirements, you can be sure that your job runs where it should and gets the resources it needs while running. No JCL changes needed.
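
For context on the MSU-reduction point above, here’s a toy sketch (my illustration, not ThruPut Manager’s policy engine) of tracking the rolling four-hour average from periodic MSU samples and flagging when discretionary batch should be deferred:

```python
from collections import deque

SAMPLE_MINUTES = 5                      # assumed sampling interval
WINDOW = (4 * 60) // SAMPLE_MINUTES     # samples covering four hours

def r4ha_monitor(msu_samples, cap, headroom=0.9):
    """Yield (r4ha, defer_low_priority) as each MSU sample arrives."""
    window = deque(maxlen=WINDOW)
    for msu in msu_samples:
        window.append(msu)
        r4ha = sum(window) / len(window)
        # Defer discretionary batch when the rolling average nears the cap,
        # keeping the billable peak R4HA under control.
        yield r4ha, r4ha > headroom * cap

for avg, defer in r4ha_monitor([820, 940, 1150, 1080], cap=1000):
    print(f"R4HA={avg:7.1f} MSU  defer low-priority batch: {defer}")
```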

Join the Batch Renaissance

By reviewing your batch maturity and filling in the gaps with intelligent, batch-centric automation, you can more easily achieve throughput goals, eliminate resource contention with online work and save money. And automation not only manages the minutiae of batch for you, it allows you to have much more fun in your job while making you the data-center hero.

The post Why It’s Time for a Mainframe Batch Renaissance appeared first on Compuware.
