While public cloud is undoubtedly an outsized piece of the conversation, news headlines about the latest data breach can make the move to cloud a very frightening proposition. The question of how to balance the downside of cloud computing risks with the upside of promising cost savings is, therefore, top of mind.

The dilemma becomes even more critical as your business starts leveraging the power of 5G telecommunications services. When added to existing network architectures and combined with other next-generation technologies like the Internet of Things (IoT) and Edge-to-Edge capabilities, 5G will dramatically alter user experience – from retail to financial services, transportation to manufacturing, to healthcare and beyond.

Answering these and other essential cybersecurity questions was the primary task of the “Security ‘In’ the Cloud and ‘Of’ the Cloud“ panel I participated in at the AT&T Business Summit in Dallas, Texas. Read more about how you should address this cloud computing transition issue in my newest blog post on AT&T Business Insights.



This content was sponsored by AT&T Business.





( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson


Puppet 5 is released and comes with several exciting enhancements and features that promise to make configuration management much more streamlined. This article will take a comprehensive look at these new features and enhancements.

Puppet 5 was released in 2017, and according to Eric Sorensen, director of product management at Puppet, the goal was to standardize Puppet as a one-stop destination for all configuration management requirements. Here are the four primary goals of this release:
  • To standardize the version numbering of all the major Puppet components (Puppet Agent, PuppetDB, and Puppet Server) to 5, and deliver them as part of a unified platform
  • To include Hiera 5 with eyaml as a built-in capability
  • To provide clean UTF-8 support
  • To move network communications to fast, interoperable JSON

Customer feedback
Customer and community feedback played a major role in setting the goals for Puppet 5’s release, helping the developers identify and define patterns such as:
  • Different version numbers across components were a huge source of confusion
  • There was a lot of chaos when it came to combining components to get a working installation as well as where each component would fit
  • Since both Facter 3 and PuppetDB 3 seamlessly rolled into PC1, guaranteeing a new Puppet Collection for every major release didn’t make much sense

However, the makers ensured that one critical aspect was unaffected: modules that worked on Puppet 4 will work unchanged under Puppet 5.
New features
Puppet 5 comes with some power-packed new features; have a look:
  • The call function: The call(name, args, …) function has been added, which allows you to call a function directly by its name
  • The unique function: Earlier, you had to include the stdlib module to get the unique function. None of those hassles anymore! The unique function is now directly available in Puppet 5. What’s more, the function can also handle Hash and Iterable data types. In addition, you can now supply a code block that determines how uniqueness is computed (a short sketch showing both functions follows this list).
  • Puppet Server request metrics: Puppet Server 5 now comes with an http-client metric, puppetlabs..http-client.experimental.with-metric-id.puppet.report.http.full-response, to enable tracking of how long Puppet Server requests to a configured HTTP report processor take
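
The snippet below is a minimal, hypothetical sketch of how these two built-in functions might be used in a Puppet 5 manifest; the variable names and values are illustrative only:

    # call() invokes a function by name, which is handy when the name comes from data
    $fn = 'notice'
    call($fn, 'hello from call()')

    # unique() no longer requires stdlib and also handles Hash and Iterable values
    $hosts = ['web01', 'web02', 'web01', 'db01']
    notice($hosts.unique)        # => ['web01', 'web02', 'db01']

    # An optional code block controls what uniqueness is computed on;
    # here the first user seen for each role is kept
    $users = [
      { 'name' => 'alice', 'role' => 'admin' },
      { 'name' => 'bob',   'role' => 'admin' },
      { 'name' => 'carol', 'role' => 'dev' },
    ]
    $one_per_role = $users.unique |$u| { $u['role'] }
    notice($one_per_role)        # => alice (admin) and carol (dev)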

Enhancements
Time to take a look at some exciting new enhancements that come with Puppet 5:
  • Switched from PSON to JSON as default: Agents now download node information, catalogs, and file metadata, by default, in JSON instead of PSON in Puppet 5. The move to JSON ensures enhanced interoperability with other languages and tools, while also enabling better performance, especially when the master is parsing JSON facts and reports from agents. Plus, JSON-encoded facts can also be easily accepted in Puppet 5.
  • Ruby 2.4: Puppet now uses Ruby 2.4, which ships in the puppet-agent package. All you have to do is reinstall any user-installed Puppet agent gems after upgrading to Puppet agent 5.0. This is necessary because of API differences between Ruby 2.1 and Ruby 2.4. Further, some gems may also need to be upgraded to versions compatible with Ruby 2.4.
  • HOCON gem is a dependency now: The HOCON gem, which was previously shipped in the puppet-agent package, is now also a dependency of the Puppet gem.
  • Silence warnings with metadata.json: You can now turn off warnings from faulty metadata.json by setting --strict=off.
  • Updated Puppet Module Tool dependencies: The gem dependencies of Puppet Module Tool are updated to use puppetlabs_spec_helper 1.2.0 or later, which runs metadata-json-lint as part of the validate rake task.
  • Hiera 5 default file: Default Hiera 5-compliant files go into $confdir and the environment directory. Puppet creates an appropriate v5 hiera.yaml in $confdir and $environment. Moreover, if Puppet detects an existing hiera.yaml in either $confdir or $environment, it won’t install a new file in either location or remove $hieradata (a minimal example of a v5 hiera.yaml follows this list).
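
For reference, a freshly generated v5 hiera.yaml is quite small; the sketch below shows a minimal, typical layout, though the exact hierarchy levels Puppet writes may differ in your installation:

    ---
    version: 5
    defaults:
      datadir: data            # relative to the confdir or environment directory
      data_hash: yaml_data
    hierarchy:
      - name: "Per-node data"
        path: "nodes/%{trusted.certname}.yaml"
      - name: "Common data"
        path: "common.yaml"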

Performance boosts
All these enhancements and new features have contributed to ushering in performance boosts in many areas. The runtimes of the Puppet 5 agent have decreased by 30% at equivalent loads (that is, from an average of 8 seconds to 5.5 seconds). In addition, CPU utilization of the Puppet 5 server is at least 20% lower than Puppet 4 in all scenarios, while CPU utilization for PuppetDB and PostgreSQL under Puppet 5 has also been significantly reduced in all scenarios.

Catalog compile times of Puppet 5 reported by Puppet Server have reduced by 7% to 10% compared to Puppet 4. Puppet 5 can now scale to 40 percent more agents with no deterioration in runtime performance, whereas Puppet 4 agent runtimes were disastrously long when scaled to the same number of agents.


If you liked this article and want to learn more about Puppet 5, you can explore Puppet 5 Cookbook – Fourth Edition. This book takes you from a basic knowledge of Puppet to a complete and expert understanding of Puppet’s latest and most advanced features. Puppet 5 Cookbook – Fourth Edition is for anyone who builds and administers servers, especially in a web operations context.

( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)




( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson

Amidst volatile markets, dynamic technology shifts, and ever-increasing customer demands, it is imperative for IT organizations to develop flexible, scalable and high-quality applications that exceed expectations and enhance productivity. A software application has numerous moving parts, which, if not effectively maintained, will definitely affect the final quality and end user experience.

This is where configuration management (CM) comes into play, its purpose being to maintain the integrity of the product or system throughout its lifecycle while also making the deployment process controllable and repeatable in order to ensure higher quality. Robust configuration management brings the following advantages to the table:
  • Mitigates redundant tasks
  • Manages concurrent updates
  • Eliminates problems related to configuration
  • Streamlines inter-team coordination
  • Makes defect tracking easier

There are several effective CM tools out there, such as Puppet, Chef, Ansible, and CFEngine, which provide automation for infrastructure, cloud, compliance and security management, and continuous integration and continuous deployment (CI/CD). However, deciding which tool to select for an organization’s automation requirements is the most critical task for a sysadmin.

A lot of sysadmins will agree that their daily chores keep them from staying up to date on automation. When they do spend time learning the nuances, they come across multiple CM tools that all offer the same benefits in theory. This further complicates the decision about which CM tool to choose, especially for people who are just getting started.

So, what is the best tool for people who have only a minimal grasp of automation? Ansible, and justifiably so! You may ask why. This article will discuss the five reasons that make Ansible one of the most reliable and efficient CM tools out there.
  • An end-to-end tool to simplify automation
Ansible is an end-to-end tool that aids in performing all kinds of automation tasks, right from controlling and visualization to simulation and maintenance. This is because Ansible is developed in Python, which gives it access to all general-purpose language features and thousands of existing Python packages that you can use to create your own modules. With over 1,300 modules, Ansible simplifies several aspects of IT infrastructure, including web, database, network, cloud, cluster, monitoring, and storage.

Configuration Management: Ansible’s most attractive feature is its playbooks, which are simple instructions/recipes that guide Ansible through the task at hand. Playbooks are written in YAML and are human-readable, which makes it all the easier to navigate and work with Ansible. Playbooks let you declare configuration changes while natively managing desired state and idempotency; a minimal playbook sketch follows this paragraph.
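As a rough illustration, the hypothetical playbook below ensures a package is installed and its service is running; the host group and package names are placeholders, not part of any real project:

    ---
    - name: Ensure the web tier is configured
      hosts: webservers          # illustrative inventory group
      become: true
      tasks:
        - name: Install nginx
          package:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled
          service:
            name: nginx
            state: started
            enabled: true

Because each task declares a desired state rather than a sequence of commands, re-running the playbook changes nothing on hosts that are already compliant, which is the idempotency mentioned above.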

Orchestration: Ansible, though highly simplified, can’t be underestimated when it comes to its orchestration power. It effortlessly integrates with any area of the IT infrastructure, be it provisioning virtual machines (VMs) or creating firewall rules. Moreover, Ansible comes in handy for aspects that other tools leave gaps in, such as zero-downtime, continuous updates for multitier applications across the infrastructure.

Provisioning: With several modules for containers (Docker) and for virtualization and cloud platforms (VMware, AWS, OpenStack, Azure, and oVirt), Ansible can easily integrate provisioning tasks to provide robust and efficient automation.
  • Faster learning curve

Because initial installation and configuration are easy, Ansible’s learning curve is extremely short. Consider this: you can install, configure, and execute ad-hoc commands for any number of servers within 30 minutes, no matter what the issue is, be it daylight saving time changes, time synchronization, root security, server updates, and so on.

Moreover, it takes no time, even for a beginner, to understand the syntax and workflows, owing to the fact that it uses YAML (YAML Ain’t Markup Language). YAML is human-readable and, therefore, extremely user-friendly and easy to understand. Add to that the Python libraries and modules, and you have a very simple yet quite powerful CM tool on your hands.
  • Highly adaptive and flexible

Unlike legacy infrastructure models, which take too long to converge to a fully automated environment, Ansible is highly flexible in this regard. As the tech space becomes increasingly dynamic, it is only understandable that environments have to be flexible enough to absorb changes without affecting the output. Otherwise, the result may be undesired costs, inter-team conflicts, and manual interventions.

Ansible, however, effortlessly adapts to mixed environments, peacefully coexisting with partial and fully automated environments alike, while also enabling seamless transition between models.
  • Full Ansible control

No agents need to be installed on the endpoints for Ansible; all you need is a server with Ansible installed, managing access to hosts through the SSH (for Linux environments) and WinRM (Windows Remote Management) protocols. Desired settings can be applied to the hosts defined in the inventory through playbooks, or run ad hoc from the command line without any file definitions required whatsoever, as the short inventory and ad-hoc sketch below illustrates. This makes it much faster than traditional client-server models.
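
As a hypothetical sketch (the host names below are invented), an inventory can be a short YAML file, and ad-hoc commands run against it directly from the control node’s shell:

    # inventory.yml -- a minimal, illustrative inventory
    all:
      children:
        webservers:
          hosts:
            web01.example.com:
            web02.example.com:
        dbservers:
          hosts:
            db01.example.com:

    # Ad-hoc examples, run from the shell of the control node:
    #   ansible all -i inventory.yml -m ping
    #   ansible webservers -i inventory.yml -m service -a "name=nginx state=restarted" --become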
  • Instant automation

Right from the instant you can ping the hosts through Ansible, you can start automating your environment. It’s advisable to begin with smaller tasks, duly following best practices, and to prioritize tasks that contribute to achieving the business goals. This will help identify and solve problems much more swiftly, while also saving time and enhancing efficiency.

In a nutshell, where Ansible wins over its competitors is in its simplicity (even a beginner can master it in no time) and its powerful features that make configuration management a cakewalk. Choosing Ansible will help heal the Achilles’ heel of automation while also majorly enhancing productivity and efficiency.


If you found this article interesting and wish to learn more about Ansible, you can explore Learn Ansible, an end-to-end guide that will aid you in effectively automating cloud, security, and network infrastructure. Learn Ansible follows a hands-on approach to give you practical experience in writing playbooks and roles, and executing them. 

( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)





( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson

I always find it interesting to hear what people view architecture as. A lot of people think it’s just about the design aspect, where you get to put pen to paper and create a solution. Even more people think that it’s just about putting together different technical components in a server room. And these people have interesting opinions on the importance of those activities to architecture. But, at the end of the day, the MOST important part of architecture is one thing and one thing only: requirements.
Requirements
Without requirements, you have no idea if you are actually designing a solution that matters. Without requirements, you have no way of knowing if those technical components that you are including on the server rack will actually be used. In short, you are only spending money without knowing if it’s worthwhile.
We all know of solutions that have been put into place and yet no one uses them. Why is that? Well, one very simple reason – no one bothered to check with the stakeholders what exactly they wanted. What’s the point of spending money on all those components if no one is going to use what you put together? That’s why you gather requirements so that you don’t waste money and actually have a usable solution. Not a solution that works but one that is actually used.
When you gather requirements, you don’t just sit down at a desk and dream up what you think the solution should meet. That’s just navel gazing and it’s no better than designing or building without requirements. Requirement gathering is all about talking to stakeholders to understand what they want and need. You gather those requirements and only then do you start looking for a design approach.
Now, when you say stakeholders, what do you mean? Well, remember that stakeholders include everyone that has a stake in how a solution works. So, it’s not just the end users that are interfacing with the solution or just the business owner who is providing the money. It’s also the operations folks that are supporting the solution. Remember, if the operations team can’t properly support a solution or would need to spend extra money to support it, then you have a more expensive solution than you may have wanted in the first place. So make sure you talk to the operations people about what they need in a solution as well.
Now, you’ve identified the stakeholders that you want to talk to and you are now scheduling meetings to gather those requirements. How do you do that? I would highly recommend that you don’t talk to them all in one room at the same time. There is always the proverbial ‘wallflower’ that sits in the back and doesn’t say anything but who will have a very valid point about a requirement. You will have domineering personalities that will want to be the focus of the meeting. And there will be people that lose focus during the meeting and do other things.
Instead, schedule one-on-one sessions with every stakeholder. A good requirement gathering session averages about 45 minutes per person, so schedule an hour for each person. Trust me; it may seem like you are spending a lot of time on this, but it will save you a lot of money over the longer term if you do things correctly from the start.
Now, you’ve scheduled your session with your stakeholders. How do you conduct the meeting? Well, first off, treat it like you would an audit. You don’t go in with preconceived ideas of what the solution is. What you do is ask your stakeholder very broad, open-ended questions and let them talk. Don’t show any indication on how you feel about a particular requirement that they bring up, just note it down. I would highly recommend that you have a spreadsheet for all the different requirements areas (for example, availability, security, maintenance, usability, etc.) so that you don’t forget to ask about them. And then just let the stakeholder talk and go in whatever direction they want to go in.
Once you’ve interviewed all the stakeholders, consolidate all the requirements and replay them back to the stakeholders as a whole. This is the time that you’ll want to have all the stakeholders in one room. You want them to see what the requirements are and agree to them before moving on. And you are bound to have conflicting requirements that will need to be hashed out between the stakeholders and reach mutual agreement.
Once the stakeholders have agreed to the requirements, you can now start going down the road of designing and building your solution. But ALWAYS refer back to the requirements at every phase. Don’t just gather the requirements and forget about them. Those requirements drive the success of the project, and the closer your end solution is to those requirements the more successful and used the project will be.
Oh, one more thing. There are always requirements that come up AFTER the gathering phase. If that happens, two things have to be kept in mind. First, it means that you didn’t do a good job at collecting the requirements in the first place and you need to figure out a way of improving your requirement gathering process. Second, accepting new requirements at this stage means going back and changing designs or builds, which costs time and money. Often, it’s better to just leave the new requirement for the next phase of the project rather than going back and reworking your design.
Requirements are the flesh and blood of a good solution, regardless of whether you are talking about security, infrastructure, application, or a network solution. And if you do it properly, your requirements can help make you a very successful architect moving forward.

If you found this article interesting and want to learn more about architecture and cybersecurity, you can explore Hands-On Cybersecurity for Architects. The book follows a clear, concise, and straightforward approach to explain the underlying concepts and enable you to develop, manage and securely architect solutions for your infrastructure.  
( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)




( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson


What makes Unity the most popular game engine? Find out below.



The game development space is overflowing with game engines offering capabilities for a diverse range of requirements. There’s the Unreal Engine for high-end sophisticated games, the advanced and feature-packed Godot, the mighty powerful CryEngine, which comes with full-engine source code and zero royalties, and the Marmalade SDK, which promises maximum exposure for the games you develop.
However, with over 700 million gamers worldwide, Unity dominates the game engine market. Offering immersive graphics and powerful features, Unity is a platform for artists, designers and developers to collaborate and create stunning 2D and 3D gameplay sequences. Fair enough, I hear you say, but why is Unity so popular? Let’s find out.

1.      Unity is free for all

Unity’s motto is to democratize game development; in fact, its Personal edition is free to use and download. It is also full of features so that independent developers won’t have to miss out on any functionality to develop immersive games. What’s more, you can create a full Unity game for free in the Personal edition and sell it. You don’t have to spend a dime on software unless you are making more than $100,000 per year by selling games made on Unity, in which case, you’ll have to upgrade to the Plus edition (for a mere $35/month, as of May 2018).

2.      Unity offers stunning realism

The latest tools of Unity are setting industry standards in realism. Its Physically Based Rendering feature along with global illumination and real-time compositing allow incredible detailing, yielding awesome and realistic graphics. In fact, the realism capabilities are so powerful that Unity can be used for tasks other than game development, such as creating interactive product catalogs and realistic visualization walk-throughs.

3.      Programming in Unity is trouble-free

Unity supports C# and UnityScript (a specialized variation of JavaScript made for Unity), so anyone with a background in either language can easily find their way around. An added bonus is that if programming is not your cup of tea, Unity has a plethora of visual scripting tools that enable you to create your own scripts and apply them to any game object. Plus, you can also check out Unity’s vast library of scripts for a multitude of gameplay mechanics that can make your job easier.

4.      Unity is platform-agnostic

It’s so important for developers to get their game running on multiple platforms. Thankfully, Unity supports a wide range of platforms, including iOS, Android, PCs and consoles, and the number of supported platforms is constantly increasing. The best part, however, is that you’ll need to make little to no changes to your workflow, as Unity is quite flexible in this respect. All it takes is clicking a few buttons, and your game is ready to be played across multiple platforms. Plus, Unity supports Oculus Rift, HTC Vive, Microsoft HoloLens, and several other VR systems, so you don’t have to limit your creativity.


5.      Unity’s very own Asset Store

Unity’s expansive Asset Store is its biggest asset. AI systems, 3D models, animations, complete projects, shaders, or audio—you name it and Unity has it! Want to get your dream game out of your head and into the hands of millions of gamers out there? No worries; just browse through the store and get access to myriad options for your own project. What’s more, you could also sell the assets you’ve created for a handsome price.

6.      Unity offers an entire suite of services

Unity Services is a new set of features that make building, sharing and selling games a lot more interesting and fun. Tools like Unity Cloud and Unity Collaborate allow backing up the entire game and building alternate versions without affecting the system. They also give you the freedom to jump back to an earlier version if things don’t seem to be going your way. Unity Services also has tools for analytics, ads, performance, multiplayer, and more, so you can get an in-depth look at how your game is performing and where users might encounter issues.




Unity 2017 Game Development Essentials

If you are an animator or a designer who’s taking baby steps in the world of game development, Unity is the most obvious choice of game engine. And when it comes to learning the essentials of game development on Unity, we have just the right solution for you. Unity 2017 Game Development Essentials is an end-to-end exercise in game development, covering environments, physics, sound, particles, and other key concepts to get you up and running. You’ll learn scripting games using C#, build your very first 2D and 3D games, create fully functional menus and HUDs, and much more.

The book is written by Tommaso Lintrami, an expert game developer who’s been building games since the age of 9. Tommaso is a man of many talents—he is a designer, developer, composer and writer. He has been working with Unity for over 9 years now, having developed a number of games on different platforms. Tommaso brings this expertise to the book, taking you through game development in the most fun and interactive way imaginable.


So what are you waiting for? Check out Unity 2017 Game Development Essentials and get ready to make a mark in the gaming industry!



( This sponsored post is part of a series designed to highlight recently published Packt books about leading technologies and software applications. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners.)





( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson

This series has stepped through cloud migration best practices. After providing an overview, we discussed classifying your data and screening your application portfolio.
With all of that completed, it’s now time to select the right cloud service provider (CSP) and finally execute the migration. Cloud provider selection is an area that many enterprises ignore. Executives looking to take advantage of the real business value that the cloud delivers often view providers simply as commodity technology suppliers. With this mindset, decision-makers usually pick the most familiar name. But this strategy is little more than throwing the dice.
A Smarter Way to Select a Provider
Cloud service provider selection requires a well-developed hybrid IT strategy, an unbiased application portfolio review and the appropriate due diligence in the evaluation of all credible cloud service providers. When discussing this linkage, I leverage the Digital Transformation Layered Triangle as a visualization tool. After agreeing to an appropriate high-level hybrid IT strategy, a core tenet of digital transformation, candidate CSPs’ capabilities must be compared based on their:
  • Availability of technology services that align with the business/mission model.
  • Availability of data security controls that address legal, regulatory and data sovereignty limitations.
  • Compatibility of CSP sales process with enterprise acquisition process.
  • Cost forecast alignment with budgetary expectations.

Understanding Cloud Service Agreements
Comparing cloud service agreements from the remaining viable service providers is next. These agreements typically have three components:
  • Customer Agreement: Describes the overall relationship between the customer and provider. Service management includes the processes and procedures used by the cloud provider. Thus, it’s crucial to provide definitions of the roles, responsibilities and execution of the processes. The customer agreement does this. This document can be called a “master agreement,” “terms of service” or simply “agreement.”
  • Acceptable Use Policy (AUP): Defines activities that the provider considers to be improper or outright illegal. There is considerable consistency across cloud providers in these documents. While specific details may vary, the scope and effect of these policies remain the same, and these provisions typically generate the least concerns or resistance.
  • Service-Level Agreement (SLA): Describes levels of service in terms of availability, serviceability or performance. The SLA specifies thresholds and financial penalties associated with violations of these thresholds. Well-designed SLAs can avoid conflict and facilitate the resolution of an issue before it escalates into a dispute.
Designing a CSA Evaluation
The CSA Evaluation must take into account all critical functional and nonfunctional organizational requirements and IT governance policies, to ensure:
  • Mutual understanding of roles and responsibilities.
  • Compatibility with all enterprise business level policies.
  • Identifiable metrics for all critical performance objectives.
  • Agreement on a plan for meeting all data security and privacy requirements.
  • Identified service management points of contact for each critical technology service.
  • Agreement on service failure management process.
  • Agreement on disaster recovery plan process.
  • An approved hybrid IT governance process.
  • Agreement on a CSP exit process.
This due diligence process maximizes the success probability of any cloud migration program. With CSP selection complete, the organization can now tackle the hard work of executing the actual migration. This task should include:
  • Planning and executing an organizational change management plan.
  • Verifying and clarifying all key stakeholder roles.
  • Detailed project planning and execution.
  • Establishing internal processes for monitoring and periodically reporting the status of all key performance indicators.
  • Establishing an internal cloud migration status feedback and response process.
The most important lesson learned across all industries is that cloud migration is not a project for the IT team alone. This is an enterprise-wide endeavor that requires executive leadership and focused change management efforts across multiple internal domains.


This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson

In part three of this series on cloud migration best practice, I will focus on migrating the application itself. If you haven’t had the opportunity to read our recommendations from part two, “Classifying Your Data,” check it out — those activities are crucial to the decisions addressed in this installment.
While many organizations are aggressively moving applications to the cloud, they often set the criteria for a cloud service provider (CSP) without the necessary technical and operational due diligence. This widely observed error typically leads to migration delays, failures to attain expected business goals and general disillusionment with cloud computing. However, avoiding this disappointing experience is relatively easy. All it takes is executing an application portfolio screening process that takes a look at:
  • The most appropriate CSP target deployment environment.
  • Each application’s specific business benefits, key performance metrics and target return on investment.
  • Each application’s readiness for cloud migration.
Build a foundation
The first step in the screening process is determining the most appropriate cloud deployment environment. This practice establishes an operational foundation for subsequent service provider selections by using relevant stakeholder goals and organizational constraints to guide service model, deployment model and implementation option strategy decisions. Enterprises transforming their information technology should evaluate all available options by analyzing an app transition across three specific high-level domains and sub-domains, such as:
  • IT implementation model
    • Traditional
    • Managed service provider
    • Cloud service provider
  • Technology service model
    • Infrastructure-as-a-Service
    • Platform-as-a-Service
    • Software-as-a-Service
  • IT infrastructure deployment model
    • Private
    • Hybrid
    • Community
    • Public
Cloud computing domains
These domains and sub-domains outline a structured decision process for placing the right application workload into the most appropriate IT environment. This is not a static decision: As business goals, technology options and economic models change, the relative value of these combinations to your organization may change as well. Plus, single-point solutions are rarely sufficient to meet all enterprise needs. By the end of the cloud migration journey, an organization may require a mix of two, three or as many as 10 variations. Infrastructure variation is why an organizational hybrid IT adoption strategy is crucial. Figure 1 is an example application decision matrix suitable for this step.
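As a purely hypothetical illustration of the kind of mapping such a matrix captures (this is not the author’s Figure 1, and the application names and choices below are invented), each row pairs an application with one option from each domain:

    # Hypothetical decision-matrix rows
    - application: hr_payroll
      it_implementation_model: cloud_service_provider
      technology_service_model: SaaS
      infrastructure_deployment_model: public
    - application: claims_processing
      it_implementation_model: managed_service_provider
      technology_service_model: PaaS
      infrastructure_deployment_model: hybrid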


With target deployment environments selected, companies should evaluate each candidate application regarding its business benefits and its ability to leverage cloud computing’s technical and operational advantages. Using a simple qualitative scale, stakeholders should agree on:
  • Key performance indicators relevant to business or mission owner goals.
  • Expected or target financial return on investment.
  • Each application’s ability to use cloud infrastructure scalability to:
    • Optimize time to deliver products or services.
    • Reduce time from business decision to execution.
    • Optimize cost associated with IT resource capacity.
    • Increase speed of cost reduction.
  • Possible application performance improvements that may include:
    • More predictable deployment and operational costs.
    • Improved resource utilization.
    • Quantifiable service level metrics.
  • Value delivered by improved user availability that may be indicated by:
    • Improved customer experience.
    • Implementation of intelligent automation.
    • Improved revenue margin.
    • Enhanced market disruption.
  • Enhancing application reliability by:
    • Establishing enforceable service level agreements.
    • Increasing revenue efficiencies.
    • Optimizing profit margin.
Determine KPIs
Figure 2 provides a baseline KPI and ROI model that can be easily modified to effectively manage a qualitative assessment across time, cost, quality and revenue margin criteria.


The final step of this application screening process is determining each application’s readiness to actually migrate to the cloud. This step should qualitatively assess the alignment of an application’s cloud migration decision to the organization’s:
  • Risk appetite and risk mitigation options.
  • Ability to implement, manage and monitor data security controls.
  • Expected migration timelines.
  • Expected ROI realization timelines.
  • Current culture and necessary organizational change management resources.
Performing an application portfolio screening process can be useful in aligning cloud application migration projects with organizational business, technical, security and operational goals. It can also avoid application migration delays, failed business goals and team disillusionment by building and monitoring stakeholder consensus.
In the next and final installment of this series, data classification and application screening are linked to cloud service provider selection and application migration execution.


This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson


In my first post of this series, “Cloud Migration Part One: An Overview,” I provided a high-level summary of how enterprises should migrate applications to the cloud. In this installment, the focus is on enterprise data and why your organization may need to review and reclassify its data before moving anything to the cloud.

Cloud computing has done more than change the way enterprises consume information technology.  It is also changing how organizations need to protect their data.  Some may see this as an “unintended consequence” but the headlong rush to save money by migrating applications to the cloud has simultaneously uncovered long-hidden application security issues.  This revelation is mostly due to the wide adoption of “Lift & Shift” as a cloud migration strategy.  Using this option typically precludes any modifications of the migrating application.  It can also result in the elimination of essential data security controls and lead to grave data breaches.

While there is no doubt about the good intentions of all involved, enterprise applications were traditionally developed for deployment into the organization’s own IT infrastructure.  This implicit assumption also included the use of infrastructure-based security controls to protect organizational data.  These generally accepted industry practices were coupled with a cultural propensity to err on the side of caution by protecting most data at generally high levels.  During an implementation, organizations typically used a two-level (sensitive and non-sensitive) or at most a four-level data classification model.

Today, the cloud has quickly become the preferred deployment environment for enterprise applications.  This shift to using “other people’s infrastructure” has brought with it tremendous variability in the nature and quality of infrastructure-based data security controls.  It is also forcing companies to shift away from infrastructure-centric security to data-centric information security models.  Expanding international electronic commerce, ever-tightening national data sovereignty laws and regional data protection and privacy regulations (e.g., GDPR) have also combined to make many data classification schemas generally untenable.  The Cloud Security Alliance and the International Information Systems Security Certification Consortium (ISC2), in fact, both suggest that corporate data may need to be classified across at least eight categories (a purely illustrative sketch of how these might be recorded follows the list), namely:
  • Data type (format, structure)
  • Jurisdiction and other legal constraints
  • Context
  • Ownership
  • Contractual or business constraints
  • Trust levels and source of origin
  • Value, sensitivity, and criticality
  • The obligation for retention and preservation
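
Neither organization prescribes a specific record format, but as a purely illustrative sketch, a single data-type entry covering these eight categories might be captured like this (every value below is invented):

    # Illustrative only: one possible record per data type
    - data_type: customer_contact_records      # format/structure: structured, relational
      jurisdiction: ["EU (GDPR)"]
      context: "CRM and marketing analytics"
      ownership: "VP of Sales (Process Data Owner)"
      contractual_constraints: "Processor agreement with SaaS CRM vendor"
      trust_level_and_origin: "Customer-supplied via web forms"
      value_sensitivity_criticality: "High / personal data / business-critical"
      retention_obligation: "7 years, then secure deletion"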

Moving to classify data at this level means that one of the most important initial steps of any cloud computing migration must be a review and possible reclassification of all organizational data.  Bypassing this step leaves newly migrated applications as data breaches waiting to happen.  At a minimum, an enterprise should:
  • Document all key business processes destined for cloud migration;
  • Identify all data types associated with each migrating business process;
  • Explicitly assign the role of “Process Data Owner” to appropriate individuals; and
  • Assign each “Process Data Owner” the task of setting and documenting the minimum required security controls for each data type.

After completing these steps, companies should review and update their IT governance process to reflect any required expansion of their corporate data classification model.  These steps are also aligned with the ISO 27034-1 framework for implementing cloud application security.  This standard explicitly takes a process approach to specifying, designing, developing, testing, implementing and maintaining security functions and controls in application systems.  It defines application security not as the state of security of an application system (the results of the process) but as “a process an organization can perform for applying controls and measurements to its applications in order to manage the risk of using them.”

In Part 3 of this series, I will discuss application screening and related industry best practices and include:
  • Determining the most appropriate target application deployment environment;
  • Determining each application's business value, key performance indicators and target return on investment;
  • Determining each application's migration readiness; and
  • Deciding the appropriate application migration strategy.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016-2018)
Follow me at http://Twitter.com/Kevin_Jackson