The potential of cloud computing and artificial intelligence (AI) is irresistible. The cloud provides the backbone for any data initiative, and AI technologies can then be used to derive key insights for both greater business intelligence and topline revenue. Yet AI is only as good as the data strategy upon which it sits.

At the AI & Big Data Expo in Amsterdam today, delegates saw the proof of the pudding in NetApp's cloud and data fabric initiatives, with Dreamworks Animation cited as a key client that has been able to transform its operations.

For the cloud and AI melting pot, however, there are other steps which need to be taken. Patrick Slavenburg, a member of the IoT Council, opened the session with an exploration of how edge computing was taking things further. As Moore's Law finally begins to run out of steam, Slavenburg noted there are up to 70 startups working solely on new microprocessors today. 

Noting how technology history tends to repeat itself, he added that today is a heyday for microprocessor architecture for the first time since the 1970s. The key aspect for edge here is being able to perform deep learning at that architectural level, with more lightweight algorithms.

Florian Feldhaus, enterprise solutions architect at NetApp, stressed that data is the key to running AI. According to IDC, by 2020 90% of corporate strategies will explicitly mention data as a critical enterprise asset and analytics as an essential competency. "Wherever you store your data, however you manage it, that's the really important piece to get the benefits of AI," he explained.

The industry continues to insist that it is a multi-cloud, hybrid cloud world today. It is simply no longer a choice between Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP), but assessing which workloads fit which cloud. This is also the case in terms of what your company's data scientists are doing, added Feldhaus. Data scientists need to use data wherever they want, he said - use it in every cloud and move the data around to make it available to them.

"You have to fuel data-driven innovation on the world's biggest clouds," said Feldhaus. "There is no way around the cloud." With AI services available in seconds, this was a key point in terms of getting to market. It is also the key metric for data scientists, he added.

NetApp has been gradually moving away from its storage heritage to focus on its 'data fabric' offering - an architecture which offers access to data across multiple endpoints and cloud environments, as well as on-premises. The company announced yesterday an update to its data fabric, with greater integration across Google's cloud as well as support for Kubernetes.

Feldhaus noted the strategy was based on NetApp 'wanting to move to the next step'. Dreamworks was one customer looking at this future, with various big data pipelines allied with the need to process data in a short amount of time.

Ultimately, if organisations want to make the most of the AI opportunity - and time is running out for laggards - then they need their data strategy sorted out. Yes, not everything can be moved to the cloud and some legacy applications need a lot of care and attention, but a more streamlined process is possible. Feldhaus said NetApp's data fabric had four key constituents: discovering the data, activating it, automating, and finally optimising.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

  • Improving revenues using BI is now the most popular objective enterprises are pursuing in 2019
  • Reporting, dashboards, data integration, advanced visualisation, and end-user self-service are the most strategic BI initiatives underway in enterprises today
  • Operations, executive management, finance, and sales are primarily driving business intelligence (BI) adoption throughout enterprises today
  • Tech companies’ operations and sales teams are the most effective at driving BI adoption across industries surveyed, with advertising driving BI adoption across marketing

These and many other fascinating insights are from Dresner Advisory Associates' 10th edition of its popular Wisdom of Crowds® Business Intelligence Market Study. The study is noteworthy in that it provides insights into how enterprises are expanding their adoption of Business Intelligence (BI) from centralised strategies to tactical ones that seek to improve daily operations. The Dresner research team's broad assessment of the BI market makes this report unique, including its use of visualisations that provide a strategic view of market trends. The study is based on interviews with respondents from the firm's research community of over 5,000 organisations, as well as vendors' customers and qualified crowdsourced respondents recruited over social media. Please see pages 13 – 16 for the methodology.

Key insights from the study include the following:

Operations, executive management, finance, and sales are primarily driving business intelligence (BI) adoption throughout their enterprises today

More than half of the enterprises surveyed see these four departments as the primary initiators or drivers of BI initiatives. Over the last seven years, Operations departments have increased their influence over BI adoption more than any other department included in the current and previous surveys. Marketing and Strategic Planning are also the most likely to be sponsoring BI pilots and looking for new ways to introduce BI applications and platforms into daily use.

Tech companies’ operations and sales teams are the most effective at driving BI adoption across industries surveyed, with advertising driving BI adoption across marketing

Retail/Wholesale and Tech companies' sales leadership is primarily driving BI adoption in their respective industries. It's not surprising to see that the leading influencer among Healthcare respondents is resource-intensive HR. The study also found that Executive Management is most likely to drive business intelligence in consulting practices.

Reporting, dashboards, data integration, advanced visualisation, and end-user self-service are the most strategic BI initiatives underway in enterprises today

Second-tier initiatives include data discovery, data warehousing, data mining/advanced algorithms, and data storytelling. Comparing the last four years of survey data, Dresner's research team found reporting retains its all-time high score as the top priority, while data storytelling, governance, and data catalogues hold momentum.

BI software providers most commonly rely on executive-level personas to design their applications and add new features

Dresner's research team found all vertical industries except Business Services target business executives first in their product design and messaging. Given the customer-centric nature of advertising and consulting services business models, it is understandable that BI vendors selling to them focus primarily on customer personas. The following graphic compares targeted users for BI by industry.

Improving revenues using BI is now the most popular objective in 2019, despite BI initially being positioned as a solution for compliance and risk management

Executive Management, Marketing/Sales, and Operations are driving the focus on improving revenues this year. Nearly 50% of enterprises now expect BI to deliver better decision making, making reporting and dashboards must-have features. Interestingly, enterprises aren't looking to BI as much for improving operational efficiencies, cost reductions, or competitive advantage.

Over the last 12 to 18 months, more tech manufacturing companies have initiated new business models that require their operations teams to support a shift from product to services revenues. An example of this shift is the introduction of smart, connected products that provide real-time data, which serves as the foundation for future services strategies.

In aggregate, BI is achieving its highest levels of adoption in R&D, executive management, and operations departments today

The growing complexity of products and business models in tech companies, and the increasing reliance on analytics and BI in retail/wholesale to streamline supply chains and improve buying experiences, are contributing factors to the increasing levels of BI adoption in these three departments. The following graphic compares BI's level of adoption by function today.

Enterprises with the largest BI budgets this year are investing more heavily into dashboards, reporting, and data integration

Conversely, those with smaller budgets are placing a higher priority on open source-based big data projects, end-user data preparation, collaborative support for group-based decision-making, and enterprise planning. The following graphic provides insights into technologies and initiatives strategic to BI at an enterprise level by budget plans.

Marketing/sales and operations are using the greatest variety of BI tools today

The survey shows how conversant Operations professionals are with the BI tools in use throughout their departments: every one of them knows how many, and most likely which types of, BI tools are deployed in their departments. Across all industries, Research & Development (R&D), Business Intelligence Competency Center (BICC), and IT respondents are most likely to report they have multiple tools in use.


In today’s complex business ecosystem, the relationship between the CIO and the CFO has to be closely aligned, which goes beyond just an agreement on the budget. The CIO and the CFO have to be as one, working together as two strong pillars that ensure the organisation meets every demand the regulators, shareholders, customers, partners and employees place on it. The days of tension between the CIO and CFO are long gone.

Whether your day to day responsibility is financial or technological, there is a binding commonality between you both: as CIO and CFO, you both share the responsibility for risk and compliance in the organisation. That shared responsibility extends to a complete understanding of each other’s roles and challenges. As CIO, you should be well versed on the financial processes and demands, and a transformational CFO is well briefed and often passionate about technology.

With technology underpinning every aspect of an organisation and becoming increasingly important, the responsibilities of the CIO have evolved beyond managing a set of assets and services treated as a cost centre to the organisation. Technology is now a foundation of the business, no matter its vertical market. If we accept that technology underpins our organisations, it is important to realise that this brings increased risk. With this risk comes greater demand for compliance. The last decade has seen a wealth of regulations, all of which bring a technology compliance element to your organisation.

It is, therefore, vital for the CIO to be able to articulate risk to the CFO in financial terms, whether it be data or cyber security threats. This is critical because any problem that creates risk has a technical and operational impact and solution. For example, investing in new infrastructure or balancing the organisation's concerns about intellectual property protection when using the public cloud are technology questions. But they are clearly business questions too. If the organisation opts to delay infrastructure investment, it could impact business continuity. That in turn impacts the customer experience.
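One simple way to frame that conversation, sketched below with purely hypothetical figures rather than any method prescribed here, is an annualised loss expectancy (ALE) calculation: it converts a threat's likelihood and per-incident impact into an expected yearly cost the CFO can weigh against the cost of mitigation.

```python
# Illustrative annualised loss expectancy (ALE) calculation.
# All figures below are hypothetical placeholders, not real estimates.

def annualised_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: the expected yearly cost of a given risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: an outage costing roughly £250,000 per incident (SLE),
# expected about once every two years (ARO = 0.5).
outage_ale = annualised_loss_expectancy(250_000, 0.5)

# Compare against the annual cost of the mitigating infrastructure investment.
mitigation_cost_per_year = 80_000
print(f"Expected annual loss without investment: £{outage_ale:,.0f}")
print(f"Annual cost of mitigation: £{mitigation_cost_per_year:,.0f}")
print(f"Net expected benefit of investing: £{outage_ale - mitigation_cost_per_year:,.0f}")
```

Crude as it is, this kind of arithmetic turns "we should upgrade the infrastructure" into a figure the audit or risk committee can debate.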

These may sound like decisions that are the sole domain of the CIO, but that is not the case. If technology is truly at the heart of the organisation, then the responsibility for these decisions has to be shared with your peers in the C-suite.

The CIO has to understand, extremely well, the financial system of the organisation, and be able to calculate the risks in a way that the CFO finds accurate and helpful. Together, you need to be able to take these assessments to the audit, or risk, committee and on to the board. This entire discussion is not technical; it is a financial discussion. In fact, you should not have a C in your job title until you realise that part of your job entails being able to hold your own when having financial discussions.

Honorific title

There are a lot of people who have grown up through the technology ranks and eventually get a CIO title. And, in my experience, there are many who are CIO in name only; the title is somewhat honorific.

Am I being disloyal to my CIO peers? No. You can tell the difference between CIOs that are fluent in financial terms and ideas and those that aren’t.

IT leaders are new to the C-suite. Many CIOs have come up from an operational background, which has required and benefited from our detail orientation. Important as detail is in the C-suite, so too is a collaborative approach, and this is where CIOs need to develop their skill set. Because, in truth, the C-suite is a lifeboat with just one packet of biscuits to survive on. The team that shares the biscuits of responsibility is the one that will sail into a safe harbour. Any group of executives at the C-level is working hard to figure out how to direct the company, and they all share the authority.

Lastly, a good relationship with the CFO - and everyone else at the C-level - in no way detracts from the relationship a CIO must have with the CEO. Buy-in and participation from the CEO is vital. Everyone in the C-level needs the CEO’s support and understanding. That CEO air cover is most effective when it is shared between the CFO and CIO. 


Note: The most common request from this blog's readers is how to further their careers in analytics, cloud computing, data science, and machine learning. I've invited Alyssa Columbus, a data scientist at Pacific Life, to share her insights and lessons learned on breaking into the field of data science and launching a career there. The following guest post is authored by her.

Earning a job in data science, especially your first, isn't easy, particularly given the surplus of analytics job-seekers relative to open analytics roles.

Many people looking to break into data science, from undergraduates to career changers, have asked me how I’ve attained my current data science position at Pacific Life. I’ve referred them to many different resources, including discussions I’ve had on the Dataquest.io blog and the Scatter Podcast. In the interest of providing job seekers with a comprehensive view of what I’ve learned that works, I’ve put together the five most valuable lessons learned. I’ve written this article to make your data science job hunt easier and as efficient as possible.

Continuously build your statistical literacy and programming skills

Currently, there are 24,697 open data scientist positions on LinkedIn in the United States alone. The following list of the top 10 data science skills was created by using data mining techniques to analyse all of those open positions.

As of April 14, the top three most common skills requested in LinkedIn data scientist job postings are Python, R, and SQL, closely followed by Jupyter Notebooks, Unix Shell/Awk, AWS, and Tensorflow. The following graphic provides a prioritised list of the most in-demand data science skills mentioned in LinkedIn job postings today.

Hands-on training is the best way to develop and continually improve statistical and programming skills, especially with the languages and technologies LinkedIn's job postings prioritise. Getting your hands dirty with a dataset is often much better than reading through abstract concepts and not applying what you've learned to real problems. Your applied experience is just as important as your academic experience, and taking statistics and computer science classes helps to translate theoretical concepts into practical results. The toughest thing to learn (and also to teach) about statistical analysis is the intuition for what the big questions to ask of your dataset are. Statistical literacy, or "how" to find the answers to your questions, comes with education and practice. Strengthening your intellectual curiosity, or insight into asking the right questions, comes through experience.
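As a deliberately small illustration of "getting your hands dirty with a dataset", the sketch below runs a first pass of exploratory analysis in Python with pandas; the file name and column names are hypothetical.

```python
# Minimal exploratory sketch; "customers.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("customers.csv")

print(df.shape)                           # how many rows and columns?
print(df.dtypes)                          # what types are we working with?
print(df.isna().mean().sort_values())     # where is data missing, and how badly?
print(df.describe())                      # basic distributions of numeric columns

# Start asking questions of the data: which customer segment churns the most?
print(df.groupby("segment")["churned"].mean().sort_values(ascending=False))
```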

Continually be creating your own, unique portfolio of analytics and machine learning projects

Having a good portfolio is essential to being hired as a data scientist, especially if you don't come from a quantitative background or don't have prior experience in data science. Think of your portfolio as proof to potential employers that you are capable of excelling in the role of a data scientist, with both the passion and the skills to do the job. When building your data science portfolio, select and complete projects that qualify you for the data science jobs you're most interested in. Use your portfolio to promote your strengths and innate abilities by sharing projects you've completed on your own. Some skills I'd recommend you highlight in your portfolio include:

  • Your programming language of choice (e.g., Python, R, Julia, etc.).
  • The ability to interact with databases (e.g., your ability to use SQL).
  • Visualisation of data (static or interactive).
  • Storytelling with data. This is a critical skill. In essence, can someone with no background in whatever area your project is in look at your project and gain some new understandings from it?
  • Deployment of an application or API. This can be done with small sample projects (e.g., a REST API for an ML model you trained or a nice Tableau or R Shiny dashboard).
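For that last item, a deployment does not need to be elaborate. The sketch below shows one possible shape of a small prediction API using Flask and a scikit-learn model serialised with joblib; the model file and endpoint are hypothetical examples rather than a prescribed stack.

```python
# Minimal REST API serving a previously trained model (illustrative only).
# Assumes a scikit-learn model was saved to "model.joblib" with joblib.dump().
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")   # hypothetical pre-trained model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()      # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=5000)
```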

Julia Silge and Amber Thomas both have excellent examples of portfolios that you can be inspired by. Julia’s portfolio is shown below.

Get (or git!) yourself a website

If you want to stand out, along with a portfolio, create and continually build a strong online presence in the form of a website.  Be sure to create and continually add to your GitHub and Kaggle profiles to showcase your passion and proficiency in data science. Making your website with GitHub Pages creates a profile for you at the same time, and best of all it’s free to do. A strong online presence will not only help you in applying for jobs, but organisations may also reach out to you with freelance projects, interviews, and other opportunities.

Be confident in your skills and apply for any job you’re interested in, starting with opportunities available in your network

If you don’t meet all of a job’s requirements, apply anyway. You don’t have to know every skill (e.g., programming languages) on a job description, especially if there are more than ten listed. If you’re a great fit for the main requirements of the job’s description, you need to apply. A good general rule is that if you have at least half of the skills requested on a job posting, go for it. When you’re hunting for jobs, it may be tempting to look for work on company websites or tech-specific job boards. I’ve found, as have many others, that these are among the least helpful ways to find work. Instead, contact recruiters specialising in data science and build up your network to break into the field. I recommend looking for a data science job via the following sources, with the most time devoted to recruiters and your network:

  • Recruiters
  • Friends, family, and colleagues
  • Career fairs and recruiting events
  • General job boards
  • Company websites
  • Tech job boards
Bring the same level of intensity to improving your communication skills as you do to your quantitative skills - as data scientists need to also excel at storytelling

One of the most important skills for data scientists to have is the ability to communicate results to different audiences and stakeholders so others can understand and act on their insights. Since data projects are collaborative across many teams and results are often incorporated into larger projects, the true impact of a data scientist's work depends on how well others can understand their insights to take further action and make informed decisions.

Alyssa Columbus is a Data Scientist at Pacific Life and member of the Spring 2018 class of NASA Datanauts. Previously, she was a computational statistics and machine learning researcher at the UC Irvine Department of Epidemiology and has built robust predictive models and applications for a diverse set of industries spanning retail to biologics. Alyssa holds a degree in Applied and Computational Mathematics from the University of California, Irvine and is a member of Phi Beta Kappa. She is a strong proponent of reproducible methods, open source technologies, and diversity in analytics and is the founder of R-Ladies Irvine. You can reach her at her website: alyssacolumbus.com.


Dropbox has launched a significant redesign, repositioning itself as an enterprise collaboration workspace and moving away from its file storage heritage.

The move will see Dropbox aim to be a one-stop-shop. The relaunched desktop app will enable users to create, access and share content across the Google and Microsoft portfolios, opening Google Docs and Microsoft Office files and offering synchronised search, alongside partnerships with Atlassian, Slack, and Zoom.

The latter partnerships are of particular interest; users will be able to start Slack conversations and share content to Slack channels directly from Dropbox, while also being able to add and join Zoom meetings from Dropbox, and again share files. The specific features with Atlassian weren't detailed, but Dropbox promises 'enhanced integrations [to] help teams more effectively manage their projects and content.'

As a blog post from the Dropbox team put it, the primary motivator for the move was to address the ‘work about work’ which slowed many organisations down. “Getting work done requires constant switching between different tools, and coordinating work with your team usually means a mountain of email and meetings,” the company wrote. “It all adds up to a lot of time and energy spent on work that isn’t the actual work itself. But we’ve got a plan, and we’re excited to share how we’re going to help you get a handle on all this ‘work about work.’”

From the company’s perspective, the move makes sense. As regular readers of this publication will be more than aware, the industry – and almost all organisations utilising cloud software – has moved on from simple storage.

Dropbox has made concerted efforts in the past to help customers get more out of their data, rather than focusing on storage itself. In October the company upgraded its search engine, Nautilus, to include machine learning capabilities – primarily to help understand and predict users' needs for the documents they search for, rather than being a slave to any one algorithm.

Indeed, it can be argued the company has shifted away from cloud computing as both a marketing message and as an internal business process. Writing for Bloomberg at the time of Dropbox’s IPO filing last March, Shira Ovide noted that the company building out its own infrastructure – a two and a half year project to move away from Amazon Web Services (AWS) – helped make its IPO proposition more viable.

You can read more about the redesign here.

Picture credit: Dropbox


Let me start by painting the picture: You’re the CFO, or manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources.

Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise of moving to the cloud to cut costs hasn't materialised, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

From reserved instances and on-demand costs, to the “unblended” and “blended” rates, attempting to even make sense of the bill has you no closer to understanding where you can optimise your spend.

It's not even just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, intrinsically compounding the problem—especially when staff leave them running, the proverbial meter racking up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and it requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimisation. This is where our six pillars of cloud optimisation come in.

Reserved instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimisations, erroneously, begin and end here—providing you and your organisation with a less than optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through third-party optimisation tools. For example, CloudHealth by VMware provides a clear picture of where to purchase RIs based on your current cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider here are Risk Tolerance and Centralised RI Management portfolios.

  • Risk tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organisation take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings
     
  • Centralised RI management portfolios allow for deeper RI coverage across organisational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralised, whole organisation approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program

Once you identify your risk tolerance and centralise your approach to RIs, you can take advantage of this optimisation option. An RI-only optimisation strategy, though, is short-sighted: it only allows you to take advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the five other optimisation pillars to achieve the most effective optimisation.
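To make the risk-tolerance trade-off concrete, here is a rough break-even sketch comparing on-demand and reserved pricing at different coverage levels; the hourly rates are hypothetical, not any provider's list prices.

```python
# Illustrative RI vs on-demand comparison; the hourly rates are hypothetical,
# not published list prices.

HOURS_PER_YEAR = 8_760

def yearly_cost(on_demand_rate: float, ri_effective_rate: float,
                ri_coverage: float) -> float:
    """Blend RI-covered and on-demand hours for one instance over one year.

    ri_coverage is the fraction of hours covered by reservations (0.0 to 1.0).
    """
    covered = HOURS_PER_YEAR * ri_coverage * ri_effective_rate
    uncovered = HOURS_PER_YEAR * (1 - ri_coverage) * on_demand_rate
    return covered + uncovered

on_demand = 0.10      # hypothetical $/hour on demand
ri_one_year = 0.065   # hypothetical effective $/hour with a 1-year commitment

baseline = yearly_cost(on_demand, ri_one_year, 0.0)
for coverage in (0.2, 0.7, 1.0):
    cost = yearly_cost(on_demand, ri_one_year, coverage)
    print(f"coverage {coverage:.0%}: ${cost:,.0f}/yr "
          f"(saves ${baseline - cost:,.0f} vs pure on-demand)")
```

Running the same comparison with three-year rates shows why higher coverage saves more but carries more risk if consumption drops.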

Auto-parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during the off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
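As one possible illustration of the spin-down step, the sketch below uses boto3 to stop running EC2 instances that carry a hypothetical autopark=true tag; a scheduler such as cron, AWS Instance Scheduler or a Lambda function would invoke it when off-hours begin.

```python
# Minimal auto-parking sketch: stop running EC2 instances tagged for off-hours
# shutdown. The "autopark"/"true" tag convention is a hypothetical example.
import boto3

ec2 = boto3.client("ec2")

def park_tagged_instances():
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:autopark", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Parked:", park_tagged_instances())
```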

Right-sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyse resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate, as right-sizing is an ongoing activity that, to be truly effective, requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department-level chargebacks, and properly monitoring CPU, memory, and I/O.
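As an illustrative starting point for that monitoring step, the sketch below pulls two weeks of average CPU utilisation for a single instance from CloudWatch and flags it as a down-sizing candidate; the instance ID and the 20% threshold are hypothetical.

```python
# Minimal right-sizing signal: flag an instance whose average CPU stays low.
# The instance ID and the 20% threshold are hypothetical examples.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, days: int = 14) -> float:
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,                # hourly datapoints
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

cpu = average_cpu("i-0123456789abcdef0")
if cpu < 20.0:
    print(f"Average CPU {cpu:.1f}% over two weeks - candidate for a smaller instance type")
```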

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimisation pillars?

Family refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organisation is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimisation. As a result, for many organisations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimisation service offering.

Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case = $$ spent for no purpose. And, when there is no limit to the amount of resources you can use, there is also no incentive to individuals using the resources to self-regulate their unused/under-utilised instances. Some examples of waste in the cloud include:

  • AWS RDSs or Azure SQL DBs without a connection
  • Unutilised AWS EC2s
  • Azure VMs that were spun up for training or testing
  • Dated snapshots that are holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting. It is a great reason to invest the time and energy in developing a proper tagging strategy, however, since waste will be instantly traceable to the organisational unit that incurred it, and therefore, easily marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in cloud – for at least a year.
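As an example of how some of this waste can be surfaced programmatically, the sketch below lists unattached EBS volumes with boto3; similar queries can be written for idle load balancers, stale snapshots and unused databases.

```python
# Minimal waste report: unattached EBS volumes (status "available") and their sizes.
import boto3

ec2 = boto3.client("ec2")

def unattached_volumes():
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for volume in page["Volumes"]:
            yield volume["VolumeId"], volume["Size"]   # size in GiB

for volume_id, size_gib in unattached_volumes():
    print(f"{volume_id}: {size_gib} GiB provisioned but attached to nothing")
```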

Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, though, because it is so effortless to use, cloud storage can, in a very short matter of time, expand exponentially, making it nearly impossible to predict accurate cloud spend. Cloud storage is usually charged by four characteristics:

  • Size – how much storage do you need?
  • Data transfer (bandwidth) – how often does your data need to move from one location to another?
  • Retrieval time – how quickly do you need to access your data?
  • Retrieval requests – how often do you need to access your data?

There are a variety of options for different use cases including using more file storage, databases, data backup and/or data archives. Having a solid data lifecycle policy will help you estimate these numbers, and ensure you are both right-sizing and using your storage quantity and bandwidth to its greatest potential at all times.
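A back-of-the-envelope model of those four characteristics might look like the sketch below; the unit prices are hypothetical placeholders, since real tariffs vary by provider, region, storage class and tier.

```python
# Back-of-the-envelope monthly storage cost model. The unit prices are
# hypothetical placeholders; real tariffs vary by provider, region and tier.

def monthly_storage_cost(size_gb: float,
                         transfer_gb: float,
                         retrieval_gb: float,
                         retrieval_requests: int,
                         price_per_gb_month: float = 0.023,
                         price_per_transfer_gb: float = 0.09,
                         price_per_retrieval_gb: float = 0.01,
                         price_per_1k_requests: float = 0.0004) -> float:
    return (size_gb * price_per_gb_month
            + transfer_gb * price_per_transfer_gb
            + retrieval_gb * price_per_retrieval_gb
            + (retrieval_requests / 1_000) * price_per_1k_requests)

# Example: 5 TB stored, 500 GB transferred out, 200 GB retrieved via 100k requests.
print(f"${monthly_storage_cost(5_000, 500, 200, 100_000):,.2f} per month")
```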

So, you see, each of these six pillars of optimisation houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, it can seem that wrangling your wayward cloud is unlikely. Moreover, optimising only one of the pillars without considering the others offers little to no improvement, and can, in fact, unintentionally cost you more money over time. An efficacious optimisation process must take all the pillars and the way they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn't continue, and implement the right tools to allow your team to make informed decisions regularly.

The good news is that the future is bright. Once you have assessed your current environment, taken the pillars into account, made the changes required to optimise your cloud, and found a method by which to make this process continuous, you can investigate optimisation through application refactoring, ephemeral instances, spot instances and serverless architecture.


Over the past few years, companies have been rapidly transitioning to dynamic, hybrid cloud environments in a bid to keep up with the constant demand to deliver something new. However, while the cloud provides the agility businesses crave, the ever-changing nature of these environments has generated unprecedented levels of complexity for IT teams to tackle.

Traditional performance management strategies have been stretched to breaking point, as IT struggles to piece together often conflicting insights from countless monitoring tools and dashboards.

These tools collect a multitude of metrics and raise alerts when problems arise, but provide very few answers as to what actually went wrong. They’re firing out thousands of alerts every day, creating a data storm that makes it easy for problems to be missed or worsen as IT works to identify which are urgent, which are duplicates, and which are false alarms – and that’s before they’ve even gotten onto the task of understanding and resolving the underlying issue.

Hope on the horizon

Over the last two years, IT teams have identified hope on the horizon in the form of the emerging market of AIOps tools. This new breed of solution uses artificial intelligence to analyse and triage monitoring data faster than humans ever could, helping IT teams to make sense of the endless barrage of alerts by eliminating false positives and identifying which problems need to be prioritised.
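Stripped to its simplest form, the triage idea is to de-duplicate and prioritise alerts rather than present them raw. The toy sketch below illustrates that step only; it is not how any particular AIOps product works, and the field names and sample alerts are hypothetical.

```python
# Toy alert triage: collapse duplicate alerts into incidents and rank them.
# The field names and sample data are hypothetical.
from collections import Counter

alerts = [
    {"service": "checkout", "signature": "HTTP 500 spike", "severity": 3},
    {"service": "checkout", "signature": "HTTP 500 spike", "severity": 3},
    {"service": "search",   "signature": "latency > 2s",   "severity": 2},
    {"service": "checkout", "signature": "HTTP 500 spike", "severity": 3},
]

# Group identical alerts by a fingerprint so one incident appears only once.
counts = Counter((a["service"], a["signature"], a["severity"]) for a in alerts)

# Rank by severity first, then by how often the alert fired.
for (service, signature, severity), count in sorted(
        counts.items(), key=lambda kv: (-kv[0][2], -kv[1])):
    print(f"[sev {severity}] {service}: {signature} (x{count})")
```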

The global AIOps platform market is expected to grow from $2.5bn in 2018 to $11bn by 2023, and Gartner predicts that 25% of enterprises will have an AIOps platform supporting two or more major IT operations by the end of the year. This demonstrates that there’s a substantial appetite for AIOps capabilities. However, AIOps is not a silver bullet, and there’s a risk that enterprises will fail to realise its potential if it simply becomes just another cog in the machine alongside the array of monitoring tools they already rely on.

Part of the wider solution

AIOps tools are only as good as the data they're fed, and to radically change the game in operations, they need the ability to provide precise root cause determination, rather than just surfacing alerts that need looking into. It's therefore critical for AIOps to have a holistic view of the IT environment so it can pull in any pertinent data and contextualise alerts using performance metrics from the entire IT stack. Integration with other monitoring capabilities is thus key when adopting AIOps, ensuring there are no gaps in visibility and issues can be understood and resolved faster.

While IT teams would almost certainly see a reduction in alert noise when taking the ‘bolt-on’ approach to AIOps, other tools would still be required to drill down and identify the solution to a problem, taking time and manual effort. For AIOps to truly deliver on its promise and make life easier for IT teams, it needs to be part of a holistic approach to performance management.

Taking this more integrated approach will enable IT teams not only to automatically find and triage issues, but also to create true software intelligence that can surface answers to those problems in real time.

Looking to the autonomous future

It’s this potential for simplifying IT operations and delivering a more efficient organisation that should be the end goal of AIOps. When done right, the software intelligence enabled by AIOps can be used to drive true efficiencies, through automated business processes, auto-remediation and self-healing.

Ultimately, this can enable the transition towards autonomous cloud operations, where hybrid cloud environments can dynamically adapt in real-time to optimise performance for end-users, without the need for human intervention. As a result, problems can be resolved before users even realise there’s been a glitch.

This AI-driven automation will fuel the next wave of digitalisation and truly transform IT operations. However, reaching this nirvana can’t be achieved by cobbling together a mixed bag of monitoring tools and an AIOps solution into a Frankenstein’s monster for IT. Companies need a new, holistic approach to performance management that combines application insights and cloud infrastructure visibility with digital experience management and AIOps capabilities.

Taking this approach will help to deliver the true promise of AIOps, providing IT with answers as opposed to just more data. As a result, IT teams will be freed up to invest more time in innovation projects that set the business apart from competitors, instead of focussing their efforts on keeping the lights on.

Read more: How to solve visibility issues from AIOps: A guide


Opinion Cloud, in its various iterations, has been around for approaching 20 years (longer if you go back to the concept of compute as a public utility introduced by the scientist John McCarthy in the 1960s), with many remembering iterations such as the ASP (application service provider) as a failed step along that journey until we matured to the SaaS, PaaS, IaaS and varying other 'as a service' offerings now in the market.

We have witnessed varying vendor hype marketing, from 'all in' and 'everything cloud' to more recent brandings of 'fear no cloud', and even the race to zero – the phrase used to describe the rapid price reduction on IaaS and PaaS offerings from the big-name technology vendors leading in cloud, implying that someone will one day give it away for free.

Cloud has driven a behavioural change in business and its people, enabling lines of business to navigate around the CIO and tech policies to get things done. Through the use of their own budgets and the cloud's switch-it-on capabilities, many have gone the way of 'shadow IT', subscribing to cloud-based systems without IT knowing of the use or budget, and changing the landscape of consistent procurement, integration and security across the business.

Some business leaders embrace this departmental agility and work to bring it into an aligned strategy, while others resist and fight this new mantra. We live in a time when compute, and its use and responsibility in business, is rapidly changing. Witness three types of fundamental business driver – those where:

  • The CIO reports into the CFO and we see the behaviours of a cost reduction business
  • The CIO reports into the CEO typically resulting in an innovation focused business
  • The CIO reports into the CTO resulting in a product focused company

In the innovation business (the ideal state of successful growth-focused firms), the gains being sought include optimisation for the business, frictionless operational processes and better business insight for smarter decisions; these lead to a focus on revenue value-add and less of a focus on cost saving to the business.

There are often three barriers to change and innovation in a firm that hinder cloud adoption: culture, tech religion and politics.

Culture

Where you find an agile, born-in-the-cloud company, the culture leads to fast adoption and leverage of new emerging tech and a receptiveness to fast change. Take a legacy firm, where change has always been slow and leadership is from the old world, and you more often find a lethargy to change, projects that take years and often get deferred time and time again, and an environment where, by the time change happens, it's already time to start changing again because the market has moved on. These firms face the greatest risk of disruption, and we are already seeing a growing number of long-established big brand names across sectors struggling or even going out of business.

Tech religion

Another frequent hindrance is the technology religion debate, where the organisation's strategy becomes aligned to a specific vendor brand. In other words, when asked what their IT strategy is, the resulting answer should not be a brand name!

Becoming agile and having a strategy aligned to process and methodology improvement, not a vendor brand, allows for the mixing of technology approaches, platforms and brands as and when applicable. In the old tech world this would have meant being a Unix, Lan Manager or Windows NT house versus taking an agnostic approach, using Unix, NT and perhaps VM, for example, mixed and integrated where applicable for the best business outcome.

Politics

Finally, politics comes into play where an organisation finds itself going against performance indicators and best logic, driven by people's emotions: brand favouritism, historic bias or existing skillsets.

Businesses are additionally challenged to bring legacy systems into alignment with new, innovative technology offerings, to not only become agile but also allow agility to scale across the organisation.

Oracle reports that its client engagements around cloud evolution have four main themes: a need for modern data management, a shift to the enterprise, a need to be agile to scale, and a need for all the new tech to show a fundamentally positive impact on revenue results.

Often businesses are too focused on getting ready for the coming storm and defending their base, instead of focusing on the challenge of constant innovation and agility. We live in a time of the 'art of data', where data insights and data itself are often the currency of value and what drives the success of a business. What data tells us and enables us to do is more critical than ever; without it we would not have the services we rely on daily, such as Uber and Amazon, and the likes of Facebook would not exist as free services. Data itself, and how it is purposed, has a high value in today's economy.

We can expect to see continual hype around technology types (cloud, big data, AI, IoT and the like); however, the real focus should be on the outcomes: the creation of success and meeting the needs of the future customer, be they external or internal, through leveraging the most relevant tech available as an enabler.


Late last week, Google announced its intention to acquire business intelligence platform Looker for $2.6 billion (£2.05bn) in an all-cash transaction, with Looker joining Google Cloud upon the acquisition’s close.

For Google Cloud – whose bill is chump change compared to what Salesforce is outlaying for Tableau – Looker represents more options for customers looking to migrate data from legacy technology stacks to BigQuery, Google’s enterprise data warehouse. As Google Cloud chief Thomas Kurian put it, it will help offer customers “a more complete analytics solution from ingesting data to visualising results and integrating data and insights into… daily workflows.” Looker, meanwhile, gets a surrogate while shareholders get a pile of cash. Yet the key to making it all work is multi-cloud.

Google’s primary focus at Next in San Francisco back in April, as this publication noted at the time, was around hybrid cloud, multi-cloud – and in particular open source. The highlight of the keynote was a partnership with seven open source database vendors, including Confluent, MongoDB, and Redis Labs. Looker is compatible across all the major cloud databases, from Amazon Redshift, to Azure SQL, Oracle, and Teradata. CEO Frank Bien confirmed that customers should expect continuing support across all cloud databases.

“[The] announcement also continues our strategic commitment to multi-cloud,” wrote Kurian. “While we deepen the integration of Looker into Google Cloud Platform, customers will continue to benefit from Looker’s multi-cloud functionality and its ability to bring together data from SaaS applications like Salesforce, Marketo, and Zendesk, as well as traditional data sources. This empowers companies to create a cohesive layer built on any cloud database, as well as on other public clouds and in on-premise data centres.

“Looker customers can rest assured that the high-quality support experience that Looker has provided will be bolstered by the resources, expertise, and global presence of our cloud team,” Kurian added. “We will also continue to support best of breed analytics and visualisation tools to provide customers the choice to use a variety of technologies with Google Cloud’s analytics offering.”

Google and Looker had long been partners before the acquisition developed. In July, Looker announced an integration with BigQuery whereby data teams could create machine learning models directly in the latter via the former. The companies shared more than 350 customers, including Buzzfeed, Hearst, and Yahoo!

“The data analytics market is growing incredibly fast as companies look to leverage all of their data to make more informed decisions,” said Frank Gens, senior vice president and chief analyst at IDC. “Google Cloud is one of the leaders in the data warehouse market, and the addition of Looker will further strengthen their ability to serve the needs of enterprise customers while also advancing their commitment to multi-cloud.”


The pace of change from a traditional capital-intensive IT infrastructure model to a more flexible hybrid multi-cloud services model is influencing enterprise spending trends across the globe.

Worldwide IT spending is forecast to total $3.79 trillion in 2019 -- that's an increase of just 1.1 percent from 2018, according to the latest global market study by Gartner.

IT infrastructure market development

"Currency headwinds fuelled by the strengthening US dollar have caused us to revise our 2019 IT spending forecast down from the previous quarter," said John-David Lovelock, vice president at Gartner. "Through the remainder of 2019, the US dollar is expected to trend stronger, while enduring tremendous volatility due to uncertain economic and political environments and trade wars."

In 2019, technology product managers will have to get more strategic with their portfolio mix by balancing products and services that will post growth in 2019 with those larger markets that will trend flat to down.

According to the Gartner assessment, successful IT product managers in 2020 will have had a long-term view of the changes made in 2019.

The data centre systems segment will experience the largest decline in 2019 with a decrease of 2.8 percent. This is mainly due to the expected lower average selling prices (ASPs) in the server market driven by adjustments in the pattern of expected component costs.

Moreover, the shift of enterprise IT spending from traditional (non-cloud) offerings to new, cloud-based alternatives is continuing to drive growth in the enterprise software market.

In 2019, the market is forecast to reach $427 billion; that's up 7.1 percent from $399 billion in 2018. The largest cloud shift has so far occurred in application software.

However, Gartner expects increased growth for the infrastructure software segment in the near-term, particularly in integration platform as a service (iPaaS) and application platform as a service (aPaaS).

"The choices CIOs make about technology investments are essential to the success of a digital business. Disruptive emerging technologies, such as artificial intelligence (AI), will reshape business models as well as the economics of public- and private-sector enterprises. AI is having a major effect on IT spending, although its role is often misunderstood," said Mr. Lovelock.

Outlook for AI applications spending growth

Gartner believes that AI is not a product; rather, it is a set of techniques or a computer engineering discipline. As such, AI is being embedded in many existing products and services, as well as being central to new development efforts in every industry.

Gartner’s AI business value forecast predicts that organisations will receive $1.9 trillion worth of benefit from the use of AI this year alone.
