Originally featured in Manufacturing and Logistics IT.

By Marat Otarov, Data Scientist, Think Big Analytics.

In today’s digital landscape, asset management systems are becoming increasingly sophisticated, introducing new data sources and a data-driven approach: weather data, asset information and sensor data from equipment can be integrated and fed to machine learning algorithms to predict and prevent failures and disruptions.

As a result of the new available data, more sophisticated approaches to Predictive Asset Maintenance (PAM) can help organisations in reducing downtimes, avoiding reactive maintenance costs, reducing preventive maintenance costs and improving customer satisfaction.

The traditional approach to maintaining assets centres on preventative maintenance, with strict maintenance regimes, standard inspection cycles and renewal policies based on asset lifetimes calculated from theoretical engineering expertise. While such approaches are generally good at reducing downtime, they can also be inefficient and inflexible, and are generally associated with high costs.

A machine learning approach to PAM enables organisations to design maintenance policies based on real time data and to make efficient decisions about asset renewal and maintenance. Engineering knowledge is still key, captured in the model definition stage and through feature engineering, for example understanding what the key features are as well as different types of failures and consequences. However, unlike the traditional approach, a machine learning enhanced PAM system benefits from data in real time and captures high-volumes of data to improve processes over time, based on historical information.

Increasing use of sensors in equipment and the availability of more data are creating ideal environments for machine learning to thrive and drive more business value from PAM systems. However, businesses will only invest in machine learning if they can see an immediate Return on Investment (ROI), and it is difficult to prove the value of machine learning models without making a strong case with evidence-based savings calculations.

So how do we take the next steps towards machine learning with PAM systems? And what are the business benefits companies can hope to achieve?

Return on Investment in PAM

Businesses need to wake up to the potential of machine learning driven PAM systems in terms of utilising asset data. The challenge is that the value of machine learning models is not transparent and the cost benefit analysis is not easy to conduct, therefore, business stakeholders are often reluctant to invest in machine learning driven PAM systems.

That’s where data scientists and PAM experts step in. They should acknowledge the importance of proving real business value to the key stakeholders. It is not enough to provide a good machine learning model that accurately predicts the probability of a failure; data scientists also need to turn that prediction into actionable insight for the business.

Given the importance of ROI evaluation, data scientists need to communicate the benefits of machine learning by developing optimised maintenance plans. An optimised maintenance plan is a maintenance policy for the renewal and repair of assets, based on the failure probabilities and prioritisation calculated using machine learning models. It also needs to consider business-specific requirements, including the number of available repair crews and maintenance costs. This actionable insight brings measurable benefit to the business and can be effective in reducing the communication gaps between data science and business stakeholders.
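
To make this concrete, the sketch below shows one way a model's failure probabilities could be turned into a prioritised maintenance plan under a limited-crew constraint. The asset records, cost figures and the greedy ranking rule are illustrative assumptions, not the approach used on any particular engagement.

```python
# Illustrative sketch: rank assets by the expected saving of repairing them now
# rather than waiting for a failure, subject to the number of crews available.
# All figures and field names are hypothetical.

def build_maintenance_plan(assets, available_crews):
    """assets: dicts with 'id', 'failure_prob', 'failure_cost' and 'repair_cost'."""
    for a in assets:
        # Expected saving of a planned repair versus an unplanned failure.
        a["expected_saving"] = a["failure_prob"] * a["failure_cost"] - a["repair_cost"]
    ranked = sorted(assets, key=lambda a: a["expected_saving"], reverse=True)
    # Keep only repairs that are worth doing, up to the crews we can dispatch.
    return [a["id"] for a in ranked if a["expected_saving"] > 0][:available_crews]

plan = build_maintenance_plan(
    [{"id": "pump-7", "failure_prob": 0.32, "failure_cost": 50_000, "repair_cost": 4_000},
     {"id": "valve-2", "failure_prob": 0.05, "failure_cost": 20_000, "repair_cost": 3_000}],
    available_crews=1,
)
print(plan)  # ['pump-7']
```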

PAM Success Stories

Machine learning based PAM systems can deliver significant transformational and business value to various industries. Here’s how:

A logistics company:

A logistics company wanted to be able to predict container ship engine component defects using sensor data. Predicting failures and preventing engine components from failing saves shipping companies millions of dollars in idle and unproductive time. Traditionally, monitoring real-time streaming data requires an expert who understands the engine. There is some level of automation in the traditional preventive maintenance of this asset, as rule-based alerts derived from engineering knowledge can be raised to detect “abnormal” engine states.

Machine learning models were developed to predict failures with a ten-day lead time. The models were trained on historical sensor data such as engine temperature and vibration. They enable automated processing of large amounts of sensor data and accurate reporting of the probability of an engine component failure within ten days.
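
As an illustration of the kind of model described here, the sketch below trains a classifier on aggregated sensor readings to score the probability of a failure within ten days. The synthetic data, feature names and choice of gradient boosting are assumptions made for the example, not the project's actual pipeline.

```python
# Illustrative sketch only: a classifier trained on aggregated sensor readings
# (engine temperature, vibration) to estimate the probability of a component
# failure within ten days. Synthetic data stands in for historical records.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
df = pd.DataFrame({
    "temp_mean_24h": rng.normal(80, 5, n),
    "vibration_rms_24h": rng.gamma(2.0, 1.0, n),
})
# Hypothetical label: failures are more likely when the engine runs hot and vibrates.
risk = 0.02 + 0.3 * (df["temp_mean_24h"] > 90) + 0.3 * (df["vibration_rms_24h"] > 4)
df["failure_within_10_days"] = rng.binomial(1, risk.clip(0, 1))

X_train, X_test, y_train, y_test = train_test_split(
    df[["temp_mean_24h", "vibration_rms_24h"]], df["failure_within_10_days"],
    test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score the latest readings of one engine: probability of failure within 10 days.
print(f"10-day failure probability: {model.predict_proba(X_test.iloc[[0]])[0, 1]:.2%}")
```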

The PAM model’s outputs can be used to set up an automated system that raises an alert whenever an engine is at risk of failure. The main benefits of implementing such a system are reducing unexpected downtime through preventive alerts and cutting the cost of manual inspections by relying more on sensor information – in other words, applying this data-driven approach.

A train manufacturer:

A global train manufacturer wanted to improve the servicing of trains and reduce downtime to improve customer satisfaction. The organisation wanted to use PAM models to understand the root cause of failures, as well as to give its Operations teams time to react to train disruptions, thus minimising downtime. With better machine learning models, maintenance costs would ultimately decrease and the manufacturer’s supply chain would be optimised by ordering parts exactly when required.

The PAM models were built using a range of data sources, such as sensor data from various components, historical maintenance data, and weather and geo data. The models were used to identify the most critical components and the factors that most influence downtime. This insight helped inform management of failure risk, and eventually the team could design predictive models to respond to this risk in an automated way.
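
One hedged sketch of how such critical components and influence factors might be ranked once a downtime model exists is shown below, using permutation importance on held-out data. The feature names and synthetic data are placeholders, not the manufacturer's dataset.

```python
# Rank the factors most associated with downtime using permutation importance.
# Features and synthetic data are illustrative stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 4_000
X = pd.DataFrame({
    "door_actuator_temp": rng.normal(40, 8, n),
    "brake_wear_index": rng.uniform(0, 1, n),
    "ambient_temp": rng.normal(15, 10, n),
})
y = rng.binomial(1, 0.05 + 0.4 * (X["brake_wear_index"] > 0.8))  # brakes drive downtime here

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:3000], y[:3000])
result = permutation_importance(model, X[3000:], y[3000:], n_repeats=10, random_state=0)

# The ranked list is a starting point for discussion with the engineering teams.
print(pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False))
```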

The Power of PAM

PAM with machine learning models enables organisations to take a data-driven approach to managing asset failure risk while improving efficiency and testing current engineering assumptions about the assets. However, to reveal its full potential, PAM must be adopted on a wider scale – therefore it is important to show its benefits to business leaders using interpretable and actionable insight. One example is taking the extra step from reporting the probability of a failure to designing an optimised maintenance plan. This provides clear evidence of the benefits to the business and promotes the adoption of PAM with machine learning models.

The post Predictive Asset Maintenance: The business benefits – and how to prove them appeared first on Think Big.

As organizations get ready to invest (or further invest) in AI, some recent research efforts offer insight into what the status quo is around AI in the enterprise and the barriers that could impede adoption. 

According to a recent Teradata study, 80% of IT and business decision-makers have already implemented some form of artificial intelligence (AI) in their business.

The study also found that companies have a desire to increase AI spending. Forty-two percent of respondents to the Teradata study said they thought there was more room for AI implementation across the business, and 30% said their organizations weren’t investing enough in AI.

Forrester recently released their 2018 predictions and also found that firms have an interest in investing in AI. Fifty-one percent of their 2017 respondents said their firms were investing in AI, up from 40% in 2016, and 70% of respondents said their firms will have implemented AI within the next 12 months.

While the interest in investing in and growing AI implementation is there, 91% of respondents to the Teradata survey said they expect to see barriers get in the way of investing in and implementing AI.

Forty percent of respondents to the Teradata study said a lack of IT infrastructure was preventing AI implementation, making it their number one barrier to AI. The second most cited challenge, noted by 30% of Teradata respondents, was lack of access to talent and understanding.

“A lot of the survey results were in alignment with what we’ve experienced with our customers and what we’re seeing across all industries – talent continues to be a challenge in an emerging space,” says Atif Kureishy, Global Vice President of Emerging Practices at Think Big Analytics, a Teradata company.

When it comes to barriers to AI, Kureishy thinks that the greatest obstacles are actually found much farther down the list noted by respondents.

“The biggest challenge [organizations] need to overcome is getting access to data. It’s the seventh barrier [on the list], but it’s the one they need to overcome the most,” says Kureishy.

Kureishy believes that because AI has the eye of the C-suite, organizations are going to find the money and infrastructure and talent. “But you need access to high-quality data, that drives training of these [AI] models,” he says.

Michele Goetz, principal analyst at Forrester and co-author of the Forrester report, “Predictions 2018: The Honeymoon For AI Is Over,” also says that data could be the greatest barrier to AI adoption.

“It all comes down to, how do you make sure you have the right data and you’ve prepared it for your AI algorithm to digest,” she says.

How will companies derive value out of AI? Goetz says in this data and insights-driven business world, companies are looking to use insights to improve experiences with customers. “AI is really recognized by companies as a way to create better relationships and better experiences with their customers,” says Goetz.

One of the most significant findings that came out of the Forrester AI research, says Goetz, is that AI will have a major impact on the way companies think about their business models.

“It is very resource intensive to adopt [AI] without a clear understanding of what [it] is going to do,” says Goetz, “So, you’re seeing there’s more thought going into [the question of] how will this change my business process.”

The Forrester Predictions research also showed that 20% of firms will use AI to make business decisions and prescriptive recommendations for employees and customers. In other words, “machines will get bossy,” says Goetz.

Goetz also says that AI isn’t about replacing employees, it’s about getting more value out of them. “Instead of focusing on drudge work or answering questions that a virtual agent can answer, you can allow those employees to be more creative and think more strategically in the way that they approach tasks.”

And in terms of how you can get a piece of the AI pie? Focus your growth on data engineering skills. Forrester predicts that the data engineer will be the new hot job in 2018.

A Udacity blog post describes data engineers as, “responsible for compiling and installing database systems, writing complex queries, scaling to multiple machines, and putting disaster recovery systems into place.” In essence, they set the data up for data scientists to do the analysis. They also often have a background in software engineering. And according to data gathered in June of 2017 and noted in the Forrester Predictions report, 13% of data-related job postings on Indeed.com were for data engineers, while fewer than 1% were for data scientists.

The post A Check-Up for Artificial Intelligence in the Enterprise appeared first on Think Big.

Real-life case studies on how retailers can optimise on-shelf availability with machine learning.

How many strawberries should I ship to each of my grocery stores? What level of energy consumption should I expect at 7pm in a specific neighborhood? How do I know when there’s a need to upgrade my network capacity to cope with increased usage?

All these questions relate in one way or another to a type of forecasting based on estimating the level of demand for a product or service to better plan and manage supply chains or future investments. Moreover, models and predictions need to be tailored to specific situations. For example, demand for strawberries might be different to that of tomatoes, and sales patterns in a large rural store will be dissimilar from those of a small, inner-city one.

Transforming retail intelligence

Traditionally, statistical demand models have been built using either relatively simple techniques and limited data sources or aggregated data and manual processes, as the sheer number of models makes granular model tuning all but impossible. For instance, the number of combinations of products and stores can reach into the millions for a large national retailer. However, these limitations can be overcome. Here at Think Big Analytics, we are developing a scalable framework for modelling and forecasting that combines modern machine learning techniques with big data technologies.

Rather than relying on time-series techniques that estimate demand based on previous sales only, our methodology has been designed to use a variety of data sources: from pricing and competitive information to variables that capture cannibalization across different product lines, to external factors such as weather forecasts. 

Figure 1: Modelling GUI. The variable inspection tab visualizes the relationship between product sales and other key features such as product price

The data gathering stage is followed by a variable intelligence engine that performs transformations to capture different types of impacts on demand. For example, demand might depend on a categorical distinction between promotion and full price that does not depend on the level of discount. It could also be captured by a non-linear relationship between price and sales, whereby increasing the price is associated with the same level of sales up to a certain price point, after which we observe a dramatic drop in sales.

The purpose of the feature intelligence module is to generate all these variables for each model and pass them to the modelling engine. That’s where all the signals are combined and prioritized so that only the most relevant will be used in each situation. For example, the demand for a bottle of wine sold at a large store might be strongly dependent on whether the item is on promotion or not, whereas the sales pattern for a barbeque set will be highly reliant on the rain forecast in the area where it is sold. These relationships are automatically learnt from data while requiring minimal user intervention, and used to accurately predict future demand.
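
The sketch below illustrates the kind of transformation this variable intelligence step produces before the modelling engine weights the signals per product and store. The column names and the 9.99 price threshold are hypothetical.

```python
# Illustrative sketch of the "variable intelligence" step: turning raw inputs
# into features that capture categorical and non-linear demand effects.
import pandas as pd

raw = pd.DataFrame({
    "price": [5.99, 4.49, 12.99],
    "discount_pct": [0, 25, 0],
    "rain_forecast_mm": [0.0, None, 7.5],
})

features = pd.DataFrame({
    # Promotion vs full price as a category, independent of the discount level.
    "on_promotion": (raw["discount_pct"] > 0).astype(int),
    # A crude price-cliff indicator for a non-linear price/sales relationship.
    "above_price_point": (raw["price"] > 9.99).astype(int),
    # External signal, with missing forecasts treated as "no rain".
    "rain_forecast_mm": raw["rain_forecast_mm"].fillna(0.0),
})
print(features)  # passed to the modelling engine, which weights signals per product/store
```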

Real-life success stories

Not only have we used the framework for building general and robust demand models, but we have also combined its different components to tackle more specific problems, for example, reporting out-of-stock items for a large UK food retailer. Out of stocks occur for a variety of reasons linked to the running of operations in stores and to the management of supply chains. It is estimated that between 3% and 6% of items in UK supermarkets are out of stock at any given time. This has a significant impact on missed sales opportunities, which can escalate to losses amounting to millions, as well as a negative impact on customer experience. Furthermore, retailers spend money on staff who manually check product levels on the shop floor, even though in the great majority of cases replenishment is unnecessary.

Figure 2: Out of stocks. It is estimated that between 3% and 6% of products are not in stock at any given time in UK supermarkets

Both operational costs and missed opportunities could be significantly reduced if we had a reliable way of estimating when and for which products an out of stock occurred, enabling retailers to tackle the underlying root causes and plan ahead. For instance, they could order or bake more bread if it tended to be unavailable at the end of the day, or only check products when they are likely to be missing, hence freeing up time previously invested in a repetitive stock count rota.

That’s where our modelling approach enters the picture: by combining stock data with historical sales patterns, we can estimate the expected demand for any given product at any given store and compare it with sales data and insights. Sometimes, products will not sell simply because they are what in retail jargon is referred to as “slow rotation.” For example, think about a bottle of expensive champagne at the end of an aisle in a small village shop that only sold when Jane got her job promotion. Other times, however, we might expect a high level of sales, but we observe zero in the cash register: if sandwiches are not selling at lunchtime at a busy city centre store, it might be because someone has forgotten to replenish the shelves.
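
A minimal sketch of that comparison is shown below, assuming demand over a period can be approximated by a Poisson distribution with the model's expected rate; the threshold and figures are illustrative, not the production logic. If an item that should be selling shows zero sales, the chance of that being coincidence becomes very small and a likely out-of-stock is flagged.

```python
# Flag a likely out-of-stock when observed sales are implausibly low
# relative to the model's expected demand for that product and store.
from scipy.stats import poisson

def likely_out_of_stock(expected_units, observed_units, alpha=0.01):
    """True if sales this low would be very unlikely given expected demand."""
    p_value = poisson.cdf(observed_units, expected_units)
    return p_value < alpha

print(likely_out_of_stock(expected_units=0.2, observed_units=0))   # False: slow-rotation item
print(likely_out_of_stock(expected_units=12.0, observed_units=0))  # True: sandwiches at lunchtime
```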

During a trial involving a sample of products and stores at a major UK grocery retailer, our model-factory reporting tool correctly discriminated in-stock from out-of-stock items in more than 97% of cases, judged against a manual scan of the products available on the shelves. As the tool is rolled out across the entire range of products and stores, operational savings and reductions in missed sales opportunities are estimated to bring in tens of millions of pounds annually.

The future

We are currently testing and refining our modelling engine and, going forward, we plan to use this powerful tool to tackle a wide range of supply chain challenges. Demand models can be enriched with new data sources such as weather forecasts. Promotions can be better designed through a data-driven approach that integrates with other functions and operations such as media and in-store support. Finally, price reductions can be optimized to reduce waste for highly perishable products, while improving lead times and increasing availability.

Pricing is a major focus area for our future development. We are making significant steps towards a framework that combines flexible and highly predictive models with a robust, business-validated optimization engine to enable better pricing decisions. Such a system can underpin multiple domains such as strategic pricing, promotional planning, markdown pricing and reduced to clear items by analyzing and optimizing different “levers” that the business can pull to maximize revenue, sales volume, and margins.

Leveraging and deploying machine learning at scale is what enables the most innovative organizations to build the products and services of the future. Our mission is to empower high impact business outcomes with cutting-edge data science and big data technologies.

The post A scalable framework for demand forecasting helping to save millions appeared first on Think Big.

By Daniele Barchiesi, Senior Data Scientist at Think Big Analytics, A Teradata Company.

How many Granny Smith apples should I deliver to each of my supermarkets? Will there be a run on sun cream this weekend? These are the types of questions retailers are asking when attempting to apply demand forecasting, a way of predicting demand levels for products or services to plan and manage supply chains more effectively.

Historically, statistical demand models have been built using either relatively simple techniques and limited data, or manual processes and aggregated data. However, given today’s explosion of data, machine learning models can now provide the new predictive supply and demand insight needed to stock the right products and sell them as quickly as possible.

These new models are transforming the way many retailers are achieving astonishing new business results. But how exactly is machine learning helping companies uncover new insight to fuel an on-shelf retail revolution?

Capturing and analysing demand in real-time

Today, instead of having to rely on techniques that predict demand based only on past sales, new machine learning demand models are using a wide variety of live data sources, including competitive pricing information and external variables, such as weather forecasts.

Beyond this, the models are combining and prioritising the data they analyse: for example, for some products, demand might depend on a categorical distinction between promotion versus full price that is not dependent on the discount percentage. Additionally, the sales pattern for sun cream is likely to be reliant on the weather forecast in the area it is being sold in. Machine learning based demand models can ‘learn’ these relationships from data, needing minimal human intervention, while using them to effectively predict demand.

Smart operations for supply chain management

Not only can this framework be used for building general and robust demand models, but its many different components can be combined to tackle specific challenges, for example, reporting out of stock items for major grocery retailers. Items become out of stock for a variety of reasons that link to operational processes in stores, as well as supply chain management factors.

It is estimated that between three and six percent of items in UK supermarkets are out of stock at any given time¹. This impacts significantly on missed sales opportunities, which can amount to millions in losses, as well as having a negative impact on customer experience. Moreover, retailers pay their staff to manually check shop floor product levels – even though restocks are unnecessary in most cases.

Missed opportunities and operational costs could be heavily reduced if retailers could reliably estimate when and for which products out of stocks occurred. For example, they could order or bake more pastries if these tended to be unavailable at the end of the day, or only check specific products when they are likely to be missing, freeing up time previously required for a repetitive stock count rota.

That’s where new machine learning based demand forecasting models involving advanced analytics capabilities can step in. By combining stock data with historical sales patterns, retailers can estimate the expected demand for any given product at any given store location and compare it with sales data and insights, potentially saving tens of millions of pounds annually in operational costs.

The future of retail forecasting

In the future, these demand models will become powerful tools to tackle a wide range of other supply chain challenges – and will ultimately separate the winners from the losers. Promotion design can be improved through a data-driven approach that integrates with other functions and operations such as media and in-store support. Also, price reductions can be optimised to reduce waste for highly perishable products while lowering lead times and increasing availability.

Frameworks will combine flexible and advanced predictive models with robust, business-validated optimisation engines to enable better pricing decisions. For example, these systems will help to inform promotional planning, strategic pricing, markdown pricing and reduced to clear items by analysing and optimising different business functions to maximise revenue, sales volume, and margins.

Leveraging and deploying machine learning at scale is what will enable the most innovative organisations to build the products and services of the future: cutting-edge data science along with the emerging big data technologies are set to continue empowering high impact business outcomes for retailers in 2018 and beyond.

¹ http://www.oliverwyman.com/content/dam/oliver-wyman/global/en/2014/jul/OW_Getting%20Availability%20Right_ENG.pdf

The post Next-generation supply & demand forecasting: How machine learning is helping retailers to save millions appeared first on Think Big.

Originally featured on IoT Tech News.

By Akihiro Kurita

Driven by analytics, the culture of the automobile, including conventional wisdom about how it should be owned and driven, is changing. Case in point: take the evolution of the autonomous vehicle. Already, the very notion of what a car is capable of is being radically rethought based on specific analytics use cases, and the definition of the ‘connected car’ is evolving daily.

Vehicles can now analyse information from drivers and passengers to provide insights into driving patterns, touch point preferences, digital service usage, and vehicle condition, in virtually real time. This data can be used for a variety of business-driven objectives, including new product development, preventive and predictive maintenance, optimised marketing, up selling, and making data available to third parties. It’s not only powering the vehicle itself, but completely reshaping the industry.

By using a myriad of sensors to inform decisions traditionally made by human operatives, analytics is completely reprogramming the fundamental areas of driving: perception, decision making and operational information. In this article, we discuss a few of the key analytics-driven use cases that we are likely to see in the future as this category (ahem) accelerates.

The revolution of driverless vehicles

Of course, in the autonomous vehicle, the major aspect missing is the driver, traditionally the eyes and ears of the journey. Replicating these human functions is one of the major ways in which analytics is shaping the industry. Using a series of sensors, the vehicle gathers data on nearby objects, such as their size and speed, and categorizes them based on how they are likely to behave. Combined with technology that builds a 3D map of the road, this helps the vehicle form a clear picture of its immediate surroundings.

Now the vehicle can see, but it requires analytics to react and progress accordingly, taking into account the other means of transportation in the vicinity, for instance. By using data to understand perception, analytics is creating a larger connected network of vehicles that are able to communicate with each other. As the technology becomes more and more reliable, self-driving vehicles have the potential to eventually become safer than human drivers and to replace them in the not-so-distant future. In fact, a little over one year ago, two self-driving buses were trialed on the public roads of Helsinki, Finland, alongside traffic and commuters. They were the first trials of their kind with the Easymile EZ-10 electric mini-buses, capable of carrying up to 12 people.

Artificial intelligence driving the innovation and decision making

In the autonomous vehicle, one of the major tasks of a machine learning algorithm is continuously rendering the environment and forecasting the possible changes to those surroundings. Indeed, the challenge facing autonomous means of transportation is not so much capturing the world around them as making sense of it. For example, a car can learn to tell when a pedestrian is ready to cross the street by observing behaviour over and over again. Algorithms can sort through what is important, so that the vehicle does not need to apply the brakes every time a small bird crosses its path.
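
As a purely illustrative sketch of that prioritisation, and not a description of any production driving stack, the snippet below keeps only detections whose class and predicted closing time make them relevant to a braking decision; the classes, thresholds and data structure are assumptions.

```python
# Keep only detections that matter for a braking decision, so a small bird
# crossing far ahead does not trigger the brakes. Hypothetical classes/thresholds.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # assigned by the perception layer
    distance_m: float
    closing_speed_ms: float  # positive = moving towards the vehicle

RELEVANT = {"pedestrian", "cyclist", "vehicle"}

def should_brake(detections, min_time_to_contact_s=2.0):
    for d in detections:
        if d.label not in RELEVANT or d.closing_speed_ms <= 0:
            continue  # irrelevant object or moving away
        if d.distance_m / d.closing_speed_ms < min_time_to_contact_s:
            return True
    return False

print(should_brake([Detection("bird", 8.0, 6.0), Detection("pedestrian", 25.0, 3.0)]))  # False
print(should_brake([Detection("pedestrian", 4.0, 3.0)]))                                # True
```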

That is not to say we are about to become obsolete. For the foreseeable future, human judgement is still critical and we’re not at the stage of abandoning complex judgement calls to algorithms. While we are in the process of ‘handing over’ anything that can be automated with some intelligence, complex human judgement is still needed. As time goes on, Artificial Intelligence (AI) ‘judgement’ will be improved but the balance is delicate – not least because of the clear and obvious concerns over safety.

How can we guarantee road safety?

Staying safe on the road is understandably one of the biggest focuses when it comes to automated means of transportation. A 2017 study by Deloitte found that three-quarters of Americans do not trust autonomous vehicles. Perhaps this is unsurprising as trust in new technology takes time – it took many years before people lost fear of being rocketed through the stratosphere at 500mph in an aeroplane.

There can, and should, be no limit to the analytics being applied to every aspect of autonomous driving – from the manufacturers, to the technology companies, understanding each granular piece of information is critical. But, it is happening. Researchers at the Massachusetts Institute of Technology are asking people worldwide how they think a robot car should handle such life-or-death decisions. Its goal is not just for better algorithms and ethical tenets to guide autonomous vehicles, but to understand what it will take for society to accept the vehicles and use them.

Another big challenge is determining how long fully automated vehicles must be tested before they can be considered safe. They would need to drive hundreds of millions of miles to acquire enough data to demonstrate their safety in terms of deaths or injuries. That’s according to an April 2016 report from think tank RAND Corp. Although, only this month, a mere 18 months since that report was released, Professor Amnon Shashua, Mobileye CEO and Intel senior vice president, announced the company has developed a mathematical formula that reportedly ensures that a “self-driving vehicle operates in a responsible manner and does not cause accidents for which it can be blamed”.

Transforming transportation and the future

In many industries, such as retail, banking, aviation, and telecoms, companies have long used the data they gather from customers and their connected devices to improve products and services, develop new offerings, and market more effectively. The automotive industry has not had the frequent digital touch points to be able to do the same. The connected vehicle changes all that.

Data is transforming the way we think about transportation, and advanced analytics has the potential to make driving more accessible and safe by creating new insights that open up new opportunities. As advanced analytics and AI become the new paradigm in transportation, the winners will be those who best interpret the information to create responsive vehicles that make journeys as simple as getting from A to B.

The post Autonomous vehicles: Three key use cases of advanced analytics shaping the industry appeared first on Think Big.

Originally featured on The Next Web.

The Art of Analytics project visualizes complex datasets as works of art.

In her day-to-day work, Yasmeen Ahmad tackles immensely complex datasets, deploying an arsenal of approaches and methodologies that would sound intimidating to most lay people.

From predictive modelling to text analytics, time series analysis to the development of attribution strategies, few people can easily wrap their heads around what such terms mean, and even fewer are capable of drawing actionable insights from them – which is why data scientists such as herself are always in such high demand.

Ahmad worked in the life sciences industry before pivoting to commercial work, where she is now Director of Think Big Analytics, the consulting branch of IT service management company Teradata.

Over many years of helping clients across a variety of industries to make sense of their data, however, she realised that the best way to help them see meaning in those datasets was to literally paint them a picture.

“Visualization is a core component of any data science and analytical project,” she explains. “It is almost always used at the beginning to understand the datasets you are working with, and can help to quickly identify anomalies, outliers and strong correlations in the data.”

As far as data is concerned, she says, a picture really is worth a thousand words, as visualisation helps to add meaning on top of data that is much easier to assimilate for humans than descriptive words or single numbers and values alone.
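
As a small, hedged sketch of that first-look use of visualisation (with a synthetic dataset standing in for real project data), a correlation heatmap and a scatter plot make strong relationships and outliers visible at a glance:

```python
# First-look visualisation: correlation heatmap plus a scatter plot to spot
# relationships and outliers. The dataset is a synthetic stand-in.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({"price": rng.normal(10, 2, 500)})
df["units_sold"] = 120 - 8 * df["price"] + rng.normal(0, 5, 500)  # strong negative correlation
df["footfall"] = rng.normal(1000, 100, 500)                       # unrelated signal
df.loc[0, "units_sold"] = 400                                     # one planted outlier

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
im = ax1.imshow(df.corr().to_numpy(), vmin=-1, vmax=1, cmap="coolwarm")
ax1.set_xticks(range(len(df.columns)), df.columns, rotation=45)
ax1.set_yticks(range(len(df.columns)), df.columns)
fig.colorbar(im, ax=ax1)
ax2.scatter(df["price"], df["units_sold"], s=8)  # the planted outlier stands out immediately
plt.tight_layout()
plt.show()
```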

Her team would therefore routinely include such visuals when they presented their key results to clients, and found that even people who might not be well versed in data science or technology could still connect with them. The visualisations supported storytelling around a project, engaging business stakeholders to understand connections, relationships and associations in the data.

“As more investment goes into data platforms and analytical technologies, the artwork helps to provide a face to this investment. We had business leaders commenting on how beautiful the visualizations were. Colour, shape and layout were all dimensions that were used to convey meaning. The choice of how a visualization was formed is actually a creative process – like creating a piece of art.”

From there, she explains, it was a short leap to the idea behind the Art of Analytics project, which brings together a range of those visualisation pieces from their previous projects.

Ahmad believes one of the main strengths of the project is its ability to bring data to life for lay audiences, creating a connection between data insights and observers and bridging the technical gap.

“The visualisations push the human to look beyond individual numbers and values, to thinking about data as a series of connections to be explored. They make it particularly easy to see associations, connections, pathways etc. The art is providing form to the complex fields of big data and data science – making them accessible to a wider audience.”

Yet the usefulness of data visualisation is not limited to non-technical people. According to Ahmad, it is also a key component of a data scientist’s toolkit:

“During my life sciences research – I was working in a field where everything was abstracted. The data I analysed often came from human cell samples that could only be seen under microscopes. Hence, collecting data about these samples, analysing it to create insight and then relating the insight back to reality was somewhat difficult. Visualisation was key to help portray not only the insights, but also how they linked to human cells and biology in general.”

Art of Analytics was also an opportunity for Ahmad to bring together those creative and scientific sides. Data science, she says, is actually a highly creative discipline that combines technical know-how with lateral thinking and the ability to tease out stories from complex datasets.

“I believe that art and science need to come together to help to solve the world’s most complex problems, and the best data scientists are not only great at statistics, maths and analytical subjects, but are also creative problem solvers who can translate their work into meaningful messaging that connects with their business and commercial audiences.”

This is an on-going project, and there are plans to create new data representations as they work with new datasets they haven’t encountered before.

Ahmad is also keen to explore how other media such as animation, video and perhaps even VR could help add other dimensions to that work. By creating a video, she says, it would be possible to create another level of emotional connection with audiences, by representing how those relationships have evolved over time.

“We have had senior executives request copies of the Art of Analytics to hang in their office. The emotional connection that people can establish with the work highlights the power of data.”

The post Art exhibit shows off the beauty of data appeared first on Think Big.

Artificial intelligence will have a profound impact on our lives and work. Soon it will be everywhere, from our homes to our cars to our offices. Many believe it will cause a massive displacement of jobs. Others argue that it will enhance our decision-making and make us smarter. But, what do business executives at some of the world’s largest organizations believe about AI? What are they investing in today? What are the biggest barriers and challenges? What value are they expecting from their AI investments? And, what about the impact of AI on jobs and employee morale?

To find out the answers to these questions, and more, Teradata commissioned technology research company Vanson Bourne to conduct a survey of global executive decision-makers on the topic of artificial intelligence for the enterprise. Senior VP and C-level executives at 260 organizations globally – 50% of which have annual revenue of $1B or more – responded to the survey, resulting in some very interesting insights.

At the highest level, the Teradata “State of AI for Enterprises” survey found that:

  • Businesses are all in on AI – 80% of enterprises already have some form of AI (machine learning, deep learning, etc.) in production today
  • And, they have high hopes for business impact – for every dollar invested today, they expect to double the return in 5 years, and triple in 10 years
  • But, 91% see big barriers ahead – lack of IT infrastructure (40%) and lack of talent (34%) are the biggest ones
  • To set the strategy, a new role is emerging – 62% expect to hire a Chief AI Officer in the future
  • And, what about the big question of AI taking jobs? – 95% think AI will have some impact on human work by the year 2030, but only 21% think AI will replace humans for most enterprise tasks; And, only 20% think AI will impact employee morale.

Check out the complete report and infographic for a deeper dive on how businesses are using AI today, how they expect to use AI in the future, the big roadblocks along the way and how AI will impact some of the biggest companies in the world.

The post Survey: State of Artificial Intelligence for Enterprises appeared first on Think Big.

Originally featured in Forbes.

As banks go digital they face new levels of fraud, or fraud threats that have to be checked at very high speed. Often the major cost is turning away good business because fraud detection is too restrictive (see my story on MacSales which would have missed $2 million in good sales before implementing Forter for fraud detection) or wasting time, and annoying customers, with investigations of legitimate transactions.

Denmark’s Danske Bank was picking up 1,200 false positives per day in its transaction monitoring – 99.5 percent of the alerts it investigated were false positives, said Nadeem Gulzar, head of global analytics at the bank. Often the issue could be resolved in a few minutes by an investigator, but even a minute or two on 1,200 transactions is a lot of wasted time.

Sometimes the investigator could just look at the data and clear it while other payments could be approved by checking details such as the merchant or the amount of the payment.

“In the worst case we call the customer,” Gulzar said.

The bank reviewed some of the anti-fraud software packages on the market and decided to build its own with open source modules, working with Think Big Analytics, a Teradata company.

With machine learning, they were able to reduce false positives by 35 percent and improve detection of true positives – actual fraud – by roughly the same percentage. When they added deep learning, the numbers almost doubled: a 60 percent reduction in false positives and roughly a 50 percent improvement in detecting actual fraud, said Gulzar.
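
One hedged way to quantify that kind of improvement, sketched below with synthetic scores standing in for a baseline model and a deep learning model (this is not Danske's data or evaluation code), is to compare false-positive rates at the same detection rate:

```python
# Compare two fraud-scoring models at the same recall: how many fewer legitimate
# transactions get flagged? Scores here are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.01, 20_000)  # ~1% of transactions are confirmed fraud
baseline_scores = np.where(y_true == 1, rng.beta(4, 2, 20_000), rng.beta(2, 4, 20_000))
deep_scores = np.where(y_true == 1, rng.beta(6, 2, 20_000), rng.beta(2, 6, 20_000))

def fpr_at_recall(y, scores, target_recall=0.80):
    """False-positive rate at the smallest threshold reaching the target recall."""
    fpr, tpr, _ = roc_curve(y, scores)
    return float(fpr[np.searchsorted(tpr, target_recall)])

reduction = 1 - fpr_at_recall(y_true, deep_scores) / fpr_at_recall(y_true, baseline_scores)
print(f"False-positive reduction at 80% recall: {reduction:.0%}")
```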

“From the beginning we wanted a data driven approach with machine learning and deep learning,” Gulzar said. “Even though we had people with the right skills, we didn’t have competency with low latency production, no one had really done that. We engaged with Think Big to deliver it in production and make sure we could take over the solution and improve it.”

The bank had about 20 people, including platform engineers, data engineers and data scientists, working on the project, and Think Big brought in a similarly skilled set of its own people. The project started with a small subset of transactions – cross-border movements of money – which offered a little more time to run analytics, about half a second more, he said. The goal was to screen transactions in 300 milliseconds and eventually in 40 milliseconds.

By starting with foreign transactions, the bank had time to learn how to scale and improve competencies, he said.

It was very much a collaborative effort between the two teams, added Gulzar, who said it was the first time he saw the bank succeed in blending teams with an outside firm. In Denmark, special occasions are celebrated by someone bringing in a home-baked cake. He knew the blending had worked when a Think Big employee living in London baked a cake himself to mark one such occasion.

Danske built the fraud solution on open source components including Hadoop, Spark, Cassandra, TensorFlow from Google and LIME from the University of Washington, which provides some insight into the black-box processing.

A way to explain what the engine was doing was needed to overcome internal resistance from executives who didn’t understand machine learning and deep learning, and also to meet regulatory requirements, although that wasn’t the main driver, he said.

“If an investigating officer gets a call about a blocked transaction he can offer an explanation, such as it didn’t match the customer’s normal behavior or customary location.”
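
Since LIME is named above as the component behind this kind of explanation, here is a hedged sketch of how a per-transaction explanation can be produced with it; the toy model, features and data are placeholders rather than the bank's system.

```python
# For one flagged transaction, list the features that pushed the model towards
# "fraud". The model, features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = ["amount_vs_customer_avg", "km_from_usual_location", "hour_of_day", "new_merchant"]
X = rng.normal(size=(5_000, 4))
y = ((X[:, 0] > 1.0) & (X[:, 1] > 0.5)).astype(int)  # toy "fraud" rule for the example
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["legitimate", "fraud"], mode="classification")
flagged = np.array([2.4, 1.8, 3.0, 1.0])  # one blocked transaction
explanation = explainer.explain_instance(flagged, model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.2f}")  # e.g. amount far above the customer's average
```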

Gulzar said several companies claimed they could do the project, but after looking into their track records, the bank shortlisted just Think Big Analytics. The bank started the project in March 2016, launched a shadow version in March and a production version in May.

The company built the Danske system in six-week cycles with two-week sprints, said Rick Farnell, senior vice president, Think Big Analytics. It took only a handful of cycles to complete the project, he added.

With the reduction in false positives, investigators can focus on real cases.

“We have the same size [anti-fraud] team – this wasn’t an FTE play – but they can use their time more wisely. We know in the long run, fraud will increase as we become more and more digital. We also believe we should protect our customers. Our mission is to be the most trusted bank in the Nordics.”

The analytics run on low-spec Linux servers, although training the model takes more powerful machines with GPUs, where TensorFlow, released by Google, eased the programming challenges.

Danske believes the bank should protect the customer, sometimes against carelessness or ignorance. They may be using a Mac or a PC, but do they understand how to protect themselves?

It could be a relative logging in from their computer, and sometimes the user’s behavior will be a giveaway.

“We have data showing how fast a customer fills in a form, and if it is 4x the usual speed, it probably isn’t the customer.”
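
A minimal sketch of that behavioural check (the four-times factor is the one quoted; the function name and numbers are otherwise hypothetical) could look like this:

```python
# Flag a session if the form was completed far faster than this customer's usual pace.

def suspicious_fill_speed(fill_seconds, customer_median_seconds, factor=4.0):
    """True if the form was filled in more than `factor` times faster than usual."""
    return fill_seconds * factor < customer_median_seconds

print(suspicious_fill_speed(fill_seconds=6, customer_median_seconds=45))   # True
print(suspicious_fill_speed(fill_seconds=38, customer_median_seconds=45))  # False
```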

Danske is looking at measuring mouse movement speeds and soon will be able to identify and authenticate people by their typing patterns. The fraud detection platform is expected to meet 100 percent ROI in its first year of production.

The post Danske Bank Uses Tech To Prevent Digital Fraud appeared first on Think Big.

Analytics is serious business. I don’t care what buzzwords are being used in today’s stylized reinterpretation of the data business that we are in: analytics has indeed come a long way, both in the way it is done and in the increasing incidence of companies doing it to stay competitive and grow. The thing is, in an odd sort of way, analytics is often done in tentative spurts of frenzy, and the business of providing analytic solutions that can create a sustained analytics practice – one that is fully beneficial to the business – is, well, rarely the norm in companies today.

Challenges that impede a sane analytics practice

So, let’s first answer the question of what challenges companies face in creating this competitively differentiating analytics practice on a sustained basis. But before I get into this, a quick note. Typically, in the spirit of shameless confessionals, my blog posts are prolix, long-winded affairs. In the interest of judiciously using your spare time I will be brief, and if or when you have the desire to engage in a longer, more relaxed conversation, reach out to me. Now, for the challenges:

First, businesses today generate vast gobs of data of all types, shapes, sizes, looks, across different times, and at various speeds. Consequently, analytics solutions need to be repositories of these diverse data. One cannot have bespoke storage solutions for each data type and confront an array of infrastructure that requires all kinds of physical and mental gymnastics just to get all the data in one place.

Second, the diversity of data alludes to the diversity of the business. For example, when customer data is collected through CRM systems, web servers, call center notes, and images, it means that customers engage with the business via the store, through online portals, via the call center, and on social media. To understand the multi-channel experience, we need to analyze these diverse data through multiple techniques. Unfortunately, to do that businesses have been forced to buy multiple analytic solutions, each with a unique set of algorithmic implementations and none easily interacting with the others.

Third, let’s assume that the ability and purpose of doing advanced analytics is there and well established. Now comes the challenge of purchasing a solution that can neatly fit into the budgetary constraints of the business. Not for the first time, I can recall a customer who expressed an inability to transact business with a vendor not because they did not have a desperate need for that vendor’s solution, but because they were locked into a solution purchase that inevitably restricted their flexibility in deploying it. For example, a customer may first desire to kick the tires on a solution by purchasing a limited-time cloud subscription before they are able to commit more resources to it. This is only a fair ask. Once they are successful with a minimal-risk purchase, they can up the ante by buying something more substantial depending on their needs. Analytic solutions that cannot fulfil this primal customer need will fast recede as a prickly memory of the past, regardless of how good or versatile they may be.

Teradata Analytics Platform

Now that I have outlined the challenges, how about providing a rational fix for these? Fortunately, and not too coincidentally, this is a pleasant enough occasion for me to introduce the Teradata Analytics Platform. At a high level, the Teradata Analytics Platform is an advanced analytics solution that comes in multi-deployment form factors with the capability of ingesting diverse data types at scale upon which native analytic engines can be invoked to implement multi-genre analytic functions that deliver and operationalize critical business insights. There are six core capabilities that are likely to provide a unique and significant competitive edge to customers of this solution. They are:

  • Evolution of the Teradata Database to include multi-data type storage capabilities (e.g., time series, text) and pre-built analytics functions (e.g., sentiment extraction, time series modelling).
  • Aster Analytics analytic engine with over 180 pre-built advanced analytics functions that span multiple genres such as Graph, Statistics, Text and Sentiment, Machine and Deep Learning, and Data Preparation.
  • A customer friendly deployment and pricing option set (in-house, hybrid cloud, managed cloud, term and perpetual licensing) that ensures flexibility in accommodating changing customer preferences without impacting any current investments.
  • A Data Science friendly analytic environment that includes a variety of development tools and languages (e.g., Hadoop) that aims to provide a customized solution that dovetails with a customer’s current investments and desired ecosystems mix.
  • A highly performant solution in which insight delivery and operationalization are tightly coupled in the same environment, without having to artificially separate them.

Addressing the Analytics Challenges 

So, now that you know what the Teradata Analytics Platform is, it behooves me to close the loop and discuss briefly how it fixes the challenges that I outlined earlier. Fortunately for me, this is an easy and delightful exercise. For one thing, the features above clearly speak to the solution’s capability to ingest and process data of all types (our first challenge). The fact that it comes with a mind-boggling array of analytic capabilities, not to mention the capability to leverage open source analytic engines such as TensorFlow, clearly indicates that Data Scientists and other analytics professionals have the ease and flexibility to choose from a colorful palette of techniques to effectively do their work. And what’s more, they can do their work using the development tools and languages that they’re most comfortable with (our second challenge). Finally, given that the Teradata Analytics Platform was conceived with a “customer first” mentality – a hallmark of the Teradata way of doing business – it is available for deployment in ways that suit the customer’s unique business needs. Customers who prefer to analyze their data on the public cloud will have the option of buying a subscription to this solution. Alternatively, those that prefer an in-house implementation can have their choice fulfilled as well. And customers who choose one deployment option to begin with and later decide to change to something else won’t have to worry about a repriced solution, as the pricing unit (TCore) is the same across all deployments (our third challenge).

Teradata Analytics Platform, the smart choice

Clearly, and honestly, my conclusion is not likely to culminate in a dramatic denouement. Be that as it may, it is a logical choice to opt for the Teradata Analytics Platform, which puts the power of analytics in the hands of the customer and delivers a unique purchasing experience that is quite revolutionary in the market.

The post Advanced analytics for a new era appeared first on Think Big.
