
Data is everywhere today and in every industry — and it's remaking how we manufacture products, construct homes and buildings, feed ourselves, manage our resources and much more.

Big data is also the key to improving the way we conduct environmental studies and engage in environmental reporting for the purposes of regulatory compliance — and, ultimately, the key to making improvements. Here's a look at why big data is essential for understanding how human activities impact the environment.

Data Mapping for Accountability

Mapping and data visibility tools are incredibly important when it comes to understanding how environmental trends impact business and other human activities. Deforestation, for instance, is a problem that companies, regulators and authorities alike can all get on the same page about thanks to data. In this case, it's satellite mapping data.

The World Resources Institute founded a group called Global Forest Watch, which utilizes satellite telemetry, crowdsourced data (photo and video coverage, eyewitness accounts, etc.) and comparative analytical tools to identify areas where deforestation is accelerating, where companies are using more than their fair share of resources, and where industrial activities and a lack of adequate protections have resulted in habitat loss, other changes in land cover and even ...
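As a rough illustration of the kind of comparative analysis such tools perform, the sketch below flags likely forest loss by comparing two land-cover grids from different years. The class codes and tiny example arrays are invented for illustration; they are not Global Forest Watch's actual data or methods.

# Flag cells that were forest in the earlier raster and are not in the later one.
# Class codes and grids are hypothetical.
import numpy as np

FOREST, NON_FOREST = 1, 0

cover_2015 = np.array([[1, 1, 0],
                       [1, 1, 1],
                       [0, 1, 1]])

cover_2019 = np.array([[1, 0, 0],
                       [0, 1, 1],
                       [0, 0, 1]])

loss_mask = (cover_2015 == FOREST) & (cover_2019 != FOREST)
loss_fraction = loss_mask.sum() / (cover_2015 == FOREST).sum()
print(f"Forest pixels lost: {loss_mask.sum()} ({loss_fraction:.0%} of 2015 forest)")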


Read More on Datafloq

Most modern technologies complement each other nicely. For example, advanced analytics and AI can be used collectively to achieve some amazing things, like powering driverless vehicle systems. Big data and machine learning can be used collaboratively to build predictive models, allowing businesses and decision-makers to react and plan for future events.
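As a minimal illustration of that last point, the sketch below fits a simple regression model to synthetic historical data and forecasts a few future values. The numbers and the single feature are invented, and real predictive pipelines are far more involved.

# Toy predictive model: forecast future demand from synthetic weekly history.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = np.arange(1, 53).reshape(-1, 1)                      # week number as the only feature
demand = 100 + 2.5 * weeks.ravel() + rng.normal(0, 5, 52)    # invented historical demand

model = LinearRegression().fit(weeks, demand)
next_month = np.array([[53], [54], [55], [56]])
print("Forecast demand:", model.predict(next_month).round(1))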

It should come as no surprise, then, that big data and 3D printing have a symbiotic nature as well. The real question is not "if" but rather "how" they will influence each other. After all, most 3D prints come from a digital blueprint, which is essentially data. Here are some of the ways in which big data and 3D printing influence one another:

On-Demand and Personalized Manufacturing

One of the things 3D printing has accomplished is transforming the modern manufacturing market, making it more accessible and consumer-friendly. There are many reasons for this.

First, 3D printing offers localized additive manufacturing, which means teams can create and develop prototypes or concepts much faster. The technology can also be adapted to work with a variety of materials, from plastic and fabric to wood and concrete.

Additionally, the manufacturing process itself is both simplified and sped up considerably. One only needs the proper digital blueprint ...


Read More on Datafloq
Datafloq by Harikrishna Kundariya - 1d ago

Every industry can benefit from saving time and money. Competition is so high in every sector that streamlining operations can only improve profits. Transportation is probably the sector most in need of modernization: it still depends heavily on paper documents and time-consuming procedures.

Another problem that plagues the transport industry is delayed receipt of payments, along with disputes that result in non-payment. The industry needs a transparent, open network to solve these problems, and dApps built on blockchain technology could be the best solution.
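As a toy illustration of the transparency idea (not a real dApp or smart contract), the sketch below keeps shipment events in a hash-chained ledger that every party can inspect and verify; the event fields and the load reference are invented.

# Hypothetical hash-chained shipment ledger: each event links to the previous one,
# so all parties see the same tamper-evident record.
import hashlib, json, time

class ShipmentLedger:
    def __init__(self):
        self.blocks = []

    def add_event(self, event: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(payload)
        return payload

ledger = ShipmentLedger()
ledger.add_event({"actor": "shipper",   "status": "picked up",  "ref": "LOAD-42"})
ledger.add_event({"actor": "carrier",   "status": "in transit", "ref": "LOAD-42"})
ledger.add_event({"actor": "consignee", "status": "delivered",  "ref": "LOAD-42"})
print(len(ledger.blocks), "events; last hash:", ledger.blocks[-1]["hash"][:12])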

1. Problems Faced by the Transport Industry

The transport industry faces several challenges, most of which relate to delays in procedures or payments; even the payment delays often stem from disputes over payment terms. Applying modern technology to these processes can solve the issues.

One of the fundamental problems in the transportation industry is the lack of transparent communication. When goods travel from one point to another, many agencies are involved, and the shipment may pass through several modes of transport.

These different parties don't know each other, and communication is inconsistent across the whole network. One person is not ...


Read More on Datafloq

Ready to transition from a commercial database to open source, and want to know which databases are most popular in 2019? Wondering whether an on-premise vs. public cloud vs. hybrid cloud infrastructure is best for your database strategy? Or, considering adding a new database to your application and want to see which combinations are most popular? We found all the answers you need at the Percona Live event last month, and broke down the insights into the following free trends reports:


Top Databases Used: Open Source vs. Commercial
Cloud Infrastructure Analysis: Public Cloud vs. On-Premise vs. Hybrid Cloud
Polyglot Persistence Trends: Number of Databases Used & Top Combinations


2019 Top Databases Used

So, which databases are most popular in 2019? We broke down the data by open source databases vs. commercial databases:

Open Source Databases

Open source databases are free community databases whose source code is available to the general public, and they may be used in their original form or modified. Popular examples of open source databases include MySQL, PostgreSQL and MongoDB.

Commercial Databases

Commercial databases are developed and maintained by a commercial business, are available for use through a licensing or subscription fee, and may not be modified. Popular examples of commercial databases include Oracle, SQL Server, and DB2.

Top Open Source Databases

MySQL remains on ...


Read More on Datafloq

Whether we look at the newest trends in IT service automation, follow recent research, or listen to the top speakers at conferences and meetups, they all inevitably point out that automation increasingly relies on Machine Learning and Artificial Intelligence.

It may sound as if these two concepts are being used as buzzwords to declare that process automation follows global trends, and that is partially true. In theory, though, machine learning can enable automated systems to test and monitor themselves, provision additional resources when necessary to meet timelines, retire those resources when they're no longer needed, and in this way enhance IT processes and software delivery.
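A minimal sketch of that "provision and retire resources" idea, assuming a naive moving-average forecast and an invented per-agent capacity figure, might look like this:

# Decide how many build agents to keep warm based on a forecast of next hour's load.
# The forecast method and JOBS_PER_AGENT_PER_HOUR are illustrative assumptions.
import math
from statistics import mean

JOBS_PER_AGENT_PER_HOUR = 20              # assumed capacity of one agent

def forecast_next_hour(recent_hourly_jobs):
    # Naive forecast: mean of the last three observed hours.
    return mean(recent_hourly_jobs[-3:])

def agents_needed(recent_hourly_jobs, minimum=1):
    expected = forecast_next_hour(recent_hourly_jobs)
    return max(minimum, math.ceil(expected / JOBS_PER_AGENT_PER_HOUR))

history = [35, 48, 61, 80, 95]            # jobs observed over the last five hours
print("Agents to keep warm for the next hour:", agents_needed(history))   # -> 4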

Artificial Intelligence, in turn, refers to fully autonomous systems that can interact with their surroundings in any situation and reach their goals independently.

However, most organizations are in the very early days of actually implementing such solutions. The idea behind the need for AI and related technologies is that many decisions still fall to developers in areas that could be handled effectively by adequately trained computer systems. For example, it is the developer who decides what needs to be executed, but identifying ...


Read More on Datafloq

Your company needs data to thrive. Quality data leads to key insights for businesses, and factors heavily into their decision-making.

But where do you find quality data? While much of your company’s data will come from internal sources such as CRM and ERP software, even more of it will come externally from the web. In fact, the web is the largest data repository out there.

The overall volume of data in the digital universe has grown significantly, and shows no signs of slowing down. Experts say it’s doubling in size every two years, growing from a mere 4.4 zettabytes in 2013 to a predicted 44 zettabytes (or 44 trillion GB) in 2020.
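A quick back-of-the-envelope check of that doubling claim (my own arithmetic, not from the cited experts) lands in the same ballpark as the 44 ZB prediction:

# Project data volume assuming it doubles every two years from the 2013 figure.
start_zb, start_year, end_year = 4.4, 2013, 2020
doubling_period_years = 2

projected = start_zb * 2 ** ((end_year - start_year) / doubling_period_years)
print(f"Projected volume in {end_year}: {projected:.1f} ZB")   # ~49.8 ZB, close to the predicted 44 ZB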

However, this data is unstructured, unorganized, and lacks consistency. To fully capitalize on it and glean its highly-valuable insights, you must efficiently extract, prepare, and integrate data so it can be consumed at scale.

Not only that, it needs to be clean, reliable data. To help with that, you need a trusted platform that treats external data with the same quality and control as internal data sets.
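As a small, hypothetical example of that "extract, prepare, integrate" step, the sketch below normalises a scraped table before joining it to an internal dataset; the column names and values are invented for illustration.

# Clean a scraped dataset (whitespace, case, duplicates, non-numeric fields)
# and join it to an internal record on a shared key.
import pandas as pd

web_data = pd.DataFrame({
    "company": ["Acme Corp ", "acme corp", "Globex"],
    "revenue": ["1,200", "1200", None],
})

cleaned = (
    web_data
    .assign(company=lambda d: d["company"].str.strip().str.lower(),
            revenue=lambda d: pd.to_numeric(
                d["revenue"].str.replace(",", "", regex=False), errors="coerce"))
    .drop_duplicates(subset="company")
    .dropna(subset=["revenue"])
)

internal = pd.DataFrame({"company": ["acme corp"], "account_owner": ["J. Doe"]})
print(cleaned.merge(internal, on="company", how="left"))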

Here are some strategies to ensure data consistency and quality across the board to benefit your business.

Develop Guidelines for Your Sales Teams

Sales teams have access to a massive amount of ...


Read More on Datafloq

Airline big data, combined with predictive analytics, is being used to drive up airline ticket prices.

As airlines and their frequent flyer programs gather more intelligence on your day-to-day lifestyle, flying habits and financial position, they begin to build an airline big data profile.

That profile covers consumer interests, goals, psychometric assessments, your motivation to engage with a brand at any given point throughout the day, what has driven you to purchase in the past and, most importantly, where your thresholds are.

To illustrate how data is playing a growing role in today's flight booking engines, I've broken down, play by play, how each piece of data collected about you can be used, analysed and overlaid with other datasets to paint a picture of who you are and what motivates and drives you to purchase a particular product.
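To make that overlay idea concrete, here is a deliberately simplified, hypothetical scoring sketch; the signals, weights and base fare are invented and do not reflect how any real airline prices tickets.

# Toy personalised-fare calculation from an overlaid customer profile.
BASE_FARE = 400.0

def personalised_fare(profile: dict) -> float:
    uplift = 0.0
    if profile.get("books_last_minute"):
        uplift += 0.15                          # historically pays for urgency
    if profile.get("status_tier") in {"gold", "platinum"}:
        uplift += 0.10                          # lower price sensitivity inferred
    if profile.get("recent_searches_for_route", 0) >= 3:
        uplift += 0.05                          # strong purchase-intent signal
    return round(BASE_FARE * (1 + uplift), 2)

profile = {"books_last_minute": True, "status_tier": "gold",
           "recent_searches_for_route": 4}
print("Quoted fare:", personalised_fare(profile))   # 400 * 1.30 = 520.0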

Every day, trillions of calculations are crunched to transform this goldmine of data into real, tangible high-revenue opportunities for the airlines and their frequent flyer programs.

“When armed with key insights, a holistic overview of your and other customers' detailed profile information can be applied to direct booking channels, which are designed to customize pricing for your personal situation at that very moment. Here is ...


Read More on Datafloq
Datafloq by Gilad David Maayan - 6d ago

Deep learning is growing in both popularity and revenue. In this article, we will shed light on the different milestones that have led to the deep learning field we know today. Some of these events include the introduction of the initial neural network model in 1943 and the first use of this technology, in 1970.

We will then address more recent achievements, starting with Google’s Neural Machine Translation and moving on to lesser-known innovations such as Pix2Code, an application that generates layout code from GUI screenshots with 77% accuracy.

Towards the end of the article, we will briefly touch on automated learning-to-learn algorithms and democratized deep learning (embedded deep learning in toolkits).

The Past - An Overview of Significant Events

1943 – The Initial Mathematical Model of a Neural Network

For deep learning to develop, there first needed to be an established understanding of the neural networks in the human brain.

A logician and a neuroscientist, Walter Pitts and Warren McCulloch respectively, created the first mathematical model of a neural network. Their work, ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, put forth a combination of algorithms and mathematics aimed at mimicking ...
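The core idea of that model is easy to illustrate: a unit that fires when the weighted sum of its binary inputs reaches a threshold. The sketch below wires one up as an AND gate; the weights and threshold are my own illustrative choice.

# A McCulloch-Pitts-style threshold unit acting as a logical AND.
def mcculloch_pitts_neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts_neuron([x1, x2], weights=[1, 1], threshold=2))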


Read More on Datafloq

Building on our month focussed on controversial topics, let’s turn to what will set your team up for success.

Different contexts can require different types of analytics team. A lot of the advice I offer within the Opinion section of this blog is based on a lifetime leading teams in large corporates. So, I’m pleased to partner with guest bloggers from other settings.

So, over to Alan to explain why getting “fuzzy” is the way for an analytics team to see success in the world of startups…

Get fuzzy! Why it is needed

My co-founders and I have recently had to face this challenge of creating a new data analytics team, having set up our new firm, Vistalworks, earlier in 2019. Thinking about this challenge, reflecting on what we know, and getting the right answer (for us) has been an enlightening process.

With 70-odd years of experience between us, we have plenty of examples of what not to do in data analytics teams, but the really valuable questions have been what we should do, and what conditions we should set up, to give our new team the best chance of being successful.

As we talked through this issue, my main personal observation was that successful data analytics teams, of whatever size, have ...


Read More on Datafloq



This post describes developing a web application for a machine learning model and deploying it so that it can be accessed by anyone. The web application is available at:

https://arrear-model.herokuapp.com/

The process of deployment consists of transferring all Flask application files from a local computer to the web server. Once completed, the web application can be visited by anyone through a public URL. The cloud platform used is Heroku because it supports web applications built with various programming languages, including applications built with Python Flask. Heroku makes things easier by handling administrative tasks itself, so one can focus on the programming part. Another good thing is that web applications are hosted for free. As traffic increases, one may want to sign up for one of the paid plans so that the web application performs well under high traffic.
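For orientation, a minimal app.py of the kind Heroku would serve might look like the sketch below. The route, form field and pickled model file are assumptions for illustration, not the author's actual code (which is linked below); Heroku also typically expects a Procfile (e.g. web: gunicorn app:app) and a requirements.txt listing the dependencies.

# app.py -- hedged sketch of a Flask front end for a pre-trained regression model.
import pickle
from flask import Flask, request, render_template_string

app = Flask(__name__)
with open("model.pkl", "rb") as f:        # hypothetical pre-trained model file
    model = pickle.load(f)

FORM = """
<form method="post">
  Unemployment rate: <input name="unemployment" type="number" step="any">
  <input type="submit" value="Predict">
</form>
{% if prediction is not none %}<p>Predicted arrears rate: {{ prediction }}</p>{% endif %}
"""

@app.route("/", methods=["GET", "POST"])
def predict():
    prediction = None
    if request.method == "POST":
        x = float(request.form["unemployment"])          # assumed input variable
        prediction = round(float(model.predict([[x]])[0]), 3)
    return render_template_string(FORM, prediction=prediction)

if __name__ == "__main__":
    app.run()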

The steps involved are as follows:

Creating a simple model that can be deployed to the web, where users can input variables to get predictions

For this post, the model is a simple linear regression model based on the paper “Assessment of Irish Mortgage Arrears at County Level using Machine Learning Techniques and Open Data”. The code and the paper are available for download here. The model is ...


Read More on Datafloq