Julia is a free, open source, high-level, high-performance, dynamic programming language for numerical computing. It offers the development convenience of a dynamic language with the performance of a compiled, statically typed language, thanks in part to an LLVM-based JIT compiler that generates native machine code, and in part to a design that achieves type stability through specialization via multiple dispatch, which makes Julia code easy to compile into efficient machine code.
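The dispatch-based specialization described above can be loosely sketched in Python. This is only an analogy: Julia dispatches on the types of all arguments (multiple dispatch), while Python's standard-library `functools.singledispatch` specializes on the first argument's type only, so treat this as a simplified illustration rather than an equivalent mechanism.

```python
from functools import singledispatch

# Hedged analogy: one generic function, with type-specialized methods.
# Julia's compiler uses such specializations to generate efficient,
# type-stable native code for each concrete argument type.

@singledispatch
def double(x):
    raise NotImplementedError(f"no method for {type(x).__name__}")

@double.register
def _(x: int) -> int:
    # Specialized method chosen when the argument is an int.
    return x * 2

@double.register
def _(x: str) -> str:
    # Specialized method chosen when the argument is a str.
    return x + x
```

Calling `double(3)` and `double("ab")` selects different specialized methods at runtime, which is the dispatch idea; Julia goes further by compiling a fast native specialization per type signature.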
CIOs and data center managers who run large hybrid clouds worldwide have a good chance of hearing IBM knock on their doors in the next few months.
That's because IBM is opening 18 new "availability zones" for its public cloud across the U.S., Europe, and Asia-Pacific. An availability zone is an isolated physical location within a cloud data center that has its own separate power, cooling and networking to maximize fault tolerance, according to IBM.
Users have come to expect uptime service level agreements, high-speed network connectivity, and access to corporate databases wherever the data resides. But proximity to cloud data centers still matters: greater distance to a data center can degrade network performance, resulting in slow uploads or downloads.
Machine learning is a complex discipline. But implementing machine learning models is far less daunting and difficult than it used to be, thanks to machine learning frameworks—such as Google’s TensorFlow—that ease the process of acquiring data, training models, serving predictions, and refining future results.
Created by the Google Brain team, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning (aka neural networking) models and algorithms and makes them useful by way of a common metaphor. It uses Python to provide a convenient front-end API for building applications with the framework, while executing those applications in high-performance C++.
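The "common metaphor" TensorFlow popularized is the dataflow graph: computations are described as a graph of operations first, then executed by a high-performance runtime. The sketch below illustrates that idea in plain Python; the class and function names are invented for illustration and are not the real TensorFlow API.

```python
# Hypothetical sketch of the dataflow-graph idea: operations are recorded
# as graph nodes, and a separate step evaluates the graph. In TensorFlow,
# the Python front end builds such graphs and a C++ runtime executes them.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable that computes this node's value
        self.inputs = inputs  # upstream nodes feeding this operation

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def multiply(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(node):
    # Recursively evaluate input nodes, then apply this node's operation.
    return node.op(*(run(i) for i in node.inputs))

# (2 + 3) * 4 expressed as a graph, then executed by the runtime:
result = run(multiply(add(constant(2), constant(3)), constant(4)))
```

Separating graph construction from execution is what lets a framework optimize the graph, parallelize it, and run it on accelerators, independently of the front-end language used to describe it.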
German reinsurance company Munich Re has built a self-serve portal for employees to access a data lake in the hope that they will unearth innovative new business models.
Speaking at the Dataworks Summit in Berlin this week, Andreas Kohlmaier, head of data engineering at Munich Re, said: "The game has changed in the last few years. It is no longer about who has the best experts and knowledge in the company; it is more and more about who has access to the right data sources and who has the right technology in place to analyse and crunch that data."
The company set out to enable those experts to make use of data and technology in a more self-serve way, "to do what we have done for more than one hundred years, but better, to find answers to those new risks that come up and maybe to find some great and innovative business models," he added.
Programmers love to sneer at the world of fashion, where trends blow through like breezes. Skirt lengths rise and fall, pigments come and go, ties get fatter, then thinner. But in the world of technology, rigor, science, math, and precision rule over fads.
Enterprises should find it easier to tap the benefits of FPGAs now that Dell EMC and Fujitsu are putting Intel Arria 10 GX Programmable Acceleration Cards into off-the-shelf servers for the data center.
The infrastructure required to run artificial intelligence algorithms and train deep neural networks is so dauntingly complex that it’s hampering enterprise AI deployments, experts say.
“55% of firms have not yet achieved any tangible business outcomes from AI, and 43% say it’s too soon to tell,” says Forrester Research about the challenges of transitioning from AI excitement to tangible, scalable AI success.
“The wrinkle? AI is not a plug-and-play proposition,” the analyst group says. “Unless firms plan, deploy, and govern it correctly, new AI tech will provide meager benefits at best or, at worst, result in unexpected and undesired outcomes.”
Key-value, document-oriented, column-family, graph, relational... Today we seem to have as many kinds of databases as there are kinds of data. While this may make choosing a database harder, it makes choosing the right database easier. Of course, that does require doing your homework. You’ve got to know your databases.
One of the least-understood types of databases out there is the graph database. Designed for working with highly interconnected data, a graph database might be described as more “relational” than a relational database. Graph databases shine when the goal is to capture complex relationships in vast webs of information.
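The "complex relationships in vast webs of information" idea can be made concrete with a tiny sketch: entities as nodes, typed relationships as edges, and queries as traversals. The data and names below are invented for illustration; real graph databases such as Neo4j expose this model through dedicated query languages rather than hand-written traversals.

```python
from collections import deque

# Hypothetical mini-graph: each node maps to a list of (relationship, target)
# edges. A graph database stores and indexes exactly this kind of structure.
graph = {
    "alice": [("FRIEND", "bob")],
    "bob":   [("FRIEND", "carol")],
    "carol": [("WORKS_AT", "acme")],
    "acme":  [],
}

def reachable(start):
    """Breadth-first traversal: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for _rel, neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```

In a relational database, following a chain of relationships like this means a join per hop; in a graph database, edges are first-class, so multi-hop traversals stay cheap even as the web of connections grows.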