
La Biblia de la IA
Covers international scientific and technical publications on the state of Artificial Intelligence
La Biblia de la IA
2y ago
«AutoQML: self-assembling circuits on a hyper-parameterized quantum ML platform, built with cirq, tensorflow and tfq. Trillions of possible qubit registers, gate combinations and moment sequences, ready to be adapted into your ML flow. Here I demonstrate climate change, James Webb Space Telescope and microbiology vision applications… [Thus far, a circuit with 16 qubits and a gate sequence of [ YY ] - [ XX ] - [CNOT] has performed the best, per my blend of metrics…
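As an illustration of the kind of circuit described, here is a minimal sketch of a 16-qubit parameterized [ YY ] - [ XX ] - [CNOT] block in cirq, wrapped as a trainable layer with TensorFlow Quantum. This is not the author's AutoQML code; the symbol names, qubit pairing pattern and readout are assumptions made for the example.

```python
# Minimal sketch (assumed structure, not the author's AutoQML implementation):
# a 16-qubit parameterized circuit with one [YY]-[XX]-[CNOT] moment sequence,
# exposed as a trainable Keras layer via TensorFlow Quantum.
import cirq
import sympy
import tensorflow_quantum as tfq

qubits = cirq.GridQubit.rect(1, 16)

def yy_xx_cnot_block(qubits, layer):
    """One [YY]-[XX]-[CNOT] sequence over neighbouring qubit pairs."""
    a, b = sympy.symbols(f"a{layer} b{layer}")
    ops = []
    for q0, q1 in zip(qubits[::2], qubits[1::2]):
        ops.append(cirq.YY(q0, q1) ** a)   # parameterized YY interaction
        ops.append(cirq.XX(q0, q1) ** b)   # parameterized XX interaction
    for q0, q1 in zip(qubits[:-1], qubits[1:]):
        ops.append(cirq.CNOT(q0, q1))      # entangling CNOT chain
    return ops

circuit = cirq.Circuit(yy_xx_cnot_block(qubits, layer=0))
readout = [cirq.Z(q) for q in qubits]

# tfq.layers.PQC trains the circuit's sympy symbols as Keras weights; inputs
# are data-encoding circuits serialized with tfq.convert_to_tensor.
quantum_layer = tfq.layers.PQC(circuit, readout)
```
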
La Biblia de la IA
3y ago
In this presentation, the current issues that stand in the way of making federated learning flawlessly useful in the real world are briefly overviewed. They relate to data/system heterogeneity, client management, traceability, and security. We also introduce the modularized federated learning framework we are currently developing to experiment with various techniques and protocols and to find solutions to the aforementioned issues. The framework will be opened to the public once development is complete…
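The excerpt does not describe the framework's API, so as background here is a minimal federated-averaging (FedAvg) sketch in plain NumPy showing the basic protocol such a framework modularizes; the client data, model and round count are invented for illustration.

```python
# FedAvg sketch (illustrative only, not the authors' framework): clients run a
# few local gradient steps, the server averages updates weighted by data size.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client-side step: a few epochs of gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(global_w, clients):
    """Server-side step: average client weights, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(global_w, X, y) for X, y in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
# Heterogeneous clients: different sizes and noise levels, echoing the
# data/system heterogeneity issue the excerpt mentions.
clients = []
for n, noise in [(50, 0.1), (200, 0.5), (20, 1.0)]:
    X = rng.normal(size=(n, 2))
    y = X @ true_w + noise * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg(w, clients)
print(w)  # approaches true_w after a few communication rounds
```
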
La Biblia de la IA
3y ago
CNN Explainer tightly integrates a model overview that summarizes a CNN's structure, and on-demand, dynamic visual explanation views that help users understand the underlying components of CNNs. Through smooth transitions across levels of abstraction, our tool enables users to inspect the interplay between low-level mathematical operations and high-level model structures…
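For a concrete reference point, below is a small Keras CNN of the kind such a tool visualizes; the layer sizes are illustrative assumptions, not the model bundled with CNN Explainer.

```python
# An illustrative small CNN: the layer list is the "high-level model structure",
# and each layer hides the "low-level mathematical operations" (convolutions,
# activations, pooling) that CNN Explainer lets users drill into.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(10, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv2D(10, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(10, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv2D(10, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # the textual analogue of the tool's model-overview view
```
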
La Biblia de la IA
3y ago
In this survey, we connect several lines of work from the pre-neural and neural eras by showing how hybrid approaches over words and characters, as well as subword-based approaches based on learned segmentation, have been proposed and evaluated. We conclude that there is no silver-bullet solution for all applications, and likely never will be, and that thinking seriously about tokenization remains important for many applications…
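As a concrete example of "learned segmentation", here is a minimal byte-pair-encoding (BPE) sketch; the toy corpus and merge count are invented for illustration, and the survey itself covers many more schemes.

```python
# Minimal BPE sketch: repeatedly merge the most frequent adjacent symbol pair
# in a word-frequency dictionary, yielding a learned subword vocabulary.
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn merge rules from a {word: frequency} dictionary."""
    vocab = {tuple(w) + ("</w>",): c for w, c in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for symbols, count in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])  # apply the merge
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = count
        vocab = merged
    return merges

merges = learn_bpe({"lower": 5, "lowest": 3, "newer": 6, "wider": 2}, num_merges=10)
print(merges)  # frequent pairs such as ('e', 'r') become reusable subwords
```
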
La Biblia de la IA
3y ago
The Python scientific visualisation landscape is huge. It is composed of a myriad of tools, ranging from the most versatile and widely used down to the more specialised and little-known. Some of these tools are community-based while others are developed by companies. Some are made specifically for the web, others are for the desktop only; some deal with 3D and large data, while others target flawless 2D rendering…
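As a small illustration from the "versatile and widely used" end of that landscape, here is a minimal 2D example; matplotlib is named only as a representative, since the excerpt itself does not single out any library.

```python
# A minimal desktop-oriented 2D plot with matplotlib (chosen here as an
# illustrative example of a widely used tool, not named in the excerpt).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(x, np.sin(x), label="sin(x)")
ax.plot(x, np.cos(x), label="cos(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.tight_layout()
plt.show()  # desktop backend; web-oriented tools render in the browser instead
```
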
La Biblia de la IA
3y ago
I was emboldened to write this book after my video series called Data Science With Julia got some traction, and after a tweet about Decision Tree was liked by the Julia Language account itself. So I thought, why not give it more…
La Biblia de la IA
3y ago
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS…
La Biblia de la IA
3y ago
The OECD launched the Artificial Intelligence and the Future of Skills project to develop a programme that could assess the capabilities of AI and robotics and their impact on education and work. This report represents the first step in developing the methodological approach of the project…
La Biblia de la IA
3y ago
Isaac Gym offers a high-performance learning platform to train policies for a wide variety of robotics tasks directly on the GPU. Both the physics simulation and the neural network policy training reside on the GPU and communicate by directly passing data from physics buffers to PyTorch tensors without ever going through any CPU bottlenecks. This leads to blazing-fast training times for complex robotics tasks on a single GPU, with 2-3 orders of magnitude improvement compared to conventional RL training that uses a CPU-based simulator and a GPU for the neural networks…
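The excerpt describes an architecture rather than code, so here is a generic PyTorch sketch of the same idea: a vectorized toy simulation and the policy both live on one GPU, so stepping and inference never copy data through the CPU. This is not Isaac Gym's actual API; the point-mass dynamics and tensor shapes are invented for illustration.

```python
# Generic sketch of GPU-resident simulation + policy (not Isaac Gym's API):
# simulation state is kept in GPU tensors, the analogue of physics buffers,
# and the policy reads them directly with no .cpu()/.numpy() round trips.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 4, 2

policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 64), torch.nn.ELU(),
    torch.nn.Linear(64, act_dim),
).to(device)

state = torch.zeros(num_envs, obs_dim, device=device)  # [pos(2), vel(2)] per env

def step(state, action, dt=0.02):
    """Toy point-mass dynamics, evaluated entirely on the GPU."""
    pos, vel = state[:, :2], state[:, 2:]
    vel = vel + dt * action
    pos = pos + dt * vel
    return torch.cat([pos, vel], dim=1)

with torch.no_grad():
    for _ in range(100):
        action = torch.tanh(policy(state))  # policy reads the GPU buffers directly
        state = step(state, action)         # data never leaves the device
```
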
La Biblia de la IA
3y ago
Convolutional neural networks (CNNs) have so far been the de facto model for visual data. Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or even superior performance on image classification tasks. This raises a central question: how are Vision Transformers solving these tasks? Are they acting like convolutional networks, or learning entirely different visual representations? Analyzing the internal representation structure of ViTs and CNNs on image classification benchmarks, we find striking differences between the two architectures, such as ViT having more…
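The excerpt does not name the analysis method, but a standard tool for comparing layer representations across architectures is linear centered kernel alignment (CKA); the sketch below shows it on toy activation matrices, with shapes and data invented for illustration.

```python
# Linear CKA sketch: a similarity score between two representation matrices
# (n_examples x n_features), invariant to rotation and isotropic scaling.
# Used here as an assumed stand-in for the paper's representation analysis.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (n_examples, features) activation matrices."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
acts = rng.normal(size=(2000, 64))              # fake layer activations
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal transform
print(linear_cka(acts, acts @ Q))                     # ~1.0: same representation, rotated
print(linear_cka(acts, rng.normal(size=(2000, 32))))  # low: unrelated representation
```
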