Two new tutorials on rCUDA planned for 2021
rCUDA News
3y ago
The rCUDA Team is glad to announce that two new tutorials on rCUDA will be held during the first months of 2021. The first tutorial, titled "rCUDA: Going Further in Remote GPU Virtualization", will be held on January 21st, 2021 at the 2020 International Conference on High Performance Computing & Simulation (HPCS 2020). The second tutorial, titled "rCUDA Goes Containers: Another Step towards Remote GPU Virtualization", will be held at Principles and Practice of Parallel Programming (PPoPP 2021) on February 28th, 2021. Both tutorials will be online events due to COVID restrictions.
rCUDA trying to support unified memory. Will it succeed?
rCUDA News
3y ago
As part of the next rCUDA release, the rCUDA Team is working on providing support for CUDA unified memory. This would allow rCUDA to provide much better support for some applications. We have some ideas about how to provide such support, and also about how to make it more efficient. Will we succeed? For sure we will learn a lot in the attempt.
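For reference, the snippet below is a minimal sketch of the kind of CUDA unified-memory usage this work targets: the application allocates managed memory with cudaMallocManaged and then accesses the same pointer from both host and device without explicit copies. It uses only standard CUDA runtime calls; nothing in it is rCUDA-specific, and it says nothing about how rCUDA will implement this support internally.

// Minimal CUDA unified-memory example (standard CUDA runtime only, no rCUDA-specific calls).
// The goal of the planned rCUDA support is for code like this to run unmodified on a remote GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // Managed allocation: the same pointer is valid on both host and device.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // host write, no cudaMemcpy needed

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();                      // wait before touching the data on the host again

    printf("data[0] = %f\n", data[0]);            // host read of device-updated memory
    cudaFree(data);
    return 0;
}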
Nice discussion about GPU virtualization at HPC-AI Advisory Council 2020 UK Conference
rCUDA News
3y ago
The rCUDA Team presented several virtualization technologies at the HPC-AI Advisory Council 2020 UK Conference. The discussion included the GPU virtualization technologies created by NVIDIA, in addition to rCUDA. You can access a copy of the slides at this link.
rCUDA goes containers
rCUDA News
3y ago
The rCUDA Team is pleased to announce that we are creating containers in order to distribute rCUDA together with specific applications. That is, the next release of rCUDA, in addition to being distributed in the usual way (a tarball), will also be available in container form, bundled with HPC and deep learning applications. As we are a very small team, we will begin with a few containers and progressively enlarge the collection of applications available.
rCUDA continues improving
rCUDA News
3y ago
The rCUDA Team continues improving the rCUDA middleware. We are very happy to have recently accomplished a new milestone: the LAMMPS Molecular Dynamics Simulator is now fully working with rCUDA. This achievement was made possible by a thorough debugging process, which allowed us to find several hidden bugs in the rCUDA source code. The next release of rCUDA will include these bug fixes, thus making rCUDA even more robust. It will also include additional features.
Back from vacation; back to rCUDA development
rCUDA News
3y ago
After the summer vacations, the rCUDA Team is back to work. Just before the vacations we released our new rCUDA version (v20.07), which has been very well received. Now, our immediate goal is to improve the new version of rCUDA so that it provides support for CUDA 10.0. We are also working on providing support for multitenancy.
New version of rCUDA released
rCUDA News
4y ago
The rCUDA Team is happy to announce that the new version of the rCUDA middleware has been released. The new version, v20.07, is the result of our hard work during the last year and a half. It includes a completely new and disruptive internal architecture, both at the client and at the server side, intended to provide improved performance while supporting CUDA applications much better. Moreover, the new version of rCUDA also includes a completely new communications layer, which is intended to provide much better performance than previous versions.
The new version of rCUDA keeps growing
rCUDA News
4y ago
The rCUDA Team is glad to report that more and more applications are being executed with the new version of rCUDA. In addition to TensorFlow, we have tried applications such as CUDAmeme, Gromacs, Barracuda, CUDASW, GPU-LIBSVM and HPL linpack. We are currently working on NAMD and LAMMPS. More applications will be tried in the future. Notice that with rCUDA it is possible to use remote GPUs located in different nodes; in this way, applications can be provided with the GPUs installed in all the nodes of the cluster. Additionally, those GPUs can be safely shared among several applications.
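To make the last point concrete, here is a minimal sketch of what this looks like from the application side. It is an ordinary CUDA program that simply enumerates the devices it can see; when it is run through rCUDA, those devices can be remote GPUs spread across several nodes (which GPUs are visible is configured on the rCUDA client side, and the exact configuration mechanism depends on the rCUDA release). Nothing in the snippet is rCUDA-specific.

// Plain CUDA runtime code; no rCUDA-specific API is used.
// Under rCUDA, the devices enumerated here may be remote GPUs located in other cluster nodes.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible GPUs: %d\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("  device %d: %s, %.1f GB\n",
               d, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}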
The new rCUDA version is able to safely partition the memory of a GPU among applications
rCUDA News
4y ago
The rCUDA Team is happy to disclose that the new rCUDA version (not released yet) is able to create isolated partitions of the GPU memory and provide each partition to an application. This can be done without having to use virtual machines or hypervisors. In this way, it is possible to split the memory of a GPU into a large number of sealed partitions, each of them with a different size. For instance, it is possible to partition a GPU with 32 GB into 29 partitions, where 1 partition is sized 8 GB, 2 partitions are sized 3 GB each, 10 partitions are sized 1 GB each, and 16 partitions have 0.5 GB each.
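As a quick sanity check of the example layout above (the sizes are the ones quoted in the announcement; the snippet is just illustrative arithmetic, not an rCUDA API):

// Checks that the example partition layout exactly fills a 32 GB GPU:
// 1 x 8 GB + 2 x 3 GB + 10 x 1 GB + 16 x 0.5 GB = 32 GB in 29 partitions.
#include <cstdio>

int main() {
    const double sizes_gb[] = {8.0, 3.0, 1.0, 0.5};
    const int    counts[]   = {1,   2,   10,  16};

    int partitions = 0;
    double total_gb = 0.0;
    for (int i = 0; i < 4; ++i) {
        partitions += counts[i];
        total_gb   += counts[i] * sizes_gb[i];
    }
    printf("%d partitions, %.1f GB in total\n", partitions, total_gb);  // prints: 29 partitions, 32.0 GB in total
    return 0;
}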
The new rCUDA version includes a new tool called rCUDA-smi
rCUDA News
4y ago
The rCUDA Team is happy to announce that a new tool will be included in the new rCUDA release. The new tool, named 'rCUDA-smi', behaves similarly to the nvidia-smi tool: it provides information about the remote GPUs used with rCUDA. The picture shows an example of this new tool, in which 8 GPUs, located in 5 different nodes, are used with rCUDA. The first node (node1) provides a K40m GPU. The second node (node2) provides two K80 GPUs, as does the third node. The fourth node provides two different GPUs: one K40m and one K20. Finally, the fifth node provides the remaining GPU.
