Carlos Reaño and Federico Silla will give a tutorial about rCUDA next Tuesday, January 22, as part of the HiPEAC 2019 conference, which will take place in Valencia, Spain. More information about the tutorial is available on the conference website.
The rCUDA Team is glad to invite you to the information meeting that will take place next Friday, December 28, at the Universitat Politecnica de Valencia, Spain. For more information, please contact email@example.com
The rCUDA Team is happy to announce that the Ladon OS HPC environment, which SIE ships with its systems, will include the rCUDA middleware. Ladon OS is an HPC software environment based on the CentOS Linux distribution and developed by SIE, a company located in Madrid, Spain, that provides cutting-edge HPC systems to private companies and public institutions.
The rCUDA Team is proud to announce a new release of the rCUDA remote GPU virtualization middleware. This new release, which is a beta version, provides improved and much more robust functionality than the previous version, although performance is not yet optimized. The new release includes support for CUDA 8.0 and CUDA 9.0.
A new release of the rCUDA middleware will be published during September-October. In addition to supporting CUDA 9.0 and CUDA 9.1, the new version of rCUDA has successfully run popular HPC applications such as BARRACUDA, CUDAmeme, GPUBlast, GPU-LIBSVM, Gromacs, LAMMPS, MAGMA and NAMD. Deep learning frameworks are also supported: this new version has been successfully tested with TensorFlow 1.7, Caffe, Torch, Theano, PyTorch and MXNET. Finally, renderers such as Blender and Octane are also supported. This new version of rCUDA is a major step forward in the evolution of the middleware, because almost 80% of its code is either new or has been reworked.
The rCUDA Team is happy to announce that the new rCUDA version, to be released at the end of October 2018, has been successfully tested with the Inception model using TensorFlow 1.10 and CUDA 9.0. Further tests with other programs will be carried out.
rCUDA is designed to provide the best performance. The plot compares the performance of P2P memory copies (data copies between GPUs) carried out with CUDA over the PCIe link within a single node against the performance attained by rCUDA when the two GPUs are located in different remote servers connected by InfiniBand. Three GPU generations are considered. As the plot shows, using rCUDA does not imply a performance degradation.
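For reference, the operation benchmarked in the plot is a standard CUDA peer-to-peer copy. A minimal sketch is shown below (buffer size and device ordinals are illustrative assumptions); one of rCUDA's design points is that the same unmodified code runs when the two GPUs actually sit in different remote servers:

```cuda
// Sketch: P2P copy of a 64 MiB buffer from GPU 0 to GPU 1.
// With native CUDA both GPUs share a node's PCIe fabric; under rCUDA
// each device may be a remote GPU reached over InfiniBand.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 64 << 20;   // 64 MiB test buffer (illustrative)
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);         // source buffer on GPU 0
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);         // destination buffer on GPU 1

    // Enable direct peer access where the hardware supports it
    // (cudaMemcpyPeer also works without it, staging through the host).
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 1, 0);
    if (canAccess) cudaDeviceEnablePeerAccess(0, 0);

    // The P2P copy itself: GPU 0 -> GPU 1.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("P2P copy of %zu bytes completed\n", bytes);

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```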