HPC

Exxact Now Offering NVIDIA Tesla Volta Solutions

October 2, 2017
2 min read

The NVIDIA Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for Exxact HPC systems to excel at both computational sciences for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single Exxact Tensor server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads. Every researcher and engineer can now afford an AI supercomputer to tackle their most challenging work with Exxact Tensor servers featuring NVIDIA Tesla V100 GPUs.

Click here to view all NVIDIA Tesla Volta Solutions

NVIDIA Tesla V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once impossible. The Tesla V100 comes in two form factors: Tesla V100 for NVLink-optimized servers and Tesla V100 for PCIe-based servers.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink high-speed interconnect technology connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
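
The 125 TFLOPS mixed-precision figure comes from the Tensor Cores, which multiply FP16 matrices while accumulating results in FP32. As a rough illustration of how an application reaches that path, the following minimal sketch issues a cuBLAS GEMM with FP16 inputs and FP32 accumulation; it assumes a CUDA 9 or newer toolkit with cuBLAS, and the matrix size and zero-filled buffers are placeholders rather than anything specific to Exxact systems.

    /* Minimal sketch: the FP16-input, FP32-accumulate GEMM that Tesla V100
     * Tensor Cores accelerate. Assumes CUDA 9+ with cuBLAS; the 4096x4096
     * size and zeroed buffers are illustrative only. */
    #include <cuda_runtime.h>
    #include <cublas_v2.h>
    #include <cuda_fp16.h>
    #include <stdio.h>

    int main(void) {
        const int n = 4096;                  /* multiples of 8 map cleanly onto Tensor Cores */
        size_t half_bytes  = (size_t)n * n * sizeof(__half);
        size_t float_bytes = (size_t)n * n * sizeof(float);

        __half *dA, *dB;
        float  *dC;
        cudaMalloc((void **)&dA, half_bytes);
        cudaMalloc((void **)&dB, half_bytes);
        cudaMalloc((void **)&dC, float_bytes);
        cudaMemset(dA, 0, half_bytes);       /* placeholder data */
        cudaMemset(dB, 0, half_bytes);
        cudaMemset(dC, 0, float_bytes);

        cublasHandle_t handle;
        cublasCreate(&handle);
        cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);  /* opt in to Tensor Core math */

        const float alpha = 1.0f, beta = 0.0f;
        /* C = alpha * A * B + beta * C, FP16 inputs with FP32 accumulation */
        cublasStatus_t status = cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                                             n, n, n,
                                             &alpha,
                                             dA, CUDA_R_16F, n,
                                             dB, CUDA_R_16F, n,
                                             &beta,
                                             dC, CUDA_R_32F, n,
                                             CUDA_R_32F,
                                             CUBLAS_GEMM_DEFAULT_TENSOR_OP);
        cudaDeviceSynchronize();
        printf("cublasGemmEx status: %d\n", (int)status);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }

Built with something like nvcc -arch=sm_70 -lcublas, the sketch targets the sm_70 (Volta) architecture that Tesla V100 implements; real training workloads would typically reach the same Tensor Core path through a deep learning framework rather than hand-written cuBLAS calls.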

Tesla V100 Specifications:

  • 5,120 CUDA cores
  • 640 Tensor Cores
  • 7.8 TFLOPS double-precision performance with NVIDIA GPU Boost™
  • 15.7 TFLOPS single-precision performance with NVIDIA GPU Boost
  • 125 TFLOPS mixed-precision deep learning performance with NVIDIA GPU Boost
  • 300 GB/s bi-directional interconnect bandwidth with NVIDIA NVLink
  • 900 GB/s memory bandwidth with CoWoS HBM2 Stacked Memory
  • 16 GB of CoWoS HBM2 Stacked Memory
  • 300 W max power consumption
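
Several of these figures (CUDA core count, HBM2 capacity, memory bandwidth) can be sanity-checked on an installed card with a short device query. The sketch below assumes only that the CUDA toolkit is present; on Tesla V100 the reported 80 streaming multiprocessors, at 64 FP32 CUDA cores per Volta SM, account for the 5,120 CUDA cores listed above.

    /* Minimal device-query sketch (assumes the CUDA toolkit is installed).
     * Prints the properties behind several spec-sheet numbers: SM count,
     * global memory capacity, and the peak memory bandwidth implied by the
     * memory clock and bus width. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "no CUDA device found\n");
            return 1;
        }

        /* memoryClockRate is in kHz, memoryBusWidth in bits; x2 for double data rate */
        double peak_bw_gbs = 2.0 * prop.memoryClockRate * 1e3 *
                             (prop.memoryBusWidth / 8.0) / 1e9;

        printf("Device                : %s\n", prop.name);
        printf("Streaming multiprocs  : %d\n", prop.multiProcessorCount);
        printf("Global memory         : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("Peak memory bandwidth : %.0f GB/s\n", peak_bw_gbs);
        return 0;
    }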

For inquiries or more information, contact our sales department here.
