New Deep Learning Software Release: NVIDIA CUTLASS 1.2

November 20, 2018

Overview - CUTLASS 1.2

"CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication (GEMM) at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS. CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. These thread-wide, warp-wide, block-wide, and device-wide primitives can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policy. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.

To support a wide variety of applications, CUTLASS provides extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for 8-bit integer, half-precision floating point (FP16), single-precision floating point (FP32), and double-precision floating point (FP64) types. Furthermore, CUTLASS demonstrates CUDA's WMMA API for targeting the programmable, high-throughput Tensor Cores provided by NVIDIA's Volta architecture and beyond."
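
As a concrete illustration, here is a minimal sketch of how a single-precision device-wide GEMM might be instantiated from these templates using the CUTLASS 1.x traits API. The header paths, the SgemmTraits and Shape names, and the Params::initialize argument order follow the 1.x samples and are assumptions that may vary between releases:

    #include <cuda_runtime.h>
    #include <cutlass/gemm/gemm.h>
    #include <cutlass/gemm/sgemm_traits.h>

    // Single-precision GEMM, column-major A and B:
    // C = alpha * A * B + beta * C.
    cudaError_t cutlass_sgemm_nn(int M, int N, int K,
                                 float alpha, float const *A, int lda,
                                 float const *B, int ldb,
                                 float beta, float *C, int ldc) {
        // Compile-time policy: the Shape<8, 128, 128> threadblock tile
        // selects the blocking strategy; choosing different traits
        // (e.g. for FP16 or INT8) swaps in different data-movement and
        // multiply-accumulate components.
        typedef cutlass::gemm::SgemmTraits<
            cutlass::MatrixLayout::kColumnMajor,  // layout of A
            cutlass::MatrixLayout::kColumnMajor,  // layout of B
            cutlass::Shape<8, 128, 128>           // threadblock tile
        > GemmTraits;

        typedef cutlass::gemm::Gemm<GemmTraits> Gemm;

        // Describe the problem, then launch the device-wide GEMM kernel.
        typename Gemm::Params params;
        int result = params.initialize(M, N, K, alpha, A, lda, B, ldb,
                                       beta, C, ldc, C, ldc);
        if (result) {
            return cudaErrorInvalidValue;  // unsupported problem shape
        }
        Gemm::launch(params);
        return cudaGetLastError();
    }

Because the tiling sizes, layouts, and data types are all template parameters, the same call pattern composes with custom epilogues or can be embedded inside a larger hand-written kernel.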

DL_NVIDIA_GPU_Dynamic-SLB.jpg

What's New in CUTLASS 1.2

CUTLASS 1.2, the latest version of the CUDA template library for linear algebra subroutines, includes the following key updates:

  • Support for Turing Tensor Cores, which significantly speed up matrix computations for deep learning inference
  • Tensor Core-optimized WMMA GEMMs for the new INT8, INT4, and INT1 precision modes introduced in Turing (a minimal WMMA sketch follows this list)
  • Support for batched strided GEMMs, parallelized GEMM-K reductions, enhanced utilities, and samples
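
These WMMA GEMMs build on the CUDA WMMA primitives mentioned in the overview above. To give a flavor of that API, below is a minimal sketch of a kernel in which one warp computes a single 16x16x16 FP16 tile product on Tensor Cores using the standard nvcuda::wmma types; the Turing INT8/INT4/INT1 paths follow the same fragment/load/mma/store pattern through integer and experimental sub-byte fragment types:

    #include <mma.h>
    using namespace nvcuda;

    // One warp computes a single 16x16 output tile:
    // D (16x16, FP32) = A (16x16, FP16) * B (16x16, FP16).
    __global__ void wmma_tile_gemm(const half *a, const half *b, float *d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half,
                       wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half,
                       wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

        wmma::fill_fragment(acc_frag, 0.0f);    // zero the accumulator
        wmma::load_matrix_sync(a_frag, a, 16);  // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        // Warp-synchronous matrix multiply-accumulate on Tensor Cores.
        wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);
        wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
    }

Launched with a single warp, e.g. wmma_tile_gemm<<<1, 32>>>(dA, dB, dD), and compiled for a Tensor Core architecture (nvcc -arch=sm_75 for Turing), this produces one output tile; a full GEMM such as those in CUTLASS tiles many of these fragment operations across warps and threadblocks.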

Have any questions? Contact us directly here.
