Matrix Multiplication Performance Optimization

Introduction

This article takes single-precision matrix multiplication (SGEMM) as an example to discuss CUDA performance optimization, applying the fundamentals of CUDA optimization step by step to bring the performance of single-precision matrix multiplication up to roughly 70% of cuBLAS. On the importance of matrix multiplication: in high-performance computing, the optimization of general matrix multiplication (GEMM) is a central topic.
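
As a concrete starting point, SGEMM computes C = A * B for single-precision matrices (here with alpha = 1 and beta = 0). A minimal naive baseline, where one thread produces one element of C, might look like the sketch below; this is a generic illustration, not the repository's code, and the kernel name and row-major layout are assumptions:

__global__ void sgemm_naive(int M, int N, int K,
                            const float* A, const float* B, float* C)
{
    // One thread computes one element of the M x N output matrix C.
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N)
    {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];  // row-major layout
        C[row * N + col] = acc;
    }
}

Every thread reads 2K values from global memory to produce a single output, so this baseline is heavily memory-bound; the step-by-step optimizations exist to increase data reuse.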

GEMM is widely used in scientific computing fields such as aerospace and fluid mechanics, which were long the main application scenarios of HPC. Deep learning later took off and, driven by its demand for computing power, became another major HPC application. In the deep learning models of recent years, the most time-consuming operations, including convolutions, fully connected layers, and attention, can all be cast as GEMM operations. The importance of GEMM optimization can therefore hardly be overstated.

This article explains how to learn CUDA through matrix multiplication performance optimization.
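
A typical first optimization step is shared-memory tiling: each thread block stages BLOCK x BLOCK tiles of A and B in shared memory, so every value loaded from global memory is reused BLOCK times. The sketch below assumes square tiles and dimensions divisible by BLOCK, and illustrates the general technique rather than the repository's kernels:

#define BLOCK 16

// Tiled SGEMM: each block computes a BLOCK x BLOCK tile of C,
// staging tiles of A and B through shared memory.
// Assumes M, N, K are multiples of BLOCK for brevity.
__global__ void sgemm_tiled(int M, int N, int K,
                            const float* A, const float* B, float* C)
{
    __shared__ float As[BLOCK][BLOCK];
    __shared__ float Bs[BLOCK][BLOCK];

    int row = blockIdx.y * BLOCK + threadIdx.y;
    int col = blockIdx.x * BLOCK + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < K / BLOCK; ++t)
    {
        // Each thread loads one element of the A tile and one of the B tile.
        As[threadIdx.y][threadIdx.x] = A[row * K + (t * BLOCK + threadIdx.x)];
        Bs[threadIdx.y][threadIdx.x] = B[(t * BLOCK + threadIdx.y) * N + col];
        __syncthreads();

        for (int k = 0; k < BLOCK; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

Later steps in this line of optimization typically add register blocking (each thread computing a small sub-tile of C), vectorized float4 loads, and double buffering to overlap loads with computation, which is how hand-written kernels approach cuBLAS performance.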

Environment

The following environments have been tested:

  • Ubuntu 16.04
  • CUDA 11.1
  • cuDNN 8.6.0
  • CMake 3.24.0

Build And Run

git clone git@github.com:yhwang-hub/Matrix_Multiplication_Performance_Optimization.git
cd Matrix_Multiplication_Performance_Optimization
mkdir build && cd build
cmake ..
make
./mat_multi

Result

The throughput of the hand-optimized kernels is compared against the CUDA matrix multiplication library cuBLAS in the figure below.

[Figure: throughput comparison of the hand-optimized kernels and cuBLAS. The vertical axis is Gflop/s; the horizontal axis is the matrix dimension.]

Latency is measured as the average over 100 calls of the CUDA kernel and of cuBLAS, respectively. The hand-optimized kernel reaches up to 71% of cuBLAS performance.
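
As a sketch of that measurement scheme (the function name and grid configuration are hypothetical, not the repository's harness), the per-launch latency can be averaged over 100 launches with CUDA events, with the cuBLAS baseline timed by the same loop:

#include <cuda_runtime.h>
#include <cublas_v2.h>

// Average latency of one launch, in milliseconds, over 100 launches.
// d_A, d_B, d_C are device buffers; sgemm_tiled is the kernel sketched above.
float time_kernel_ms(int M, int N, int K,
                     const float* d_A, const float* d_B, float* d_C)
{
    dim3 block(16, 16);
    dim3 grid(N / 16, M / 16);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < 100; ++i)
        sgemm_tiled<<<grid, block>>>(M, N, K, d_A, d_B, d_C);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float total_ms = 0.0f;
    cudaEventElapsedTime(&total_ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return total_ms / 100.0f;  // average over the 100 launches
}

// The cuBLAS baseline is timed the same way. cuBLAS assumes column-major
// storage, so row-major C = A * B is obtained by computing C^T = B^T * A^T:
//   cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, M, K,
//               &alpha, d_B, N, d_A, K, &beta, d_C, N);

Throughput in Gflop/s, the quantity plotted on the vertical axis of the figure, then follows as 2 * M * N * K / (average latency in seconds) / 1e9.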
