Message boards : Graphics cards (GPUs) : Using Tensor Cores
| Author | Message |
|---|---|
| Chilean (Joined: 8 Oct 12, Posts: 98, Credit: 385,652,461, RAC: 0) | |
| (Joined: 31 Mar 17, Posts: 1, Credit: 3,732,425, RAC: 0) | Turing is in many ways very interesting. It would be a huge performance leap if we could use the Tensor cores for something useful, but there is more. First, the Turing SM adds a new independent integer datapath that can execute instructions concurrently with the floating-point datapath; in previous generations, integer instructions would have blocked floating-point instructions from issuing, so simultaneous execution of integer and floating-point operations could improve performance considerably. Second, FP16 now runs at double the FP32 rate everywhere, not only on the Tensor cores but also on the regular CUDA cores. The GeForce RTX 2080 Ti, for example, delivers 14.2 TFLOPS of peak single-precision (FP32) performance. |
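
As a concrete illustration of what "using the Tensor cores" looks like from CUDA C++, here is a minimal sketch built on NVIDIA's WMMA API (`mma.h`). It is not taken from the thread and is not how GPUGRID's application is written; the kernel name, the 16x16x16 tile size, and the launch configuration are illustrative assumptions. It computes a single 16x16 matrix product with FP16 inputs and an FP32 accumulator, which is the operation the Tensor cores accelerate, and it requires a GPU with compute capability 7.0 or higher (Volta/Turing).

```cuda
// Minimal sketch of driving Tensor Cores from CUDA C++ via the WMMA API.
// Illustrative only: kernel name, tile choice and launch config are assumptions.
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes one 16x16 tile of D = A * B + C,
// with A and B in FP16 and the accumulator in FP32 (the Tensor Core mode).
__global__ void wmma_gemm_16x16(const half *A, const half *B, float *C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                 // C = 0
    wmma::load_matrix_sync(a_frag, A, 16);               // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // issued to Tensor Cores
    wmma::store_matrix_sync(C, acc_frag, 16, wmma::mem_row_major);
}

int main()
{
    half  *A, *B;
    float *C;
    cudaMallocManaged(&A, 16 * 16 * sizeof(half));
    cudaMallocManaged(&B, 16 * 16 * sizeof(half));
    cudaMallocManaged(&C, 16 * 16 * sizeof(float));

    for (int i = 0; i < 16 * 16; ++i) {
        A[i] = __float2half(1.0f);
        B[i] = __float2half(1.0f);
    }

    // A single warp (32 threads) is enough for one 16x16x16 WMMA operation.
    wmma_gemm_16x16<<<1, 32>>>(A, B, C);
    cudaDeviceSynchronize();

    // Each output element is a dot product of 16 ones, so it should be 16.0.
    printf("C[0] = %f\n", C[0]);

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```

Compile with, for example, `nvcc -arch=sm_75 wmma_example.cu` for a Turing card. In practice an application would normally reach the Tensor cores through cuBLAS or cuDNN rather than hand-written WMMA kernels.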