Message boards : Graphics cards (GPUs) : Tesla K40
Joined: 6 Feb 10 · Posts: 38 · Credit: 274,204,838 · RAC: 0

How do you want me to send the debug file? Thanks for your time.
**Mumak** · Joined: 7 Dec 12 · Posts: 92 · Credit: 225,897,225 · RAC: 0

I have sent you a PM about that.
Joined: 6 Feb 10 · Posts: 38 · Credit: 274,204,838 · RAC: 0

Done.
**Mumak** · Joined: 7 Dec 12 · Posts: 92 · Credit: 225,897,225 · RAC: 0

After checking the detailed data, it's indeed an NVIDIA driver problem: NVAPI doesn't return any information about Teslas at all. Moreover, due to a bug in addressing, HWiNFO thinks the 2nd adapter is the Tesla, but that information actually belongs to the 2nd TITAN. NVIDIA needs to fix this. Despite that, I think HWiNFO should display at least the temperature for all GPUs, even those not currently properly supported by NVAPI.
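For context, a minimal sketch (not from the thread) of the kind of GPU enumeration a tool like HWiNFO does through NVAPI. The calls used here (NvAPI_Initialize, NvAPI_EnumPhysicalGPUs, NvAPI_GPU_GetFullName) are from the public NVAPI headers; error handling is trimmed, and a Tesla the driver does not expose through NVAPI simply never appears in the output:

```c
/* Sketch: enumerate physical GPUs through NVAPI and print their names.
 * Assumes the NVAPI SDK (nvapi.h, nvapi.lib) is available at build time. */
#include <stdio.h>
#include <nvapi.h>

int main(void)
{
    NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
    NvU32 count = 0;
    NvAPI_ShortString name;

    if (NvAPI_Initialize() != NVAPI_OK) {
        fprintf(stderr, "NvAPI_Initialize failed\n");
        return 1;
    }
    if (NvAPI_EnumPhysicalGPUs(gpus, &count) != NVAPI_OK) {
        fprintf(stderr, "NvAPI_EnumPhysicalGPUs failed\n");
        return 1;
    }
    for (NvU32 i = 0; i < count; i++) {
        /* A Tesla hidden by the driver will be missing from this list. */
        if (NvAPI_GPU_GetFullName(gpus[i], name) == NVAPI_OK)
            printf("NVAPI GPU %u: %s\n", i, name);
    }
    NvAPI_Unload();
    return 0;
}
```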
Joined: 6 Feb 10 · Posts: 38 · Credit: 274,204,838 · RAC: 0

This is what NVIDIA says: "Checking with Driver team. Tesla customers use the nvsmi or nvml for getting the gpu statistics. Nvapi is more for geforce and quadro. I have asked engg if the nvapi extends to Tesla." Thanks.
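To illustrate the NVML route NVIDIA mentions, here is a minimal C sketch that reads each GPU's temperature. It sticks to documented NVML calls (nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetTemperature), with error handling trimmed for brevity:

```c
/* Sketch: read the temperature of every NVIDIA GPU through NVML,
 * the interface NVIDIA recommends for Tesla boards. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int count = 0, temp = 0, i;
    nvmlDevice_t dev;
    char name[NVML_DEVICE_NAME_BUFFER_SIZE];

    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed\n");
        return 1;
    }
    nvmlDeviceGetCount(&count);
    for (i = 0; i < count; i++) {
        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;
        nvmlDeviceGetName(dev, name, sizeof(name));
        if (nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp) == NVML_SUCCESS)
            printf("GPU %u (%s): %u C\n", i, name, temp);
    }
    nvmlShutdown();
    return 0;
}
```

nvidia-smi ("nvsmi") is essentially a command-line front end to this same library, which is why it can report Tesla statistics that NVAPI-based tools miss.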
**skgiven** · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Yes, 12 GB of memory and FP64 performance are hard to replace for some tasks. CUDA 6 will introduce multi-GPU scaling (cublasXT), which should allow one app to use the GDDR memory of up to 8 cards. From NVIDIA's announcement: the new BLAS GPU library automatically scales performance across up to eight GPUs in a single node, delivering over nine teraflops of double-precision performance per node and supporting larger workloads than ever before (up to 512 GB); the re-designed FFT GPU library scales up to 2 GPUs in a single node, allowing larger transform sizes and higher throughput.
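As a rough illustration of the cublasXt host-pointer model described above, the following C sketch runs a double-precision GEMM across two GPUs. The device IDs and matrix size are placeholders, and error checking is omitted:

```c
/* Sketch: double-precision GEMM spread across two GPUs with cuBLAS-XT.
 * Matrices live in ordinary host memory; the library tiles the work
 * and streams blocks to the selected devices. */
#include <stdlib.h>
#include <cublasXt.h>

int main(void)
{
    const size_t n = 4096;            /* placeholder problem size */
    int devices[2] = { 0, 1 };        /* placeholder device IDs   */
    double alpha = 1.0, beta = 0.0;

    double *A = malloc(n * n * sizeof(double));
    double *B = malloc(n * n * sizeof(double));
    double *C = malloc(n * n * sizeof(double));
    /* ... fill A and B with real data here ... */

    cublasXtHandle_t handle;
    cublasXtCreate(&handle);
    cublasXtDeviceSelect(handle, 2, devices);   /* use up to 2 GPUs */

    /* C = alpha * A * B + beta * C, computed across the selected GPUs */
    cublasXtDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                  n, n, n, &alpha, A, n, B, n, &beta, C, n);

    cublasXtDestroy(handle);
    free(A); free(B); free(C);
    return 0;
}
```

Because cublasXtDgemm accepts host pointers, the combined matrices can exceed the memory of any single card; the library tiles them and schedules the blocks across the selected devices.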