Message boards : Number crunching : Elapsed Time vs CPU time
---

**Original poster** (joined 16 Nov 10 · 22 posts · 24,712,746 credit)
Why is there such a big difference between elapsed time and CPU time?

This is an example of a task on the Jupiter device, which runs a GTX 285 (drivers 260.99):

Elapsed time: 27,489.40 seconds
CPU time: 4,320.35 seconds

To me this looks like a very low efficiency of 15.7%. The best case I have is around 30%, but only once; there are also worse cases. Is it normal?
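The 15.7% here is simply the CPU time divided by the elapsed (wall-clock) time; a minimal Python sketch of that arithmetic, using the figures quoted above:

```python
# Efficiency as quoted above: CPU time divided by elapsed (wall-clock) time.
elapsed_time_s = 27_489.40   # seconds of wall-clock time for the task
cpu_time_s = 4_320.35        # seconds of CPU time charged to the task

print(f"efficiency = {cpu_time_s / elapsed_time_s:.1%}")  # -> 15.7%
```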
---

**Saenger** (joined 20 Jul 08 · 134 posts · 23,657,183 credit)
> Elapsed time: 27,489.40 seconds

Yes. The app mainly runs on the GPU; it only needs a few cycles on the CPU, and only those cycles are counted in the CPU time figure.

Greetings from Saenger

For questions about BOINC look in the BOINC-Wiki
---

**Original poster** (joined 16 Nov 10 · 22 posts · 24,712,746 credit)
> Elapsed time: 27,489.40 seconds

So this measure is the ratio of CPU time to elapsed (GPU) time. In that case I would say the CPU usage is still very high.

In a perfect world the whole WU is downloaded to the GPU (if there is enough local memory), runs on the board until completion with no CPU intervention, and when finished is uploaded back to the CPU and a new WU downloaded. This should require a few seconds of CPU time, certainly not more than one hour of it (3,600 seconds is one hour).

These values of CPU usage mean that the CPU is doing real work as the GPU crunches. Maybe data is exchanged and written back to the HDD as the crunching goes on, or something else.
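As a rough illustration of that "perfect world" model: if the CPU only handled the transfers at each end, its involvement would be a tiny fraction of the observed CPU time. The work-unit size and bus bandwidth below are hypothetical round numbers, not measured values:

```python
# Back-of-the-envelope for the ideal case where the CPU only handles
# the transfers at the start and end of a work unit.
wu_size_mb = 50          # assumed work-unit size (hypothetical)
pcie_mb_per_s = 3_000    # assumed effective PCIe bandwidth (hypothetical)

ideal_cpu_s = 2 * wu_size_mb / pcie_mb_per_s   # one download + one upload
observed_cpu_s = 4_320.35                      # CPU time reported above

print(f"ideal CPU involvement: ~{ideal_cpu_s:.2f} s")   # well under a second
print(f"observed CPU time:     {observed_cpu_s:,.2f} s")
```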
---

**Retvari Zoltan** (joined 20 Jan 09 · 2,380 posts · 16,897,957,044 credit)
> So this measure is the ratio of CPU time to elapsed (GPU) time. In that case I would say the CPU usage is still very high.

The GF110 is practically a 'primitive CPU' containing 16 cores with 32 threads in each core (16*32 = 512 CUDA cores in nVidia terminology). I think supporting 512 GPU cores with even a whole CPU core to achieve maximum performance is a rewarding sacrifice.

> In a perfect world the whole WU is downloaded to the GPU (if there is enough local memory), runs on the board until completion with no CPU intervention, and when finished is uploaded back to the CPU and a new WU downloaded. This should require a few seconds of CPU time, certainly not more than one hour of it (3,600 seconds is one hour).

In the real world the GPU is still a coprocessor (actually, a lot of them, as I mentioned above); that's why the GPU cannot do everything on its own, whether it is calculating a 2D projection of a 3D (game) scene or doing some 3D scientific calculation for GPUGRID. Loading the data and the code onto the GPU takes only a few seconds, just like unloading it, so it wouldn't be an hour.

> These values of CPU usage mean that the CPU is doing real work as the GPU crunches. Maybe data is exchanged and written back to the HDD as the crunching goes on, or something else.

That's correct. As far as I know, some double-precision calculation is needed for crunching these WUs, and this is done by the CPU (because double precision on the GTX cards is slowed down by nVidia).
---

**skgiven** (joined 23 Apr 09 · 3,968 posts · 1,995,359,260 credit)
CPU time is the time used on one CPU core/thread, so if you have an 8-thread CPU, the time spent using the entire CPU would need to be divided by 8. If that GPU is in an 8-core system, that works out at 9 min of whole-CPU time per 7.6 h of GPU time. There is no point looking at the GPU as a unit and the CPU as separate cores; it would be no better than saying each of the 240 GPU CUDA cores uses 2.25 sec of CPU time.
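A minimal sketch of that arithmetic, assuming the 8-thread CPU from the example and the GTX 285's 240 CUDA cores:

```python
cpu_time_s = 4_320.35       # CPU time reported for the task (one thread)
elapsed_time_s = 27_489.40  # wall-clock (GPU) time for the task
cpu_threads = 8             # assumed 8-thread CPU, as in the post
cuda_cores = 240            # GTX 285

whole_cpu_s = cpu_time_s / cpu_threads
print(f"{whole_cpu_s / 60:.0f} min of whole-CPU time "        # -> 9 min
      f"per {elapsed_time_s / 3600:.1f} h of GPU time")       # -> 7.6 h
print(f"{whole_cpu_s / cuda_cores:.2f} s of CPU time per CUDA core")  # -> 2.25
```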
---

**Original poster** (joined 16 Nov 10 · 22 posts · 24,712,746 credit)
Thanks for your replies, all clear now. I hope the GTX 580, which supports double precision, will do better, but I agree that the CPU contribution on a 12-thread CPU remains minimal.