Message boards : Graphics cards (GPUs) : Anyone tried a GTX670 on GPUgrid?
Joined: 26 Dec 10 · Posts: 115 · Credit: 416,576,946 · RAC: 0

Retvari: Thank you for the power measurements. This is very helpful. We can use this info to justify the purchase of new cards based on power savings. I see a 670 in my future! Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

That's ~30% better performance per Watt. Different task types may show different improvements (25%, 30%, 35%). The GTX670, 680 and 690 cards are rated at 14.47, 15.85 and 18.74 GFLOPS/W respectively. That doesn't necessarily reflect performance here, but a 680 might be more in line with a 480, and might take it over 40%. That said, the 580 would still be more competitive. The PCIE3 vs PCIE2 debate is still open.

Can you reduce that FOC?

FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

> ...but a 680 might be more in line with a 480, and might take it over 40%. That said, the 580 would still be more competitive.

I'll check that tomorrow. I've just finished swapping one of my GTX 590s for a GTX 680 (GV-N680OC-2GD).

> The PCIE3 vs PCIE2 debate is still open.

I'm not planning to upgrade my motherboards in the near future, but maybe I can put one of my cards in a PC equipped with a PCIe3-capable motherboard.

> Can you reduce that FOC?

Sure. To what frequency and voltage? (for the 670 and for the 680) I'm also planning to measure the power consumption of the GTX 480 at stock speed and voltage.
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

The GFLOPS look very odd in the BOINC manager's log:

NVIDIA GPU 0: GeForce GTX 680 (driver version 30479, CUDA version 5000, compute capability 3.0, 2048MB, 582 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 590 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1244 GFLOPS peak)
NVIDIA GPU 2: GeForce GTX 590 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1244 GFLOPS peak)
NVIDIA GPU 0: GeForce GTX 480 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1538 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 670 (driver version 30479, CUDA version 5000, compute capability 3.0, 2048MB, 439 GFLOPS peak)

It looks like I'm downgrading my cards....
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Ref GFLOPS:
GTX 680 is 3090.4
GTX 690 is 2 × 2810.88 = 5621.76
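For reference, the peak GFLOPS and GFLOPS/W figures quoted in this thread can be reproduced from Nvidia's reference specs (CUDA core counts, base clocks and board TDPs below are the reference values; factory-overclocked cards like the GV-N680OC will differ). A minimal sketch:

```python
# Peak single-precision throughput for a Kepler card:
# CUDA cores x 2 FLOP/cycle (fused multiply-add) x clock.
def peak_gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000.0

cards = {
    # name: (CUDA cores, base clock MHz, board TDP W) -- reference specs
    "GTX 670": (1344, 915, 170),
    "GTX 680": (1536, 1006, 195),
    "GTX 690": (2 * 1536, 915, 300),   # dual-GPU card
}

for name, (cores, mhz, tdp) in cards.items():
    g = peak_gflops(cores, mhz)
    print(f"{name}: {g:.2f} GFLOPS peak, {g / tdp:.2f} GFLOPS/W")
```

This reproduces the 14.47 / 15.85 / 18.74 GFLOPS/W ratings and the 3090.4 and 2×2810.88 peak figures mentioned above; it says nothing about sustained performance on GPUGrid tasks.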
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

I've tried to refine my measurements with my partly upgraded configuration. It's very hard to calculate the power consumption of the different parts by measuring the overall power consumption at different workloads, because the parts heat each other, which causes extra power consumption in the previously measured parts. My previous measurements didn't take this effect into consideration, so the extra 174 Watts doesn't come only from the task running on the GTX 670.

No GPU tasks, no CPU tasks, no power management (idle): 180W

No CPU tasks, no GPU task on the GTX 670:
GTX 480 1025mV, 701MHz, 99%, 61°C: 344W (164W)
GTX 480 1050mV, 701MHz, 99%, 63°C: 355W (175W) (+11W)
GTX 480 1075mV, 701MHz, 99%, 64°C: 365W (185W) (+21W)
GTX 480 1075mV, 726MHz, 99%, 64°C: 370W (190W) (+26W)
GTX 480 1075mV, 749MHz, 99%, 65°C: 374W (194W) (+30W)
GTX 480 1075mV, 776MHz, 99%, 66°C: 378W (198W) (+34W)
GTX 480 1075mV, 797MHz, 99%, 66°C: 382W (202W) (+38W)
GTX 480 1075mV, 797MHz, 99%, 71°C: 395W (215W) (+51W)
GTX 670 1162mV, 981MHz, 97%, 66°C: 555W (160W); + 4 CPU cores: 620W

GTX 480 1075mV, 797MHz, 99%, 67°C: 383W (203W)
GTX 670 1137mV, 1084MHz, 97%, 60°C: 552W (160W)
GTX 480 1075mV, 797MHz, 99%, 71°C: 397W (217W) (+53W)
GTX 670 1137mV, 1084MHz, 97%, 65°C: 559W (162W)

GTX 480 crunching alone (GTX 670 idle: 1137mV, 1084MHz, 0%, 44°C):
GTX 480 1075mV, 797MHz, 99%, 71°C: 397W (217W)

GTX 670 crunching alone (GTX 480 idle: 1075mV, 797MHz, 0%, 44°C):
GTX 670 1137mV, 1084MHz, 97%, 65°C: 341W (161W)

All in all, the GTX 670 is better than I'd calculated from my first measurements: the GTX 670 @ 1084MHz consumes (162W) 75% of what the GTX 480 @ 800MHz does (217W). These were my last measurements with this host. I'm going to swap the GTX 480 for a GTX 670 on this host right now, and I will measure the power consumption again after that.
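The per-card wattages in parentheses above come from baseline subtraction on wall-meter readings. A sketch of that method, using the figures from the post (and carrying its own caveat: mutual heating and PSU efficiency shift the baseline, so the attributions are approximate):

```python
# Rough per-card power attribution by subtracting wall-meter readings.
# Numbers are the measurements quoted above; the method deliberately
# ignores mutual-heating and PSU-efficiency effects.
idle_w        = 180   # no GPU tasks, no CPU tasks, no power management
wall_480_only = 397   # GTX 480 @797MHz crunching, GTX 670 idle
wall_670_only = 341   # GTX 670 @1084MHz crunching, GTX 480 idle

gtx480_w = wall_480_only - idle_w   # 217 W attributed to the GTX 480
gtx670_w = wall_670_only - idle_w   # 161 W attributed to the GTX 670

# ~0.74, i.e. roughly the 75% quoted above
print(gtx480_w, gtx670_w, f"{gtx670_w / gtx480_w:.0%}")
```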
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Nice set of data. By reducing the second card's heat radiation, both cards should benefit somewhat, and you might even see the pair reach +40% performance per Watt over two GTX480's. Out of curiosity, are your CPU heatsink fins/blades vertical or horizontal?
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

> By reducing the second card's heat radiation, both cards should benefit somewhat, and you might even see the pair reach +40% performance per Watt over two GTX480's.

I agree. But the gain is even bigger than I expected: my host now consumes 520W under full load (2 GPU tasks on the two GTX 670s, and 4 CPU tasks). It was 625W before. So my host now consumes 105W less than before my first measurement, and probably 210W less than with two GTX 480s @800MHz. I expected a 217W-162W (=55W) + ~10W gain. I have to double-check it tomorrow (runtimes etc.).

> Out of curiosity, are your CPU heatsink fins/blades vertical or horizontal?

My CPU heatsink is a Noctua NH-D14; its fins are vertical, and the axes of the fans are horizontal. My motherboard is vertically mounted, and the GPUs are under the CPU. The cool air comes in from the side of the case, and the hot air from the CPU heatsink is exhausted through the back of the case.
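The gap between the expected and the measured saving can be put in numbers (all figures from the posts above; the ~10 W term is Zoltan's own allowance for reduced heating):

```python
# Expected saving: per-card delta from the earlier measurements,
# plus the ~10 W allowance for reduced mutual heating.
expected_saving = (217 - 162) + 10   # 65 W
# Measured saving: wall power before vs. after swapping the GTX 480.
measured_saving = 625 - 520          # 105 W
# The unexplained remainder is what the following posts attribute to
# mutual heating and PSU efficiency.
print(expected_saving, measured_saving, measured_saving - expected_saving)
```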
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

> It's very hard to calculate the power consumption of the different parts by measuring the overall power consumption at different workloads, because the parts heat each other, which causes extra power consumption in the previously measured parts. My previous measurements didn't take this effect into consideration, so the extra 174 Watts doesn't come only from the task running on the GTX 670.

When I read your first post I thought the same, but then decided "never mind, he already put so much work into these measurements..." ;)

And there's another factor: PSU efficiency is not constant over a large load range. Past 50% load, efficiency will probably drop a bit with increasing load.

MrS
Scanning for our furry friends since Jan 2002
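The PSU-efficiency point can be illustrated with a toy calculation: a wall meter reads DC load divided by efficiency, so a drop in efficiency at high load inflates the apparent consumption. The efficiency values below are made up for illustration, not taken from any real PSU's curve:

```python
# Wall-socket power for a given DC load at a given PSU efficiency.
def wall_power(dc_load_w, efficiency):
    return dc_load_w / efficiency

# The same 500 W DC load at two hypothetical efficiency points:
print(round(wall_power(500, 0.90)))  # drawn at 90% efficiency
print(round(wall_power(500, 0.87)))  # at 87%: ~19 W of "extra" power
                                     # is just PSU losses, not the GPU
```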
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Some of the newer PSUs hold steady from ~5% through to 85 or 90% load, but not all. So an 850W PSU (for example) could have fairly linear efficiency from ~40W right up to ~723W. I don't know the PSU in use though, so it might well have struggled for efficiency with the two GTX480's. If it did, it's a big consideration, but that was the setup, and the new setup is still 200W better off, without replacing the PSU. Looking at the PSU's power/efficiency curve would tell you its efficiency at different power draws. Ambient or motherboard/hard drive temps might, on the other hand, indicate cooling issues (there might have been higher case temps caused by more power usage/lack of heat displacement). Or both.

I notice 97% GPU utilization for the GTX670 vs 99% with the GTX480. I would say this is down to dual-channel RAM and/or PCIE2, but 2% isn't much to worry about, if that's all it really is. I still expect that with quad-channel RAM you'd see 99%, and that moving from PCIE2 to PCIE3 would make a difference, but one only visible in the result run times. So with GTX680's the 2% might rise to 3%, and possibly 4 or 5% with GTX690's. With triple or quad channel this would probably disappear, but the runtimes with PCIE3 should better those on PCIE2, even if both show 99% GPU utilization.
Joined: 18 Jul 12 · Posts: 2 · Credit: 15,245,690 · RAC: 0

I would like to ask something, and I'm sorry if it has been answered in another thread. I am running both WCG and GPUGrid on my PC (3770K/M5G/GTX670). Is it normal that some projects have more GPU usage than others? Also, when I run WCG on all 4c/8t I see 73-74% GPU usage, but when I stop WCG I see 81-82% GPU usage. Why does this happen? Can I do something to run both WCG and at the same time utilize my GPU at 100%?
Joined: 24 Jan 09 · Posts: 42 · Credit: 16,676,387 · RAC: 0

> Is it normal that some projects have more GPU usage than others? ... Can I do something to run both WCG and at the same time utilize my GPU at 100%?

Hey. All GPUGrid work units need a CPU core to feed them. You have a quad-core processor with Hyper-Threading, so it shows up as eight cores in the operating system. Leave one core free so the processor can support the graphics card in its calculations: in BOINC Manager, go to the CPU usage preferences and set it to use 87.5% of the processors, so one core is always free for the graphics card. Even so, you will very likely not get 100% GPU utilization.
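The 87.5% figure is just (cores − 1) / cores for an 8-thread CPU. A quick sketch of the arithmetic behind BOINC's "use at most N% of the processors" preference:

```python
# Percentage to enter in BOINC's CPU-usage preference so that exactly
# one logical CPU stays free to feed the GPU.
def cpu_percent_freeing_one_core(logical_cpus):
    return (logical_cpus - 1) / logical_cpus * 100

print(cpu_percent_freeing_one_core(8))   # 87.5 (the 4c/8t case above)
print(cpu_percent_freeing_one_core(12))  # ~91.7 for a 6c/12t CPU
```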
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

More power consumption and performance measurements are in progress :) I've got a PC with a Core i5-3570K on an Intel DH77EB motherboard for a couple of days. I'm not sure if the motherboard supports PCIe3.0 though.
Joined: 8 Mar 12 · Posts: 411 · Credit: 2,083,882,218 · RAC: 0

My 3rd 680 will be in my hands in about 2 weeks. Since I already have 2 working at PCIe 3 on my X79, when I install the 3rd it will run at PCIe 2 x8 (PCIe 1) until I apply the hack, which will make it PCIe 3 x8 (PCIe 2). I will test both to see how the times compare. So in short, in 2 weeks I will have results for a 680 across all PCIe bandwidth speeds.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

About the motherboard, Intel says:
"One PCI Express 3.0 x 16 discrete graphics card connector"
"Support for PCI Express* 3.0 x16 add-in graphics card"
"PCI Express* 3.0 support requires select 3rd generation Intel® Core™ processors"

About the i5-3570K, Intel says:
"3rd Generation Intel® Core™ i5 Processor"
"PCI Express Revision 3.0"

So I say maybe, just maybe :)
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

The power supply in my dual GTX-670 host is an Enermax MODU87+ 800W.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

That PSU has maximum efficiency at around 400W, so it's going to be about as efficient at 500W as it is at 300W. Going by the graph, the loss of efficiency with two GTX480's vs two GTX670's would be no more than 2%, i.e. <10W.
Joined: 18 Jul 12 · Posts: 2 · Credit: 15,245,690 · RAC: 0

> Hey. All GPUGrid work units need a CPU core to feed them. You have a quad-core processor with Hyper-Threading, so it shows up as eight cores in the operating system. Leave one core free so the processor can support the graphics card in its calculations.

At the moment I'm running WCG on 4c/8t at 100%, and GPUGrid runs a task at 92% GPU load, which goes to 94-95% when I close WCG. So I guess GPU usage depends mainly on the project.

Edit: changing the CPU usage from 100% to 87.5% didn't change the GPU load at all.
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

My experimental PCIe3.0 x16 host is up and running. I've checked the speed of the PCIe bus with GPU-Z 0.6.3; according to this tool the GPU runs at PCIe3.0 x16. It has the Gigabyte GV-N680OC-2GD in it (moved from my main cruncher PC). We'll see how it performs against my old (PCIe2.0 x16) host. There are no CPU tasks running on this host, because it has a 400W power supply. It was quite an adventure to install Windows XP x64 on this configuration.

I've checked the GPU with MSI Afterburner 2.2.3:
Usage: 97%
Voltage: 1.175V
Clock: 1137MHz

I had the same numbers on my old host.
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

> The PCIE3 vs PCIE2 debate is still open.

My experiment with my PCIe 3.0 host is over. Here are the results. It has processed 5 kinds of workunits:

| WU type | # of WUs | Shortest (h:mm:ss) | Longest (h:mm:ss) | Average (h:mm:ss) |
|---|---|---|---|---|
| NATHAN_RPS1120528 | 16 | 13596.73 (3:46:36) | 13729.89 (3:48:49) | 13633.03 (3:47:13) |
| PAOLA_HGAbis | 8 | 17220.55 (4:47:00) | 17661.73 (4:54:21) | 17505.56 (4:51:45) |
| rundig1_run9-NOELIA_smd | 1 | 21593.69 (5:59:53) | - | - |
| run5_replica43-NOELIA_sh2fragment_fixed | 1 | 25169.00 (6:59:29) | - | - |
| run2_replica6-NOELIA_sh2fragment_fixed | 1 | 35357.31 (9:49:17) | - | - |

BTW, its power consumption was only 247 Watts with a GTX 680 and all 4 CPU cores crunching (3x Rosetta + 1x GPUGrid).

For comparison, here are power consumption measurements from one of my old hosts:

Core i7-970 @4.1GHz (24×171MHz, 1.44V, 32nm, 6 HT cores)
ASRock X58 Deluxe motherboard
3×2GB OCZ 1600MHz DDR3 RAM
2 HDDs
GPU1: Gigabyte GTX 480 @800MHz, 1.088V (BIOS:1A)
GPU2: Asus ..... GTX 480 @800MHz, 1.088V (BIOS:21)

Idle (no CPU tasks, no GPU tasks, no power management): 233W
CPU cores 32°C to 40°C; GPU1 idle 00%: 36°C; GPU2 idle 00%: 39°C

1 GPU task running:
GPU1 idle 00%, 36°C - GPU2 in use 99%, 51°C: 430W
GPU1 idle 00%, 36°C - GPU2 in use 99%, 60°C: 434W
GPU1 idle 00%, 37°C - GPU2 in use 99%, 65°C: 438W
GPU1 idle 00%, 37°C - GPU2 in use 99%, 69°C: 442W

2 GPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 647W
GPU1 in use 99%, 53°C - GPU2 in use 99%, 71°C: 656W
GPU1 in use 99%, 60°C - GPU2 in use 99%, 72°C: 665W
GPU1 in use 99%, 62°C - GPU2 in use 99%, 74°C: 670W
GPU1 in use 99%, 63°C - GPU2 in use 99%, 76°C: 675W
GPU1 in use 99%, 66°C - GPU2 in use 99%, 79°C: 680W

2 GPU tasks and 6 CPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 756W
CPU cores 50°C to 66°C

I would like to compare the performance of PCIe2.0 x16 and x8, as my main cruncher PC has 3 different cards right now (GTX 670 OC, GTX 680 OC, and a GTX 690), but the lack of info in the stderr output file about which GPU was used for crunching makes it very, very hard.
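The h:mm:ss values paired with the seconds in the runtime table are a straightforward conversion; a sketch of it, checked against two of the entries above:

```python
# Convert a runtime in seconds to the h:mm:ss form used in the table
# (fractional seconds truncated, as in the posted figures).
def hms(seconds):
    s = int(seconds)
    return f"{s // 3600}:{s % 3600 // 60:02d}:{s % 60:02d}"

print(hms(13596.73))   # shortest NATHAN_RPS1120528 result
print(hms(35357.31))   # longest NOELIA_sh2fragment_fixed result
```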