Message boards : Graphics cards (GPUs) : GPUGRID and Fermi
Joined: 6 Feb 10 · Posts: 1 · Credit: 5,434,095 · RAC: 0

It's not specific to GPUGRID; other applications also suffer from the Windows WDDM drivers. An explanation can be found here: http://forums.nvidia.com/lofiversion/index.php?t150236.html According to that thread, NVIDIA is working on a solution with Microsoft.
bloodrain · Joined: 11 Dec 08 · Posts: 32 · Credit: 748,159 · RAC: 0

Hehe. One thing I've learned about just-released video cards: bad memory tends to be the first issue with them.
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,638,199

> Agreed... just want to clarify a detail, if it wasn't clear before: 15% power for 15% performance is only due to the frequency increase. Touching the voltage adds to this (in fact, it multiplies). So in roundup's example he'd get 1.15 * 1.19 = 1.37, i.e. a 37% increase in power consumption if he went for the higher clocks at 1.05 V.

One more interesting observation regarding the voltage: I now have a second GTX 470 (same type, same manufacturer) running in a different computer (i7-860 @ 3.5 GHz, let's call it PC2). The 470 automatically selects a setting of 1.025 V - even without overclocking. In the first computer (i7-920 @ stock, PC1 for now) the setting was 0.962 V, with stable overclocking possible at 0.975 V.

I swapped the graphics cards between the computers to check whether the different voltage setting is caused by the card or by other parts of the computers (motherboard, processor, power supply, etc.). The result is that both computers kept the same voltage setting as before: PC2 again operates the GTX 470 at 1.025 V, PC1 again at 0.962 V.
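As an aside on the arithmetic above: dynamic power scales roughly with frequency times voltage squared, which is presumably where the 1.19 factor comes from ((1.05/0.962)² ≈ 1.19). A minimal Python sketch of that model, using the clocks and voltages quoted in this thread - a rough estimate, not a measurement:

```python
# Rough dynamic-power scaling model: P ~ f * V^2 (a simplification that
# ignores static/leakage power).
def power_factor(f_old, f_new, v_old, v_new):
    """Relative power draw after changing core clock and core voltage."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Numbers from this thread: +15% clock, core voltage raised from 0.962 V to 1.05 V.
factor = power_factor(1.00, 1.15, 0.962, 1.05)
print(f"Estimated power increase: {factor:.2f}x")  # ~1.37, i.e. roughly +37%
```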
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Perhaps you have a PCIe voltage overclock set on the motherboard. This can happen if it is linked to the bus. Alternatively, you have a bad motherboard or the reading is incorrect. Check that you did not raise it in the NVIDIA Control Panel or some other software (if you had a different card installed previously).
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0

@roundup - The Fermis have their voltages individually tweaked for stability before being sent out, so you will see differences between cards. Some chips are simply better than others and need less voltage. I believe this is the situation you are observing, since you say the voltage stays the same per card no matter which board you install it in.

Thanks - Steve
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

@Snow: actually he said the opposite is true - the PCs apparently run their cards at different voltages, no matter which card is put in.

One thing I would check is whether both systems use the same software versions, as far as possible. The important one is the tool used to read out and set the voltage. It could be that, after some time with Fermis, the developer noticed a mistake in reading out the voltages (this can happen disturbingly easily) and corrected it in a later release. A forgotten setting somewhere in some tuning software could obviously also be possible.

MrS
Scanning for our furry friends since Jan 2002
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,638,199

> @Snow: actually he said the opposite is true - the PCs apparently run their cards at different voltages, no matter which card is put in.

Exactly. When I put the second card in the second computer, no tuning software was active. The same applied when I swapped the cards between the computers. At the time of the swap there was no GPU-relevant tuning software in place - just the 197.75 driver. It's only the CPU in the second computer that has been tuned from 2.8 GHz to 3.5 GHz (with a proper cooling solution, of course). This computer raises the voltage of both GTX 470s to 1.025 V.
Retvari Zoltan · Joined: 20 Jan 09 · Posts: 2380 · Credit: 16,897,957,044 · RAC: 0

This brand new 257.21 WHQL driver isn't beta and supports CUDA 3.1. Does it mean, from (y)our point of view, that CUDA 3.1 is out?
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Yes, CUDA 3.1 has been released to the general public by NVIDIA. Thanks for the post. I started a new thread for CUDA 3.1.

Roundup, did you check the BIOS (to see if the OC was linking PCIe to your bus/QPI)? How are you OC'ing the i7 - BIOS or software?
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

I'd try switching to stock settings for the motherboard and check which voltage the card gets. Afterwards OC again, of course ;)

And can you tell a difference in GPU temperature between the two machines, preferably using the same card? Maybe with the case open, to reduce the effect of case airflow. I know these numbers will fluctuate quite a bit, but if you can see a clear difference then the voltage difference is certainly real.

And maybe do a sanity check first: monitor the temperature over a few minutes while running GPUGRID to make sure it's stable. Then change the voltage to the value corresponding to the other PC and check whether you can spot a temperature and/or fan speed change at all.

MrS
Scanning for our furry friends since Jan 2002
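For anyone who wants to automate that sanity check, something like the sketch below could log temperature and fan speed at regular intervals. It assumes a driver whose nvidia-smi supports the --query-gpu flags (newer releases do; older ones may not), so treat it as a starting point rather than a drop-in tool.

```python
import subprocess
import time

# Poll GPU temperature and fan speed via nvidia-smi. Assumes a driver whose
# nvidia-smi supports the --query-gpu flags; adjust the command if yours does not.
QUERY = [
    "nvidia-smi",
    "--query-gpu=index,temperature.gpu,fan.speed",
    "--format=csv,noheader",
]

def log_gpu_temps(interval_s=10, samples=30):
    """Print one line of readings per interval so trends are easy to spot."""
    for _ in range(samples):
        reading = subprocess.check_output(QUERY, text=True).strip()
        print(time.strftime("%H:%M:%S"), reading.replace("\n", " | "))
        time.sleep(interval_s)

if __name__ == "__main__":
    log_gpu_temps()
```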
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,638,199

I am OC'ing the i7-860 through the BIOS: the base clock is tuned up, with the QPI and RAM multipliers tuned down a bit. The i7-920 operates at stock speed.

Even with stock settings for the motherboards and graphics cards, one GTX 470 wants 1.025 V. For the checks of GPU temperature and fan speed both cases were open - in fact they have been open since I installed the cards :). I got the following results (GPUs at stock speeds / stock voltages):

GPU 1: 0.962 V, GPU load 65%, 79°C, fan 52%
GPU 2: 1.025 V, GPU load 74%, 85°C, fan 57%

Then I set the voltage of GPU 2 down to 0.975 V. After 10 minutes I got the following values:

GPU 2: 0.975 V, GPU load 74%, 82°C, fan 54%

Now I OC'ed both GPUs to 702/1708/1404 (GPU/Mem/Shader). This delivered the following results:

GPU 1: 0.975 V, GPU load 63%, 82°C, fan 54% (I already know that I have to increase the voltage of GPU 1 to 0.975 V for stable operation)

OC at 0.975 V led to a black screen for GPU 2 after 6 minutes. However, rebooting showed that the ACEMD2 application did NOT crash.

Swapping the cards between the two computers does not give significantly different results. GPU 2 led to a black screen in the other PC immediately when OC'ed and set to 0.975 V.

It seems to me that NVIDIA sets different factory voltages depending on the quality of certain parts of the cards.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

> I am OC'ing the i7-860 through the BIOS: the base clock is tuned up, with the QPI and RAM multipliers tuned down a bit. The i7-920 operates at stock speed.

In the BIOS of your i7-860 system, is the PCIe clock linked to the bus, and does the bus automatically increase the voltage on that board?

> Even with stock settings for the motherboards and graphics cards, one GTX 470 wants 1.025 V.

So both cards did not automatically go to 1.025 V? Only the second GPU, on the i7-860? Architecturally there is a big difference between the two motherboards in question, with the controller being on the CPU for the i7-860. However, I think what may have happened is that the first card installed set the voltage. So, earlier, when you moved the cards, it kept the voltage it had before, thinking it was the exact same card, making it appear that the GPU voltage was motherboard dependent.

> Then I set the voltage of GPU 2 down to 0.975 V. After 10 minutes I got the following values:

That looks like a good stable voltage for that system.

> Now I OC'ed both GPUs to 702/1708/1404 (GPU/Mem/Shader). This delivered the following results:

So the GPU failed, but the app is essentially run via the CPU, which was fine (being native), and presumably recovered to the last checkpoint. Anyway, that voltage was not enough to support GPU 2 at that frequency.

> Swapping the cards between the two computers does not give significantly different results. GPU 2 led to a black screen in the other PC immediately when OC'ed and set to 0.975 V.

To me this suggests that on the i7-920 the card is less stable, so as I suggested above, perhaps the 860 allows for a greater voltage range (automatically regulating the voltage to the draw, up to a point). It could, for example, supply 0.980 rather than exactly 0.975, or the 920 could be supplying 0.970! Only a small difference, but perhaps enough to let the 860 run for 10 minutes while the 920 failed immediately.

> It seems to me that NVIDIA sets different factory voltages depending on the quality of certain parts of the cards.

For sure, and as Steve said, "The Fermis have their voltages individually tweaked before being sent out for stability so you will see differences between cards". This naturally applies even within the same card range from the same manufacturer, as it depends on the individual quality (leakage) of the core.
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,638,199

> In the BIOS of your i7-860 system, is the PCIe clock linked to the bus, and does the bus automatically increase the voltage on that board?

No, the PCIe clock in the 860 system is running at stock speed. The voltage settings for the board (MSI P55-GD65) are set to auto - so yes, it increases voltages automatically.

> So both cards did not automatically go to 1.025 V? Only the second GPU, on the i7-860?

No, only GPU 2 automatically runs at 1.025 V - in both PCs. The other card automatically went to 0.962 V - also in both PCs. I have to increase the voltage of that card by one step to allow stable overclocking.

> For sure, and as Steve said, "The Fermis have their voltages individually tweaked before being sent out for stability so you will see differences between cards".

Yes, all the testing fully confirmed Snow Crash's posting regarding the individually tweaked voltages. Thanks for your help interpreting the test results.

@MrSpadge: Thanks for suggesting the test setup.
Joined: 22 Jul 09 · Posts: 21 · Credit: 195 · RAC: 0

CUDA 3.1 is out. http://developer.nvidia.com/object/cuda_3_1_downloads.html
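If you want to confirm which CUDA version your installed driver actually reports, a quick check from Python via the CUDA driver API could look like the sketch below. It assumes the driver library (nvcuda.dll on Windows, libcuda.so on Linux; the exact name may differ on your distribution) is on the default search path.

```python
import ctypes
import sys

# Ask the CUDA driver API which CUDA version it supports.
# cuDriverGetVersion encodes the version as 1000*major + 10*minor
# (e.g. 3010 means CUDA 3.1) and does not require cuInit().
if sys.platform.startswith("win"):
    cuda = ctypes.WinDLL("nvcuda.dll")
else:
    cuda = ctypes.CDLL("libcuda.so")

version = ctypes.c_int()
status = cuda.cuDriverGetVersion(ctypes.byref(version))
if status != 0:
    raise RuntimeError(f"cuDriverGetVersion failed with error code {status}")

major, minor = version.value // 1000, (version.value % 1000) // 10
print(f"Installed driver supports CUDA {major}.{minor}")
```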
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

The developer version of CUDA 3.2 is presently in beta, and the latest beta driver supports CUDA 3.2. Hopefully by the end of the month these will bring further improvements, particularly to GTX 460s.

I decided to consolidate some of my older systems; basically I bought a new GTX 470 to replace three systems, each with a single GT240. The new Asus ENGTX470 is now in my i7-920 system, along with an older ENGTX470. This consolidation of resources is to reduce my electric bill and the number of systems I use, while improving the amount of crunching I do.

Anyway, I noticed ASUS made a few changes to their ENGTX470 card since its first release: it now says Ver 2 on the back, the fan can run faster, it stays cooler, and the VDDC is only 0.9370 V - even when overclocked to 731 MHz ;) There are no obvious external changes to the card, but it does come with new firmware (70.00.21.00.03). I picked it up for £219 inc.

When idle the system uses 158 W. With the cards overclocked to 731 MHz and 715 MHz the system draws around 500 W at the socket (crunching 6 CPU tasks and 2 Fermi tasks). I saved about 400 W overall, and will do more work.
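For a rough idea of what that 400 W saving means on the bill, here is a back-of-the-envelope estimate. The electricity tariff is an assumed example value, not a figure from this thread.

```python
# Back-of-the-envelope yearly saving from shaving ~400 W of continuous draw.
# The price per kWh is an assumed example value - substitute your own tariff.
watts_saved = 400
hours_per_year = 24 * 365
price_per_kwh = 0.12  # assumed tariff (GBP/kWh), example only

kwh_per_year = watts_saved * hours_per_year / 1000
print(f"{kwh_per_year:.0f} kWh/year, roughly "
      f"{kwh_per_year * price_per_kwh:.0f} GBP/year at the assumed tariff")
```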
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

> and the VDDC is only 0.9370 V - even when overclocked to 731 MHz

That would be pure luck - or a side effect of a more mature 40 nm process at TSMC. Or maybe both ;)

MrS
Scanning for our furry friends since Jan 2002
Joined: 28 Mar 09 · Posts: 490 · Credit: 11,739,145,728 · RAC: 95,752

I hope this version will improve the performance of Windows 7 machines, hopefully putting them on par with XP - or is that just wishful thinking?
Joined: 11 May 10 · Posts: 68 · Credit: 12,531,253,875 · RAC: 2,638,199

> Anyway, I noticed ASUS made a few changes to their ENGTX470 card since its first release:

Hi skgiven, what is your clock setting for the memory in this 731 MHz setup? I guess you are using 1462 MHz for the shaders, right?

Greetings
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0

> ... in this 731 MHz setup?

The shaders always run at twice the core clock on Fermi-based chips (GF100, 104, 106, 108).

MrS
Scanning for our furry friends since Jan 2002
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

Yeah, the shaders are linked to the GPU speed, so they are at 1462 MHz. I left the GDDR at NVIDIA's reference 3348 MHz, which is read as 1674 MHz by EVGA Precision and half that by GPU-Z. I presumed increasing the RAM speed makes little difference to performance, and might heat up the card and increase power usage, but I did not try it (well, not since I got errors with the first card when trying desperately to run it on Win7).

The GTX 480 uses a RAM speed of 3696 MHz, possibly to balance against its native 700 MHz core, so perhaps I should be trying to match the RAM speed better (3859 MHz for the 731 MHz card). I just started to try the other card at 1887/3774 to see if it makes any difference. At a quick look it seems stable: memory controller load fell from 22% to 19%, and there was no change in temps or noticeable change in system power usage, but I will have to wait a while to see if it finishes a task, and if it is any faster.

- It turned out that I started to get lots of failures, one card stopped running, and I even had a system reboot. So both cards have now been pegged back to 715 MHz, with default RAM speeds. Any RAM speed improvement would not offset the cost of result failures and reboots, so it's definitely not worth it.

Perhaps with a single card in the system cooling would be easier, and slightly upping the RAM might be worthwhile, but I don't have the time to test this any further. I also remember that there were concerns with the GF100 memory controller, which is probably why they chose 4 GHz max memory when they could have used 5 GHz. My guess is that the controller failed and not the RAM.

People might be more successful trying to up the RAM on GF104 or GF106 cards, but I would suggest you test them another way before doing it here. I also doubt you will see a significant improvement, and you may see a slight reduction in performance for the tasks that actually run successfully.
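Since the tools mentioned above report the memory clock in different conventions (the effective rate, half of it in EVGA Precision, half of that again in GPU-Z) and the shader clock is tied to the core clock, a small helper can keep the numbers straight. This is only a sketch of the relationships described in this thread:

```python
# Clock conventions mentioned in this thread for Fermi cards:
#  - the shader clock is always 2x the core clock;
#  - NVIDIA quotes the "effective" GDDR5 rate (e.g. 3348 MHz), EVGA Precision
#    reports half of that, and GPU-Z reports half of that again.
def fermi_clocks(core_mhz, effective_mem_mhz):
    return {
        "core_mhz": core_mhz,
        "shader_mhz": core_mhz * 2,
        "mem_effective_mhz": effective_mem_mhz,
        "mem_evga_precision_mhz": effective_mem_mhz / 2,
        "mem_gpuz_mhz": effective_mem_mhz / 4,
    }

# Example from the post above: 731 MHz core with the reference 3348 MHz memory.
print(fermi_clocks(731, 3348))
```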