Message boards : Graphics cards (GPUs) : New nvidia beta application
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

Well, you should stop receiving it if you set "no test application". It's odd that you keep receiving those jobs. Anyway, that queue is empty.

gdf
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0

Today I got the 6.04 app even though "run test apps" is still set to "No". In terms of speed it looks like the 6.70 app (0h15min for 6.384%), but it consumes a whole CPU core... The temperature is 4-5 °C higher than when the 6.70 app runs (68 versus 63-64), but that's no big deal.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0

The 6.03 (Windows) apps seem to be similar to before. Core usage is normal (about 28% of one of my four Phenom II 940 cores supporting a GTX260 sp216; less for my i7 supporting two GT240s).
[AF>Libristes>Jip] Elgrande71 · Joined: 16 Jul 08 · Posts: 45 · Credit: 78,618,001 · RAC: 0

> today i've got 6.04 app even "No" to "run test apps" option is still there.

Same problem with this 6.04 version: 91% of one of my CPU cores is used for the calculation, which is unbelievable. It's worse than the previous 6.70 version, which used 50% of one core. This project becomes more and more elitist: the GTX260 bug, the GTX295 (several compute errors, and another bug). I am very tired of it.
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0

> Same problem with this version 6.04, 91% of one core of my cpu used for the calculation, it's unbelievable.

Well, in my case 6.04 is doing the job and it's OK. I'm looking forward to the new 6.10/6.11 test app.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0

> This is the CUDA FFT bug on 260 cards.

Can we have more information on how to tell which 260 cards are affected, and under which versions of CUDA? For example, I now have CUDA 3.0 available and haven't seen anything on whether this version is affected. (196.21 driver, compute capability 1.1, BOINC 6.10.18, not a 260 card.)

I think I remember seeing something saying that CUDA 2.1 is not affected, and that 280 cards with the same graphics chip as the 260 cards (except with no shaders disabled) work just fine. Is it possible to build an alternate version of the application using at least part of the CUDA 2.1 SDK instead, or is too much missing from that SDK? If it's possible, that version could be used on 260 cards and on any other cards no longer adequately supported by recent CUDA versions.

Is it possible to build an alternate version using the CUDA 3.0 SDK, at least for testing whether that fixes any known problems? Can you ask Nvidia to check whether recent versions of CUDA leave out the part needed for proper handling of the way shaders were disabled on the early chips used on some 260 cards?

Also, is there a way to ask the graphics chip for details about what type of graphics chip was used, including how many shaders were disabled and perhaps even which ones? Checking that information might allow better information on which 260 cards are affected.
Joined: 24 Dec 08 · Posts: 738 · Credit: 200,909,904 · RAC: 0

> This is the CUDA FFT bug on 260 cards.

Download a copy of GPU-Z and run it. It will tell you various details about your GPU. If it says you have a 65nm GTX260 with a G200 chip, then it has the FFT bug, courtesy of nVidia. Driver versions above 181.21 have the bug.

The fact that you have a CUDA 3.0 capable driver doesn't mean the app is using the 3.0 DLLs, as the driver is designed to be able to run older apps. The current apps use either CUDA 2.2 or CUDA 2.3 (see the plan class shown in brackets in the app name, under the tasks tab in the BOINC manager).

The CUDA 3.0 DLLs are currently only available to registered developers and cannot be given to the public due to the pending release of the Fermi cards. Once those are released, the DLLs will be publicly available. The fact that nVidia doesn't seem willing or able to correct the bug suggests that even CUDA 3.0 apps will still have the same problem, but you never know.

BOINC blog
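The rule of thumb in this post can be written down as a small check. A minimal sketch (the function name and the version-tuple encoding are my own; the criteria are exactly the ones stated above: a 65 nm GTX260 with a G200 chip on a driver newer than 181.21):

```python
def has_fft_bug(model: str, chip: str, process_nm: int,
                driver_version: tuple) -> bool:
    """Rough check per this thread: 65 nm GTX260 cards with a G200 chip
    hit the CUDA FFT bug on driver versions above 181.21."""
    return (model == "GTX260"
            and chip == "G200"
            and process_nm == 65
            and driver_version > (181, 21))

# A 65 nm G200 GTX260 on the 196.21 driver is affected...
print(has_fft_bug("GTX260", "G200", 65, (196, 21)))   # True
# ...but the same card on 181.21 or older is not,
print(has_fft_bug("GTX260", "G200", 65, (181, 21)))   # False
# and a 55 nm revision of the card is fine on any driver.
print(has_fft_bug("GTX260", "G200", 55, (196, 21)))   # False
```

Comparing driver versions as tuples keeps the "above 181.21" test correct even across the major-version boundary (e.g. (196, 21) > (181, 21)), where naive string comparison could mislead.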
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0

I still don't have an answer to this part:

> Is it possible to build an alternate version of the application using at least part of the CUDA 2.1 SDK instead, or is too much missing from that SDK? If it's possible, that version could be used on 260 cards and any other cards no longer adequately supported by recent CUDA versions.

Are you still trying to find out yourself? It would require those with the affected 260 and other cards to install rather old drivers, but wouldn't that be better than not being able to use their cards at all?
Joined: 25 Aug 08 · Posts: 143 · Credit: 64,937,578 · RAC: 0

Hi all! Is there a way to make ACEMD 6.04 (the Linux app) run by default with the same (lowest) priority as the other BOINC applications? Setting the priority by hand each time isn't the right way to do it, I suppose...

From Siberia with love!
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

All the applications should run with the same system priority. This is set by BOINC.

gdf
Michael Goetz · Joined: 2 Mar 09 · Posts: 124 · Credit: 124,873,744 · RAC: 0

Hi all! Short answer: not really.

BOINC sets the priorities of all the WUs to the same (low) value, except for GPU WUs. The CPU portion of a GPU WU is set to a slightly higher priority than the pure CPU WUs so that it runs ahead of the CPU tasks. The intent is to keep the GPU from being starved of work by the other BOINC WUs.

If you're running any other CPU-only projects on your machine, you don't want to lower the priority anyway, at least not unless you dedicate a CPU core to feeding the GPU.
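For anyone who still wants to drop the app's priority by hand, the mechanism on Linux is ordinary Unix niceness, which is also what BOINC itself manipulates. A minimal sketch of launching a command at low priority (the helper name is my own; BOINC's exact niceness values may differ):

```python
import os
import subprocess
import sys

def run_at_low_priority(cmd, niceness=19):
    """Launch cmd as a child process at low scheduling priority
    (higher niceness = lower priority), without touching the parent."""
    return subprocess.run(
        cmd,
        preexec_fn=lambda: os.nice(niceness),  # applied in the child, pre-exec
        capture_output=True,
        text=True,
    )

# The child reports its own niceness; os.nice(0) returns the current value.
result = run_at_low_priority([sys.executable, "-c", "import os; print(os.nice(0))"])
print(result.stdout.strip())
```

Note that, as explained above, lowering the priority of the CPU thread that feeds the GPU can starve the GPU of work, which is exactly why BOINC deliberately runs that thread slightly higher than pure CPU tasks.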
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0

Thanks, I had forgotten about it.

gdf
©2026 Universitat Pompeu Fabra