ACEMD2 6.12 cuda and 6.13 cuda31 for windows and linux


liveonc (joined 1 Jan 10; 292 posts; 41,567,650 credit)
Message 19835 - Posted: 9 Dec 2010, 16:41:39 UTC

Wow, now I've even gone back to using the Nvidia 195 driver on my Linux box, after lots & lots of fun with SWAN_SYNC=0 and other fun stuff too. It's even worse with older drivers: 71711 70922 62706.

But at least it's a cold winter with snow in Denmark ;-)
Saenger (joined 20 Jul 08; 134 posts; 23,657,183 credit)
Message 19837 - Posted: 9 Dec 2010, 17:05:33 UTC - in response to Message 19835.  

liveonc wrote:
Wow, now I've even gone back to using the Nvidia 195 driver on my Linux box, after lots & lots of fun with SWAN_SYNC=0 and other fun stuff too. It's even worse with older drivers: 71711 70922 62706.

But at least it's a cold winter with snow in Denmark ;-)

I've crunched here before with these older drivers, with similar results to what you describe, and was vehemently urged by the usual suspects to use the new one, as it's soooo much better........until it turned out to make no difference.
I told them that it made no difference, but now they nevertheless try to convince me to go back to my old driver, because it's sooooo much better ;)
Now at least there's a second cruncher confirming what I've told them for quite some time, only they do not listen. Let's see what their next suggestion will be: even older drivers? New beta stuff from nVidia?

They obviously don't want to tell the truth: that they are not interested in people who don't invest a few hundred Euro every year just in cards, plus an electricity bill of another few hundred Euro.

It's OK if they only want rich nerds, but then they should say so. If they pretend that normal, mid-budget cards, like the far-from-old G200 series, are suitable for this project, they are not playing fair. Those cards are only suitable if they are extremely micromanaged by hand and run 24/7, and that's not a sound basis for a BOINC project.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
liveonc (joined 1 Jan 10; 292 posts; 41,567,650 credit)
Message 19880 - Posted: 13 Dec 2010, 0:10:09 UTC - in response to Message 19837.  
Last modified: 13 Dec 2010, 0:10:49 UTC

Okay, it's still too soon to say, but at least it looks normal now. I reinstalled Linux on one of my PCs yesterday. Nothing worked, so there was nothing to lose. I decided to try Linux Mint 10; since the Boinc client and Nvidia drivers are now as they should be, there's no more need to chase betas all the time.

It "might" have been one of the Linux Mint 8 updates that messed with something, though I haven't a clue what. But after the first reinstall, I'm looking at a 30-hour WU on a PC using a GT240.

So I decided to reinstall the other 2 PCs; it all looks better than hopeless. My apologies for complaining so much. Maybe later I'll try tweaking with SWAN_SYNC=0 and other settings, but for now I'm just glad to get it working again.

Cheers!
liveonc (joined 1 Jan 10; 292 posts; 41,567,650 credit)
Message 19883 - Posted: 13 Dec 2010, 11:49:35 UTC - in response to Message 19880.  
Last modified: 13 Dec 2010, 11:50:20 UTC

I attached the four other projects I support, reinstalled the remaining two PCs, attached gpugrid.net as well as the other projects on the two newly reinstalled PCs, and wrote the previous post. Now it's the same ol', same ol'. So something happened between having only gpugrid.net running on the first newly reinstalled PC, where a GT240 was at 80% after 24 hours, and attaching the other projects, after which it's at 93% after 37 hours. So I "assume" that "maybe" the new WUs from gpugrid.net running on Linux don't like other projects, and vice versa.
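The slowdown described here can be projected from the two progress snapshots; a quick back-of-envelope sketch (it assumes progress advances at a roughly constant rate, which BOINC's fraction-done estimates only approximate):

```python
def eta_hours(hours_elapsed, fraction_done):
    # Project the total runtime of a work unit from a single BOINC
    # progress snapshot, assuming a roughly constant rate of progress.
    return hours_elapsed / fraction_done

# GPUGrid alone: 80% done after 24 h -> ~30 h total
solo = eta_hours(24, 0.80)

# Other projects attached: 93% done after 37 h -> ~40 h total
shared = eta_hours(37, 0.93)
```

On those numbers, attaching the other projects stretched the same work unit from roughly a 30-hour pace to roughly a 40-hour pace.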
Richard Haselgrove (joined 11 Jul 09; 1,639 posts; 10,159,968,649 credit)
Message 19884 - Posted: 13 Dec 2010, 12:11:47 UTC - in response to Message 19883.  
Last modified: 13 Dec 2010, 12:12:02 UTC

I don't think it's anything to do with 'liking' or 'not liking'.

But they will be occupying CPU cores. And if you want GPUGrid to run at the highest possible speed, GPUGrid would like to have a CPU core back, please.
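Giving GPUGrid a core back maps onto BOINC's "use at most X% of the processors" computing preference; a small sketch of the arithmetic (the helper name is ours, for illustration):

```python
def boinc_cpu_percent(total_cores, cores_to_free=1):
    # Value for BOINC's "use at most X% of the processors" preference
    # that leaves the given number of cores idle, so a GPU app has a
    # full core available to feed the card.
    return 100 * (total_cores - cores_to_free) // total_cores

# On a quad core, 75% leaves one core free for the GPUGrid app.
quad = boinc_cpu_percent(4)
```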
liveonc (joined 1 Jan 10; 292 posts; 41,567,650 credit)
Message 19891 - Posted: 13 Dec 2010, 21:44:56 UTC - in response to Message 19884.  

True, but Windows doesn't have this problem, nor did I have any problem running 5 different projects on Linux prior to 6.12 & 6.13; now I've removed 2 projects from my PCs running Linux. I hope it's "good enough", because if the argument for this change was to lighten the burden that CPU-hungry gpugrid.net WUs were placing on other projects, the change is having the opposite effect.
skgiven, volunteer moderator and tester (joined 23 Apr 09; 3,968 posts; 1,995,359,260 credit)
Message 19893 - Posted: 13 Dec 2010, 22:30:54 UTC - in response to Message 19891.  

Ultimately, how much you wish to contribute comes down to your evaluation of GPUGrid compared to other projects and how much you want to support it. I assessed GPUGrid primarily on the science, not my professional IT expertise, and decided to redirect my overall Boinc contribution here because I understand the research and the amount of work a GPU can do here compared to a CPU.
Initially it was not easy: I had one barely useful GPU, and my efforts to buy what I thought would be useful GPUs failed (192 is an evil number). Lesser GPU projects offer more points per hour, so it took perseverance to reach my current level of contribution, which I think is reasonable for my means. I'm happy that my current contribution costs less, in terms of electricity, than it did six months ago, mostly thanks to upgrading GPUs.
ExtraTerrestrial Apes, volunteer moderator and tester (joined 17 Aug 08; 2,705 posts; 1,311,122,549 credit)
Message 19894 - Posted: 13 Dec 2010, 22:40:50 UTC

Liveonc,

thumbs up for your patience! Regarding the actual problem: as I understand it, it should be enough to leave one CPU core free; how many other projects you run shouldn't matter. And judging by what SK says, you also need to do this on Windows to get maximum performance.

Sänger wrote:
only they do not listen.

That's not true. They've clearly listened, but haven't got a solution out of the door yet.

Sänger wrote:
They obviously don't want to tell the truth that they are not interested in people that don't invest a few hundred Euro every year just in cards, plus the electricity bill of another few hundred Euro.

That's what you see. For me this is not obvious at all.

I see that GPU-Grid is by definition interested in the fastest cards (they need results back ASAP in order to be productive) and needs to make its rather complex software work with them. While doing so they constantly have to fight bugs in CUDA libraries, drivers, etc., as well as optimize the algorithms and the actual science. This is not a small task at all.

Having said that, it doesn't mean I wouldn't like to see a better solution to the current problem(s). I'm just trying to put things into perspective.

MrS
Scanning for our furry friends since Jan 2002
Fred J. Verster (joined 1 Apr 09; 58 posts; 35,833,978 credit)
Message 19895 - Posted: 13 Dec 2010, 22:45:16 UTC - in response to Message 19891.  
Last modified: 13 Dec 2010, 22:57:20 UTC

Too often I have noticed a wingman using a 200-series (NVidia) card erroring out in 5 seconds, on some 240s and 260-216s where I saw a correct result.
Isn't that a waste of resources and time?

I have a host with a GTS250, but it is not able to run any GPUgrid work.
Only my WIN XP64 host can: an X9650 @ 3.55GHz with a GTX480 @ 1400 (engine) and memory at 3880MHz.
Temperatures on this host are also ideal, because no case is used; with 2 Fermi cards running at full load they put out a lot of heat, 650 Watt when I ran a 470 together with the 480, but that gave too much 'trouble'.
I have often wondered why some 200-series cards crash a unit.

And yesterday, another 'new' experience: not enough virtual memory to continue, on a 64-bit Windows host with 4 (2x2) GiB of DDR2 533MHz!
And the hard drives on 3 hosts were so fragmented that I had to run chkdsk X: /f (/v) before I could defragment!
Why are WUs sent to hosts which only produce errors, if anything at all?

It is clear that some WUs cannot be computed by, for instance, a GTS250, but the GTX295 also makes too many errors, in too many cases.
GTX295 & GTX480 .
GTS250 & GTX480 .
GTX260 & GTX480 .
9500GT & GTX480 .
GOOD Results .
with 'older cards', compared with a 480.

Also, I noticed a message that I should update to CUDA 2.2, although I run 3.1 (or the app does). Or is it the Compute Capability that is meant? There is a big difference between the 200 series (C.C. 1.0-1.3) and the 400 series (Fermi, C.C. 2.0/2.1).

Knight Who Says Ni N!
skgiven, volunteer moderator and tester (joined 23 Apr 09; 3,968 posts; 1,995,359,260 credit)
Message 19909 - Posted: 14 Dec 2010, 17:42:21 UTC - in response to Message 19895.  

Hi Fred, I looked at some of those errors. You can certainly find them :)

One guy has a single GTX295 with runaway errors. I suggested he use driver 197.45 and run the 6.12 app.

Someone else has a GTX260-192, every task fails because their card is incapable of working on GPUGrid.

Another user for some reason is continuously aborting tasks.
Not sure why anyone would want to attach a GeForce 9500 GT in the first place, let alone keep aborting tasks. Again, I made a suggestion, just in case they read their PMs.

Werkstatt (joined 23 May 09; 121 posts; 400,300,664 credit)
Message 19914 - Posted: 15 Dec 2010, 0:27:26 UTC

Hi All,

I see postings like Sänger's: "They obviously don't want to tell the truth that they are not interested in people who don't invest a few hundred Euro every year just in cards, plus an electricity bill of another few hundred Euro."
I see posts describing ways to optimize speed, and posts explaining the troubles the devs face.
I'm also a victim of the 'evil number 192', but I found a way to sell that card.
I upgraded to the cheapest GTX460 I could get (€149.90, not 'a few hundred') and experimented a bit.
It's not a pure crunching computer; it's used for regular work and runs CPU WUs as well.
The results are here: http://www.gpugrid.net/results.php?userid=25200
I can finish two tasks a day and have a card using less than 170 Watt.
One trick is to use the Windows Task Manager to increase the priority of the acemd WUs when my system is not being used for some hours, and I can easily switch back when I need the system for my work. It's not as effective as swan_sync, but it's more flexible.
I would like to encourage everyone to experiment a bit to get the most out of their hardware. Science is 'finding new ways', not collecting virtual credits, and it is sometimes a matter of trial and error.
If this project has real potential to help save lives, all the trouble is worth going through.

Alexander
Saenger (joined 20 Jul 08; 134 posts; 23,657,183 credit)
Message 19915 - Posted: 15 Dec 2010, 5:40:11 UTC

So that's a 3-digit Euro amount just for the card; that's a lot.
And that's another Euro per day for electricity, ~350 per year; that's a lot too.
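The ~1 Euro/day, ~350 Euro/year figure is consistent with a card drawing on the order of 160 W around the clock; a back-of-envelope check (the 160 W draw and the ~0.25 EUR/kWh rate are assumed illustration figures, not numbers from the thread):

```python
def annual_electricity_eur(watts, eur_per_kwh=0.25):
    # Yearly electricity cost of a constant load running 24/7.
    # The 0.25 EUR/kWh default is an assumed rate.
    return watts / 1000 * 24 * 365 * eur_per_kwh

# A card drawing ~160 W non-stop costs about 1 EUR/day:
yearly = annual_electricity_eur(160)   # roughly 350 EUR/year
```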

And this is not about getting credits; otherwise I would long ago have left this project for one of the (imho futile) math projects. It's about getting work done inside the 2-day deadline for useful crunching. If the WUs take longer than 2 days, it's no longer useful crunching, just credits.

I have to do a lot of manual adjusting of every single WU, as fast as possible after it arrives on my computer.
I have to delete apparently over-long WUs asap, although I don't know how fast they would run; the project knows beforehand, but fails to mark the WUs.

Now they have even distributed extra-long WUs to every cruncher, which will definitely lead to a huge amount of wasted crunch time, as a lot of them will take more than 2 days.
Greetings from Saenger

For questions about Boinc look in the BOINC-Wiki
skgiven, volunteer moderator and tester (joined 23 Apr 09; 3,968 posts; 1,995,359,260 credit)
Message 19918 - Posted: 15 Dec 2010, 10:45:25 UTC - in response to Message 19915.  
Last modified: 15 Dec 2010, 10:52:25 UTC

A GTS450 is only about 40% faster than a GT240, but around 60% more power hungry. A GTX460 is slightly more than twice as fast as a GT240 but uses around 150W, so it only just matches the GT240 in energy efficiency when crunching here. The GTS450 and GTX460 are also more expensive in terms of performance per purchase price. The reason for this relatively poor showing compared to the high-end Fermis is that the GTS450 and GTX460 have shaders that are inaccessible when crunching here.

For GPU crunchers running costs are very important, and the most energy-efficient cards in terms of points per Watt are the GTX580 and GTX570. These are expensive cards, around £260/€300 for the GTX570. However, even at that price they offer some crunchers an opportunity: crunchers with several older cards and systems could sell what they have and build a new system around one of these more energy-efficient cards. Such a card can do as much work as several older cards while reducing running costs. You might not even have to spend anything, if you get a reasonable price for your existing cards/system.

For example,
If I sell 4 or 5 GT240s (the most energy-efficient previous-generation cards) I would have enough money to buy one GTX470. The Fermi would do about the same amount of work but cost slightly less to run. If I throw in my GTX260 I would be able to buy a GTX570, which would do about the same work but draw only 219W rather than 520W.
If I also replaced the Phenom II 940 (TDP 125W) in my quad-GT240 system (4GB DDR2) with a slightly lesser but much more energy-efficient (45W) quad-core CPU and used a Fermi, my overall power usage would fall dramatically and I would have 3 CPU cores to crunch on. The good thing is I would not have to spend anything overall.
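The 520W-to-219W consolidation adds up to a sizeable yearly saving; a quick sketch using the power figures from the example above (the electricity rate is an assumed illustration figure):

```python
def annual_kwh(watts):
    # Energy consumed per year by a constant 24/7 load, in kWh.
    return watts * 24 * 365 / 1000

old_draw = 520   # four or five GT240s plus a GTX260 (figure from the post)
new_draw = 219   # a single GTX570 (figure from the post)
saved_kwh = annual_kwh(old_draw) - annual_kwh(new_draw)   # ~2,637 kWh/year

# At an assumed 0.25 EUR/kWh, that is roughly 660 EUR/year saved.
saved_eur = saved_kwh * 0.25
```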

Obviously this is not for the average cruncher, and I have not seen an entry-level Fermi card that makes a better buy than the GT240; all the lesser Fermis have the unfavourable 48:1 ratio with their inaccessible shaders, and only the top Fermis use the 32:1 ratio. While I could sell 4 GT240s and buy one GTX460, it would not do as much work. There is little merit in moving from GT240s to a GTS450 or GTX460; it only makes sense if you have several pre-Fermi cards and are prepared to get a top-end Fermi.

As for the long tasks, I am running one on a GT240 and it will take around 30h using the optimization methods I prefer, well inside 48h. So for me the 50% bonus will make it very worthwhile in terms of credit.

-
GTS450 on Linux: 43.187 ms per step (probably not well optimized).
Same task type/credit on a GT240 (6.12 app): 41.422 ms per step.
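Those per-step times translate into rough wall-clock estimates; a sketch assuming a long task of about 2.5 million steps (the step count is an assumption chosen so the GT240 figure lands near the ~30h quoted above; GPUGrid tasks vary):

```python
def runtime_hours(ms_per_step, steps):
    # Wall-clock time for a task, given its per-step time.
    return ms_per_step * steps / 1000 / 3600

STEPS = 2_500_000                        # assumed step count for a "long" task
gt240  = runtime_hours(41.422, STEPS)    # ~28.8 h
gts450 = runtime_hours(43.187, STEPS)    # ~30.0 h
```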
TomaszPawel (joined 18 Aug 08; 121 posts; 59,836,411 credit)
Message 19971 - Posted: 17 Dec 2010, 11:16:49 UTC
Last modified: 17 Dec 2010, 11:34:51 UTC

Windows 7 64-bit + 263.06 x64 + GTX470 + SWAN_SYNC=0

GPU load is 45% on p39-IBUCH_7_pYEEI_101214-1-4-RND2298_1 (acemd2 version 6.13).

Also, acemd2_6.13_windows_intelx86__cuda31.exe uses 100% of a CPU core (25% of the quad core)!

Is this normal?

I also crunch AQUA. When I suspend AQUA, GPU load goes to 50%.

Any ideas how to boost GPU Load?

For comparison, PrimeGrid for CUDA shows 99% GPU load even with AQUA on all 4 cores, and its CPU usage is 1-2% of the max 25%.
POLISH NATIONAL TEAM - Join! Crunch! Win!
skgiven, volunteer moderator and tester (joined 23 Apr 09; 3,968 posts; 1,995,359,260 credit)
Message 19981 - Posted: 17 Dec 2010, 21:54:29 UTC - in response to Message 19971.  

Tomasz, the GPU utilization seems a bit low, but I think I know the reason.

I see slightly higher utilization than that on my GTX470s:
My i7-920 system with two GTX470s is presently crunching two IBUCH tasks. When I freed up 2 CPU threads the GPU utilization was between 60% and 63%.
When I freed up another CPU thread the GPU utilization rose to between 62% and 66%. Similar observations, but slightly higher numbers. So why the difference?

Well, firstly, Win XP is faster than Win7, but the difference in this case may be largely down to the systems: a Q8200 @ 2.33GHz compared to an i7-920 @ 2.8GHz. These tasks rely a lot on the CPU. It would be interesting to see how they perform on a better CPU; someone with an i7-980X might want to post some data.

Not sure if this would make any difference, but 260.99 WHQL is the recommended driver for the GTX470, not 263.06 (that one is for the GTX500-series cards and that effort of a GTX460 SE).

I think in this case running 2 GPU tasks at once would be useful.


GDF, project administrator, developer, tester, and scientist (joined 14 Mar 07; 1,958 posts; 629,356 credit)
Message 19987 - Posted: 18 Dec 2010, 9:38:38 UTC - in response to Message 19981.  

GPU utilization of some workunits should improve from the next release, probably in January. We know why these are slower, and it has been fixed.

The problem of long workunits will also be solved, either by two applications (so that you can decide) or by better selecting the hosts which can compute them.


gdf

©2025 Universitat Pompeu Fabra