ACEMD2 6.12 cuda and 6.13 cuda31 for windows and linux

Message boards : Graphics cards (GPUs) : ACEMD2 6.12 cuda and 6.13 cuda31 for windows and linux
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19676 - Posted: 26 Nov 2010, 10:34:25 UTC - in response to Message 19675.  

On my Phenom II 940 system with four GT240s I had been running 4 heavy CPU tasks, 4 lightweight FreeHAL tasks and a WU Prop task (also lightweight). The performance of the 6.12 app was a bit sluggish at times; GPU utilization was only about 70%. With the odd project backoff, and not running tasks at peak electricity-price times (3.5 times the cheapest rate), I had not been getting some tasks back in time for full credit, annoyingly missing out by less than an hour in most cases.

I'm now using swan_sync=0 without CPU tasks, and the 6.12 app is much faster. It does use a full core/thread per GPU, so to benefit from swan_sync=0 you need to free up a core, but the tasks are faster and the credit is much better.
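If it helps to see the "free a core per GPU" advice as numbers, here is a minimal sketch of how it maps onto BOINC's "use at most N% of the processors" preference; the function name and example values are mine, not from the project.

```python
# Illustrative sketch only: the arithmetic behind "leave one core free per GPU"
# when using swan_sync=0, expressed as BOINC's "use at most N% of the
# processors" preference. Function name and example numbers are invented.

def boinc_cpu_percent(total_cores: int, gpus: int) -> float:
    """Percentage of processors to let BOINC use for CPU tasks while
    reserving one full core/thread per GPU for the GPUGrid app."""
    cores_for_cpu_tasks = max(total_cores - gpus, 0)
    return 100.0 * cores_for_cpu_tasks / total_cores

# Phenom II 940 (4 cores) with four GT240s: nothing left for CPU projects.
print(boinc_cpu_percent(4, 4))   # 0.0
# i7-920 (8 threads) with 2 GPUs: 75% reserves 2 threads; the post below
# actually keeps only 5 of 8 threads on CPU tasks, i.e. 62.5%.
print(boinc_cpu_percent(8, 2))   # 75.0
```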

3351679 2103512 9:36:23 UTC Completed and validated 44,048.93 35,309.45 6,016.70 9,025.06 ACEMD2: GPU molecular dynamics v6.12 (cuda)

43 ms per step on a GT240 @ 1.6 GHz (and that's with a couple of restarts, and on Vista).
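As a quick sanity check (the step count below is inferred, not stated in the post), the run time and the per-step time above are consistent with a task of roughly a million steps:

```python
# Back-of-the-envelope check of the 43 ms/step figure against the 44,048.93 s
# run time quoted above. The WU's real step count isn't given in the post, so
# this only shows the step count implied by those two numbers.
run_time_s = 44_048.93
ms_per_step = 43.0
implied_steps = run_time_s * 1000.0 / ms_per_step
print(f"~{implied_steps:,.0f} steps implied")   # roughly 1.02 million steps
```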

I'm just going to use 3 GPUs until the partially completed CPU tasks have finished, and then stop running any CPU tasks on that system. The RAC for that system could rise to over 60K per day, much better than my present 43.8K.

On my i7-920 with 2 GPUs I use 5 CPU cores.

I managed to totally mess up my Ubuntu system trying to change drivers; now the hard drive is causing issues and I can't reinstall Ubuntu. I might go back to Kubuntu 10.04 if I can dig out a replacement drive.
ID: 19676
Profile Saenger
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19677 - Posted: 26 Nov 2010, 12:04:32 UTC

Considering the very alpha status of the current awful apps, and the availability of a preference called
Run test applications?
This helps us develop applications, but may cause jobs to fail on your computer
why did the project team let this rubbish loose on unsuspecting crunchers? Why wasn't it tested beforehand? What is a test application good for if it's not used for testing, as obviously happened here?
The current apps have failed completely from day one, and the project team did nothing at all to bring the good ones back or help us victims of that bad decision with real hints, only senseless wild guesses and contradictory suggestions.
Greetings from Saenger

For questions about BOINC look in the BOINC-Wiki
ID: 19677
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19678 - Posted: 26 Nov 2010, 15:55:21 UTC - in response to Message 19677.  

One of the problems is that the project MUST distinguish between cards' capabilities in order to send the correct app (6.12 or 6.13) and the correct tasks for that app.
The natural progression is to support new GPUs and utilize improvements from new CUDA drivers, otherwise the project would stall – one of this project's main areas of research is improving molecular modelling, and that requires using the latest cards and the latest CUDA apps.

This time the project introduced the 6.13 app to better cater for Fermi cards, and to actually give Linux users with a GTX460 or GTS450 (and presumably all the other 48:1 ratio cards, mostly OEM) a working app. This part is a total success; Fermis are working well on Windows and Linux under the 6.13 app. It's only a problem for users of earlier cards that are using newish drivers, some of which don't even work properly (260.89 and 260.99).

The best application to use for previous generations of cards is the 6.12 app. On Windows this means using a driver between 195 and 197.45.
Why? Because these drivers include CUDA support.
Specifically why? The 197.45 driver is the most up-to-date WHQL driver before the first driver that shipped with support for CUDA 3.1 (the 257.15 Beta driver, or the 257.21 WHQL driver). So if you use a 257.15 or later driver you have CUDA 3.1, and because the project decides which app to send by the driver's CUDA capabilities, you get the 6.13 app.

If you have an NVidia driver between 195 and 197.45 you will use the 6.12 app to run tasks. These typically run faster than the 6.13 app.
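To make that rule of thumb easier to follow, here is a rough sketch of the driver-to-app mapping as described above; it is only an illustration, not the project's actual scheduler code, and the function name is made up.

```python
# Illustrative sketch of the rule of thumb described above, NOT the project's
# scheduler code. Thresholds come from this post: drivers from the 195 series
# up through 197.45 (and other pre-CUDA 3.1 drivers) get the 6.12 app, while
# 257.15 and later ship CUDA 3.1 and therefore get the 6.13 app.

def expected_app(driver_version: float) -> str:
    if driver_version >= 257.15:
        return "6.13 (cuda31)"   # first drivers with CUDA 3.1 support
    if driver_version >= 195.0:
        return "6.12 (cuda)"     # pre-CUDA 3.1; 195-197.45 is the recommended range
    return "too old for either app"  # assumption: older drivers lack the needed CUDA support

print(expected_app(197.45))   # 6.12 (cuda)
print(expected_app(260.99))   # 6.13 (cuda31)
```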

Why not have a selection box in my profile to run the 6.12 app with 257.15 or later drivers, then? I can't answer this for sure, but I know cards before Fermi do not benefit, in terms of speed, from anything beyond CUDA 2.2. While the drivers with CUDA 2.3 and CUDA 3.0 are almost as fast as the drivers that first supported CUDA 2.2 (the 195 drivers), I expect the 257.15 through 258.96 drivers would be much slower running the 6.12 app, and we know the latest NVidia drivers make many 200-series GPUs go to sleep. So I guess that if you have a CUDA 3.1 driver, it would not run the 6.12 app any faster than the 6.13 app.

Why release lots of new apps? That's a major part of what this project is about: application development. When it leads to significant speed increases it makes the bumps worthwhile, and this aids the biomedical research in the long run.
ID: 19678
Profile Saenger
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19679 - Posted: 26 Nov 2010, 16:19:51 UTC - in response to Message 19678.  
Last modified: 26 Nov 2010, 16:23:51 UTC

If you have an NVidia driver between 195 and 197.45 you will use the 6.12 app to run tasks. These typically run faster than the 6.13 app.

That's where the mess started for me.
I had this "better" driver, but the WUs were extremely slow, about 2-10 times slower than before the "update". I had to abort them because some would not have made even a 4-day deadline, and this project has a real deadline of 2 days; anything slower is a waste of capacity.

I was convinced by all of you here to update to the newer, better drivers, which I run now. It's a 260.19.21, running at 550/1700/1340 MHz. I even included the dubious SWAN_SYNC somewhere, although all the hints in this forum about where to put it were plain useless. And I manually set the nice value to 0 as soon as I can for every new WU.

Suddenly you completely change your story, and the new driver is no longer far better, but plain shit. Why did I even consider using it? (Well, because you told me so ;)

My card tells BOINC exactly what it's capable of:
NVIDIA GPU 0: GeForce GT 240 (driver version unknown, CUDA version 3020, compute capability 1.2, 511MB, 257 GFLOPS peak)

If this is not good enough for 6.13, it's the project's responsibility not to send those tasks to me, not mine to downgrade my system.

From my own experience there is no difference between 6.12 and 6.13; both are ridiculously slow compared to the one used before, 6.04.

I just saw that one of the Kashifs would perhaps have made it within 30h, but as the last ones took close to or even a bit above 48h, I of course aborted them asap, as anything longer than 48h is useless.
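For what it's worth, the "will it beat the deadline" call can be made from BOINC's own elapsed time and fraction-done figures; a minimal sketch with invented numbers (not from an actual task):

```python
# Simple extrapolation of whether a running WU will make the ~48 h return
# window discussed above. Elapsed time and fraction done are what BOINC shows
# for a task; the example values here are invented for illustration.

def projected_total_hours(elapsed_hours: float, fraction_done: float) -> float:
    return elapsed_hours / fraction_done

elapsed = 20.0          # hours run so far (example)
fraction_done = 0.35    # 35% complete (example)
total = projected_total_hours(elapsed, fraction_done)
print(f"projected total: {total:.1f} h -> {'abort' if total > 48 else 'keep'}")
```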

Edit:
If you included the credit claim in the task name, I could simply abort all WUs worth more than 5,500.
Greetings from Saenger

For questions about BOINC look in the BOINC-Wiki
ID: 19679
Snow Crash
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 19685 - Posted: 26 Nov 2010, 22:59:21 UTC

Yes, quite often you are receiving suggestions that are nothing more than guesses. The project does not have every version of every card, driver and OS, much less your exact system / configuration. Neither do the volunteers who are trying to help you. You say you want a complete guarantee that nothing will happen to your system ... that's not going to happen with any piece of software in the world (take a look at any EULA), so I think you're being a little unrealistic there. You say you're not a nerd and get outright hostile when someone suggests you know something about the operating system you use. You say you don't want to switch drivers but then you do it anyway, and post your results in a way that is outright hostile and insulting.

Clearly you are unhappy about the suggestions you are getting, so how about trying to be constructive? Yes, there are projects that *sometimes* work like magic, which I think is what you really want, but look around: none of the CUDA projects (in my opinion) are doing hard-core molecular biology and bringing advancements to the field the way GPUGrid does. You've been here long enough to understand the state of the project, so how about when you test things and post back about the success or failure, you leave the attitude at the door.
Thanks - Steve
ID: 19685
Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 19686 - Posted: 26 Nov 2010, 23:43:08 UTC - in response to Message 19685.  

Hi Snow Crash,

sorry to say that I'm just as frustrated as Saenger, & if I don't voice my frustration in the same way, it's because I choose to word it differently. He's using Linux & says that he's not so into all this Linux. I find it refreshing that someone who isn't a "Guru" or "Wizard" even bothers to put so much effort into something he's having difficulties with.

That something isn't easy & doesn't just work is exactly why so many are staying away from Linux & CUDA. If people still go for it, I don't see how it's helpful to discourage the few who do. It's not good for the future prospects of Linux or CUDA.

If you feel that this excellent research should get all the support it deserves, don't discourage people from wanting to contribute. It's not that I don't value the Gurus & Wizards behind all this research, nor do I ignore the fact that Nvidia sells GPUs, & I don't believe this world can revolve without money. But if GPUGRID wants people to support their project, they need to consider all these factors & encourage their members instead of driving them off to find another project.
ID: 19686
Snow Crash
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 19691 - Posted: 27 Nov 2010, 11:54:16 UTC

I have no problem with people trying to learn how to get this project up and running with any combination of cards, drivers, CPUs and GPUs. What I am trying to discourage is the attitude that if it does not all work perfectly then people are justified in posting rude, insulting and downright hostile comments. That is not good for the project at all.
Thanks - Steve
ID: 19691
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19693 - Posted: 27 Nov 2010, 12:09:30 UTC - in response to Message 19691.  

Try running GPUGrid only tasks and leaving your CPU free.
I have said this many times before: in order to benefit from swan_sync=0 you need to leave a CPU core free. If you don't, you will not benefit; the CPU tasks will dominate the CPU and slow down the GPU tasks massively.

http://www.gpugrid.net/results.php?hostid=87478

I just reinstalled Ubuntu 10.10 and have a GTX260-216 and a GT240 in it.
One task returned OK in 20K sec using the 6.13 app. It's probably not fully optimised yet, but that's not bad; about 6 h to run on a GTX260 at stock.
I'm not running any CPU tasks, just to test this.
The GT240 task should finish in about 18h.
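Just to put those two timings side by side (the GT240 figure is the estimate above, not a completed result):

```python
# Comparing the timings quoted above: 20,000 s on the GTX260-216 versus an
# estimated 18 h on the GT240 for the same kind of task.
gtx260_hours = 20_000 / 3600   # ~5.6 h, i.e. the "about 6 h" figure above
gt240_hours = 18.0             # estimate from the post, not a finished task
print(f"GTX260: {gtx260_hours:.1f} h; GT240/GTX260 ratio ~{gt240_hours / gtx260_hours:.1f}x")
```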
ID: 19693
Profile Saenger
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19695 - Posted: 27 Nov 2010, 15:42:02 UTC - in response to Message 19693.  

Try running GPUGrid only tasks and leaving your CPU free.

Why should I use BOINC if I want to participate in just one project? I could run Folding then.

GPUgrid had no problem using a full core with the old app, without even mentioning it in its description. Now you proclaim to have a sooo much better app, but it needs more manual fiddling than ever before, methinks. That's OK for beta WUs, but keep them out of production mode.

    * I've got the most current driver.
    * I've somehow managed to get "0" as an answer to echo $SWAN_SYNC.
    * I change the nice value to "0" asap for every WU.
    * I hand-select the downloaded WUs and abort those known to not make the 2-day deadline.



Imho that's far more than you can expect from anyone crunching here. If you want normal users, not just extreme nerds, to crunch here, you have to keep the project KISS, and it's far from that now.


Greetings from Saenger

For questions about BOINC look in the BOINC-Wiki
ID: 19695
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19696 - Posted: 27 Nov 2010, 16:28:51 UTC - in response to Message 19695.  

Crying foul again?

Take some advice from Mr. T.
ID: 19696
Profile Saenger
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19697 - Posted: 27 Nov 2010, 17:03:35 UTC - in response to Message 19696.  
Last modified: 27 Nov 2010, 17:04:07 UTC

Crying foul again?

Take some advice from Mr. T.

Thanks for ridiculing me.
As I don't have the words
Project tester
Volunteer tester

under my name, I expect tested WUs in my BOINC; what's wrong with that expectation?
ID: 19697
Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 19698 - Posted: 27 Nov 2010, 17:25:08 UTC - in response to Message 19693.  

Try running GPUGrid only tasks and leaving your CPU free.
I have said this many times before: in order to benefit from swan_sync=0 you need to leave a CPU core free. If you don't, you will not benefit; the CPU tasks will dominate the CPU and slow down the GPU tasks massively.

CPU projects do valuable science too. People have expressed over and over again that they want to run other projects on their CPUs while running GPUGRID. I've tested and posted alternatives above and asked you to check my results. Yet no one involved here seems to care about working with projects from other scientists. Anyway, you'll be glad to hear that I won't be bugging you guys so much, as I've moved most of my GPUs to other science. I won't be leaving entirely, but I also won't be wasting my time testing alternatives to try to improve things, since we're all just ignored anyway. I will still be around from time to time and wish you all the BEST in your endeavors. If things improve I'll move the bulk of my GPUs back. Thanks for the project!

Regards/Beyond
ID: 19698
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19699 - Posted: 27 Nov 2010, 17:26:26 UTC - in response to Message 19697.  

You are the unspoken and yet most important thing, a Cruncher.

Research projects come and go, scientists come and go, CAs come and go, science apps, drivers, GPUs, CPUs and systems come and go, but crunchers remain crunchers.
ID: 19699
MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 19705 - Posted: 28 Nov 2010, 6:22:37 UTC
Last modified: 28 Nov 2010, 6:24:18 UTC

I get BSODs from it (under Windows). I have ceased running GPUgrid on my GTX295 rigs for the time being. There is another thread running in Number Crunching about it here.

I would suggest a CUDA 3.0 app, or CUDA 3.2 (when it's available).
BOINC blog
ID: 19705
Oktan
Joined: 28 Mar 09
Posts: 16
Credit: 953,280,454
RAC: 0
Message 19711 - Posted: 29 Nov 2010, 13:43:33 UTC

Hi there, 6.13 works well if I run the task alone on the CPU with all 4 cores free, but then something strange happens: every 20-40 sec the task changes cores and never stays on one core.
Yes, I have the SWAN_SYNC=0 thingy.

Yes, I am a noob, please get it working.

Keep up the good work.

Best regards, Oktan
ID: 19711
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19712 - Posted: 29 Nov 2010, 17:03:44 UTC - in response to Message 19711.  

This is normal, and a good thing; the operating system decides which core to use based on things like load and heat.

Your results match up well with my GTX260 on Linux (Ubuntu 10.10).
Note that I have a GT240 in there too, and when I restart BOINC the tasks often jump to the other GPU.
ID: 19712
Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 769,991,668
RAC: 0
Message 19765 - Posted: 4 Dec 2010, 14:52:02 UTC - in response to Message 19695.  

Try running GPUGrid only tasks and leaving your CPU free.

Why should I use BOINC if I want to participate in just one project? I could run Folding then.



So you don't want to try something long enough to see if it works, and then possibly change back later?
ID: 19765
Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 769,991,668
RAC: 0
Message 19766 - Posted: 4 Dec 2010, 15:05:04 UTC - in response to Message 19695.  

Try running GPUGrid only tasks and leaving your CPU free.

Why should I use BOINC if I want to participate in just one project? I could run Folding then.

GPUgrid had no problem using a full core with the old app, without even mentioning it in its description. Now you proclaim to have a sooo much better app, but it needs more manual fiddling than ever before, methinks. That's OK for beta WUs, but keep them out of production mode.


So you haven't noticed that this whole project is still in beta test?

You might think about switching to the Rosetta@Home project, which got past the beta test stage, then started adding "improvements" so fast that the resulting level of problems looked like they were still in EARLY beta test.

How many CPU cores does your computer have? If it's more than one, first try telling BOINC to stop running any CPU projects, and see how much that helps run GPUGRID. Then try telling it to run CPU projects on all but one of the CPU cores and see if that gets you closer to the situation you want.
ID: 19766
Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19769 - Posted: 4 Dec 2010, 18:48:48 UTC - in response to Message 19766.  

For the few members that have two different cards (or more) in one system, in my case a GT240 and a GTX260, there is a simple way to finish tasks quicker - in my case the 30h tasks on a GT240. If a long work unit is running on the smaller card, suspend the other task (running on the larger card), restart BOINC, and your long task should run on the big GPU (assuming it is in the top PCIe slot). Then just resume the other task, and it will run on the lesser GPU.

Warning: do not suspend both tasks and then restart them in the order you want them to run without exiting BOINC, or both may crash.
ID: 19769
JAMES DORISIO
Joined: 6 Sep 10
Posts: 8
Credit: 3,479,747,495
RAC: 68,501
Message 19772 - Posted: 4 Dec 2010, 23:05:39 UTC - in response to Message 19652.  
Last modified: 4 Dec 2010, 23:08:55 UTC

The amount of CPU is fixed by the scheduler so it's the same for windows and linux. We would be pretty happy to use 1 CPU for each run and maximum speed, but then others will not like it. That's why we decided to give the opportunity to choose.

gdf


Any idea how long it will take to implement this option: weeks? Months?

Thanks Jim
ID: 19772