GPUGRID and app_info

Message boards : Graphics cards (GPUs) : GPUGRID and app_info
CTAPbIi

Message 21618 - Posted: 6 Jul 2011, 4:42:25 UTC

GPU load varies from 65% to 80% on my OC'd GTX570. I think different WUs use the GPU more or less heavily.

I wonder if an app_info trick exists on GPUGRID to run 2 WUs concurrently and thus maximize the output.

And one more question: does the Linux app still speed up calculations like it used to? I was away from GPUGRID for some time and now I'm looking to come back :-)
Dagorath

Message 21619 - Posted: 6 Jul 2011, 5:03:32 UTC - in response to Message 21618.  

I get about 85% to 95% usage on my GTX 570, no OC, running Linux with SWAN_SYNC=0. Yes, it seems the usage depends on the app or task.

I haven't done any solid comparisons between OS's but they say the Linux app is 15% faster than the Windows app on Win7.

I've heard it said that 2 tasks running concurrently don't run in parallel, they run serially. If that's true then I don't see much benefit from running 2 tasks concurrently. I could be wrong. I've never tried it.
Profile skgiven
Volunteer moderator
Volunteer tester
Message 21621 - Posted: 6 Jul 2011, 8:15:47 UTC - in response to Message 21619.  

I've tested app_info in the past on tasks with various GPU utilization.
By the time you get to about 85% utilization there is little or no point.
When you go up to about 95% you would see an overall loss.
At 65% there should be some gain, but then you have to consider task turnaround time and the credit system. The problem with running multiple tasks is that you could gain from running one type but lose from running the other.
I'm not sure what the ratio of task types is, and the utilization might change when new batches are created.
It may also be the case that different operating systems benefit differently; for example, there might be less gain on Linux than on W7.
CTAPbIi

Message 21623 - Posted: 6 Jul 2011, 12:40:27 UTC - in response to Message 21619.  

I get about 85% to 95% usage on my GTX 570, no OC, running Linux with SWAN_SYNC=0. Yes, it seems the usage depends on the app or task.

I haven't done any solid comparisons between OS's but they say the Linux app is 15% faster than the Windows app on Win7.

I've heard it said that 2 tasks running concurrently don't run in parallel, they run serially. If that's true then I don't see much benefit from running 2 tasks concurrently. I could be wrong. I've never tried it.

I've got a couple of questions:
1. SWAN_SYNC=0 - where should I input that? AFAIK it's on in Linux by default...
2. How can I check GPU load on Linux? AFAIK the nvclock project has been dead since the G200 series (beta support; my GTX 275 never worked properly though) and 2008 :-(

That's good news that Linux is faster. I need to spend some time on W7 though to find stable clocks/voltage, then flash them, and then - "home, sweet home" :-)

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.
CTAPbIi

Message 21624 - Posted: 6 Jul 2011, 12:48:21 UTC - in response to Message 21621.  

I've tested app_info in the past on tasks with various GPU utilization.
By the time you get to about 85% utilization there is little or no point.
When you go up to about 95% you would see an overall loss.
At 65% there should be some gain, but then you have to consider task turnaround time and the credit system. The problem with running multiple tasks is that you could gain from running one type but lose from running the other.
I'm not sure what the ratio of task types is, and the utilization might change when new batches are created.
It may also be the case that different operating systems benefit differently; for example, there might be less gain on Linux than on W7.

Please, please, where can I get that app_info? I want to try it - both on W7 x64 and Linux (Ubuntu 10.04 x64).

That's weird. In my understanding, as you increase GPU load you get more output. That approach works on every other GPU project and I see no reason why it should not work on GPUGRID. But that's theory; I'd like to try it in real life.

So, if you've got that app_info, could you please PM it to me and give me a link to where it's been discussed on the forum?
Profile Retvari Zoltan
Message 21625 - Posted: 6 Jul 2011, 15:52:06 UTC - in response to Message 21623.  
Last modified: 6 Jul 2011, 15:54:24 UTC

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.

That's correct, but the WU turnaround (return) time is more important in GPUGrid than the overall output.

That's weird. In my understanding, as you increase GPU load you get more output. That approach works on every other GPU project and I see no reason why it should not work on GPUGRID. But that's theory; I'd like to try it in real life.

GPUGrid is different from other GPU projects. Here a new workunit continues the calculation from where the previous one finished. The new workunit therefore depends on the result of the previous one, which is why turnaround time is very important and is honored with a +50% credit bonus if you return the result within 24 hours (and a +25% bonus if you return it within 48 hours). If you run two workunits simultaneously on the same GPU, your turnaround time will miss the 24-hour deadline (with the long workunits), so you will lose the +50% bonus and receive only +25%. All in all, there is no point in increasing the overall output by 5-15% only to lose 25% in bonus credit.
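The tier scheme just described can be sketched as a tiny shell function (an illustration of the rules as stated in this post, not project code; `bonus_pct` is a made-up name):

```shell
# Illustrative only: the GPUGrid bonus tiers as described above.
# +50% if returned within 24 h, +25% within 48 h, nothing after that.
bonus_pct() {
    hours=$1
    if [ "$hours" -le 24 ]; then
        echo 50
    elif [ "$hours" -le 48 ]; then
        echo 25
    else
        echo 0
    fi
}

echo "returned in 20 h -> +$(bonus_pct 20)% bonus"   # prints +50%
echo "returned in 30 h -> +$(bonus_pct 30)% bonus"   # prints +25%
echo "returned in 50 h -> +$(bonus_pct 50)% bonus"   # prints +0%
```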

BTW, your WUs are failing because you're overclocking your GTX570 too much. The higher the GPU utilization, the lower the overclock can be. GPUs cannot be overclocked as much for 5-8 hour GPUGrid tasks as for gaming (or for shorter workunits). You should also increase your GPU's fan speed: the lower the GPU temperature, the more stable the GPU will be.
Dagorath

Message 21626 - Posted: 6 Jul 2011, 17:54:43 UTC - in response to Message 21623.  

I've got a couple of questions:
1. SWAN_SYNC=0 - where should I input that? AFAIK it's on in Linux by default...


It might be on by default in Linux, I don't know. The stderr on the website for each task shows the value for SWAN_SYNC. If it says there the value is 0 and you have no "export SWAN_SYNC=0" statement anywhere then assume it's 0 by default. I put "export SWAN_SYNC=0" in ~/.bashrc. Note the period, it's a hidden file. I'm the only user on the system so that's adequate for me. If you have multiple users you'll want to put the statement in a script that runs for every user.
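As a sketch, an idempotent way to add that line (shown here against a temporary file so it is safe to run; in practice you would target `~/.bashrc`):

```shell
# Demo of adding "export SWAN_SYNC=0" to a shell profile without
# duplicating it. A temp file stands in for "$HOME/.bashrc" here.
PROFILE="$(mktemp)"
LINE='export SWAN_SYNC=0'

# Append only if the exact line is not already present.
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"  # second run is a no-op

# Make it effective in the current shell too.
export SWAN_SYNC=0
echo "lines added: $(grep -cxF "$LINE" "$PROFILE")"   # prints "lines added: 1"
```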

2. How can I check GPU load on Linux? AFAIK the nvclock project has been dead since the G200 series (beta support; my GTX 275 never worked properly though) and 2008 :-(


Yah, nvclock doesn't work here either. The nvidia-settings utility doesn't give GPU load either. I've been able to read GPU usage only with the nvidia-smi utility and only with the 270.xx drivers. It doesn't report usage here with the 260.xx driver.
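For illustration, a guarded one-shot query with nvidia-smi (the `--query-gpu` flags come from drivers newer than the 2011-era ones discussed here, so treat their availability as an assumption about your driver version):

```shell
# Read GPU utilization once via nvidia-smi, falling back gracefully
# on machines without an NVIDIA driver. The --query-gpu syntax exists
# in newer drivers; older ones only offered the `nvidia-smi -a` dump.
if command -v nvidia-smi >/dev/null 2>&1; then
    gpu_load="$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader 2>/dev/null || echo unknown)"
else
    gpu_load="unavailable (nvidia-smi not found)"
fi
echo "GPU load: $gpu_load"
```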


That's good news that Linux is faster. I need to spend some time on W7 though to find stable clocks/voltage, then flash them, and then - "home, sweet home" :-)

Theoretically speaking, 2 tasks running concurrently will increase GPU load to 100% (I see no reason why not). That will slow down each individual task for sure, but running 2 tasks should increase the overall output.


There's only one way to be sure and that's to try it and see. I'm sure skgiven and others have been down that road before so keep in mind what they say when you're testing. And as Retvari said, make sure your tasks return within 24 hrs. if you want the credit bonus. I think a GTX 570 will have no problem returning 2 concurrently running tasks in 24 hrs. but ya never know.

Also, I had trouble with the temperature on my GTX570 when I started running it on Linux. The fan would not speed up as the temp rose, so I had to force it to run at high speed permanently. That required some research and playing around. I expect you and others will run into the same problem, so I wrote up what I did and posted it here on the BOINC dev forum. I recommend getting a handle on that BEFORE you run your first CUDA task on Linux.
CTAPbIi

Message 21627 - Posted: 6 Jul 2011, 18:38:01 UTC - in response to Message 21625.  

That's correct, but the WU turnaround (return) time is more important in GPUGrid than the overall output.

No doubt about return time :-)

GPUGrid is different from other GPU projects. Here a new workunit continues the calculation from where the previous one finished. The new workunit therefore depends on the result of the previous one

Not really - MilkyWay is pretty much the same. They use the WU results to calibrate their model, and based on the new model they issue a new batch of WUs.

that's why turnaround time is very important and is honored with a +50% credit bonus if you return the result within 24 hours (and a +25% bonus if you return it within 48 hours). If you run two workunits simultaneously on the same GPU, your turnaround time will miss the 24-hour deadline (with the long workunits), so you will lose the +50% bonus and receive only +25%. All in all, there is no point in increasing the overall output by 5-15% only to lose 25% in bonus credit.

Let's take the worst-case scenario (a theoretical case) where GPU load is 100%. If I run a 2nd WU concurrently, the duration for both of them is twice that of a single WU. Am I right?

Let's come back to the real world. I finished one WU in 6 hrs, so running 2 WUs should take 12 hrs to complete both. But remember - GPU load is less than 100%, so in reality it will not be 12 hrs but, let's say, 9-11 hrs. All in all, I'm pretty much OK to stay within the 24-hr limit and get the +50% bonus. That's why IMHO it makes sense to go with the app_info trick.

In this case the project is OK in terms of return time as well as productivity from this video card.
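The back-of-envelope numbers above can be checked with a quick shell calculation (the 6-hour single-WU time and the 1.5x-2x slowdown range are this post's assumptions, not measurements):

```shell
# Rough check of whether two concurrent WUs still fit the 24 h bonus window.
single_wu_hours=6                                # one WU alone, per the post
pair_hours_worst=$(( single_wu_hours * 2 ))      # 12 h: fully serialized
pair_hours_best=$(( single_wu_hours * 3 / 2 ))   # 9 h: good overlap at <100% load
echo "two WUs: ${pair_hours_best}-${pair_hours_worst} h, bonus window 24 h"
# prints "two WUs: 9-12 h, bonus window 24 h"
```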

BTW, your WUs are failing because you're overclocking your GTX570 too much. The higher the GPU utilization, the lower the overclock can be. GPUs cannot be overclocked as much for 5-8 hour GPUGrid tasks as for gaming (or for shorter workunits). You should also increase your GPU's fan speed: the lower the GPU temperature, the more stable the GPU will be.

I was failing due to too much OCing, that's correct. It's no big secret that different projects tolerate different levels of OCing. On PrimeGrid this card has been rock solid for 7 months now @880/0.988V, and another two @900/1.0000V.

I completed that task @860/1.0000V - way lower than PrimeGrid, but that's OK. I'll play around to find the balance between higher clocks, voltage and heat on these long-run WUs. While GPU load is way less than 100% I can slightly increase voltage and clock and still get adequate temps. When I'm done I'll flash the cards with those clocks/voltages and move to Linux.

So, my question is: where can I get app_info? Don't get me wrong: I'm NOT trying to BS the project. In fact, I want to do more.
CTAPbIi

Message 21628 - Posted: 6 Jul 2011, 19:08:08 UTC - in response to Message 21626.  

It might be on by default in Linux, I don't know. The stderr on the website for each task shows the value for SWAN_SYNC. If it says there the value is 0 and you have no "export SWAN_SYNC=0" statement anywhere then assume it's 0 by default. I put "export SWAN_SYNC=0" in ~/.bashrc. Note the period, it's a hidden file. I'm the only user on the system so that's adequate for me. If you have multiple users you'll want to put the statement in a script that runs for every user.

Here's the link to my completed task.
But I cannot see any mention of SWAN_SYNC…

I'll put "export SWAN_SYNC=0" in ~/.bashrc when I'm on Linux. Thanks a lot, man :-)

Yah, nvclock doesn't work here either. The nvidia-settings utility doesn't give GPU load either. I've been able to read GPU usage only with the nvidia-smi utility and only with the 270.xx drivers. It doesn't report usage here with the 260.xx driver.

I'd never heard of this utility; I'll try it for sure. Thanks again :-)
I installed the latest 275.xx driver, so hopefully it will work with that version.

BTW, which driver version is the fastest? 260, 270 or 275?

There's only one way to be sure and that's to try it and see. I'm sure skgiven and others have been down that road before so keep in mind what they say when you're testing. And as Retvari said, make sure your tasks return within 24 hrs. if you want the credit bonus. I think a GTX 570 will have no problem returning 2 concurrently running tasks in 24 hrs. but ya never know.

That's exactly what I want - to try it myself. No doubt skgiven and the other guys did this before, but look - we're all helping science, so we're all scientists of a sort. So let's use the scientific approach and try to reproduce the results :-)

I also think I should be pretty much OK to meet the 24-hr limit.

Also, I had trouble with the temperature on my GTX570 when I started running it on Linux. The fan would not speed up as the temp rose, so I had to force it to run at high speed permanently. That required some research and playing around. I expect you and others will run into the same problem, so I wrote up what I did and posted it here on the BOINC dev forum. I recommend getting a handle on that BEFORE you run your first CUDA task on Linux.

That's a nice manual. I tried Coolbits 1, but never 4; maybe that's why neither fan-speed control nor GPU OCing ever worked for me. When I select "Thermal Settings" nothing really happens. But I found my own way: find proper clocks/voltage in Windows using MSI Afterburner, modify the BIOS using NiBiTor, and then flash it. BTW, you cannot set the fan speed below 40% or above 85% on GTX 5x0 cards, but starting from NiBiTor version 6.0 you can adjust fan speed in the BIOS as well.

If you need a modded BIOS for your card, just let me know - I'll do it for you.

Dagorath

Message 21629 - Posted: 6 Jul 2011, 21:12:12 UTC - in response to Message 21628.  
Last modified: 6 Jul 2011, 21:13:05 UTC

BTW, which driver version is the fastest? 260, 270 or 275?


I'm not sure, I've never tested to see which is fastest.

If you need a modded BIOS for your card, just let me know - I'll do it for you.


Thank you for the offer :)
CTAPbIi

Message 21630 - Posted: 6 Jul 2011, 23:19:42 UTC

OK, I'll use 275.xx drivers, NP at all :-)
Kenneth Larsen

Message 21631 - Posted: 7 Jul 2011, 8:51:33 UTC - in response to Message 21628.  

But I cannot see any mention of SWAN_SYNC…


It only says so if you are running SWAN_SYNC; it doesn't mention anything if not (at least looking at my results).

Be careful when putting SWAN_SYNC in .bashrc: it will only work if BOINC is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/.
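A sketch of the system-wide variant (demonstrated on a temp file, since writing the real /etc/environment needs root; note that /etc/environment takes plain KEY=value lines with no `export`):

```shell
# Demo: set SWAN_SYNC system-wide so a daemonized boinc user sees it.
# The real target would be /etc/environment; a temp file keeps this runnable.
ENV_FILE="$(mktemp)"                 # stand-in for /etc/environment
echo 'SWAN_SYNC=0' >> "$ENV_FILE"    # no "export" keyword in this file
cat "$ENV_FILE"                       # prints SWAN_SYNC=0
```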
CTAPbIi

Message 21632 - Posted: 7 Jul 2011, 12:56:35 UTC - in response to Message 21631.  

It only says so if you are running SWAN_SYNC; it doesn't mention anything if not (at least looking at my results).

Stupid me... That rig is on Win7 now, and that's why SWAN_SYNC is not there :-)

Be careful when putting SWAN_SYNC in .bashrc: it will only work if BOINC is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/.

That's not a problem because there's only one user account on that PC.

BUT: I think it's worth the effort to put this in the FAQ. Maybe it's necessary to talk to someone.

All in all, where can I get an app_info to try?
Dagorath

Message 21633 - Posted: 7 Jul 2011, 16:16:35 UTC - in response to Message 21632.  
Last modified: 7 Jul 2011, 16:18:08 UTC

Be careful when putting SWAN_SYNC in .bashrc: it will only work if BOINC is running as that user.
To set it system-wide, look in /etc/environment or /etc/env.d/.

That's not a problem because there's only one user account on that PC.


If you installed BOINC from the repositories then it is NOT set up to run under your account. It runs under a special user account, usually named boinc or something similar. In that case the boinc user will not see the SWAN_SYNC environment variable if you put it in your .bashrc. If you installed BOINC with the Berkeley installer (the .sh script) then putting it in your .bashrc is adequate.
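As a sketch, one way to check which account the client actually runs under (assumes a Linux procps `ps`; `boinc` is the usual process name for repository installs):

```shell
# Find the user owning the boinc process; if it is a dedicated "boinc"
# account, variables in your own ~/.bashrc will not reach it.
boinc_user="$(ps -o user= -C boinc 2>/dev/null | head -n 1)"
if [ -n "$boinc_user" ]; then
    msg="boinc runs as: $boinc_user"
else
    msg="boinc is not running on this machine"
fi
echo "$msg"
```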
CTAPbIi

Message 21635 - Posted: 7 Jul 2011, 16:54:14 UTC - in response to Message 21633.  

If you installed BOINC from repositories then it is NOT setup to run on your account. It will run on a special boinc user's account which is usually named boinc or something similar. In that case the boinc user will not see the SWAN_SYNC environment variable if you put it in your .bashrc. If you installed BOINC from the Berkeley installer (the .sh script) then putting it in your .bashrc is adequate.

Normally I use the repos, but BOINC is that rare exception: I download it from Berkeley and then run the .sh script. So it should work for me :-)
Dagorath

Message 21639 - Posted: 8 Jul 2011, 3:39:17 UTC

Try this message from the Einstein forums. It has an app_info.xml used for running 4 Einstein tasks on a GTX480. It should give you the general idea. Maybe you can modify it to work with GPUgrid.
CTAPbIi

Message 21640 - Posted: 8 Jul 2011, 4:39:22 UTC - in response to Message 21639.  

Try this message from the Einstein forums. It has an app_info.xml used for running 4 Einstein tasks on a GTX480. It should give you the general idea. Maybe you can modify it to work with GPUgrid.

I'll try it, but... different apps take different arguments, and they're not necessarily the same across projects... I've got app_info files for MilkyWay, PrimeGrid and Collatz, but I'm not sure this will work.
Profile skgiven
Volunteer moderator
Volunteer tester
Message 21641 - Posted: 8 Jul 2011, 10:57:53 UTC - in response to Message 21640.  
Last modified: 8 Jul 2011, 14:00:41 UTC

You could try this,

<app_info>
    <app>
        <name>acemd2</name>
    </app>
    <file_info>
        <name>acemdlong_6.15_windows_intel86__cuda31.exe</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cudart32_31_9.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>cufft32_31_9.dll</name>
        <executable/>
    </file_info>
    <file_info>
        <name>tcl85.dll</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>acemd2</app_name>
        <version_num>615</version_num>
        <avg_ncpus>0.025</avg_ncpus>
        <max_ncpus>0.050</max_ncpus>
        <flops>1089000000000</flops>
        <plan_class>cuda31</plan_class>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>
        </coproc>
        <gpu_ram>1280000000</gpu_ram>
        <file_ref>
            <file_name>acemdlong_6.15_windows_intel86__cuda31.exe</file_name>
            <main_program/>
        </file_ref>
        <file_ref>
            <file_name>cudart32_31_9.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>cufft32_31_9.dll</file_name>
        </file_ref>
        <file_ref>
            <file_name>tcl85.dll</file_name>
        </file_ref>
    </app_version>
</app_info>
Profile Retvari Zoltan
Message 21642 - Posted: 8 Jul 2011, 18:35:28 UTC - in response to Message 21641.  

<name>acemdlong_6.15_windows_intel86__cuda31.exe</name>

There is no ".exe" extension at the end of the filename.

The correct line is:

<name>acemdlong_6.15_windows_intel86__cuda31</name>

Back in February I tried something similar to what you've just posted, and all I got was errors.

PS: also, there are no ".dll" files on Linux, so this app_info.xml is Windows-only.
Profile skgiven
Volunteer moderator
Volunteer tester
Message 21643 - Posted: 8 Jul 2011, 20:10:47 UTC - in response to Message 21642.  
Last modified: 8 Jul 2011, 20:11:18 UTC

I did run with an app_info file for a week or more on a dual-GPU setup, but that was over 6 months ago. Others did this too. For a short time it was worth it, but that was before 6.14 and only when there were lots of low-GPU-utilization tasks around (~50%). I did post a bit about this and sent several PMs about my findings; others concurred. I did not start a thread about it because using app_info at GPUGrid is definitely not the recommended way to go; by and large the tasks are fast enough. You have to be savvy and hands-on when setting it up, and then the opposite - hands off, just let it run.

You could create a Linux app_info file, but you would have to alter several things - obviously the app, which is still the 6.14 (cuda31) version. That was not changed because thread priority does not work the same way on Linux, AFAIK. No idea if or when the team will update the Linux app, but given that Linux is faster anyway, I don't see the point in even trying an app_info setup.

PS. You can tell my app_info was Windows only by the app name - acemdlong_6.15_windows_intel86__cuda31 - quite apart from the Windows executable file extension :))

©2025 Universitat Pompeu Fabra