Message boards : Number crunching : Testing acemd3 windows (thread no longer relevant)

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 52190 - Posted: 5 Jul 2019 | 17:06:07 UTC
Last modified: 5 Jul 2019 | 17:09:50 UTC

Time to test acemd3 for Windows. It worked locally. Now I've sent out a few WUs named ...TEST31... . There are a few successes, but also several failures.

Common errors appear to be

* ERR_RESULT_START couldn't start app: CreateProcess() failed - Access is denied.
* 195 (0xc3) Unknown error number # Engine failed: Error compiling program: nvrtc: error: invalid value for --gpu-architecture (-arch)

The workunits SHOULD only be sent to hosts with CUDA 9.2 support (Kepler and beyond, driver >= 397.44) as per [1]. Each WU uses NVRTC to recompile its kernels at runtime: this, plus a driver/card/architecture mismatch, may explain the second error.

As for the first error, I still have no clue.

[1] https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
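
For anyone who wants to check a host against that driver minimum before the next test batch, here is a minimal sketch (assuming nvidia-smi is on the PATH; the 397.44 figure is just the CUDA 9.2 minimum from [1], and the same query also works from a Windows command prompt):

#!/bin/bash
# compare the installed driver version against the CUDA 9.2 minimum (397.44)
DRV=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
MIN=397.44
if [ "$(printf '%s\n' "$MIN" "$DRV" | sort -V | head -n1)" = "$MIN" ]; then
    echo "driver $DRV meets the $MIN minimum"
else
    echo "driver $DRV is older than the $MIN minimum"
fi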

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 52191 - Posted: 5 Jul 2019 | 17:11:57 UTC - in response to Message 52190.

Also, I had to assume that SystemRoot is C:\Windows.

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 52192 - Posted: 5 Jul 2019 | 17:19:53 UTC - in response to Message 52191.

Error 195 must be the 20x0 (Turing) cards! We need CUDA 10 for those.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 52193 - Posted: 5 Jul 2019 | 19:13:23 UTC

Thanks Toni, just in time for a more interesting period in the GPU market (Turing SUPER refresh)!

MrS
____________
Scanning for our furry friends since Jan 2002

Keith Myers
Message 52196 - Posted: 6 Jul 2019 | 3:01:34 UTC

I just noticed I'm crunching with a new acemd3 2.04 application. Still only beta tasks, but all of them have crunched successfully under Linux with CUDA 10.

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 52199 - Posted: 6 Jul 2019 | 7:24:19 UTC - in response to Message 52196.
Last modified: 6 Jul 2019 | 7:25:11 UTC

Yes, acemd3 for Linux is working, as far as I can tell.

Windows version is next.

flashawk
Message 52200 - Posted: 6 Jul 2019 | 15:33:58 UTC - in response to Message 52193.

Thanks Toni, just in time for a more interesting period in the GPU market (Turing SUPER refresh)!

MrS


Is this for the newer Nvidia 20XX cards? This has been the main reason I've been holding off on water-cooling the rest of my GPUs.

Keith Myers
Message 52201 - Posted: 6 Jul 2019 | 19:23:05 UTC - in response to Message 52200.

Yes, these new beta wrapper apps work correctly with Turing cards.

Aurum
Message 52202 - Posted: 6 Jul 2019 | 22:54:51 UTC - in response to Message 52199.

Yes. acemd3 linux is working for what I can tell.

I saw this post and thought it was time to come back. My few Win7 computers are getting work, but not my Linux rigs, even though I checked every box in my preferences.
Too soon for a steady flow of work???

____________

Keith Myers
Message 52203 - Posted: 6 Jul 2019 | 23:22:43 UTC - in response to Message 52202.

Yes, Toni only threw out another limited run of beta tasks. If you didn't grab them right away, you missed them.

I gather we are still a long way from regular generation of non-beta acemd3 tasks.

I have a hunch we still have another period of beta work to come for further testing of the Windows application. I don't expect the app to be mainlined until both the Linux and Windows apps are validated.

Profile JStateson
Message 52245 - Posted: 13 Jul 2019 | 18:04:41 UTC - in response to Message 52199.

Yes. acemd3 linux is working for what I can tell.

Windows version is next.


I can't seem to get work for my Nvidia Linux system. I just converted it from Windows 10 to Ubuntu 18.04, as Windows could not handle my mix of Nvidia boards on risers.
tb85-nvidia

67 GPUGRID 7/13/2019 1:00:27 PM Sending scheduler request: To fetch work.
68 GPUGRID 7/13/2019 1:00:27 PM Requesting new tasks for NVIDIA GPU
69 GPUGRID 7/13/2019 1:00:29 PM Scheduler request completed: got 0 new tasks
70 GPUGRID 7/13/2019 1:00:29 PM No tasks sent
71 GPUGRID 7/13/2019 1:00:29 PM No tasks are available for Short runs (2-3 hours on fastest card)
72 GPUGRID 7/13/2019 1:00:29 PM No tasks are available for Long runs (8-12 hours on fastest card)
73 GPUGRID 7/13/2019 1:00:29 PM No tasks are available for New version of ACEMD
74 GPUGRID 7/13/2019 1:00:29 PM No tasks are available for Anaconda Python 3 Environment


I am guessing the Linux app is not ready?

Jim1348
Message 52246 - Posted: 13 Jul 2019 | 19:14:11 UTC - in response to Message 52245.

I am guessing the Linux app is not ready?

My limited understanding is that it is ready, but they are waiting for the Windows version in order to release them both at the same time.

It is too hot for me anyway. They can wait until September.

Profile JStateson
Message 52263 - Posted: 14 Jul 2019 | 19:10:53 UTC - in response to Message 52246.

It is too hot for me anyway. They can wait until September


I hear you.

I went to an open-frame mining rig to help with cooling. Windows choked on the 5th GPU, so I switched to Ubuntu with a total of 6 GPUs. The Nvidia driver did not spin the fans up enough to keep them cool, and I spent 2 days figuring out how to enable fan control. I'm going to make a note here for myself and anyone else:

sudo apt install nvidia-driver-390
# the above installs the proprietary driver
sudo nvidia-xconfig -a --cool-bits=4
# the above created my 6 GPU entries and enabled fan control for all 6;
# it needs to be run every time a board is added or removed
nvidia-settings &
# the above brings up the 6 devices where the fan speed can be set
# hopefully there is a way to remember the setting after a reboot
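
On remembering the setting after a reboot: nothing set through nvidia-settings persists on its own, so one common workaround is to put the commands in a small script and run it when the desktop session starts (nvidia-settings needs a running X server). A minimal sketch, assuming two GPUs with one fan each and an 85% target, all of which are only placeholders:

#!/bin/bash
# reapply manual fan control after each login; run this from
# Startup Applications (or a similar autostart mechanism), not from
# early boot, because nvidia-settings needs the X server to be up
# GPU/fan indices and the 85% target are placeholders - adjust to your rig
for GPU in 0 1; do
    /usr/bin/nvidia-settings -a "[gpu:${GPU}]/GPUFanControlState=1"
    /usr/bin/nvidia-settings -a "[fan:${GPU}]/GPUTargetFanSpeed=85"
done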

Jim1348
Message 52264 - Posted: 14 Jul 2019 | 19:23:04 UTC - in response to Message 52263.

NVidia driver did not spin the fans enough to cool. Spend 2 days figuring out how to enable fan control.

Thanks. I normally don't bother with controlling fans on my Ubuntu machines, but that may be because I didn't know of any way to do it.

Profile Retvari Zoltan
Message 52265 - Posted: 14 Jul 2019 | 19:33:42 UTC - in response to Message 52263.

sudo apt install nvidia-driver-390
# the above installs the proprietary driver

I suggest (suspend GPU tasks first):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-driver-430

If the last command fails, then run:

sudo apt-get install libnvidia-compute-430

and retry the previous one. This way you'll have CUDA 10.1-capable drivers.
(If you prefer the GUI, you can use only the first command, then go Show Applications -> Software & Updates -> Additional Drivers -> select the 430 driver and apply the changes, then wait for the driver download.)
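
Once the new driver is active (after a reboot), it may be worth checking what it actually reports before resuming GPU tasks. A quick check, assuming nvidia-smi was installed along with the driver:

# show the GPU name and the installed driver version
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
# on 410-series and newer drivers the nvidia-smi banner also shows
# the highest CUDA version the driver supports
nvidia-smi | head -n 4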

Zalster
Message 52266 - Posted: 14 Jul 2019 | 20:43:36 UTC - in response to Message 52263.

It is too hot for me anyway. They can wait until September


I hear you.

Went to open frame mining rig to help with cooling. Windows choked with 5th gpu.


I'm with both of you there. It's hitting over 100F every day, so I shut everything down.

Sounds like you need a bash script to override the Nvidia defaults and turn the fans up to 100% all the time.

Keith was kind enough to send me his, but you need to make several adjustments to Ubuntu to use it.
____________

Profile JStateson
Message 52267 - Posted: 14 Jul 2019 | 22:51:57 UTC - in response to Message 52265.

sudo apt install nvidia-driver-390
# the above installs the proprietary driver

I suggest (suspend GPU tasks first):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-driver-430

If the last command fails, then run:

sudo apt-get install libnvidia-compute-430

and retry the previous one. This way you'll have CUDA 10.1-capable drivers.
(If you prefer the GUI, you can use only the first command, then go Show Applications -> Software & Updates -> Additional Drivers -> select the 430 driver and apply the changes, then wait for the driver download.)


I got errors from the Nvidia download. My attempt
sudo sh ./NVIDIA-Linux-x86_64-430.34.run
failed within seconds. I then read the instructions, which recommended using a repository and NOT using their download. The best I could find by googling was that 390 driver, but I also read that it fully supports the "10" series of boards, and I don't have any newer boards. I will try your repository when I get to a stopping point (SETI offline).

I have since discovered the "SETI special app" for Linux, which can do 6-8 work units in the time it would normally take a GTX 1070 to do a single one. I only looked into this app because the Linux app is not working on GPUGrid. I will probably crunch on SETI with all 6 of my "10" series cards; maybe I can get into the top 10. I posted some performance graphs here: https://setiathome.berkeley.edu/forum_thread.php?id=81271 If I can get into the top 3, I may not come back to GPUGrid for a while.

Keith Myers
Message 52269 - Posted: 15 Jul 2019 | 20:05:55 UTC - in response to Message 52266.

It is too hot for me anyway. They can wait until September


I hear you.

Went to open frame mining rig to help with cooling. Windows choked with 5th gpu.


I'm with both of you there. Hitting over 100F everyday. Shut everything down.

Sounds like you need a bash file to override the nvidia to turn the fans up to 100% all the time.

Keith was kind enough to send me his but you need to make several adjustments to Ubuntu to use them.

Just run a bash script each time you boot the host to set your overclocking and fan control, once you have applied the coolbits tweak in xorg.conf.

This is the one I use on my daily driver. It is all accomplished with nvidia-settings, plus nvidia-smi if you are power-limiting.

#!/bin/bash

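# lock each GPU into its maximum-performance PowerMizer mode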
/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:2]/GPUPowerMizerMode=1"

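# cap the board power of GPUs 0 and 1 at 215 W (nvidia-smi needs root for this)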
nvidia-smi -i 0 -pl 215
nvidia-smi -i 1 -pl 215

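# take manual control of the fans and run them all at 100%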
/usr/bin/nvidia-settings -a "[gpu:0]/GPUFanControlState=1"
/usr/bin/nvidia-settings -a "[fan:0]/GPUTargetFanSpeed=100"
/usr/bin/nvidia-settings -a "[fan:1]/GPUTargetFanSpeed=100"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUFanControlState=1"
/usr/bin/nvidia-settings -a "[fan:2]/GPUTargetFanSpeed=100"
/usr/bin/nvidia-settings -a "[fan:3]/GPUTargetFanSpeed=100"
/usr/bin/nvidia-settings -a "[gpu:2]/GPUFanControlState=1"
/usr/bin/nvidia-settings -a "[fan:4]/GPUTargetFanSpeed=100"

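# dim the GPU logo LEDs (attribute not exposed on Turing cards, see the note below)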
/usr/bin/nvidia-settings -a "[gpu:0]/GPULogoBrightness=20"
/usr/bin/nvidia-settings -a "[gpu:1]/GPULogoBrightness=20"
/usr/bin/nvidia-settings -a "[gpu:2]/GPULogoBrightness=20"

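# apply memory/core clock offsets; the [N] index selects the performance level being tuned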
/usr/bin/nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[4]=600" -a "[gpu:0]/GPUGraphicsClockOffset[4]=60"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUMemoryTransferRateOffset[4]=600" -a "[gpu:1]/GPUGraphicsClockOffset[4]=60"
/usr/bin/nvidia-settings -a "[gpu:2]/GPUMemoryTransferRateOffset[3]=2000" -a "[gpu:2]/GPUGraphicsClockOffset[3]=30"


It only got tricky with the new Turing cards, which have TWO fan interfaces since they have two fans on each card. They also have FOUR performance levels compared to Pascal's three. I had to figure out that you need to keep incrementing the fan index to properly identify each fan for control, and you also need to change the [X] number to identify which performance level you are applying the overclock to.

This example is for two RTX 2080s and one GTX 1080. I should mention that the GPULogoBrightness command DOES NOT work on the Turing cards; that attribute is not exposed on Turing anymore. It works fine for Maxwell and Pascal, though. So for the Turing cards you either have to live with the logo being full-on bright or use various layers of opaque tape to cover up the logo.
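
If anyone needs to work out which [gpu:N] and [fan:N] indices their own cards map to, nvidia-settings can list the objects the driver exposes. A quick check from inside the X session, for example:

# list the GPU and fan objects so the indices used above can be matched up
nvidia-settings -q gpus
nvidia-settings -q fans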


ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 52271 - Posted: 15 Jul 2019 | 21:08:55 UTC

Guys, you're having a nice discussion here but please don't take this thread completely off-topic - important news could appear here.

MrS
____________
Scanning for our furry friends since Jan 2002

Zalster
Message 52272 - Posted: 15 Jul 2019 | 22:37:58 UTC - in response to Message 52271.

Apes,

Maybe move these last few discussions to a new thread, something like "Turing adjustments for heat"?
____________

Keith Myers
Message 52274 - Posted: 15 Jul 2019 | 23:33:35 UTC - in response to Message 52271.

Guys, you're having a nice discussion here but please don't take this thread completely off-topic - important news could appear here.

MrS

Bah humbug. If you find these couple of posts so offensive, move them to what you consider an appropriately labelled thread, Mr. Moderator.

I was just trying to offer some help with a poster's question.

flashawk
Message 52302 - Posted: 18 Jul 2019 | 13:29:49 UTC - in response to Message 52274.

Bah humbug.


How much is your electric bill each month?

Keith Myers
Message 52309 - Posted: 19 Jul 2019 | 4:24:18 UTC - in response to Message 52302.

Bah humbug.


How much is your electric bill each month?

I assume this was directed at me. About $650.

PappaLitto
Message 52312 - Posted: 19 Jul 2019 | 15:20:59 UTC - in response to Message 52309.

Just curious Keith, why under your computers does it say you have 64 2080s in one system? "[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26" And 48 GPUs in the others?

Profile Retvari Zoltan
Message 52314 - Posted: 19 Jul 2019 | 19:28:54 UTC - in response to Message 52312.
Last modified: 19 Jul 2019 | 19:32:54 UTC

Just curious Keith, why under your computers does it say you have 64 2080s in one system? "[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26" And 48 GPUs in the others?
It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. The SETI@Home project sends 100-100 workunits at max for CPU and GPU. This is fair enough for the CPU, but the CUDA10 special app finish a workunit in ~45 seconds on a GTX 2080Ti, so 100 workunits is done in less than an hour (which is inadequately low, especially for the regular outage on every tuesday). This way a "hacked" host can queue up to 6400 workunits for the GPU(s), which is adequate to sustain work during outages for such a fast processing speed.

Profile JStateson
Message 52315 - Posted: 19 Jul 2019 | 19:52:42 UTC - in response to Message 52314.

Just curious Keith, why under your computers does it say you have 64 2080s in one system? "[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26" And 48 GPUs in the others?
It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. The SETI@Home project sends 100-100 workunits at max for CPU and GPU. This is fair enough for the CPU, but the CUDA10 special app finish a workunit in ~45 seconds on a GTX 2080Ti, so 100 workunits is done in less than an hour (which is inadequately low, especially for the regular outage on every tuesday). This way a "hacked" host can queue up to 6400 workunits for the GPU(s), which is adequate to sustain work during outages for such a fast processing speed.


I discovered that some time ago in a post Keith made over at SETI. The only problem I have with this hack is that if something goes wrong, thousands of work units could error out in a few minutes. From what I see, his systems are well built and unlikely to have problems.

I remember years ago that it was possible to reject downloads from SETI that "took too long to finish", with the tasks dumped by a script. I thought that was cheating. On this project one can select the 2-3 hour or the 8-12 hour runs, and there is no need to go to extremes to get ahead fast on credits.

Keith Myers
Message 52317 - Posted: 20 Jul 2019 | 1:47:20 UTC - in response to Message 52314.

Just curious Keith, why under your computers does it say you have 64 2080s in one system? "[64] NVIDIA GeForce RTX 2080 (4095MB) driver: 430.26" And 48 GPUs in the others?
It's a "hacked" BOINC manager for SETI@home and the CUDA10 special app. The SETI@Home project sends 100-100 workunits at max for CPU and GPU. This is fair enough for the CPU, but the CUDA10 special app finish a workunit in ~45 seconds on a GTX 2080Ti, so 100 workunits is done in less than an hour (which is inadequately low, especially for the regular outage on every tuesday). This way a "hacked" host can queue up to 6400 workunits for the GPU(s), which is adequate to sustain work during outages for such a fast processing speed.

It's not the Manager, it's the client that has been modified.
This came about when the SETI Tuesday maintenance outages were lasting 14-16 hours or longer.

It's not needed as much now that the outages only last the standard 5 hours.

You are correct: all it takes is for the CUDA driver to go missing while you aren't looking at the host, and it will zip through the cache in a matter of minutes. I did something stupid just the other day when I updated while BOINC was running and did not realize the update was going to update the Nvidia drivers. It errored out a hundred tasks in less than a minute before the driver got reloaded. So you have to be aware of what's going on and have well-running systems to begin with.

I spoof the maximum number of cards (64) that BOINC allows on the four-card hosts, and (48) cards on the three-card hosts. I could probably pull those back to 36 and 24 to make it through Tuesdays now. One of the other advantages is that I don't have to fight for tasks with all the other empty hosts when the project comes back. In fact, I don't even report or ask for tasks until the ready-to-send buffer gets refilled and the servers have settled back to normal after the feeding frenzy.

Aurum
Message 52376 - Posted: 30 Jul 2019 | 17:21:21 UTC
Last modified: 30 Jul 2019 | 17:22:45 UTC

Hey Toni, I'm getting tired of doing astronomy.

Give me some protein to chew on !!!
____________

Keith Myers
Message 52377 - Posted: 1 Aug 2019 | 20:13:45 UTC - in response to Message 52376.

Hey Toni, I'm getting tired of doing astronomy.

Give me some protein to chew on !!!

+ 1
Ha ha ha LOL. Love it.

STARBASEn
Message 52380 - Posted: 3 Aug 2019 | 17:15:24 UTC

+2
Yes, I wish they would at least release the Linux version of acemd3. I've been doing E@H 100% since mid-May, although I do love astronomy. I don't know what the Windows/Linux ratio is here, but I am sure they are missing out on a lot of WU work by keeping the Linux app offline until the Windows version is ready as well.

Profile ServicEnginIC
Message 52381 - Posted: 3 Aug 2019 | 19:37:24 UTC
Last modified: 3 Aug 2019 | 19:39:24 UTC

Don't know what the Windows/Linux ratio is here but I am sure they are missing out on a lot WU work keeping the Linux app offline until ready for Windows as well.

As seen at the following link, in January we were celebrating reaching 4 PetaFLOPS of computing power.
http://www.gpugrid.net/forum_thread.php?id=4880#51189
Now it has dropped to about half that value...
I'd be very surprised if a new version were released in August, because it is usually a low-activity month in university environments :-|

STARBASEn
Message 52383 - Posted: 4 Aug 2019 | 21:19:23 UTC

Ah, that kind of implies to me that the Windows/Linux ratio here is roughly 1:1. That means GPUGrid is losing about half of its potential WU production by keeping the Linux machines inactive. Hey TONI, please!! :)

mmonnin
Message 52384 - Posted: 4 Aug 2019 | 22:10:17 UTC

When the Linux app first went down, I noted here somewhere that Free-DC saw a drop of about 1/3. Maybe some of that was because Windows PCs now had more of the task pool.

It is also summer, when people are on vacation and shut down their PCs more, both because they are away and because of the heat.

Aurum
Message 52434 - Posted: 9 Aug 2019 | 14:57:24 UTC

Now that the Windows license appears to have expired, it's time to shut down the old applications and turn on the new Linux application.
____________

Keith Myers
Message 52439 - Posted: 9 Aug 2019 | 22:04:40 UTC - in response to Message 52434.

Doubtful that happens, as they still haven't released a working Windows acemd3 wrapper app to test.

Billy Ewell 1931
Message 53219 - Posted: 30 Nov 2019 | 2:28:39 UTC
Last modified: 30 Nov 2019 | 2:30:37 UTC

I moved the following quoted posting from the adjoining forum as it obviously fits this subject matter more closely, and it would have been lost without being answered where it was originally posted. Billy Ewell 1931

I think the definitions of "long-run" tasks and "short-run" tasks have gone away with their applications. Now only New ACEMD3 tasks are available, now and in the future.

@TONI: would you please address the above assumption? I have my RTX 2080 set for ACEMD3 only and my 2 GTX 1060s set for "Long" and "Short" WUs only, but my 1060s have not received a task in many days. Also, why not update the GPUGrid preferences selection options to reflect reality? I realize this is not the best forum to address the situation, but maybe it will be answered anyway. Billy Ewell 1931.

Billy Ewell 1931
Message 53220 - Posted: 30 Nov 2019 | 3:32:47 UTC - in response to Message 53219.
Last modified: 30 Nov 2019 | 3:34:40 UTC

I just did something that answered my own question: I modified my GPUGrid preferences on my 2 Windows 10 64-bit Xeon and i3 computers, each equipped with one GTX 1060. Both computers have now joined my Windows 10 64-bit i7 RTX 2080 in happily crunching GPUGrid work units under the current title ACEMD3. By the way, I excluded all other options in the preferences menus, even though I understand it probably does not matter.

Profile God is Love, JC proves it...
Message 53314 - Posted: 9 Dec 2019 | 16:49:38 UTC

Is anyone else having trouble with a LOT of WUs erroring out on slightly older GPUs?
My new 1660 Ti is doing fine, but my (not all that old) 950M has a VERY high error rate after running for many, many hours (after finishing, it would seem):


21553648 16894732 5 Dec 2019 | 23:25:58 UTC 8 Dec 2019 | 10:03:27 UTC Error while computing 210,744.15 208,424.00 --- New version of ACEMD v2.10 (cuda101)
21549725 16891188 3 Dec 2019 | 9:30:40 UTC 5 Dec 2019 | 23:30:57 UTC Completed and validated 221,732.69 219,584.90 61,000.00 New version of ACEMD v2.10 (cuda101)
21544426 16886529 30 Nov 2019 | 19:54:47 UTC 3 Dec 2019 | 9:25:02 UTC Error while computing 213,518.60 211,953.20 --- New version of ACEMD v2.10 (cuda101)
21532174 16876007 28 Nov 2019 | 6:09:16 UTC 30 Nov 2019 | 20:19:30 UTC Error while computing 221,587.20 219,136.50 --- New version of ACEMD v2.10 (cuda101)
21509135 16855905 23 Nov 2019 | 4:50:17 UTC 28 Nov 2019 | 6:09:16 UTC Completed and validated 151,235.95 150,607.10 61,000.00 New version of ACEMD v2.10 (cuda101)
21507371 16854655 22 Nov 2019 | 21:55:11 UTC 25 Nov 2019 | 6:44:29 UTC Error while computing 203,591.42 202,247.10 --- New version of ACEMD v2.10 (cuda101)


12/8/2019 11:33:58 PM
CUDA: NVIDIA GPU 0: GeForce GTX 950M (driver version 441.20, CUDA version 10.2, compute capability 5.0, 2048MB, 1682MB available, 1188 GFLOPS peak)
OpenCL: NVIDIA GPU 0: GeForce GTX 950M (driver version 441.20, device version OpenCL 1.2 CUDA, 2048MB, 1682MB available, 1188 GFLOPS peak)
OpenCL: Intel GPU 0: Intel(R) HD Graphics 530 (driver version 21.20.16.4550, device version OpenCL 2.0, 3227MB, 3227MB available, 202 GFLOPS peak)
OpenCL CPU: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 6.8.0.392, device version OpenCL 2.0 (Build 392))
Host name: Laptop-6AQTD8V-VCP-LLP-PhD
Processor: 8 GenuineIntel Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz [Family 6 Model 94 Stepping 3]
Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 fma cx16 sse4_1 sse4_2 movebe popcnt aes f16c rdrandsyscall nx lm avx avx2 vmx tm2 pbe fsgsbase bmi1 hle smep bmi2
OS: Microsoft Windows 10: Core x64 Edition, (10.00.18363.00)
Memory: 7.90 GB physical, 20.90 GB virtual
Disk: 929.69 GB total, 843.01 GB free

____________
I think ∴ I THINK I am
My thinking neither is the source of my being
NOR proves it to you
God Is Love, Jesus proves it! ∴ we are

Profile God is Love, JC proves it...
Message 53315 - Posted: 9 Dec 2019 | 19:17:45 UTC - in response to Message 53314.

A couple of them, resultid=21544426 and resultid=21532174, had said:
"Detected memory leaks!"
So I ran extensive memory diagnostics, but no errors were reported by windoze.
BOINC did not indicate whether these were RAM or GPU 'memory leaks'.

I am also getting
upload failure: <file_xfer_error>
<file_name>initial_1132-ELISA_GSN0V1-6-100-RND5960_0_0</file_name>
<error_code>-240 (stat() failed)</error_code>
</file_xfer_error>
https://gpugrid.net/result.php?resultid=21544426

<file_name>test324-TONI_GSNTEST3-16-100-RND7959_0_0</file_name>
<error_code>-240 (stat() failed)</error_code>
https://gpugrid.net/result.php?resultid=21532174

(The next one said nothing about 'memory leaks' but still gave)
upload failure: <file_xfer_error>
<file_name>initial_1497-ELISA_GSN4V1-20-100-RND8978_0_0</file_name>
<error_code>-240 (stat() failed)</error_code>
</file_xfer_error>

The other projects I am running currently (Universeathome, collatz) have no problems with file uploads.

LLP, PhD

____________
I think ∴ I THINK I am
My thinking neither is the source of my being
NOR proves it to you
God Is Love, Jesus proves it! ∴ we are

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 53318 - Posted: 9 Dec 2019 | 22:13:01 UTC - in response to Message 53315.

"memory leaks" messages are always present in windows - they are just an unfortunate printout, not errors themselves. If there is an error message, it will be somewhere else in the text.

Mobile cards are not suitable for crunching. It's surprising that it even starts. See the FAQ item.

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 53319 - Posted: 9 Dec 2019 | 22:13:46 UTC - in response to Message 53318.

Also, this thread is old and not relevant any more.

Profile God is Love, JC proves it...
Message 53323 - Posted: 10 Dec 2019 | 0:49:00 UTC - in response to Message 53319.
Last modified: 10 Dec 2019 | 1:03:34 UTC

thread is old

It's what I found...
Furthermore, the last post before mine was dated 30 Nov 2019, which is not much more than a week before my post.

Profile God is Love, JC proves it...
Message 53324 - Posted: 10 Dec 2019 | 0:53:21 UTC - in response to Message 53318.

Mobile cards are not suitable for crunching

Strange... I've been running WUs on this 950M for some two years.

Plus, my understanding is that Nvidia has stopped making any naming distinction between laptop and desktop GPUs, as their performance is, of late, very comparable.

LLP, PhD
____________
I think ∴ I THINK I am
My thinking neither is the source of my being
NOR proves it to you
God Is Love, Jesus proves it! ∴ we are
