Message boards : Server and website : GPU Results ready to send - number dwindling
Author | Message |
---|---|
As of 11:30pm EST, I see approx. 150 GPU Results ready to send. I imagine those will go quickly given there are 4,000 in progress. | |
ID: 18394 | Rating: 0 | rate: / Reply Quote | |
It may just be a reporting error? | |
ID: 18395 | Rating: 0 | rate: / Reply Quote | |
It's zero at the moment... | |
ID: 18396 | Rating: 0 | rate: / Reply Quote | |
We recently discontinued some batches of WUs, and some workflows are coming to an end. | |
ID: 18397 | Rating: 0 | rate: / Reply Quote | |
The sooner the better :) | |
ID: 18398 | Rating: 0 | rate: / Reply Quote | |
Here is another user who would like a bigger cache. Maybe 5-6 WUs would be good. | |
ID: 18399 | Rating: 0 | rate: / Reply Quote | |
The sooner the better :) My fastest machine ran dry a few hours ago. It's going to another project for a while. A bigger queue for faster GPUs would help. | |
ID: 18400 | Rating: 0 | rate: / Reply Quote | |
I'm getting: | |
ID: 18402 | Rating: 0 | rate: / Reply Quote | |
GPUGRID 8/26/2010 2:09:26 PM Requesting new tasks for CPU and GPU | |
ID: 18403 | Rating: 0 | rate: / Reply Quote | |
I'd like to see at least a 24-hour cache just in case something goes wrong with the server. Currently, if GPUGRID goes down for something, I'm dry in 5-6 hours with this ridiculous maximum queue of 4 WUs. This is the most restricted DC project out there when it comes to the size of the buffer. | |
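For reference, the client-side work buffer can be raised in the BOINC preferences or with a global_prefs_override.xml in the BOINC data directory. A minimal sketch is below; the 1.0/0.5-day values are purely illustrative, and the project's own per-GPU task limit still caps how many WUs you actually receive regardless of the local buffer:

```xml
<!-- global_prefs_override.xml (placed in the BOINC data directory) - illustrative values -->
<global_preferences>
   <work_buf_min_days>1.0</work_buf_min_days>
   <work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>
```

After saving the file, tell the client to re-read local preferences (or simply restart BOINC) for the new buffer to take effect.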
ID: 18404 | Rating: 0 | rate: / Reply Quote | |
Bad timing for me... All my Linux + Nvidia machines run either GPUGrid or GPUGrid + DNETC. Guess what's down for maintenance? Yup, DNETC. But so far only one GT8800 is high and dry. How it ran dry while the GTX/GTS cards are still working I don't know; I'm sure it's just a coincidence of timing, but I do need to see why it didn't have more DNETC queued up. DNETC is supposed to be back up at 10pm (CDT) tonight, so there's some relief coming soon if the Linux boys wanna play with an "Alpha" project. | |
ID: 18405 | Rating: 0 | rate: / Reply Quote | |
As ignasi said, new jobs are to be submitted between today and tomorrow. So expect some more intermittent periods/outages for a few hours. | |
ID: 18406 | Rating: 0 | rate: / Reply Quote | |
I'd sort of wished they could announce downtime. I've often wanted to experiment with other Linux distributions but don't, because it would mean downtime on my side. But I need to prepare & can't always do this on the fly. | |
ID: 18407 | Rating: 0 | rate: / Reply Quote | |
Looks like there is still a shortage, but it was not a complete outage. | |
ID: 18408 | Rating: 0 | rate: / Reply Quote | |
Hi | |
ID: 18409 | Rating: 0 | rate: / Reply Quote | |
I have a full complement of tasks in progress again. | |
ID: 18411 | Rating: 0 | rate: / Reply Quote | |
On my GT240 I got one of the new tasks, named 29-IBUCH_revhi_TRYP_100827-0-10-RND3357. | |
ID: 18412 | Rating: 0 | rate: / Reply Quote | |
These are a special case. We needed to make them longer to speed up the calculations. We are waiting for these results to finish a paper. | |
ID: 18414 | Rating: 0 | rate: / Reply Quote | |
You will still get the 25% bonus, and as these are about 30% faster you should get slightly more points than you would crunching normal work units for the same time period: 125% × 1.3 = 162.5% > 150%. | |
ID: 18421 | Rating: 0 | rate: / Reply Quote | |
[As of 27 Aug 2010 18:42:23 UTC] | |
ID: 18423 | Rating: 0 | rate: / Reply Quote | |
I'm sure they are working on it. Two unforeseen big project batches finishing at once was the source of the shortage. My cache is full at the minute, but hopefully some more tasks will trickle into the feeders overnight to keep the faster cards ticking over. | |
ID: 18424 | Rating: 0 | rate: / Reply Quote | |
..... – but even at that, your 33h seems a bit long to me; about 4h too long. I think you're referring to the 33½ hours I mentioned. My job is now at 24%; extrapolated to 100% (elapsed time ÷ 0.24) that means 33.1 hours. I freed 1 core for about 3 hours, but that doesn't make much difference. I've clocked my GT240 with GDDR5 at Core=683, Memory=1876 and Shader=1664. Perhaps my CPU is not that fast: 2.08GHz / FSB800. | |
ID: 18425 | Rating: 0 | rate: / Reply Quote | |
skgiven | |
ID: 18426 | Rating: 0 | rate: / Reply Quote | |
If there were only Collatz and DNETC to choose from, I would stop crunching. I don't know when more tasks will be ready, so I can't offer much more advice other than to look and see if you have tasks and make your own decision based on that. Perhaps raising your cache will help a bit, should tasks appear. If I had to guess I would say tonight will be similar to last night: some tasks will be released, but that is only based on last night - I picked up tasks at about 3am, 5am and 8.30am this morning. If you do decide to crunch elsewhere, you could set your resource share for the other projects to a very low number (1% or so). It might be an idea to close and re-open BOINC if you do that. | |
ID: 18427 | Rating: 0 | rate: / Reply Quote | |
Got one "Fatty" WU.......now dry again. I don't understand how molecular biology simulations could EVER run out of work. It makes absolutely NO SENSE WHATSOEVER. There is enough calculation that needs to be done in this area to keep computers busy for at least the next 10,000 years (by today's computing technology capabilities). | |
ID: 18434 | Rating: 0 | rate: / Reply Quote | |
Even with infinite manpower, unlimited financial resources and perfect management we would still not be able to crunch now and again; your Internet would be out, your computer would crash or your GPU would fail tasks. | |
ID: 18437 | Rating: 0 | rate: / Reply Quote | |
I don't understand how molecular biology simulations could EVER run out of work. It makes absolutely NO SENSE WHATSOEVER. There is enough calculation that needs to be done in this area to keep computers busy for at least the next 10,000 years (by today's computing technology capabilities). You're right about that. This kind of simulation practically needs infinite time to finish. But as far as I know, in this project each task depends on a previous one, as every task we crunch covers 5ns (a fatty task covers 10ns) of the given chemical reaction - so, for example, a 1µs trajectory would need 200 sequential 5ns tasks. In that way this project can run out of work, but of course only temporarily. | |
ID: 18438 | Rating: 0 | rate: / Reply Quote | |
You're right about that. This kind of simulation practically needs infinite time to finish. But as far as I know, in this project each task depends on a previous one, as every task we crunch covers 5ns (a fatty task covers 10ns) of the given chemical reaction. In that way this project can run out of work, but of course only temporarily. Maybe they could set up different bins for the ns lengths to be crunched. The default would be 5ns, but they could have bins for 10, 15, 20, 25... all the way up to whatever is necessary for the entire simulation to meet their requirements. For the higher bins, though, they'd need an auto-reload for failed WUs (they actually should have that now!) so that if a computation error occurs, the task just restarts from the last checkpoint saved to disk (the BOINC default checkpoint interval is 60 seconds). This way the most time anyone would lose on any given computation error is just under 60 seconds of computing time instead of losing the entire WU, which is what happens currently. | |
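For what it's worth, the standard BOINC API already supports exactly this checkpoint-and-resume pattern; whether a failed or interrupted task resumes or errors out is up to the application and the server-side policy. A rough sketch of the generic pattern (illustrative only, not GPUGRID's actual ACEMD code; the state-file helpers are hypothetical) might look like:

```cpp
// Generic BOINC checkpointing sketch - assumes linking against the BOINC API library.
#include <cstdio>
#include "boinc_api.h"

const int TOTAL_STEPS = 1000000;

// Hypothetical helper: read the last saved step, or start at 0 if no checkpoint exists.
void load_checkpoint(int &step) {
    step = 0;
    FILE *f = fopen("state.dat", "r");
    if (!f) return;
    fscanf(f, "%d", &step);
    fclose(f);
}

// Hypothetical helper: write the current step to disk.
void save_checkpoint(int step) {
    FILE *f = fopen("state.dat", "w");
    fprintf(f, "%d", step);
    fclose(f);
}

int main() {
    boinc_init();
    int step;
    load_checkpoint(step);                      // on restart, resume from the last checkpoint
    for (; step < TOTAL_STEPS; step++) {
        // ... do one simulation step here ...
        if (boinc_time_to_checkpoint()) {       // true roughly once per checkpoint interval
            save_checkpoint(step);
            boinc_checkpoint_completed();
        }
        boinc_fraction_done((double)step / TOTAL_STEPS);
    }
    boinc_finish(0);                            // 0 = success
}
```

In a real app the file would be resolved with boinc_resolve_filename and written atomically, but the idea is the same: a restart only loses the work done since the last checkpoint.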
ID: 18441 | Rating: 0 | rate: / Reply Quote | |
I'd sort of wished they could announce downtime. I've often wanted to experiment with other Linux distributions but don't, because it would mean downtime on my side. But I need to prepare & can't always do this on the fly. Dual Boot? ____________ - da shu @ HeliOS, "A child's exposure to technology should never be predicated on an ability to afford it." | |
ID: 18445 | Rating: 0 | rate: / Reply Quote | |
I'd sort of wished they could announce downtime. I've often wanted to experiment with other Linux distributions but don't, because it would mean downtime on my side. But I need to prepare & can't always do this on the fly. I only have dual boot on the PC that runs Windows; the rest run Mint Linux 8 GNOME or KDE. I've tried other distros, but they either failed too many WUs or were not my cup of tea for other reasons. I've used this downtime to test YDL for CUDA, but still haven't gotten it to work. I "think" it's the security settings - BOINC can't connect to the Internet. I "might" try Fedora 13 LXDE 64-bit. If I do and don't like it, I'll go back to Mint Linux 8 AGAIN... ____________ | |
ID: 18447 | Rating: 0 | rate: / Reply Quote | |
Hi ! | |
ID: 18460 | Rating: 0 | rate: / Reply Quote | |
temporary shortage | |
ID: 18461 | Rating: 0 | rate: / Reply Quote | |
temporary shortage Thanks for the reply, saw jobs were coming back ;-) Regards | |
ID: 18463 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send 2,075 | |
ID: 18481 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send 2,075 NICE!!! Hopefully that will last a couple weeks and we can get another job in before it runs out. | |
ID: 18482 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send: 329 and falling. | |
ID: 18756 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send: 329 and falling. You and me both. Out of curiosity, how are you getting 5,500 more RAC than me? I've got both my GTX480s already overclocked to 800 MHz and 2 GHz for the memory. That's as high as they go without getting failed work units every other one.......at least on air anyways. Are you running yours even higher than that on liquid cooling or thermoelectric??? | |
ID: 18758 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send: 233 | |
ID: 18759 | Rating: 0 | rate: / Reply Quote | |
Out of curiosity, how are you getting 5,500 more RAC than me? I've got both my GTX480s already overclocked to 800 MHz and 2 GHz for the memory. That's as high as they go without getting failed work units every other one.......at least on air anyways. Are you running yours even higher than that on liquid cooling or thermoelectric??? Skgiven has already answered your question. From my experience I can add to his answer: it is more rewarding (and more reliable) to overclock your CPU than your GTX480. If you overclock the GPU only, you will see falling GPU usage, only a small performance gain, and failing WUs. The best is to overclock both the CPU and the GPU(s), but that needs very decent GPU cooling, which is much harder (and more expensive) to install than a decent CPU cooler. My system runs with air cooling: my C2Q9650 CPU is overclocked to 4GHz (444MHz FSB) and cooled by a Noctua NH-D14, with a Noctua NF-P12 placed next to the NH-D14 on the side of my PC case (plus another two NF-P12s over the two GPUs). The CPU cores run at 44-53°C (at 100% CPU usage, 23°C ambient temperature). The coolers on the two GPUs are standard, but I've modded the one in my other system, which has only one GPU: I removed the side plate from the standard cooler, replaced it with a Coolink SWiF2-120P 12cm fan, and disconnected the standard cooling fan. That GPU runs at 74°C at 95% GPU usage without disturbing noise (I have two Noctua NF-P12s on the side of that PC case too). I will do the same with the other two GPUs when I get some slim 12cm fans (e.g. Scythe SY1212SL12H), but they are hard to find here. But all of the above is almost pointless if one uses Windows 7 or Vista for crunching, because the WDDM overhead causes a significant CUDA performance loss. This loss can be eliminated by software changes, which is much easier than any hardware tweaking. BTW, my PC hasn't reached its peak RAC yet; I'm expecting it to be around 145-150k. But some trouble always manages to happen: e.g. this morning I had an internet outage for 3 hours and one of my GPUs ran out of work; one of my WUs failed yesterday; and now the server seems to be running out of work soon.... | |
ID: 18762 | Rating: 0 | rate: / Reply Quote | |
GPU Results ready to send: 187 and still falling. | |
ID: 18763 | Rating: 0 | rate: / Reply Quote | |
Thanks for the feedback. I have my CPU OCed to 3.33GHz with no load other than the usual background OS stuff. I've also had Swan_Sync going for almost a month now. I haven't had many issues with failed WUs since I backed the GPUs down from 825 to 800, though. I guess it's Vista in combination with a slower CPU. I know that when I overclocked my CPU my RAC jumped about 10% immediately after it had levelled out, and it has been slowly climbing ever since, but that's as far as I can go since the BIOS doesn't allow more than a 25% OC on the Rampage III. | |
ID: 18764 | Rating: 0 | rate: / Reply Quote | |
There are now 1,105 tasks waiting to be sent out ;) | |
ID: 18765 | Rating: 0 | rate: / Reply Quote | |
There are only 139 workunits in the long WU queue at the moment. It was 151 this morning, and 131 an hour ago. | |
ID: 26590 | Rating: 0 | rate: / Reply Quote | |
Hopefully someone is keeping an eye on this and will add some tasks early next week :) | |
ID: 26592 | Rating: 0 | rate: / Reply Quote | |
If anyone is concerned about running out of work (and only runs tasks from the Long queue), then I suggest selecting the following Project Preference; I set this option a long time ago... Long runs' queue status at the moment: unsent: 89; in progress: 1,511 | |
ID: 26602 | Rating: 0 | rate: / Reply Quote | |
Hi dear long-queue volunteers, | |
ID: 26609 | Rating: 0 | rate: / Reply Quote | |
Thank you, Noelia! | |
ID: 26610 | Rating: 0 | rate: / Reply Quote | |
Number of unsent long workunits: 55 (1496 in progress) | |
ID: 26645 | Rating: 0 | rate: / Reply Quote | |
Long runs (8-12 hours on fastest card) 85 unsent; 2,189 in progress | |
ID: 27344 | Rating: 0 | rate: / Reply Quote | |
Long runs: 90 unsent; 1763 in progress | |
ID: 27437 | Rating: 0 | rate: / Reply Quote | |
Dears, I'm sending a new batch of long WUs, AGGd4. They are like AGGd3, but long. Thanks | |
ID: 27444 | Rating: 0 | rate: / Reply Quote | |
Thank you, Toni! | |
ID: 27447 | Rating: 0 | rate: / Reply Quote | |
Long runs: 6,240 ready to send; 2,178 in progress. | |
ID: 28076 | Rating: 0 | rate: / Reply Quote | |
Are you sure? We started at 20,000 at Christmas, so 6,000 left should work out for a week minimum. Or was there a new long-run batch in between that I didn't notice? | |
ID: 28078 | Rating: 0 | rate: / Reply Quote | |
Long runs: 5,534 unsent; 2,234 in progress. | |
ID: 28079 | Rating: 0 | rate: / Reply Quote | |
I might be wrong but I think the 20K tasks ran, returned and were used to autogenerate a second batch, based on the results of the first batch, which would mean 20K+20K went into the queue. | |
ID: 28082 | Rating: 0 | rate: / Reply Quote | |
I just got a couple of TONI units after about a month of nothing but these NOELIA units, so we must be running low on NOELIA units and coming to the end of this phase of the experiment. | |
ID: 28084 | Rating: 0 | rate: / Reply Quote | |
I just got a couple of TONI units after about a month of nothing but these NOELIA units, so we must be running low on NOELIA units and coming to the end of this phase of the experiment. I received such a TONI too, so we won't run out as soon as I predicted, because these TONIs are more than 2 steps "deep". | |
ID: 28086 | Rating: 0 | rate: / Reply Quote | |
The project status page shows the following numbers at the moment: Long runs unsent: 1; in progress: 1,795. There will be a shortage soon. | |
ID: 28783 | Rating: 0 | rate: / Reply Quote | |
We'll send a new big project to the long queue very soon. | |
ID: 28786 | Rating: 0 | rate: / Reply Quote | |
The long queue is empty at the moment. | |
ID: 29405 | Rating: 0 | rate: / Reply Quote | |
Both the cuda42 queues - long and short - seem to be low at the moment. | |
ID: 29551 | Rating: 0 | rate: / Reply Quote | |
I don't think that the counts that show on the stats page are accurate. It showed no long tasks available about 6 hours ago, and I'm still burning through them. | |
ID: 29555 | Rating: 0 | rate: / Reply Quote | |
I wasn't just going by the status page. I've been getting a number of, for example, | |
ID: 29558 | Rating: 0 | rate: / Reply Quote | |
I don't think that the counts that show on the stats page are accurate. It showed no long tasks available about 6 hours ago, and I'm still burning through them. You are probably getting resends, or new tasks generated from another finished result. That doesn't change the fact that the queue is empty ^^ ____________ DSKAG Austria Research Team: http://www.research.dskag.at | |
ID: 29559 | Rating: 0 | rate: / Reply Quote | |
Due to the unusually cold weather here in Hungary, I've put my last GTX 480 back into one of my "hibernated" hosts and turned it back on. It didn't receive any long workunits, so I thought something was wrong with my host, but to my surprise there is no work available in any queue. After 20 minutes of unsuccessful attempts to get work in the early morning, I've put the host back into "summer hibernation".... | |
ID: 30468 | Rating: 0 | rate: / Reply Quote | |
Yep, all machines running idle :/ Hope there are some new units soon, because tomorrow afternoon I'm activating a "third" 570 in a remote machine and have to start a workunit while sitting in front of it to see if it runs GPUGrid correctly :! | |
ID: 30475 | Rating: 0 | rate: / Reply Quote | |
Yep, we are working on some new WU's just right now! | |
ID: 30477 | Rating: 0 | rate: / Reply Quote | |
Yep, we are working on some new WU's just right now! Just got a new 'SANTI' from the short queue. | |
ID: 30480 | Rating: 0 | rate: / Reply Quote | |
There are only 394 unsent workunits on the long queue, while there are 1926 in progress. | |
ID: 38438 | Rating: 0 | rate: / Reply Quote | |