Message boards : Number crunching : Milkyway@home on ATI cards
| Author | Message |
|---|---|
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
DoctorNow wrote: It's also possible now to run a new custom-made MilkyWay@Home app on your GPU, but currently ONLY possible with an ATI card and a 64-bit Windows system. Thought I'd just inform you, as it surely gets overlooked in the other thread. But be warned, currently it's really in pre-alpha stage. Buying a card for that wouldn't be fun. But if you've already got a HD38x0 (64 shader units) or HD48x0 (160 units) you might want to check it out. The speed is ridiculous :) Paul D. Buck wrote: If they get it out the door soon I might just get a couple of the lower-end ATI cards that can handle it, just for the meantime till they get the Nvidia version done An NV version is not going to happen anytime soon, as they use double precision exclusively. You may remember that NV included 30 double-precision units in GT200 along with the 240 single-precision shaders. Well, ATI's RV770 has 160 5-way VLIW units, and all of them can run 1 or 2 doubles each clock. That's such a massive advantage, it just plain wouldn't make sense to use NV cards here. MrS Scanning for our furry friends since Jan 2002 |
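To put rough numbers on that comparison, here is a back-of-the-envelope sketch of theoretical double-precision peak throughput from the unit counts quoted above. The clock speeds and the one-FMA-per-unit-per-clock assumption are mine, not from the thread, so treat the results as illustrative theoretical peaks; sustained figures quoted elsewhere in the thread are lower.

```python
# Back-of-the-envelope double-precision peak throughput, using the unit
# counts from the post above. Clock speeds are assumptions (roughly the
# reference shader clocks of a GTX 280 and an HD 4870), and each unit is
# assumed to retire one fused multiply-add (2 flops) per clock.

def peak_dp_gflops(units: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    return units * flops_per_clock * clock_ghz

gt200 = peak_dp_gflops(units=30, clock_ghz=1.296)   # ~78 GFLOPS peak
rv770 = peak_dp_gflops(units=160, clock_ghz=0.750)  # 240 GFLOPS peak

print(f"GT200: {gt200:.0f} GFLOPS, RV770: {rv770:.0f} GFLOPS, "
      f"ratio: {rv770 / gt200:.1f}x")
```

On these assumptions the RV770 comes out roughly 3x ahead in peak double-precision throughput, which is the "massive advantage" the post is describing.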
Paul D. Buck Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
Which may mean that I will have a mix of GPUs in my future ... I don't mind, I have had good luck with ATI cards (and with Nvidia ... 6 of one ... two dozen of the other ... or something) Oh, I got the GTX 295 card today, a day early ... and I have 4 in flight on the i7 which is nice ... all the cards are moved about ... The wall draw of the pair of 295s is the same as the draw of the 295 and 280 card ... just for your information ... Now I only have the disk problem on the Mac Pro that is raining on my life ... sigh ... less than 10% space, and the disk utilities are crashing, and there is an error on the disk ... So, I got 6 new 1.5 TB drives on the way ... so how long will it take me to fill up a 3.something TB RAID 5 array ... ah well ... several days of file moving and installing and configuring the OS ... sigh ... |
|
Joined: 24 Dec 08 Posts: 738 Credit: 200,909,904 RAC: 0 |
DoctorNow wrote: It's also possible now to run a new custom-made MilkyWay@Home app on your GPU, but currently ONLY possible with an ATI card and a 64-bit Windows system. One of the guys mentioned this in BOINC_dev a while back. I recall Dr A asking for details so he could add ATI support into BOINC. As to whether he got the details, I don't know. BOINC blog |
|
Joined: 17 Apr 08 Posts: 113 Credit: 1,656,514,857 RAC: 0
|
.... before we get too excited, doesn't the GPU work hit the credit/hour limit? Surely as you are using a GPU to crunch a CPU WU, as opposed to what happens here, the credits generated will be subject to the same limits. Nice of Dr A to show an interest in ATI - I hope he has the time to fit that in with all the other items he is currently juggling/fumbling. |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
One of the guys mentioned this in BOINC_dev a while back. I recall Dr A asking for details so he could add ATI support into BOINC. As to whether he got the details I don't know. I read about that a few days ago as well. They found something which would help him, but I don't know about any further progress either. .... before we get too excited, doesn't the GPU work hit the credit/hour limit? Surely as you are using a GPU to crunch a CPU WU, as opposed to what happens here, the credits generated will be subject to the same limits. Exactly. That's what I had in mind when I said "Buying a card for that wouldn't be fun." The credit rules could change any day, though. MrS Scanning for our furry friends since Jan 2002 |
|
Joined: 12 Jan 09 Posts: 36 Credit: 1,075,543 RAC: 0
|
Something to keep in mind when building a new system, there are several motherboards out now that would support BOTH nVidia SLI and ATI Crossfire in a single system. You could conceivably have multiple cards of each running simultaneously... if you have the money to buy all those cards... |
Paul D. Buck Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
Something to keep in mind when building a new system, there are several motherboards out now that would support BOTH nVidia SLI and ATI Crossfire in a single system. You could conceivably have multiple cards of each running simultaneously... if you have the money to buy all those cards... In a system with two PCI-e, one of each ... Three PCI-e, two of one, one of the other ... Four PCI-e, two and two ... Five PCI-e, major electrical fire! |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
But you wouldn't need SLI or Crossfire for crunching; in fact you have to disable SLI anyway (don't know about Crossfire). So the actual limit is rather the amount of power, cooling and space that you can provide.. ;) Edit: oh, and I don't know what Windows does if you mix multiple cards which require different drivers. MrS Scanning for our furry friends since Jan 2002 |
Paul D. Buck Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
Edit: oh, and I don't know what Windows does if you mix multiple cards which require different drivers. In theory (I have not tried it yet) you just install the card and the appropriate drivers, and Windows should be happy. Hard to say if there is a hardware or driver incompatibility that would cause clashes, though. In my case, I would likely take the conservative position and allocate cards to machines, keeping all ATI in one and Nvidia in others... Though this is all in the future, in that the first ATI application is not really ready for prime time and wide distribution on a variety of systems. As best as I can tell I have neither the right card nor the right OS in my collective. Unlike some, though, I can wait ... heck, I still have to digest the 4 new GPUs that I have installed in the last two months ... |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Apparently with some fiddling around you can run a game on an ATI card and use an Nvidia card for PhysX. So.. there's hope for some heterogeneous GPU landscape ;) MrS Scanning for our furry friends since Jan 2002 |
|
Joined: 15 Feb 09 Posts: 55 Credit: 3,542,733 RAC: 0
|
Mixing video card brands in the same box only really works in Vista and Windows 7 atm. Don't even think of trying it in XP of any flavor. It won't be happy with two different display drivers fighting each other behind the scenes from what I've read. |
Paul D. Buck Joined: 9 Jun 08 Posts: 1050 Credit: 37,321,185 RAC: 0
|
Mixing video card brands in the same box only really works in Vista and Windows 7 atm. Don't even think of trying it in XP of any flavor. It won't be happy with two different display drivers fighting each other behind the scenes from what I've read. Well, that answers that ... I did not have an ATI card of note that would have allowed me to test this, and now I don't have to ... Though I am tempted to get one for my sole 64-bit machine so that I can take part in the GPU revolution happening at MW ... |
|
Joined: 20 Oct 08 Posts: 11 Credit: 2,647,627 RAC: 0
|
Does anyone know what is required to run the Milkyway@home project on an ATI GPU? I'm asking because I own an ATI HD 2400 Pro. Thanks |
Edboard Joined: 24 Sep 08 Posts: 72 Credit: 12,410,275 RAC: 0
|
You need an ATI card with an RV670 chip and up: HD38x0, HD4670(??) and HD48x0.
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
Last time I checked 46xx series didn't work, only 38xx and 48xx. MrS Scanning for our furry friends since Jan 2002 |
|
Joined: 12 Jul 07 Posts: 100 Credit: 21,848,502 RAC: 0
|
Would anyone like to comment on the huge difference in credit awarded by the MilkyWay ATI GPU app and the GPUGrid GPU app? For example, my GTX260-216 returns about 13,000 credits a day at GPUGrid while my ATI HD4870 returns 77,000 credits a day at MilkyWay. I don't know how the 2 cards compare but don't imagine they are miles apart in performance. Possible reasons: 1) some exceptionally efficient coding of the ATI GPU app by Gipsel; 2) Milkyway awarding a higher than average credit return (despite recent adjustments); 3) inefficient coding of the GPUGrid GPU app; 4) GPUGrid having a lower than average credit award; 5) ATI cards are just better at Milkyway WUs than Nvidia cards are at GPUGrid WUs. I'm not suggesting a change in credit awards, I'm just puzzled at what appears to be a huge difference between what I would think are similar cards. |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
1) Some exceptionally efficient coding of the ATI GPU app by Gipsel - yes. 2) Milkyway awarding a higher than average credit return (despite recent adjustments) - yes. 3) Inefficient coding of the GPUGrid GPU app - no. 4) GPUGrid having a lower than average credit award - can't say for sure. 5) ATI cards are just better at Milkyway WUs than Nvidia cards are at GPUGrid WUs - in some sense, yes. OK, that was the short version. Let me elaborate a bit: Milkyway is an exceptional app in the way that the algorithm is perfectly suited to GPUs. The ATIs almost reach their peak FLOPS, a very rare case [if you intend to do anything useful in your code ;) ]. I think MW is still giving out a bit too many credits for CPUs.. now throw in the high-end GPUs, which are at least one order of magnitude faster, and you get a complicated situation. The main problem is: awarding credits according to the benchmark was an idea which was bound to fail in practice. Now we have FLOP counting.. which leads to another problem: if you have a small and very well tuned app like MW, you will automatically extract higher FLOPS from your hardware than with more complex code. You could say that the hardware is running this code more efficiently. So.. should we all only run apps like MW and neglect the rest, because they give many more credits per time? I don't think this is what BOINC is made for, and I don't see a solution yet. And a side note: under SETI@NV the current draw and temperatures are lower than under GPU-Grid, so you can be sure that GPU-Grid is not coded inefficiently ;) And we won't get an apples-to-apples comparison between the ATI and NV cards here, because MW runs double precision, where NV cards are really weak. Their crunching power is roughly comparable at singles, though. MrS Scanning for our furry friends since Jan 2002 |
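As a concrete illustration of why FLOP counting rewards tight code, here is a minimal sketch assuming the classic BOINC "cobblestone" definition: 200 credits per day of computation on a machine sustaining 1 GFLOPS. The sustained-GFLOPS figures below are invented for illustration, and any project-level multipliers are ignored.

```python
# Sketch of BOINC-style FLOP counting, assuming the cobblestone
# definition (200 credits per GFLOPS-day). A tight, well-tuned loop
# extracts a larger fraction of a card's peak FLOPS than complex code,
# so it earns proportionally more credit on the same hardware.

COBBLESTONES_PER_GFLOPS_DAY = 200

def credits_per_day(sustained_gflops: float) -> float:
    return sustained_gflops * COBBLESTONES_PER_GFLOPS_DAY

# Illustrative: a MW-style hot loop sustaining ~200 GFLOPS vs a more
# complex mixed workload sustaining ~50 GFLOPS on the same card.
print(credits_per_day(200))  # 40000.0 credits/day
print(credits_per_day(50))   # 10000.0 credits/day
```

The 4x gap here comes purely from how much of the hardware's peak each app can extract, which is exactly the fairness problem the post describes.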
|
Joined: 18 Sep 08 Posts: 368 Credit: 4,174,624,885 RAC: 0
|
It's definitely going to draw some people away from the GPUGrid project no matter what. If you can get 60,000 to 70,000 per day versus 10,000 to 13,000 per day here (not counting the GTX 295s), what are you gonna do? Even the GTX 295s are only capable of about 25,000, but cost 2 1/2 to 3 times the amount an ATI 4870 does, so it stands to reason to go with the ATIs & the project that can use them. I won't lessen my participation here @ the moment, but I have ordered 2 ATI 4870s already & will order more as needed, & if need be shut the Nvidias down over time to save electrical cost. The word is an Nvidia application will be out @ the MWay project, but it hasn't showed up yet; the word is also that the Nvidia applications will be 3-4 times slower, so a single ATI 4870 will be able to produce as much as 2 GTX 295s @ a quarter or less of the cost ... |
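Taking the figures in this post at face value, the credits-per-cost comparison can be sketched like this. The 2.75x relative price is just the midpoint of the quoted "2 1/2 to 3 times" range, and absolute prices are placeholders, not real 2009 street prices.

```python
# Credits per unit of cost, using the daily-credit figures quoted above.
# Prices are normalised: an HD 4870 costs 1.0 cost unit, and a GTX 295
# is assumed to cost 2.75x that (midpoint of the quoted range).

hd4870_price = 1.0
gtx295_price = 2.75

hd4870_credits = 65000   # midpoint of the quoted 60,000-70,000/day at MilkyWay
gtx295_credits = 25000   # quoted figure for a GTX 295 at GPUGRID

print(hd4870_credits / hd4870_price)  # 65000.0 credits per cost unit
print(gtx295_credits / gtx295_price)  # ~9090.9 credits per cost unit
```

On these assumptions the HD 4870 delivers around 7x the credits per unit of money spent, which is the economics the poster is reacting to.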
UBT - Ben Joined: 12 Aug 08 Posts: 8 Credit: 137,219 RAC: 0
|
The only Nvidia cards which will be able to take part, if MW do build a CUDA app, are the GTX 200 series, as they can support double-precision data, i.e. numbers like 12.3984958. However, even then, like poorboy has said, the GTXs won't be able to get anywhere near the top ATI cards' performance. |
|
Joined: 17 Aug 08 Posts: 2705 Credit: 1,311,122,549 RAC: 0 |
The only Nvidia cards which will be able to take part, if MW do build a CUDA app, are the GTX 200 series, as they can support double-precision data, i.e. 12.3984958 You're right, apart from the fact that your example is just a floating-point number, not specifically a double-precision one ;) G80, G92 etc. are fine with single-precision floating point, that is, numbers represented by 32 bits. Double precision requires 64 bits, which these chips can't do. If I remember correctly, a GTX 280 can do about 30 GFlops in double precision, whereas an RV770 can do 200+. If MW goes CUDA, the credits will reflect this ratio. The real problem with FLOP counting is: if you create an app which just runs a stupid loop in the CPU caches and maximises resource utilisation, you should get the most credits per time for just running this application. MW is such an application, except for the fact that they actually do science in their highly optimized / efficient hot loop. So how are you going to award credits here? Based on FLOPS you have to give out many more.. MrS Scanning for our furry friends since Jan 2002 |
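The 32-bit vs 64-bit point can be seen directly by round-tripping the number from the quote through single precision. A minimal Python sketch (Python floats are 64-bit doubles; packing with `struct` format `'f'` forces a 32-bit representation):

```python
import struct

# The number from the post above, held by Python as a 64-bit double.
x = 12.3984958

# Force the value through a 32-bit (single-precision) representation
# and back: the nearest float32 is not the same number.
as_float32 = struct.unpack('f', struct.pack('f', x))[0]

print(as_float32 == x)      # False: single precision has lost digits
print(abs(as_float32 - x))  # small but nonzero, around the 8th significant digit
```

That lost tail is exactly what MilkyWay's algorithm cannot tolerate, which is why pre-GT200 Nvidia chips (single precision only) are excluded.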
©2025 Universitat Pompeu Fabra