
Message boards : Graphics cards (GPUs) : Shot through the heart by GPUGrid on ATI

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28109 - Posted: 22 Jan 2013 | 13:36:51 UTC

I was planning out a new computer, a Maingear F131, with the express purpose of crunching GPUgrid. I already have a machine, a Maingear Shift Super Stock with Nvidia GTX 670's. For whatever reason, GPUGrid did not do well on this machine. I had to detach from the project.

This machine kept failing. It was totally rebuilt three times, and what finally seems to have settled things down has been the removal of GPUGrid. Why this was a problem I have no idea. GPUGrid was the impetus for this machine. The machine is currently doing GPU crunching on EINSTEIN and SETI so far with no difficulties.

Maybe various GPU projects just do not play well together.

So, I wanted to plan the new machine with ATI cards. Apparently, GPUGrid does not run on any ATI cards.

Am I correct about that, and is there any hope for the future?
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 28113 - Posted: 22 Jan 2013 | 14:33:27 UTC - in response to Message 28109.

hi,
no big hopes for ATI. The old code which ran OpenCL has been deprecated in favour of a new one which is now CUDA only. It is still technically possible to do OpenCL, but it would require a lot of work, and it would only be justified if AMD really brings a top card in.

What was the problem with the old machine? WUs failing? Temperature?
We ended up designing and building our own GPU chassis because we got so tired of poor cooling.

gdf

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28116 - Posted: 22 Jan 2013 | 14:44:02 UTC - in response to Message 28113.

It is still technically possible to do OpenCL but it does require a lot of work. Only justified if AMD really brings a top card in.

Hmm, judging from the performance at most other projects I would say AMD does have very fast cards. What about the 7970? It wouldn't be too hard to make a list of OpenCL projects where the 7970 is top dog.

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28123 - Posted: 22 Jan 2013 | 17:57:04 UTC

O.K., now I know the score on ATI.

So, I could go the other way: I could run GPUGrid as the only GPU project on the current machine, and run the others, EINSTEIN, SETI, and add MILKY WAY, on a machine with ATI.

But, there remains the question, part of which I posed above: is there any reason that GPUGrid would create difficulties on any machine with decent Nvidia cards when it is the lone GPU project? After all, before I finally quit the project, it amassed 17 million credits.

As I said above, this machine was rebuilt three times on the suspicion that the problems were caused by something other than any one project. It only calmed down once GPUGrid was no longer running.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28128 - Posted: 22 Jan 2013 | 20:51:21 UTC - in response to Message 28123.

The GPU-Grid code is quite efficient and "compute dense", i.e. it taxes GPUs quite hard. The power consumption is significantly higher than when running SETI or Einstein. That's why GDF asked you about temperatures. The prime candidates for your problems are overheating GPUs or an insufficient PSU.

MrS
____________
Scanning for our furry friends since Jan 2002

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28132 - Posted: 22 Jan 2013 | 21:41:51 UTC

MrS

Thanks. I did run HW Monitor for a while. It never showed anything untoward regarding GPU temps. My power supply never failed; I was always able to get something to happen on failures, even if the machine refused to boot.

And, you know, I did manage 17 million credits on GPUGrid before I gave it up.

My GPU's are GTX 670's, air cooled. We tried sealed liquid coolers, but they repeatedly failed.

Even if the GPU's did overheat, I would think that would have initiated a shutdown of the crunching before actual damage to the cards. I believe that is what happens when CPU's overheat: the machine shuts down before any damage is done.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 28135 - Posted: 22 Jan 2013 | 22:25:04 UTC - in response to Message 28132.

if by "the machine kept failing", you mean that it was hanging, then it's probably insufficient power supply.
It the job was crashing after a while but the machine kept alive, it's probably temperature.

gdf

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28140 - Posted: 23 Jan 2013 | 1:15:29 UTC

So, I guess what I would ask is: what would be considered good Nvidia cards to put in a new machine, and what specifications should the power supply have?

Here is what the Maingear F131 Super Stock would have
Intel® Core™ i7 3930K Six-core 3.2GHz/3.8GHz Turbo 12MB L3 Cache w/ HyperThreading.

MAINGEAR EPIC 120 Supercooler (CPU cooling)- but that looks like a water cooled system and I have had bad luck with that, so I would ask for air cooling similar to what I have on the Shift Super Stock.

Intel® Turbo Boost Advanced Automatic Overclocking

16GB Corsair® Dominator™ Platinum DDR3-1600 Extremely Low Latency 1.5V (4x4GB).

The choices of Nvidia cards are way too plentiful

2x EVGA® GeForce™ GTX 680 SuperClocked 4GB Total GDDR5 In SLI w/PhysX [ENTHUSIAST - OC]
2x EVGA® GeForce™ GTX 680 FTW+ 8GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x NVIDIA® GeForce™ GTX 680 4GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x EVGA® GeForce™ GTX 670 8GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x NVIDIA® GeForce™ GTX 670 4GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x MSI® GeForce™ GTX 660 Ti Power Edition 4GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x NVIDIA® GeForce™ GTX 660 Ti 4GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]
2x NVIDIA® GeForce™ GTX 660 4GB Total GDDR5 in SLI w/PhysX [ENTHUSIAST]



The SLI can be disabled, but since BOINC wants device 0 and Maingear wants device 0, I have SLI enabled on the Shift Super Stock. If either was O.K. with device 1, then I would not be doing this.

1TB Western Digital VelociRaptor SATA 6G 10,000rpm 64MB Cache
This was actually the last change made on the Shift Super Stock. This is alleged to be an "enterprise grade" (whatever that means) hard drive normally used in servers.

Power supplies available are
850 Watt Corsair® AX850 80+ Gold Certified Modular Power Supply ROHS
660 Watt Seasonic® X-660 80+ Gold Certified Modular Power Supply ROHS

So, this is a lot to ask, but I find people here really know their stuff.

Any opinions, and I know they could only be seen as opinions, will be gladly received. I am not about the business of building any machine myself. I am only about the business of paying for what works.

Thanks in advance for any and all replies.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

flashawk
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 28143 - Posted: 23 Jan 2013 | 2:20:26 UTC

Did you have SLI enabled on your last rig with all the problems? If so, that may have caused all your problems; it must be disabled to crunch WU's on GPU Grid. I'm thinking you already knew that though.

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28145 - Posted: 23 Jan 2013 | 3:34:59 UTC - in response to Message 28143.

flashawk-

We actually ran it both ways. In fact, with GPUGrid in the mix, we ran SLI, we ran two cards, we ran BOINC on Dev 0 only and on Dev 1 only; we really tried everything, through two builds actually, and had a lot of success which was in both cases only temporary. The only thing that was not done was to connect the monitor to Dev 1. The builder was adamant that the monitor be on Dev 0. The first build ran successfully for about ninety days, the second build for about sixty days, both running GPUGrid, each with SLI enabled and then disabled. The worst was SLI disabled and BOINC ignoring Dev 1, so that both BOINC and the rest of the computer were using Dev 0. As I am sure I said earlier, I was told that BOINC wants to be on Dev 0.

The way it looks now, I am not going to change anything on the Shift Super Stock. Of course, as soon as I post this, I may eat my words. But, up until now, with SLI enabled, and thus running BOINC and the rest of the machine on what looks like Dev 0, running SETI and Einstein GPU WU's, plus a bunch of CPU only projects, everything is stable.

And, this machine does nothing else. It was purchased with the express intent of running BOINC projects, but with a focus on GPU work. My colleague has done the same thing with a Maingear F131 that does nothing but crunch (that is an older style F131; the case is more standard, so GPU work can be questionable. Also, my colleague only runs WCG and he is on the WCG build of BOINC 6.10.58, still the standard at WCG). You know, we are talking about a lot of money devoted to this work.

If I can get some good advice on what to put on the F131, then I will run GPUGrid on that machine. If not, then I will run Einstein, SETI, and a bunch of CPU only projects.

One way or the other, the machine will be purchased. It will replace my oldest desktop, an i7-920 that is about 3-1/2 years old. It is apparently now accepted as a fact that i7's run hot. The topic is not new, but it has only recently been dealt with in any meaningful way, with tthrottle to control temperature. This i7-920 ran BOINC at 100%/100% for a long time with temperatures in the 90s C. Now, we know that is abusive. All of my machines now have the CPU temperature controlled with tthrottle. So, this CPU is old before its time. There is work which is failing to finish, run times that are ridiculous. I could have the CPU changed out, but it's like a car: what might be next.

I want to get this settled very soon. I can still get Win 7; but I do not know how much longer that will be possible.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

flashawk
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 28146 - Posted: 23 Jan 2013 | 4:17:10 UTC

You got GPU Grid to run on only one card at a time? Did you do the .xml file with

<use_all_gpus>1</use_all_gpus>

That's what I had to do to get it to run on both GPU's at the same time; look here in the first post:

http://www.gpugrid.net/forum_thread.php?id=2123&nowrap=true#16463
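For reference, a complete cc_config.xml with just that option would look something like this (a minimal sketch; it assumes the file goes in the BOINC data directory, and you need to restart BOINC or have the manager re-read the config file afterwards):

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>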

Your hardware looks good. I prefer Seasonic PSU's, but 660 seems weak; go for the Corsair 850, they're very good PSU's too.
____________

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28148 - Posted: 23 Jan 2013 | 6:28:40 UTC - in response to Message 28145.
Last modified: 23 Jan 2013 | 6:40:26 UTC

So, this CPU is old before its time. There is work which is failing to finish, run times that are ridiculous. I could have the CPU changed out, but it's like a car: what might be next.


Not at all like a car. Cars have moving parts that suffer from wear due to friction; CPUs do not. There is likely absolutely nothing wrong with the rest of that computer, yet you'll scrap it. Abuse it, refuse to follow any of the advice given to you because "it's not my style", then scrap good kit. Sad beyond words. In fact words fail me.

My i7 runs at 60C with stock Intel fan and heat sink, no throttling. Why? Because I rub brains on my problems instead of money. Try it sometime.

I can still get Win 7; but I do not know how much longer that will be possible.


If the new GPUgrid apps about to come out are like the old then you'll get a 15% performance boost just by putting Linux on that rig. But...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28165 - Posted: 23 Jan 2013 | 20:41:37 UTC

Whoa, there's really a lot here: debugging the old machine, choosing components for the new one and that "old" i7 920.

Debugging GPU-Grid
It's common advice to disable SLI for any GPU-BOINC work. Sure, it's safer this way... but SLI is no black magic. Since a driver update (maybe 2 years ago) SLI just applies to Direct3D and OpenGL, as it should, and doesn't interfere with CUDA and OpenCL at all. That's why both settings showed similar behaviour for you.

BOINC doesn't insist on running on GPU 0; it's rather that it can only use the GPUs which are not put to sleep by Windows. And any non-SLI-ed GPU without a monitor attached and without the desktop extended to it is considered useless by Windows and sent to sleep to save power. It's a shame it can't be woken up for GP-GPU work then. That's why you only crunched on 1 GPU with SLI disabled and the monitor disconnected.

However, if you still had problems running GPU-Grid with only 1 of 2 GPUs running, then we can probably rule out the PSU. Anything beyond this is speculation at this point, and we'd need to get a more precise error description and try different things. Not sure it's worth it if you're fine running other projects as well.
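(As a side note: if you ever want to dedicate one card to a single project rather than juggling them, newer BOINC clients (7.x) let you exclude a project from a specific device in cc_config.xml. A rough sketch, assuming you wanted to keep GPUGrid off device 1:

<cc_config>
  <options>
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>

The device numbers follow BOINC's own enumeration, which may not match what other tools report.)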

Choosing a new rig
If you're going for air cooling make sure it's high-end - that CPU will put out a lot of heat. Good coolers can easily handle it, weaker ones won't.

That Velociraptor is a fine drive for sure (though still much slower than a modern SSD), but total overkill for a dedicated cruncher. In fact, any decade-old 30 GB HDD would do, even 10 GB if you use Linux or XP.

Regarding the GPUs: the GTX660Ti currently has the best price-performance ratio, as it's practically as fast as the GTX670, yet considerably cheaper. Getting the 4 GB version over the regular 2 GB one does nothing for crunching speed (and will probably not matter in the future either), but they don't seem to offer the 2 GB versions at all. The GTX660 is out of the question, as it's considerably slower than the GTX660Ti. The GTX680 is a high-end option; I don't think it's worth the money just for BOINC. I'd also get an i7 3770K quad instead of the six-core: less CPU throughput, but a lot less power consumption, more energy efficient and about half the price.

PSU: with 2 GTX660Ti I'd go for the Seasonic, with GTX680 superclocked you might want to go for the bigger unit.

The worn-down i7 920
Did you ever clean the heat sink and fan? If not, we probably found out why it's running so hot now. Is it a wimpy Intel stock cooler? While that CPU does put out some heat (almost as much as your new 6-core), mid- to high-end air coolers would handle this easily, at full throttle. And the fan could be set to "very silent", a setting fine for regular work but not suitable for 24/7 number crunching. Or, asked another way: is it loud? If not, the fan probably isn't even trying to cool the CPU down. Or... did the fan fail?

And you're right that a few years at ~90°C causes more stress to the CPU than normal work. However, these things can take a serious beating. I suppose it's just a matter of fixing the cooling and it will be good to go again. However, the CPU is not very power efficient by today's standards, so it might as well be retired from 24/7 crunching and be handed down to someone who just needs a regular desktop, or a gaming machine (yes, it's still pretty good at that).

@Dagorath: don't be too harsh and give him a chance.

MrS
____________
Scanning for our furry friends since Jan 2002

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28170 - Posted: 23 Jan 2013 | 22:03:28 UTC - in response to Message 28165.
Last modified: 23 Jan 2013 | 22:05:01 UTC

@Richard (mitrichr)

You've been a good friend and you always will be. I apologize for being so harsh but my *best* friends on my list of friends are the ones who kick me in the butt when I'm a dummy. Obviously I get kicked a lot! So if I'm harsh and kicking your butt it's not because I don't like you it's because I know you can do better and I don't want to see you fail.

BOINC doesn't insist on running on GPU 0, it's rather that it can only use the GPUs which are not set to sleep by windows. And any non-SLI-ed GPU without a monitor attached and without the desktop extended to it is considered useless by Win and sent to sleep to save power. It's a shame it can't be waken up for GP-GPU work then. That's why you only crunched on 1 GPU with SLI disabled and the monitor disconnected.


I've heard of crunchers putting a dongle on their Video-card and I've never really understood why. Maybe now I do? Is the purpose of the dongle to imitate a monitor to make Windows think the card is attached to a monitor which then prevents Windows from putting the card to sleep? Would that help mitrichr (Richard)?

Does Linux put a card to sleep if it doesn't have a monitor attached?

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 28180 - Posted: 24 Jan 2013 | 8:47:53 UTC

A 660 Watt power supply is too little for two nVidia high-end cards, as I found out myself.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28184 - Posted: 24 Jan 2013 | 9:57:19 UTC - in response to Message 28180.

I agree with MrS, an i7-3770K with two GTX660Ti's would be a better option. The only reasons for getting an LGA 2011 CPU would be to have 12 threads or support 4 GPU's.

I have an i7-3770K @4.2GHz, a GTX660Ti and a GTX470 FOC in the same system. When running two GPUGrid tasks and a few CPU tasks the system draws around 450W.
A similar system with two GTX660Ti's would draw ~50W less power, taking the system's total to ~400W. A quality 80+ 550W PSU or better is capable of supporting this, and in the past I ran two GTX470's on a 550W Corsair PSU without issue. An average PSU is a no-no for GPU crunching. Note also that better PSU's are less wasteful and save you on electricity. My Corsair HX750 provides optimal power efficiency (91 to 92%) at around 400W to 550W @240V. PSU fan speed, and therefore noise, is also controlled according to the power draw - the higher the power draw, the noisier they get. So it's usually better to 'air on the side of caution' and get a PSU that can deliver more power than you need.
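To put rough illustrative numbers on the efficiency point (example figures, not measurements): a machine putting a 450W load on a PSU running at 91% efficiency pulls about 450 / 0.91 ≈ 495W from the wall, while the same load on a unit at, say, 85% efficiency pulls about 450 / 0.85 ≈ 530W. That difference of roughly 35W is pure waste heat, and on a 24/7 cruncher it works out to around 0.8 kWh per day.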
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28185 - Posted: 24 Jan 2013 | 9:59:49 UTC
Last modified: 24 Jan 2013 | 10:02:12 UTC

MrS-

Thanks for taking so much time to look at my issues.

The i7-920 machine will go into the shop to have the CPU looked at. Either new thermal paste will be applied or, if necessary and if possible, the CPU will be replaced, unless the cost is too high. Also, I will ask about better cooling.

Regarding Dagorath, he is just a nasty harsh mean-spirited Canuck who should move to Florida. He does not even like Hockey.

TJ-

Very interesting about the power supply. The Shift Super Stock has a 1000 watt unit.

skgiven-

Thanks also for your comments.

I have to say, I went into GPU crunching with very little knowledge, looking especially for the GPUGrid project. I am surprised I have gotten this far with GPU work. GPUGrid is very demanding; Milky Way requires DP cards, yet ATI is not all that popular. This field is still in its infancy.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28187 - Posted: 24 Jan 2013 | 10:47:18 UTC - in response to Message 28185.

Regarding Dagorath, he is just a nasty harsh mean-spirited Canuck who should move to Florida. He does not even like Hockey.


You just want me to come down there and teach y'all how to play hockey so you can eventually get a team together ;-)

The pics you mentioned at Orbit@home... I'm working on them and will send you copies. As I mentioned, the whole works is built mostly from scrap so it doesn't look pretty ATM; it needs paint, which is sitting there in the corner exactly where I put it 2 months ago. I like painting even less than hockey but I'm making headway: for example, I bought sandpaper last week and put it right beside the paint. Looking at buying a brush soon.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28202 - Posted: 24 Jan 2013 | 21:40:22 UTC - in response to Message 28185.

Regarding Dagorath, he is just a nasty harsh mean-spirited Canuck who should move to Florida.

Nice to see you two are getting along :D

And you're right, the field of GPU crunching is still developing in a pretty dynamic way. Lots of opportunities and rewards, but also some homework to do (to do it right).

@SK: I think you mean "err on the safe side", as in "to error"?

MrS
____________
Scanning for our furry friends since Jan 2002

Richard Mitnick
Joined: 8 Feb 12
Posts: 60
Credit: 17,816,440
RAC: 0
Message 28216 - Posted: 26 Jan 2013 | 13:41:04 UTC

After reading through all of this, it seems to me that there should be somewhere on the web site a statement of the minimum requirements to safely and successfully run WU's on this project. Specifically, cards, power supply, CPU, DRAM, maybe cooling.

As I revealed, I have a very expensive and powerful computer which managed to work up 17 million credits, but which had to be rebuilt three times. That is a very costly situation, definitely to be avoided if possible.
____________
Please check out my blog
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28221 - Posted: 26 Jan 2013 | 19:17:24 UTC - in response to Message 28216.

As I revealed, I have a very expensive and powerful computer which managed to work up 17 million credits, but which had to be rebuilt three times. That is a very costly situation, definitely to be avoided if possible.


A few of us tried to help you avoid an ugly situation but you wouldn't listen. Whatever losses you suffered are YOUR fault because you would not listen to common sense. If it was all the fault of this project or any other project then why are the rest of us not forced to rebuild our rigs too? Why has this happened only to you?

Your rig is no different than several others here with respect to hardware specs. The reason yours cost so much to build and repair is because you don't know how or where to buy. That was all explained to you months ago but you wouldn't have any of it. You made your bed now lay in it and stop blaming it on others.

Profile oldDirty
Joined: 17 Jan 09
Posts: 22
Credit: 3,805,080
RAC: 0
Message 28394 - Posted: 3 Feb 2013 | 13:33:20 UTC - in response to Message 28113.
Last modified: 3 Feb 2013 | 13:50:01 UTC

hi,
no big hopes for ATI. The old code which run OpenCL has been deprecated for a new one which is now cuda only. It is still technically possible to do OpenCL but it does require a lot of work. Only justified if AMD really brings a top card in.



gdf

or better, is gpugrid willing to break from the nVidia contract?
but what do I know, just my 2 cents.

O.K., now I know the score on ATI.

So, I could go the other way: I could run GPUGrid as the only GPU project on the current machine, and run the others, EINSTEIN, SETI, and add MILKY WAY, on a machine with ATI.


You can add WCG HCC and Poem@home, which give nice support for the No. 1 crunching card right now, the AMD HD79xx.

____________

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 28395 - Posted: 3 Feb 2013 | 15:10:04 UTC
Last modified: 3 Feb 2013 | 15:13:03 UTC

WCG/HCC will be out of work in less than four months, and I don't know of any new projects that will be using GPUs at all.

POEM has only enough work to dribble out a few work units to each user, and even worse, when HCC ends a lot of those people will move over to POEM. So there is no net gain in work done, only dividing the present work among more people.

The fact is that if you are interested in biomedical research, your only real option is Nvidia for the foreseeable future. (Folding may have an improved AMD core out eventually, though Nvidia will probably still be better.)

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28396 - Posted: 3 Feb 2013 | 20:20:54 UTC - in response to Message 28394.

hi,
no big hopes for ATI. The old code which run OpenCL has been deprecated for a new one which is now cuda only. It is still technically possible to do OpenCL but it does require a lot of work. Only justified if AMD really brings a top card in.



gdf

or better, is gpugrid willing to break from the nVidia contract?
but what i know, just my 2 cents.


A contract? You mean nVIDIA is paying GPUgrid to use only nVIDIA thereby ignoring thousands of very capable AMD cards installed in machines that run BOINC? What is GPUgrid's motivation for agreeing to that contract... to minimize their production?

____________
BOINC <<--- credit whores, pedants, alien hunters

Profile MJH
Project administrator
Project developer
Project scientist
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 28399 - Posted: 3 Feb 2013 | 22:06:27 UTC - in response to Message 28396.


A contract? You mean nVIDIA is paying GPUgrid


If only.

MJH.

Profile AdamYusko
Joined: 29 Jun 12
Posts: 26
Credit: 21,540,800
RAC: 0
Message 28405 - Posted: 4 Feb 2013 | 1:35:09 UTC

While I am not affiliated with GPU Grid, I think it has to do with the fact that the GPU Grid code is best suited to CUDA, which outperforms OpenCL out of the box; only with custom tailoring of settings does OpenCL become comparable to the CUDA setup.

I have not done much more reading into CUDA vs. OpenCL besides that, but depending on the types of tasks needed and the way the code is implemented, they will perform various tasks at varying proficiencies. Some projects are best suited for OpenCL while others are best suited for CUDA. That does not change the fact that if you want to get the most out of a card without needing to tinker with settings to optimize it for a given task, CUDA is always better.


I hate that quite a few people get on this forum and think GPU Grid has some sort of vendetta against AMD cards. As an academic who understands the time crunch imposed on the researchers, I know they are a small crew running off a relatively tight budget (face it, next to no educational institution throws a lot of money towards supporting a project like this, and most institutions are relatively tight with their budgets as they would rather grow their endowment). They made a decision some years ago that CUDA better suited their needs, so they scrapped the OpenCL code because it was becoming too much of a headache to maintain. Yet so many people act like they are not there to help the projects out, but rather that the projects are there to serve their users' every need. I honestly find it sick and twisted.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28412 - Posted: 4 Feb 2013 | 19:57:50 UTC - in response to Message 28405.

This project's CUDA code could probably still be ported to OpenCL, but the performance would be poorer for NVidia cards and it still wouldn't work, or wouldn't work well, for AMD cards.
It's debatable whether the research team is even big enough, personnel-wise, to properly support both OpenCL and CUDA research. There is also a big financial overhead; it would probably require a second server and support.

Performance is largely down to the drivers and AMD's support for OpenCL on their cards, which are different and do some things faster, but others slower, and simply can't do some things. NVidia have a bigger market share and have been supporting CUDA for longer. As well as being more mature CUDA is more capable when it comes to more complex analysis.

GPUGrid has been and still is the best research group that uses Boinc. Just because it hasn't been able to perform OpenCL research on AMD GPU's doesn't change that, nor will it.

If AMD's forthcoming HD 8000 series is supported with better drivers and is more reliable when it comes to OpenCL then perhaps things will change. However, NVidia aren't sitting about doing nothing - they are developing better GPU's and code and will continue to do so for both OpenCL and CUDA for the foreseeable future.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28414 - Posted: 4 Feb 2013 | 22:40:01 UTC - in response to Message 28412.

Before anybody accuses me of having a contract with nVIDIA be aware that I just took delivery of an AMD 7970, it's installed in host 154485@POEM.

This notion that CUDA is better suited for complex data analysis and modeling than OpenCL is widely reported on the 'net. skgiven isn't just making that up because he has a contract with nVIDIA, it's generally accepted as fact. I've seen it reported in several different places and have never seen anybody dispute it.

I would love to see GPUgrid support my sexxy new 7970 but I don't think it's a wise thing for them to do, at this point in time. Supporting CUDA alone is using up a lot of their development time and from reports in the News section CUDA is getting them all the data they can handle.
____________
BOINC <<--- credit whores, pedants, alien hunters

Profile tito
Joined: 21 May 09
Posts: 16
Credit: 1,057,958,678
RAC: 0
Message 28415 - Posted: 4 Feb 2013 | 23:09:20 UTC

POEM: 7970 + Athlon X2?
Waste of GPU power. PM me and I will tell you what's going on with POEM on GPU (till tomorrow - later on, no internet for 10 days).
BTW - owners of AMD GPUs can support GPUGrid with Donate@home. I must think about it, as distrrgen has started to make me angry.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28442 - Posted: 7 Feb 2013 | 15:37:13 UTC - in response to Message 28414.

This notion that CUDA is better suited for complex data analysis and modeling than OpenCL is widely reported on the 'net. skgiven isn't just making that up because he has a contract with nVIDIA, it's generally accepted as fact. I've seen it reported in several different places and have never seen anybody dispute it.

I think a good way to put it is that CUDA is more mature than OpenCL. It's been around much longer; however, OpenCL is catching up. It's also true that since CUDA is NVidia's proprietary language they can tweak it to perform optimally on their particular hardware. The downside is that they have the considerable expense of having to support both CUDA and OpenCL. AMD on the other hand dropped their proprietary language and went for the open solution. Time will tell.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28443 - Posted: 7 Feb 2013 | 18:59:08 UTC - in response to Message 28442.

My statement summarizes the current state of affairs, yours holds open future possibilities so I think yours is a better way to put it except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA but I think that will be difficult to accomplish due to the fact that it's trying to work with 2 different machine architectures. As you say, time will tell. Amazing things can happen when the right combination of talent, money and motivation are brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do) but I think sales are brisk for AMD as well as nVIDIA so the money is probably there but it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.

____________
BOINC <<--- credit whores, pedants, alien hunters

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28445 - Posted: 7 Feb 2013 | 22:56:05 UTC - in response to Message 28443.

My statement summarizes the current state of affairs, yours holds open future possibilities so I think yours is a better way to put it except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA but I think that will be difficult to accomplish due to the fact that it's trying to work with 2 different machine architectures. As you say, time will tell.

OpenCL works with far more than just ATI/AMD and NVidia GPUs. From Wikipedia:

"Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), DSPs and other processors. OpenCL includes a language (based on C99) for writing kernels (functions that execute on OpenCL devices), plus application programming interfaces (APIs) that are used to define and then control the platforms. OpenCL provides parallel computing using task-based and data-based parallelism. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. It has been adopted by Intel, Advanced Micro Devices, Nvidia, and ARM Holdings.
For example, OpenCL can be used to give an application access to a graphics processing unit for non-graphical computing (see general-purpose computing on graphics processing units). Academic researchers have investigated automatically compiling OpenCL programs into application-specific processors running on FPGAs, and commercial FPGA vendors are developing tools to translate OpenCL to run on their FPGA devices."

Amazing things can happen when the right combination of talent, money and motivation are brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do) but I think sales are brisk for AMD as well as nVIDIA so the money is probably there but it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.

NVidia is profitable and has JUST started paying a dividend as of last quarter. AMD isn't making a profit, but hopes to in 2013. It's funny that so many kids pounded AMD for trying to rip them off after the 79xx GPUs were introduced, considering that AMD was losing money. AMD does not pay a dividend so the shareholders aren't getting anything. The stock price of AMD has not done well in recent years so the shareholders have been looking at considerable losses. It's too bad for us as competition drives technological advancement.

Regards/Beyond

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 28446 - Posted: 8 Feb 2013 | 8:38:37 UTC - in response to Message 28445.

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28447 - Posted: 8 Feb 2013 | 8:47:08 UTC - in response to Message 28445.

That's a pretty impressive list of things people think OpenCL is good for. I won't disagree with them because my expertise on the subject is stretched even at this point. I guess my impression is that it's kind of like a Swiss army knife. They're pretty cool looking things and one thinks there is no job it can't do but the fact is they really don't work that well. Or those screwdrivers that have 15 interchangeable bits stashed in the handle. They're compact and take up a lot less room than 15 screwdrivers but if you look in a mechanic's tool chest you won't find one. They hate them and if you give them one they'll toss it in the trash bin. And so OpenCL maybe does a lot of different things but does it do any of those things well? I honestly don't know, I don't work with it, just an enduser.

Btw, the SAT@home project announced testing of their new GPU app in this thread. It's CUDA so it seems they don't think much of OpenCL either.

You're right about the benefits of competition. Perhaps after a few years of competition and maturation OpenCL will push CUDA out.

____________
BOINC <<--- credit whores, pedants, alien hunters

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28448 - Posted: 8 Feb 2013 | 9:12:07 UTC - in response to Message 28446.

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry.


If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it then jam more of the current generation of processors into it. Or does it not work that way?

Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.


Why less profitable... market saturation?

____________
BOINC <<--- credit whores, pedants, alien hunters

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28449 - Posted: 8 Feb 2013 | 13:04:22 UTC - in response to Message 28448.

At present CUDA is faster and more powerful for more complex applications. Because of the success of Fermi, NVidia was contracted specifically to build supercomputer GPU's. They did so, and when they were built, that's where they all went, until recently; you can now buy GK110 Tesla's. This contract helped NVidia financially; it meant they had enough money to develop both supercomputer GPU's and gaming GPU's, and thus compete on these two fronts. AMD don't have that luxury and are somewhat one-dimensional, being OpenCL only. Despite producing the first PCIE3 GPU and manufacturing CPU's, there are no 'native' PCIE3 AMD motherboards (just the odd bespoke exception from ASUS). An example of the lack of OpenCL maturity is the over-reliance on PCIE bandwidth and system memory rates. This isn't such an issue with CUDA. This limitation wasn't overcome by AMD, and they failed to support their own financially viable division. So to use an AMD GPU at PCIE3 rates you need to buy an Intel CPU! What's worse is that Intel don't make PCIE GPU's and can thus limit and control the market for their own benefit. It's no surprise that they are doing this. 32 PCIE lanes simply means you can only have one GPU at PCIE3x16, and the dual-channel RAM hurts discrete GPUs the most. While Haswell is supposed to support 40 PCIE lanes, you're still stuck with dual-channel RAM, and the L4 cache isn't there to support AMD's GPU's!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 28451 - Posted: 8 Feb 2013 | 21:21:34 UTC - in response to Message 28448.

If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it then jam more of the current generation of processors into it. Or does it not work that way?

It's possible to build a faster supercomputer in that way, but its running costs will be higher, therefore it might not be financially viable. To build a better supercomputer which fits within the physical limitations (power consumption, dimensions) of the previous one and is faster at the same time, the supplier must develop their technology.

Why less profitable... market saturation?

Basically because they are selling the same chip much cheaper for gaming than for supercomputers.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28452 - Posted: 8 Feb 2013 | 21:45:19 UTC - in response to Message 28446.

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.

With more than one competitor GPUs will obviously progress far faster than in a monopolistic scenario. We've seen technology stagnate more than once when competition was lacking. I could name a few if you like. Been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80 based CPM machines and then the good old 8088 and 8086 CPUs...

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28456 - Posted: 9 Feb 2013 | 2:53:59 UTC - in response to Message 28452.
Last modified: 9 Feb 2013 | 3:02:54 UTC

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.

With more than one competitor GPUs will obviously progress far faster than in a monopolistic scenario. We've seen technology stagnate more than once when competition was lacking. I could name a few if you like. Been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80 based CPM machines and then the good old 8088 and 8086 CPUs...


Neverone to pass up the opportunity for a little brinksmanship or perhaps just the opportunity to reminisce, my first one was a Heath kit a friend and I soldered together with an iron I used as a child to do woodburning, that and solder the size plumber's use. No temperature control on the iron, I took it to uncle who ground the tip down to a decent size on his bench grinder. We didn't have a clue in the beginning and I wish I could say we learned fast but we didn't. It yielded an early version of the original Apple Wozniak and friends built in... ummm...whose garage was it... Jobs'? Built it, took 2 months to fix all the garbage soldering and debug it but finally it worked. After that several different models of Radio Shack's 6809 based Color Computer, the first with 16K RAM and a cassette tape recorder for storage and the last with 1 MB RAM I built myself and an HD interface a friend designed and had built in a small board jobber in Toronto. He earned an article in PC mag for that, it was a right piece of work. That gave me 25 MB storage and was a huge step up from the 4 drive floppy array I had been using. It used OS/9 operating system (Tandy's OS/9 not a Mac thing), not as nice as CPM but multi-tasking and multi-user. Friends running 8088/86 systems were amazed. And it had a bus connector and parallel port we used and for which we built tons of gizmos, everything from home security systems to engine analyzers. All with no IRQ lines on the 6809, lol.

I passed on the 80286 because something told me there was something terribly wrong though I had no idea what it was. Win2.x and 3.1 was useless to me since my little CoCo NEVER crashed and did everything Win on a '286 could do including run a FidoNet node, Maximus BBS plus BinkleyTerm. Then the bomb went off... Gates publicly declared the '286 was braindead, IBM called in their option on OS/2, the rooftop party in Redmond, OS/2 being the first stable multitasking GUI OS to run on a PC and it evolving fairly quickly into a 16 bit OS while Win did nothing but stay 8 bit and crash a lot. Ran OS/2 on a '386 I OC'd and built a water cooling system for through the '486 years then a brief dalliance with Win98 on my first Pentium which made me puke repeatedly after rock solid OS/2 and genuine 16 bitness, CPM and OS/9 so on to Linux which I've never regretted for a minute.

Windows never really had any competition. IBM priced OS/2 right out of the market so it was never accepted, never reached critical mass and IBM eventually canned it. Apple did the same with Mac but somehow they clung on perhaps for the simple reason they were able to convince the suckers their Macs were a cut above the PCs, the pitch they still use today. CPM died, Commodore died, and Gates was the last man standing, no competition. And that is why Windows is such a piece of sh*t. What other examples do you know of?
____________
BOINC <<--- credit whores, pedants, alien hunters

Dylan
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Message 28457 - Posted: 9 Feb 2013 | 3:20:51 UTC - in response to Message 28456.

There are exceptions to everything, including the statement "competition drives technological advancement".

One example where this is true, however, is the 600 series GPU's by nvidia. There are rumors that say the current 680 is actually the 660 that nvidia had planned out; however, there wasn't anything to compete with the 680 they had originally planned, so they didn't release it and instead rebranded the 660 as a 680, which happened to be close in performance to AMD's 7970, and then made a whole new 600 series line based off of the planned 660, now the current 680.

Furthermore, it is speculated now that nvidia is finally releasing the planned 680 as the Geforce Titan.

Whether these rumors are true or not, it still shows how without competition (in this case, an equivalent AMD card to nvidia's planned 680) nvidia didn't release what they had planned, and instead gave out a lesser performing card for the price of a higher-end one, and saved the extreme performing card (the Titan) for later.

In addition, less people will have the Titan because it is $900 versus the current 680, which is $500, however if AMD did have a more powerful card, nvidia would have had to put out the Titan as the 680 a while ago to compete. In other words, if there was some competition, nvidia would have given a more powerful card, a better piece of technology, for less versus the current 680.


I hope this story made sense; as I was typing it out, I felt that it could get confusing to read. If one wants more information on these rumors, a Google search of something like "680 actually 660 rumor" will turn up something like this:


http://forums.anandtech.com/showthread.php?t=2234396



Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28458 - Posted: 9 Feb 2013 | 3:27:31 UTC - in response to Message 28451.

Retvari,

Yes, I had some trouble with that word you used... "better". Thanks for defining it. And thanks to you and skgiven for filling me in on the details concerning what's been going on in the marketplace regarding nVIDIA's supplying GPUs for supercomputers. I knew they were supplying them but I didn't see the rest of the puzzle until now. It all starts to make so much more sense now and once again I say "if you don't know history you can't truly appreciate where you are today and will likely have trouble choosing a wise path to your future".

So now a question.... I read recently that nVIDIA purposely disabled 64bit floating point ops to protect their high end models. Now that I realize what we're getting on these consumer cards is chips leftover from sales for supercomputers I am beginning to think that maybe what really happened is that these chips were ones that the 64 bit ops did not work on (manufacturing defects due to contaminants and what not) or did not work reliably which meant they did not meet the contracted specs for the supercomputers so they've taken those and made sure the 64bit does not work at all and marketed them as video cards. Similar to when part of the intended cache on an i7 doesn't work reliably they can sometimes excise/disable that portion while leaving the remainder functional. Then they sell it as a model with smaller cache at a lower price. Is that what's happened with the 64 bit ops on nVIDIA cards?


____________
BOINC <<--- credit whores, pedants, alien hunters

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28460 - Posted: 9 Feb 2013 | 16:11:11 UTC - in response to Message 28458.

Is that what's happened with the 64 bit ops on nVIDIA cards?

In short: no.

The number of consumer GPUs is still a lot higher than Quadro, Tesla and supercomputers (basically custom Teslas) combined. They don't have that many defective chips. And with GPUs it's easy to just disable a defective SMX, instead of trying to disable certain features selectively.

In the Fermi generation they did cut DP performance from 1/2 the SP performance down to 1/8 on consumer GF100 and GF110 chips (the flagships). However, this was purely to protect sales of the more expensive cards.

With Kepler it's different: all "small" chips, up to GK104 (GTX680) physically have only 1/12 the SP performance in DP. This makes them smaller and cheaper to produce. Only the flagship GK110 has 1/3 the SP performance in DP. These are the only Teslas which matter for serious number crunching. We'll see if they'll cut this again for GF Titan.

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28461 - Posted: 9 Feb 2013 | 16:17:44 UTC - in response to Message 28449.

An example of the lack of OpenCL maturity is the over reliance on PCIE bandwidth and systems memory rates. This isn't such an issue with CUDA.

POEM@Home is special: the performance characteristic seen there is firstly a result of the specific code being run. What part of this can be attributed to OpenCL in general is not clear. E.g. look at Milkyway: they didn't lose any performance (and almost don't need CPU support) when transitioning from CAL to OpenCL on HD6000 and older cards. The reason: simple code.

However, there was/is some function call / library being used which was optimized in CAL. It's still being used on the older cards (it requires some fancy tricks). However, HD7000 GPUs can't use the optimized CAL routine and lose about 30% performance just due to this single function. And nothing has changed in this regard in about a year. That's what maturity and optimization mean for GP-GPU.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile MJH
Project administrator
Project developer
Project scientist
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 28462 - Posted: 9 Feb 2013 | 16:20:00 UTC - in response to Message 28458.

Is that what's happened with the 64 bit ops on nVIDIA cards?


The GTX580 and the 20x0 series Teslas used the same GF100/GF110 silicon. The products were differentiated, in part, by the amount of DP logic enabled. For the Kepler generation, there are separate designs for the GTX680 and the Kx0 series silicon - GK104 and GK110 respectively. The former is distinguished by having a simpler SM design and only a few DP units. It will be interesting to see what features turn up in the GK110-using Geforce Titan. I expect DP performance will be dialled down, in part to maintain some differentiation against the Tesla K20c and also to allow reuse of partially defective dies.

Anyhow, the GPUGRID application has minimal need for DPFP arithmetic and - furthermore - was developed in a period before GPUs had any DP capability at all.

MJH

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28463 - Posted: 9 Feb 2013 | 17:07:45 UTC - in response to Message 28456.

Neverone to pass up the opportunity for a little brinksmanship or perhaps just the opportunity to reminisce, my first one was a Heath kit a friend and I soldered together with an iron I used as a child to do woodburning, that and solder the size plumber's use. No temperature control on the iron, I took it to uncle who ground the tip down to a decent size on his bench grinder. We didn't have a clue in the beginning and I wish I could say we learned fast but we didn't. It yielded an early version of the original Apple Wozniak and friends built in... ummm...whose garage was it... Jobs'? Built it, took 2 months to fix all the garbage soldering and debug it but finally it worked. After that several different models of Radio Shack's 6809 based Color Computer, the first with 16K RAM and a cassette tape recorder for storage and the last with 1 MB RAM I built myself and an HD interface a friend designed and had built in a small board jobber in Toronto. He earned an article in PC mag for that, it was a right piece of work. That gave me 25 MB storage and was a huge step up from the 4 drive floppy array I had been using. It used OS/9 operating system (Tandy's OS/9 not a Mac thing), not as nice as CPM but multi-tasking and multi-user. Friends running 8088/86 systems were amazed. And it had a bus connector and parallel port we used and for which we built tons of gizmos, everything from home security systems to engine analyzers. All with no IRQ lines on the 6809, lol.

Wow, you're old too! Sounds like a nice piece of engineering you did there.

Win 2.x and 3.1 were useless to me, since my little CoCo NEVER crashed and did everything Win on a '286 could do, including run a FidoNet node, Maximus BBS plus BinkleyTerm. Then the bomb went off... Gates publicly declared the '286 braindead, IBM called in their option on OS/2, the rooftop party in Redmond, OS/2 being the first stable multitasking GUI OS to run on a PC and evolving fairly quickly into a 32-bit OS while Win did nothing but stay 16-bit and crash a lot. Ran OS/2 on a '386 I OC'd and built a water cooling system for, through the '486 years, then a brief dalliance with Win98 on my first Pentium, which made me puke repeatedly after rock solid OS/2 and genuine 32-bitness, CP/M and OS/9, so on to Linux, which I've never regretted for a minute.

I ran a 4-line ProBoard BBS for years, even before Al Gore invented the internet. OS/2 based of course, as that was the x86 multitasking OS that was stable. I really did like OS/2.

Gates was the last man standing, no competition. And that is why Windows is such a piece of sh*t. What other examples do you know of?

A hardware example: x86 CPUs. Remember when Intel would release CPUs at 5 MHz speed bumps and charge a whopping $600 (think of that in today's dollars) for the latest and greatest? The only thing that kept them honest at all was AMD, who due to licensing agreements was able to copy the 286/386/486 and bring out faster, cheaper versions of them all. Later Intel dropped some of the licensing and meanwhile brought out the very good P3. AMD then countered with the Athlon at about the same time Intel brought out the P4. The P4 was designed for high clock speeds to allow Intel to apply its old incremental "small speed increase ad nauseam" strategy. Unfortunately for Intel, the Athlon was a much better processor than the P4 and Intel had to scramble hard to try to make up the ground (not to mention a lot of dirty tactics and FUD). They did of course, but it took them years to do it. If AMD hadn't been there we'd probably still be using P4-based processors with small speed bumps every year. Competition drives technology...

Beyond

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28464 - Posted: 9 Feb 2013 | 23:53:20 UTC - in response to Message 28463.
Last modified: 10 Feb 2013 | 0:01:41 UTC

Oh yah, I'm older than dirt. I remember back when dirt first came out, you know dirt was clean back then.

A 4 line BBS was impressive in those days and would have been impossible on one machine without OS/2. As far as I remember the only guys running that many lines on DOS or Win were using one machine per line. At its zenith mine was fed via satellite downlink on sideband through a C band TV receiver. Uplink was dialup through the local fido hub. There were maybe 75 of us doing the sat downlink thing in North America (it wasn't available in Europe or the southern hemisphere) and everybody except me and one other OS/2 user needed a separate machine just to handle the data off the sat. They used Chuck Forsburg's gz to fire the data out the parallel port and over to the machine the BBS ran on. Ethernet just wasn't popular amongst the hobbyists back then. For me it all ran on 1 machine stable as a rock under OS/2. But OS/2 was Big Blue and everybody loved to hate Big Blue so they wouldn't buy in.

Competition drives technology...


That was a good reminiscence about Intel vs. AMD, thanks. Competition does drive technology, and I wish I could follow up with another reminiscence on precisely that topic, but I can't, so I'll tell a story about how competition drives people nuts.

Recall I was running the CoCo (Color Computer) at about the same time friends were running 8088/86 PCs. Mine ran at ~2.8 MHz, their PCs ran at... ummm... what was it... 8 MHz? 10 MHz? Anyway, the point is that the 6809 executes an instruction every 1/4 clock cycle (characteristic of all Motorola chips of that era, perhaps even modern ones), so my cheap little CoCo could pretty much keep up with their 8088/86 machines, which execute an instruction every cycle. (Yes, I'm over-simplifying here, as some instructions require more than 1 cycle or 1/4 cycle.) Then one of those 5 MHz bump-ups came out and they all forked out the big money for the faster chip and called me to bring my CoCo over so they could show me who was boss dog. Little did they know I had stumbled upon and installed a Hitachi variant of the 6809 that ran at ~3.2 MHz as opposed to the stock Motorola chip at 2.8 MHz, and had soldered in a new oscillator to boost that up to ~3.5 MHz. Lol, their jaws dropped when we ran our crude little test suite and my humble little CoCo still kept up with their expensive PCs. Then they started to argue amongst themselves and accuse their ringleader of feeding them BS about the effectiveness of their expensive upgrades to the faster CPU. Oh, they were going to write Intel and tell them off, and one said they had obviously gotten some bogus Chinese knockoffs, and yada yada yada. I didn't tell them about my upgrade for a week, just let them stew in their own juices because, you know, timing is everything. Then I told them and they settled down. They hated it but they settled down, hehehe, the calm before the storm, and I let that ride for a week. Then I told them *my* upgrade had cost me about $12, which was the truth, and then their jaws hit the floor again as their upgrades had cost... oh, I forget exactly, but it was $200 for sure, maybe more. Back then $200 was a lot of money, so out came that unfinished letter to Intel and more yada yada yada and steam blowing. Yah, competition drives people nuts, lol.
____________
BOINC <<--- credit whores, pedants, alien hunters

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28465 - Posted: 10 Feb 2013 | 0:10:32 UTC - in response to Message 28460.

Is that what's happened with the 64 bit ops on nVIDIA cards?

In short: no.


Thanks. I snipped the details for brevity but rest assured they help me fill in the puzzle. Much appreciated.

____________
BOINC <<--- credit whores, pedants, alien hunters

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28472 - Posted: 11 Feb 2013 | 0:03:21 UTC - in response to Message 28452.
Last modified: 11 Feb 2013 | 0:04:29 UTC

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether or not there is competition in the (GP)GPU industry. Moreover, GPU-based products made for gaming purposes are becoming less and less important (i.e. less profitable) along the way.

With more than one competitor GPUs will obviously progress far faster than in a monopolistic scenario.

Once again: I don't doubt that. What I was trying to say is that nowadays the need for better supercomputers and the potential of the semiconductor industry drive progress more than competition does.

We've seen technology stagnate more than once when competition was lacking. I could name a few if you like. Been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80 based CPM machines and then the good old 8088 and 8086 CPUs...

Your sentences insinuate that there was stagnation in computing technology since the invention of the microprocessor because of a lack of competition. I can't recall such times, despite having been engaged in home and personal computing since 1983. As far as I can recall, there was more competition in the PC industry before the Intel Pentium processor came out. Just to name a few PC CPU manufacturers from that time: NEC, IBM, Cyrix, SGS-Thomson, and the others: Motorola, Zilog, MOS Technology, TI, VIA. Since then they have been 'expelled' from this market.
Nowadays the importance of the PC is decreasing, since computing has become more and more mobile. The PC is the past (including gaming GPUs), smartphones and cloud computing (including supercomputers) are the present and the near future, maybe smartwatches are the future, and nobody knows what will be around in 10 years.

Josh
Send message
Joined: 12 Feb 13
Posts: 1
Credit: 6,902,136
RAC: 0
Level
Ser
Scientific publications
watwatwatwatwat
Message 28485 - Posted: 13 Feb 2013 | 6:12:03 UTC

I think this will help you settle your PSU issue:
http://images10.newegg.com/BizIntell/tool/psucalc/index.html?name=Power-Supply-Wattage-Calculator
CPU: Intel Core I7(lga2011)
Motherboard: High-end Desktop Motherboard
Video Card: NVIDIA GeForce GTX680 x2
Memory: 4GB DDR3 x4
Optical Drive: Combo x1
Hard Drive: 10,000RPM 3.5" HDD x1

Total wattage: 809 Watts

And with OC I wouldn't go less than 850 W, which in my opinion is already pushing it; I would go to a 900 W unit. You could build that same computer yourself for $3,500, so if you were budgeting $5K you would have much room for improvement.
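If you want a rough sanity check without the web tool, a little sketch like this works. The TDP numbers below are just typical spec-sheet values I'm assuming for the parts listed above, not measurements, and the 30% headroom factor is likewise only an illustrative margin.

// Rough PSU sizing sketch: sum assumed component TDPs and add headroom.
#include <cstdio>

int main() {
    struct Part { const char* name; double watts; };
    const Part parts[] = {
        {"Core i7 (LGA2011)",  130.0},
        {"GTX 680 #1",         195.0},
        {"GTX 680 #2",         195.0},
        {"Motherboard + fans",  60.0},
        {"4 x 4 GB DDR3",       12.0},
        {"10,000 rpm HDD",      10.0},
        {"Optical drive",       20.0},
    };
    double load = 0.0;
    for (const Part& p : parts) {
        std::printf("  %-20s %5.0f W\n", p.name, p.watts);
        load += p.watts;                       // ~622 W peak with these assumptions
    }
    const double headroom = 1.3;               // ~30% margin for OC, ageing, ripple
    std::printf("Estimated peak load:  %.0f W\n", load);
    std::printf("Suggested PSU rating: %.0f W\n", load * headroom);   // ~810 W
    return 0;
}

With these particular assumptions the suggested rating lands close to the calculator's 809 W figure, though the web tool's exact assumptions aren't published.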

Also, a closed-loop liquid cooler rarely has problems; most on the market, except a few, are made by Asetek and just rebadged under another brand (AMD, Antec, ASUS, Corsair, ELSA, NZXT, Thermaltake, and Zalman), so it's cheaper to just go through the manufacturer yourself.

And if you stick to air cooling, try using Open Hardware Monitor to ramp up your NVIDIA fans and keep them cool. I use it to keep my GTX 570 under 70°C at full load at all times. When I didn't enable manual fan control and let the card do its thing, it would peak at 92°C under full load. Depending on how loud I want my computer, I can choose to keep it at 57°C at full load with an ambient air temperature of 32°C.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28500 - Posted: 13 Feb 2013 | 22:43:57 UTC - in response to Message 28485.

For my hardware I get "Our recommended PSU Wattage: 431 W", which doesn't actually factor in the modest OC on CPU and GPU. Crunching POEM, the machine consumes 205 W, measured at the wall. Running GPU-Grid (quite taxing) and 7 Einstein tasks on the CPU, I reach 250 W, give or take 10 W.

Sure, a 430 W unit would be sufficient for my setup (I'm actually using a 500 W Enermax 80+ Gold). But as you can see they calculate very generously, as in my case the maximum PSU usage would be about 60%.
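To put numbers on why these calculators look so generous, here's a back-of-envelope sketch. The 90% efficiency figure is an assumption for an 80+ Gold unit at this load, not something I measured; only the wall draw is a real reading.

// How hard is the PSU actually working? Wall draw minus conversion losses
// gives the DC load the PSU has to deliver.
#include <cstdio>

int main() {
    const double wall_watts = 250.0;   // measured at the wall while crunching
    const double efficiency = 0.90;    // assumed for an 80+ Gold unit at this load
    const double psu_rating = 500.0;   // rated DC output of the installed PSU
    const double dc_load    = wall_watts * efficiency;
    std::printf("DC load: ~%.0f W, i.e. about %.0f%% of a %.0f W unit\n",
                dc_load, 100.0 * dc_load / psu_rating, psu_rating);
    return 0;
}

Even against the recommended 431 W, that works out to only a little over half the rating once conversion losses are taken into account.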

MrS
____________
Scanning for our furry friends since Jan 2002

Xenu
Send message
Joined: 18 Mar 10
Posts: 2
Credit: 7,829,013
RAC: 0
Level
Ser
Scientific publications
watwatwatwatwatwat
Message 29398 - Posted: 9 Apr 2013 | 16:50:14 UTC

I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.

I got rid of my nVidia cards due to many of the newer AMDs having such good electrical efficiency. I've been supporting GPUGrid through donate@home, but making bitcoins isn't quite the same as doing scientific research, and only a limited number of bitcoins can be made, so donate's appeal diminishes over time. Maybe some of the fellowships funded by donate could be used to port GPUGrid to OpenCL before the bitcoins run out altogether?

I just think GPUGrid's missing out, and wish the project would do something about it. It was my original card crunching project, and if there stops being any reasonable way for me to crunch for it, even indirectly, I'm gonna be bummed. Not bummed enough to spend $1000 replacing my cards with nVidias, but bummed nonetheless.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29400 - Posted: 10 Apr 2013 | 12:09:03 UTC - in response to Message 29398.

I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.

I got rid of my nVidia cards due to many of the newer AMDs having such good electrical efficiency.

If you have an HD 7000 series card, then you are in luck on Folding@home. Their new Core_17 (currently in beta testing) does better on AMD than Nvidia, and improvements are still being made. Not only is the PPD better, but the CPU usage is comparably low, which is unusual for OpenCL. Note however that the higher-end cards do much better than the low-end cards, due to their quick-return bonus (QRB).

You can wait for the release, or try out the betas by setting a flag in the FAHClient to get them. CUDA has long reigned supreme over there, and so it is quite a change. Note however that you have to log in to their forums to see the beta testing section, which is at the bottom of the page. And you will need to check the Wiki to see how to set the beta flag. Given the amount of development required to get OpenCL to work properly (they have been at it for years), that will get you results far faster than waiting for GPUGrid to do it.

John C MacAlister
Send message
Joined: 17 Feb 13
Posts: 181
Credit: 144,871,276
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 29401 - Posted: 10 Apr 2013 | 13:23:36 UTC - in response to Message 29400.
Last modified: 10 Apr 2013 | 13:25:26 UTC

I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.

I got rid of my nVidia cards due to many of the newer AMDs having such good electrical efficiency.

If you have an HD 7000 series card, then you are in luck on Folding@home. Their new Core_17 (currently in beta testing) does better on AMD than Nvidia, and improvements are still being made. Not only is the PPD better, but the CPU usage is comparably low, which is unusual for OpenCL. Note however that the higher-end cards do much better than the low-end cards, due to their quick-return bonus (QRB).

You can wait for the release, or try out the betas by setting a flag in the FAHClient to get them. CUDA has long reigned supreme over there, and so it is quite a change. Note however that you have to log in to their forums to see the beta testing section, which is at the bottom of the page. And you will need to check the Wiki to see how to set the beta flag. Given the amount of development required to get OpenCL to work properly (they have been at it for years), that will get you results far faster than waiting for GPUGrid to do it.


Hi, Jim1348:

I cannot find any mention of a likely implementation date for the new Core_17 at Folding@home. Have I missed the implementation date, or has it not yet been given to us?

Regards,

John

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29402 - Posted: 10 Apr 2013 | 17:26:23 UTC - in response to Message 29401.

John,

Folding does not give out implementation dates, as a matter of policy. But they first started a closed beta (only to registered beta testers who had a key) a couple of months ago, and now have moved to open beta, meaning anyone can do it. But still only the registered beta testers get help if it crashes. I am not a registered beta tester, but it has worked fine on my HD 7770 for several days now and I don't see many problems in the forums (except for the predictable ones from overclocking; don't do it, especially in a beta test).

Next, they move to the "advanced" stage, meaning it is out of beta and anyone can get it if they set an "advanced" flag (and get help if they need it). Finally, they do a full release, and anyone gets it, at least if they have an AMD card; I don't know if they will give it to the Nvidia crowd, who may stay on the present Core_15 for some time longer.

They are still making changes and speed improvements:
http://folding.typepad.com/news/2013/03/sneak-peak-at-openmm-51-about-2x-increase-in-ppd-for-gpu-core-17.html
So I don't know when it will get out of beta, but I would expect in a month or two. And then another month or two of Advanced. If that sounds like fun to you, give it a try; otherwise just wait for the formal release, which I assume will be by the latter part of the summer.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29625 - Posted: 1 May 2013 | 10:52:21 UTC - in response to Message 29402.

Last month Intel started supporting OpenCL/GL 1.2 on its 3rd and 4th generation CPUs, and on the Xeon Phi. They even have a beta dev kit for Linux. It's my understanding, however, that if you are using a discrete GPU in a desktop you won't be able to use the integrated GPU's OpenCL functionality (it might be board/BIOS dependent), but it's available in most new laptops.
AMD's HD7000 series are OpenCL1.2 capable and have been supported from their release.
To date NVidia is stuck on OpenCL 1.1, even though the GTX600 series cards are supposedly OpenCL 1.2 capable. They haven't bothered to support 1.2 with drivers. I was hoping that the arrival of the Titan would prompt NVidia to start supporting 1.2, but it hasn't happened so far. Perhaps the official HD 7990s will encourage NVidia to support OpenCL 1.2, given AMD's embarrassingly superior OpenCL compute capabilities.
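You can check for yourself which version a driver actually exposes. Here is a minimal host-side sketch (the device names and version strings in the comments are only examples of what you might see) that enumerates every OpenCL platform and device and prints the OpenCL version string each one reports:

// Minimal sketch: list OpenCL devices and the OpenCL version their driver exposes.
// Compile with e.g.: g++ clversions.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    if (num_platforms == 0) { std::printf("No OpenCL platforms found.\n"); return 1; }
    if (num_platforms > 16) num_platforms = 16;
    cl_platform_id platforms[16];
    clGetPlatformIDs(num_platforms, platforms, nullptr);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_uint num_devices = 0;
        // CL_DEVICE_TYPE_ALL also picks up CPU devices, not just GPUs.
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices) != CL_SUCCESS)
            continue;
        if (num_devices > 16) num_devices = 16;
        cl_device_id devices[16];
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, num_devices, devices, nullptr);
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0}, version[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, nullptr);
            std::printf("%s -> %s\n", name, version);   // e.g. "OpenCL 1.1 CUDA" vs "OpenCL 1.2 AMD-APP"
        }
    }
    return 0;
}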
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29627 - Posted: 1 May 2013 | 13:20:08 UTC - in response to Message 29625.
Last modified: 1 May 2013 | 13:29:58 UTC

To date NVidia is stuck on OpenCL 1.1, even though the GTX600 series cards are supposedly OpenCL1.2 capable. They haven't bothered to support 1.2 with drivers. I was hoping that the arival of the Titan would prompt NVidia to start supporting 1.2 but it hasn't happened so far. Perhaps the official HD 7990's will encourage NVidia to support OpenCL1.2, given AMD's embarrassingly superior OpenCL compute capabilities.

Conversely, AMD has made great strides in OpenCL for their 7xxx series cards. On the WCG OpenCL app even the lowly HD 7770 is twice as fast as an HD 5850 that uses twice the power, so 4x greater efficiency. At the Open_CL Einstein, the GTX 660 is faster than the 660 TI. In fact the 560 TI is faster than the 660 TI. Seems strange?

Edit: I won't even mention that at Einstein the Titan runs at only 75% of the speed of the HD 7970, which is well under 1/2 the price. Oops, mentioned it...

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 177,358
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30427 - Posted: 26 May 2013 | 20:05:11 UTC - in response to Message 29627.


Conversely, AMD has made great strides in OpenCL for their 7xxx series cards. On the WCG OpenCL app even the lowly HD 7770 is twice as fast as an HD 5850 that uses twice the power, so 4x greater efficiency. At the Open_CL Einstein, the GTX 660 is faster than the 660 TI. In fact the 560 TI is faster than the 660 TI. Seems strange?

Edit: I won't even mention that at Einstein the Titan runs at only 75% of the speed of the HD 7970, which is well under 1/2 the price. Oops, mentioned it...


Pls correct me if I'm wrong:
AMD cards are stable, run more advanced software, are faster, use less energy and are cheaper.

Not so long ago a mod posted here that the reason GPUGRID does not use AMD is programmer time and a resource problem.
OK, we need to accept that.
But maybe it's time to think about alternatives that reflect reality.
The first nVidia 7xx cards are about to ship, and AMD's HD 8xxx is under way. I think in a month or so we will see the first results, which will make a choice easier.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30432 - Posted: 26 May 2013 | 22:08:15 UTC - in response to Message 30427.
Last modified: 26 May 2013 | 22:18:01 UTC

AMDs have been looked at several times in the past. The project wasn't able to use them for research, but did offer an alternative funding use for these GPUs. Not everybody's cup of tea, but ultimately it was good for the project. I expect new code will be tested on new AMDs. If the project can use AMD GPUs it will; if not, it won't. They are the best people to determine what the project can and can't do, and if they need help, assistance or advice, they have access to it.

If the time is right for the project I would suggest diversifying into other areas of GPU research to accommodate existing or near future AMD GPU resources.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30457 - Posted: 27 May 2013 | 20:39:25 UTC - in response to Message 30427.

Pls correct me if I'm wrong:
AMD cards are stable, run more advanced software, are faster, use less energy and are cheaper.

While I like AMD GPUs and especially the "new" GCN architecture, I think this goes too far.

The AMDs generally provide a bit more raw horsepower at a given price point, but not dramatically so. I also don't see a general power consumption advantage since nVidia introduced Kepler. And stable? Sure, given the right software task.

But saying "run more advanced software" makes it pretty difficult. They support higher OpenCL versions, sure. This is likely not an issue of hardware capability but rather nVidia not allocating ressources to OpenCL drivers. This won't change the fact that currently nVidias can not run the "more advanced" OpenCL code.. but I'm pretty sure that anything implemented there could just as well be written in CUDA. So can the GPUs run that ode or not? Really depends on your definition of "that code".

So it really boils down to OpenCL versus CUDA. CUDA is clearly more stable and the much more advanced development platform, and nVidia's GPUs perform rather well using CUDA. Switch to OpenCL, however, and things don't look as rosy any more. There's still a lot of work on the table: drivers, SDKs, libraries etc. But they're doing their homework and progress is intense. It seems that, regarding performance optimization, nVidia has fallen so far behind AMD here that it is starting to hurt, and we're beginning to see the differences mentioned above. Some of that may be hardware (e.g. the super-scalar shaders are almost impossible to keep busy all the time), some of it "just" a lack of software optimization. In the latter case the difference is currently real... but for how long?

MrS
____________
Scanning for our furry friends since Jan 2002

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 177,358
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31437 - Posted: 12 Jul 2013 | 15:21:42 UTC - in response to Message 30457.


While I like AMD GPUs and especially the "new" GCN architecture, I think this goes too far.

MrS


The latest discussions at Einstein made it necessary to recheck my view of things.
The performance chart I was referring to shows the performance with CUDA 3.2 apps. So I was comparing the latest OpenCL version against (outdated) CUDA, which does not reflect actual reality.
When Einstein switches over to CUDA 5.x things might look very different, and the high-score table will look different too.

Mea culpa. I apologize for that.

Alex

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31441 - Posted: 12 Jul 2013 | 18:34:45 UTC - in response to Message 31437.

The latest discussions at Einstein made it necessary to recheck my view of things.
The performance chart I was referring to shows the performance with CUDA 3.2 apps.

Alex, the reason for Einstein using CUDA32 (according to Bernd) is:

"Our BRP application triggers a bug in CUDA versions 4 & 5 that has been reported to NVida. The bug was confirmed but not fixed yet. Until it has, we're stuck with CUDA 3."

http://einstein.phys.uwm.edu/forum_thread.php?id=10028&nowrap=true#123838

He goes on to say they reported the bug to NVidia 2.5 years ago. Still not fixed. Glad we didn't hold our breath...

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 177,358
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31444 - Posted: 12 Jul 2013 | 21:01:24 UTC - in response to Message 31441.


Alex, the reason for Einstein using CUDA32 (according to Bernd) is:

"Our BRP application triggers a bug in CUDA versions 4 & 5 that has been reported to NVida. The bug was confirmed but not fixed yet. Until it has, we're stuck with CUDA 3."

http://einstein.phys.uwm.edu/forum_thread.php?id=10028&nowrap=true#123838

He goes on to say they reported the bug to NVidia 2.5 years ago. Still not fixed. Glad we didn't hold our breath...



Yes, thanks for the link, I know this thread. But I did not know that there is a performance increase of about 2x (according to the discussion at Einstein) when using the newest hardware and CUDA 4.x.

I have no way to compare the performance of CUDA 4.x WUs against AMD OpenCL. But with this (for me) new information I have a feeling of some kind of unfairness in comparing CUDA 3.x against the newest AMD OpenCL. This is what I wanted to point out.

As far as the development at Einstein is concerned: they do a great job there, and they have good reasons not to use CUDA 4.x. And they have WUs for Intel GPUs and also for Android (at Albert).

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31445 - Posted: 12 Jul 2013 | 21:41:07 UTC - in response to Message 31444.

I have no way to compare the performance of CUDA 4.x WUs against AMD OpenCL. But with this (for me) new information I have a feeling of some kind of unfairness in comparing CUDA 3.x against the newest AMD OpenCL. This is what I wanted to point out.

All you can compare is what's there. If NV won't fix their bugs, that's their problem. You can certainly compare OpenCL on both platforms in various projects. Supposedly that's where GPU computing is going...

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31448 - Posted: 12 Jul 2013 | 23:26:27 UTC - in response to Message 31444.

GPUGrid has seen plenty of CUDA problems in the distant and recent past, and there are reasons why GPUGrid didn't move from CUDA 4.2 to CUDA 5 and still has problems with the existing 4.2. There are different problems at different projects - some can adapt, some can't.

Porting the CUDA code to OpenCL would result in a performance drop for NVidia cards, and on anything other than GCN the performance would be far too poor. Even GCN is buggy, and anything but the simplest models is still very inefficient.

BTW, OpenCL isn't limited to GPUs; it can run on CPUs. Forthcoming APU architectures from AMD should prove useful for some research techniques. Certainly an opportunity exists to explore this potential, but whether it will suit this project's research ambitions or not is down to Gianni.
In my opinion new cures and treatment strategies can only be facilitated by new research techniques and avenues, but ask again in 6 or 7 months when the technology arrives. Note that it's not all about how fast something can be done; it's also about what can now be done that previously couldn't. GPUs aren't just getting faster, they also introduce new research resources.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31450 - Posted: 13 Jul 2013 | 0:10:37 UTC

I'm (im)patiently waiting for GPUGrid to get their app working so I can bring my 780 here. Currently it is at Einstein. While I love that project for what it is, the fact that they're effectively stuck on an older CUDA platform, and thus not getting the most they can out of the newer cards, makes me not want to crunch there.

It's most definitely a GCN world over there. Yes, you can crunch just fine there with NVIDIA GPUs, just don't expect any performance increases through different app updates.

If it weren't for GPUGrid, I would probably have an ATI rig. I still want to build one; the problem is, I like building high-end machines, and I would rather all my resources go here. :P

Different cards for different purposes I suppose is what it really all boils down to though. And there's nothing wrong with that.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31508 - Posted: 14 Jul 2013 | 14:22:57 UTC - in response to Message 31448.

Even GCN is buggy and anything but the most simple models is still very inefficient.

Anything complex enough to be called a processor is buggy. We need firmware and drivers to work around these issues, and to make sure they don't get too far in the way of actually using the chip.

Regarding GCN not being efficient at handling complex code: I'd say it's actually the other way around, that nVidia took a step back with Kepler for complex and variable code (while gaining efficiency for regular and simpler cases). If things are normal, both architectures are about on par. But there are certainly cases where GCN completely destroys Kepler (e.g. Anand's compute benchmarks). This could be due to bad OpenCL drivers rather than hardware... but then Fermi shouldn't fare as well as it does in comparison, should it?

And there are of course cases where nVidia running CUDA destroys AMD running OpenCL.. software optimization is a huge part of GPU performance.

Anyway, the GPU-Grid team seems to have their hands full currently with a reduced staff and many problems popping up. Not much room for further development when bug fixing is so urgent...

MrS
____________
Scanning for our furry friends since Jan 2002

ronny
Send message
Joined: 20 Sep 12
Posts: 17
Credit: 19,131,325
RAC: 0
Level
Pro
Scientific publications
watwat
Message 31816 - Posted: 4 Aug 2013 | 15:58:49 UTC - in response to Message 28113.

We ended up designing and building our own GPU chassis so tired of having poor cooling.


Ooh, show us pictures! I want to build a solid (silent) cool (as in temperature, not epicness) case for multiple computers (to make a central house computer, much like central heating, since it will also heat the house), and need ideas.

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 31828 - Posted: 5 Aug 2013 | 10:47:05 UTC

Here you go http://www.acellera.com/products/metrocubo/

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31833 - Posted: 5 Aug 2013 | 22:04:14 UTC - in response to Message 31828.

There is no way I would buy a Blue case
( ; P

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 31877 - Posted: 7 Aug 2013 | 16:06:32 UTC
Last modified: 7 Aug 2013 | 16:07:19 UTC

They actually make them in any color :D We have a green and an orange one in the lab and I am still waiting for my pink one, hahah

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31884 - Posted: 7 Aug 2013 | 19:00:25 UTC - in response to Message 31877.

and I am still waiting for my pink one, hahah

Isn't that Noelia's workstation? ;)

MrS
____________
Scanning for our furry friends since Jan 2002


Message boards : Graphics cards (GPUs) : Shot through the heart by GPUGrid on ATI
