Shot through the heart by GPUGrid on ATI

Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28221 - Posted: 26 Jan 2013, 19:17:24 UTC - in response to Message 28216.  

As I revealed, I have a very expensive and powerful computer which managed to work up 17 million credits, but which had to be rebuilt three times. That is a very costly situation, definitely to be avoided if possible.


A few of us tried to help you avoid an ugly situation, but you wouldn't listen. Whatever losses you suffered are YOUR fault because you would not listen to common sense. If it were all the fault of this project or any other project, then why are the rest of us not forced to rebuild our rigs too? Why has this happened only to you?

Your rig is no different from several others here with respect to hardware specs. The reason yours cost so much to build and repair is that you don't know how or where to buy. That was all explained to you months ago, but you wouldn't have any of it. You made your bed, now lie in it, and stop blaming it on others.
ID: 28221
oldDirty

Joined: 17 Jan 09
Posts: 22
Credit: 3,805,080
RAC: 0
Message 28394 - Posted: 3 Feb 2013, 13:33:20 UTC - in response to Message 28113.  
Last modified: 3 Feb 2013, 13:50:01 UTC

hi,
no big hopes for ATI. The old code, which ran OpenCL, has been deprecated in favor of a new one which is now CUDA only. It is still technically possible to do OpenCL, but it requires a lot of work, only justified if AMD really brings in a top card.



gdf

Or better: is GPUGrid willing to break from the nVidia contract?
But what do I know; just my 2 cents.

O.K., now I know the score on ATI.

So, I could go the other way: I could run GPUGrid as the only GPU project on the current machine, and run the others, EINSTEIN, SETI, and add MILKY WAY, on a machine with ATI.


You can add WCG HCC and POEM@Home for nice support on the No. 1 card for really good crunching, the AMD HD 79xx.

ID: 28394
Jim1348

Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 28395 - Posted: 3 Feb 2013, 15:10:04 UTC
Last modified: 3 Feb 2013, 15:13:03 UTC

WCG/HCC will be out of work in less than four months, and I don't know of any new projects that will be using GPUs at all.

POEM has only enough work to dribble out a few work units to each user, and even worse, when HCC ends a lot of those people will move over to POEM. So there is no net gain in work done, only dividing the present work among more people.

The fact is that if you are interested in biomedical research, your only real option is Nvidia for the foreseeable future. (Folding may have an improved AMD core out eventually, though Nvidia will probably still be better.)
ID: 28395
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28396 - Posted: 3 Feb 2013, 20:20:54 UTC - in response to Message 28394.  

hi,
no big hopes for ATI. The old code, which ran OpenCL, has been deprecated in favor of a new one which is now CUDA only. It is still technically possible to do OpenCL, but it requires a lot of work, only justified if AMD really brings in a top card.



gdf

Or better: is GPUGrid willing to break from the nVidia contract?
But what do I know; just my 2 cents.


A contract? You mean nVIDIA is paying GPUgrid to use only nVIDIA thereby ignoring thousands of very capable AMD cards installed in machines that run BOINC? What is GPUgrid's motivation for agreeing to that contract... to minimize their production?

BOINC <<--- credit whores, pedants, alien hunters
ID: 28396
MJH

Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 28399 - Posted: 3 Feb 2013, 22:06:27 UTC - in response to Message 28396.  


A contract? You mean nVIDIA is paying GPUgrid


If only.

MJH.
ID: 28399
AdamYusko

Joined: 29 Jun 12
Posts: 26
Credit: 21,540,800
RAC: 0
Message 28405 - Posted: 4 Feb 2013, 1:35:09 UTC

While I am not affiliated with GPUGrid, I think it has to do with the fact that the GPUGrid code is best suited to CUDA, which outperforms OpenCL out of the box; OpenCL only becomes comparable to the CUDA setup after its settings have been custom-tailored and optimized.

I have not done much more reading into CUDA vs. OpenCL beyond that, but depending on the types of tasks needed and the way the code is implemented, they will perform various tasks with varying proficiency. Some projects are best suited to OpenCL while others are best suited to CUDA. That does not change the fact that if you want to get the most out of a card without needing to tinker with settings to optimize it for a given task, CUDA is always better.
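To make the "out of the box" point concrete, here is a minimal sketch of my own (not GPUGrid code; the saxpy kernel, buffer sizes and the 256-threads-per-block choice are all invented for the example). In CUDA the launch configuration is essentially the only tuning knob, whereas an OpenCL port of the same kernel would also need explicit platform/device/context/queue setup and per-vendor work-group sizing before its performance becomes comparable.

// A minimal sketch, not GPUGrid code: single-precision a*x + y on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The launch configuration is the only "tuning" needed here; an OpenCL port
    // would also have to create a platform, device, context, queue and program,
    // and pick a work-group size that suits the particular vendor's hardware.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                     // expect 4.0
    cudaFree(x); cudaFree(y);
    return 0;
}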


I hate that quite a few people get on this forum and think GPUGrid has some sort of vendetta against AMD cards. As an academic who understands the time crunch imposed on the researchers, I know they are a small crew running on a relatively tight budget (face it, next to no educational institution throws a lot of money towards supporting a project like this, and most institutions are relatively tight with their budgets, as they would rather grow their endowment). They made a decision some years ago that CUDA better suited their needs, so they scrapped the OpenCL code because it was becoming too much of a headache to maintain. Yet so many people act as though they are not there to help the projects out, but rather the projects are there to serve their users' every need. I honestly find it sick and twisted.
ID: 28405
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28412 - Posted: 4 Feb 2013, 19:57:50 UTC - in response to Message 28405.  

This project's CUDA code could probably still be ported to OpenCL, but the performance would be poorer for NVidia cards, and it still wouldn't work, or wouldn't work well, on AMD cards.
It's debatable whether the research team even has enough personnel to properly support both OpenCL and CUDA research. There is also a big financial overhead; it would probably require a second server and support.

Performance is largely down to the drivers and AMD's support for OpenCL on their cards, which are different: they do some things faster, others slower, and simply can't do some things at all. NVidia have a bigger market share and have been supporting CUDA for longer. As well as being more mature, CUDA is more capable when it comes to more complex analysis.

GPUGrid has been and still is the best research group that uses Boinc. Just because it hasn't been able to perform OpenCL research on AMD GPU's doesn't change that, nor will it.

If AMD's forthcoming HD 8000 series is supported with better drivers and is more reliable when it comes to OpenCL then perhaps things will change. However, NVidia aren't sitting about doing nothing - they are developing better GPU's and code and will continue to do so for both OpenCL and CUDA for the foreseeable future.
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 28412
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28414 - Posted: 4 Feb 2013, 22:40:01 UTC - in response to Message 28412.  

Before anybody accuses me of having a contract with nVIDIA, be aware that I just took delivery of an AMD 7970; it's installed in host 154485@POEM.

This notion that CUDA is better suited for complex data analysis and modeling than OpenCL is widely reported on the 'net. skgiven isn't just making that up because he has a contract with nVIDIA, it's generally accepted as fact. I've seen it reported in several different places and have never seen anybody dispute it.

I would love to see GPUgrid support my sexy new 7970, but I don't think it's a wise thing for them to do at this point in time. Supporting CUDA alone is using up a lot of their development time, and from reports in the News section, CUDA is getting them all the data they can handle.
BOINC <<--- credit whores, pedants, alien hunters
ID: 28414
tito

Joined: 21 May 09
Posts: 22
Credit: 2,002,780,169
RAC: 0
Message 28415 - Posted: 4 Feb 2013, 23:09:20 UTC

POEM on a 7970 + Athlon X2?
A waste of GPU power. PM me and I will tell you what's going on with POEM on GPUs (until tomorrow; after that, no internet for 10 days).
BTW, owners of AMD GPUs can support GPUGrid with Donate@home. I must think about it, as distrrgen has started to make me angry.
ID: 28415
Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28442 - Posted: 7 Feb 2013, 15:37:13 UTC - in response to Message 28414.  

This notion that CUDA is better suited for complex data analysis and modeling than OpenCL is widely reported on the 'net. skgiven isn't just making that up because he has a contract with nVIDIA, it's generally accepted as fact. I've seen it reported in several different places and have never seen anybody dispute it.

I think a good way to put it is that CUDA is more mature than Open_CL. It's been around much longer; however, Open_CL is catching up. It's also true that since CUDA is NVidia's proprietary language, they can tweak it to perform optimally on their particular hardware. The downside is that they have the considerable expense of having to support both CUDA and Open_CL. AMD, on the other hand, dropped their proprietary language and went for the open solution. Time will tell.
ID: 28442
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28443 - Posted: 7 Feb 2013, 18:59:08 UTC - in response to Message 28442.  

My statement summarizes the current state of affairs; yours holds open future possibilities, so I think yours is a better way to put it, except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA, but I think that will be difficult to accomplish because it's trying to work with 2 different machine architectures. As you say, time will tell. Amazing things can happen when the right combination of talent, money and motivation is brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do), but I think sales are brisk for AMD as well as nVIDIA, so the money is probably there; it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.

BOINC <<--- credit whores, pedants, alien hunters
ID: 28443
Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28445 - Posted: 7 Feb 2013, 22:56:05 UTC - in response to Message 28443.  

My statement summarizes the current state of affairs; yours holds open future possibilities, so I think yours is a better way to put it, except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA, but I think that will be difficult to accomplish because it's trying to work with 2 different machine architectures. As you say, time will tell.

Open_CL works with far more than just ATI/AMD and NVidia GPUs. From Wikipedia:

"Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), DSPs and other processors. OpenCL includes a language (based on C99) for writing kernels (functions that execute on OpenCL devices), plus application programming interfaces (APIs) that are used to define and then control the platforms. OpenCL provides parallel computing using task-based and data-based parallelism. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. It has been adopted by Intel, Advanced Micro Devices, Nvidia, and ARM Holdings.
For example, OpenCL can be used to give an application access to a graphics processing unit for non-graphical computing (see general-purpose computing on graphics processing units). Academic researchers have investigated automatically compiling OpenCL programs into application-specific processors running on FPGAs, and commercial FPGA vendors are developing tools to translate OpenCL to run on their FPGA devices."
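To illustrate the "language (based on C99) for writing kernels" part, here is a small sketch of my own (not from the article or from any project's source): the same element-wise operation written once as OpenCL C text, which the host API would compile at run time for whatever device it finds, and once as a CUDA kernel, which nvcc compiles ahead of time for NVidia hardware only.

// Illustration only: comparing kernel styles. The OpenCL source is kept as a
// string; actually running it would require the OpenCL host API (platform,
// context, queue, clCreateProgramWithSource, clBuildProgram, ...), which the
// CUDA runtime handles implicitly for its own kernels.
#include <cuda_runtime.h>

static const char *opencl_scale_src =                 // OpenCL C (C99-based) kernel text
    "__kernel void scale(__global float *v, float a, int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) v[i] *= a;\n"
    "}\n";

__global__ void scale(float *v, float a, int n) {     // the CUDA equivalent
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main() {
    (void)opencl_scale_src;                           // shown for comparison only
    const int n = 1024;
    float *v;
    cudaMallocManaged(&v, n * sizeof(float));
    for (int i = 0; i < n; ++i) v[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(v, 3.0f, n);
    cudaDeviceSynchronize();                          // every v[i] is now 3.0
    cudaFree(v);
    return 0;
}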

Amazing things can happen when the right combination of talent, money and motivation is brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do), but I think sales are brisk for AMD as well as nVIDIA, so the money is probably there; it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.

NVidia is profitable and has JUST started paying a dividend as of last quarter. AMD isn't making a profit, but hopes to in 2013. It's funny that so many kids pounded AMD for trying to rip them off after the 79xx GPUs were introduced, considering that AMD was losing money. AMD does not pay a dividend, so the shareholders aren't getting anything. The stock price of AMD has not done well in recent years, so the shareholders have been looking at considerable losses. It's too bad for us, as competition drives technological advancement.

Regards/Beyond
ID: 28445
Retvari Zoltan

Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 28446 - Posted: 8 Feb 2013, 8:38:37 UTC - in response to Message 28445.  

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.
ID: 28446
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28447 - Posted: 8 Feb 2013, 8:47:08 UTC - in response to Message 28445.  

That's a pretty impressive list of things people think OpenCL is good for. I won't disagree with them because my expertise on the subject is stretched even at this point. I guess my impression is that it's kind of like a Swiss Army knife: they're pretty cool-looking things and one thinks there is no job they can't do, but the fact is they really don't work that well. Or those screwdrivers that have 15 interchangeable bits stashed in the handle: they're compact and take up a lot less room than 15 screwdrivers, but if you look in a mechanic's tool chest you won't find one. Mechanics hate them, and if you give them one they'll toss it in the trash bin. And so OpenCL maybe does a lot of different things, but does it do any of them well? I honestly don't know; I don't work with it, I'm just an end user.

Btw, the SAT@home project announced testing of their new GPU app in this thread. It's CUDA so it seems they don't think much of OpenCL either.

You're right about the benefits of competition. Perhaps after a few years of competition and maturation OpenCL will push CUDA out.

BOINC <<--- credit whores, pedants, alien hunters
ID: 28447
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28448 - Posted: 8 Feb 2013, 9:12:07 UTC - in response to Message 28446.  

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry.


If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it then jam more of the current generation of processors into it. Or does it not work that way?

Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.


Why less profitable... market saturation?

BOINC <<--- credit whores, pedants, alien hunters
ID: 28448
skgiven
Volunteer moderator
Volunteer tester

Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28449 - Posted: 8 Feb 2013, 13:04:22 UTC - in response to Message 28448.  

At present CUDA is faster and more powerful for more complex applications. Because of the success of Fermi, NVidia was contracted specifically to build supercomputer GPU's. They did so, and when they were built, that's where they all went, until recently; you can now buy GK110 Teslas. This contract helped NVidia financially; it meant they had enough money to develop both supercomputer GPU's and gaming GPU's, and thus compete on these two fronts. AMD don't have that luxury and are somewhat one-dimensional, being OpenCL only.

Despite producing the first PCIE3 GPU and manufacturing CPU's, there are no 'native' PCIE3 AMD motherboards (just the odd bespoke exception from ASUS). An example of the lack of OpenCL maturity is the over-reliance on PCIE bandwidth and system memory rates; this isn't such an issue with CUDA. This limitation wasn't overcome by AMD, and they failed to support their own financially viable division. So to use an AMD GPU at PCIE3 rates you need to buy an Intel CPU!

What's worse is that Intel don't make PCIE GPU's and can thus limit and control the market for their own benefit. It's no surprise that they are doing this. 32 PCIE lanes simply means you can only have one GPU at PCIE3 x16, and the dual-channel RAM hurts discrete GPUs the most. While Haswell is supposed to support 40 PCIE lanes, you're still stuck with dual-channel RAM, and the L4 cache isn't there to support AMD's GPU's!
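On the point about over-reliance on PCIE bandwidth: a quick way to see how well a given slot/CPU combination can feed a card is to time a pinned host-to-device copy. This is a sketch of mine, not project code; the buffer size is arbitrary, and the rough expectation (around 5-6 GB/s on a PCIE2 x16 slot, roughly double on PCIE3 x16) is only a ballpark, which is exactly why bandwidth-hungry apps behave so differently across systems.

// Minimal sketch: estimate effective host-to-device PCIe bandwidth with CUDA events.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;        // 256 MiB test buffer
    float *h, *d;
    cudaMallocHost((void**)&h, bytes);        // pinned host memory, needed for full-speed DMA
    cudaMalloc((void**)&d, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds
    printf("Host->Device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}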
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 28449
Retvari Zoltan

Joined: 20 Jan 09
Posts: 2380
Credit: 16,897,957,044
RAC: 0
Message 28451 - Posted: 8 Feb 2013, 21:21:34 UTC - in response to Message 28448.  

If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it then jam more of the current generation of processors into it. Or does it not work that way?

It's possible to build a faster supercomputer that way, but its running costs will be higher, therefore it might not be financially viable. To build a better supercomputer which fits within the physical limitations (power consumption, dimensions) of the previous one while being faster at the same time, the supplier must develop their technology.

Why less profitable... market saturation?

Basically because they are selling the same chip much cheaper for gaming than for supercomputers.
ID: 28451
Beyond

Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28452 - Posted: 8 Feb 2013, 21:45:19 UTC - in response to Message 28446.  

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.

With more than one competitor, GPUs will obviously progress far faster than in a monopolistic scenario. We've seen technology stagnate more than once when competition was lacking; I could name a few examples if you like. I've been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80-based CP/M machines, and then the good old 8088 and 8086 CPUs...
ID: 28452
Dagorath

Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28456 - Posted: 9 Feb 2013, 2:53:59 UTC - in response to Message 28452.  
Last modified: 9 Feb 2013, 3:02:54 UTC

...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether there is or isn't competition in the (GP)GPU industry. Moreover the GPU based products made for gaming purposes become less and less important (i.e. profitable) along the way.

With more than one competitor, GPUs will obviously progress far faster than in a monopolistic scenario. We've seen technology stagnate more than once when competition was lacking; I could name a few examples if you like. I've been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80-based CP/M machines, and then the good old 8088 and 8086 CPUs...


Never one to pass up the opportunity for a little brinksmanship, or perhaps just the opportunity to reminisce: my first one was a Heathkit a friend and I soldered together with an iron I had used as a child for woodburning, that and solder the size plumbers use. With no temperature control on the iron, I took it to my uncle, who ground the tip down to a decent size on his bench grinder. We didn't have a clue in the beginning, and I wish I could say we learned fast, but we didn't. It yielded an early version of the original Apple that Wozniak and friends built in... ummm... whose garage was it... Jobs'? Built it, took 2 months to fix all the garbage soldering and debug it, but finally it worked. After that, several different models of Radio Shack's 6809-based Color Computer, the first with 16K RAM and a cassette tape recorder for storage, and the last with 1 MB RAM I built myself and an HD interface a friend designed and had built at a small board jobber in Toronto. He earned an article in PC mag for that; it was a right piece of work. That gave me 25 MB of storage and was a huge step up from the 4-drive floppy array I had been using. It used the OS/9 operating system (Tandy's OS/9, not a Mac thing), not as nice as CP/M but multi-tasking and multi-user. Friends running 8088/86 systems were amazed. And it had a bus connector and parallel port we used and for which we built tons of gizmos, everything from home security systems to engine analyzers. All with no IRQ lines on the 6809, lol.

I passed on the 80286 because something told me there was something terribly wrong, though I had no idea what it was. Win 2.x and 3.1 were useless to me since my little CoCo NEVER crashed and did everything Win on a '286 could do, including run a FidoNet node, Maximus BBS plus BinkleyTerm. Then the bomb went off... Gates publicly declared the '286 was braindead, IBM called in their option on OS/2, the rooftop party in Redmond, OS/2 being the first stable multitasking GUI OS to run on a PC and evolving fairly quickly into a 32-bit OS while Win did nothing but stay 16-bit and crash a lot. Ran OS/2 on a '386 I OC'd and built a water-cooling system for, through the '486 years, then a brief dalliance with Win98 on my first Pentium, which made me puke repeatedly after rock-solid OS/2 and genuine 32-bitness, CP/M and OS/9, so on to Linux, which I've never regretted for a minute.

Windows never really had any competition. IBM priced OS/2 right out of the market, so it was never accepted, never reached critical mass, and IBM eventually canned it. Apple did the same with the Mac but somehow clung on, perhaps for the simple reason that they were able to convince the suckers their Macs were a cut above PCs, the pitch they still use today. CP/M died, Commodore died, and Gates was the last man standing, no competition. And that is why Windows is such a piece of sh*t. What other examples do you know of?
BOINC <<--- credit whores, pedants, alien hunters
ID: 28456
Dylan

Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Message 28457 - Posted: 9 Feb 2013, 3:20:51 UTC - in response to Message 28456.  

There are exceptions to everything, including the statement "competition drives technological advancement".

One example where this is true, however, is the 600 series GPUs by nvidia. There are rumors that the current 680 is actually the 660 that nvidia had planned; there wasn't anything to compete with the 680 they had originally planned, so they didn't release it and instead rebranded the 660 as a 680, which happened to be close in performance to AMD's 7970, and then built the whole 600 series line around the planned 660, now the current 680.

Furthermore, it is speculated now that nvidia is finally releasing the planned 680 as the Geforce Titan.

Whether these rumors are true or not, it still shows how, without competition (in this case, an AMD card equivalent to nvidia's planned 680), nvidia didn't release what they had planned, instead selling a lesser-performing card for the price of a higher-end one and saving the extreme-performing card (the Titan) for later.

In addition, fewer people will have the Titan because it is $900 versus the current 680 at $500; however, if AMD did have a more powerful card, nvidia would have had to put out the Titan as the 680 a while ago to compete. In other words, if there were some competition, nvidia would have offered a more powerful card, a better piece of technology, for less than the current 680.


I hope this story made sense; as I was typing it out, I felt it could get confusing to read. If one wants more information on these rumors, a Google search for something like "680 actually 660 rumor" will turn up something like this:


http://forums.anandtech.com/showthread.php?t=2234396



ID: 28457