Message boards : Graphics cards (GPUs) : Fermi
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Lucid's marketing says: "The Adventure is a PCIe expansion board based on a derivative of the HYDRA 100 ASIC (LT12102). The system allows easy, high-speed connection to any PC or workstation platform using the PCIe bus. The system is optimized for GPUs and provides superb connectivity to different PCIe devices such as SSDs, HDDs and other peripherals. The board has four PCIe slots and is powered by a standard PC power supply. The Adventure is best for your multi-display, broadcasting, digital signage and storage solutions. The Adventure board is typically deployed within a 4U rack-mount case together with a standard power supply and up to four slots."

That's why I'm interested in a Lucid Adventure! A 1000W PSU and two GTX 480s for every PCIe slot I've got in my PC. GPUGRID runs WUs that use one CPU core per GPU, so that would require a Core i9, and then we're crunching!
MJH · Joined: 12 Nov 07 · Posts: 696 · Credit: 27,266,655 · RAC: 0
It's actually a rebranded Tyan FT72B7015: http://www.tyan.com/product_SKU_spec.aspx?ProductType=BB&pid=412&SKU=600000150. Difficult to source, though. For you keen GPUGRID crunchers, it might be better to find the minimum-cost host system for a GPU. CPU performance isn't an issue for our apps, so a very cheap motherboard-processor combination would do, perhaps in a wee box like this one: http://www.scan.co.uk/Product.aspx?WebProductId=982116

MJH
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
I "guess" that it won't just be Workstation and Mainstream GPUs; Compute deserves its own line. Not now, but soon... Is the Compute market big enough for Nvidia to earn enough on it without a very large reduction in their sales? If I understand the GPU architectures correctly, they do NOT include the capability of reaching memory or peripherals off the graphics board. Therefore, they cannot reach any BOINC projects by themselves.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
I finally found what I was looking for! http://www.lucidlogix.com/product-adventure2000.html One of these babies can put that nasty Fermi outside the case and give me 2 PCIe x16 or 4 PCIe x8 for every one PCIe slot on my mobo. If a cheap Atom is enough, one of these on the mobo would make it possible to use 2-4 high-end CUDA GPUs to play crunchbox: http://www.lucidlogix.com/products_hydra200.html But I don't make PCs, I buy them. So maybe if the price isn't bad, and I can find out where I can get an Adventure 2000, I'd be able to run an external multi-Fermi GPU box...

Looks like a good idea, if GPUGRID decides to rewrite their application to require less communication between the CPU section and the GPU section.
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Are you 100% sure about this? Lucid claims that: "The Adventure is a powerful PCIe Gen 2.0 expansion board for heavy graphics computing environments. The platform allows connection of multiple PCIe-based devices to a standard PC or server rack. The Adventure 2000 series is based on a different derivative of the HYDRA 200 series targeting a wide range of performance market segments. It provides a solution for a wide range of applications such as gaming (driver required), GPGPU, high performance computing, mass storage, multi-display, digital and medical imaging."

That said, what would be required is:

- 1x Lucid Adventure 2000: http://www.lucidlogix.com/product-adventure2000.html
- 1x 4U rack mount
- 1x 850W-1000W PSU
- 2x GTX 480

With that in place, you're supposed to just connect the thing to one of the PCIe slots in your PC. That's "maybe" $2000 a pop... I'm also thinking that if x8 PCIe 2.0 doesn't affect performance, 4x GTX 480 and maybe a 1500W PSU might be possible for "maybe" $3000 a pop...
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
"GPUGRID runs WU's that use 1 core pr GPU, that would require an Core i9, then we're crunching!"

That's not the case: GPUGrid tasks do not require the CPU component of a process to be explicitly associated with one CPU core (or logical core in the case of the i7). So a low-cost single-core CPU could support two (present) high-end GPUs! As quad-core i7s use Hyper-Threading, they have 8 logical cores. So even if GPUGrid tasks were exclusively associated with one core, an i7 could support 8 GPUs! Remember, the faster the CPU, the less CPU time required!

I normally crunch CPU tasks on my i7-920, leaving one logical core free for my two GT240s. Overall my system only uses about 91% of the CPU, so bigger cards would be fine! As my two GT240s (equivalent to one GTX260 sp216) only use 3.5% of my CPU, a GTX 295 would use about 7% and two GTX 295s would use about 14%. Therefore two GTX 480s would use about 33%, so an i7-920 could support 6 Fermi cards crunching on GPUGrid!

Most people would be better off with a highly clocked dual-core CPU (3.33GHz) than say a Q6600 (at only 2.4GHz), or just overclock it and leave a core or two free.

PS. There is no i9; Intel ended up calling it the i7-980X. But fortunately you don't need it, as it costs £855. Better to build a dual GTX 470 based system for that.
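The CPU-headroom arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration using the percentages quoted in the post; the helper function is hypothetical, not part of any BOINC tooling:

```python
# Back-of-the-envelope CPU headroom estimate, using the per-card CPU
# overheads quoted in the post (illustrative figures, not measurements).

def cpu_load_percent(cards, per_card_percent):
    """Total CPU utilisation if each GPU adds a fixed CPU overhead."""
    return cards * per_card_percent

# Two GTX 480s are said to use ~33% of an i7-920, so one is ~16.5%.
gtx480_each = 33 / 2

# Six Fermi cards would then saturate the CPU almost exactly:
print(cpu_load_percent(6, gtx480_each))   # 99.0

# Largest number of cards that still fits under 100% CPU:
print(int(100 // gtx480_each))            # 6
```

The same helper with the GT240 figure (3.5% for a pair) shows why small cards leave so much headroom: `cpu_load_percent(2, 3.5 / 2)` is only 3.5%.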
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
But can you answer whether it's robertmiles or Lucid who's right? Is it possible to build an "affordable" supercomputer, consisting of 4x 4U rack mounts, one for a single- or dual-CPU i7 and 3 for 2-4 GTX GPUs each? If my "guess" is right, such a system would cost $8,000-12,000, and nobody says that you have to use it for crunching GPUGRID.net projects... Do you know if there's still prize money going to whoever finds the largest prime number, and if so, can CUDA help to find it, if putting together that "affordable" supercomputer is possible?
Joined: 4 Apr 09 · Posts: 450 · Credit: 539,316,349 · RAC: 0
On the list of things to investigate to get your supercomputer project off the ground, I would like to suggest that you look into how many GPUs in one system the NVidia drivers will support properly. Are there OS limits? How about motherboard BIOS limits? Talk to the people with multiple GTX295 cards and you will see they had to do unconventional things regarding drivers and BIOS.

skgiven ... I think that the current Linux version of the GPUGrid app is using up a full CPU core per GPU core. That's probably the single biggest reason I have not tried Linux yet. I have seen some fast runtimes, which always interest me, but I am just not willing to take that much away from WCG.

Thanks - Steve
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
It's still pretty raw, but this was an article I found from PC Perspective: http://www.pcper.com/article.php?aid=815&type=expert&pid=1 They showed and tested the Hydra 200 and the Adventure. There was a mention of folding@home.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Snow Crash, you're right. 6.04 is still running over 90% CPU time on Linux, but it is saving 50 min per task, which means 5 h rather than 5 h 50 min, or 16.5% better in terms of results/points. My GTX 260 brings back about 24,500 points per day, so under Linux I would get about 28,500 per day (~4,000 more). The quad CPU on my system only gets around 1,300 BOINC points (my GTX 260 does almost 20 times the work). So using Linux I would get 2,600 more points per day on that one system, assuming I could not run any other CPU tasks.

Liveonc, I would say that you would really have to talk to someone who has one to know what it can do - if the Lucid Hydra is even available yet? You would also, as Snow Crash said, need to look into the GPU drivers, other software (the application), and especially the motherboard's limitations. You would also need to be very sure what you want it for: crunching, gaming, rendering or display. I guess that a direct PCIe cable would allow the whole GPU box to function 'basically' like a single GPU device, and the Hydra is essentially an unmanaged GPU Layer 1 & 2 switch. The techs here might be able to tell you if it could at least theoretically work. Although I design and build bespoke systems and servers, this is all new and rather expensive kit. It is a bit of a niche technology, so it will be expensive and of limited use. That said, if it could be used for GPUGrid, I am sure the techs would be very interested in it, as it would allow them to run their own experiments internally and develop new research techniques.

For most, one Fermi in a system will be plenty! Two will be for the real hard-core enthusiast or gamer. For the rare motherboards that might actually support 3 Fermis you really are looking at a 1200W PSU (or 2 PSUs) in a very well ventilated tower system (£2000+).
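The Linux-vs-Windows throughput figure above follows directly from the quoted runtimes; here is the arithmetic spelled out, using only the numbers in the post:

```python
# Throughput gain from the runtimes quoted above: 5 h 50 min per task
# on Windows vs 5 h on Linux (figures from the post, not new data).
win_hours = 5 + 50 / 60          # 5 h 50 min
linux_hours = 5.0

speedup = win_hours / linux_hours - 1
print(f"{speedup:.1%}")          # 16.7% more tasks (and points) per day

daily_points_win = 24500
print(round(daily_points_win * (1 + speedup)))   # 28583, i.e. ~28,500
```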
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Sorry for asking too many questions that can't really be answered unless you actually had the thing. I'm just fascinated by the potential of the Hydra, and not just for use with GPUGRID.net; I myself "sometimes" like to play games on my PC. The fact that most of the GPUs used on GPUGRID.net are mainstream GPUs, and that Hydra allows mixing Nvidia with ATI cards, and Nvidia and ATI GPUs of different types, brought to mind someone in another thread who looked at a GTX275 with a GTS250 physics card. Hydra "might" allow mixing different GPU chips on the same card; it "might" also enable dual-GPU-chip cards with, for example, a Cypress and a Fermi, instead of putting the Hydra on the mainboard or going external. Also, Nvidia has abandoned the idea of Hybrid SLI GeForce Boost and decided to go with NVIDIA Optimus instead. If notebooks had a Hydra, they "might" be able to use both integrated and discrete graphics, instead of just switching between the two.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
Hydra is very interesting - it has unknown potential ;) I am sure many event organisers (DJs) would love to see it in a laptop, or a box that could be attached to a laptop, for performances! It could even become a must-have for ATI and NVidia alike when demonstrating new GPUs to audiences.

Back to Fermi. I was thinking about the difference between DDR3 and GDDR5. In itself it should mean about a 20 to 25% difference. So with improved Fermi architecture (20%) and slightly better clocks (~10%), I think the GTX480 will be at least 2.6 times as fast as a GTX295, but perhaps 2.8 times as fast. I think they are keeping any chips with 512 working shaders aside for monster dual cards. Although the clocks are not likely to be so high, a card that could do 4.5 times the work of a GTX295 would be a turn-up for the books. Should any lesser versions of Fermi turn up with DDR3, for whatever reason, avoid at all costs!
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
Hydra: it's not there yet for games; they still have to sort out the software side. Load balancing similar GPUs is difficult enough (see SLI and Crossfire), but with very different GPUs it becomes even more challenging. They'd also have to teach their software how crunching with Hydra should work.

Back to Fermi: there won't be any DDR3 GF100 chips - that would be totally insane. Really bad publicity, and still a very expensive board that people would hardly buy at all. Manufacturers play these tricks a lot with low-end cards (which are mostly bought by people who don't know / care about performance), and sometimes mid-range cards.

"I was thinking about the difference between DDR3 and GDDR5. In itself it should mean about a 20 to 25% difference."

No, it doesn't. If you take a well-balanced chip (anything else doesn't get out of the door at the mid to high end range anyway) and increase its raw power by a factor of 2, you'll get anything between 0 and 100% as a speedup, depending on the application. In games, probably more like 30 - 70%. If you double raw power and double memory bandwidth (and keep relative latency constant), then you'll see a 100% speedup across all applications. The point is: higher memory speed doesn't make you faster by definition, because the memory bandwidth requirements scale with performance. So if, for example, Fermi got 3 times as much raw crunching power as GT200 and only 2 times the bandwidth, this is not going to speed things up proportionally, regardless of the memory being GDDR5 or whatever. However, Fermi is more than just an increase of raw power: there's also the new caching system, which should alleviate the need for memory bandwidth somewhat, but doesn't change the fundamental issue.

MrS

Scanning for our furry friends since Jan 2002
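MrS's point, that speedup is capped by whichever resource scales least, can be illustrated with a toy Amdahl-style model. The 50/50 split between compute-bound and bandwidth-bound time is an assumed figure for illustration, not a measured Fermi/GT200 number:

```python
# Toy model of the bandwidth argument above: part of the runtime scales
# with raw compute, part with memory bandwidth.

def effective_speedup(compute_scale, bandwidth_scale, bw_fraction):
    """Overall speedup when bw_fraction of the time is bandwidth-limited."""
    compute_part = (1 - bw_fraction) / compute_scale
    bandwidth_part = bw_fraction / bandwidth_scale
    return 1 / (compute_part + bandwidth_part)

# 3x raw compute but only 2x bandwidth: well short of 3x overall.
print(round(effective_speedup(3.0, 2.0, 0.5), 2))   # 2.4

# Scale both resources by 2x and you get the full 2x, as MrS says.
print(effective_speedup(2.0, 2.0, 0.5))             # 2.0
```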
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
"I think the GTX480 will be at least 2.6 times as fast as a GTX295 but perhaps 2.8 times as fast."

Where did you get that from? AFAIK from various forums, here are some "facts":

- the GTX480 and GTX470 are faster than the GTX285 by 40% and 25% respectively
- the GTX480 is weaker than the 5870 in Vantage
- overclocking headroom is really poor and at the same time makes the card "critically hot"
- VERY noisy
- if there are enough good 512-core chips, they will be for Tesla only
- Fermi 2 will be no earlier than summer 2011, most probably fall 2011, if they start redesigning today
- there are plans for a GTX495. BUT: it will be based on the 275 chip (retail name GTX470, with 448 cores), not on the 375 (GTX480 - 480 cores). Another problem is PCI certification (300W), which will be really hard to meet. Power consumption is going to be "mind blowing".

Sure, let's wait a week and see whether all of these are true or not.
robertmiles · Joined: 16 Apr 09 · Posts: 503 · Credit: 769,991,668 · RAC: 0
"Are you 100% sure about this? Lucid claims that:"

Is who 100% sure about what? Looks like the GPUGRID project scientists and programmers need to say more about just how fast the CPU-GPU communications need to be, and how powerful a CPU is needed to keep up with the needs of two or four GTX480s.
GDF · Joined: 14 Mar 07 · Posts: 1958 · Credit: 629,356 · RAC: 0
"Are you 100% sure about this? Lucid claims that:"

It does not matter at all. The code always runs on the GPU, apart from I/O.

gdf
liveonc · Joined: 1 Jan 10 · Posts: 292 · Credit: 41,567,650 · RAC: 0
Well, I "guess" that maybe after Lucid sorts itself out, it "might" just be the One Chip to rule them all, One Chip to find them, One Chip to bring them all and in the darkness bind them, in the Land of Mordor where the Shadows lie. ;-) With that said, I hope the Orcs get it before the Hobbits destroy it. BTW, I read somewhere that Intel has a stake in Lucid.
skgiven · Joined: 23 Apr 09 · Posts: 3968 · Credit: 1,995,359,260 · RAC: 0
I bet it's a pointy stake with a lawyer behind it. ATI and NVidia will hardly be bending over backwards to make their kit work with Hydra then!
Joined: 17 Aug 08 · Posts: 2705 · Credit: 1,311,122,549 · RAC: 0
"AFAIK from various forums, here's some 'facts': ..."

I second that: reserving fully functional chips for ultra-expensive Teslas is a smart move (if you don't have many of them ;)

Regarding the performance estimate: I don't doubt someone measured this with some immature hardware and driver, but I don't think it's going to be the final result. Consider this: when ATI went from DX10 to DX11 they required 1.10 times as many transistors per shader (2200/1600 : 1000/800) and got 11% less performance per shader per clock (see 4870 1 GB vs. 5830). If the 40% for the GTX480 versus the GTX285 were true, that would mean nVidia spent 1.07 times as many transistors per shader (3200/512 : 1400/240) and got 0.74 times as much performance per shader per clock [140%/(480*1400) : 100%/(240*1476)]. nVidia's strength so far has been design, so I don't think they'd screw this one up so badly, especially since they already screwed up manufacturing and dimensioning.

MrS

Scanning for our furry friends since Jan 2002
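The transistor-per-shader ratios above check out arithmetically; here is the same arithmetic spelled out. All figures are the ones quoted in the post (transistor counts in millions):

```python
# Reproducing the ratios in the post. Transistor counts (millions) and
# shader counts are the post's own figures, not independently verified.

# ATI, DX10 -> DX11: transistors per shader grew by ~1.10x.
ati_ratio = (2200 / 1600) / (1000 / 800)
print(round(ati_ratio, 2))    # 1.1

# nVidia, GT200 -> GF100: transistors per shader grew by ~1.07x.
nv_ratio = (3200 / 512) / (1400 / 240)
print(round(nv_ratio, 2))     # 1.07

# Performance per shader per clock implied by the rumoured +40%:
perf_ratio = (1.40 / (480 * 1400)) / (1.00 / (240 * 1476))
print(round(perf_ratio, 2))   # 0.74
```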
Joined: 29 Aug 09 · Posts: 175 · Credit: 259,509,919 · RAC: 0
I understand what you are talking about. But remember: this time nvidia changed the "dispatching scheme" and is now sending cards and drivers directly themselves, not via the manufacturers, as before. In fact, nvidia has already sent cards and the latest driver to some reporters close to them, and these results came from one of them. Frankly speaking, I personally expected, and nvidia promised, way better performance and frequencies, so if this is true I'm going to stay on my GTX275; maybe I'll try to get a 2nd one for cheap. But no Fermi for me until Fermi 2 is available, sorry...
©2025 Universitat Pompeu Fabra