Geforce 10 / Pascal app coming soon
Joined: 11 Jan 13 | Posts: 216 | Credit: 846,538,252 | RAC: 0

Just checking in. I have a 1080 and 1070 waiting for a Windows app. They've been busy on Einstein in the meantime.

Joined: 12 Nov 07 | Posts: 696 | Credit: 27,266,655 | RAC: 0

> Just checking in. I have a 1080 and 1070 waiting for a Windows app. They've been busy on Einstein in the meantime.

Still waiting on the release of CUDA 8.5.

Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 1

> Just checking in. I have a 1080 and 1070 waiting for a Windows app. They've been busy on Einstein in the meantime.

This is worse than I feared. A quick Google search for "CUDA 8.5" gave the following two results as the best match:
http://www.cudabrand.com/cuda-8-5-titanium-bonded-bent-needle-nose-pliers.html
http://www.cudabrand.com/cuda-8-5-titanium-bonded-dehooker.html
which left me unimpressed.

Joined: 23 Nov 08 | Posts: 1112 | Credit: 6,162,416,256 | RAC: 0

> Still waiting on the release of CUDA 8.5.

Zoltan, there has to be a "catch and release" joke in there somewhere...

Joined: 30 Jul 14 | Posts: 225 | Credit: 2,658,976,345 | RAC: 0

HAH!

Joined: 25 Sep 13 | Posts: 293 | Credit: 1,897,601,978 | RAC: 0

> Just checking in. I have a 1080 and 1070 waiting for a Windows app. They've been busy on Einstein in the meantime.

Speculation: CUDA 8.5 becomes publicly available near the (GP107) GTX 1050 release (October), or further down the road when the (GP102) GTX 1080 Ti appears in late 2016 or early 2017.

http://ambermd.org/gpus/benchmarks.htm

The CUDA 8.0 AMBER benchmarks show what we're missing in potential throughput with Pascal. For example, the GTX 1070's single-job throughput equals that of a Maxwell Titan X or GTX 980 Ti.

Joined: 23 Dec 09 | Posts: 189 | Credit: 4,798,881,008 | RAC: 343

My strategy seems to work: I'm holding off on buying a latest-generation Nvidia GPU until there is a working app for this generation on my priority project, GPUGRID. Meanwhile the price of these cards is moving in the right direction… down :-) Lesson learned from the GTX 670 introduction…

Joined: 20 Jul 16 | Posts: 3 | Credit: 479,881,429 | RAC: 0

I must be missing an important detail, but is there a reason why the GPUGRID code for NVIDIA GPUs will not run on the new Pascal cards? Is the hardware not backwards compatible enough to run older code, the way other projects such as Einstein or POEM have been running on the 1080/1070 without the release of CUDA 8.0/8.5 support?

Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 1

> ...is there a reason why the GPUGRID code for NVIDIA GPUs will not run on the new Pascal cards?

Judging only by the error messages the app throws, it uses different libraries for different Compute Capabilities (to maximize efficiency, I guess), and the one necessary for CC 6.1 is simply not present in the CUDA 6.5 code (obviously, because CC 6.1 did not exist when CUDA 6.5 came out).

> Is the hardware not backwards compatible enough to run older code...

They are backwards compatible, and they could run older code.

> ...the way other projects such as Einstein or POEM have been running on the 1080/1070 without the release of CUDA 8.0/8.5 support?

The CUDA 6.5 code could probably be made to work on the Pascals, but if you do that, the app could miss out on a significant part of the hardware improvements, and thus be slower and/or less energy efficient than new code.

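For reference, the Compute Capability a card reports can be queried with the standard CUDA runtime API. The minimal sketch below is not GPUGRID's code, just an illustration: it lists each device's CC, and a GTX 1070/1080 shows up as 6.1, which a CUDA 6.5 build has no kernels for.

```cpp
// Minimal sketch (not GPUGRID code): list each CUDA device's compute capability.
// Pascal cards report CC 6.0/6.1, which a toolkit as old as CUDA 6.5 cannot target.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```
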
Joined: 8 Jun 14 | Posts: 18 | Credit: 19,804,091 | RAC: 0

I would say it's better to be less than 100% efficient but to have at least something.

Joined: 20 Jan 09 | Posts: 2380 | Credit: 16,897,957,044 | RAC: 1

> I would say it's better to be less than 100% efficient but to have at least something.

A fellow cruncher called the CUDA 6.5 client "ancient" in another thread. Other projects follow the "have at least something" mentality, resulting in CUDA 3.2, CUDA 5.0, and CUDA 5.5 apps. What would that fellow call those clients? This project has chosen the other way: to have a state-of-the-art app (it uses proprietary code provided by Acellera, so it has to be competitive). Having clients that serve both mentalities would require a doubled development team, but it's gratifying to have at least one such team. There will always be a group of people who can pick on the way a project goes, but I'm quite satisfied with this project's direction; the pace could be faster, but that is limited by a third party (NVidia).

Joined: 1 Mar 10 | Posts: 147 | Credit: 1,077,535,540 | RAC: 0

Hi everybody! Excuse my "newbie" question, but would it be possible for all apps to be written with OpenCL in mind instead of proprietary CUDA?

Lubuntu 16.04.1 LTS x64

Joined: 3 Nov 15 | Posts: 38 | Credit: 6,768,093 | RAC: 0

> Excuse my "newbie" question, but would it be possible for all apps to be written with OpenCL in mind instead of proprietary CUDA?

It would be ideal to have all applications in OpenCL, but the acemd application is written in CUDA and there is no intention to rewrite it in OpenCL; that would not be a trivial task, and part of its code is not open, so it isn't even possible. On the other hand, some benchmarks show that OpenCL is a bit slower on NVIDIA cards (citation needed).

Joined: 1 Mar 10 | Posts: 147 | Credit: 1,077,535,540 | RAC: 0

Thanks, I understand your answer, but the "bit slower" could be overlooked if AMD GPUs could also crunch these WUs, because more GPUs crunching at the same time means the batch is finished sooner!

I'm planning to buy a GTX 10 series card. On Lubuntu I have to install a driver from ppa:graphics-drivers/ppa, because the current Xenial distribution is limited to version 361.42, which does not support the GTX 10 series. After that, I'll have to install the latest release of CUDA, either from this PPA (currently only 7.5 is available in Xenial) or from Nvidia.

Lubuntu 16.04.1 LTS x64

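As a quick sanity check after such an install, the CUDA runtime can report both the CUDA version the installed driver supports and the runtime version a binary was built against. The sketch below is generic example code, not anything GPUGRID ships:

```cpp
// Minimal sketch: after installing a driver and toolkit, print which CUDA
// version the installed driver supports and which runtime this binary links
// against. (The stock Xenial 361.42 driver mentioned above is too old for
// the GTX 10 series; CUDA 8.0 reports as 8000 here, i.e. 8.0.)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // CUDA version supported by the installed driver
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA runtime this binary was built against
    std::printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                driverVersion / 1000, (driverVersion % 1000) / 10,
                runtimeVersion / 1000, (runtimeVersion % 1000) / 10);
    return 0;
}
```
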
Joined: 30 Jul 14 | Posts: 225 | Credit: 2,658,976,345 | RAC: 0

Two things. 1) They have been testing a Python app to run on the GPU so it can be ported to AMD GPUs. If successful, this MAY also lead to a port to native Intel chipset GPUs as well. 2) distributed.net has an OpenCL and a CUDA client, and the OpenCL one runs a lot more keys per second than the CUDA client on the same NVIDIA GPU. That may come down to a difference in how dnetc works versus a true GPU compute client, or it may be a fair benchmark; I am not sure, but that is what I have found when experimenting.

1 Corinthians 9:16 "For though I preach the gospel, I have nothing to glory of: for necessity is laid upon me; yea, woe is unto me, if I preach not the gospel!" Ephesians 6:18-20, please ;-) http://tbc-pa.org

Joined: 23 May 09 | Posts: 121 | Credit: 400,300,664 | RAC: 14,406

Maybe this could be of interest: https://software.intel.com/en-us/blogs/2016/09/08/intel-distribution-for-python

Joined: 30 Jul 14 | Posts: 225 | Credit: 2,658,976,345 | RAC: 0

Yup. That would prolly be it. :-)

Joined: 1 Mar 10 | Posts: 147 | Credit: 1,077,535,540 | RAC: 0

> Maybe this could be of interest:

Hope that the Anaconda GPUGRID distribution is not only for Intel!

Lubuntu 16.04.1 LTS x64

Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0

@Jihal: the last time they tried to port their code to OpenCL was probably 2-3 years ago. At that point the app was really slow. If I remember correctly, the AMD GPUs were about a factor of 10 behind the "gaming equivalent" nVidias running CUDA code. Some libraries were clearly not well optimized. But worst of all, they couldn't make it stable, i.e. the app would crash randomly. At that point they stopped these efforts, after having spent a few months on it, I guess. Things will have improved by now, but I can understand if they're not keen on trying OpenCL again.

@caffeineyellow5: the general perception is that OpenCL is mostly inferior to CUDA. But this doesn't mean it always has to be like that. Depending on how much effort you put in, you can have the best programming tools and still screw up and write bad code. I'm not saying that's what DNETC is doing, but their code is rather simple (since their task is very easy and regular), so one hand-optimized version could easily outperform the other, despite the latter having far better libraries (which would matter in more complex problems). And if you consider CPUs, you don't argue "an Intel Skylake is good for Fortran, but if you want to run C you should use an AMD", for good reason. The libraries and compilers have some influence, but it's mostly the code itself that governs which hardware it fits well, e.g. "branchy game AI loves CPUs with a low branch-misprediction penalty and good branch prediction" or "in-memory big-data analysis loves memory bandwidth".

@Topic: I suspect the Maxwell code would work very well for Pascal, considering the minor changes to the SM logic blocks in this generation. However, the project team currently seems to be limited by scientific manpower rather than computational power.

MrS
Scanning for our furry friends since Jan 2002

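On the forward-compatibility point: a CUDA binary that embeds PTX for an older architecture can be JIT-compiled by the driver for a newer one such as Pascal, whereas a binary carrying only older cubins cannot. The sketch below is a generic SAXPY example with illustrative file names, not the acemd build, and only shows the mechanism:

```cuda
// Generic forward-compatibility sketch (not GPUGRID's build). Embedding PTX
// for an older architecture lets the driver JIT-compile it for Pascal, e.g.:
//   nvcc -gencode arch=compute_52,code=compute_52 saxpy.cu -o saxpy
// Embedding only sm_52 cubins (code=sm_52) would NOT run on a CC 6.1 device.
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    std::printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```
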
Joined: 30 Jul 14 | Posts: 225 | Credit: 2,658,976,345 | RAC: 0

So with the release of the CUDA 8 toolkit, can we assume support for the new cards is forthcoming very shortly?

https://developer.nvidia.com/cuda-toolkit