Message boards : Graphics cards (GPUs) : Mixing GTX 10X0 GPUs with older GPUs in the same system

Author Message
Beyond
Message 46994 - Posted: 18 Apr 2017 | 3:12:17 UTC
Last modified: 18 Apr 2017 | 3:14:18 UTC

>> DO NOT MIX GTX 10X0 GPU with older GPUs in the same system

I assume that with the new app we can mix 10xx and older GPUs since they're all now cuda80...

eXaPower
Message 47019 - Posted: 18 Apr 2017 | 16:19:49 UTC - in response to Message 46994.

>> DO NOT MIX GTX 10X0 GPU with older GPUs in the same system

I assume that with the new app we can mix 10xx and older GPUs since they're all now cuda80...

Yes - mixing GPUs (with the 9.18 app version) comes with a caveat: don't suspend or resume a WU on any GPU other than the one it started on.
Resuming / suspending on the same GPU that began a WU is currently okay.
My GTX 1080/1070/1060/970/970 host threw a bunch of errors when Pascal or Maxwell workunits were suspended or resumed on a GPU other than the original one.
Otherwise, WUs run without issues.
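For anyone who wants to sidestep the cross-GPU suspend/resume problem entirely, one possible workaround (a sketch, not official project advice) is BOINC's cc_config.xml exclude_gpu option, which keeps a project's work off a specific device so its WUs always start and resume on the same card. The device number below is just an example; check your event log for the actual numbering:

```xml
<cc_config>
  <options>
    <!-- Keep GPUGRID work off device 1, so each WU
         stays on the GPU it originally started on. -->
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

Place the file in the BOINC data directory and re-read the config files (or restart the client) for it to take effect.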

BTW: CUDA 8 ACEMD performance on PCIe 2.0 x1 is a lot faster than with the previous CUDA 6.5 WUs.


Erich56
Message 47027 - Posted: 19 Apr 2017 | 3:29:00 UTC - in response to Message 47019.

BTW: CUDA 8 ACEMD performance on PCIe 2.0 x1 is a lot faster than with the previous CUDA 6.5 WUs.

Hm, my experience is exactly the opposite. Crunching time is now considerably longer :-(

BelgianEnthousiast
Message 47032 - Posted: 19 Apr 2017 | 16:51:13 UTC - in response to Message 47027.

Same here,

Now WUs are running fine on both cards (Titan & 1070), but where it used to take
around 9 hours to complete (at 65% load), it now takes 15 hours on the 1070 and nearly 24 hours on the Titan, still both at 65% load...

Was this the work-around where efficiency was sacrificed in favor of being able to crunch in a mixed environment? (not being sarcastic here! :-))

skgiven (Volunteer moderator)
Message 47033 - Posted: 19 Apr 2017 | 17:46:49 UTC

The new app allows mixing of new and older generations of GPUs.
For example, a GTX 970 can now be used alongside a GTX 1060 in a Windows 10 system.

The thread called DO NOT MIX GTX 10X0 GPU with older GPUs in the same system has been retired.

Will move the last few posts to here...

Erich56
Message 47045 - Posted: 20 Apr 2017 | 5:40:02 UTC - in response to Message 47032.

BelgianEnthousiast wrote:

Same here,

Now WUs are running fine on both cards (Titan & 1070), but where it used to take
around 9 hours to complete (at 65% load), it now takes 15 hours on the 1070 and nearly 24 hours on the Titan, still both at 65% load...

Was this the work-around where efficiency was sacrificed in favor of being able to crunch in a mixed environment? (not being sarcastic here! :-))


Maybe Matt could look into this.
As I said before, on my three Windows 10 PCs (one with a GTX 970 and two with GTX 750 Tis, all updated to the latest driver, 381.65), I also notice considerably longer crunching times than before.
I can hardly imagine this was intended to happen.

I suspect this is happening to virtually all crunchers, though most of them may not have noticed it.

eXaPower
Message 47047 - Posted: 20 Apr 2017 | 11:41:34 UTC - in response to Message 47045.

BelgianEnthousiast wrote:
Same here,

Now WUs are running fine on both cards (Titan & 1070), but where it used to take
around 9 hours to complete (at 65% load), it now takes 15 hours on the 1070 and nearly 24 hours on the Titan, still both at 65% load...

Was this the work-around where efficiency was sacrificed in favor of being able to crunch in a mixed environment? (not being sarcastic here! :-))


Maybe Matt could look into this.
As I said before, on my three Windows 10 PCs (one with a GTX 970 and two with GTX 750 Tis, all updated to the latest driver, 381.65), I also notice considerably longer crunching times than before.
I can hardly imagine this was intended to happen.

I suspect this is happening to virtually all crunchers, though most of them may not have noticed it.

Are any of the GPUs connected at PCIe 2.0 x1? Or x4/x8/x16? Your hosts all have typical runtimes for their card and CPU combos.

Runtimes naturally vary with a task's step length and atom count. Currently, longer WUs are going around - PABLO_contact_goal_KIX_CMYB or ADRIA_FOLDGREED10_crystal_ss_contacts_100_ubiquitin.

@Erich56: your GTX 980 Tis are top-tier fast. My 1080/1070 show below-average-tier runtimes because the CPU is bogging the GPUs down by about 10%.
The CPU doesn't slow up the GTX 1060, though.
I don't see any WU indicating a bottleneck of some sort on your hosts.

Erich56
Message 47050 - Posted: 20 Apr 2017 | 12:55:58 UTC - in response to Message 47047.
Last modified: 20 Apr 2017 | 12:57:39 UTC

@Erich56: your GTX 980 Tis are top-tier fast. My 1080/1070 show below-average-tier runtimes because the CPU is bogging the GPUs down by about 10%.
The CPU doesn't slow up the GTX 1060, though.
I don't see any WU indicating a bottleneck of some sort on your hosts.

@eXaPower: I'm not complaining about the good job my two GTX 980 Tis are doing in host 329650 with Windows XP :-)

However, I see a marked efficiency deterioration, particularly with host 205584 - Windows 10 with a GTX 750 Ti connected at PCIe 1.1.
Since the recent crunching software change (plus updating the driver to 381.65), it seems to take considerably longer to crunch a given task, and the card is even more sensitive to overclocking than it was before (see here: http://www.gpugrid.net/forum_thread.php?id=4487).

Also, I have noticed (as someone else here in the forum has, too) that since these recent changes, crunching occasionally comes to a halt for an undetermined period of time - CPU and GPU usage drops to zero, and after a while crunching resumes. This definitely did NOT happen before.

In other words: for smaller GPUs and/or older systems, the recent change seems to have had negative results.

BelgianEnthousiast
Message 47122 - Posted: 27 Apr 2017 | 12:27:15 UTC - in response to Message 47047.

@eXaPower,
Both cards are in PCIe 3.0 x16 slots.

I just noticed that my Titan card has been crunching for 20 hours already and indicates another 6 h 15 min to go... a total of 26 hours? That's an increase of about 14 hours compared to its former performance (before I added the 1070 card).
It's currently working on e7s7_e3s5p0f321-ADRIA_FOLDGREED50_crystal_ss_contacts_100_ubiquitin_1-1-2-RND2200-0.

Any ideas?

eXaPower
Message 47124 - Posted: 27 Apr 2017 | 16:07:35 UTC - in response to Message 47122.
Last modified: 27 Apr 2017 | 16:42:58 UTC

@eXaPower,
Both cards are in PCIe 3.0 x16 slots.

I just noticed that my Titan card has been crunching for 20 hours already and indicates another 6 h 15 min to go... a total of 26 hours? That's an increase of about 14 hours compared to its former performance (before I added the 1070 card).
It's currently working on e7s7_e3s5p0f321-ADRIA_FOLDGREED50_crystal_ss_contacts_100_ubiquitin_1-1-2-RND2200-0.

Any ideas?

Have you checked the true PCIe link speed with any one of these: GPU-Z / NVIDIA Inspector / SIV?
Suggestion: on the next system restart, go into the BIOS menu and check the PCIe settings.
If the slot setting is AUTO, change it to PCIe 3.0.
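If you'd rather not install a separate tool, the NVIDIA driver's own `nvidia-smi` utility can report the current link generation and width on both Windows and Linux. A small sketch (the parser function name is my own, not part of any tool) that runs the query and parses its CSV output:

```python
import subprocess

def pcie_link_status(csv_text):
    """Parse `nvidia-smi --format=csv,noheader` PCIe-link output
    into (generation, width) tuples, one per GPU."""
    links = []
    for line in csv_text.strip().splitlines():
        gen, width = [field.strip() for field in line.split(",")]
        links.append((int(gen), int(width)))
    return links

def query_gpus():
    # Requires an installed NVIDIA driver; nvidia-smi ships with it.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=pcie.link.gen.current,pcie.link.width.current",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True).stdout
    return pcie_link_status(out)
```

Note that the *current* link can legitimately drop to a lower generation when the GPU is idle; check it while a WU is actually running.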

It's possible the Titan or the slot's data pin(s) went bad. (PCIe will keep running with a bad data pin as long as the link isn't x1. If x8 is bad, then x4 is the next option.)

Another possible slowdown culprit: the X99 CPU's 40-lane PCIe controller might be providing a direct connection for the GTX 1070 but not the Titan.
The X99 chipset's PCIe 2.0 x8 uplink could then be supplying only PCIe 2.0 x4 to the Titan.

Your CPU handles 8, or at least 5, direct x8 PCIe 3.0 GPU connections.
Z77/87/97/170/Z270 handle only 3 CPU PCIe 3.0 GPU connections (x8/x4/x4), though the Z170/Z270-series chipsets can carry 5 GPU connections at PCIe 3.0 x4 via the chipset. The upcoming X299 platform has a total of up to 68 PCIe 3.0 lanes (24 from the chipset). The currently released Ryzen (X370 chipset) has a total of 24 PCIe 3.0 lanes.
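The lane arithmetic above can be sanity-checked with a toy sketch (a deliberate simplification I wrote for illustration: it just sums requested widths against the CPU's lane budget, whereas real boards route lanes in fixed x16/x8/x4 groups):

```python
def fits_lane_budget(cpu_lanes, requested_widths):
    """True if the requested per-GPU link widths fit within the
    CPU's PCIe lane budget (simplified: widths just sum)."""
    return sum(requested_widths) <= cpu_lanes

# A 40-lane X99 CPU can feed two GPUs at x16 directly...
assert fits_lane_budget(40, [16, 16])
# ...but a third x16 request exceeds the budget, so the board
# must drop slots to x8 or route a card through the chipset.
assert not fits_lane_budget(40, [16, 16, 16])
```

This is why adding a second or third card can silently demote an existing GPU's slot, which matches the Titan slowdown described above.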

