zombe on 15/6/2018 at 07:21
Actually, since it is somewhat relevant to the topic: what are the rules of the hardware game nowadays? A few observations:
According to the Steam hardware survey, Nvidia cards are worryingly dominant (75%!). This is horrible for consumers. Has the shit hit the fan already, or are there signs it is about to? Or has it had any effect at all yet?
Bitcoin mining has wrecked the market hard, it seems. I bought a super-low-end card (45€, the lowest card they had) a year+ ago (long story) and now the exact same card sells for 53€. Since I presume no sane miner will buy that crap (even considering its low power consumption, I would expect its lack of processing power to nullify that advantage ... question mark), I guess the rise is caused by higher-end cards becoming too expensive for too many people, which puts pressure on the low-end crap. If the absolute crap has risen 18% (more if you count inflation) - any idea how much the high end has suffered? Is there any noteworthy GPU metric in regards to miner price pressure?
Sulphur on 15/6/2018 at 07:46
The prices aren't that bad right now for middle to high-end parts as mining pressure has slowly eased off. You're likely seeing the effects from the previous five months of bullshit that's slowly fading out.
nvidia's always had the majority share in Steam surveys, though I seem to recall a time long ago when its lead wasn't that massive. Part of the problem is AMD parts are pretty scarce and they're really good for crypto mining (apparently), so the Vega 56 and 64 are pretty difficult to find at anything near a standard price in my neck of the woods - if they're available at all in the first place. They are (were? I guess ASICs have rolled out now) good for ETH miners and consequently have a ridiculous markup.
They've also been pretty low-key in competing with nvidia, content to trade blows rather than really outperform, so we've been at a stalemate in terms of performance for the past few years: AMD's generally better at DX12/Vulkan but not so great at DX11, while nvidia is a bit slower with those newer APIs but outpaces AMD in DX11.
This is going to change a bit come 2020 or 2021 if Intel's able to deliver a compelling discrete GPU product. They've done this dance before, but this time it seems they're really bringing some commitment to getting a competitor to the market. As always with these things, we'll see.
Nameless Voice on 16/6/2018 at 01:00
I generally prefer to buy AMD, since Nvidia are pretty evil, but there's just no contest these days. I had waited for Vega to come out before buying my current card, but in the end I went with Nvidia, because Vega basically gave you worse performance and vastly more power consumption/heat for the same price. I just couldn't justify buying one. Maybe they've improved now (I haven't been keeping up to date), but I somehow doubt it.
zombe on 16/6/2018 at 21:13
Quote Posted by Sulphur
AMD's generally better at DX12/Vulkan but not so great at DX11, while nvidia is a bit slower with those newer APIs but outpaces AMD in DX11.
Is that actually true? I do not remember any comparisons to that effect. I do remember a lot of people misreading an OGL-vs-Vulkan comparison of the two cards, where AMD showed considerably better performance gains than Nvidia - and assuming it meant that AMD is better at Vulkan than Nvidia, which it does not mean at all. It is considerably easier to make a good driver for Vulkan than for OGL (DX is out of my turf, but afaik DX11 and earlier were quite difficult too; their only saving grace was dominance, and consequently AMD trying harder. DX12 is very similar to Vulkan, so the same rules should apply). AMD's OGL has famously been craptastic at best and absolute shit at worst - the reason why I switched to Nvidia: I wanted to do OGL, and that is what ATI/AMD just could not deliver (you cannot try NEW things when the GPU either cannot do them or is too god-damn buggy to maintain one's sanity). Mantle might have given AMD a slight initial boost, but I doubt it (for reasons).
My OGL days are past - nowadays I play with Vulkan. I have to say that AMD has inherited a feel of weirdness/restrictiveness/chunkiness from the OGL times, like its very restrictive transfer queue (beyond the usability breaking point for me). In general the Nvidia implementation seems to have more stuff it can silently ignore (within spec, of course - i.e. not to be confused with Nvidia's assholeness in bending/fucking the OGL specs). Etc.
I cannot see how anyone could mess up Vulkan drivers - even Intel does not seem to have any problems. My two cents nowadays: if you use Vulkan or DX12, then driver quality is not a real concern anymore and the hardware is all that matters.
Quote Posted by Sulphur
... if Intel's able to deliver a compelling discrete GPU product.
Discrete GPU? Have they done that before? Never heard of it. I thought all they made were the GPUs built into CPUs - which, from my experience, are terrible even for office work. I hope they manage to become relevant to gaming somehow.
Quote Posted by Nameless Voice
I generally prefer to buy AMD, since Nvidia are pretty evil, but there's just no contest these days.
Yep. Well, my AMD-preference days ended back when they were still ATI. But the rest ... yeah, that about sums it up.
Sulphur on 17/6/2018 at 05:48
Quote Posted by zombe
Is that actually true? I do not remember any comparisons to that effect. ... My two cents nowadays: if you use Vulkan or DX12, then driver quality is not a real concern anymore and the hardware is all that matters.
Vulkan's based on Mantle, which AMD created. Anyway, take a look-see for yourself: https://www.tomshardware.com/reviews/amd-radeon-rx-vega-64,5173-8.html
Quote:
Discrete GPU? Have they done that before? Never heard of it. I thought all they made were the GPUs built into CPUs - which, from my experience, are terrible even for office work. I hope they manage to become relevant to gaming somehow.
They've been threatening to before, but last time they disappeared up their own real-time raytracing butthole - it was called Larrabee, and it was cancelled. This time they seem far more serious, since they have Raja Koduri, who was AMD's Radeon Senior VP and Chief Architect. Here's a blurb: https://www.anandtech.com/show/12964/intels-first-discrete-gpu-set-for-2020 - the timeline may be a little too ambitious, but in 2021 the GPU landscape may be a little bit different - just not for gaming. We'll see.
zombe on 17/6/2018 at 10:25
Quote Posted by Sulphur
Vulkan's based on Mantle, which AMD created.
Like I said - I doubt it gave them any advantage beyond a month or two. Now, years (well, barely) later, it makes no difference. Mantle was made because all the alternatives (OGL, DX) were terrible at the time (driver overhead; no way for an app to say what it actually wants to do) and AMD smelled an opportunity, which they took (especially as AMD's implementations of those alternatives were really, really terrible in comparison to Nvidia's [less so for DX, for the reasons already mentioned]). Mantle removed that mess - and Vulkan/DX12 did the same, BECAUSE they could: the hardware is more or less the same everywhere, and the vast majority of the abstractions are just pointless hindrances for everyone (a unification of sorts - a process I remember people noting 10+ years ago, and it never stopped).
A thin and direct driver layer cannot have favorites. AMD does like to use Mantle in its Vulkan ads - because the average Tom does not understand any of it anyway. It is meaningless.
Nvidia is not in the business of paying favors to competitors - quite the opposite, they are the biggest assholes around. And Vulkan is not Mantle either - it is a common hardware abstraction which, most notably, contains core (for Vulkan) functionality for tile-based devices (phones etc.). Something that is completely absent from Mantle.
Could not find anything even remotely relevant there :/. I am not terribly familiar with that mammoth of a site - maybe I do not know where to look.
All I found was this quote: "Although Nvidia's performance under Vulkan is much improved, AMD continues to dominate in Doom."
The first part of that says absolutely nothing about the Vulkan implementation quality of the two - to its credit, it does not pretend to. It is just a statement of fact, and exactly matches what one would expect to be the case. Even so, I could not find what the statement is based on (I guess it refers to the improvements from switching to Vulkan).
The second part - I have no idea what it is trying to say or what it is based on (or it is just too stupid for me to easily accept the idiocy it seems to portray). Could not find anything on the site to clarify what it is supposed to mean.
----------------
A few words about GPU performance.
It is a function of:
* GPU capabilities (inc. processing power)
* Driver capabilities and overhead (inc. host capabilities etc., which are not relevant here, so I omit them from now on)
* User overhead.
Let's call them G, D and U for short and give them weights. A few illustrative, out-of-my-ass-but-representative numbers (with a toy calculation after the list):
Ancient OGL: no-one cares.
Older OGL: G:5 D:21 U:3 (also known as dentistry using the anal approach)
Basically the driver has to literally reverse-engineer on the fly what actually needs to be done, and predict it ahead of time. A common side effect is that the GPU is bored out of its mind, as the driver/user cannot feed it enough work and it just idles around. In the past this did not matter, as the GPU was too slow to keep up anyway.
Newer OGL has improved a lot (if you use only the right stuff): G:13 D:9 U:2
AMD: G:12 D:12 U:2
Nvidia: G:13 D:8 U:2
Improved, but not completely fixed. If you tread carefully, you can feed the GPU reasonably well most of the time, while sacrificing a noticeable amount of CPU time (assuming you can afford to do so).
Vulkan/Mantle/DX12 remove a lot of the crap: G:15 D:2 U:1
AMD, Nvidia: essentially the same (AMD has more restrictions; gradually getting rid of them would help, I guess, but there is not much room for improvement either way. IIRC AMD had pretty poor parallelization for Vulkan [interleaving GPU work] ... I don't remember - it was not relevant for me at the time).
If your app is not driver/user-overhead bound, then none of this will help at all.
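To make that concrete, here is a toy calculation in C using the made-up weights above - assuming, purely for illustration, that a CPU-bound app's frame cost scales with D+U while G is untouched by the API switch:
```c
#include <stdio.h>

/* Toy model only: the illustrative weights from the list above, not
 * measurements. Assume a CPU-bound app's frame cost scales with driver
 * overhead (D) plus user overhead (U); G is untouched by the API switch. */
int main(void)
{
    const double vk_overhead = 2 + 1;   /* Vulkan/DX12:         D:2  U:1 */
    const double amd_ogl     = 12 + 2;  /* newer OGL on AMD:    D:12 U:2 */
    const double nvidia_ogl  = 8 + 2;   /* newer OGL on Nvidia: D:8  U:2 */

    printf("AMD    OGL -> Vulkan: %.1fx less overhead\n", amd_ogl / vk_overhead);
    printf("Nvidia OGL -> Vulkan: %.1fx less overhead\n", nvidia_ogl / vk_overhead);
    /* ~4.7x vs ~3.3x: the worse your OGL driver was, the bigger your
     * apparent "gain" from Vulkan - saying nothing about Vulkan quality. */
    return 0;
}
```
The worse the OGL driver overhead was, the bigger the apparent gain from switching - which, again, says nothing about the quality of either Vulkan driver.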
----------
Finding out the quality of a Vulkan/DX12 implementation is rather difficult, as what you want is to extract the weight of D from G+D+U when D is very low (and to scale those by the effects of pipeline bubbles). It is much easier to do for older OGL, as the D weight is huge - often to the point that you can directly measure the idle time of G (which should be, and usually is, zero or thereabouts on the newer APIs).
It is hard for Nvidia to improve in Doom (OGL vs Vulkan), as their OGL was not shit to begin with.
It is easy for AMD to improve in Doom, as their OGL was, and is, shit in comparison.
Ie. switching to Vulkan is expected to show small gains for Nvidia and big gains for AMD. Also: AMD's gains will naturally rise faster than Nvidia's as new GPUs come out, since OGL overhead hurts AMD more.
This is the core of the common misunderstanding (which always seems to involve Doom).
Quote Posted by Sulphur
... but last time they disappeared up their own real-time raytracing butthole - it was called Larrabee ...
Oh, right. Completely forgot that. That whole project was just perplexing.
I hope Intel gets something useful done this time.
Sulphur on 17/6/2018 at 10:27
Not talking about API quality or ease of implementation, zombe. Just raw hardware performance and benchmarks compared on a per-GPU, per-API level. Analysis of driver-level implementation is not something a hardware site would normally do, and I'm unaware of a site that actually does that, though I'd love to read up on it.
Thirith on 18/6/2018 at 07:51
Completely non-technical question concerning graphics cards: I was thinking that when the new Nvidia cards come out, I'd wait half a year or so and then get one. However, with the whole crypto-mining thing, how much of a risk is there that cards will be more or less unavailable at that point, or that the ones available will cost considerably more than they did at release?
Sulphur on 18/6/2018 at 08:36
Well, there's no easy answer to that one (that I can see). While mining's taken a tumble in recent times, and folks seem to have finally grokked exactly how volatile/speculative it is, the very nature of its open, decentralised philosophy means people are free to keep coming up with ASIC-resistant variants. nvidia has addressed the problem a little with dedicated mining cards, and by being the only place you can be certain sells cards at MRP, but that's no guarantee there won't be a constant tide of average joes baited by the crypto bubbles. The day nations decide on strong regulation for cryptocurrency is the day you'd see a massive drop in people wanting in, but when that's coming is uncertain.
I'd say if you plan on waiting for an upgrade, see how the pickup is when the new cards arrive. If stuff gets sold out within a few weeks or less (and it probably will if the new parts bring higher efficiency and lower TDP, since people would want to add to/refit their mining rigs), it's likely price drops will be difficult to come by in the following months.
I don't expect nvidia or AMD to combat mining in any real fashion, because they've been reaping the rewards of it so far. If you're lucky, supply and demand will get back to normal and you can eventually pick up third-party cards at a discount. Failing that, you should at least be able to get a vanilla card at MRP from your local nvidia site when they're in stock.
zombe on 18/6/2018 at 09:03
Ah, the sweet-sweet confusion.
Is it fair to say that what you meant to convey was something like: if one has a choice, then on AMD the new APIs should be used, as AMD's implementation of the earlier APIs is markedly worse than Nvidia's, which pays a significantly smaller penalty on the older APIs. In other words, if you have a choice between equally powerful AMD and Nvidia hardware, pick Nvidia, as it is either demonstrably better or merely equal (the unlikely worst case).
That would match this:
Quote Posted by Sulphur
AMD's generally better at DX12/Vulkan but not so great at DX11, while nvidia is a bit slower with those newer APIs but outpaces AMD in DX11.
My suggested version above and yours quoted here are compatible. However, they don't look like it - hence why I thought you meant specifically the implementation quality of the newer APIs.
Something I would like to see myself too. Especially for Vulkan (my turf) - there has been a lot of wiggling in both the AMD and Nvidia camps around it.
Basically, the early responses from Khronos/AMD/Nvidia to questions of the form "should I do A or B" were: "which one would you prefer?" - and to some degree that still holds. Ie. there is enough wiggle room for usage to dictate what should be done. For example: is it worth interleaving work across queues (Nvidia has 16; AMD and Intel have a max of 1 across all hardware - ie. they cannot do it anyway), or is it enough to rely on interleaving work on the same queue (the only option for AMD and Intel)? What difference does it make? How well can they interleave with present drivers?
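For the curious, seeing what your own driver exposes takes only a few lines of Vulkan. A minimal sketch (the function name is mine; it assumes a VkPhysicalDevice `phys` already picked via vkEnumeratePhysicalDevices, caps the stack array at 16 families, and omits error handling):
```c
#include <stdio.h>
#include <vulkan/vulkan.h>

/* List the queue families a physical device exposes - this is where the
 * "Nvidia: 16 queues in one family vs AMD/Intel: 1 graphics queue"
 * difference shows up. */
void print_queue_families(VkPhysicalDevice phys)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, NULL);

    VkQueueFamilyProperties props[16];  /* cap for the stack array */
    if (count > 16) count = 16;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, props);

    for (uint32_t i = 0; i < count; ++i) {
        printf("family %u: %u queue(s)%s%s%s\n", i, props[i].queueCount,
               (props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
               (props[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
               (props[i].queueFlags & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
    }
}
```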
Totally unrelated, but relevant to what I said earlier: I noticed that there are now AMD (*) cards that have their transfer queue limitations lifted (I suspect that the possibility for the limitation was added to the Vulkan specification ONLY (**) for AMD, as I have never seen ANY other GPU out of the 3500 listed with any limitation at all - not even any of the mobiles need it).
*) "AMD cape verde" (http://vulkan.gpuinfo.org/displayreport.php?id=3471#queuefamilies) - f* finally has a usable transfer queue.
**) Not the first time something got added to the spec to accommodate AMD junk - I went to Nvidia because of a stunt like that, back when OGL on ATI/AMD got a maximum of 4 texture indirections even on their latest and greatest cards, while everyone else said: use as many as you want and can cram into the max instruction count (they literally just used the same number for the limit - because, well, you have to write something there, and the concept is meaningless for everyone else).
edit: erm, never mind. AMD still has the transfer limitations even on the "Radeon RX Vega". The "AMD cape verde" entry was an LLVM version on Gentoo Linux - that hardly counts. Back to "normal", I guess.
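For completeness, the limitation in question is the queue family's minImageTransferGranularity: per the spec, (1,1,1) allows image copies at any offset/extent, (0,0,0) means only whole mip levels may be transferred, and AMD's transfer queue reports a coarse granularity (something like (8,8,8) in the gpuinfo listings, from memory), which snaps copy regions to multiples of it. Checking your own card is trivial - a sketch under the same assumptions (and with the same hypothetical naming) as above:
```c
#include <stdio.h>
#include <vulkan/vulkan.h>

/* Print the image-transfer granularity of each transfer-capable queue
 * family; a non-(1,1,1) value on the dedicated transfer queue is exactly
 * the AMD restriction complained about above. */
void print_transfer_granularity(VkPhysicalDevice phys)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, NULL);

    VkQueueFamilyProperties props[16];  /* cap for the stack array */
    if (count > 16) count = 16;
    vkGetPhysicalDeviceQueueFamilyProperties(phys, &count, props);

    for (uint32_t i = 0; i < count; ++i) {
        if (!(props[i].queueFlags & VK_QUEUE_TRANSFER_BIT))
            continue;
        VkExtent3D g = props[i].minImageTransferGranularity;
        printf("family %u: minImageTransferGranularity = (%u, %u, %u)\n",
               i, g.width, g.height, g.depth);
    }
}
```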