Computer Build, Upgrade & Buying Resource Thread - Page 206
Forum Index > Tech Support
When using this resource, please read the opening post. The Tech Support forum regulars have helped create countless desktop systems without any compensation. The least you can do is provide all of the information required for them to help you properly.
Cyro
United Kingdom20318 Posts
First up, we have Battlefield 4, the primary game that AMD has mentioned in relation to Mantle since the tech’s unveiling.

- CPU-limited scenario: 40.9% (1080p) and 40.1% (1600p) performance improvement under Ultra settings and 4xAA on the AMD A10-7700K with an AMD Radeon R9 290X.
- GPU-limited scenario: 2.7% (1080p) and 1.4% (1600p) performance improvement under Ultra settings and FXAA on the Core i7-4960X with an AMD Radeon R7 260X.

Well, I'm really fucking disappointed with this. It's nice to not have stupid-as-shit CPU-bound games that don't run properly, but the GPU-side gain is so small that it's practically irrelevant. Given that the CPU-limited situations are fixed for Intel/Nvidia setups running Mantle too, there are no grounds at all for switching to Radeon for Mantle. Unless they make it 100% proprietary, which would be beyond ridiculous: it would make a Radeon GPU mandatory for running, for example, BF4 on a 120 Hz screen, since the game is incapable of great performance even with a 4930K and two 780 Tis, yet Mantle could allow a 4770K + a single 280X to scale to way higher FPS at medium-low settings.

Oxide Games’ Star Swarm is AMD’s other proof of concept. Oxide consists of industry veterans from Firaxis and Microsoft Studios, so they seem like good candidates for executing Mantle at its full potential.

- CPU-limited scenario: 319% (1080p) and 281% (1600p) performance improvement in the “RTS” test on Extreme settings with the AMD A10-7700K and an AMD Radeon R9 290X.
- GPU-limited scenario: 5.1% (1080p) and 16.7% (1600p) performance improvement in the “RTS” test on Extreme settings with the Core i7-4960X and an AMD Radeon R7 260X.

That's better, a bit, but I'm sorely disappointed. A 5% GPU performance gain from Mantle? It seems the entire GCN-acceleration side of it was way overhyped, and its only use (a brilliant use, but not what it was originally marketed as!) is much, much more efficient use of CPU resources.
We really, really needed that, but it does not really help their GPUs. I don't believe a lot of the high numbers quoted for Mantle apply to realistic GPU-bound situations, though, so I'm holding off on whether it will be a massive weight in performance/dollar for Radeon vs. GeForce GPUs.

> First up, we have Battlefield 4, the primary game that AMD has mentioned in relation to Mantle since the tech’s unveiling. CPU-limited scenario: 40.9% (1080p) and 40.1% (1600p) performance improvement under Ultra settings and 4xAA on the AMD A10-7700K with an AMD Radeon R9 290X. GPU-limited scenario: 2.7% (1080p) and 1.4% (1600p) performance improvement under Ultra settings and FXAA on the Core i7-4960X with an AMD Radeon R7 260X.

I hate being right.
Incognoto
France10239 Posts
http://techgage.com/news/amd-shares-fresh-mantle-numbers-promises-driver-to-support-it-soon/
http://www.pcper.com/reviews/Graphics-Cards/AMD-Catalyst-141-Beta-Driver-Brings-Mantle-Support-Frame-Pacing-Phase-2-HSA

On the GPU front the gains are indeed minimal if your CPU is stronger than the GPU. However, I think it's still noteworthy that there are some gains to be had:

Core i7-4960X CPU + R9 290X GPU
- 1080p, Ultra Preset, 4xAA: 9.2% improvement with Mantle
- 1600p, Ultra Preset, 4xAA: 10% improvement with Mantle

Core i7-4960X CPU + R7 260X GPU
- 1080p, Ultra Preset, 4xAA: 2.7% improvement
- 1600p, Ultra Preset, 4xAA: 1.4% improvement

A10-7700K CPU + R9 290X GPU
- 1080p, Ultra Preset, 4xAA: 40.9% improvement
- 1600p, Ultra Preset, 4xAA: 17.3% improvement

A10-7700K CPU + R7 260X GPU
- 1080p, Ultra Preset, 4xAA: 8.3% improvement
- 1600p, Low Preset: 16.8% improvement

In particular, the 9-10% gain on the 4960X/290X rig is quite interesting. They didn't say what frame rates they were playing at, though. For a 290X with such a beefy CPU at 1080p (I think these benchmarks were done in single player?), that should be at least 60+ FPS, no? So that would equate to 6+ FPS gained. It's not spectacular, but it's nothing to scoff at either, imo. The biggest gains are by far on the CPU front, though. This plays into AMD's hands on the CPU side, especially Kaveri; I think Kaveri + R7 250 + Mantle could get some interesting results.

Overall, though, Mantle seems to be "good" in only certain, limited situations. The fact that the developer also needs to code everything with Mantle in mind means that Mantle can't be used for other games. Seeing as I play almost no games that use the Frostbite engine, Mantle is almost completely irrelevant to me. As expected from AMD.

Speaking of CPU power, I finally found the thread that inspired my questions about the FX 6300. Well, it's actually not that great.
Single-thread performance for an FX 6300 at 4.8 GHz still isn't on par with an IB core at 3.4 GHz (stock):
http://browser.primatelabs.com/geekbench3/29291
http://browser.primatelabs.com/geekbench3/29280

The multi-core score is interesting, though; it definitely goes to show that in well-threaded tasks the FX 6300 isn't a bad chip. Still, I wouldn't game with it, not with these results; the gap between a Haswell i3 and an FX 6300, even overclocked, should be even bigger. I think Haswell has 15% more IPC than IB. Thread: http://www.overclock.net/t/1421579/fx-6300-vs-core-i5-benches/30

Edit: Failure rate numbers in 2013: http://linustechtips.com/main/topic/108284-huge-list-of-failure-rates-on-pc-components-french-but-i-translated-nearly-everything/ To think that I have a Sapphire 7970 OC. Scary. q_q

Also interesting, for those not sure whether to get Seagate or WD: http://www.overclock.net/t/1460976/gamersnexus-hard-drive-failure-rates-studied-seagate-vs-wd-vs-hitachi
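As a quick sanity check of the 9-10% Mantle figures quoted above, here is the arithmetic as a tiny sketch. The 60 FPS baseline is my assumption, not a number from the article, which gave only percentages:

```python
# Hypothetical baseline: the article didn't state absolute frame rates,
# so 60 FPS for the 4960X + 290X rig at 1080p is an assumption.
def fps_after_gain(base_fps, percent_gain):
    """Frame rate after a relative improvement of percent_gain percent."""
    return base_fps * (1 + percent_gain / 100)

base = 60.0
print(round(fps_after_gain(base, 9.2), 1))   # 65.5, i.e. ~5.5 FPS gained at 1080p
print(round(fps_after_gain(base, 10.0), 1))  # 66.0 at 1600p
```

So at any realistic baseline, a 9-10% gain is roughly 5-7 extra frames per second, which matches the "6+ FPS" estimate above.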
Cyro
United Kingdom20318 Posts
> In particular, the 9-10% gain on the 4960X/290X rig is quite interesting.

Not really, to me. Battlefield 4 has shown CPU/API limits in an annoying way. In the beta I posted graphs showing that ~3-5% of frames, on a 4770K and a GTX 770 running the game at 720p without AA, were slower than 16.7 ms (the 60 FPS mark). The same engine behavior continued across a variety of setups: AMD and Nvidia GPUs, AMD and Intel CPUs.

Simply by fixing that broken stuff in the game engine/API/platform interactions, they can get most of those results, leaving only a small fraction of that 10% down to GCN acceleration, i.e. actual GPU performance improvement. I really appreciate those fixes, and such an API being open to everyone (it's one of the most important advances in a long time! No more terrible, CPU-bound games that refuse to run well under any circumstances, like SC2 and BF4!), but if I had waited 6 months and made a purchasing decision based on a 2% GPU performance improvement (like I did for ShadowPlay), I would be really quite upset. The armies of AMD fanboys quoting it as "maybe a 20-30% improvement" make it worse. I never expected that, but damn, at least a solid 10% would have been nice; a blind-testable number in a GPU-bound case, y'know?

Oh, and Haswell IPC is more like +8-10% in most cases; it's just bigger in x264 and loads that can incorporate AVX2.
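The frame-time analysis described above can be sketched in a few lines: given per-frame render times, find what fraction missed the 16.7 ms budget and what the 90th-percentile frame rate is. The frame times below are made-up illustration data, not the beta measurements:

```python
# Frame-time analysis sketch. Illustrative data only, not real BF4 captures.
def frames_over_budget(frame_times_ms, budget_ms=16.7):
    """Fraction of frames slower than the budget (16.7 ms = the 60 FPS mark)."""
    slow = sum(1 for t in frame_times_ms if t > budget_ms)
    return slow / len(frame_times_ms)

def percentile_fps(frame_times_ms, pct=90):
    """FPS at the pct-th percentile frame time (higher percentile = worse frames)."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return 1000.0 / ordered[idx]

times = [12.0] * 95 + [25.0] * 5          # 5% of frames slower than 16.7 ms
print(frames_over_budget(times))          # 0.05
print(round(percentile_fps(times), 1))    # 83.3: the 90th percentile is still fast
```

Note how average FPS can look fine while a few percent of slow frames still show up as visible stutter; that is why percentile numbers matter here.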
Incognoto
France10239 Posts
Well, we'll have more numbers once Mantle support for BF4 comes out and people get to try it for themselves. BF4 isn't the only game that will use Mantle, either; it's taking its baby steps at the moment, so maybe things will get better. Just the fact that Mantle is free and reduces CPU usage by the amount it does (freeing up resources for other things, such as streaming) is already quite good. It's just not the good we were expecting.

8-10% still isn't bad, and it still widens the Haswell i3 vs. FX 6300 gap. If it's 15% in x264, then that basically means Haswell is better suited to encoding video (with OBS or XSplit) than IB is, correct? Though you can't really hold that against AMD processors, since they're well suited for encoding either way.
Cyro
United Kingdom20318 Posts
Ropid
Germany3557 Posts
Cyro
United Kingdom20318 Posts
It would mean that a 4670K + 280X would be able to run the game at higher FPS than a 4770K + dual 780 Ti. Significantly higher, even though the dual 780 Ti setup could easily bench over 3x higher in GPU-bound cases and costs about 3.5x more.
Ropid
Germany3557 Posts
Regarding CPU and GPU and where you get what gain, check this out: a comparison of two fast cores vs. four slow cores across the AMD and NVIDIA drivers and GPUs: http://pclab.pl/art55238-3.html

It's Polish, but I guess the graphs are self-explanatory. Those dudes recorded numbers for their quad-core Haswell overclocked to 4.6 GHz with only two cores enabled, and then for the same CPU running at only 2.3 GHz but with all four cores enabled. The NVIDIA driver team managed to get a lot out of the extra cores, but the AMD driver did not care about them at all, only about clock speed.

So what might be happening with Mantle is that AMD simply lets the game developers have a go at it themselves. I assume they'll make it possible to get some sort of pointer and memory access directly into the graphics card's VRAM without any driver in the way. The programmer then writes the data structures the GCN cores need directly into graphics card memory. If it works like that, it would likely be impossible for NVIDIA to implement Mantle compatibly with cards using AMD's GCN cores.

With a normal API that sits a little farther from the actual hardware, like OpenGL or DirectX, the driver translates all data structures into whatever the GPU uses internally. The drivers come with things like a compiler for OpenGL's and DirectX's shader languages that creates the real shader code.

There was already an OpenGL extension from NVIDIA that allowed some sort of direct access to shaders and the vertex buffer, but no one used it. You can find things about this if you search for "bindless graphics OpenGL". I don't really understand the details much, but there was something like this: even if AMD implemented the extensions, code written to manually work on the shaders for NVIDIA couldn't be reused on AMD, as the shaders would have to be different. I might have misunderstood. In any case, I wouldn't hold my breath for Mantle on NVIDIA.
What makes more sense is the ideas getting used in the design of the next big DirectX version and the next big OpenGL version. As that stuff on the Polish site shows, it might also simply not be that urgent: things might run fine on quad-core CPUs and NVIDIA, and only work a little shitty on dual-core CPUs.

Developers also already had access to those NVIDIA OpenGL extensions and ignored them. This might be because if they actually used those extensions, they might create graphics that simply wouldn't run fast enough on AMD cards, so they had no choice but to ignore them. I'm thinking of that "Star Swarm Demonstration" video about what's possible using Mantle.

Now that there's Mantle on AMD, developers could use that for AMD and use OpenGL for NVIDIA. They could first create a crappy OpenGL version of their code, which is useful for porting to Apple and Linux (Steam). Then they could create a special NVIDIA OpenGL version and a special AMD Mantle version. I think AMD mentioned that you'll be able to write a DirectX program and use Mantle together with DirectX; maybe that's also possible if you write an OpenGL program.
Cyro
United Kingdom20318 Posts
As of now, you can keep 90th-percentile FPS 50% higher with a 4770K and a 280X than with a 4770K and a 780 Ti... so... ouch. Anyone targeting a low FPS like 60 with an otherwise-overkill CPU will be bothered by some slow frames and weird stutters, but anybody targeting high and consistent FPS is screwed without Mantle. I'm glad I don't play that game, because +50% FPS in a first-person shooter is too big to ignore.

The +200%, +300% gains on the CPU front are even more worrying. If something as simple as an API allows an FX 6300 to make a 4.5 GHz 4670K look like a Sandy Bridge Pentium, but only when used with a Radeon GPU, it'll split the market really hard.

That Star Swarm demo, which is quoting such numbers, is supposed to show up on Steam today.
Ropid
Germany3557 Posts
It's interesting that the fps drop seems avoidable by simply disabling a bunch of effects. That would be ugly if they used that to get it running fine without Mantle.
Myrmidon
United States9452 Posts
e.g. AnandTech: "to this day the number of draw calls high-end GPUs can process is far greater than the number of draw calls high-end CPUs can submit in most instances." I.e., it's helping out the CPUs: http://images.anandtech.com/doci/7371/FBMantle.jpg

Of course, there are maybe some other benefits too, but that wasn't the focus. Also, low-level stuff, even if not Mantle specifically, is of course what can help out those puny Jaguar cores in the consoles.

It is also quite possible that programmers have enough trouble porting their stuff to Mantle that they have not really found time to do all the "low-level optimizations" that would speed things up on the GCN GPU. Better utilization and exploitation may come with time. I haven't read the articles yet, though, so...
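The draw-call bottleneck AnandTech describes can be put as a toy model: per-call CPU overhead caps how many draws the CPU can submit in one frame, regardless of how fast the GPU is. The overhead figures below are made up for illustration, not measured API costs:

```python
# Toy model only: overhead numbers are invented, not real D3D/Mantle costs.
def max_draw_calls(cpu_budget_ms, overhead_ms_per_call):
    """How many draw calls fit into one frame's worth of CPU time."""
    return round(cpu_budget_ms / overhead_ms_per_call)

budget = 16.7  # one 60 FPS frame of CPU time, in milliseconds
print(max_draw_calls(budget, 0.01))    # chatty API: 1670 calls per frame
print(max_draw_calls(budget, 0.001))   # 10x thinner API: 16700 calls per frame
```

Cutting per-call overhead multiplies the draw-call budget directly, which is why a thin API helps weak CPUs (like the consoles' Jaguar cores) far more than it helps the GPU.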
Cyro
United Kingdom20318 Posts
Ropid
Germany3557 Posts
I wonder what'll happen over the next 12 months or so. It might just happen that NVIDIA gets on board with this.
Cyro
United Kingdom20318 Posts
Ropid
Germany3557 Posts
You have to imagine the "API" as some sort of contract. It's just a piece of paper with rules: it describes how programs and the driver have to talk to each other. NVIDIA would have to build their own driver for that Mantle API. They are the only ones who know how their hardware works, as those things are secret, so no one else can do this. The catch is that it's in beta or something; only the game developers involved know the concrete details of the API at the moment.

DirectX has a special installation program, but OpenGL, for example, has nothing; it comes with the graphics card driver. Mantle should work like that, coming with the graphics card driver.
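The "API as a contract" idea above can be sketched abstractly: the spec fixes only the call surface, and each vendor ships its own implementation behind it. All names below are invented for illustration; they are not real Mantle entry points:

```python
# Illustration of an API-as-contract: the abstract class is the "piece of
# paper", and each GPU vendor would supply its own concrete driver.
from abc import ABC, abstractmethod

class MantleLikeAPI(ABC):
    """The contract: what any conforming driver must provide."""
    @abstractmethod
    def create_command_buffer(self): ...
    @abstractmethod
    def submit(self, command_buffer): ...

class VendorDriver(MantleLikeAPI):
    """One vendor's implementation; another vendor could ship its own."""
    def create_command_buffer(self):
        return []
    def submit(self, command_buffer):
        return f"executed {len(command_buffer)} commands"

driver = VendorDriver()
cb = driver.create_command_buffer()
cb.append("draw")
print(driver.submit(cb))   # executed 1 commands
```

A program written against the contract would run on any vendor's driver, which is exactly why NVIDIA supporting Mantle is a question of them writing a driver, not of the hardware.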
RiSkysc2
696 Posts
CPU - i5 2500K
GPU - Radeon 7950
RAM - some random 2x 4 GB low-profile RAM

I'm going travelling and to college in the next 2 years, so I want a PC that is extremely portable as well as very powerful. As already mentioned, I've got the main stuff, but I need some help with the extras, such as a cooler, case, PSU (maybe), mobo, and CPU (maybe). My favourite case so far is probably the Corsair Obsidian 250D, and I was thinking of using an H100i cooler with it. Anyway, I hope that's enough info. Oh, it would also help if the case had a handle or the ability to be modded with one.

Totally forgot to give my budget: preferably as low as £350, but I can go as high as £1200 at the very highest (I'd prefer to save as much money as possible for travelling).
Incognoto
France10239 Posts
However, I think for portable ITX builds I may revise that position. Rads are less cumbersome than huge heatsinks overall. Secondly, and more importantly, CLCs place a much smaller strain on the motherboard than I think a big cooler would, which is definitely something to take into account if you're going to be moving your rig around a lot.

In no way whatsoever did I offer any good advice in this post; just thinking out loud and mulling over the previous Mantle discussion and its implications.
RiSkysc2
696 Posts
- I don't need to take it out when travelling (probably).
- Easier to maintain.
- I really want as small a case as possible, so airflow would be a major problem with an air cooler.
- I've never used a water cooler, so I want to learn how to set one up.
- Less susceptible to damage (maybe?).
- And a high-end air cooler will take up quite a bit more area than a CLC (less visually appealing).
Cyro
United Kingdom20318 Posts
I don't really like traveling with a big heatsink mounted; it's a pain to unmount and remount the cooler and GPU etc. when moving. A small system with a CLC might be nice.

I saw the H100i for ~£88 earlier at Amazon and Scan. That's cheaper than it was available before, if you're happy with the stock fans. It's pretty simple to set up, since it's just mounting the block on the CPU and then attaching the rad/fans to the case (I'm not entirely sure how this works, but it should be simple).

> And also a high end air cooler will take up quite a bit more area than a CLC (less visually appealing).

This is really subjective; I find big and bulky with pretty colors to be very visually appealing and powerful-looking, same with fan noise etc. :D