When using this resource, please read the opening post. The Tech Support forum regulars have helped create countless desktop systems without any compensation. The least you can do is provide all of the information required for them to help you properly.
On August 06 2015 23:02 Cyro wrote: "tdp" is just a power limit that anyone on an unlocked board can change. For reasons unknown, the broadwell silicon doesn't seem to clock nearly as well as skylake. With CPU performance these days, even a 7-10% loss in frequency is a huge deal
Gustafson's Law instead proposes that programmers tend to set the size of problems to use the available equipment to solve problems within a practical fixed time. Therefore, if faster (more parallel) equipment is available, larger problems can be solved in the same time.
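For reference, here's the scaled-speedup formula that captures that idea, in standard textbook notation (s = serial fraction of the work, N = number of processors; the numbers in the example are purely illustrative):

```latex
% Gustafson's scaled speedup: the workload grows with N,
% so only the serial fraction s fails to scale.
S(N) = N - s\,(N - 1)
% Illustrative example: s = 0.1, N = 6  =>  S = 6 - 0.1 \cdot 5 = 5.5
```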
To some extent, but it doesn't work out well in practice. Sc2 gets no benefits from a third core and it launched like three years after quad cores started to become common. You can always trust developers to lack competence and money, if nothing else.
That's because you only look at games. Modern software already utilises threads. Threading can be improved but don't blame all developers for gaming's bad performance. There is much more to computers than games.
Why don't you actually learn how to program before you insult all developers? Then you'll see how common threads are.
Edit: It actually works both ways. There are hardware people who would go for crappy, non-optimal builds. Is it still developers who you should blame? There are good and bad developers and engineers.
I think broadwell vs skylake OC has a lot to do with the layout of the chips and thermal dissipation: both 14nm chips look to be more capped by temps than by voltage, and the huge iGPU on the broadwell die forces the heat to be crammed into a much smaller piece of the processor, whereas the skylake layout has the cores much more evenly spread out.
The core size is actually pretty much the same on both AFAIK. Cores are actually very small on the die. On Skylake, only something like 15-20%ish of the die is CPU cores and the rest goes to cache, memory controller etc, with of course ~55% to the iGPU - but that's just added on the side, it doesn't really cut into the CPU (aside from the fact that they could have used that die area for more CPU-related things).
The broadwell CPUs were more optimized for low power. Intel said themselves that they were not really prepared for desktop usage, and they might have just had some things different in the manufacturing process to improve performance at low clocks+volts (2GHz range) at the cost of the clock ceiling dropping. It's not really clear, I'm not an Intel engineer.
That's because you only look at games. Modern software already utilises threads. Threading can be improved but don't blame all developers for gaming's bad performance. There is much more to computers than games.
Why don't you actually learn how to program before you insult all developers? Then you'll see how common threads are.
I know some basics of programming. It's not difficult to thread. It's difficult to thread WELL. Like I said, Amdahl's law - if you thread 90% of the work perfectly onto SIX CORES, you'll get 4x performance and not 6x because you couldn't do the last 10%. Depending on the program, there is often MANDATORY work to be done on one thread (or fewer threads than you'd like to use) which completely ruins your day for scaling. For some uses it's almost perfect (a video encoder can split a 1920x1080 frame into 6 segments easily, run 6 threads in parallel and get a hair under 6x performance with relatively minor losses in video quality); for others it's not.
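Spelling out that arithmetic with Amdahl's law (p = 0.9 parallelised fraction, N = 6 cores, the same numbers as above):

```latex
S = \frac{1}{(1 - p) + \frac{p}{N}}
  = \frac{1}{0.1 + \frac{0.9}{6}}
  = \frac{1}{0.25}
  = 4
```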
Threading extremely efficiently requires a lot of time and skill, which costs real money. It's not the highest priority for most developers and it will not be in the near future, so you shouldn't expect otherwise. For most applications, nobody will pay you twice as much to make a program that runs a bit faster if the end result is the same. If it's not a particularly optimization-sensitive program then it's usually just not worth the time and money, even assuming your guys are skilled enough to do it well in the first place.
Is it still developers who you should blame? There are good and bad developers and engineers.
Not blaming them, it's just the reality of the situation that if you can spend a year making a game that runs at 100fps or two years making one that runs at 110-120fps, usually nobody will pay you for the second option. And well, there has been more than enough talk from places like Nvidia where they claim to have an entire team dedicated to fixing the rookie mistakes that people make again and again when shipping AAA games which kill graphical performance. If people did a better job of coding, 90% of nvidia's driver team would be out of a job.
If anything, you should blame game project managers instead of developers. Remember games like Arkham Knight? Who releases them before they're actually ready? I'm sure it's not developers.
I'm building a computer for someone with very basic needs, i.e. Internet browsing, Youtube, Skyping, movies, and music. Here are the components I intend to buy:
I already have a couple of 2GB RAM sticks so I'm not buying any. Nor is there any need for an OS. I don't know the resolution but it's not very likely to be particularly high or very important here, so I'm guessing that Intel HD graphics should be more than enough for the purposes outlined above.
This totals almost exactly 200 EUR where I'm from which is the budget I'm aiming for.
What I would like to know is whether all of this is compatible and future-proof enough to last the user at least 3 or 4 years without complications given the mode of use I've specified. Also please indicate if I'm missing anything or if some of the components are overkill for the task at hand.
For instance, I feel that it would be better to get a 300W PSU with better efficiency rather than the HEXA+ option I've mentioned. Although there are other units available in the store I'm planning to buy from, very few of them are popular enough to have reliable reviews and/or test reports online, so it's difficult to say if their marketing claims are legit. I would really appreciate it if somebody could take a look (just click here) at what they have on offer and vouch for a PSU costing around 40 EUR or, better yet, point me in the direction of a good review/test report for a decent unit I've overlooked.
How about a cooler master elite 110 case + msi h81i mobo? It would be itx (smaller, cuter) plus it would have 2xusb3 ports! It won't be much more expensive, though I don't know the exact prices.
Corsair Builder CX430 is a good choice for you I think, though it's a bit more expensive. It's not great for more serious systems, but at least it's Corsair; it would be able to run a single Celeron without complications for years. Here is a review for it.
An SFX PSU will fit an ATX case with an adapter bracket. Some SFX units come with one, cheaper ones don't.
Alas, neither size nor cuteness is a factor here. The price is, however, and your suggestions would add at least another 50 EUR to my expenses, which is hard to justify given that H81M-E33 also sports 2x USB 3.0 ports.
Speaking of Corsair, I was looking at their VS350 PSU, but there seems to be some ambiguity as to whether they are really worth buying, and since I was able to find at least one satisfactory report on the HEXA and the prices do not differ all that much where I'm from, I decided to go with FSP's device. Still open for suggestions, though :-) That said, CX430 does seem like a better if a bit pricier option.
Memory world records are done with low CPU clock speeds.
CPU clock speed world records are done with low memory clock speeds.
Nothing new here - if you try to break both world records at the same time, you'll break neither.
Suicide run validations are not about practicality. It's a game of turn the computer on and take a screenshot before it crashes, hoping that you can keep the OS up for 10 seconds.
skylake 6600k/6700k pcb thickness and die size/shape:
top = devil's canyon, bottom = skylake
more square than Haswell
Seen reports of ~20C temp drops w/ delid (but it's more risky to delid with the vice method due to the thinner PCB) - temps are not as bad as launch Haswell to start out with though, I think.
After seeing how the 6700k compared to the 3570k is a ~50% increase in performance for Sc2, I just ordered one on pure impulse... However, given time to think, I'm starting to wonder if I shouldn't cancel it before the weekend is over, as it's pretty much a luxury I shouldn't go for. Do you guys think LotV will have increased performance as is? Has Blizzard given any indication of working in this area?
How important are the extra cores (HT) for DX12? I saw some test that seemed to indicate that it did a lot, but I haven't seen anything to back it up besides that.
I also went modest on the CPU cooler and ordered a Noctua NH-D9L for it. It's not the best, but it was cheap, and if it handled a 4790k at around 70-72C under load it should be more than enough for Skylake at stock, don't you think?
It wouldn't suck to have a decent CPU to complement my GTX980 but now I'm trying to find a reason not to go through with it..
After seeing how the 6700k compared to 3570k is a ~50% increase in performance for Sc2
It doesn't have that much increase :0
The only bench I've seen showed ~30% higher FPS, but improving the FPS number by that much doesn't necessarily improve perceived performance that much because the sc2 engine is weird. Going from slower RAM to fast DDR4 would improve performance significantly, though.
How important are the extra cores(HT) for DX12?
Going from 4 to 6 real cores (a 50% increase in core count) adds ~20-25% on synthetic draw call tests, so HT is probably only a fraction of that. It might be more or less for overall performance depending on the game engine (because the load from dx12 is only a part of the CPU workload when playing a game) - there's probably a ceiling of +20% or so if the game benefits a lot and is entirely CPU limited at that time.
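Rough back-of-envelope with Amdahl's law (my own illustrative numbers, not from any specific benchmark): if 1.5x the real cores only buys ~22%, then only about half of the per-frame CPU work actually scales with cores, and HT is worth much less than a real core on top of that:

```latex
\frac{1}{(1 - p) + \frac{p}{1.5}} \approx 1.22
\;\Rightarrow\; 1 - \frac{p}{3} \approx 0.82
\;\Rightarrow\; p \approx 0.54
```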
The only bench I've seen showed ~30% higher FPS, but improving the FPS number by that much doesn't necessarily improve perceived performance that much because the sc2 engine is weird. Going from slower RAM to fast DDR4 would improve performance significantly, though.
Going from 4 to 6 real cores (a 50% increase in core count) adds ~20-25% on synthetic draw call tests, so HT is probably only a fraction of that. It might be more or less for overall performance depending on the game engine (because the load from dx12 is only a part of the CPU workload when playing a game)
Well I did get 2133-speed DDR4 RAM, going from 1600-speed DDR3, so I guess that's a bit faster, but AFAIK memory speeds rarely have any real tangible effect on games, perhaps a percent or two at the most. Unless that's changed, and if so perhaps I shouldn't have been so hasty in going for the native Skylake-speed RAM...
30% gains at the same clock speed. IDK what clock you're running on the 3570k, but 6600k/6700k's seem to be able to sit approximately in the 4.5-5GHz range at safe volts. For actual perceived performance (time that the slowest frames take) I'd expect ~1.2x faster with similar RAM performance, not 1.3x (inflated FPS number).
Well I did get 2133-speed DDR4 RAM, going from 1600-speed DDR3, so I guess that's a bit faster
It's actually not, because 2133 DDR4 is usually at completely awful timings like CAS 15. You can get ~3000 C15 for a barely higher price and it performs much better. The DDR3 equivalent would be like buying 1333MHz C9 instead of 1866 C9.
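Rough first-word latency numbers behind that (assuming typical kits for each speed, so treat the exact figures as illustrative):

```latex
\text{latency (ns)} = \frac{\text{CL} \times 2000}{\text{transfer rate (MT/s)}}

\text{DDR3-1600 CL9:}\quad \frac{9 \times 2000}{1600} \approx 11.3\ \text{ns}
\text{DDR4-2133 CL15:}\quad \frac{15 \times 2000}{2133} \approx 14.1\ \text{ns}
\text{DDR4-3000 CL15:}\quad \frac{15 \times 2000}{3000} = 10.0\ \text{ns}
```

So DDR4-2133 CL15 is actually a step backwards in latency from a typical DDR3-1600 CL9 kit, while ~3000 CL15 beats both.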
but AFAIK memory speeds rarely have any real tangible effect on games, perhaps a percent or two at the most
Irrelevant for GPU-bound games; makes as much difference as +100-400MHz on the CPU for some games that are not GPU limited. The sc2 engine is one of the biggest gainers from RAM performance out there, so +10% performance wouldn't be unexpected.
Oh I don't plan to overclock, and I haven't overclocked my 3570k, which is why I'm only comparing the stock speed performance. I'll look and see what other options I have for memory then; there's a pair of 2400 sticks that's almost the same price, but anything over that which is in stock adds like 20% where I'm buying from, whelp! Thanks for the advice, Sc2 performance is obviously my highest concern as I'm never happy with it.
Running the CPU and RAM over the stock speeds gives a lot more performance. 6600k and 6700k are functionally the same for sc2, no notable performance change from either the hyperthreading or 33% more L3 cache.
OC'd i5 + RAM vastly outperforms stock i7 + 2133 RAM and costs less at the same time, just requires some time/learning to set up
We only have the 6600k/6700k and z170 boards available right now, so you have to pay a lot of the price that an overclocker would pay if you're buying now. Also, I've heard that the 6600k/6700k don't come with a stock cooler any more, but I didn't confirm that.
Non-6700k's would probably have reduced stock speeds though, to make people who don't want to tweak clocks themselves pay a ton extra for a 6700k even when it offers them nothing otherwise.
You're right that it doesn't come with a stock cooler, which is why I already added an aftermarket one. Well, tbh the Noctua NH-D9L is probably nothing more than a glorified but silent stock cooler and nothing I'd feel safe overclocking with. I'll have a use for the i7 when I stream, which is why I went that route this time. Well, I guess I'm going through with it then; I'll look into memory ASAP as well.
Memory is a secondary concern to CPU performance, and frequencies over 2133 are technically overclocking; it's just much better to invest in an OC'd i5 (and add fast memory) for streaming sc2 than a stock i7.
The higher frequency out of the box on the i7 is a good excuse for them to charge 130 euros extra for people who can't tweak themselves, but it's really not that hard.
I just rarely recommend a quad core i7 these days for gaming+streaming because of the drawbacks. Mainly the increased cost (130 euro, maybe 150 when considering a better cooler because HT increases temps) and the limited way that the performance increase applies. I don't actually have data in front of me right now, but based on previous tests I think that streaming sc2 @ 720p60/1080p30 you wouldn't actually see any FPS change in the game at all from Hyperthreading.
And I will never CPU encode streaming sc2 again, since the performance hits make an engine that's already a bit painful for me to play in become less tolerable, and NVENC is available + good enough without impacting performance in an annoying way. I'll actually benchmark performance a bit, I'm curious. The main thing is - those performance hits are not strictly related to how fast the CPU is. They're due to weird stuff like transferring frame data from the GPU to main system RAM, and having a 6700k instead of a 6600k doesn't reduce that AFAIK.
Why are you guys so concerned about sc2 performance? It even runs on my i3 with high graphics, although 3v3 suffers a little bit. Are you playing 8 player FFA on 4k 120hz?
@Cyro You're probably right about NVENC, how's OBS support for that encoder nowadays? Using shadowplay by itself is pretty boring when you can't overlay, the quality being lower at any given bitrate is also a concern I suppose. And with Sc2 I've got GPU horsepower to spare with the 980..
@mantequilla I think many of the lategame FPS drops in 1v1 down to 30-40 are pretty annoying, I know I'll never hit a constant 120fps no matter what I upgrade to but the closer I can get the better.
At this point I think the happiest day of my recent life would be if Blizzard surprised us with an engine overhaul before release of LotV, being able to effectively use multiple cores for example.. It'll probably never happen, but one can dream.
Because sc2 runs pretty terribly on any hardware. The engine gives you frames with uneven spacing, which makes the FPS number that you see artificially inflated (you can tell the difference between 200 and 300fps on a 144hz monitor) and FPS drops a lot.
For performance to not change in a significant way with a 60hz monitor, you'd need to maintain ~90-100fps or so (as reported by this game engine), and you can't do that during fights with lots of units involved.
Here's one of my old bench runs on overclocked haswell w/ fast RAM in a game with about 500 supply
note constant low-ish FPS and big dips during combat shown on fraps FPS monitor @ bottom right.
You can get that low FPS, or even worse in extreme cases, in 1v1. For example, a game that I got a replay of before: a GM 1v1 ladder game that ended with a TvZ max supply air engagement over a field of missile turrets. The longest frames during that engagement were taking 60-70 milliseconds - the equivalent of 14-17fps.
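To illustrate why the average FPS counter hides that, here's a tiny sketch (the frame times are made up, not data from that replay):

```python
# Illustrative only: fabricated frame times (ms) for a ~1-second slice where
# most frames are fast but a few combat frames spike to 60-70 ms.
frame_times_ms = [8] * 100 + [65, 70, 62]

total_seconds = sum(frame_times_ms) / 1000
avg_fps = len(frame_times_ms) / total_seconds   # what an FPS counter reports
worst_ms = max(frame_times_ms)                  # what you actually feel

print(f"average FPS: {avg_fps:.0f}")            # ~103 fps, looks fine
print(f"worst frame: {worst_ms} ms = {1000 / worst_ms:.0f} fps equivalent")  # ~14 fps
```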
If you're just playing 1v1 it's not strictly necessary to have more performance, but you really feel it, especially in certain cases. When you make an upgrade and get 1.5x FPS it's like "holy shit, performance didn't fall off a cliff" whenever you do certain things, like get stormed or fly into a wall of missile turrets. Whenever you're high supply, you feel it. If you have a monitor that's not 60hz (60hz is really low and not all that smooth as a ceiling) then you'll feel it a lot more, even from the early game.
Running sc2 at 4k is easy, it's very GPU-light. FPS (whenever it matters) is probably identical at 540p or 4k (a 16x resolution increase) with a midrange GPU these days, especially if you're playing on competitive settings and not straight max settings.
@Cyro You're probably right about NVENC, how's OBS support for that encoder nowadays? Using shadowplay by itself is pretty boring when you can't overlay, the quality being lower at any given bitrate is also a concern I suppose. And with Sc2 I've got GPU horsepower to spare with the 980..
1: The OBS support for NVENC is, laughably, even better than Shadowplay's. Shadowplay is still in beta after like two and a half years and it's clearly aimed at people lacking technical knowledge; there are not a lot of knobs to adjust and you have some default settings forced on you. The lower quality at the same bitrate is the main downside, but it's not so bad, especially with the Maxwell NVENC (being twice as fast as last gen) and the better NVENC settings that you can use because of that.
2: A 980 could max sc2 and be at like 10% load during the FPS dips where performance really matters (due to being really hard CPU limited), but even if that wasn't the case, NVENC encoding is not a typical added GPU load; it's a completely separate thing, so it doesn't really matter at all if your GPU is taxed or not.
That reminds me of pes5 (a 2005 game) crawling on hardware that's 3x stronger than its recommended specs. Bad programming is bad programming. Although sc2 is way more complex, so it would be better to compare it with other RTSes.