|
United Kingdom20285 Posts
![[image loading]](http://i.imgur.com/t5zm2Km.png)
Taken from the benchmark I was doing before: a 5-minute replay segment at medium shaders, maxed physics and effects, reflections on, at x4 speed. I'm pretty sure the performance would carry over to regular gameplay.
I'll drop my core clock to 3.5GHz and run the bench again (:
Frames / Min / Max / Avg: 3048 / 24 / 89 / 56.161
I couldn't set 3.5GHz because the UD3H BIOS, even though it shows x35 / 3.5GHz right there after you set it, somehow decides to boot at x25 / 2.5GHz (gg UD3H), so I benched at 3.6 instead:
3.6 to 4.5GHz on Haswell:
- 25% increase in frequency
- 37.5% increase in minimum FPS
- 27% increase in average FPS

^Both numbers from 1600MHz 10-10-10-28 RAM and stock uncore (worked out in the sketch below).
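A quick sketch of where those percentages come from. The 3.6GHz min/avg (24 / 56.161) are from the run above; the 4.5GHz figures (33 / ~71.3) are assumptions here, back-computed from the quoted percentages, since that raw run isn't posted.

```python
# Recomputing the scaling percentages quoted above.
# 3.6GHz min/avg come from the bench run in this post; the 4.5GHz
# numbers (min 33, avg ~71.3) are back-computed assumptions.

def pct_increase(old, new):
    """Percentage increase going from old to new."""
    return (new - old) / old * 100

print(f"frequency: {pct_increase(3.6, 4.5):.1f}%")      # 25.0%
print(f"minimums:  {pct_increase(24, 33):.1f}%")        # 37.5%
print(f"averages:  {pct_increase(56.161, 71.3):.1f}%")  # 27.0%
```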
Oh and also:
Running: 4.5GHz core, stock uncore, 2000MHz 10-10-10-28 @ 1.45V RAM, 1.05 Vring, 1.86 VRIN, +0.15 digital I/O, +0.15 analog I/O, +0.1 system agent, 1.195 Vcore.
Through a ton of painful trial and error, it became obvious that IBT does not do its job well at all, at least with AVX off. After passing what probably added up to hours of IBT under 1.16V (some at 1.155), I had to raise Vcore slowly up to 1.195 for 4.5GHz for stability to hold up in the system under high loads, x264 encoding etc., and to make WHEA errors go away. I have NEVER before seen a CPU pass 20 minutes of IBT max RAM, then reboot with +0.02 Vcore and bluescreen multiple times without even reaching the desktop. Even if it's common to need a bit more voltage than it takes to pass IBT, this was extremely confusing to me. It's been rock solid at 1.195 (I initially set 1.2, then dropped it when I had no sign of issues whatsoever), and I passed three and a half hours of IBT max RAM at similar settings before (1.195 Vcore), so I'm pretty sure I'll be fine at this voltage, or maybe a tiny bit higher. I made some wrong assumptions about what worked for stability and what I "should" be able to do, which cost me a lot of time and caused a lot of confusion.
System agent voltage is down to +0.1 now (I didn't try to lower it further; not really bothered about +0.1, though I can maybe experiment a bit).
When I set digital I/O and analog I/O to +0.5 instead of +0.15, I got a 0x0124 bluescreen before Windows even loaded. I might nudge them down a bit, but it's obvious they're relevant for stability. By far the most important settings, I'd say, are Vcore and VRIN (IVR input voltage). People seem to be using vastly different numbers on different boards; my best is around 1.86 (at least for this Vcore), but I've seen people who know what they're doing set 1.9-2.22 VRIN on other motherboards, and I've been told up to ~2.35V should be fine on air (which means basically any cooling that keeps your CPU temperature in the normal air/water range above ambient). I personally wouldn't push limits like that long term, though, when short term it has already killed some (apparently non-retail) CPUs.
VRIN and a bit of tweaking of the digital and analog I/O voltages can shave quite a bit off the Vcore you need to be stable, or quite easily turn an unbootable setup into one that's impervious to every stress test failure.
|
Don't slower memory kits usually have tighter timings? Even the cheapest 1066 kits available today are CAS 7.
|
United Kingdom20285 Posts
SC2 doesn't really care about timings AFAIK.
I remembered this:
FPS shown as Min/Max/Average, at a 2.8GHz CPU clock:

| RAM | Min | Max | Avg | Relative |
| --- | --- | --- | --- | --- |
| 1066 CL9 | 28 | 133 | 61.6 | 95% |
| 1066 CL6 | 28 | 134 | 62.6 | 96.6% |
| 1333 CL9 | 30 | 139 | 64.8 | 100% |
| 1333 CL6 | 30 | 139 | 65.2 | 100.6% |
| 1600 CL9 | 31 | 142 | 67.2 | 103.7% |
| 1600 CL6 | 31 | 142 | 67.3 | 103.8% |
I'll tighten the timings to CAS 8 or something at some point and see if the game cares, but I don't really think it will. 8 vs 12 might be relevant.
|
On June 17 2013 00:11 SoulWager wrote: Don't slower memory kits usually have tighter timings? Even the cheapest 1066 kits available today are CAS 7.
For whatever reason, that doesn't seem to help very much. Memory speed is what determines bandwidth, and the CPU's various caches hide higher memory latency better than they make up for low memory speed.
There's also this:
7 / 1066 = 0.0066
10 / 2000 = 0.005

10 / 2000 < 7 / 1066
Those 10 clock cycles at 2000 MHz take a lot less real time than those 7 clock cycles at 1066 MHz.
Interestingly, the BIOS of my Gigabyte board increases latencies through that formula by itself if I try to overclock memory speed. In that 7-at-1066MHz example, if I set that memory to 2133MHz and left the latencies on Auto, the board would change the 7 into a 14. The latency in real time would stay the same, but the memory would still have double the bandwidth, if it actually worked without errors at that higher speed.
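A minimal sketch of that arithmetic, assuming the quoted speeds are DDR transfer rates in MT/s (so the absolute figures are half the true first-word latencies, a factor of two that cancels out of the comparison):

```python
# Relative latency as compared in the post: CAS cycles divided by
# the DDR transfer rate (commonly quoted as "MHz").

def rel_latency(cas_cycles, speed_mts):
    return cas_cycles / speed_mts

print(f"{rel_latency(7, 1066):.4f}")   # 0.0066 -- CL7 @ 1066
print(f"{rel_latency(10, 2000):.4f}")  # 0.0050 -- CL10 @ 2000, still lower

# The Gigabyte BIOS behaviour described above: scale CAS with the
# new speed so real-time latency stays constant.
def auto_cl(old_cl, old_speed, new_speed):
    return round(old_cl * new_speed / old_speed)

print(auto_cl(7, 1066, 2133))  # 14 -- same real latency, double the bandwidth
```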
|
United Kingdom20285 Posts
I like this scaling, I'm gonna try and grab 2400MHz.
I'm uncertain of the IMC's capabilities (Ivy Bridge often hit trouble around 2200-2400 with two sticks, or even ~2000-2200 with four, from what I hear). People report that Haswell at stock can take massive memory frequencies, well into the 3000s with two sticks, but loses memory frequency headroom when you OC. It's unclear whether core frequency costs this headroom, or uncore, etc. I saw almost no performance gain in SC2 from increasing uncore clocks, so if I had to leave it at stock, that's not really a big deal as long as it doesn't play a major part in performance for anything else I want to do. 1600 CAS 10 to 2400 CAS 12 could give pretty groundbreaking increases in minimum and average FPS, perhaps as much as 15-20% (quick numbers in the sketch below). That's a LOT more than I would have previously expected from RAM.
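For a rough sense of the raw deltas behind that guess (a sketch using the same relative-latency arithmetic as above, not measured data):

```python
# 1600 CL10 vs 2400 CL12: bandwidth and real-latency deltas only;
# actual FPS gains depend on how hard SC2 leans on memory.

old_speed, old_cl = 1600, 10   # current kit
new_speed, new_cl = 2400, 12   # candidate kit

bandwidth_gain = (new_speed / old_speed - 1) * 100
latency_drop = (1 - (new_cl / new_speed) / (old_cl / old_speed)) * 100

print(f"bandwidth:    +{bandwidth_gain:.0f}%")  # +50%
print(f"real latency: -{latency_drop:.0f}%")    # -20%
```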
|
Cyro, your graphs are so pretty, thanks for doing all the work! Have you done all of this on OCN yet?
I'm also stable at 4.5GHz. I just passed seven and a half hours of AIDA64, along with a long IBT run at the beginning, so I think I'm good to go. My stats are:
Asrock Z87M Extreme4
44x multiplier @ 1.18 Vcore
40x cache/uncore @ 1.1V
1.85 VRIN
+0.15 Digital I/O, Analog I/O and System Agent offset
1.65V 1600MHz RAM
Max temps in IBT were about 87, 84, 79, 76°C IIRC, and AIDA was 84, 82, 80 and 74. I might try and up the voltage to 1.2 or 1.21 and see if I can hit 4.6GHz, but if I can't hit it at 1.21, I don't think I'll be able to reach it. I might be able to go a little bit lower with AVX off and testing using x264 though, I'll possibly try that.
|
United Kingdom20285 Posts
Cyro, your graphs are so pretty, thanks for doing all the work! Have you done all of this on OCN yet?
Lol, random online chart tool, and np
Probably not going to post anywhere else; there's not much to gain. I don't see any point in getting into an online argument about how silly temperatures are with AVX when I can't provide absolute evidence of all the whats, hows, whys etc. I'm sure beyond reasonable doubt that it's some weird artificial temperature gain, and I think it's silly to overclock around it when it doesn't happen outside of specific stress tests that heavily use AVX instructions, but I can't exactly convince people of that without more data, evidence and understanding of everything (unless something obvious turned up, for instance the Vcore readout point showing +0.12V on a multimeter only when running IBT/Prime with AVX enabled, at which point it would be clear what's going on), so I'm not going to try. I showed my temps in one thread and pretty much nobody even bothered to comment (~70°C absolute max with x264; ~74°C absolute max with IBT with AVX off, reached in a shorter time, as expected; but thermal failure with AVX on), so I'll hold back unless I can say things for certain. As for the other data, it's more relevant to SC2, and very few people on OCN know or care about SC2 in particular (the ones that do are more likely to visit TeamLiquid than OCN).
1.21, I don't think I'll be able to reach it. I might be able to go a little bit lower with AVX off and testing using x264 though, I'll possibly try that.
With AVX off, your temps will drop like a rock in everything, to pretty much where they SHOULD be in the first place. Nothing you would actually run changes temps by even a notable amount with AVX on/off (from what I've seen so far); it's just some stress tests that go wild. You should be able to run quite a bit more than 1.2V on an i5 with a U12S. On Ivy Bridge you could probably do 1.35.
|
By the way, do you have a power meter? How much power is consumed during IBT with AVX enabled, and then with it disabled? How much under, say, maxed-out x264 encoding? Why not just set a power or current cap such that you throttle slightly, and only under overclocked synthetic AVX2 load (but high enough that other real-world loads are unaffected)? Then you could leave AVX on, maybe even with lower voltages / higher clocks. (One possible way of setting such a cap is sketched below.)
Load-line calibration on Haswell is for IVR output and not IVR input, right? So it's something clearly defined by Intel and implemented by the IVR?
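For what it's worth, a hypothetical sketch of setting such a package power cap via Intel's RAPL interface on Linux (assuming the intel_rapl powercap driver is loaded; the 140W figure is made up for illustration, and most Z87 boards expose the same limits in the BIOS as long/short duration power limits):

```python
# Hypothetical sketch: cap CPU package power through Intel RAPL on
# Linux, so only a heavy synthetic AVX load throttles while
# x264-class loads stay under the cap. Needs root; the 140W value
# is illustrative -- you'd pick it from wall-meter readings of x264
# vs IBT-with-AVX draw.

RAPL_ZONE = "/sys/class/powercap/intel-rapl:0"  # CPU package 0

def set_long_term_limit(watts):
    # constraint_0 is the long-term (PL1) package power limit
    with open(f"{RAPL_ZONE}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1_000_000)))

def get_long_term_limit():
    with open(f"{RAPL_ZONE}/constraint_0_power_limit_uw") as f:
        return int(f.read()) / 1_000_000

set_long_term_limit(140)      # above x264 draw, below IBT+AVX draw
print(get_long_term_limit())  # 140.0
```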
|
I'm successfully (so far) testing 4.6GHz at 1.23V with AIDA... I'm 50 minutes in and temps are all 88°C. It seems Core Temp has had a problem reporting the temps on my CPU; AIDA seems much more accurate. All settings are the same as in my previous posts, other than the Vcore, which is up 0.05V.
Haswell seems fantastic for overclocking if you know what to do. With AVX on I'm pretty much at the limit of what I think I can test on air with AIDA and the like (without a delid, of course), although I could always use x264 to test. It seems like I should be able to get 4.6 stable here, which I think would be a pretty good result. Cyro is testing something like 4.9GHz right now, but I'll leave it to him to give specifics, other than him being crazy ^^
|
United Kingdom20285 Posts
My chip is too voltage hungry.
MOAR POWERRRRRR
By the way, do you have a power meter? How much power is consumed during IBT with AVX enabled, and then with it disabled? How much under, say, maxed-out x264 encoding? Why not just set a power or current cap such that you throttle slightly, and only under overclocked synthetic AVX2 load (but high enough that other real-world loads are unaffected)? Then you could leave AVX on, maybe even with lower voltages / higher clocks.
Very interesting, I'll look into it.
|
United Kingdom20285 Posts
![[image loading]](http://i.imgur.com/CSgBnOz.png)
Gotta stabilize higher frequencies... Passing lots of stuff, but hitting 0x0101 bluescreens seemingly regardless of Vcore, and not sure what to adjust.
|
I'm currently at 4.3GHz, 1.15V Vcore, everything else on auto. No problems running 24/7, but it keeps bluescreening after 2-3 hours of Prime95. I usually let Prime95 run and then go do something else, so I'm not sure if it's a thermal or voltage problem yet; the CPU takes a while to get over 80°C.
|
Try making some of the changes we have... They make a huge difference. What motherboard do you have?
|
I disagree with a few points made in this thread.
1. Stress testing, as the name suggests, is supposed to push the limits of the platform. By disabling AVX/HT you just lighten the stress. What's the point?
2. There is a reason HT/AVX is implemented in this chip, and that is to improve performance in specific cases. Assuming that "disabling specific parts of the chip makes it OC better" is equal to "the chip OCs better" is simply not viable, is it?
3. AVX raising temps is not strange at all. It has been very noticeable since the first benchmarks of Sandy Bridge, where Intel implemented it for the first time.
|
United Kingdom20285 Posts
^Those are all kinda the same point.
I never planned to leave AVX disabled; my temperatures under full CPU load with AVX enabled are lower than IBT without AVX. They're barely affected at all under such real-world load, while enabling it for IBT makes temps soar 20-25°C higher, which is just unrealistic. There's no other CPU I've seen with such a massive temperature gap between the most intense real-world loads you can come up with and synthetic loads. Having spent days each overclocking three architectures (and learning their temperature and overclocking characteristics in depth), I think it's really silly to base a Haswell overclock around synthetic-load-with-AVX temperatures. It behaves differently to other architectures; the gap is much, much wider.
Oh, and HT: I wouldn't run it disabled with any actual overclock. Kinda silly to buy a 4770K and do that, no?
Is there a good argument for basing an overclock designed for stability, gaming and x264 load on the temperatures of synthetic-with-AVX loads, which are >20°C hotter than synthetic-without-AVX loads, which in turn are already hotter than the full CPU load of x264-with-AVX?
^The terms are a bit tricky to use, but that's what I'm trying to say.
I believe in being stable against any and all loads etc.; it just seems absurd, in the very specific case of Haswell with synthetic AVX, to hold yourself back so much for something that, specifically, will never happen.
If there's a good argument for not doing this, I'd like to hear it, but it seems to me that the intuition against it comes down to a misunderstanding of how Haswell behaves, or to sticking to the roots of conventional CPU overclocking even when that has disastrous results on this architecture, which is why I did what I did.
Also, I'm sure a lot of my old posts look pretty bad/uninformed; I was learning on the fly, and it seems I have a completely new understanding of how things work every single day. Stuff will probably be a lot clearer a month from now, for everyone.
|
It's been documented more than once, by several people, that it's possible for Haswell to override your voltage settings and request additional voltage on the order of 0.1V or more under unrealistically high AVX loads - some chips in "adaptive" mode if your motherboard has it, others normally. Other CPU architectures certainly don't do this. In that case it doesn't make sense to test with AVX where it limits your stress testing through temperatures. While the extra 0.1V isn't bad by itself, it is bad when it arbitrarily limits your overclock through artificial temperature increases.
I've gotten 4.6GHz stable with AVX hitting about 89°C in IBT at 1.23V, passing an hour and a half of AIDA. On the other hand, x264 crashed on me until 1.245V, but even then it was only hitting 75°C or so. So no, we aren't necessarily limiting the stress put on the platform by disabling AVX or using x264. And in this case, conventional wisdom seems to fail. No other CPU architecture AFAIK overrides your fixed voltage and takes you from 1.2 to 1.35V under abnormally high AVX loads.
|
On June 17 2013 20:42 Alryk wrote: Try making some of the changes we have... They make a huge difference. What motherboard do you have?
I'm running an MSI Z87 MPower Max. It has an EZ-mode overclock where you push a button and it does 4.2GHz automatically, setting the Vcore at 1.2V. I used it for a few days, then tried a manual overclock. I wouldn't want to have to use 1.2V+ until maybe 4.4-4.5GHz. I'll look into the other settings as well.
|
On June 17 2013 14:09 JDI1 wrote: I'm currently at 4.3GHz, 1.15V Vcore, everything else on auto. No problems running 24/7, but it keeps bluescreening after 2-3 hours of Prime95. I usually let Prime95 run and then go do something else, so I'm not sure if it's a thermal or voltage problem yet; the CPU takes a while to get over 80°C.
Does it necessarily matter that you BSoD after such a long Prime95 session? I've been using 4.4GHz @ 1.152V for quite a few days now. It's passed some pretty hard IBT runs, but only 30 minutes of Prime95. I don't know if there's anything you normally do that'd actually be equivalent to 30 minutes of torture test, let alone 2-3 hours. What is the normal benchmark for stability testing? I thought IBT was most likely enough.
|
On June 18 2013 12:27 Spec wrote: Does it necessarily matter that you BSoD after such a long Prime95 session? I've been using 4.4GHz @ 1.152V for quite a few days now. It's passed some pretty hard IBT runs, but only 30 minutes of Prime95. I don't know if there's anything you normally do that'd actually be equivalent to 30 minutes of torture test, let alone 2-3 hours. What is the normal benchmark for stability testing? I thought IBT was most likely enough.
That's what I think too, even though I thought IBT was supposed to be more strenuous than Prime95. I learned how to overclock before IBT was around, when Prime95's blend test was the stress test of choice. Not sure how the newer AIDA64 and others stack up, though.
|