|
I just wrote this in the PC build thread but figured it's probably better in a blog - lots of stuff about my upgrade to an 8700k system and 4000mhz RAM. Warning for some techy language ;D
---
It's delidded; this CPU isn't a great overclocker.
That's 400mhz over my 6700k at the same voltage and about 66.5% faster (+11% per-core, ×1.5 for core count). It's on a newer encoder version, so that may have changed the performance slightly. Temps are ~10c higher even though it's delidded - they were ~25c higher at this voltage without the delid, on either CPU!
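To make that arithmetic concrete, here's a quick sanity check in Python (the ~11% per-core figure and the 4→6 core jump are from above; treating the two gains as simply multiplicative is my assumption):

```python
# Rough speedup estimate: per-core gain times core-count ratio.
per_core_gain = 1.11   # ~11% more per-core throughput at the same voltage
core_ratio = 6 / 4     # 8700k (6 cores) vs 6700k (4 cores)

total = per_core_gain * core_ratio
print(f"expected speedup: {total:.3f}x (~{(total - 1) * 100:.1f}% faster)")
# expected speedup: 1.665x (~66.5% faster)
```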
It's hard to say exactly how lucky it was in the silicon lottery compared to other CPUs. I'm not running an AVX offset, while lots of sites and overclockers drop their core frequency by 200mhz under AVX loads - so they're not getting their advertised clocks while encoding, or even while doing some basic stuff like running a web browser or certain games. I ran clock-monitoring software and overlays while experimenting with that setting, and the main thing I found was that people were unaware of (or misrepresenting) what it actually does - the clock drops were very common. I also found that, at least in some circumstances, the non-AVX loads that I use would crash the CPU at say 4.8ghz well before the AVX programs would crash at 4.7 - so much earlier that even a 100mhz AVX offset wasn't really worth using. A 100mhz offset may be worth it at the very limits of CPU stability; I haven't tested much up there yet.
It really doesn't want to go past 4.9 with HT on (5.0 HT off) without absurd voltage scaling (like +0.1v for 100mhz), so it's not worth fighting for a 24/7 overclock. It's at its limit, and it's better to respect that and be super comfortable at 98% of max performance instead of pushing all the way to 100% - unless you have a large sack of money to throw away.
The other CPU sample that I tried would do 100mhz more (5.0 HT on, 5.1 HT off) with 24/7-level voltages, but I kept it at 4.8ghz with HT on because it wasn't delidded and the temps would have been impractical at 4.9 and out of range at 5.0 with HT. The 5.1 HT-off setup was passing its tests at 1.35v nice and cool - could possibly delid that one and bring it back in the future.
8600k's in general will OC very well - turning HT off helps temperatures more than a delid does, and both of my CPUs seem to clock 100mhz higher on the same voltage with HT off. siliconlottery.com reports 8600k's achieving higher clocks through lower temperatures and better voltage scaling even when their 8700k's have delids to control temperatures at max OC; without a delid the 8700k's will overheat but the 8600k's won't, so the gap gets larger. It was trivial to run at 5.0 - 5.1ghz HT-off solid stable (~1.31 - 1.37v with low temps) without a delid.
The RAM is a decent bin of Samsung B-die and works fine at 4000 17-17-17-38-2T with all of the secondaries set. I have vccio at 1.2v and SA at 1.225v; they can likely be lowered a bit, but the previous 8700k needed at least one of them that high. XMP wanted to put some of them over 1.3v - no good!
There would seem to be a motherboard limit here, since mine has a 4-DIMM layout. The higher-end boards that can routinely clock past 4000 (like 4400), such as the Maximus X Apex, are more expensive (+£70 here over an already costly board) and have only two DIMM slots; taking advantage of clocks beyond 4000 would also likely involve pushing vccio/SA volts up a bit high, maybe uncomfortably high.
That 4000mhz is very strong and nice for 24/7 use! The RAM itself is very comfortable, running at 17-17-17 on 1.39v, down from the rated/common 19-23-23; the mobo just can't seem to POST with anything over 4000mhz no matter what. This is already technically a 50% overclock on the IMC and board, so that would seem to be where the limit is for now!
I have two profiles for the RAM: one bandwidth-optimized (4000 17-17-17-38-2T, achieving 57,009 MB/s read at 41.2ns) and one latency-optimized that's slightly unstable at the moment (3733 16-16-16-35-1T, achieving 55,700 MB/s read at 39.7ns). The games that I play usually scale with bandwidth rather than latency (SC2, WoW), but some games and programs scale primarily with latency (Source engine, Cinema 4D). That being said, the bandwidth and latency are simultaneously excellent on both setups ;D
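For anyone wondering how those two profiles compare on paper: absolute CAS latency in nanoseconds is CL × 2000 / data rate in MT/s (a standard formula; the framing and helper name here are mine):

```python
# Absolute CAS latency in nanoseconds from CAS value and data rate (MT/s).
def cas_ns(cl: int, mt_s: int) -> float:
    return cl * 2000 / mt_s

print(f"4000 CL17: {cas_ns(17, 4000):.2f} ns")  # 8.50 ns
print(f"3733 CL16: {cas_ns(16, 3733):.2f} ns")  # 8.57 ns
# Nearly identical CAS in absolute time; the measured 41.2ns vs 39.7ns gap
# comes from the rest of the config (1T vs 2T, sub-timings, etc).
```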
I actually haven't tried command rate 3. It wouldn't POST past 3800 with CR1, and if 4266 at something like 18-21-21-45-3T worked then it'd probably perform great in bandwidth benchmarks, but I don't think I could get a better 24/7 setup out of it. (Edit: Tried that with RAM voltage, vccio and SA bumped up - got my first POST at 4133 but with terrible performance, and no POST at 4266.)
The uncore can quite possibly clock a bit higher - I just based it on what the other CPU sample could do and then knocked a bit off for safety, so I'll have to check it closely in the next few days. Uncore/L3 performance rarely makes a significant impact on benchmarks; it's nowhere near as important as CPU core or RAM. It's also a bit annoying to test - it could run a lot of stuff okay but fail just one test unless I reduced it by 200mhz, so it initially gave me some trouble that wasn't easy to diagnose.
I have done some RAM benchmarks and I'll do some more - post about that incoming soonish. Here's a quick one for SC2 to get people interested, I guess ;D
|
I upgraded a couple of months ago from a 4770 with 1600mhz DDR3 to an 8700 with 3000mhz DDR4 and wow... the difference was amazing. My FPS skyrocketed in every game I play, and having a second tab with Twitch up on my second monitor while playing games didn't lag the game anymore.
Not much of an overclocker but you seem to know what you're doing.
|
Hi blade o/
Yeah, it's been a lot of fun!
What motherboard do you have? It's likely that your DDR4 is downclocked to 2133 by default. The memory controller's max stock speed is 2666 (so many boards can't set higher than that AFAIK), but mine still seems to drop down to 2133c15@1.2v on a clean bios profile or when there's a problem.
|
I use the GIGABYTE Z370 AORUS Gaming 5 (https://www.newegg.com/Product/Product.aspx?Item=N82E16813145035)
Thought I had a different one but guess not! I'll have to go into the bios and check sometime ;D
|
Yay for fellow PC enthusiasts to chat with.
Great blog, very in depth. Thank you for sharing the information here - I'm sure I won't be the only one to learn from it. Your machine is a beast.
I recently upgraded to a new performance machine as well. I'll share my experience with you guys.
For the last 7 years I used a store-bought HP Pavilion with an AMD Phenom II 960T quad core processor, crappy integrated graphics and 8gb of ram. I noticed it was becoming harder to run SC2 on lowest settings, so I figured it was time to get a new computer. This thing was a huge piece of crap that could only play SC:R, SC2 and LoL on lowest settings with a decent frame rate. I'm surprised it is still running. I think I'm going to throw it on craigslist or something to recoup some of my laptop costs. I'll be happy to get 100 usd for it lol.
I ended up going with an MSI GT83VR-6RE Laptop. I fell in love with the GT80 titan when it first came out but I didn't want to spend the money on it when it came out. This time I was able to afford one of their flagship configurations.
The good: I was lucky enough to find a seller on ebay who was willing to let it go for 1,800 USD, used and in fantastic cosmetic condition. Brand new these run for 3k-3.5k so I was pretty happy. This laptop (I'm using it as a desktop replacement) came fitted with 16gb of 2133 cl13 ram, TWO gtx 1070s in SLI, an i7-6820hk (unlocked), a 256gb NVMe ssd and a 1tb hdd.
The bad: It has a single "stuck" pixel that I can't seem to fix. It arrived in the mail this way; the seller claimed it was fine pre-shipping. Who knows though - I figured that even if I end up replacing the monitor for 100 dollars I would still be well under retail cost for this monster. It also couldn't pass a stress test or benchmark on sport mode without thermal throttling or restarting on its own, and that wasn't even when it was OCed lol. Temperatures while playing SC2 and streaming, or while playing Crysis 3, averaged 90c-95c. That's too close for comfort and too high for my liking. I was also unable to register the warranty under my name when I contacted MSI. They stated it's still under someone else's name and that I'd have to provide the ORIGINAL receipts from an authorized dealer. Apparently the seller I bought this laptop from wasn't the original owner either, and the warranty wasn't under his name. Was this laptop stolen? Who knows. To be safe, I turned off the tracking device to avoid any potential unwanted legal troubles if they were to arise. Unfortunately, there is nothing I can do to get the last 4 months of the warranty in my name. I would have loved to send it in to get the monitor swapped out for free. Oh well, I'll have to try more fixes for the lit-up white "stuck" pixel.
Upgrades, tuning and fixing: This is where I finally jumped into the world of building/tuning etc. I have to say it's really fun to do. I can understand why motorheads are the way they are now.
I decided to upgrade the ram from two 8gb 2133mhz sticks to four 8gb HyperX Impact 2400mhz sticks for a little boost - unfortunately the MS-1815 motherboard only supports 2400mhz tops. I also added a second NVMe m.2 ssd and an m.2 sata 3 ssd. All available slots are now full, yay.
Temperatures: After trying to overclock this from 2.7ghz to 4ghz for a couple of days, I began to take a hard look at the individual temperatures in this laptop with HWiNFO64. Lo and behold, the first and last cores were running 8-10c hotter than the other 2 cores. I began to think about how old this laptop is, how many owners it could have had and what it's generally been used for. Based on all that, I came to the conclusion that the CPU needed to be repasted, since one side of it was running hot. I guess after about 18 months of being beaten on and overclocked, crappy stock toothpaste will degrade a bit faster than performance paste.
The repaste: I did a lot of research on what to use. I didn't just want something that would squeak me past a stress test or benchmark - I wanted something that would give me full peace of mind under full load while overclocked to 4.0 on all 4 cores. I decided to go with Thermal Grizzly Conductonaut liquid metal. This stuff is the real deal. Not only can I pass stress tests and benchmarks, my max temp during these tests under full load with full fan rpm was a full 20c cooler - 75c, down from 95c, woohooo. I was no longer thermal throttling or losing performance due to heat. She runs steady overclocked from 2.7ghz to 4.0ghz.
Monitor overclock: Found out about this the other day. Provided you have an Nvidia GPU, you can overclock your monitor. Unfortunately all gt83vr models come with a 60hz FHD IPS LCD 18.4" display, and there isn't an option to upgrade to anything better without using an external monitor - which I didn't want unless it was extremely portable. OCing was simple: all I had to do was go into the Nvidia control panel, create a custom resolution and keep increasing the hz until it was unstable. Once I found the point where it was unstable, I backed it off a bit so that I could have a stable overclock. My monitor managed to go from 60hz to 90hz OCed (stable, no artifacts or malfunctions). That's a cool 50% gain in displayed fps for FREE. Neat.
Ram installation: I originally had two 8gb 2133mhz cl13 Kingston value ram sticks, which left me with 2 empty slots. I don't like empty slots. I wanted to increase my ram in case I ever decide to go for a 4k triple monitor setup, as I'd be running many many programs at once using more memory. I also wanted a higher frequency, because bigger numbers are better, right? I bought four 8gb Kingston HyperX Impact 2400mhz cl14 sticks so I could fill every slot. The ram said plug and play with XMP profiles, no BIOS tweaking needed. Sounds simple, right? Somehow I screwed it up LOL. I installed it and my laptop wouldn't go past a blank monitor. I was scared I'd done something wrong. After about 36 hours of reading/youtubing and constant testing with the hardware (trying different sticks in different slots etc etc), I was finally able to get the computer to boot into windows (at one point windows wouldn't even boot lol, just straight to windows repair mode) with the ram at the max clock for my motherboard, yay. This really tested my nerves - I thought I'd thrown away 1800 LOL. At this point I knew I needed to dive into learning more about installations and hardware tuning, as well as how and why a computer reacts the way it does in certain situations. I learned a lot during this period, such as how to use a BIOS, how to properly test ram, how to properly install ram, how to configure select settings etc. It was fun, and I'm glad I had the problems I did, because otherwise I probably wouldn't have taken the initiative to learn all that stuff.
Processor OC and undervolt: I used the same process that I used for the monitor - increased the speed till I would crash/hang, then backed it off (before the repaste it would shut off due to overheating). The stock speed of my processor is 2.7ghz and I'm able to push it to 4.0ghz on all 4 cores even while undervolted. Success. I think I may try going to 4.1 or up to 4.3ghz, but I'll probably have to increase the voltage for that (I'm still on the fence about doing this).
The undervolt was a little trickier. Undervolting the CPU core voltage is a trick to reduce heat and power consumption - it generally lowers your CPU temps by 5c-10c and it's FREE. I could crush benchmarks and stress tests undervolted as far as -0.100v. I was ecstatic. What I didn't know is that stress tests and benchmarks aren't the same as, say, playing SC2 or Crysis 3 (probably the most demanding pc game). Every time I tried to load SC2 or Crysis 3 the program would crash or the laptop would BSOD. I had to cut the undervolt down to -0.055v to keep it completely stable. I also learned that undervolting headroom is different at each speed: at stock speeds I could undervolt further than when OCed, due to the additional power draw. It was an interesting and fun learning experience.
OCing the dual gtx 1070s: I originally started to configure a stable OC for these. They each have a base core clock of 1443mhz, or 1645mhz with boost. When I started monitoring my gpu clocks and temps at stock, I noticed the core clock on each gpu would go up on its own to about 2ghz under load - substantially higher than a desktop gtx 1080. I was nervous about it going so high because I didn't know anything about gpus. Apparently this is a great thing, and the big contributor was my internal cooling system of 15 pipes and 3 fans/heatsinks - I'm getting well above the advertised rate without any overclocking. For the first time in my life I've hit the silicon lottery, yay. After finding this out, I had to OC them just to see how high a stable OC I could get, ya know, because why the hell not. The best I could do while keeping max temps at or below 75c on the gpus was a 2.2ghz core clock with a 9ghz (effective) memory clock on each. Great, just great. These things are on par with a pair of overclocked desktop gtx 1080s, yay.
NVMe ssd & sata 3 installations: These were the easiest things to do. Plug and play on one; on the other I had to create a volume. EZPZ. Decided not to put any of my storage devices in raid for the time being. My setup is as follows:
Utility drive (booting, software programs) - 256gb Toshiba m.2 PCIe NVMe ssd
Gaming drive (games and game-dependent software) - 256gb Samsung m.2 PCIe NVMe ssd
Media (movies/music) - 500gb Crucial m.2 sata 3 ssd
Storage & backup (general downloads/backup/DR) - 1tb crappy HDD
I'm considering putting all my ssds into raid 1, mirroring everything to my HDD, so I can set the HDD as a secondary boot drive for automatic failover if my utility ssd ever failed on boot. It's either that or I create recovery images of all my drives on a regular basis, which is annoying. I'm also going to load windows onto a USB stick in case of an emergency.
I opted against putting my pair of NVMe drives into raid 0 because if one ever fails, all the data is permanently lost - plus it shortens the lifecycle of the drives. If you have money to light on fire every 3 years, go for raid 0; it's super fast.
Pre-upgrades/tuning/fixing, my laptop benchmarked in the 87th percentile at best compared to all other models with identical hardware. Post-upgrades/tuning/fixing, it now benchmarks in the 99th percentile. Yay. It was a fun learning journey and a good success.
Couple questions for you: How did you figure out the best timings for your ram? I'm currently relying on XMP profiles for everything. I'm wondering if it's possible for me to improve my timings or even increase the frequency past 2400mhz.
What other programs do you use for tuning/OCing/testing aside from CPUID/HWiNFO64? I currently use Intel XTU, HWiNFO64 and MSI Afterburner for overclocking and monitoring. I use Aida64, 3DMark Fire Strike and XTU for stress tests and benchmarking. Sometimes I'll use userbenchmark.com for benchmarking, but I've found that website to be inaccurate, as it claims my computer is thermal throttling at 99% CPU usage even though temps during benchmarks/stress testing haven't gone over 75c.
Have you tried overclocking your monitor (assuming you don't already have a sick gaming monitor with a high refresh rate)? From the research I've done, some monitors overclock better than others. From what I've read, Dell monitors are apparently really bad for OCing - they either give you 5 extra hz or none. Some monitors will give you a 25% increase in refresh rate while others will give you 50%. One gt83vr user reported a stable OC of 120hz. A 100% increase, just nuts.
|
Hiya! That's a lot to reply to so I'll get started on the easy stuff :D
How did you figure out the best timings for your ram? I'm currently relying on XMP profiles for everything. I'm wondering if it's possible for me to improve my timings or even increase the frequency past 2400mhz.
Firstly, by figuring out the type of RAM I was going to use - Samsung B-die. There are programs that can identify the type you have, and some people maintain lists online of which RAM kit uses which type etc. After that you can look at other kits of the same type across a range of speeds and voltages. For this type, 14-14-14-34 and 16-18-18-38 are common at the lower end of speeds, and then higher up you get to 19-23-23-43, 19-26-26-45 etc. There are also a lot of forum threads out there for overclocking specific types of memory, so you can work from the kinds of timings that other kits ship with and that other people use - even going into the secondary and tertiary timings if you're hardcore about it. A lot of the defaults will be way too loose, so setting those to good values can give more performance than tiny optimizations to the primary timings.
Command rate 1 wouldn't POST for me at decent frequencies, and a lot of this memory uses CR2 at higher clocks, so I started with that - although I later got CR1 working at 3600 and (sort of) 3733mhz for a latency-optimized profile.
I started at like 19-21-21 and worked down to 18-19-19. A voltage bump (1.35 to 1.39) made 17-19-19 work, and after further testing 17-18-18 and 17-17-17 also worked fine, but cas 16 with anything else would need a further voltage bump AFAIK. I've used some memory testing tools in windows; you can get somewhat better testing with tools outside the OS, or one linux program, but I haven't had any need for them yet - no signs of instability, no WHEA errors etc when the windows tools (prime95 and one just called memtest) test okay.
Memory timings are actually really complicated and I don't understand them fully, so I can't write much of a guide on what is what or how to adjust them :D
It's more likely that you'd be able to tighten timings than increase frequency, but how much you could tighten them would depend a lot on the RAM you're using (type and quality) and the voltage you could give it. If it's stuck at 2400mhz and 1.2v then you probably can't do much, and I wouldn't expect a lot of configurability on a laptop.
---
What other programs do you use for tuning/OCing/testing aside from CPUID/HWiNFO64?
I mainly use programs made for stability/benchmark testing with the x264 and x265 encoders, plus aida64 for memory performance testing and some games for further stability and performance testing. Simulationcraft (a WoW DPS simulator) was a go-to non-AVX test as well, since it's pretty crash-heavy and would crash before anything else on my system back when I had a slightly unstable CPU setup. I don't usually run graphics benchmarks because everything that I do is CPU limited.
---
Have you tried overclocking your monitor (assuming you don't already have a sick gaming monitor with a high refresh rate)? From the research I've done, some monitors overclock better than others. From what I've read, Dell monitors are apparently really bad for OCing - they either give you 5 extra hz or none. Some monitors will give you a 25% increase in refresh rate while others will give you 50%. One gt83vr user reported a stable OC of 120hz. A 100% increase, just nuts.
I have - I've never had a monitor that could overclock by any meaningful amount though.
Got this one after using a vg248qe for 4 years (144hz)
240hz + gsync, major upgrade and one of the most important pieces of hardware that i have
---
I think I may try going to 4.1 or up to 4.3ghz, but I'll probably have to increase the voltage for that (I'm still on the fence about doing this).
What kind of volts are you using?
|
Hmmmm, I think I'm going to stay away from the ram timings haha. It may not be worth investing a whole lot of extra time into it. I'm currently using Kingston HyperX Impact DDR4 cas 14 2400mhz ram (4 x 8gb sticks). I'm not entirely sure how comfortable I am turning up the voltage on them, since my bios says it supports 2133mhz or 2400mhz ram. They run at 1.2v. Is it possible for a bios to support higher settings than what it says?
That is one hell of a monitor. 240hz refresh rate AND gsync, ballin. One of these days I may end up going with a gsync monitor. Is gsync really that much better than enabling vsync in the control panel? Is gsync better for SLI? Right now I'm gaming at 1080p, sometimes at 4k when DSR is enabled. Part of me wants to go 4k (60hz standard right now), the other part wants to go 1080p at 120hz-144hz with gsync. Not sure what would be optimal for my current setup. I'm thinking double the frames might be best since I'm running SLI, but I'm not sure how well gsync gels with SLI. For reference I'm running two Nvidia gtx 1070 Pascal cards in SLI and an Intel Core i7-6820HK processor. The laptop monitor is an 18.4" 60hz (OCed to 90) samsung IPS 1080p display.
My voltage was originally set at stock, and then I undervolted by 0.055v to drop CPU temperatures an additional 5c-7c (the goal was to keep all temperatures below 80c under load since it's a laptop). If I want to go to 4.1-4.3ghz, am I correct to assume I'd have to increase the voltage past the stock values, aka overvolting? I'm not sure what you mean by what kind of volts, but when I look in XTU it's in mV - is that what you mean?
|
I'm not entirely sure how comfortable I am turning up the voltage on them, since my bios says it supports 2133mhz or 2400mhz ram. They run at 1.2v. Is it possible for a bios to support higher settings than what it says?
Not sure if you'd be able to set over 2400 or over 1.2v in there or if it'd be wise to do so even if you could. Most ddr4 OCing is done around 1.35 - 1.4v with some of the wilder 24/7 overclocks using 1.45 - 1.5v, that jump to 1.35v is relatively safe and improves the RAM performance by a lot, like it'd let a 2400 stick clock to 3000 or 3200 with all the same timings in many cases. It's probably only easy and safe on a proper overclocking mobo. Manually setting timings is generally quite easy and widely supported but without voltage headroom there's not much to gain.
My voltage was originally set at stock, and then I undervolted by 0.055v to drop CPU temperatures an additional 5c-7c (the goal was to keep all temperatures below 80c under load since it's a laptop). If I want to go to 4.1-4.3ghz, am I correct to assume I'd have to increase the voltage past the stock values, aka overvolting? I'm not sure what you mean by what kind of volts, but when I look in XTU it's in mV - is that what you mean?
Maybe - around 1.0v is possible at lower clocks (wild guess, somewhere between 3ghz and 4ghz) with OCers pushing ~1.4v. My 6700k (same CPU gen) needed >1.35v for 4.5ghz. The power draw per 100mhz increases quite dramatically when you're near the limits of the CPU and 4ghz is probably a great spot to be.
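The rough intuition behind that: dynamic power scales roughly with frequency times voltage squared (a standard CMOS approximation - the specific voltage/clock pairs below are made-up illustrations, not measured values):

```python
# Relative dynamic power, P ~ f * V^2, normalized to 4.0ghz @ 1.00v.
def rel_power(f_ghz: float, v: float, f0: float = 4.0, v0: float = 1.0) -> float:
    return (f_ghz * v ** 2) / (f0 * v0 ** 2)

print(f"4.0ghz @ 1.00v: {rel_power(4.0, 1.00):.2f}x")  # 1.00x
print(f"4.3ghz @ 1.20v: {rel_power(4.3, 1.20):.2f}x")  # ~1.55x the power for +7.5% clock
```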
-----
Is gsync really that much better than enabling vsync in the control panel?
Yes, they both prevent tearing but gsync does way more.
If you want a smooth output from regular vsync you'd be stuck with the lag from the top settings shown here, while Gsync can limit the game to run slightly slower than the screen is capable of, have the screen wait for each frame (instead of holding a buffer) and achieve that 39ms lag instead of 90ms.
Since the screen is capable of waiting for the game (instead of the game waiting for the screen with Vsync On or the two being mismatched with Vsync Off) it can naturally represent variances in frametime and any changes in framerate.
Vsync Off on a 60hz monitor can display an even 60fps perfectly (change frame once per refresh) and it can display an even 30fps perfectly (change frame every second refresh) but if you try to display 40fps on 60hz you'll end up having a new frame every 1.5 refreshes. Half of the screen will get updated in one refresh, the other half will have to wait for the next refresh which creates a tear and an inconsistent jerk in motion + input lag.
Vsync On with a 60hz monitor can also display 60fps perfectly and 30fps perfectly but if you try to represent 45fps you'll just see an unplayable stuttery mess as a quarter of the refreshes don't have a new frame on them because the buffer is empty. You'd get a frame on refresh 1, 2, 3, 5, 6, 7, 9, 10, 11, 13 and so on - and the frames that you do get will have been buffered for a little while (input lag) to make even that possible.
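You can reproduce that stutter pattern in a few lines of Python (the model is my own simplification - perfectly even 45fps frametimes and a one-frame buffer):

```python
# Which refreshes of a 60hz display get a NEW frame at an even 45fps with vsync on?
REFRESH_HZ = 60
FPS = 45

new_frame_refreshes = []
for j in range(1, 17):
    # a refresh shows a new frame only if another frame finished since the last refresh
    if int(j * FPS / REFRESH_HZ) > int((j - 1) * FPS / REFRESH_HZ):
        new_frame_refreshes.append(j)

print(new_frame_refreshes)
# [2, 3, 4, 6, 7, 8, 10, 11, 12, 14, 15, 16]
# -> three refreshes with a new frame, then one repeated frame, over and over.
#    (The exact starting index depends on phase; it's the same cadence as
#    the 1, 2, 3, 5, 6, 7... pattern above.)
```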
Gsync in normal operation doesn't start a refresh without a new frame ready so there is 1 frame per 1 refresh, always, none of those problems happen. It can't display perfect 60fps@60hz any better than a regular monitor but Vsync on/off lose a lot of smoothness and consistency in motion/lag in the range between perfect 60fps and perfect 30fps which Gsync does not, it can display anything perfectly.
This is also why a 144hz monitor can display content around 33-54fps a lot better than a 60hz monitor even without Gsync, less aliasing between the update rate of the frames and the update rate of the monitor. 40fps on 60hz is one frame every 1.5 refreshes which isn't very smooth but 40fps on 144hz is one frame every 3.6 refreshes which is pretty good. Whole numbers are perfect (1.0, 2.0, 3.0, 4.0 etc) and higher is better; 1.5 is one of the worst ratios.
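The ratio rule in code form - trivial, but it makes the comparison easy to play with (the helper name is mine):

```python
# Refreshes per frame: whole numbers are perfect, 1.5 is among the worst.
def refreshes_per_frame(hz: int, fps: int) -> float:
    return hz / fps

for hz in (60, 144):
    print(f"40fps @ {hz}hz -> one frame every {refreshes_per_frame(hz, 40):.1f} refreshes")
# 40fps @ 60hz  -> one frame every 1.5 refreshes (bad aliasing)
# 40fps @ 144hz -> one frame every 3.6 refreshes (much smoother)
```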
The whole "60hz means you can display up to 60fps perfectly" is a huge misconception - they can't at all, even if the framerate were perfectly even. Games having significant frametime variance messes things up further; "50fps" doesn't mean one frame every 20ms, it probably means that you have a frame here that took 23ms, another one over here that took 17ms etc and it just averages out to getting 50 frames within a 1000ms span of time. The screen refresh doesn't care about the average of the last 50 or 100 frames, only the timing of one frame up to maybe a small handful of frames depending on the output method you're using.
^Perfect latency and smoothness simultaneously w/ gsync.
If you can run a game 1.5x++ faster than the refresh rate of your screen (like 200fps on 90hz) then you can look into Fast Sync. It's relatively new, basically proper triple buffered vsync on the driver level so it can maintain a Vsync without most (or even almost all) of the added lag if you have consistently very high performance. It's a good compromise between vsync off and vsync on for a regular monitor but it can't handle dips in performance or lower performance games like Gsync can.
There's a lot more on the subject and similar stuff relating to input lag, blur, high refresh rates etc on blurbusters.com
-------
SLI is not so great in general - even when it's well supported, almost everything that uses it relies on a rendering method called alternate frame rendering (AFR). A more powerful single-GPU system pushing 100fps would have 10ms GPU frametimes, but a 2-way SLI system pushing 100fps with AFR has two GPUs outputting interleaved 20ms frametimes - each GPU still takes twice as long to make a frame, you just have two of them. That longer render time per frame shows up as a significant increase in input lag, and to make it worse there's usually extra buffering so that frames can be delivered smoothly (yet delayed) despite minor differences in rendering speed between the GPUs, which are always working on different images in real time. Two 1070s are often a lot more useful than one 1070, but for realtime gaming it's better to have a single 1080ti instead.
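The frametime arithmetic in numbers (a back-of-envelope sketch; the ideal interleaving and the absence of extra driver buffering are my simplifications):

```python
# Why 100fps from 2-way AFR feels laggier than 100fps from one faster GPU.
TARGET_FPS = 100
display_interval_ms = 1000 / TARGET_FPS   # a new frame hits the screen every 10ms either way

single_gpu_render_ms = display_interval_ms   # one GPU finishes a frame every 10ms
afr_render_ms = 2 * display_interval_ms      # each of two GPUs spends 20ms per frame,
                                             # interleaved to deliver one every 10ms

print(f"single GPU: each displayed frame took {single_gpu_render_ms:.0f}ms to render")
print(f"2-way AFR:  each displayed frame took {afr_render_ms:.0f}ms to render")
# Same framerate on screen, but every AFR frame is ~10ms staler -
# before any extra smoothing buffers the driver adds on top.
```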
|
Thank you for the explanation. I never took into consideration the frametime differences between graphics card setups - I only considered raw power (websites stated that two 1070s are on par with a 1080ti, sometimes better depending on the game). I suppose I'll pick up a gsync monitor after reading all this. Vsync really affects CS:GO compared to the other games I play. I typically use it to avoid tearing and to keep my graphics cards from working harder than they need to, but it sounds like gsync is a better option. Any idea what would happen if I use gsync + an RTSS limit on how many frames I put out, in an effort to control how hard my gpus work? When I used RTSS in MSI Afterburner it gave me tearing while playing CS:GO (I was hoping it would operate similar to vsync - I get around 200fps or more in CS:GO and 100-110 of those frames aren't shown when I'm at 90hz, so I didn't want my gpus working harder than they needed to).
I'll be sure to check out the website, thank you for that.
|
Any idea what would happen if I use gsync + an RTSS limit on how many frames I put out, in an effort to control how hard my gpus work?
That's the standard operation for Gsync unless a game has an FPS limiter built in: running RTSS on everything with a limit of e.g. 140fps if you have 144hz. Without the FPS limit, gsync can't work above the update rate of the screen - it just reverts to behaving like vsync on or vsync off on a regular screen. That's one of the things that isn't particularly user friendly about it.
CSGO does have one of those limits built in - the fps_max console command, I think. Double the refresh rate (a 180fps limit for 90hz) is a good place to start, but it sounds like you're already around there; 90fps or maybe 100fps (try a range of values from around 85-100 for minimal tearing annoyance) on 90hz would be better for power saving.
---
I get around 200fps or more in CS:GO and 100-110 of those frames aren't shown when I'm at 90hz, so I didn't want my gpus working harder than they needed to
90hz will display 200fps fine - you'll just see less than half of each frame on average with vsync off, so pushing FPS further into the triple digits isn't as impactful as going from 0 to 100. Input lag improves notably at framerates above the refresh rate of the screen, especially on low refresh rates: Blurbusters have another test getting 31ms input lag in CSGO at 60fps@60hz, cut down to 22ms at 180fps@60hz.
That's part of the reason that 60hz gsync isn't very good, 120-144hz is a lot better and 240+ is great - because limiting yourself to 58fps@60hz leaves a lot of input lag for lack of both framerate and refresh speed.
Ideally you'd have high refresh rate AND high framerate (232fps@240hz), having one of them be high is pretty good (e.g. 60fps@240hz or 300fps@60hz) and having them both low (58fps@60hz) is bad for input lag.
RTSS's 1 frame of added lag also hurts far more at 58fps than it does at 140 or 232 when a game doesn't have an internal FPS limit.
|
Gsync it is . Thank you for helping me understand how all this technology works, I feel like I just became a little less of a newb
|