Never mind, the i5 is dual-core. Stupid laptop terminology. It's 200 MHz faster with Turbo when both cores are in use, but that's negated by the i3 being Ivy Bridge. On top of that you get lower power consumption and heat from the Ivy Bridge CPU and much stronger integrated graphics, so it's an easy choice (the i3).
Yeah, about that: how do you properly benchmark SC2? At the beginning of games everything is >100 FPS, but later on nothing is consistent. Could you transfer saves? O_O
Replay files are accurate to within a fraction of a percent with the Fraps benchmark function. I'd usually jump to, e.g., 20:50 in the replay, wait until the timer hits 21:01, then hit start on a timed benchmark while following somebody's camera with a single hatchery or similar selected.
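If you'd rather not eyeball the overlay, Fraps also dumps per-second logs you can summarize afterwards. A minimal sketch in Python, assuming the one-FPS-reading-per-second fps.csv that Fraps writes to its Benchmarks folder (the filename below is made up):

# Summarize a Fraps timed-benchmark log: min / max / average FPS.
# Assumes the per-second "... fps.csv" format; skips the header row.
import csv

def summarize(path):
    with open(path, newline="") as f:
        fps = [float(r[0]) for r in csv.reader(f) if r and r[0].strip().isdigit()]
    return min(fps), max(fps), sum(fps) / len(fps)

lo, hi, avg = summarize("SC2 2013-12-21 21-01-00-12 fps.csv")  # hypothetical filename
print(f"Min {lo:.0f}, Max {hi:.0f}, Avg {avg:.3f}")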
i3: Min 14, Max 62, Avg 39.731
i5: Min 38, Max 85, Avg 56.911
Really surprising results, TBH; I guess the i3 really is a low-end part. I was completely ready to believe you guys too, thinking HD 4000 must be > HD 3000 even though it's an i3 vs. an i5.
Pretty sure that's not true (I've got food so I can't google stuff ATM), but AFAIK there aren't multiple iGPUs with the same name; HD 3000 is HD 3000 and HD 4000 is HD 4000.
Also, that's a massive performance difference in SC2. ~43% higher average? The i5 is on a worse architecture and clocked only a little higher.
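For reference, that figure is just the two averages above compared directly (a quick Python check):

# Relative gap between the two benchmark averages posted above.
i3_avg, i5_avg = 39.731, 56.911
print(f"{(i5_avg / i3_avg - 1) * 100:.1f}% higher")  # -> 43.2% higher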
i5-2410M - 2C/4T, 2.3 GHz (2.6/2.9 GHz Turbo), Sandy Bridge, standard 35 W TDP
i3-3110M - 2C/4T, 2.4 GHz, Ivy Bridge, standard 35 W TDP
HD 3000 - 12 EUs, clock speed depends on the processor (1200 MHz max on the i5-2410M)
HD 4000 - 16 EUs, clock speed depends on the processor (1000 MHz max on the i3-3110M)
Recheck clock speeds for CPU and GPU (check power profiles too) on a separate monitor while benches are running. It should be closer than that.
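Back-of-envelope, if you assume shader throughput scales with EU count times max clock (a big simplification that ignores the HD 3000 to HD 4000 architectural changes), the two iGPUs land close together:

# Rough upper-bound comparison: EUs x max dynamic clock (MHz).
# Assumes throughput scales linearly with both, which it doesn't exactly.
hd3000 = 12 * 1200   # i5-2410M: 14400 EU*MHz
hd4000 = 16 * 1000   # i3-3110M: 16000 EU*MHz
print(f"HD 4000 / HD 3000 = {hd4000 / hd3000:.2f}")  # ~1.11, i.e. only ~11% apart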
Aren't you overestimating Sandy to Ivy? IIRC the gain was 8% max in ideal conditions, which makes it closer to 5% in real usage. And that was on desktop parts. Then we have 2.9 GHz vs 2.4 GHz, which is a lot higher, especially in SC2 where higher clock = higher FPS. The results make sense IMO (the SC2 gap is still a bit large, but not "that" large given the clock speed difference).
Turbo and base graphics clock speeds do vary between different models carrying the same iGPU name. All full-voltage parts have the same number of EUs, though.
On December 21 2013 09:31 MrCon wrote: Aren't you overestimating Sandy to Ivy? IIRC the gain was 8% max in ideal conditions, which makes it closer to 5% in real usage. And that was on desktop parts. Then we have 2.9 GHz vs 2.4 GHz, which is a lot higher, especially in SC2 where higher clock = higher FPS. The results make sense IMO (the SC2 gap is still a bit large, but not "that" large given the clock speed difference).
When running games it usually shouldn't be hitting the single-core-only 2.9 GHz Turbo Boost bin, only the 2.6 GHz one, and possibly not even that 100% of the time depending on configuration, cooling, and GPU load. So 2.4 GHz Ivy vs. 2.6 GHz Sandy (sometimes 2.3 GHz? possibly sometimes 2.9 GHz?) should be close, mostly a wash.
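Putting rough numbers on it, taking MrCon's ~5% real-world IPC gain as an assumption rather than a measurement:

# Sandy-equivalent clock for the Ivy i3, assuming ~5% IPC uplift.
ivy_base = 2.4           # GHz, i3-3110M base clock
sandy_turbo_2core = 2.6  # GHz, i5-2410M two-core Turbo bin
print(f"{ivy_base * 1.05:.2f} GHz effective vs {sandy_turbo_2core} GHz")  # 2.52 vs 2.6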
On December 21 2013 04:23 Myrmidon wrote: i5-2410M - 2C/4T, 2.3 GHz (2.6/2.9 GHz Turbo), Sandy Bridge, standard 35 W TDP
i3-3110M - 2C/4T, 2.4 GHz, Ivy Bridge, standard 35 W TDP
HD 3000 - 12 EUs, clock speed depends on the processor (1200 MHz max on the i5-2410M)
HD 4000 - 16 EUs, clock speed depends on the processor (1000 MHz max on the i3-3110M)
Recheck clock speeds for CPU and GPU (check power profiles too) on a separate monitor while benches are running. It should be closer than that.
I have done some tests before: if you want to run at max GPU speed, the CPU can't be in a Turbo state at the same time.
I'm really surprised at the results too. I guess CPU speed is all that matters in SC2. Fewer EUs taking less of the shared TDP might help as well.
But the Ivy Bridge CPU is on a smaller manufacturing process, so it should use significantly less power. In the same TDP envelope it should do better, and it has at worst a small CPU performance deficit; >40% is way too big a performance difference.
I'll take Myrmidon's suggestion and bench again tomorrow with CPU-Z running on a second monitor; I didn't check the power profile on the i3 when I did the benchmarks yesterday. Also, what can I use to check GPU clocks on Intel HD Graphics? Can I use something like RivaTuner, which I've only used with dedicated GPUs before?
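In the meantime, a minimal sketch for logging the CPU clock during a run (Python, assumes the third-party psutil package; on some platforms psutil reports the nominal clock rather than the live Turbo clock, so treat it as a rough check):

# Log CPU frequency once a second for ~30 s while the benchmark runs.
import time
import psutil  # pip install psutil

for _ in range(30):
    print(f"{psutil.cpu_freq().current:.0f} MHz")
    time.sleep(1)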