|
When using this resource, please read FragKrag's opening post. The Tech Support forum regulars have helped build countless desktop systems without any compensation. The least you can do is provide all of the information they need to help you properly. |
On March 23 2013 23:58 upperbound wrote: Best deals I can see are the MSI G45 for €80 and the Gigabyte D3H for €100. The latter is a somewhat better board but the cost difference makes it a tossup. I can sell the Pro3 for 80/85 new easily as there will be no postage involved (you have to pay at least 90 to get this board new and delivered in Ireland). So I'll really only be paying 15 or 20 quid for it. Is this board better than the Z77 Pro3?
Thanks as usual.
Edit: So my last question doesn't get buried (it's on the last page), please have a look at it; thanks!
Edit 2: Can you link the Gigabyte board? If it's marked as a single piece, it means it's a returned product (I think). Not necessarily a bad thing, though.
|
|
A better board overall how? I see it has more connectors (more USB ports etc.), but that doesn't particularly woo me.
|
I used an ASRock Pro4-M and switched to Gigabyte. I have nothing to say about Asus and MSI, as I only researched Gigabyte after seeing a good price where I was buying.
From Gigabyte, the Z77-D3H should be able to do the 4.6 GHz you want and is 100 EUR. The Z77X-D3H is 120 EUR and looks suspiciously similar to the Z77X-UD3H, which can do who-knows-how-much (6 GHz?). The marketing makes much of how well the boards hold up against damage from static discharge.
Gigabyte uses VIA audio and Qualcomm networking instead of the usual Realtek. It's meant to be better, but will probably be a headache at some point. While it's working fine now, I had some problems getting sleep/resume to work right while overclocking, possibly caused by drivers. It also seems Gigabyte is not yet fully done moving its BIOS over to UEFI with respect to fast booting and secure boot: playing with those options in a beta version left the board unable to boot. Clearing CMOS did not help, and it would have been dead without the second, backup BIOS the board has. That was scary, as there's no documentation on how to force the second BIOS to take over if it doesn't do so automatically. Finally, the ASRock showed more useful temperature sensor readings in SpeedFan.
That's everything I know of that could be a problem with Gigabyte. I still like my board (a lot) more than the ASRock I tried. On the ASRock I was worried about hot summer days, as the board was starting to throttle the CPU to protect itself in my experiments at 4.5 or 4.6 GHz. Its VRM heatsink was (maybe) too hot to touch at 4.4 GHz, while that doesn't happen now at 4.7 GHz.
|
Hmmm, I'll wait for more input on that board. I don't think VRM temps were a problem on my old board. Is this something I can monitor with software?
Thanks.
|
VP550 is okay. It's actually a significant upgrade over the VP450, a different design entirely. That little extra for the Antec Neo Eco is probably worth it, though.
Offset voltage is better if you care about lower power draw and temps at idle; it might be a little less stable when transitioning from low-power states to heavy load and vice versa, though.
Seems like the ASRock voltages are about right, just not when using LLC, because of a busted implementation? They switched to what they thought was a pin-compatible chip, except it doesn't behave the same way? If you already know of the issue, I wouldn't think it's a huge loss unless you're really fine-tuning things. LLC and a greater nominal VCore both lead to higher actual VCore...
VRM temps usually come from some kind of on-board sensor, which may or may not be accurate, and show up in the usual temperature-monitoring software. You could also probe the heatsink yourself with a temperature sensor, but that would give a relative reading that may not be comparable across boards, depending on where the sensor sits. I don't really know where they put the stuff.
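If you want to poke at the raw numbers yourself on Linux, the kernel exposes whatever the board's monitoring chip reports under /sys/class/hwmon. Here's a minimal sketch that dumps those readings; which label (if any) corresponds to the VRM varies by board and driver, so treat the names as guesswork:

```python
from pathlib import Path

def hwmon_temps(root="/sys/class/hwmon"):
    """Collect temperature readings (degrees C) from Linux hwmon sysfs.

    Returns a dict of "chip/sensor" -> temperature. Which entry, if any,
    is the VRM sensor depends entirely on the board and its driver.
    """
    readings = {}
    base = Path(root)
    if not base.is_dir():
        return readings  # not Linux, or no sensors exposed
    for chip in base.iterdir():
        try:
            chip_name = (chip / "name").read_text().strip()
        except OSError:
            chip_name = chip.name
        for sensor in chip.glob("temp*_input"):
            try:
                millideg = int(sensor.read_text().strip())  # value is millidegrees C
            except (OSError, ValueError):
                continue
            readings[f"{chip_name}/{sensor.stem}"] = millideg / 1000.0
    return readings

if __name__ == "__main__":
    for label, temp in sorted(hwmon_temps().items()):
        print(f"{label}: {temp:.1f} C")
```

On Windows you'd be stuck with whatever HWMonitor or SpeedFan can decode from the Super I/O chip, which is the same data with the same caveat about unlabeled sensors.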
|
On March 24 2013 02:33 Myrmidon wrote: VP550 is okay. It's actually a significant upgrade over the VP450, a different design entirely. That little extra for the Antec Neo Eco is probably worth it, though.
Offset voltage is better if you care about lower power draw and temps at idle; it might be a little less stable when transitioning from low-power states to heavy load and vice versa, though.
Seems like the ASRock voltages are about right, just not when using LLC, because of a busted implementation? They switched to what they thought was a pin-compatible chip, except it doesn't behave the same way? If you already know of the issue, I wouldn't think it's a huge loss unless you're really fine-tuning things. LLC and a greater nominal VCore both lead to higher actual VCore...
VRM temps usually come from some kind of on-board sensor, which may or may not be accurate, and show up in the usual temperature-monitoring software. You could also probe the heatsink yourself with a temperature sensor, but that would give a relative reading that may not be comparable across boards, depending on where the sensor sits. I don't really know where they put the stuff. Would you recommend just replacing my defective board with the same model again?
|
On March 24 2013 01:22 Gumbi wrote: Hmmm, I'll wait for more input on that board. I don't think VRM temps were a problem on my old board. Is this something I can monitor with software?
Thanks. As I remember it, there were some readings in HWMonitor on my ASRock that could have been the VRM (deducing from how the numbers changed under load). Monitoring usually shouldn't be needed; the chips doing that job are fine well over 100 °C. That's why it was so strange that I had problems. People also didn't believe me when I mentioned the board was throttling the CPU. It could have been something unfortunate like hot air getting trapped over that area of the board, as my case has no ventilation openings at the top and the PSU sits at the bottom.
My point about the Gigabyte's VRM heatsink running cooler than the ASRock Pro4-M's was about the Z77X-D3H, so it says nothing about how the Z77-D3H compares.
I feel I had more problems with the Gigabyte than with the ASRock, but I could be imagining things. I immediately started overclocking pretty high when I got the Gigabyte, and never first verified that everything was stable at default settings like I did with the ASRock. The PC sometimes did not come out of sleep, for example, so I was flashing different BIOS versions and searching for newer drivers than what Windows Update automatically installs, while still experimenting with the lowest possible vcore setting for the overclock.
|
If "bottlenecking" is the wrong term for application-specific performance scenarios, is there another term used for that? "Throttling" sounds like a good candidate.
"My 300W PSU is bottlenecking my dual 680s." "My i3 is throttling performance when I run Crysis, even though I have dual 680s."
|
No, definitely not.
A bottleneck is the weakest point in your system, the one that prevents your performance from being better for a given task. Throttling is when your system deliberately scales itself back, usually to protect a component, typically from heat in the CPU's case.
|
MisterFred's copy paste spoiler explanations have it the other way.
But it's good to know both terms are used to distinguish the two cases. It's annoying when there is no word for something and a void is left, but people say not to use the B word. xp
|
If you want to avoid the word, I'd say "what is limiting performance" or "what's the limiting factor" for a task.
Throttling already means something else, as explained above.
|
On March 24 2013 04:56 waffling1 wrote: MisterFred's copy paste spoiler explanations have it the other way.
But it's good to know both terms are used to distinguish the two cases. It's annoying when there is no word for something and a void is left, but people say not to use the B word. xp
I dunno who is saying not to use the word "bottleneck", but it's pretty much the best word there is for that sort of situation. When someone comes in asking "what can I upgrade to get better performance?", people are always going to replace the slowest part, which would be the bottleneck.
|
On March 24 2013 06:21 Infernal_dream wrote: On March 24 2013 04:56 waffling1 wrote: MisterFred's copy paste spoiler explanations have it the other way.
But it's good to know both terms are used to distinguish the two cases. It's annoying when there is no word for something and a void is left, but people say not to use the B word. xp
I dunno who is saying not to use the word "bottleneck", but it's pretty much the best word there is for that sort of situation. When someone comes in asking "what can I upgrade to get better performance?", people are always going to replace the slowest part, which would be the bottleneck.
I am saying don't use the word "bottleneck", because the two different definitions confuse people. Like you, in the post you just wrote. "Bottleneck" the way you're thinking about it is APPLICATION-SPECIFIC, so without specifying the application, "slowest part" the way you're using it is meaningless.
For instance, take a computer with an i7-3970X overclocked to 4.5 GHz and a Radeon 7770 for a graphics card. Now if I asked you "what part is the bottleneck?" without specifying the application, you'd get the question wrong.
See, you'd probably answer the Radeon 7770. But you'd be wrong, because I didn't tell you the application was playing late-game SC2 4v4s all day long. With that additional information (the application the computer is being used for), the "slowest part", meaning the part that's holding the others back in real-world performance, is the i7-3970X. So the i7-3970X is the "bottleneck" in my example. If someone with infinite money wanted to upgrade, they'd have to go get an Ivy Bridge chip with a setup capable of overclocking it to 4.5 GHz or higher, not a new video card.
All of the above is what I call the incorrect definition. The correct definition of "bottleneck" is when one part of your computer prevents another part from performing optimally regardless of usage scenario. This is application NON-specific. The classic examples are bandwidth-related: plugging a SATA3 SSD into a SATA2 port, or jamming a fancy video card into a PCIe slot limited to x1 lanes.
The reason I hate the word is that people inevitably get the two definitions confused (for example, thinking there are magic parts configurations that can eliminate application-specific "bottlenecks"), or simply use the application-specific definition without defining an application. Like you just did.
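For what it's worth, the application-non-specific (bandwidth) sense reduces to a min() over a chain of components: data moves no faster than the slowest link allows. A toy sketch, with rough illustrative MB/s figures rather than measured ones:

```python
def effective_throughput(limits):
    """A serial chain of components moves data no faster than its slowest stage."""
    return min(limits.values())

def bottleneck(limits):
    """Name of the stage imposing that cap."""
    return min(limits, key=limits.get)

# A SATA3 SSD (~550 MB/s sequential) behind a SATA2 link (~300 MB/s):
chain = {"ssd": 550, "sata2_link": 300}
print(effective_throughput(chain))  # 300
print(bottleneck(chain))            # sata2_link
```

The application-specific sense is the same min(), except the per-component numbers only exist once you fix a workload, which is exactly why the question is unanswerable without one.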
|
^
I see what you're saying now. I was actually too lazy to go and find the post myself, which is what led to that post. I agree with mostly everything you stated there. Most of the time, though, there's some sort of specification such as gaming or encoding or something like that. However, to say that I used the term incorrectly would be wrong. Just because different applications have different bottlenecks depending on the system doesn't make my usage of the word wrong. The bottleneck would just change parts depending on the application, if your computer happened to fit into that set of parameters.
"A bottleneck is a phenomenon where the performance or capacity of an entire system is limited by a single or limited number of components or resources. The term bottleneck is taken from the 'assets are water' metaphor. As water is poured out of a bottle, the rate of outflow is limited by the width of the conduit of exit—that is, the bottleneck. By increasing the width of the bottleneck one can increase the rate at which the water flows out of the neck. Such limiting components of a system are sometimes referred to as bottleneck points."
As you see there, you're taking a very, very literal definition and trying to spread it across every application that can be run on a computer. Take your above post for example: sure, for that one game the CPU would be limiting, but for every other game he plays it'd be the GPU. So if he asked what he should upgrade, you would say the GPU, unless he stated that the only thing he played was SC2. Some people just have a looser interpretation of the word, and that doesn't make them wrong.
Edit: Again, for your example, let's say he also plays BF3, Crysis 3, and whatever other graphics-intensive games you'd like to add. He says "I play all these games, what is the bottleneck of my computer?" The GPU. Sure, it might not help with the one specific example, but for 90% of what he's doing it is the bottleneck.
|
As you see there, you're taking a very, very literal definition and trying to spread it across every application that can be run on a computer. Take your above post for example: sure, for that one game the CPU would be limiting, but for every other game he plays it'd be the GPU. So if he asked what he should upgrade, you would say the GPU, unless he stated that the only thing he played was SC2.
This is one of the points I'm making, and I mentioned it in the longer example. The question "what is my bottleneck?" is pretty much meaningless (technically unanswerable with any degree of accuracy) if you're using the incorrect definition without specifying some sort of application (even if that application is 'games of some sort that might come out'). You can assume an application, but...
Some people just have a looser interpretation of the word, and that doesn't make them wrong.
Precise language leads to precise results, and good advice. Sloppy language leaves assumptions and intuition to fill the gap, which occasionally leads to bad advice.
Yes, I'm nit-picking. This is one of the nits I allow myself the time to pick. Good lord, that analogy is gross.
|
This is a rather dumb argument, honestly.
If you ask for a bottleneck with no specific task in mind, then you choose the weakest overall part. If you ask for a specific task, you give the one for that task.
There's no ambiguity here.
|
Different parts are responsible for different tasks.
There's no good way to decide which part is weakest overall outside the context of accomplishing some task. I mean, you could compare by price, or better yet, price relative to the alternatives (a 60th-percentile-priced CPU vs. 60th-percentile-priced RAM), but that's not really going anywhere. If you want to go by performance, then you need to start averaging over different workloads in some arbitrary way, if the workload is unspecified.
Anyway, whatever gets people to think in terms of their own needs is best. It is true that some people get confused into thinking that certain parts combinations are inherently imbalanced or bottlenecked.
|
Does a higher offset voltage mean the overclock can be more flexible across heavy and light use? How much offset voltage is good/reasonable/needed?
Belial, I signed up for the Microcenter email list, but they didn't send me anything when I signed up. Do you have a link or image I can screenshot, print, and show them in person for the motherboard (Z77X-UD3H)? I would like to wait and combine the special email discount with the $40 CPU+mobo discount.
I'm leaning towards the Gigabyte Z77X-UD3H right now, for my i5-3570K, Hyper 212+, and Capstone 450W. I don't feel comfortable OCing the MSI Z77X-G41 to its limit at 4.5 GHz, although the huge $$ savings would be nice.
What are the key differences between the three boards, beyond what you can read off the spec sheets? Also, the differences in terms of overclocking? The boards: Gigabyte Z77-D3H, Gigabyte Z77X-D3H, Gigabyte Z77X-UD3H.
Apparently the UD3H has more useful features for a small price difference. From http://www.tomshardware.com/forum/322574-30-z77x-ud3h: "As I look, the lowest price you can get the UD3H is $139, and the D3H is $129. For $10 the UD3H offers better connectivity (more SATA 3 ports, more flexible RAID options, and an extra PCIe 3.0 slot*), support for faster memory - up to 2666 MHz - and a couple more USB ports. That plus the overclocker's/enthusiast features like a dual BIOS switch and onboard power and reset buttons would be worth it TO ME. YMMV. * - It's not really an 'extra' slot. Both boards have 16x, 8x, and 4x PCIe slots. The difference is that they are 3.0, 3.0, 2.0 on the UD3H and 3.0, 2.0, 2.0 on the D3H."
Before delidding the i5-3570K, I think I should test whether it's functional without delidding. To test this, I'd have to apply a cooler with thermal paste and everything, right? Then once I know it's good: delid, remove the thermal paste, and attach the Hyper 212+ cooler.
|
Honestly, if you have to ask these questions, you really shouldn't risk trying to delid it.
|