|
|
|
ATI is trying to cheat and still putting out less performance.
|
What's sad is that it's clear Nvidia has been working on SC2 drivers; in fact, a while back they had one clearly meant just for beta phase 1.
So now there's the small terrain-detail advantage and the 4xAA that Nvidia cards get, along with the stronger overall performance.
I thought ATI might be working with Blizzard on drivers, because with the release of Catalyst 10.6 the problem of missing graphics on ATI cards went away, according to Blizzard tech support.
The 460 is currently the clear winner to recommend for ultra settings in SC2.
|
It's why Nvidia products can generally be worth their price premium: drivers that enhance gaming performance, superior SLI scaling, and more game devs working with Nvidia.
|
Wow, the Fermi GPU architecture is so much better than before that it's almost unfair. No wonder it performs so well. I'd estimate it's about twice as efficient in its memory and compute scheduling as previous generations of GPUs.
Previous GPUs were massive hammers you could throw at any graphics problem, but it didn't matter whether you had a massive problem or just one tiny nail, so they were really inefficient: the odd small task could make the GPU waste over 80% of its cycles. Fermi also has multi-level caches like a CPU's, which saves on taxing the memory system.
Fermi is a step towards a physics engine and GPGPU, and a large step away from pure graphics acceleration, but that has to make the Fermi drivers a lot easier to program / optimize / get performance out of. Damn.
|
It's no secret that GPUs have been moving towards the idea of CPUs for a long time now. The new GF104 apparently also incorporates superscalar architecture which I don't think has ever been used in a GPU before. CUDA was already a step in the direction of GPGPU and ATI Stream is going for the same idea.
Not sure what you mean by physics engine since Nvidia has been pushing their dedicated PhysX stuff for as long as anybody can remember.
|
It may be a very impressive GPGPU arch, with the benefit of DX11 support because of it, but it still needs to be refined and properly spun during fabrication: lower the gate leakage and shrink the die so it competes with ATI in terms of real estate, and then it has something.
The arch shows much more promise than the heavily recycled R600, which is what all ATI chips past the HD 2000 series are based on.
|
On July 20 2010 14:05 FragKrag wrote: It's no secret that GPUs have been moving towards the idea of CPUs for a long time now. The new GF104 apparently also incorporates superscalar architecture which I don't think has ever been used in a GPU before. CUDA was already a step in the direction of GPGPU and ATI Stream is going for the same idea.
Not sure what you mean by physics engine since Nvidia has been pushing their dedicated PhysX stuff for as long as anybody can remember.
Just because they market it doesn't mean it's any good, or even correct. If I remember correctly, they were getting answers fast, but the answers could often be wrong. This time they got their math a bit more accurate, so it's no longer giving unreliable answers.
|
On July 20 2010 14:06 semantics wrote: it may be a very impressive gpgpu arch and has a benefit of running dx11 due to that, but it still needs to be refine and properly spun during making the chip, lower the gate leakage and die size so it competes with ATI in terms of realestate and then it has something.
The arch shows promise much more then the heavily recycled R600 which is what all ATI chips are based on past the HD2000.
That may be true, but chances are Nvidia is making more money per GPU than ATI anyway, since ATI is always going to be pressured to lower prices. I never understood why Nvidia insisted on crafting these super-massive GPUs. Nvidia's SLI scaling and drivers are still a level above ATI's, and they could easily have relied on SLI scaling to cover the high end, like ATI was doing with CrossFire RV770, except much more efficiently.
GF104 is a step in the right direction though. 1.95 billion transistors instead of that massive 3 billion :>. It performs amazingly, and we still haven't seen its full potential because TSMC sucks at 40nm ;_;
On July 20 2010 14:10 TanGeng wrote: On July 20 2010 14:05 FragKrag wrote: It's no secret that GPUs have been moving towards the idea of CPUs for a long time now. The new GF104 apparently also incorporates superscalar architecture which I don't think has ever been used in a GPU before. CUDA was already a step in the direction of GPGPU and ATI Stream is going for the same idea.
Not sure what you mean by physics engine since Nvidia has been pushing their dedicated PhysX stuff for as long as anybody can remember. Just because they market it doesn't mean that it's any good, or even correct. If I remember correctly, they were getting answers fast but the answers could often be wrong.  This time they got their math a bit more accurate, it's no longer an unreliable answer.
I believe this would be a driver issue, not an issue with the GPU itself, unless you're referring to double-precision accuracy (which has been crippled in GF104, and which GT200 had) or ECC (which GF104 lacks completely).
|
On July 19 2010 12:34 R04R wrote: D: Why is there so much shipping. =/ http://www.newegg.ca/Product/Product.aspx?Item=N82E16820231253
Although it doesn't have heatspreaders, RAM shouldn't get very hot unless you overclock it.
Gokey: It's entirely down to how they're manufactured. The 555 BEs originally started as 955s: if a 955 had faulty cores, it would be turned into a 555 to make up the loss. If that were the only way 555s were made, there obviously wouldn't be enough 555s on the market, so some perfectly fine 955s are also turned into 555s. AMD doesn't manufacture the 555 as a separate chip, because that would require a completely different assembly line, so they just disable cores on the 955.
So when you buy a 555 hoping for the extra cores, it's basically a lottery whether you're able to unlock them or not?
|
Yeah, GTR. The cores can be disabled for any number of reasons: out-of-spec voltage leakage, completely dysfunctional cores, etc. I've heard of Athlon II X3s that unlock into full Phenom IIs, and those seem like the best bet since they're cheap, and if you can't unlock, you still get a decently powerful tri-core.
|
Yeah, I got lucky and my new X3 435 unlocked into a full Phenom X4, with the 6 MB L3 cache as well. No problems with it at all really, I just can't OC it very much.
|
Will this run SC2 on low on XP?
Intel® Atom™ 230 processor, Radeon HD 4350, 2 GB of internal memory.
It's the MSI Windbox DE200 (comes with 2 GB in the Netherlands).
|
Sorry, the processor isn't fast enough, and the graphics card is borderline playable at low.
|
On July 19 2010 17:40 R04R wrote: Yup, one of my RAM sticks is faulty. Shame. I hope the other one doesn't crap out.
I think I have to do the RMA stuff. Damnit.
I'd advise you against switching to G.Skill; one of my G.Skill sticks got 8000 errors in memtest -_-.
|
Is there a way to switch to AHCI mode after installing the OS? I have a customized Ubuntu installed in IDE mode and Windows 7 on an SSD, and I'd prefer to change everything to AHCI or RAID, but every time I try to change it in the BIOS, it hangs at POST.
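|
For what it's worth, if the hang is actually Windows failing to boot after the switch rather than the POST screen itself, the usual culprit on Windows 7 is that the Microsoft AHCI driver (msahci) is disabled because Windows was installed in IDE mode. A sketch of the common registry tweak, assuming a stock Windows 7 install using the in-box Microsoft driver (back up the registry first, and merge this before changing the BIOS setting):

```
Windows Registry Editor Version 5.00

; Set the msahci service to start at boot (0 = SYSTEM start).
; On an IDE-mode install it defaults to 3 (start on demand),
; which is why switching the BIOS to AHCI fails at boot.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\msahci]
"Start"=dword:00000000
```

After merging, reboot, flip the BIOS to AHCI, and Windows should load the driver on the next boot. The Ubuntu side usually just works, since the Linux kernel's ahci driver is built in regardless of install mode. RAID is a different driver (iaStorV on many Intel boards), so that path needs the equivalent change for that service.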
|
|
|
On July 20 2010 14:26 FragKrag wrote: On July 20 2010 14:06 semantics wrote: it may be a very impressive gpgpu arch and has a benefit of running dx11 due to that, but it still needs to be refine and properly spun during making the chip, lower the gate leakage and die size so it competes with ATI in terms of realestate and then it has something.
The arch shows promise much more then the heavily recycled R600 which is what all ATI chips are based on past the HD2000. That may be true, but chances are Nvidia is making more money per gpu than ATI anyways since ATI is always going to be pressured to lower prices. I never understood why Nvidia insisted on crafting these super massive GPUs. Nvidia's SLI scaling and drivers are still a level above ATI's and they could easily rely on SLI scaling to cover the high end like ATI was doing with CrossFire RV770, except much more efficiently. GF104 is in the right direction though. 1.95 billion instead of that massive 3 billion :>. Performs amazing and we still haven't seen the full potential because TSMC sucks at 40nm ;_;
I thought the nVidia parts were too big for good yields, so it's probably not a good business for them.
The thing with ATI is that they're really good at packing in those compute engines and not so great at getting the I/O to flow well. They have the right idea in that at higher quality levels it's more about compute, but their parts look much harder to program, so my guess is that ATI's software is going to lag behind. If ATI spends a lot of money on software/firmware, though, they might burn nVidia badly on both benchmarks and visual quality.
|
|
|
|