When using this resource, please read the opening post. The Tech Support forum regulars have helped create countless desktop systems without any compensation. The least you can do is provide all of the information required for them to help you properly.
Well, in this case the difference was what, 35 euros to pick between PCIe 3.0 and PCIe 4.0? I think the slight premium is still worth it. While your comment is detailed and has benchmarks, I'm not entirely confident it shows the full picture. For instance, what is "loading time" in that benchmark video? Is it only the time to start up a game? What about transitioning from level to level, where you need to load again?
So:
1. Time to load <your game here>
2. Time to load mission 2 when you finish the previous one
3. Time to load mission 3 when you finish the previous one
In other words, this performance penalty is paid more than once. I do agree with your perspective that if you do something for a living, then paying extra for the highest performance is worth it (to a certain extent, i.e. avoiding overpriced items).
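If you want to check those level-to-level loads yourself, a crude manual timer is enough to get ballpark numbers. A minimal sketch (the event names are placeholders; a built-in load-time benchmark, like the FFXIV one quoted below, will always be more precise than hand timing):

```python
import time

# Manual stopwatch for logging several consecutive load times:
# press Enter when each load starts and again when it ends.
events = ["initial game load", "mission 2 load", "mission 3 load"]
for name in events:
    input(f"Press Enter when the {name} STARTS...")
    start = time.perf_counter()
    input(f"Press Enter when the {name} ENDS...")
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```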
"Final Fantasy XIV Shadowbringers is a free real-world game benchmark that easily and accurately compares game load times without the inaccuracy of using a stopwatch."
My current system has one major weak link and that's the GPU (ASRock 5500 XT). I'm gonna switch back to Nvidia and get the Zotac 3060 Ti Twin AMP. I had set a budget for myself that included all the peripherals, so that GPU was the best I could squeeze in back then; I knew it was gonna be the first thing I'd replace. Two years later, we've arrived at that point.
Been out of the PC building scene for a while, but I'm quite sure nothing in my system is gonna bottleneck that card.
Ryzen 5 3600
16GB 3200MHz CL14 G.Skill memory
Samsung 970 EVO NVMe (1TB)
ASRock B450M Pro4 motherboard
My PSU is the only thing I'm in doubt about. I'm running a 650W from Corsair and I'm not too familiar with the power draw of the 3060 Ti. I've read people stating they're using even less (550W PSUs, for example) with that card without issues.
Corsair RM650 80+ Gold (90% efficiency according to Corsair)
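For a rough sanity check, you can tally nominal power figures. A back-of-envelope sketch: the 200W TGP and 65W TDP are the vendors' nominal numbers, while the boost figure and the "everything else" estimate are my own assumptions, and real transient spikes can go higher:

```python
# Back-of-envelope power budget for the system above.
parts = {
    "RTX 3060 Ti (200W TGP)": 200,
    "Ryzen 5 3600 (65W TDP, ~90W under boost)": 90,
    "Motherboard, RAM, SSD, fans (estimate)": 60,
}
psu_watts = 650
total = sum(parts.values())
print(f"Estimated peak draw: {total}W of {psu_watts}W ({total / psu_watts:.0%} load)")
# ~350W -> an RM650 has comfortable headroom
```

By that math the RM650 is nowhere near its limit, which matches the reports of people running 550W units without issues.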
I'm looking at the very same RTX card but not sure which one. I've always heard that Zotac cards are really good, but it appears they tend to be louder than the average 3060 Ti. Personally I don't mind, but it raises the question: why are they louder than their competitors?
Because they all put in different coolers. I don't know about this specific case, but for example, if you put in two fans instead of three, those fans need to run faster to move the same air, which is louder. Some fans simply run louder than others, and if a card is overclocked a bit more and uses more power, that also leads to higher fan speeds and thus a louder card.
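As a rough illustration of the two-vs-three-fan point, a sketch using the standard fan affinity laws (the assumption that airflow scales roughly linearly with RPM and the 50·log10 noise rule are textbook approximations, not measurements of any specific card):

```python
import math

# To move the same total airflow, 2 fans must spin ~1.5x as fast as 3
# (airflow per fan scales roughly linearly with RPM).
rpm_ratio = 3 / 2
# Fan affinity law: per-fan noise rises ~50*log10(rpm ratio) dB...
per_fan_db = 50 * math.log10(rpm_ratio)
# ...partly offset by having fewer noise sources (10*log10(2/3) dB).
fewer_sources_db = 10 * math.log10(2 / 3)
print(f"Two-fan cooler: roughly {per_fan_db + fewer_sources_db:+.1f} dB")
# -> about +7 dB, which is clearly audible
```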
I measured and labeled the datapoints on Nvidia's Ada (4090?) vs Ampere (3090?) power/performance chart. They don't label the performance or say how it was measured so grain of salt.
On September 21 2022 01:49 Cyro wrote: I measured and labeled the datapoints on Nvidia's Ada (4090?) vs Ampere (3090?) power/performance chart. They don't label the performance or say how it was measured so grain of salt.
Thoughts on today's presentation?
Not Cyro, but I think Nvidia is trying to milk as much as possible out of the last dregs of the crypto boom before prices crash, which is already actively happening.
The cards are priced to avoid the 30 series performance/price range. Nvidia wants to empty inventories of 30 series before letting prices fall on 40 series, because there are a few billion dollars' worth of 30 series sitting on shelves right now.
Edit: if the chart is raster perf, the efficiency is amazing. The sweet spot would be getting 3090 perf at 3060 power levels
GPU mining is six feet underground right now. It might see some recovery, but the market is currently crashing and everything is selling at a loss, because board partners paid super inflated prices for stock that they can't pass on to consumers any more.
All of the AIBs have loads of 3080s sitting in warehouses, so Nvidia can't release competition at lower price points without losing all of their partners.
Some of the performance improvements and features look good.
Giant red flag on the mystery chart, because it looks like Nvidia may not have even been doing an apples-to-apples comparison, but e.g. using DLSS 3 with frame doubling on the 4000 series while the 3000 series got DLSS 2 without it.
-----
I've been waiting for somebody to do framerate interpolation properly for a long time. It's difficult and relies on a lot of data, but the potential upside is extremely strong.
For downsides, there is a latency penalty. If you want to make an intermediary frame between Frame 1 and Frame 2, you cannot even start work on this "Frame 1.5" until Frame 2 is already finished, so there is fundamentally more latency added.
The amount of latency you have to add depends on the frametime. Let's say we're doubling from 50fps to 100fps. For starters, you don't have a true 100fps; you're actually working from a 50fps base, so the gameplay already carries that frametime difference as extra latency compared to true 100fps gameplay. That's 10ms.
You also have to wait for a second frame at 50fps, which is an additional 20ms.
So we have a base latency of maybe 30-50ms, and then we're adding 30ms more. That's enough to be easily detected by most gamers, I think, and very annoying for latency-sensitive users. It's a lot less bad than gaming via an encoded video stream sent over the internet, though, and a lot of reviewers have warmed up to that recently.
There is a huge plus side to that latency penalty being frametime-based: when you run at higher and higher framerates, the added latency gets smaller and smaller, to the point of being minimally annoying if even detectable. A 100fps baseline framerate brings it down to 15ms and 200fps brings it down to 7.5ms. You can then scale those framerates by 2-4x (advertised) or maybe 10x (later) with, in theory, a lot less compute power than it takes to do a completely fresh render each time.
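A quick sketch of that arithmetic; this just encodes the reasoning above for 2x interpolation, and real pipelines will differ in the details:

```python
def added_latency_ms(base_fps: float, factor: float = 2.0) -> float:
    """Rough added latency for frame interpolation at a given base fps."""
    base_ft = 1000.0 / base_fps        # frametime of the real frames
    target_ft = base_ft / factor       # frametime of the output frames
    # Penalty 1: input is still sampled at the base rate, so you lag a
    # true high-fps render by the frametime difference.
    cadence_penalty = base_ft - target_ft
    # Penalty 2: frame 1.5 can't exist until frame 2 is finished, so
    # everything is held back by one full base frame.
    hold_penalty = base_ft
    return cadence_penalty + hold_penalty

for fps in (50, 100, 200):
    print(f"{fps}fps base -> +{added_latency_ms(fps):.1f}ms")
# 50fps -> +30.0ms, 100fps -> +15.0ms, 200fps -> +7.5ms
```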
The difference between frames also becomes smaller as you scale from a higher framerate, so it's computationally easier per-frame and less error-prone.
As to why I wouldn't consider it valid for a direct benchmark: this kind of interpolation may have excellent results, but it will never be perfect. The ideal end goal is close enough to perfect that you can't tell the difference, but it's really misleading to paint this as baseline performance.
-----
The product lineup is really confusing here. It's probably best if the names are ignored entirely in favor of the specs to make sense of it:
PRODUCT 1: 7680 CUDA cores, 192-bit bus, $900
Add about 30% performance and 33% price, and we get:
PRODUCT 2: 9728 CUDA cores, 256-bit bus, $1200
Add another 30% performance (in more efficient modes) to 60% (at equal clocks) for 33% more money, and we get:
PRODUCT 3: 16,384 CUDA cores, 384-bit bus, $1600
The big one looks appealing, but only because the pricing on the other two is so jacked up, I think. It would make a lot more sense at $700, $1000, $1600, and maybe they want to make a press event around dropping prices when RDNA3 launches.
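To make the "jacked up" point concrete, here's the perf-per-dollar math implied by the numbers above (a sketch; the relative performance figures are the rough steps from this post, not measured results, and I've used the more conservative +30% step for Product 3):

```python
# Relative performance (PRODUCT 1 = 1.0) and prices from the post above.
products = [
    ("PRODUCT 1", 1.00, 900),
    ("PRODUCT 2", 1.30, 1200),
    ("PRODUCT 3", 1.30 * 1.30, 1600),  # the +30% "efficient modes" case
]
base_perf, base_price = 1.00, 900
for name, perf, price in products:
    value = (perf / price) / (base_perf / base_price)
    print(f"{name}: {value:.2f}x perf-per-dollar vs PRODUCT 1")
# Each step up the stack costs ~33% more for ~30% more performance,
# so perf-per-dollar stays roughly flat instead of improving down-stack.
```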
Looking closely at a lot of the numbers as to why Nvidia might do this, it seems they have an enormous architectural advantage over RDNA3 that Radeon won't be able to compete with for performance or efficiency at the high end. If you take AMD's most optimistic efficiency numbers and put them against Nvidia's, these Ada cards will have 30% more perf/watt than RDNA3 at 300W. Neither of these claims is proven, but it's a fat margin, and it's in the wrong direction.
There has also been a lot of talk about the Radeon guys not wanting to go over 300W at stock, while Nvidia is opening the gates at 285-450W. I guess that means Nvidia would have 30% more performance at 300W-vs-300W, or 48% more at 450W (Nvidia) vs 300W (AMD).
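Rough arithmetic behind those two figures (the 30% edge at matched 300W is from the comparison above; the ~14% that Ada gains from unlocking 300W to 450W is my own read of Nvidia's power curve, so treat it as an assumption):

```python
# Hypothetical derivation of the 30% / 48% figures.
edge_at_300w = 1.30       # Ada vs RDNA3, both capped at 300W (from above)
ada_450w_gain = 1.14      # assumed Ada gain from raising 300W -> 450W
print(f"450W Ada vs 300W RDNA3: +{edge_at_300w * ada_450w_gain - 1:.0%}")
# -> roughly +48%
```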
I think the RT/DL performance gaps may also widen gen on gen.
Probably the best representation of what you're buying. From a performance standpoint, there's really nothing to complain about, other than Nvidia taking MSRP pricing as of a month ago and extending it upwards for the 40 series.
They've got no competition, so it's probably worthwhile to wait until early 2023, or at least the Christmas season, to buy.
Edit: all this assumes Nvidia ain't pulling some marketing BS
Well, NVIDIA's stock is down by a third from where it was 2-3 months ago. Also, the US prevented NVIDIA from selling certain chips to China and Russia, so they have to rely on other markets more this time around. Add rising inflation on top of that and we may expect some nice discounts at some point. Of course, it could be wishful thinking on my side.
On September 21 2022 21:04 SC-Shield wrote: Well, NVIDIA's stock is down by a third from where it was 2-3 months ago. Also, the US prevented NVIDIA from selling certain chips to China and Russia, so they have to rely on other markets more this time around. Add rising inflation on top of that and we may expect some nice discounts at some point. Of course, it could be wishful thinking on my side.
Sadly, I think Nvidia cards will sell like hotcakes. People just want that performance, no matter the cost.
Edit: After seeing the fake frame insertion, I'm not too impressed anymore. I would never want a card that guesses what's actually happening. :D
Got a question for everyone. I've done some digging but I can't find what I'm looking for.
I need a GPU for an HP workstation for my job, one that is capable of running Autodesk Revit. We ordered an Nvidia T600, but it didn't play nice with the machine and my coworker couldn't actually do anything. Anyone got any ideas for compatible cards, or where I would find that information? I've already checked the Autodesk website and I still can't find what I'm looking for.
On September 25 2022 12:39 ZerOCoolSC2 wrote: Got a question for everyone. I've done some digging but I can't find what I'm looking for.
I need a GPU for an HP workstation for my job, one that is capable of running Autodesk Revit. We ordered an Nvidia T600, but it didn't play nice with the machine and my coworker couldn't actually do anything. Anyone got any ideas for compatible cards, or where I would find that information? I've already checked the Autodesk website and I still can't find what I'm looking for.
I don't know if there's a professional card which would do better/have better compatibility though. Might have to do some forum trawling for that.
That might work. I didn't see that one mentioned. We've tried that Nvidia card and an AMD WX3200. The Nvidia just didn't want to work, apparently, and the WX3200 was too long (and the port shield was too long as well, kind of like how the 6400 is looking). They might just have to buy them a new computer.
Yeah, unfortunately workstation applications aren't widely used, so the audience/content around them is pretty sparse. The 6400 I linked has a half-height bracket (the silver metal thing on the end) which can be swapped in, so it should fit (assuming HP follows standards).
Good luck, though, whichever way you go. It's a PITA troubleshooting software compatibility.
In other news, AMD 7000 series CPUs are out. Expensive and fast is the conclusion. Solid for gaming, albeit pricey, and generally the king for productivity.
The gaping hole in the lineup where the 7800X3D should be is probably where the patient, rich person's money should go, though. That'll probably dominate gaming charts comfortably until the next generation of CPUs with V-Cache. There's also Intel's next gen coming in the next month, so the best time to buy is probably early next year, rather than now.