Although this thread does not function under the same strict guidelines as the USPMT, it is still general practice on TL to provide a source with an explanation of why it is relevant and what it adds to the discussion. Failure to do so will result in a mod action.
On May 05 2026 23:01 GreenHorizons wrote: On May 05 2026 22:08 Jankisa wrote: I personally think that it's something of a bubble, just like the dot-com era was a bubble, but we still have Cisco as a huge company despite it riding the high crest of that bubble at the time.
The need for compute is not going to go away. Development work and inference will be needed for the foreseeable future, and unless there is a cataclysmic event where an insane amount of economic activity disappears, the compute a DC like this churns out will find a buyer. At what prices, well, that is not for me to worry about; that's for the investors.
Even if AI development stops right now, new ways to apply the agents we currently have will keep popping up. As critical as I am of the people leading this industry on the US side, it's pretty obvious that this is not just hype; it's going to be the most important technology of the next decade, and I've yet to find a convincing argument against that.
The "these things have 50 people working once constructed" claim is a typical Reddit number that has been thrown around for a year now, and it's silly. The rough rule of thumb, for DC operations only and at hyperscaler scale, is 0.5 employees per MW; this is a GW-class DC, so the number of employees in the DC itself would be 200-500, depending on a variety of factors.
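The staffing arithmetic above can be sketched as follows (a minimal sanity check; the 0.2/MW low end is inferred from the quoted 200-500 range, not stated directly in the post):

```python
# Rough staffing estimate for a hyperscaler-scale data center.
# Rule of thumb from the post: ~0.5 DC-operations employees per MW;
# the low end of the quoted 200-500 range implies roughly 0.2/MW.

def staff_estimate(capacity_mw, employees_per_mw=0.5):
    """Return an estimated on-site DC operations headcount."""
    return round(capacity_mw * employees_per_mw)

capacity_mw = 1000  # a 1 GW facility
low = staff_estimate(capacity_mw, 0.2)
high = staff_estimate(capacity_mw, 0.5)
print(f"Estimated DC operations staff: {low}-{high}")  # 200-500
```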
Like I mentioned before, this monstrosity, if built, would at peak capacity use about a quarter of the energy of my whole country. This wouldn't all come from our grid, however: a big part of this project is an LNG power plant plus fields of solar, for which there is plenty of space around the area. Both of these come with their own staff, so the upper estimate of 1,500 people working in and around this project doesn't seem crazy.
For the "it just makes shit up" people: yeah, sometimes it does. That's why no serious business or person is going to give it access to critical systems and allow it to delete shit (as recently publicized), or have it do free-form data analysis that no one double-checks for months at a time. As an example, when I try to get to the bottom of an issue, I often review logs. Since there is a ton of data, I feed it into the AI; once it does a comparative analysis, it gives me timestamps, and I go and check that the lines are actually in the logs. Anything else would be reckless.
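The double-checking workflow described above can be sketched like this (a minimal illustration; the log format, file contents, and helper name are hypothetical):

```python
# Sketch of the verification step: after an AI summarizes a log file and
# cites timestamps, confirm each cited timestamp actually appears in the
# raw log before trusting the analysis.

def verify_citations(log_lines, cited_timestamps):
    """Return the cited timestamps that do NOT appear in any log line."""
    return [ts for ts in cited_timestamps
            if not any(ts in line for line in log_lines)]

log = [
    "2026-05-05 22:08:01 ERROR connection reset",
    "2026-05-05 22:08:05 WARN retrying request",
]
cited = ["2026-05-05 22:08:01", "2026-05-05 22:09:59"]  # second is hallucinated
missing = verify_citations(log, cited)
print(missing)  # ['2026-05-05 22:09:59']
```

Anything the AI cites that lands in `missing` gets treated as a hallucination and discarded rather than acted on.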
I'm having a hard time figuring out if this is sarcasm or not. It seems like you actually believe it, but it also seems way too ridiculous/oblivious for anyone to seriously believe. Maybe I'm misunderstanding your qualifiers? An example that comes to mind: UnitedHealth uses faulty AI to deny elderly patients medically necessary coverage
The lawsuit, filed last Tuesday in federal court in Minnesota, claims UnitedHealth illegally denied "elderly patients care owed to them under Medicare Advantage Plans" by deploying an AI model known by the company to have a 90% error rate, overriding determinations made by the patients' physicians that the expenses were medically necessary.
"The elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary medical care, all because [UnitedHealth's] AI model 'disagrees' with their real live doctors' determinations," according to the complaint. https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/
I don't know if AI (with a known 90% error rate) deployed by a top-10 Fortune 500 company to make life-and-death decisions for people counts as a "serious business" or a "critical system" to you, though.
Simberto already replied to you: horrible companies using this to shift responsibility for killing people doesn't mean they are "serious people and organizations". I mean, they are, but they aren't using it in a genuine way where they expect high-quality outputs; the goal is to exploit the low quality to shift responsibility.
American capitalism and healthcare being broken does not invalidate AI as a technology. When used by intelligent people and companies who are trying to deliver good products faster than they could before, AI is an absolute boon to productivity.
On May 05 2026 22:08 Jankisa wrote: For the "it just makes shit up" people: yeah, sometimes it does. That's why no serious business or person is going to give it access to critical systems and allow it to delete shit (as recently publicized), or have it do free-form data analysis that no one double-checks for months at a time.
As an example, when I try to get to the bottom of an issue, I often review logs. Since there is a ton of data, I feed it into the AI; once it does a comparative analysis, it gives me timestamps, and I go and check that the lines are actually in the logs. Anything else would be reckless.
That's a hope you can have at the start, but once people get used to it and it usually looks decent, they'll just go with it. When Google came out, we used to verify that we weren't getting nonsense results on the interwebs too, and now we have grandpas getting their news from Facebook.
On May 06 2026 00:00 Jankisa wrote: [quoting the full GreenHorizons/Jankisa exchange above]
There was the case of an AI bot inciting violence or suicide, or something like that. When you get machines performing human tasks, corporations can shift responsibility for harm caused to actual people onto the consumer who sees the content tailored to them. It's a premise that removes accountability for corporations but not for users.