US Politics Mega-thread - Page 5562

Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please provide a very brief summary of what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
Fleetfeet
Profile Blog Joined May 2014
Canada2664 Posts
4 hours ago
#111221
On March 17 2026 01:21 Simberto wrote:
On March 17 2026 00:51 Fleetfeet wrote:
On March 17 2026 00:45 LightSpectra wrote:
On March 17 2026 00:41 Fleetfeet wrote:
If people expect the answers AI gives them to be at least partially incorrect, doesn't this promote critical thinking and not deter it?


This is about as realistic as expecting Young Earth Creationism museums to promote interest in biology


Worthless oneliner tbh.

Do you mean "I don't agree that most people question what AI tells them, and instead just blindly accept it"? I'd accept that as an answer, though we're both on the same level of anecdotal evidence in that case.


Also anecdotal experience, but in my experience as a teacher, a lot of students just accept whatever AI tells them as absolute truth immediately. A lot of people are generally not in the business of questioning answers they got, they accept the first reasonably-sounding answer as truth.

AI answers have all the trappings that people are used to from good sources (language, orthography, style), and it tends to say what you want to hear while sounding confident and competent. This is a very tempting combination. It is also correct often enough that, for most people, it doesn't immediately fail in the habit-forming phase.

For it to promote critical thinking skills, people would need to regularly get into situations where AI answers are incorrect, and where they notice that. I don't think that happens often enough for this to happen.


Huh, fair enough. Most of my personal use cases are for specific answers that provide direction, e.g. "What's the syntax for a switch-case in javascript" or "what are access/egress requirements for a dwelling in Alberta", where it is usually somewhat wrong but (in Google's case) provides references to where it got its answers from. Errors are also evident and/or falsifiable in those cases: if it gives you the wrong syntax for code, it just won't run.
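For reference, a switch-case of the kind mentioned above looks like this (a minimal sketch; the function name and values are made up purely for illustration):

```javascript
// Minimal JavaScript switch-case; describeDay and its inputs are
// hypothetical, just to show the syntax (note the fall-through
// from "sat" to "sun").
function describeDay(day) {
  switch (day) {
    case "sat":
    case "sun": // "sat" falls through, so both return "weekend"
      return "weekend";
    case "mon":
      return "start of the week";
    default:
      return "weekday";
  }
}

console.log(describeDay("sun")); // "weekend"
console.log(describeDay("wed")); // "weekday"
```

This is the kind of answer that self-verifies: paste it in, and a wrong `case` or missing `break`/`return` shows up immediately.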

I also tend AWAY from using AI and would happily not use google's AI-assisted searching if they hadn't made that more difficult. Point taken, though!
Liquid`Drone
Profile Joined September 2002
Norway28765 Posts
4 hours ago
#111222
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI, unlike Anthropic, didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.
Moderator
Dan HH
Profile Joined July 2012
Romania9183 Posts
4 hours ago
#111223
On March 17 2026 01:46 Billyboy wrote:
I think a big problem is no one knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver, because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter, it is really hard to know what is wrong and why.

Another big societal issue is how many people are using it to confirm their pop-psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you to the additional questions you need to confirm your belief. Feel free to go into private mode and have two AIs open, and ask each whether a person you know is a narcissist or not. In one box act as though you believe they are, and in the other not. Both times the bot will confirm your answer.

And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial.

That's the frustrating part about all this, from Facebook to Youtube to LLMs. They didn't have to be this shit, it was intentional design choices by company leadership to maximize addictiveness that makes them shit. It's not inherent to AI that it has to be flattering and sycophantic, we're making them that way on purpose and most people can absolutely not deal with it.

I'm reminded of Boris Johnson being extremely happy with the meaningless praise and validation he's getting from ChatGPT:


It's not just the average joe falling for it. DOGE used it to cut research grants, the White House used it to tariff penguins, RFK Jr used it to issue medical advice with hallucinated sources, police used it to target the wrong people. We were already having massive problems with lack of accountability (due to human preferential treatment and favor-trading) and this adds another layer of non-accountability, "it's not my fault, the crystal ball said it was brilliant".
Dan HH
Profile Joined July 2012
Romania9183 Posts
Last Edited: 2026-03-16 17:27:13
4 hours ago
#111224
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI, unlike Anthropic, didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.

I use Claude for work, and sure, it's a useful instrument that helps me fix things and saves me some time. But while I appreciate that Anthropic had a moral red line (for now) with regard to the Pentagon request, that doesn't make them a force for good; in every other matter they've acted exactly like OpenAI: stealing personal data, selling bullshit hype about consciousness and solving all the world's greatest problems when their model is capable of zero innovation just like the others, intentional addiction mechanics and unnecessary sycophancy.

I'm incapable of reading/hearing the phrase "You’re absolutely right!" without rolling my eyes at this point.
KwarK
Profile Blog Joined July 2006
United States43681 Posts
4 hours ago
#111225
On March 17 2026 02:14 Dan HH wrote:
the White House used it to tariff penguins

What happened was substantially worse than this. The penguins got more news time because they're weird birds that swim and wear tuxedos but the entire tariff formula was created by AI.
https://www.theverge.com/news/642620/trump-tariffs-formula-ai-chatgpt-gemini-claude-grok
Moderator | The angels have the phone box
LightSpectra
Profile Blog Joined October 2011
United States2239 Posts
Last Edited: 2026-03-16 17:27:43
3 hours ago
#111226
On March 17 2026 02:06 Liquid`Drone wrote:
Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI, unlike Anthropic, didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.


I'm glad you're against xAI and OpenAI for ethical reasons, but surely you're aware that Google (Gemini), Microsoft (CoPilot), etc. are all guilty of innumerable evil things as well? Anthropic could potentially be the odd one out, but it seems naive to think they wouldn't do grossly unethical things if they thought they could get away with it, especially when they need an upper hand against the aforementioned megacorps. There's a good chance they'd simply get bought out at some point too.

On March 17 2026 02:06 Liquid`Drone wrote:
If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
Jankisa
Profile Blog Joined October 2010
Croatia1245 Posts
Last Edited: 2026-03-16 17:35:03
3 hours ago
#111227
Yeah, the bias reinforcement is an even bigger problem than social media echo chambers and information bubbles, and stacking one on the other makes everything even more alarming.

BB's observation about psychological advice is also something that worries the fuck out of me. I relatively recently caught both my dad and mom using it, for different purposes, but in both cases to "make arguments" on the basis of "well, AI thinks so, so it must be true". Explaining to older people what AI is, how it works, and why it shouldn't always be trusted is very hard; they didn't even get immunized against social media. The older generations are absolutely not ready for this.

When it comes to AI hallucinations, I saw LS's 40% number (with a very important "up to" caveat) and reflexively wanted to call him out for being a luddite who confidently shares outdated information, but it turns out that the number is still around there.

Ironically, Grok seems to be the one with the fewest hallucinations. To me it kind of makes sense, because Grok seems to be the one that cares the least about costs, so it double-checks its stuff more often than the other "freemium" ones:

[image: hallucination-rate comparison across models]


Of course, there is a lot of nuance. This study was done on older models and was designed to test their worst instincts: these models have reward systems where answers like "I don't know" are graded very negatively, and the study put them in a situation where they had to choose between saying that or lying. Nevertheless, combine that with the other reward loop, where AI trained on human interactions learns that people like being flattered, flattery makes them spend more time with it, and those longer interactions in turn become a larger chunk of the training data, and you have a very big problem.

I can say, after using AI, both self-hosted and frontier models, for the last 3 years, that the latest models, like Opus 4.6, are excellent, and I haven't caught it making stuff up yet. I'm sure it does, but as others said, I use it either for work or to "talk" to it to try to see if there is a ghost in the machine, so not exactly for looking up information, where the chances of hallucinations are highest.
So, are you a pessimist? - On my better days. Are you a nihilist? - Not as much as I should be.
LightSpectra
Profile Blog Joined October 2011
United States2239 Posts
Last Edited: 2026-03-16 17:40:12
3 hours ago
#111228
Using an LLM to program is one of the few times it's not a supremely terrible idea, because bad programming leads to immediate error messages at the very least; it doesn't take long to find out where it fucked up. (Edit: I mean using it as an assistant to auto-generate small amounts of code at a time. If you vibe generate an entire application, that's entirely different and a cybersecurity nightmare.)

If you use it for legal advice, the error message will come in the form of court sanctions or an unwanted verdict, and nobody will care if you try to blame the LLM. If you use it to decide who to vote for, you won't find out you got played until after the elections are over. If you use it for medical advice, you won't get an error message until you're in the hospital or dead.

To say nothing of the actual psychosis people have in claiming to have married their LLM or had a religious experience through it.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
dyhb
Profile Joined August 2021
United States169 Posts
3 hours ago
#111229
If you want broad understanding of subjects like photosynthesis or grammar or the industrial revolution, they're great. But if you get more and more interested and ask for specifics, they start to invent, and you're left double-checking all the specifics and asking for sources. Three questions in, they'll confuse who did what in a discovery, describe one important reaction as if it were a different one, and wholly invent details. Then you ask why the source they quote says nothing of the kind, and they spout vaguely about tokens and patterns and associations and offer a "Sorry, you are correct, they are different things."

A funny example tangentially related to grammar was the period of months where ChatGPT would count two r's in strawberry. I had to verify this for myself late last year, and it was so confident that it could count correctly while counting all three as proof that there were two.
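The strawberry check is trivially verifiable in one line of code, which is exactly what made the failure so memorable:

```javascript
// Count the r's in "strawberry" directly: spread the string into
// characters and keep only the r's.
const rCount = [..."strawberry"].filter((ch) => ch === "r").length;
console.log(rCount); // 3
```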
WombaT
Profile Blog Joined May 2010
Northern Ireland26367 Posts
3 hours ago
#111230
On March 17 2026 01:46 Billyboy wrote:
I think a big problem is no one knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver, because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter, it is really hard to know what is wrong and why.

Another big societal issue is how many people are using it to confirm their pop-psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you to the additional questions you need to confirm your belief. Feel free to go into private mode and have two AIs open, and ask each whether a person you know is a narcissist or not. In one box act as though you believe they are, and in the other not. Both times the bot will confirm your answer.

And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial.

Aye, I don’t even think we’ve collectively properly adjusted to the changes social media brought into society yet, the coming epoch I fear may look like that only on crack.

Any potentially transformative technology does tend to bring problems with it, even if it’s a net positive, just how these things go.

One of my main bones of contention is the folks pushing this aren’t even really trying to grapple with them, I mean by and large they do not care at all. It’s not that they tried to anticipate potential issues and lacked perfect foresight or whatever, there seemingly isn’t any mental energy put into anticipation much less mitigation.

I don’t fundamentally hate the underlying tech or whatever, I’m not a Luddite in that sense but a whole bunch of stuff surrounding it really fundamentally stinks.

Ungodly amounts of copyright infringement? Oh well. Deepfake porn? Oh my well. The rather obvious potential to add even further to political and cultural misinformation? Oh well.

Like there’s no concern for any such things that are pretty egregious, much less the more complex tradeoffs.

Person A may find chatting to an LLM useful for whatever reason and it benefits them somehow, whereas Person B may pay for some AI waifu that validates some pretty awful life choices for them. That kinda thing gets a bit more complex and a provider can plausibly say that how people use their product isn’t 100% their responsibility. Hell you can go as far back as alcohol for such a tradeoff, many people enjoy it with few real ill-effects, but you still get alcoholics.

I’ve a bit more sympathy when we get into such areas, but again to stress, there’s seemingly no concern whatsoever for any of it. And that greatly concerns me.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
LightSpectra
Profile Blog Joined October 2011
United States2239 Posts
3 hours ago
#111231
Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
KwarK
Profile Blog Joined July 2006
United States43681 Posts
3 hours ago
#111232
On March 17 2026 02:55 LightSpectra wrote:
Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages.

They weren't skilled workers. They had been skilled workers before a machine was created that made their skill obsolete. The new skilled workers were the people who could maintain the machines. Being able to do it by hand wasn't a skill at that point.

The lesson of the Luddites is that government intervention is needed to prevent regional economic dislocation resulting from industrial shifts. The same lesson as we see with the decline of coal mining etc. The invisible hand seeks optimal efficiency without regard for the broader social consequences; the government needs to take the money from some of those efficiency gains and use it to clean up after the hand.
Moderator | The angels have the phone box
WombaT
Profile Blog Joined May 2010
Northern Ireland26367 Posts
Last Edited: 2026-03-16 18:05:52
3 hours ago
#111233
On March 17 2026 02:55 LightSpectra wrote:
Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages.

I am aware that the Luddites were more concerned with the potential deleterious impacts of certain technologies on the social fabric than being actually anti-technology or development.

Which would actually give me some commonality there, but it’s not how the invocation is colloquially understood these days.

It is certainly something that’s interesting to know and consider for sure though, especially in the modern context.

Sure, within specific industries there’s been technological disruption for forever. But in terms of the general labour force, ‘AI’ is probably the biggest potential transformative force since initial industrialisation way back.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Simberto
Profile Blog Joined July 2010
Germany11776 Posts
3 hours ago
#111234
On March 17 2026 02:27 LightSpectra wrote:
Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.


If I am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data.

LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input.

Which makes them a very bad source for knowledge.
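The "statistically most likely answer" idea can be sketched with a toy word-frequency model. This is a deliberate oversimplification (real LLMs use neural networks over subword tokens, not raw counts, and the corpus here is invented), but it shows how fluent-looking text can be produced with no notion of truth anywhere in the system:

```javascript
// Toy "next word" predictor: count which word follows which in a
// tiny corpus, then always pick the most frequent continuation.
const corpus = "the cat sat on the mat and the cat slept";
const words = corpus.split(" ");

// follows[cur][next] = how often `next` appeared right after `cur`.
const follows = {};
for (let i = 0; i < words.length - 1; i++) {
  const cur = words[i], next = words[i + 1];
  follows[cur] = follows[cur] || {};
  follows[cur][next] = (follows[cur][next] || 0) + 1;
}

// The most common word seen after `word` in the corpus.
function mostLikelyNext(word) {
  const counts = follows[word] || {};
  return Object.keys(counts).sort((a, b) => counts[b] - counts[a])[0];
}

console.log(mostLikelyNext("the")); // "cat" (seen twice, vs "mat" once)
```

Nothing in the table represents whether "the cat sat" is true; the output is derived purely from co-occurrence statistics, which is the point being made above.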
GreenHorizons
Profile Blog Joined April 2011
United States23720 Posts
Last Edited: 2026-03-16 18:14:39
3 hours ago
#111235
On March 17 2026 01:44 Dan HH wrote:
On March 16 2026 17:02 GreenHorizons wrote:
On March 16 2026 03:09 Gorsameth wrote:
On March 16 2026 01:55 GreenHorizons wrote:
There's probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.

There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.

(EDIT: Advertising is rather uniquely central to the US economy.)

That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration.
And half the S&P 500 is an AI bubble with little purpose and no financial viability,

The economy is utterly fucked either way.


NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved.

If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us.
You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more.

[image: S&P 500 tangible vs. intangible asset composition, 1975 to 2025]


As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%.


https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/

EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
WombaT
Profile Blog Joined May 2010
Northern Ireland26367 Posts
3 hours ago
#111236
On March 17 2026 03:03 Simberto wrote:
On March 17 2026 02:27 LightSpectra wrote:
Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.


If i am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data.

LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input.

Which makes them a very bad source for knowledge.

Try telling people how it actually works and they’ll outright not believe you, even if you have an education in a tangentially connected domain.

I know what the word ‘the’ means and where to stick it. An LLM does not know the former, and on the latter is just making a best guess based on probability, albeit a solid guess based on uncountable amounts of prior texts.

What I don’t understand is the reticence by some to take this crude explanation at face value. I mean, to someone who doesn’t know in pretty decent detail how computers fundamentally work, yeah, it can sound mental. But what’s the alternative explanation? Magic?
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
KwarK
United States, 43681 posts, joined July 2006
3 hours ago | #111237 (last edited 2026-03-16 18:17:14)
GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. The accounting definition of assets has almost no relevance to the valuation of a company.
Moderator | The angels have the phone box
GreenHorizons
United States, 23720 posts, joined April 2011
2 hours ago | #111238 (last edited 2026-03-16 18:55:09)
On March 17 2026 03:16 KwarK wrote:
On March 17 2026 03:09 GreenHorizons wrote:
On March 17 2026 01:44 Dan HH wrote:
On March 16 2026 17:02 GreenHorizons wrote:
On March 16 2026 03:09 Gorsameth wrote:
On March 16 2026 01:55 GreenHorizons wrote:
There are probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.

There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.

(EDIT: Advertising is rather uniquely central to the US economy.)

That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration.
And half the S&P 500 is an AI bubble with little purpose and no financial viability,

The economy is utterly fucked either way.


NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved.

If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us.
You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more.

[chart: S&P 500 market value, tangible vs. intangible assets, 1975–2025]


As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%.


https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/

EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers.


GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. The accounting definition of assets has almost no relevance to the valuation of a company.


Fair enough, but I sense your personal animosity toward me and personal familiarity with the subject matter are impinging on your recognition of my point (which obviously, given your expertise, isn't specifically the "valuation of a company").

I should have used different data to more effectively make my point. The point simply being that the US economy isn't driven by making cars and such anymore (I think most people get this?).

Even these early AI agents (typically with human assistance still) are reasonably capable of generating revenue, including by doing ad arbitrage/fraud. They can then turn that revenue into a subscription to their own AI services to fund buying more compute to generate more ads for AI subscription services as a rough example.

I should also mention I don't literally mean it is an "infinite money glitch", that's sardonic sarcasm.

On March 17 2026 03:15 WombaT wrote:
What I don’t understand is the reticence by some to take this crude explanation at face value. To someone who doesn’t know in pretty decent detail how computers fundamentally work, I mean, yeah, it can sound mental. But what’s the alternative explanation? Magic?

You ever see a scene where someone from the past sees a TV?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
EnDeR_
Spain, 2787 posts, joined May 2004
2 hours ago | #111239
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI, unlike Anthropic, didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to ChatGPT or Copilot or Gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


There are many upsides to using genAI tools, and I use them regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.

Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?

My PhD students don't read papers; they read AI summaries of papers. In a scientific context this is bad, because producing the summary dumbs down the content and yields inaccurate results. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.

My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as fact without checking, like baal was doing, this, to me, is a serious problem.
you're more out of place than a croissant on a plate of velvet crabs
KwarK
United States, 43681 posts, joined July 2006
2 hours ago | #111240
On March 17 2026 03:37 GreenHorizons wrote:
Fair enough, I did see it mentioned on Forbes, but I sense your personal animosity toward me and personal familiarity with the subject matter are impinging on your recognition of my point (which obviously, given your expertise, isn't specifically the "valuation of a company").

I should have used different data to more effectively make my point. The point simply being that the US economy isn't driven by making cars and such anymore (I think most people get this?).

Even these early AI agents (typically with human assistance still) are reasonably capable of generating revenue, including by doing ad arbitrage/fraud. They can then turn that revenue into a subscription to their own AI services to fund buying more compute to generate more ads for AI subscription services as a rough example.

I should also mention I don't literally mean it is an "infinite money glitch", that's sardonic sarcasm.

Forbes has become a blogging host. They're not a reputable website.

You're right on one of the causes, the shift of heavy industry out of the US. Some industries are more plant-heavy than others. To make steel you need a giant foundry, for example. To make ships you need a dockyard. There are industries in which a lot of the capital tied up in generating the profits is big physical assets that the accounting rules let you capitalize. Then there are industries that don't use as much plant in the creation of revenue. For example, software companies won't have as high a % of their asset value in factories or machinery and will have a higher % of their value in patents, trademarks, licenses etc.

But the other more important cause is mergers and acquisitions creating our favourite intangible, "goodwill". Let's say that a company has something not on the balance sheet but that it uses to generate a lot of profit. Customer relationships built up over decades, excellent brand recognition, a loyal and skilled workforce, that kind of thing.

A larger company wants to absorb it, and they will negotiate a price based on the expected cash flow, which makes sense. If they were to come in and say "well, your building is worth $1m so we'll pay $1.1m, that's a good price, right?" then the current owners would laugh and explain that it isn't the building that is being sold, it is the business that resides in the building. Everyone understands this; the business generates $1m in profit per year, and so they come up with a sale price of $10m in this example.

The accounting treatment here for the larger company is
PPE net of depreciation $1m
Goodwill $9m
Cash -$10m

They've spent $10m cash and they now have a business that is worth $10m, but $9m of that $10m value is goodwill. It's intangible. It exists and has value because there's a building that spits out a shitload of cash every year, and everyone would agree that that's something very valuable, but it's not like the building is made up of super valuable bricks.
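The acquisition arithmetic above can be checked in a few lines of Python. The figures are the hypothetical ones from the example, not real accounting data, and the entry is simplified (debits positive, the credit negative) so that a balanced entry sums to zero.

```python
# Hypothetical figures from the example above.
purchase_price = 10_000_000       # negotiated from ~$1m annual profit
net_tangible_assets = 1_000_000   # PPE net of depreciation

# Goodwill is the premium paid over the identifiable net assets.
goodwill = purchase_price - net_tangible_assets

# Simplified journal entry for the acquirer:
entry = {
    "PPE net of depreciation": net_tangible_assets,
    "Goodwill": goodwill,
    "Cash": -purchase_price,
}

assert sum(entry.values()) == 0   # the entry balances
print(f"Goodwill: ${goodwill:,}")  # Goodwill: $9,000,000
```

The point the code makes explicit: goodwill is a plug figure, whatever is left of the price after the tangible assets are accounted for, which is why serial acquirers accumulate so much of it.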

Then another company buys out that company and it compounds. And once on the books goodwill stays, pretty much forever.

The S&P 500 is, almost by definition, going to be made up of companies that buy a lot of other companies. The analysis is comparable to making a hundred bricks and graphing the age of the bricks over time. They're measuring the same thing on the X and Y axes.