|
Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting! NOTE: When providing a source, please provide a very brief summary on what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.
Your supporting statement should always come BEFORE you provide the source. If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread |
On March 17 2026 01:21 Simberto wrote:Show nested quote +On March 17 2026 00:51 Fleetfeet wrote:On March 17 2026 00:45 LightSpectra wrote:On March 17 2026 00:41 Fleetfeet wrote: If people expect the answers AI gives them to be at least partially incorrect, doesn't this promote critical thinking and not deter it? This is about as realistic as expecting Young Earth Creationism museums to promote interest in biology Worthless oneliner tbh. Do you mean "I don't agree that most people question what AI tells them, and instead just blindly accept it"? I'd accept that as an answer, though we're both on the same level of anecdotal evidence in that case. Also anecdotal experience, but in my experience as a teacher, a lot of students just accept whatever AI tells them as absolute truth immediately. A lot of people are generally not in the business of questioning answers they got, they accept the first reasonably-sounding answer as truth. AI answers have all the trappings that people are used to from good sources (language, orthography, style), and it tends to say what you want to hear while sounding confident and competent. This is a very tempting combination. It is also correct often enough so for most people, it doesn't immediately fail in the habit-forming phase. For it to promote critical thinking skills, people would need to regularly get into situations where AI answers are incorrect, and where they notice that. I don't think that happens often enough for this to happen.
Huh, fair enough. Most of my personal use cases are for specific answers that provide direction, e.g. "What's the syntax for a switch-case in javascript" or "what are access/egress requirements for a dwelling in Alberta", where it is usually somewhat wrong but (in Google's case) provides references to where it got its answers from. The answers are also evidently wrong and/or falsifiable in those cases: if it gives you the wrong syntax for code, it just won't run.
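For reference, this is the kind of self-checking query being described: a minimal, correct switch-case in JavaScript (the function name and cases here are made up for illustration). If a model gets this syntax wrong, the code simply won't run, which is exactly the falsifiability being pointed out.

```javascript
// Correct switch-case syntax in JavaScript: each case needs a
// `break` (or `return`), otherwise execution falls through to the next case.
function describe(status) {
  switch (status) {
    case "ok":
      return "all good";
    case "warn":
    case "notice": // intentional fall-through: both map to one result
      return "check the logs";
    default:
      return "unknown status";
  }
}

console.log(describe("warn")); // → "check the logs"
```

A hallucinated variant (say, a missing `break` where fall-through wasn't intended) surfaces immediately as wrong behavior, unlike a hallucinated building-code citation.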
I also tend AWAY from using AI and would happily not use google's AI-assisted searching if they hadn't made that more difficult. Point taken, though!
|
Norway28765 Posts
On March 16 2026 23:00 LightSpectra wrote:Show nested quote +On March 16 2026 15:06 Liquid`Drone wrote: Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.
I do have issues with people posting chatgpt posts as arguments but that is different. LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).
Grok is an outlier and should not be trusted for anything.
I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic, didn't refuse to cooperate with the pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.
If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.
|
On March 17 2026 01:46 Billyboy wrote: I think a big problem is no one knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver, because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter it is really hard to know what is wrong and why.
Another big societal issue is how many people are using it to confirm their pop psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you with what additional questions you need to confirm your belief. Feel free to go into private mode and have two AI open and ask each whether a person you know is a narcissist or not. In one box act as though you believe they are and in the other not. Both times the bot will confirm your answer.
And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial. That's the frustrating part about all this, from Facebook to Youtube to LLMs. They didn't have to be this shit; it was intentional design choices by company leadership to maximize addictiveness that made them shit. It's not inherent to AI that it has to be flattering and sycophantic; we're making them that way on purpose and most people can absolutely not deal with it.
I'm reminded of Boris Johnson being extremely happy with the meaningless praise and validation he's getting from ChatGPT:
+ Show Spoiler +
It's not just the average joe falling for it. DOGE used it to cut research grants, the White House used it to tariff penguins, RFK Jr used it to issue medical advice with hallucinated sources, police used it to target the wrong people. We were already having massive problems with lack of accountability (due to human preferential treatment and favor-trading) and this adds another layer of non-accountability, "it's not my fault, the crystal ball said it was brilliant".
|
On March 17 2026 02:06 Liquid`Drone wrote:Show nested quote +On March 16 2026 23:00 LightSpectra wrote:On March 16 2026 15:06 Liquid`Drone wrote: Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.
I do have issues with people posting chatgpt posts as arguments but that is different. LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect). Grok is an outlier and should not be trusted for anything. I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic, didn't refuse to cooperate with the pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense. If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosyntesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic. 
I use Claude for work and sure it's a useful instrument that helps me fix things and saves me some time, but while I appreciate that Anthropic had a moral red line (for now) in regards to the Pentagon request, that doesn't make them a force for good; in every other matter they've acted exactly like OpenAI: stealing personal data, selling bullshit hype about consciousness and solving all the world's greatest problems when their model is capable of 0 innovation just like the others, intentional addiction mechanics and unnecessary sycophancy.
I'm incapable of reading/hearing the phrase "You’re absolutely right!" without rolling my eyes at this point.
|
United States43681 Posts
|
On March 17 2026 02:06 Liquid`Drone wrote: Grok is an outlier and should not be trusted for anything.
I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic, didn't refuse to cooperate with the pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.
I'm glad you're against xAI and OpenAI for ethical reasons, but surely you're aware that Google (Gemini), Microsoft (CoPilot), etc. are all guilty of innumerable evil things as well? Anthropic could potentially be the odd one out, but it seems naive to think they wouldn't do grossly unethical things if they thought they could get away with it, especially when they need an upper hand against the aforementioned megacorps. There's a good chance they'd simply get bought out at some point too.
On March 17 2026 02:06 Liquid`Drone wrote: If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosyntesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.
Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.
|
Yeah, the bias reinforcement is an even bigger problem than social media echo chambers and information bubbles, and stacking one on top of the other makes everything even more alarming.
BB's observation on psychological advice is also something that worries the fuck out of me. I relatively recently caught both my dad and mom using it, for different purposes, but in both cases to "make arguments" based on "well, AI thinks so, so it must be true". Explaining to older people what AI is, how it works, and that it shouldn't always be trusted is very hard; they never even got immunized against social media. The older generations are absolutely not ready for this.
When it comes to AI hallucinations, I saw LS's 40% number (with a very important "up to" caveat) and reflexively wanted to call him out for being a luddite who confidently shares outdated information, but it turns out that the number is still around there.
Ironically, Grok seems to be the one with the least hallucinations. To me that kind of makes sense, because Grok seems to be the one that cares the least about costs, so it double-checks its stuff more often than the other "freemium" ones:
Of course, there is a lot of nuance: this study was done on older models, and it was designed to test their worst instincts. These models have reward systems where answers like "I don't know" are graded very negatively, and the study put them in a situation where they had to choose between saying that or lying. Nevertheless, combine that with the other reward loop (AI being trained on human interactions, humans liking flattery, flattery making them spend more time with it, and those longer interactions in turn becoming a larger chunk of the training data) and you have a very big problem.
I can say, after using AI, both self-hosted and frontier models, for the last 3 years, that the latest models, like Opus 4.6, are excellent, and I haven't caught it making stuff up yet. I'm sure it does, but as others said, I use it either for work or to "talk" to it to try to see if there is a ghost in the machine, so not exactly for looking up information, where the chances of hallucinations are highest.
|
Using an LLM to program is one of the few times it's not a supremely terrible idea, because bad programming leads to immediate error messages at the very least; it doesn't take long to find out where it fucked up. (Edit: I mean using it as an assistant to auto-generate small amounts of code at a time. If you vibe generate an entire application, that's entirely different and a cybersecurity nightmare.)
If you use it for legal advice, the error message will come in the form of court sanctions or an unwanted verdict, and nobody will care if you try to blame the LLM. If you use it to decide who to vote for, you won't find out you got played until after the elections are over. If you use it for medical advice, you won't get an error message until you're in the hospital or dead.
To say nothing of the actual psychosis of people claiming to have married their LLM or had a religious experience through it.
|
If you want broad understanding on subjects like photosynthesis or grammar or the industrial revolution, they're great. But if you get more and more interested and ask for specifics, they start to invent, and you're left double-checking all the specifics and asking for sources. Three questions in and they'll confuse who did what in a discovery, describe one important reaction as if it were a different one, and wholly invent details. Then you ask why the source they quote says nothing of the kind, and they spout vaguely about tokens and patterns and associations, followed by a "Sorry, you are correct, they are different things."
A funny example tangentially related to grammar was the period of months where ChatGPT would count two r's in strawberry. I had to verify this for myself late last year, and it confidently listed all three r's as proof that there were only two.
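For what it's worth, the deterministic check is a one-liner, which is the whole contrast with a model that predicts tokens rather than counting letters:

```javascript
// Counting letters is a deterministic operation, not a prediction:
// "strawberry" contains exactly three r's.
const rCount = [..."strawberry"].filter((ch) => ch === "r").length;
console.log(rCount); // → 3
```

An LLM has no such counting procedure built in; it only has statistical associations over tokens, which is why it could "show" all three r's while asserting there were two.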
|
Northern Ireland26367 Posts
On March 17 2026 01:46 Billyboy wrote: I think a big problem is no one’s knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver. Because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter it is really hard to know what is wrong and why.
Another big societal issue is how many people are using it to confirm their pop psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you with what additional questions you need to confirm your belief. Feel free to go into private mode and have two AI open and ask each about if a person you know is a narcissist or not. In one box act as though you believe they are and in the other not. Both times the bot will confirm you answer.
And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial. Aye, I don’t even think we’ve collectively properly adjusted to the changes social media brought into society yet, the coming epoch I fear may look like that only on crack.
Any potentially transformative technology does tend to bring problems with it, even if it’s a net positive, just how these things go.
One of my main bones of contention is the folks pushing this aren’t even really trying to grapple with them, I mean by and large they do not care at all. It’s not that they tried to anticipate potential issues and lacked perfect foresight or whatever, there seemingly isn’t any mental energy put into anticipation much less mitigation.
I don’t fundamentally hate the underlying tech or whatever, I’m not a Luddite in that sense but a whole bunch of stuff surrounding it really fundamentally stinks.
Ungodly amounts of copyright infringement? Oh well. Deepfake porn? Oh my well. The rather obvious potential to add even further to political and cultural misinformation? Oh well.
Like there’s no concern for any such things that are pretty egregious, much less the more complex tradeoffs.
Person A may find chatting to an LLM useful for whatever reason and it benefits them somehow, whereas Person B may pay for some AI waifu that validates some pretty awful life choices for them. That kinda thing gets a bit more complex and a provider can plausibly say that how people use their product isn’t 100% their responsibility. Hell you can go as far back as alcohol for such a tradeoff, many people enjoy it with few real ill-effects, but you still get alcoholics.
I’ve a bit more sympathy when we get into such areas, but again to stress, there’s seemingly no concern whatsoever for any of it. And that greatly concerns me.
|
Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages.
|
United States43681 Posts
On March 17 2026 02:55 LightSpectra wrote: Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages. They weren't skilled workers. They had been skilled workers before a machine was created that made their skill obsolete. The new skilled workers were the people who could maintain the machines. Being able to do it by hand wasn't a skill at that point.
The lesson of the luddites is that government intervention is needed to prevent regional economic dislocation resulting from industrial shifts. The same lesson as we see with the decline of coal mining etc. The invisible hand seeks optimal efficiency without regard for the broader social consequences; the government needs to take the money from some of those efficiency gains and use it to clean up after the hand.
|
Northern Ireland26367 Posts
On March 17 2026 02:55 LightSpectra wrote: Just fyi, the Luddites weren't anti-technology because they believed in simple living like the Amish, they were proto-Marxists that destroyed machinery because industrialists were replacing skilled workers in order to reduce wages. I am aware that the Luddites were more concerned with the potential deleterious impacts of certain technologies on the social fabric than being actually anti-technology or development.
Which would actually give me some commonality there, but it’s not how the invocation is colloquially understood these days.
It is certainly something that’s interesting to know and consider for sure though, especially in the modern context.
Sure, within specific industries there’s been technological disruption for forever. But in terms of the general labour force, ‘AI’ is probably the biggest potential transformative force since initial industrialisation way back.
|
On March 17 2026 02:27 LightSpectra wrote: Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.
If I am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data.
LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input.
Which makes them a very bad source for knowledge.
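A toy sketch of that "statistically most likely combination of words" idea (the corpus and names here are invented for illustration; real models use learned probabilities over tokens and vastly more context, but the principle of prediction without any model of truth carries over):

```javascript
// Toy next-word predictor: pick the word that most often followed
// the current word in a (tiny, made-up) training corpus.
const corpus = "the cat sat on the mat the cat ate the fish".split(" ");

// Build a frequency table of which word follows which.
const follows = {};
for (let i = 0; i < corpus.length - 1; i++) {
  const cur = corpus[i];
  const next = corpus[i + 1];
  follows[cur] = follows[cur] || {};
  follows[cur][next] = (follows[cur][next] || 0) + 1;
}

// "Generate" by always taking the most frequent continuation.
// Note there is no notion of whether the answer is *true*,
// only of what was most common in the training data.
function nextWord(word) {
  const options = follows[word] || {};
  return Object.keys(options).sort((a, b) => options[b] - options[a])[0];
}

console.log(nextWord("the")); // → "cat" (it followed "the" most often)
```

Ask this thing "what sat on the mat?" and it has no facts to consult, only frequencies, which is the improv-theater point in miniature.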
|
On March 17 2026 01:44 Dan HH wrote:Show nested quote +On March 16 2026 17:02 GreenHorizons wrote:On March 16 2026 03:09 Gorsameth wrote:On March 16 2026 01:55 GreenHorizons wrote: There's probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.
There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.
(EDIT: Advertising is rather uniquely central to the US economy.)
That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration. And half the S&P 500 is an AI bubble with little purpose and no financial viability, The economy is utterly fucked either way. NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved. If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us. You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more.
As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%.
https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/
EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers.
|
Northern Ireland26367 Posts
On March 17 2026 03:03 Simberto wrote:Show nested quote +On March 17 2026 02:27 LightSpectra wrote: Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias. If i am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data. LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input. Which makes them a very bad source for knowledge. Try telling people how it actually works, they’ll outright not believe you. Even if you’ve education in a tangentially connected domain.
I know what the word ‘the’ means and where to stick it. An LLM does not know the former, and on the latter is just making a best guess based on probability, albeit a solid guess based on uncountable amounts of prior texts.
What I don’t understand is the reticence by some to take this crude explanation at face value. To someone who doesn’t know in pretty decent detail how computers fundamentally work, I mean, yeah, it can sound mental. But what’s the alternative explanation? Magic?
|
United States43681 Posts
GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. The accounting definition of assets has almost no relevance to the valuation of a company.
|
On March 17 2026 03:16 KwarK wrote:Show nested quote +On March 17 2026 03:09 GreenHorizons wrote:On March 17 2026 01:44 Dan HH wrote:On March 16 2026 17:02 GreenHorizons wrote:On March 16 2026 03:09 Gorsameth wrote:On March 16 2026 01:55 GreenHorizons wrote: There's probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.
There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.
(EDIT: Advertising is rather uniquely central to the US economy.)
That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration. And half the S&P 500 is an AI bubble with little purpose and no financial viability, The economy is utterly fucked either way. NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved. If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us. You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more. As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%. https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers. GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. 
The accounting definition of assets has almost no relevance to the valuation of a company.
Fair enough, but I sense your personal animosity against me and personal familiarity with the subject matter is impinging on your recognition of my point (which obviously, given your expertise, isn't specifically the "valuation of a company").
I should have used different data to more effectively make my point. The point simply being that the US economy isn't driven by making cars and such anymore (I think most people get this?).
Even these early AI agents (typically with human assistance still) are reasonably capable of generating revenue, including by doing ad arbitrage/fraud. They can then turn that revenue into a subscription to their own AI services to fund buying more compute to generate more ads for AI subscription services as a rough example.
I should also mention I don't literally mean it is an "infinite money glitch"; that's sarcasm.
On March 17 2026 03:15 WombaT wrote:Show nested quote +On March 17 2026 03:03 Simberto wrote:On March 17 2026 02:27 LightSpectra wrote: Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias. If i am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data. LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input. Which makes them a very bad source for knowledge. + Show Spoiler +Try telling people how it actually works, they’ll outright not believe you. Even if you’ve education in a tangentially connected domain.
I know what the word ‘the’ means and where to stick it. An LLM does not know the former, and on the latter is just making a best guess based on probability, albeit a solid guess based on uncountable amounts of prior texts.
What I don’t understand is the reticence by some to take this crude explanation on face value. To someone who doesn’t know much about how computers fundamentally work in pretty decent detail I mean, yeah it can sound mental. But what’s the alternative explanation? Magic?
You ever see a scene where someone from the past sees a TV?
|
On March 17 2026 02:06 Liquid`Drone wrote:Show nested quote +On March 16 2026 23:00 LightSpectra wrote:On March 16 2026 15:06 Liquid`Drone wrote: Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.
I do have issues with people posting chatgpt posts as arguments but that is different. LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect). Grok is an outlier and should not be trusted for anything. I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic, didn't refuse to cooperate with the pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense. If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosyntesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.
There are many upsides to using genAI tools. I use it regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.
Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?
My PhD students don't read papers, they read AI summaries of papers. In a scientific context, this is bad because to produce the summary, the AI dumbs down the content and gives results that are inaccurate. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.
My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, this, to me, is a serious problem.
|
On March 17 2026 03:37 GreenHorizons wrote:Show nested quote +On March 17 2026 03:16 KwarK wrote:On March 17 2026 03:09 GreenHorizons wrote:On March 17 2026 01:44 Dan HH wrote:On March 16 2026 17:02 GreenHorizons wrote:On March 16 2026 03:09 Gorsameth wrote:On March 16 2026 01:55 GreenHorizons wrote: There's probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.
There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.
(EDIT: Advertising is rather uniquely central to the US economy.)
That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration. And half the S&P 500 is an AI bubble with little purpose and no financial viability. The economy is utterly fucked either way. NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved. If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us. You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more. As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%. https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/ EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers. GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. 
The accounting definition of assets has almost no relevance to the valuation of a company. Fair enough, I did see it mentioned on Forbes, but I sense your personal animosity against me and personal familiarity with the subject matter are impinging on your recognition of my point (which obviously, given your expertise, isn't specifically the "valuation of a company"). I should have used different data to more effectively make my point. The point simply being that the US economy isn't driven by making cars and such anymore (I think most people get this?). Even these early AI agents (typically with human assistance still) are reasonably capable of generating revenue, including by doing ad arbitrage/fraud. They can then turn that revenue into a subscription to their own AI services to fund buying more compute to generate more ads for AI subscription services, as a rough example. I should also mention I don't literally mean it is an "infinite money glitch", that's sardonic sarcasm. Forbes has become a blogging host. They're not a reputable website.
You're right on one of the causes, the shift of heavy industry out of the US. Some industries are more plant heavy than others. To make steel you need a giant foundry, for example. To make ships you need a dockyard. There are industries in which a lot of the capital tied up in generating the profits is big physical assets that the accounting rules let you capitalize. Then there are industries that don't use as much plant in the creation of revenue. For example, software companies won't have as high a % of their asset value in factories or machinery and will have a higher % of their value in patents, trademarks, licenses etc.
But the other more important cause is mergers and acquisitions creating our favourite intangible, "goodwill". Let's say that a company has something not on the balance sheet but that it uses to generate a lot of profit. Customer relationships built up over decades, excellent brand recognition, a loyal and skilled workforce, that kind of thing.
A larger company wants to absorb it and they will negotiate a price based on the expected cash flow which makes sense. If they were to come in and say "well your building is worth $1m so we'll pay $1.1m, that's a good price, right?" then the current owners would laugh and explain that it isn't the building that is being sold, it is the business that resides in the building that is being sold. Everyone understands this and the business generates $1m in profit per year and so they come up with a sales price of $10m in this example.
The accounting treatment here for the larger company is:
PPE net of depreciation $1m
Goodwill $9m
Cash -$10m
They've spent $10m cash and they now have a business that is worth $10m, but $9m of that $10m value is goodwill. It's intangible. It exists and has value because there's a building that spits out a shitload of cash every year and everyone would agree that that's something very valuable, but it's not like the building is made up of super valuable bricks.
Then another company buys out that company and it compounds. And once on the books goodwill stays, pretty much forever.
The S&P 500 is, almost by definition, going to be made up of companies that buy a lot of other companies. The analysis is comparable to making a hundred bricks and graphing the age of the bricks over time. They're measuring the same thing on the X and Y axis.
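To make the arithmetic in that acquisition example concrete, here's a minimal sketch. The $1m/$10m/$9m figures are the hypothetical ones from the post, and `purchase_allocation` is just an illustrative helper, not real accounting software:

```python
# Illustrative only: splits a hypothetical acquisition price into
# tangible assets and goodwill, as in the example above.

def purchase_allocation(price, tangible_assets):
    """Everything paid above the tangible assets lands on the books as goodwill."""
    goodwill = price - tangible_assets
    return {"PPE": tangible_assets, "Goodwill": goodwill, "Cash": -price}

# Target earns $1m/yr and owns a $1m building; buyer pays ~10x earnings.
entry = purchase_allocation(price=10_000_000, tangible_assets=1_000_000)
print(entry)  # {'PPE': 1000000, 'Goodwill': 9000000, 'Cash': -10000000}

# If the combined company is later bought for, say, $30m, the new buyer
# books even more goodwill on top: the intangible share compounds.
second = purchase_allocation(price=30_000_000, tangible_assets=1_000_000)
print(second["Goodwill"])  # 29000000
```

None of the goodwill is "real" value in bricks and machinery; it's on the books only because the business reliably spits out cash, which is exactly why a raw intangible-vs-tangible ratio says little on its own.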