US Politics Mega-thread - Page 5563

Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please provide a very brief summary on what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
Fleetfeet
Profile Blog Joined May 2014
Canada2718 Posts
March 16 2026 18:59 GMT
#111241
On March 17 2026 03:37 GreenHorizons wrote:
On March 17 2026 03:16 KwarK wrote:
On March 17 2026 03:09 GreenHorizons wrote:
On March 17 2026 01:44 Dan HH wrote:
On March 16 2026 17:02 GreenHorizons wrote:
On March 16 2026 03:09 Gorsameth wrote:
On March 16 2026 01:55 GreenHorizons wrote:
There's probably some unforeseen economic impacts of removing bots (and now AI agents) when we consider they make up about half of the internet traffic most ads are metric'd off of.

There's basically a centi-billion dollar industry (without counting the platforms themselves really) in arbitraging (frauding) ad engagement by buying fake engagement and selling it to advertisers.

(EDIT: Advertising is rather uniquely central to the US economy.)

That probably has an unrecognized impact on the culture/sociology of the humans on (and off) the internet that is worthy of consideration.
And half the S&P 500 is an AI bubble with little purpose and no financial viability,

The economy is utterly fucked either way.


NVIDIA pays Meta/X/etc to generate AI enhanced advertisements, the AI learns the most effective ads target AI bots, the AI bots learn the best ads are AI generated, Meta/X/etc needs to buy more NVIDIA stuff to handle the ever increasing AI traffic. Infinite money glitch achieved.

If ads don't get converted to sales they get cut pretty quickly. Naturally, the solution is to give AI bots a stipend to occasionally order products. AI bots might get basic income before humans, if they become the consumers there's not much need for us.
You're still thinking about ads for tangible things for humans to use. You might not have noticed, but that's not what "the US economy" (as most people imagine it) really is any more.

[image: Ocean Tomo chart of tangible vs. intangible asset share of S&P 500 market value]


As demonstrated in the S&P 500 Index shown above, the composition of corporate value has undergone a fundamental transformation over the past five decades. In 1975, tangible assets—property, plant, equipment, inventory, and other physical capital—represented 83% of the market value of companies comprising the S&P 500 index, with intangible assets accounting for only 17%. By the end of 2025, this relationship had completely inverted: intangible assets now constitute approximately 92% of S&P 500 market capitalization, while tangible assets have been reduced to a mere 8%.


https://oceantomo.com/insights/ocean-tomo-releases-2025-intangible-asset-market-value-study-results/

EDIT: On the LLMs as learning tools part, this sounds a lot like navigating the creation of Wikipedia. To me, the obvious problem is that we're objectively already "paperclipping" ourselves with data centers.


GH, that tangible vs intangible assets analysis was created by an idiot strictly for the use of idiots. The accounting definition of assets has almost no relevance to the valuation of a company.


Fair enough, but I sense your personal animosity against me and personal familiarity with the subject matter is impinging on your recognition of my point (which obviously, given your expertise, isn't specifically the "valuation of a company")

I should have used different data to more effectively make my point. The point simply being that the US economy isn't driven by making cars and such anymore (I think most people get this?).

Even these early AI agents (typically with human assistance still) are reasonably capable of generating revenue, including by doing ad arbitrage/fraud. They can then turn that revenue into a subscription to their own AI services to fund buying more compute to generate more ads for AI subscription services as a rough example.

I should also mention I don't literally mean it is an "infinite money glitch", that's sardonic sarcasm.

On March 17 2026 03:15 WombaT wrote:
On March 17 2026 03:03 Simberto wrote:
On March 17 2026 02:27 LightSpectra wrote:
Hallucinations mean the LLM will make up information if it can't supply a good answer for what the user is asking for. Depending on how you test it, this was found to happen with up to 40% of their answers as of last year. Even if it's improved, would you really want to trust something that's wrong 20%, even 10% of the time? That's a higher error rate than Wikipedia or public domain textbooks and encyclopedias.


If I am not completely mistaken, it is much worse than this. LLMs don't "make up information". They don't actually interact with the topics they talk about on an information level at all. They simply give you the combination of words that is the statistically most likely answer to your question according to their training data.

LLMs have no concept of truth or knowledge. They are simply doing improv theater based on your input.

Which makes them a very bad source for knowledge.

Try telling people how it actually works, they’ll outright not believe you. Even if you’ve education in a tangentially connected domain.

I know what the word ‘the’ means and where to stick it. An LLM does not know the former, and on the latter is just making a best guess based on probability, albeit a solid guess based on uncountable amounts of prior texts.

What I don’t understand is the reticence by some to take this crude explanation at face value.
To someone who doesn’t know in decent detail how computers fundamentally work, I mean, yeah, it can sound mental. But what’s the alternative explanation? Magic?


You ever see a scene where someone from the past sees a TV?


This doesn't fundamentally seem like something I expect you to disagree with, outside of the end result being wealth for corporations and not wealth for everyone. If AI could infinite money glitch and we could 'steal' that infinite money for UBI, I assume you'd be on board? (depending of course on how much suffering the 'money glitch' generates)
Dan HH
Profile Joined July 2012
Romania9207 Posts
March 16 2026 19:15 GMT
#111242
On March 17 2026 02:50 WombaT wrote:
On March 17 2026 01:46 Billyboy wrote:
I think a big problem is no one knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver, because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter it is really hard to know what is wrong and why.

Another big societal issue is how many people are using it to confirm their pop psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you with what additional questions you need to confirm your belief. Feel free to go into private mode and have two AI open and ask each whether a person you know is a narcissist or not. In one box act as though you believe they are and in the other not. Both times the bot will confirm your answer.

And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial.

Aye, I don’t even think we’ve collectively properly adjusted to the changes social media brought into society yet, the coming epoch I fear may look like that only on crack.

Any potentially transformative technology does tend to bring problems with it, even if it’s a net positive, just how these things go.

One of my main bones of contention is the folks pushing this aren’t even really trying to grapple with them, I mean by and large they do not care at all. It’s not that they tried to anticipate potential issues and lacked perfect foresight or whatever, there seemingly isn’t any mental energy put into anticipation much less mitigation.

I don’t fundamentally hate the underlying tech or whatever, I’m not a Luddite in that sense but a whole bunch of stuff surrounding it really fundamentally stinks.

Ungodly amounts of copyright infringement? Oh well. Deepfake porn? Oh my well. The rather obvious potential to add even further to political and cultural misinformation? Oh well.

Like there’s no concern for any such things that are pretty egregious, much less the more complex tradeoffs.

Person A may find chatting to an LLM useful for whatever reason and it benefits them somehow, whereas Person B may pay for some AI waifu that validates some pretty awful life choices for them. That kinda thing gets a bit more complex and a provider can plausibly say that how people use their product isn’t 100% their responsibility. Hell you can go as far back as alcohol for such a tradeoff, many people enjoy it with few real ill-effects, but you still get alcoholics.

I’ve a bit more sympathy when we get into such areas, but again to stress, there’s seemingly no concern whatsoever for any of it. And that greatly concerns me.

We're living in a Generalized Wanking Era.

Why do orgasms exist and feel good? It's an evolutionary incentive to make us more likely to have offspring. But we and all apes and dolphins and dogs and otters and some others used our big mammal brains to find ways to trick our bodies to just give us the reward by wanking or humping random things.

We are now very rapidly tricking every other incentive built into our brain chemistry in the same way.

Boredom exists to make us seek a productive task that will increase our chance of survival. We get dopamine hits when we acquire new information or finish a chore that needs to be done. Do daily quests in a mobile game need to be done? No. Are a hundred 20-second Tiktoks useful new information? No. Ape brain doesn't know the difference though.

We seek validation and bonding because we are social creatures and being part of a group increases our survival odds, being well-regarded in our group even more so. Our brain releases endorphins after receiving those things, ape brain doesn't know the praise from your anti-vax Facebook group or the memes Discord channel or the compliment bot LLM calling you insightful doesn't increase social safety.

Acquiring new things once again triggers the reward center because you guessed it, having new tools or extra food helps your survival odds. You typed a noun that can be a purchasable item? Well, here's a week of non-stop bombardment with ads for that type of item. You can buy now and pay later, or pay in installments, or get an instant credit, please bro just buy it you won't even feel the financial hit I promise. Ape brain is convinced your new dinosaur light-up Skechers shoes might help you evade a sabretooth or attract a mate.

Fear of missing out? It hasn't rained in quite a while and your usual water source is dried out, the group plans to explore further than ever before and hope to find a new one, you see some clouds in the distance. It might rain today or the clouds might go in a different direction, maybe the group will find water or maybe they'll run into unnecessary danger. But there's something comforting about taking a risk with them rather than doing nothing. Fast forward to the Generalized Wanking Era, your mate bet at the start of the season that Arsenal finally win another title this year, he also mined bitcoin back in the day, you're such a fool for never getting an easy payday. Here's fifty thousand ads for betting sites and shitcoin platforms, we'll even give you a lil something as a sign up bonus.

You get the idea, I'm aware most of it isn't new, overeating has been an issue for many decades, organized religion has used social validation and the threat of social exclusion for ages, get rich quick schemes have been around forever. But all new technology from ecommerce to social media to LLMs is specifically crafted around this "trick your stupid body to give you the reward without doing the work the reward was supposed to incentivize" meta and takes it to ridiculous, never-before-seen heights.
GreenHorizons
Profile Blog Joined April 2011
United States23948 Posts
March 16 2026 19:17 GMT
#111243
On March 17 2026 03:59 Fleetfeet wrote:


You ever see a scene where someone from the past sees a TV?


This doesn't fundamentally seem like something I expect you to disagree with, outside of the end result being wealth for corporations and not wealth for everyone. If AI could infinite money glitch and we could 'steal' that infinite money for UBI, I assume you'd be on board? (depending of course on how much suffering the 'money glitch' generates)

That's a pretty important distinction generally (forget the AI part), right?

I mean if we get cold-fusion, then a "The Machine Stops" future seems more plausible and desirable than the capitalist hellscape we're hurtling toward. But no, it wouldn't really be my personal interpretation of something we should be striving for if that's what you're asking?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
LightSpectra
Profile Blog Joined October 2011
United States2575 Posts
March 16 2026 19:25 GMT
#111244
On March 17 2026 03:59 Fleetfeet wrote:
This doesn't fundamentally seem like something I expect you to disagree with, outside of the end result being wealth for corporations and not wealth for everyone. If AI could infinite money glitch and we could 'steal' that infinite money for UBI, I assume you'd be on board? (depending of course on how much suffering the 'money glitch' generates)


There's no such thing as an 'infinite money glitch,' it's more like paying off your credit card debts using other credit cards. People keep investing in LLM companies because they're hoping for extreme returns comparable to investing in Apple right before Steve Jobs' return. They're not profitable yet, but the companies selling hardware to AI companies, especially Nvidia, are getting disgustingly rich in the meantime, giving the illusion that LLMs are too big to fail.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
WombaT
Profile Blog Joined May 2010
Northern Ireland26785 Posts
March 16 2026 19:37 GMT
#111245
On March 17 2026 04:15 Dan HH wrote:
We're living in a Generalized Wanking Era. […]

Pretty much, also I may steal the phrase ‘generalised wanking era’ for my own purposes

To go back to my previous point: if x innovation brings muchos good, but with a downside of generalised wanking from a subset, I mean there’s a cost/benefit there. I feel the current epoch is rather actively encouraging endless masturbation
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Vivax
Profile Blog Joined April 2011
22317 Posts
Last Edited: 2026-03-16 20:05:49
March 16 2026 19:46 GMT
#111246
On March 17 2026 04:37 WombaT wrote:
Show nested quote +
On March 17 2026 04:15 Dan HH wrote:
On March 17 2026 02:50 WombaT wrote:
On March 17 2026 01:46 Billyboy wrote:
I think a big problem is no one’s knows what they don’t know. So if you use AI to do something you are an expert in, it can be a very powerful time saver. Because you can fairly accurately and quickly weed out what’s wrong. But if you don’t know the subject matter it is really hard to know what is wrong and why.

Another big societal issue is how many people are using it to confirm their pop psychology diagnosis of themselves or others in their life. It will always confirm what you think. It will even basically lead you with what additional questions you need to confirm your belief. Feel free to go into private mode and have two AI open and ask each about if a person you know is a narcissist or not. In one box act as though you believe they are and in the other not. Both times the bot will confirm you answer.

And it is doing that all the time in all sorts of topics because people think it’s a really smart friend who is impartial. And it is far from impartial.

Aye, I don’t even think we’ve collectively properly adjusted to the changes social media brought into society yet, the coming epoch I fear may look like that only on crack.

Any potentially transformative technology does tend to bring problems with it, even if it’s a net positive, just how these things go.

One of my main bones of contention is the folks pushing this aren’t even really trying to grapple with them, I mean by and large they do not care at all. It’s not that they tried to anticipate potential issues and lacked perfect foresight or whatever, there seemingly isn’t any mental energy put into anticipation much less mitigation.

I don’t fundamentally hate the underlying tech or whatever, I’m not a Luddite in that sense but a whole bunch of stuff surrounding it really fundamentally stinks.

Ungodly amounts of copyright infringement? Oh well. Deepfake porn? Oh my well. The rather obvious potential to add even further to political and cultural misinformation? Oh well.

Like there’s no concern for any such things that are pretty egregious, much less the more complex tradeoffs.

Person A may find chatting to an LLM useful for whatever reason and it benefits them somehow, whereas Person B may pay for some AI waifu that validates some pretty awful life choices for them. That kinda thing gets a bit more complex and a provider can plausibly say that how people use their product isn’t 100% their responsibility. Hell you can go as far back as alcohol for such a tradeoff, many people enjoy it with few real ill-effects, but you still get alcoholics.

I’ve a bit more sympathy when we get into such areas, but again to stress, there’s seemingly no concern whatsoever for any of it. And that greatly concerns me.

We're living in a Generalized Wanking Era.

Why do orgasms exist and feel good? It's an evolutionary incentive to make us more likely to have offspring. But we and all apes and dolphins and dogs and otters and some others used our big mammal brains to find ways to trick our bodies into just giving us the reward by wanking or humping random things.

We are now very rapidly tricking every other incentive built into our brain chemistry in the same way.

Boredom exists to make us seek a productive task that will increase our chance of survival. We get dopamine hits when we acquire new information or finish a chore that needs to be done. Do daily quests in a mobile game need to be done? No. Are a hundred 20-second TikToks useful new information? No. Ape brain doesn't know the difference though.

We seek validation and bonding because we are social creatures, and being part of a group increases our survival odds, being well-regarded in our group even more so. Our brain releases endorphins after receiving those things; ape brain doesn't know that the praise from your anti-vax Facebook group, the memes Discord channel, or the compliment-bot LLM calling you insightful doesn't increase social safety.

Acquiring new things once again triggers the reward center because you guessed it, having new tools or extra food helps your survival odds. You typed a noun that can be a purchasable item? Well, here's a week of non-stop bombardment with ads for that type of item. You can buy now and pay later, or pay in installments, or get an instant credit, please bro just buy it you won't even feel the financial hit I promise. Ape brain is convinced your new dinosaur light-up Skechers shoes might help you evade a sabretooth or attract a mate.

Fear of missing out? It hasn't rained in quite a while and your usual water source is dried out, the group plans to explore further than ever before and hope to find a new one, you see some clouds in the distance. It might rain today or the clouds might go in a different direction, maybe the group will find water or maybe they'll run into unnecessary danger. But there's something comforting about taking a risk with them rather than doing nothing. Fast forward to the Generalized Wanking Era, your mate bet at the start of the season that Arsenal finally win another title this year, he also mined bitcoin back in the day, you're such a fool for never getting an easy payday. Here's fifty thousand ads for betting sites and shitcoin platforms, we'll even give you a lil something as a sign up bonus.

You get the idea, and I'm aware most of it isn't new: overeating has been an issue for many decades, organized religion has used social validation and the threat of social exclusion for ages, get-rich-quick schemes have been around forever. But all new technology, from e-commerce to social media to LLMs, is specifically crafted around this "trick your stupid body to give you the reward without doing the work the reward was supposed to incentivize" meta and takes it to ridiculous, never-before-seen heights.

Pretty much, also I may steal the phrase ‘generalised wanking era’ for my own purposes

To go back to my previous point: if x innovation brings muchos good, but with a downside of generalised wanking from a subset, I mean there’s a cost/benefit there. It feels like the current epoch is rather actively encouraging endless masturbation


It's the dreadful environment caused by the poor management. Some dude in Pompeii also busted a quick nut before being covered in lava.

Mount Doom is giving it all right now.

GreenHorizons
Profile Blog Joined April 2011
United States23948 Posts
Last Edited: 2026-03-16 20:04:58
March 16 2026 19:55 GMT
#111247
On March 17 2026 04:37 WombaT wrote:
Show nested quote +
Pretty much, also I may steal the phrase ‘generalised wanking era’ for my own purposes

To go back to my previous point: if x innovation brings muchos good, but with a downside of generalised wanking from a subset, I mean there’s a cost/benefit there. It feels like the current epoch is rather actively encouraging endless masturbation

It's arguably been irretrievably, toxically masturbatory since the founding of "Mr Skin".

There's also an interesting related aside (one that connects to the LLMs-breaking-critical-thinking discussion) relating the founder's "bar trick" to the argument back in Socrates' day (technically before, with the Myth of Theuth he's referencing, I suppose) that books would ruin people's capacity for, and utilization of, memory as they transitioned away from oral history and such.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Liquid`Drone
Profile Joined September 2002
Norway28797 Posts
Last Edited: 2026-03-16 20:26:28
March 16 2026 20:25 GMT
#111248
Okay so like, AI/LLMs as a whole, obviously there are a ton of issues there, and I'm not saying that they'll end up being a societal good overall. The verdict isn't in yet, and there are certainly many potentially disastrous outcomes. I'm not gonna delve into those right now.

What I have issues with is mostly the idea that they are useless as sources. Yes, they will sometimes hallucinate, but like, I frequently have long conversations with bots (mostly ChatGPT historically, more Copilot lately, going to move on towards Claude) and the notion that they're giving misinformation all the time is just wrong. Additionally, I think LLMs have a fantastic - underutilized, because people use them wrongly - potential to be a tool to help people learn.

To give my background here - I teach English, Civics and History in a Norwegian high school. I'm also part of a research project where we're cooperating with the Norwegian University of Science and Technology, where our goal is 'determining how AI can best be used as a tool for learning'. When I use chatbots myself, I am mostly talking to the bots about one of these three subjects - English, civics or history - and these being subjects I teach, I am well versed in spotting errors. It's not that they don't happen - but when talking about a bigger topic, they're rare. Stuff like 'they hallucinate up to 40% of the time' makes it sound like 40% of the stuff they say is bullshit - or at least, that they'll end up spouting bullshit in upwards of 40% of conversations I have with them. But if I ask, for example, 'can you give me ten facts about World War 1 that I can use for a quiz', it's overwhelmingly likely that all ten facts given are commonly accepted as true among historians. I've seen complete gibberish answers given when I ask about, for example, mid-tier Norwegian football players, or d-tier celebrities, or myself. I've seen wrong answers given in one out of 40 questions created as a grammar exercise. And everybody knows you should be critical towards the information you get and double-check important stuff - but being critical doesn't mean dismissing everything by default. If someone uses the Gemini answer to back up an argument they're making, dismissing this because it's AI is a faulty approach. You can dismiss it if it's wrong - of course - but there's no reason for that to be your default reaction.

Then - AI used for teaching:
The fact is, AI as it is, is both a shortcut that enables students to quickly get an answer they can hand in - robbing them of any learning they should have gained through the effort of doing the work required to produce that answer - and a fantastic tool for learning for the students who use it correctly. While some people have thought that AI could be a source of 'democratization' of knowledge, giving everybody access to an assistant that never tires and that can explain stuff in a pedagogical manner, what we've actually seen is that the gap between the 'good' and 'bad' students actually increases because of AI. This isn't just because the bad students are becoming worse (because they ask the chatbot of their choice to answer a question for them, get a good answer, and think they've done a good job) - but also because the good students are becoming better. Smart students who use AI well learn more, faster, than they used to - and it helps them both with relevant facts and with connecting the dots.

One of the keys here - and this is an area where education so far has failed entirely, and which is the specific area I'm researching right now - is the importance of prompts. Students - with the occasional exception, when they have a teacher with a particular passion for learning about this - have mostly been left on their own with this new tool. But for example, I've supplied my students with the following prompt, which they've copy-pasted and then used:
+ Show Spoiler +

You should act as a Socratic tutor in social studies for a Vg1 (first-year upper secondary) student.

Topic: What is the most important difference between the Nordic and the Anglo-American welfare model?

I have some prior knowledge, but I need to understand and review the topic better before an assessment.

Rules:

Ask one question at a time

Do not give long explanations

Do not give the full answer unless I ask for it

If my answer is wrong or incomplete, help me move forward with questions or small hints

Ask me to explain things in my own words

After a few rounds, you may give a short summary


And while I don't have a huge dataset or anything, the feedback I got from students - both in terms of them writing a short note containing their reflections on their own learning process, and in terms of them showcasing that they had learned the key differences between the Nordic and Anglo-American welfare models - is highly promising. I've also done a fair amount of work with refugees, and the ones who master the language the fastest tend to be students who use a ton of AI - not 'can you translate this for me please', but 'Can you pretend that you are a native Norwegian? I would like for us to talk about x, and then, when you see that I am frequently making a particular mistake, can you explain what I'm doing wrongly in Arabic/Ukrainian?'

I'm not some 'oh, AI is the savior' type, but man, there's tons of potential - both ways - and I see both, every day, in my working life. And whether we like it or not, it's there; no point in pretending it's not.
Moderator
Liquid`Drone
Profile Joined September 2002
Norway28797 Posts
March 16 2026 20:36 GMT
#111249
On March 17 2026 03:46 EnDeR_ wrote:
Show nested quote +
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic - didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


There are many upsides of using genAI tools. I use it regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.

Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?

My PhD students don't read papers, they read AI summaries of papers. In a scientific context this is bad because, to produce the summary, the AI dumbs down the content and gives inaccurate results. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.

My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, this, to me, is a serious problem.


My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis which she herself did not have the skillset to do. Then, she's gotten stuff double and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself.

For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government" would be awful - but for 'gun ownership rates for different countries', it's fine.
Moderator
Billyboy
Profile Joined September 2024
1719 Posts
March 16 2026 21:05 GMT
#111250
For the purpose of a source on this forum, I don’t mind it as long as it is clearly labeled as AI. It’s when people pretend it’s what they said that it gets problematic. Someone here can always fact-check the AI. I also think if it’s straight from prompts, those should be disclosed as well.
EnDeR_
Profile Blog Joined May 2004
Spain2879 Posts
March 16 2026 21:34 GMT
#111251
On March 17 2026 05:36 Liquid`Drone wrote:
Show nested quote +

My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis which she herself did not have the skillset to do. Then, she's gotten stuff double and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself.

For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government" would be awful - but for 'gun ownership rates for different countries', it's fine.


Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new but I digress.

To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership in different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.

you're more out of place than a croissant on a plate of velvet crabs
GreenHorizons
Profile Blog Joined April 2011
United States23948 Posts
March 16 2026 21:47 GMT
#111252
On March 17 2026 06:34 EnDeR_ wrote:
Show nested quote +

Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new but I digress.

To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership in different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it.
And people don't check. I don't think this is a good direction of travel.


In the age of Trump it feels like young people are having a materially hard time connecting the necessity of actually knowing/stating facts with reaching the pinnacles of success in the US. Much like the "written vs oral history" into "LLM vs critical thinking" debate, this isn't entirely new (it was in many ways at the heart of the "divine right" debate), but it is a pressing problem that will find resolution whether we choose to shape it consciously or not.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Dan HH
Profile Joined July 2012
Romania9207 Posts
March 16 2026 21:50 GMT
#111253
On March 17 2026 05:25 Liquid`Drone wrote:
Then - AI used for teaching:
The fact is, AI as it is, is both a shortcut that enables students to quickly get an answer they can hand in - robbing them of any learning they should have gained through the effort of doing the work required to produce that answer - and a fantastic tool for learning for the students who use it correctly. While some people have thought that AI could be like a source of 'democratization' of knowledge, giving everybody access to an assistant that never tires, that can explain stuff in a pedagogical manner, what we've actually seen is that the gap between the 'good' and 'bad' students actually increases because of AI. This isn't just because the bad students are becoming worse (because they ask the chatbot of their choice to answer a question for them and then they get a good answer and think they've done a good job) - but also because the good students are becoming better. Smart students who use AI well, learn more, faster, than they used to do - and it helps them both with relevant facts and connecting the dots.

One of the keys here - and this is an area where education so far has failed entirely, and which is the specific area I'm researching right now - is the importance of prompts. Students - with the occasional exception because they have a teacher with a particular passion for learning about this - have mostly been left to their own devices with this new tool. But for example, I've supplied my students with the following prompt that they've copy pasted and then used:
+ Show Spoiler +

You should act as a Socratic tutor in social studies for a Vg1 (first-year upper secondary) student.

Topic: What is the most important difference between the Nordic and the Anglo-American welfare model?

I have some prior knowledge, but I need to understand and review the topic better before an assessment.

Rules:

Ask one question at a time

Do not give long explanations

Do not give the full answer unless I ask for it

If my answer is wrong or incomplete, help me move forward with questions or small hints

Ask me to explain things in my own words

After a few rounds, you may give a short summary


And while I don't have a huge dataset or anything, the feedback I got from students - both in terms of them writing a short note containing their reflections on their own learning process - and in terms of them showcasing that they had learned the key differences between the Nordic and Anglo-American welfare model - is highly promising. I've also done a fair amount of work with refugees, and the ones who master the language the fastest tend to be students who use a ton of AI - not 'can you translate this for me please', but 'Can you pretend that you are a native Norwegian? I would like for us to talk about x, and then, when you see that I am frequently making a particular mistake, can you explain what I'm doing wrongly in Arabic/Ukrainian?'

I'm not some type of oh AI is the savior but man, there's tons of potential - both ways, and I see both, every day, in my working life. And whether we like it or not, it's there, no point in pretending it's not.

I appreciate the time you took to give us a little peek into your work specifically with AI used in learning.

From my own experience, people who were good at googling are good at using AI and vice versa. The number of people who are good at googling is low. I'd guess the percentage of people who are even aware search operators exist is in the single digits.
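(For anyone who hasn't met them: search operators are the small query filters that classic search engines support. A few of the widely supported Google-style ones, sketched below as plain query strings; exact behavior varies by engine.)

```python
# Common search operators, shown as example query strings.
# These are illustrative queries, not endorsements of any particular engine.
queries = {
    "exact phrase":      '"cognitive load" learning',
    "restrict to site":  'welfare model site:example.com',
    "exclude a term":    'jaguar speed -car',
    "restrict filetype": 'gun ownership rates filetype:pdf',
}

for what, q in queries.items():
    print(f"{what:>18}: {q}")
```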

With your background, I'm sure you're aware there has been a deluge of studies in the past year comparing depth of learning and cognitive load specifically between groups asked to research a topic using traditional search engines vs LLMs. And the results were invariably that the groups using LLMs used their brains less and retained less information.

Now, we can nitpick methodology, throwing a random subject to someone is one thing, whereas intentionally trying to learn something you're interested in is very different, sure. But we already have the internet itself as a precedent of how people went and used democratized information.

We know the difference between a tool that helps you do the thing and a tool that does the thing for you. Maps vs GPS is a well-researched example: GPS means you don't need to look for and memorize landmarks and build a mental map, which hinders your ability to learn to navigate unguided. Not a big deal - unless civilization falls in your lifetime, you won't need that ability. Many tasks require skills that you won't need again, or that you'll always be assisted with.

But being able to research, interpret, understand, and check the veracity of a text are essential skills. And my current impression and expectation is that LLMs will hinder them for the average user.

As I said in a previous post, AI isn't inherently bad or sycophantic; there are business decisions making them worse than they need to be. Almost all the ads for AI I've ever seen give banal use cases with some "woah, this thing will think for you" subtext. What's sold to investors is that this thing will do all the science and most of the labor for us, solve climate change, and cure cancer - we don't need research grants and education to become more productive, we need hardware farms. What's given to users is compliments and validation and answers that sound confident and authoritative.

To your point that it makes good students better and widens the gap: as we see in the world around us today, it's the median student that decides elections and the future of our species. If the effect of LLMs on the median student is to make them exert their brain less, that's a problem. I agree that the solution to that problem isn't to attempt to put the genie back in the box; we're all aware that's not an option, and I personally wouldn't take it even if it were.
JimmyJRaynor
Profile Blog Joined April 2010
Canada17509 Posts
Last Edited: 2026-03-16 22:40:21
March 16 2026 22:06 GMT
#111254
One pillar of the argument in favour of continuing to bludgeon various Middle East countries is that Israelis and Jews have a divine, God-given right to the land they are on. Ya well, it appears Israelis are voluntarily surrendering these rights. Birth rates are falling and emigration out of Israel is increasing sharply.

https://blogs.timesofisrael.com/emigration-from-israel-reaches-new-heights/
https://www.jpost.com/israel-news/article-881859
The study mentions that there will be a significant decrease in fertility rates by 2030, with secular Jewish women projected to have 1.7 children by then.

Among religious women, including traditional-religious women, fertility is projected to decline to about 2.3 children per woman, and among Haredi women, it is projected to decline to 4.3 children per woman, up to the year 2040.

On March 17 2026 06:50 Dan HH wrote:
I appreciate the time you took to give us a little peek into your work specifically with AI used in learning.

Don't use AI or any form of automated assistance before age 18.

Since the 1970s, the best way to learn math is without a calculator. The obvious extension of that is no AI at all. Want the cosine of 76 degrees? Approximate it using the tools mathematicians used for 87 bazillion years. Don't use a calculator. Stay off of screens and rely on printed materials only. You will crank out less "work product", however, your mind will develop properly.
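(For the curious, one such pencil-and-paper route: start from an exact value you know, cos 75° = (√6 − √2)/4, then step the remaining degree with the angle-sum identity and the small-angle approximations cos x ≈ 1 − x²/2 and sin x ≈ x. Python below only to check the arithmetic.)

```python
import math

# cos(76) = cos(75 + 1) = cos75*cos1 - sin75*sin1, with the 1-degree terms
# replaced by their small-angle approximations (x in radians).
deg = math.pi / 180
cos75 = (math.sqrt(6) - math.sqrt(2)) / 4   # exact value
sin75 = (math.sqrt(6) + math.sqrt(2)) / 4   # exact value
x = 1 * deg                                  # the leftover 1 degree, in radians
approx = cos75 * (1 - x**2 / 2) - sin75 * x

print(round(approx, 5), round(math.cos(76 * deg), 5))  # both print 0.24192
```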
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
Liquid`Drone
Profile Joined September 2002
Norway28797 Posts
March 16 2026 22:37 GMT
#111255
On March 17 2026 06:50 Dan HH wrote:
Show nested quote +


I don't disagree with any of this and I wholeheartedly agree with most of it. The aspect I think I am most worried about with AI is the extension of the GPS example you bring up - we know that our brains actually benefit from doing many of those small, menial tasks we've delegated to our smartphones, and I worry that AI ending up doing more and more of our more creative and rewarding work - not just the small menial tasks - will to a smaller or greater degree rob us of the human experience.

However, I think if you are engaged in the act of discussing a topic and you need some data to prove a point, you're already at a pretty high level of engagement. In that case, I'm not worried about using AI to give a quick data point rather than having to dig around for a reliable and trustworthy source. And as a teacher, the overarching goal of the research team I'm part of is basically to give teachers around Norway the toolbox to help the median student use it wisely.

Also - anecdotally - when ChatGPT was pretty new, I had given a group of students a history test where one of the questions was 'who is considered the father of liberalism'. This question ended up being a fun 'tell' - because the students who had used ChatGPT as their primary tool for preparing for the test all answered John Locke, while the ones who had used our default textbook as their primary tool for preparing answered John Stuart Mill. Either answer is fine (it was a mediocre question, but oh well) - but comparing the rest of the answers, students who went 'John Locke' had generally done a slightly better job overall, without giving me the impression they were harder workers.

This was a class full of good students. Teaching English or civics to vocational students going into construction, the overall impression is that AI has been a negative for their learning experience; but even then, I've given assignments that were major hits - that would make students who normally struggle to churn out a single paragraph of their own text in 90 minutes write more than a full page.

Moderator
GreenHorizons
Profile Blog Joined April 2011
United States23948 Posts
Last Edited: 2026-03-17 00:02:47
March 16 2026 23:26 GMT
#111256
Fun bit that recently came out, related to how the US/global economy functions, GPS, AI, and the limitations of LLMs vs Large Geospatial Models (LGMs). Turns out Pokémon Go players were unknowingly busy helping to build AI "world models" (currently) being sold to help autonomous robots navigate the real world.

Niantic Spatial has trained its model on 30 billion images captured in urban environments. In particular, the images are clustered around hot spots—places that served as important locations in Niantic’s games that players were encouraged to visit, such as Pokémon battle arenas. “We had a million-plus locations around the world where we can locate you precisely,” says McClendon. “We know where you’re standing within several centimeters of accuracy and, most importantly, where you’re looking.”

The upshot is that for each of those million locations, Niantic Spatial has many thousands of images taken in more or less the same place but from different angles, at different times of day, and in different weather conditions. Each of those images comes with detailed metadata that pinpoints where in space the phone was at the time it captured the image, including which way the phone was facing, which way up it was, whether or not it was moving, how fast and in which direction, and more.

The firm has used this data set to train a model to predict exactly where it is by taking into account what it is looking at—even for locations other than those million hot spots, where good sources of image and location data are scarcer.


www.technologyreview.com

EDIT: Presumably, they could also sell this information to Palantir and Northrop Grumman for ArsenalOS + Lattice OS
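(The trick the article describes - predicting where the camera is from what it sees - boils down to learning a mapping from image features to locations. A toy, purely illustrative sketch with invented feature vectors, using nearest-neighbour lookup where a real system would train a deep network on billions of photos:)

```python
import math

# Toy "visual positioning": given reference images with known camera
# locations, guess a new image's location from its visual features.
# Features here are invented 3-vectors; coordinates are arbitrary lat/lon.
REFERENCE = [
    ([0.9, 0.1, 0.0], (59.9139, 10.7522)),  # a plaza, one viewing angle
    ([0.8, 0.2, 0.1], (59.9139, 10.7522)),  # same plaza, another angle
    ([0.0, 0.1, 0.9], (63.4305, 10.3951)),  # a different landmark
]

def predict_location(features):
    """Return the location of the reference image closest in feature space."""
    _, location = min(REFERENCE,
                      key=lambda ref: math.dist(features, ref[0]))
    return location

print(predict_location([0.85, 0.15, 0.05]))  # matches the plaza images
```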



"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
WombaT
Profile Blog Joined May 2010
Northern Ireland26785 Posts
March 16 2026 23:50 GMT
#111257
On March 17 2026 05:25 Liquid`Drone wrote:
Okay so like, AI/LLMs as a whole, obviously there are a ton of issues there, and I'm not saying that they'll end up being a societal good overall. The verdict isn't in yet, and there are certainly many potentially disastrous outcomes. I'm not gonna delve into those right now.

What I have issues with, is mostly the idea that they are useless as sources. Yes, they will sometimes hallucinate, but like, I frequently have long conversations with bots (mostly chatgpt historically, more copilot lately, going to move on towards claude) and the notion that they're giving misinformation all the time is just wrong. Additionally, I think LLMs have a fantastic - underutilized because people use them wrongly - potential to be a tool to help people learn.

To give my background here - I teach English, Civics and History in a Norwegian high school. I'm also part of a research project where we're cooperating with the Norwegian University of Science and Technology, where our goal is 'determining how AI can best be used as a tool for learning'. When I use chatbots myself, I am mostly talking to the bots about one of these three subjects - English, civics or history, and these being subjects I teach, I am well versed in spotting errors. It's not that they don't happen - but when talking about a bigger topic, they're rare. Stuff like 'they hallucinate up to 40% of the time' makes it sound like 40% of the stuff they say is bullshit - or at least, that they'll end up spouting bullshit in upwards of 40% of conversations I have with them. But if I ask, for example, 'can you give me ten facts about world war 1 that I can use for a quiz', it's overwhelmingly likely that all ten facts given are commonly accepted as true among historians. I've seen complete gibberish answers given when I ask about, for example, mid-tier Norwegian football players, or d-tier celebrities, or myself; I've seen wrong answers being given in one out of 40 questions created as a grammar exercise; and everybody knows that you should be critical towards the information you get and double-check important stuff - but being critical doesn't mean dismissing everything by default. If someone uses the Gemini answer to back up an argument they're making, dismissing this because it's AI is a faulty approach. You can dismiss it if it's wrong - of course - but there's no reason for that to be your default reaction.

And while I don't have a huge dataset or anything, the feedback I got from students - both in terms of them writing a short note containing their reflections on their own learning process - and in terms of them showcasing that they had learned the key differences between the Nordic and Anglo-American welfare model - is highly promising. I've also done a fair amount of work with refugees, and the ones who master the language the fastest tend to be students who use a ton of AI - not 'can you translate this for me please', but 'Can you pretend that you are a native Norwegian? I would like for us to talk about x, and then, when you see that I am frequently making a particular mistake, can you explain what I'm doing wrongly in Arabic/Ukrainian?'

I'm not some type of oh AI is the savior but man, there's tons of potential - both ways, and I see both, every day, in my working life. And whether we like it or not, it's there, no point in pretending it's not.

Interesting, on the bolded what would you attribute that to based on your own anecdotal experience? Better proficiency or just a higher level of enthusiasm for the learning process?
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
WombaT
Profile Blog Joined May 2010
Northern Ireland26785 Posts
March 16 2026 23:55 GMT
#111258
On March 17 2026 07:06 JimmyJRaynor wrote:
Don't use AI or any form of automated assistance before age 18.

Since the 1970s, the best way to learn math is without a calculator. The obvious extension of that is no AI at all. Want the cosine of 76 degrees? Approximate it using the tools mathematicians used for 87 bazillion years. Don't use a calculator. Stay off of screens and rely on printed materials only. You will crank out less "work product", however, your mind will develop properly.

Do folks particularly care if your mind develops properly?

In a vacuum, I think working your brain out on the regular, yeah absolutely.

But school is about cranking out work product, as are most jobs. You’re just actively handicapping yourself to develop a skillset that absolutely can be rewarded, but rarely is unless it’s an exceptional one.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Acrofales
Profile Joined August 2010
Spain18291 Posts
March 17 2026 00:00 GMT
#111259
On March 16 2026 23:00 LightSpectra wrote:
Show nested quote +
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).

"Horrific environmental effects" is a bit hyperbolic there. Andrew Ng explained it quite well (link), but even if you reject the premise, you're not replacing the compute with nothing, as you would be for AI-slop TikTok videos, but with some other learning aid. I guarantee humans have a bigger carbon footprint than the compute for a few billion tokens a different human would use for education. Let alone the water used by humans - we're horribly inefficient water users.

Don't get me wrong, everything else you mentioned is fine, but the environmental-effects point is a ridiculous one.
LightSpectra
Profile Blog Joined October 2011
United States2575 Posts
March 17 2026 00:17 GMT
#111260
On March 17 2026 09:00 Acrofales wrote:
Show nested quote +


Data centers for AI are projected to use somewhere between 6.7% and 12% of the United States' electricity by 2028: https://www.belfercenter.org/research-analysis/ai-data-centers-us-electric-grid

I somehow doubt non-LLM learning aids are using that much.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.