US Politics Mega-thread - Page 5565

Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please provide a very brief summary of what it's about and what purpose it serves in the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
LightSpectra
Profile Blog Joined October 2011
United States2336 Posts
March 17 2026 11:45 GMT
#111281
A man in the USA was thrown in jail for 37 days for posting a meme saying "We have to get over it" after Charlie Kirk was shot.

https://www.theguardian.com/us-news/2025/dec/18/tennessee-charlie-kirk-meme-arrest-lawsuit

Despite there being more guns than people, I didn't see any armed insurrections to set him free.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
EnDeR_
Profile Blog Joined May 2004
Spain2832 Posts
March 17 2026 12:14 GMT
#111282
On March 17 2026 17:41 Liquid`Drone wrote:
Show nested quote +
On March 17 2026 06:34 EnDeR_ wrote:
On March 17 2026 05:36 Liquid`Drone wrote:
On March 17 2026 03:46 EnDeR_ wrote:
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic - didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


There are many upsides of using genAI tools. I use it regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.

Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?

My PhD students don't read papers, they read AI summaries of papers. In a scientific context this is bad, because the AI summary dumbs down the content and produces inaccurate results. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.

My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, that, to me, is a serious problem.


My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis which she herself did not have the skillset to do. Then, she's gotten stuff double and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself.

For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government" would be awful - but for 'gun ownership rates for different countries', it's fine.


Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new but I digress.

To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial, it should just be a number collected from reports, similar to how numbers for gun ownership for different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the white house website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.



But the AI summary does give sources. Incidentally - for whatever reason - when I just googled 'how many immigrants did ICE deport in 2025' - there wasn't even an AI summary created for me, and I just got a bunch of different sources. However, asking ChatGPT that very same question, I get a pretty nuanced answer - which links to four different sources:

+ Show Spoiler +
The exact number is uncertain, because official U.S. data for 2025 has been incomplete and inconsistent. But based on the best available estimates:

≈ 540,000 people were deported in 2025 (widely cited estimate) (source)

Other analyses/projected totals suggest roughly 500,000–600,000 deportations for the year (source)

Some government-related figures and reporting also point to totals in that same range (around 540,000) (source)

Important context


This total usually includes deportations carried out by ICE + border authorities (CBP), not just ICE alone.

Estimates vary because:

The government released limited or inconsistent data in 2025 (source)

Different sources count deportations slightly differently (e.g., interior removals vs. border removals).

Bottom line

👉 A reasonable, evidence-based answer is:
About 500,000–600,000 immigrants were deported from the U.S. in 2025, with ~540,000 being the most commonly cited figure.


In my opinion this is a very solid answer to the question I asked. It's not overconfident, and it uses a reasonable set of sources as its foundation for its answer. And this corresponds very well with my experience using chatgpt or copilot as a google-substitute - or even looking at the AI summary. When I wrote my previous post where I linked baal a source from Education Weekly about how AI use influences brain activity, that was a source I got from the AI summary of my Google search.


I am not saying there is no benefit to AI tools. You can definitely use them in a way that genuinely saves you time. I'm not debating that.

What I'm saying is that AI summaries sound authoritative and most people don't question them -- I think you are a clear outlier in that you actually clicked the sources it cited for you. My main beef is that it turns all primary sources into secondary sources.

I took Baal's statement and turned it into a question:

has anyone gone to prison in western democracies for having an opinion

And the bot answered:
Yes, individuals have gone to prison in western democracies for expressing opinions, although such cases are typically prosecuted under specific limitations on free speech, such as hate speech laws, holocaust denial, or incitement to violence, rather than simply for holding an unpopular opinion. (link to something)

Which sounds authoritative, until you click on the link and it goes to: https://en.wikipedia.org/wiki/Political_prisoner

Which is a general definition of what a political prisoner is.

The bot made that statement sound very authoritative and even added a citation to it!

Then it hit me with this:

Examples in Western Democracies
United Kingdom: The UK has seen thousands of arrests related to online communication offences, with reports suggesting over 30 people are arrested daily for speech crimes. Individuals have been imprisoned for online posts, including a journalist having police at her door over a tweet, and instances of jail time for social media posts deemed threatening or abusive.

Which again, sounds very authoritative, but there's no link.

I then scrolled to the actual search results, and this is what it summarised:

https://www.persuasion.community/p/europe-really-is-jailing-people-for <- tell me you don't get creepy vibes from the website

https://www.reddit.com/r/ShitAmericansSay/comments/1g8uztr/europeans_go_to_jail_for_stuff_they_say_online/ <- a reddit post

https://www.quora.com/Do-any-countries-imprison-their-opponents-for-speaking-against-the-government <- some quora post

And the rest of the sources are either not relevant, or actually say the opposite.

You're more out of place than a croissant on a plate of velvet crabs
KwarK
Profile Blog Joined July 2006
United States43758 Posts
March 17 2026 13:10 GMT
#111283
On March 17 2026 19:14 baal wrote:
Show nested quote +
On March 17 2026 15:13 EnDeR_ wrote:
On March 17 2026 11:21 baal wrote:
On March 15 2026 17:59 EnDeR_ wrote:
In a lot of parts of the world people can be killed by what they say, also in many parts including the 1st world people can go to prison for the wrong opinions etc.

Manning and Assange went through hell and countless others would without anonymity.


Source for bolded?


[image loading]


I don't know who that is.


Count Dankula, arrested and went to court for a nazi joke with his dog, google him.

I did Google him. He didn’t go to prison. What are you talking about?
Moderator | The angels have the phone box
Billyboy
Profile Joined September 2024
1590 Posts
March 17 2026 13:29 GMT
#111284
On March 17 2026 11:55 baal wrote:
Show nested quote +
On March 17 2026 11:38 Billyboy wrote:
I'm going to really stand behind this thing that is already really bad in reality, because there is a small chance it would help against the absolute worst case scenario, which is more likely to happen with the really bad thing in reality.


Perhaps to you in particular an authoritarian regime rising to power is an unlikely scenario; for the majority of the world it is not, my friend.


The thing is that the perils of gun ownership scale with a society's civility: in an "uncivil" country like, let's say, Mauritania, gun ownership creates a lot of violence, but at the same time an authoritarian regime rising to power is very likely. Compare that to Switzerland, which has very high gun ownership with few negative effects, but where tyranny is also very unlikely.

So the risk/reward seems pretty linear across the board.

That is because the Swiss are actually much more like a well regulated militia as Washington and the others had intended. Training, responsibility, robust permitting and registration.

The first use of the amendment was not to put down tyranny of the federal government (never the goal); it was to assert the federal government's authority to enforce laws and collect taxes, when farmers were violently resisting paying them.

TLDR: the second amendment made sense the way it was intended, but not the way the NRA and other people who profit off guns (not just the gun manufacturers but also for-profit prisons, for-profit hospitals, the whole "security" industry) have twisted it.
dyhb
Profile Joined August 2021
United States204 Posts
March 17 2026 14:22 GMT
#111285
On March 17 2026 21:14 EnDeR_ wrote:
Show nested quote +
It should be concerning that the UK arrests and prosecutes over 1,000 people a year under a legal standard that covers offensive and indecent speech on social media, yet it's treated as an AI issue if the model can't immediately cite official crime statistics to back that up. I got much the same from my attempts: it gave me a gov.uk link, but the page didn't say what the AI claimed I'd find there. So the better answer would have been that a breakdown of the specific offences ought to be at that link, but it wasn't there, and the AI was relying on something else.
WombaT
Profile Blog Joined May 2010
Northern Ireland26470 Posts
March 17 2026 14:28 GMT
#111286
On March 17 2026 20:42 EnDeR_ wrote:
Show nested quote +
On March 17 2026 19:14 baal wrote:



On 23 April 2018, Meechan was sentenced to a fine of £800, with no prison sentence.[11]

so not going to prison for his opinions?

I mean in fairness even an 800 quid fine is ridiculous.

On the flipside I mean it is one bloke and one case, for something that’s supposedly endemic Mr Dankula and a handful of others sure do crop up a lot.

There's this massive disconnect between reality and perception when it comes to the UK, or other European nations, and how much the state is clamping down on free speech, which I encounter rather frequently and which precludes sensible discussion of the issue. Especially with Americans, although that's also partly culturally explicable given the quasi-sacred First Amendment and different conceptions of free speech more generally.

Without going full cui bono on it, it does seem to be a scenario that benefits social media companies who don't want to be regulated, or malicious political actors: both gain from this rather wonky perception taking hold in some quarters.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Velr
Profile Blog Joined July 2008
Switzerland10866 Posts
Last Edited: 2026-03-17 14:33:45
March 17 2026 14:32 GMT
#111287
It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media


Should it tho? Why? Not behaving like a giant, usually racist, prick spreading lies and/or stoking hate, including downright trying to incite violence, doesn't seem that high a standard for being allowed to participate in society. Now you could argue that the state sometimes overreaches and prosecutes people it shouldn't... But then I remember that the US has the death penalty, and this argument would be extremely hypocritical...

Also... I read about the UK arresting citizens for "free speech" in like 5 reddit threads today, but usually the number is ~13'000 and misquoted as being from 2025 when it's not. Do you get a daily propaganda newsletter to know what bs to spread?
WombaT
Profile Blog Joined May 2010
Northern Ireland26470 Posts
March 17 2026 14:41 GMT
#111288
On March 17 2026 23:22 dyhb wrote:
Show nested quote +
On March 17 2026 21:14 EnDeR_ wrote:
On March 17 2026 17:41 Liquid`Drone wrote:
On March 17 2026 06:34 EnDeR_ wrote:
On March 17 2026 05:36 Liquid`Drone wrote:
On March 17 2026 03:46 EnDeR_ wrote:
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic, didn't refuse to cooperate with the pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosyntesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


There are many upsides of using genAI tools. I use it regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.

Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?

My PhD students don't read papers, they read AI summaries of papers. In a scientific context this is bad because, in making the summary, the AI dumbs down the content and gives results that are inaccurate. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.

My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, that, to me, is a serious problem.


My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis which she herself did not have the skillset to do. Then, she's gotten stuff double and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself.

For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government?" would be awful - but for 'gun ownership rates for different countries', it's fine.


Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new but I digress.

To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership in different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.



But the AI summary does give sources. Incidentally - for whatever reason - when I just googled 'how many immigrants did ICE deport in 2025' - there wasn't even an AI summary created for me, and I just got a bunch of different sources. However, asking ChatGPT that very same question, I get a pretty nuanced answer - which links to four different sources:

+ Show Spoiler +
The exact number is uncertain, because official U.S. data for 2025 has been incomplete and inconsistent. But based on the best available estimates:

≈ 540,000 people were deported in 2025 (widely cited estimate) source

Other analyses/projected totals suggest roughly 500,000–600,000 deportations for the year (source)

Some government-related figures and reporting also point to totals in that same range (around 540,000) (source)

Important context


This total usually includes deportations carried out by ICE + border authorities (CBP), not just ICE alone.

Estimates vary because:

The government released limited or inconsistent data in 2025 source

Different sources count deportations slightly differently (e.g., interior removals vs. border removals).

Bottom line

👉 A reasonable, evidence-based answer is:
About 500,000–600,000 immigrants were deported from the U.S. in 2025, with ~540,000 being the most commonly cited figure.


In my opinion this is a very solid answer to the question I asked. It's not overconfident, and it uses a reasonable set of sources as its foundation for its answer. And this corresponds very well with my experience using chatgpt or copilot as a google-substitute - or even looking at the AI summary. When I wrote my previous post where I linked baal a source from Education Weekly about how ai use influences brain activity, that was a source I got from the AI summary of my google search.


I am not saying there is no benefit to AI tools. You can definitely use them in a way that genuinely saves you time. I'm not debating that.

What I'm saying is that AI summaries sound authoritative and most people don't question them -- I think you are a clear outlier in that you actually clicked the sources it cited for you. My main beef is that it turns all primary sources into secondary sources.

I took Baal's statement and turned it into a question:

has anyone gone to prison in western democracies for having an opinion

And the bot answered:
Yes, individuals have gone to prison in western democracies for expressing opinions, although such cases are typically prosecuted under specific limitations on free speech, such as hate speech laws, holocaust denial, or incitement to violence, rather than simply for holding an unpopular opinion. (link to something)

Which sounds authoritative, until you click on the link and it goes to: https://en.wikipedia.org/wiki/Political_prisoner

Which is a general definition of what a political prisoner is.

The bot made that statement sound very authoritative and even added a citation to it!

Then it hit me with this:

Examples in Western Democracies
United Kingdom: The UK has seen thousands of arrests related to online communication offences, with reports suggesting over 30 people are arrested daily for speech crimes. Individuals have been imprisoned for online posts, including a journalist having police at her door over a tweet, and instances of jail time for social media posts deemed threatening or abusive.

Which again, sounds very authoritative, but there's no link.

I then scrolled to the actual search results, and this is what it summarised:

https://www.persuasion.community/p/europe-really-is-jailing-people-for <- tell me you don't get creepy vibes from the website

https://www.reddit.com/r/ShitAmericansSay/comments/1g8uztr/europeans_go_to_jail_for_stuff_they_say_online/ <- a reddit post

https://www.quora.com/Do-any-countries-imprison-their-opponents-for-speaking-against-the-government <- some quora post

And the rest of the sources are either not relevant, or actually say the opposite.

It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media, but it is an AI issue if it can't cite official crime statistics immediately to back it up. I got much the same from my attempts. It gave me a gov.uk link, but the link didn't say what the AI said I'd find there. So the better answer was that a breakdown of the specific crimes ought to have been at said link, but it wasn't there, and the AI was relying on something else.

I mean some of this would, I think, be mitigated by AIs being a bit clearer about saying ‘hey, I don’t know’ more often.

I recall trying to use it to answer a really obscure GSL commentary occurrence that I just couldn’t find, and since I’m only really a TLer, I thought it might grab an answer from some other platform. Instead it basically repackaged me asking the question here, with an ‘I think it might be’, as an authoritative answer. While I did check the source, I didn’t really have to, as I recognised my own theory anyway, but it was quite illustrative.

But I don’t think it’s a massive issue, perhaps it is and I’m just wrong on this.

People who are bad at Googling, or are using it to find the first link that agrees with their premise to use in an argument, are just going to do the same thing with LLMs, and people who aren’t, aren’t.

Likewise people who want to pass whatever test will abuse the fuck out of LLMs without actually learning things, but it’s a useful tool for those who have an intrinsic interest and enthusiasm for whatever the thing is.

In that sense I think such tech is just further exposing and exacerbating existing cultural or institutional structural problems rather than causing any new ones in these domains.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
EnDeR_
Profile Blog Joined May 2004
Spain2832 Posts
Last Edited: 2026-03-17 14:54:30
March 17 2026 14:45 GMT
#111289
On March 17 2026 23:22 dyhb wrote:
Show nested quote +
It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media, but it is an AI issue if it can't cite official crime statistics immediately to back it up. I got much the same from my attempts. It gave me a gov.uk link, but the link didn't say what the AI said I'd find there. So the better answer was that a breakdown of the specific crimes ought to have been at said link, but it wasn't there, and the AI was relying on something else.


Bit of a tough read that one, but this comment stuck out for me from that piece:

A spokesperson for Leicestershire police (the force the Times reported had the highest rates of arrests for the relevant offences per 100,000) clarified that offences under section 127 and section 1 can include any form of communication and may also be “serious domestic abuse-related crimes”.[10]


So the thousands figure also includes domestic abuse, and the number of arrests makes a bit more sense once that gets taken into account.

A quick google yields: In England and Wales, over 240,000 arrests for domestic abuse were made in 2020

that's from the office for national statistics, so hopefully legit.

In that context, thousands of arrests related to online communications in connection with domestic abuse offences makes sense. I mean, the Times piece is in borderline "not technically a lie" territory, but it's pretty close in my view.

EDIT: cleaned the quote.
EDIT2: Link that I clicked on dyhb's post: https://lordslibrary.parliament.uk/select-communications-offences-and-concerns-over-free-speech/#:~:text=The authors reported that police,Wales from 2017 to 2023
but is now removed.
estás más desubicao q un croissant en un plato de nécoras
WombaT
Profile Blog Joined May 2010
Northern Ireland26470 Posts
March 17 2026 14:57 GMT
#111290
On March 17 2026 23:32 Velr wrote:
Show nested quote +
It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media


Should it tho? Why? Not behaving like a giant, usually racist, prick spreading lies and/or stoking hate including downright trying to incite violence doesn't seem that high of a standard to be allowed to participate in society. Now you could argue that the state sometimes overreaches and prosecutes people it shouldn't... But then I remember that the US has the death penalty and this argument would be extremely hypocritical...

Also... I read about the UK arresting citizens for "free speech" in like 5 reddit threads today but usually the number is ~13'000 and misquoted as from 2025 when it's not. Do you get a daily propaganda newsletter to know what bs to spread?

‘That includes’ can do some heavy lifting for some folks. It’s a part of the pie, but some jump to some preconceived notion they have and conclude the majority of those arrests are simply for unpopular/transgressive social media posts.

Quite often it turns out that it’s an existing, often long-established criminal offence, committed with some kind of media as the mechanism.

So-called revenge porn is a sadly common one. So too harassment, so too libel.

Somebody like Graham Linehan, who’s held up by rather odious types as fighting the good fight against trans rights overreach, has to my knowledge never incurred the wrath of the state for talking shit about trans people in general. It’s been for frankly deranged levels of consistent harassment of specific individuals. BTW I’m going from memory so if I am wrong, I am wrong, just to inb4 that.

The internet isn’t some magical, different realm, some alternative reality but some seem to treat it as if it is.

If I posted naked photos of my ex in her workplace, or if I followed some enemy around in their day to day and chatted shit about them in earshot of others, I don’t think people would object to me being arrested at the very least, or sued, or subject to a restraining order.

Not that it’s all a bed of roses here, we’ve seen subsequent governments enact real, actual policies that restrict legitimate political protest.

Which oddly enough you hear much less about; it appears for some the freedom to pejoratively use the n word online is a bigger infringement on expression than actual infringements on expression. Wonder why that is, eh?
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
EnDeR_
Profile Blog Joined May 2004
Spain2832 Posts
March 17 2026 15:03 GMT
#111291
On March 17 2026 23:57 WombaT wrote:
Show nested quote +


I wouldn't actually be all that surprised if the vast majority of prosecutions cited in that Times piece under that particular legislation were domestic abuse cases where that's the only thing they could pin on the person.
estás más desubicao q un croissant en un plato de nécoras
WombaT
Profile Blog Joined May 2010
Northern Ireland26470 Posts
March 17 2026 15:10 GMT
#111292
On March 18 2026 00:03 EnDeR_ wrote:
Show nested quote +
I wouldn't actually be all that surprised if the vast majority of prosecutions cited in that Times piece under that particular legislation were domestic abuse cases where that's the only thing they could pin on the person.

Wouldn’t surprise me one iota. Especially as it’s just about the easiest thing for police to investigate and subsequently prove was the case.
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
dyhb
Profile Joined August 2021
United States204 Posts
March 17 2026 15:56 GMT
#111293
On March 17 2026 23:32 Velr wrote:
Show nested quote +
It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media


Should it tho? Why? Not behaving like a giant, usually racist, prick spreading lies and/or stoking hate including downright trying to incite violence doesn't seem that high of a standard to be allowed to participate in society. Now you could argue that the state sometimes overreaches and prosecutes people it shouldn't... But then I remember that the US has the death penalty and this argument would be extremely hypocritical...

Also... I read about the UK arresting citizens for "free speech" in like 5 reddit threads today but usually the number is ~13'000 and misquoted as from 2025 when it's not. Do you get a daily propaganda newsletter to know what bs to spread?
I'm not in favor of letting the government decide who was enough of a "prick" online to no longer be allowed to participate in society. The only thing I'm willing to concede isn't born out of naivete about governments and backwardness about the relationship between government and society is "incite violence," which has articulable versions that aren't subjective as hell.

One thing you might've missed by quoting out a partial sentence from my post is the statement that "it is an AI issue if it can't cite official crime statistics immediately to back it up." Clearly, in your haste, you must've accidentally skimmed that to mean "and my daily propaganda newsletter told me this is true."

On March 17 2026 23:41 WombaT wrote:
Show nested quote +
On March 17 2026 23:22 dyhb wrote:
On March 17 2026 21:14 EnDeR_ wrote:
On March 17 2026 17:41 Liquid`Drone wrote:
On March 17 2026 06:34 EnDeR_ wrote:
On March 17 2026 05:36 Liquid`Drone wrote:
On March 17 2026 03:46 EnDeR_ wrote:
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:
Your reading comprehension is off. I'm not negative towards ai as a tool for learning and I had no issues with baal posting the summary as a source.

I do have issues with people posting chatgpt posts as arguments but that is different.


LLMs as a learning tool is extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).


Grok is an outlier and should not be trusted for anything.

I can also be on board with being negative towards using ChatGPT because OpenAI, unlike Anthropic, didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense.

If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.


There are many upsides of using genAI tools. I use it regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works.

Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it?

My PhD students don't read papers, they read AI summaries of papers. In a scientific context this is bad, because in making the summary the AI dumbs down the content and gives results that are inaccurate. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem.

My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, this, to me, is a serious problem.


My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis which she herself did not have the skillset to do. Then, she's gotten stuff double and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself.

For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government?" would be awful - but for 'gun ownership rates for different countries', it's fine.


Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new but I digress.

To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership in different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.



But the AI summary does give sources. Incidentally - for whatever reason - when I just googled 'how many immigrants did ICE deport in 2025' - there wasn't even an AI summary created for me, and I just got a bunch of different sources. However, asking ChatGPT that very same question, I get a pretty nuanced answer - which links to four different sources:

+ Show Spoiler +
The exact number is uncertain, because official U.S. data for 2025 has been incomplete and inconsistent. But based on the best available estimates:

≈ 540,000 people were deported in 2025 (widely cited estimate) source

Other analyses/projected totals suggest roughly 500,000–600,000 deportations for the year (source)

Some government-related figures and reporting also point to totals in that same range (around 540,000) (source)

Important context


This total usually includes deportations carried out by ICE + border authorities (CBP), not just ICE alone.

Estimates vary because:

The government released limited or inconsistent data in 2025 source

Different sources count deportations slightly differently (e.g., interior removals vs. border removals).

Bottom line

👉 A reasonable, evidence-based answer is:
About 500,000–600,000 immigrants were deported from the U.S. in 2025, with ~540,000 being the most commonly cited figure.


In my opinion this is a very solid answer to the question I asked. It's not overconfident, and it uses a reasonable set of sources as the foundation for its answer. And this corresponds very well with my experience using chatgpt or copilot as a google-substitute - or even looking at the AI summary. When I wrote my previous post where I linked baal a source from Education Weekly about how AI use influences brain activity, that was a source I got from the AI summary of my google search.


I am not saying there is no benefit to AI tools. You can definitely use them in a way that genuinely saves you time. I'm not debating that.

What I'm saying is that AI summaries sound authoritative and most people don't question them -- I think you are a clear outlier in that you actually clicked the sources it cited for you. My main beef is that it turns all primary sources into secondary sources.

I took Baal's statement and turned it into a question:

has anyone gone to prison in western democracies for having an opinion

And the bot answered:
Yes, individuals have gone to prison in western democracies for expressing opinions, although such cases are typically prosecuted under specific limitations on free speech, such as hate speech laws, holocaust denial, or incitement to violence, rather than simply for holding an unpopular opinion. (link to something)

Which sounds authoritative, until you click on the link and it goes to: https://en.wikipedia.org/wiki/Political_prisoner

Which is a general definition of what a political prisoner is.

The bot made that statement sound very authoritative and even added a citation to it!

Then it hit me with this:

Examples in Western Democracies
United Kingdom: The UK has seen thousands of arrests related to online communication offences, with reports suggesting over 30 people are arrested daily for speech crimes. Individuals have been imprisoned for online posts, including a journalist having police at her door over a tweet, and instances of jail time for social media posts deemed threatening or abusive.

Which again, sounds very authoritative, but there's no link.

I then scrolled to the actual search results, and this is what it summarised:

https://www.persuasion.community/p/europe-really-is-jailing-people-for <- tell me you don't get creepy vibes from the website

https://www.reddit.com/r/ShitAmericansSay/comments/1g8uztr/europeans_go_to_jail_for_stuff_they_say_online/ <- a reddit post

https://www.quora.com/Do-any-countries-imprison-their-opponents-for-speaking-against-the-government <- some quora post

And the rest of the sources are either not relevant, or actually say the opposite.

It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media, but it is an AI issue if it can't cite official crime statistics immediately to back it up. I got much the same from my attempts. It gave me a gov.uk link, but it didn't say what the AI said I'd find there. So the better answer was that the specific crimes ought to have received a breakdown at said link, but the link didn't include them, and the AI was relying on something else.

I mean, some of this would, I think, be mitigated by AIs saying 'hey, I don't know' a bit more frequently and clearly.

I recall trying to use it to answer a really obscure GSL commentary occurrence that I just couldn't find, and, only really being a TLer, I thought it might grab an answer from some other platform. Instead it basically repackaged me asking the question here, with a 'I think it might be', as an authoritative answer. While I did check the source, I didn't really have to, as I recognised my own theory anyway, but it was quite illustrative.

But I don’t think it’s a massive issue; perhaps it is and I’m just wrong on this.

People who are bad at Googling, or are using it to find the first link that agrees with their premise to use in an argument, are just going to do the same thing with LLMs, and people who aren’t, aren’t.

Likewise people who want to pass whatever test will abuse the fuck out of LLMs without actually learning things, but it’s a useful tool for those who have an intrinsic interest and enthusiasm for whatever the thing is.

In that sense I think such tech is just further exposing and exacerbating existing cultural or institutional structural problems rather than causing any new ones in these domains.
Exactly. The AI should take its lack of actual high-quality evidence as a cue to preface the answer with an "I don't know" of some kind. Instead, every week I see an AI cite a source that doesn't say what it's paraphrasing. I am left interpreting the bad AI answer as some kind of fudged evidence, like it internally concluded that it is a supported, high-quality statement but can't locate the evidence for me at that precise time.

On March 17 2026 23:45 EnDeR_ wrote:
Show nested quote +


Bit of a tough read that one, but this comment stuck out for me from that piece:

Show nested quote +
A spokesperson for Leicestershire police (the force the Times reported had the highest rates of arrests for the relevant offences per 100,000) clarified that offences under section 127 and section 1 can include any form of communication and may also be “serious domestic abuse-related crimes”.[10]


So the thousands figure also includes domestic abuse, which makes the number of arrests a bit more understandable once that's taken into account.

A quick google yields: In England and Wales, over 240,000 arrests for domestic abuse were made in 2020

that's from the Office for National Statistics, so hopefully legit.

In that context, thousands of arrests related to online communications relating to domestic abuse offences makes sense. I mean, the Times piece is in borderline "not technically a lie" territory, but pretty close in my view.

EDIT: cleaned the quote.
EDIT2: Link that I clicked on in dyhb's post: https://lordslibrary.parliament.uk/select-communications-offences-and-concerns-over-free-speech/#:~:text=The%20authors%20reported%20that%20police,Wales%20from%202017%20to%202023
but it has now been removed.
It sounds like any sound AI model should say, truthfully, that it hears such claims repeated by mainstream sources, but cannot find any high-quality primary source itself. If I were to agree with Liquid`Drone a little more than I do, then I'd think AI would easily analyze all crime statistics from western democracies and tell me which nations arrest and prosecute the most citizens for social media posts. The UK could be unexceptional for all I know, and the US might well have more incitement to violence prosecutions per capita than the UK has arrests for non-crime hate incidents under the Public Order Act.
JimmyJRaynor
Profile Blog Joined April 2010
Canada17388 Posts
Last Edited: 2026-03-17 16:06:46
March 17 2026 16:03 GMT
#111294
Joe Kent quit. He explained why on twitter.
https://x.com/i/status/2033897242986209689


I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby.

In other news, evil prevails again as the USA defeated the DR in the WBC semis.
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
Velr
Profile Blog Joined July 2008
Switzerland10866 Posts
Last Edited: 2026-03-17 16:07:55
March 17 2026 16:07 GMT
#111295
On March 18 2026 00:56 dyhb wrote:
Show nested quote +
I'm not in favor of letting the government decide who was enough of a "prick" online to no longer be allowed to participate in society. The only thing I'm willing to concede isn't born out of naivete about governments and backwardness about the relationship between government and society is "incite violence," which has articulable versions that aren't subjective as hell.

One thing you might've missed by quoting out a partial sentence from my post is the statement that "it is an AI issue if it can't cite official crime statistics immediately to back it up." Clearly, in your haste, you must've accidentally skimmed that to mean "and my daily propaganda newsletter told me this is true."


That's because I didn't care about the AI part of your posts. I just found it interesting how interlinked you guys seem to be, sharing the same bs half-truths and declaring them to be "facts".

I don't care if you got it from asking an AI, from a newsletter or directly from daddy Trump; that you get it from somewhere and then share it is bad enough on its own.
LightSpectra
Profile Blog Joined October 2011
United States2336 Posts
March 17 2026 16:20 GMT
#111296
Trump says he can do 'anything I want' with Cuba: https://www.reuters.com/world/americas/trump-says-he-thinks-he-will-have-honor-taking-cuba-2026-03-16/

I wish he would stop talking about foreign countries like he does prepubescent girls.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
GreenHorizons
Profile Blog Joined April 2011
United States23773 Posts
Last Edited: 2026-03-17 16:26:17
March 17 2026 16:23 GMT
#111297
On March 18 2026 01:20 LightSpectra wrote:
Trump says he can do 'anything I want' with Cuba: https://www.reuters.com/world/americas/trump-says-he-thinks-he-will-have-honor-taking-cuba-2026-03-16/

I wish he would stop talking about foreign countries like he does prepubescent girls.

Who is going to stop him?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
LightSpectra
Profile Blog Joined October 2011
United States2336 Posts
March 17 2026 16:32 GMT
#111298
On March 18 2026 01:23 GreenHorizons wrote:
Show nested quote +

Who is going to stop him?


You know, there was in fact someone in this thread who suggested using firearms to indefinitely resist the military, and implied everyone who wouldn't do so is a coward. Wonder what happened to them.
2006 Shinhan Bank OSL Season 3 was the greatest tournament of all time
dyhb
Profile Joined August 2021
United States204 Posts
March 17 2026 16:36 GMT
#111299
On March 18 2026 01:07 Velr wrote:
Show nested quote +


That's because I didn't care about the AI part of your posts. I just found it interesting how interlinked you guys seem to be, sharing the same bs half-truths and declaring them to be "facts".

I don't care if you got it from asking an AI, from a newsletter or directly from daddy Trump; that you get it from somewhere and then share it is bad enough on its own.
I think a normal person can read this thread and conclude that there's interest in using an AI to get some (ostensibly) publicly available facts. This is a public thread with three people and 6+ posts on specifically this fact or non-fact. I do find it quite humorous to conclude that the TeamLiquid US Politics Thread is a daily propaganda newsletter (or AI or daddy Trump) according to your logic. Someone posts an AI summary, another analyzes, and two pages later a third person is a bad person for critiquing the AI responses! TL radicalization confirmed!

If I can talk you down from this ledge, I think you can admit that subjects in general discussion within a thread are not engaged with by only fringe radicals of a particular type. At least, I'm hopeful that you can admit that. Questioning an AI source for how it's making those claims of fact is different than endorsing AI as having correctly found and presented the facts. If none of those distinctions are true for you, then I don't think we can have a conversation on political subjects at all.
Biff The Understudy
Profile Blog Joined February 2008
France8011 Posts
March 17 2026 16:55 GMT
#111300
The fact anyone can post an AI prompt into a debate and think he will still be taken seriously is baffling. It’s like saying, “not only am I here to talk without doing any listening, but you know what, I won’t even talk myself because that is also too much effort, so I’ll let you debate a machine and go grab a coffee.”
The fellow who is out to burn things up is the counterpart of the fool who thinks he can save the world. The world needs neither to be burned up nor to be saved. The world is, we are. Transients, if we buck it; here to stay if we accept it. ~H.Miller
Prev 1 5563 5564 5565 5566 5567 5609 Next
Please log in or register to reply.
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.