Now that we have a new thread, in order to ensure that it continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting! NOTE: When providing a source, please provide a very brief summary of what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets. Your supporting statement should always come BEFORE you provide the source. If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
LightSpectra
United States2336 Posts
March 17 2026 11:45 GMT
#111281
https://www.theguardian.com/us-news/2025/dec/18/tennessee-charlie-kirk-meme-arrest-lawsuit

Despite there being more guns than people, I didn't see any armed insurrections to set him free.
EnDeR_
Spain2832 Posts
March 17 2026 12:14 GMT
#111282
On March 17 2026 17:41 Liquid`Drone wrote:
On March 17 2026 06:34 EnDeR_ wrote:
On March 17 2026 05:36 Liquid`Drone wrote:
On March 17 2026 03:46 EnDeR_ wrote:
On March 17 2026 02:06 Liquid`Drone wrote:
On March 16 2026 23:00 LightSpectra wrote:
On March 16 2026 15:06 Liquid`Drone wrote:

Your reading comprehension is off. I'm not negative towards AI as a tool for learning, and I had no issues with baal posting the summary as a source. I do have issues with people posting ChatGPT posts as arguments, but that is different.

LLMs as a learning tool are extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident, uneducated people even less open to new points of view (Dunning-Kruger effect).

Grok is an outlier and should not be trusted for anything. I can also be on board with being negative towards using ChatGPT, because OpenAI - unlike Anthropic - didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense. If you talk to ChatGPT or Copilot or Gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic.

There are many upsides to using genAI tools. I use them regularly. I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works. Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it. My PhD students don't read papers, they read AI summaries of papers. In a scientific context, this is bad because to make the AI summary, it dumbs down the content and gives results that are inaccurate. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem. My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, this, to me, is a serious problem.

My wife is doing a PhD, and for her, AI has been an invaluable asset. In particular for statistical analysis, which she herself did not have the skillset to do. Then she's gotten stuff double- and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper in a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself. For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time-consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government" would be awful - but for 'gun ownership rates for different countries', it's fine.

Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new, but I digress. To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership in different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.

But the AI summary does give sources. Incidentally - for whatever reason - when I just googled 'how many immigrants did ICE deport in 2025', there wasn't even an AI summary created for me, and I just got a bunch of different sources. However, asking ChatGPT that very same question, I get a pretty nuanced answer - which links to four different sources:

The exact number is uncertain, because official U.S. data for 2025 has been incomplete and inconsistent. But based on the best available estimates:
≈ 540,000 people were deported in 2025 (widely cited estimate) (source)
Other analyses/projected totals suggest roughly 500,000–600,000 deportations for the year (source)
Some government-related figures and reporting also point to totals in that same range (around 540,000) (source)
Important context: This total usually includes deportations carried out by ICE + border authorities (CBP), not just ICE alone. Estimates vary because: The government released limited or inconsistent data in 2025 (source). Different sources count deportations slightly differently (e.g., interior removals vs. border removals).
Bottom line 👉 A reasonable, evidence-based answer is: About 500,000–600,000 immigrants were deported from the U.S. in 2025, with ~540,000 being the most commonly cited figure.

In my opinion this is a very solid answer to the question I asked. It's not overconfident, and it uses a reasonable set of sources as the foundation for its answer. And this corresponds very well with my experience using ChatGPT or Copilot as a Google substitute - or even looking at the AI summary. When I wrote my previous post where I linked baal a source from Education Weekly about how AI use influences brain activity, that was a source I got from the AI summary of my Google search.

I am not saying there is no benefit to AI tools. You can definitely use them in a way that genuinely saves you time. I'm not debating that. What I'm saying is that AI summaries sound authoritative and most people don't question them -- I think you are a clear outlier in that you actually clicked the sources it cited for you. My main beef is that it turns all primary sources into secondary sources.

I took baal's statement and turned it into a question: has anyone gone to prison in western democracies for having an opinion?

And the bot answered: Yes, individuals have gone to prison in western democracies for expressing opinions, although such cases are typically prosecuted under specific limitations on free speech, such as hate speech laws, holocaust denial, or incitement to violence, rather than simply for holding an unpopular opinion. (link to something)

Which sounds authoritative, until you click on the link and it goes to: https://en.wikipedia.org/wiki/Political_prisoner

Which is a general definition of what a political prisoner is. The bot made that statement sound very authoritative and even added a citation to it! Then it hit me with this:

Examples in Western Democracies
United Kingdom: The UK has seen thousands of arrests related to online communication offences, with reports suggesting over 30 people are arrested daily for speech crimes. Individuals have been imprisoned for online posts, including a journalist having police at her door over a tweet, and instances of jail time for social media posts deemed threatening or abusive.

Which again sounds very authoritative, but there's no link. I then scrolled to the actual search results, and this is what it summarised:
https://www.persuasion.community/p/europe-really-is-jailing-people-for <- tell me you don't get creepy vibes from the website
https://www.reddit.com/r/ShitAmericansSay/comments/1g8uztr/europeans_go_to_jail_for_stuff_they_say_online/ <- a reddit post
https://www.quora.com/Do-any-countries-imprison-their-opponents-for-speaking-against-the-government <- some quora post
And the rest of the sources are either not relevant, or actually say the opposite.
KwarK
United States43758 Posts
March 17 2026 13:10 GMT
#111283
On March 17 2026 19:14 baal wrote:
On March 17 2026 15:13 EnDeR_ wrote:
On March 17 2026 11:21 baal wrote:
On March 15 2026 17:59 EnDeR_ wrote:

In a lot of parts of the world people can be killed for what they say; in many parts, including the first world, people can also go to prison for the wrong opinions, etc. Manning and Assange went through hell, and countless others would have without anonymity.

Source for bolded?

[image]

I don't know who that is.

Count Dankula, arrested and went to court for a Nazi joke with his dog. Google him.

I did Google him. He didn't go to prison. What are you talking about?
Billyboy
1590 Posts
March 17 2026 13:29 GMT
#111284
On March 17 2026 11:55 baal wrote:
On March 17 2026 11:38 Billyboy wrote:

I'm going to really stand behind this thing that is already really bad in reality, because there is a small chance it would help against the absolute worst-case scenario, which is more likely to happen with the really bad thing in reality.

Perhaps to you in particular an authoritarian regime rising to power is an unlikely scenario; for the majority of the world it is not, my friend. The thing is that the perils of gun ownership scale proportionally with a society's civility, meaning that in an "uncivil" country like, let's say, Mauritania, gun ownership creates a lot of violence, but at the same time an authoritarian regime rising to power is very likely. Compare that to Switzerland, which has very high gun ownership with few negative effects, but where a tyranny is also very unlikely. So the risk/reward seems pretty linear across the board.

That is because the Swiss are actually much more like the well-regulated militia that Washington and the others had intended: training, responsibility, robust permitting and registration. The first use of the amendment was not to put down tyranny of the federal government (never the goal); it was to assert the federal government's authority to enforce laws and collect taxes, when farmers were violently resisting paying them. TLDR: the Second Amendment made sense the way it was intended, but not the way the NRA and other people who profit off guns (not just the gun manufacturers but also for-profit prisons, for-profit hospitals, the whole "security" industry) have twisted it.
dyhb
United States204 Posts
March 17 2026 14:22 GMT
#111285
On March 17 2026 21:14 EnDeR_ wrote: [nested quotes snipped; see #111282 above]

It should be concerning that the UK arrests and prosecutes over 1,000 people a year under a legal standard that includes offensive and indecent speech on social media, but it is an AI issue if it can't cite official crime statistics immediately to back it up. I got much the same from my attempts. It gave me a gov.uk link, but the link didn't say what the AI claimed I'd find there. So the better answer would have been that the specific crimes ought to have a breakdown at said link, but it didn't have them, and the AI was relying on something else.
WombaT
Northern Ireland26470 Posts
March 17 2026 14:28 GMT
#111286
On March 17 2026 20:42 EnDeR_ wrote: [nested quotes snipped; see #111283 above]

"On 23 April 2018, Meechan was sentenced to a fine of £800, with no prison sentence."[11] So, not going to prison for his opinions?

I mean, in fairness, even an 800 quid fine is ridiculous. On the flip side, it is one bloke and one case; for something that's supposedly endemic, Mr Dankula and a handful of others sure do crop up a lot. There's a massive disconnect between reality and perception when it comes to the UK, or other European nations, and how much the state is clamping down on free speech, one that I encounter rather frequently and that precludes sensible discussion of the issue. Especially with Americans, although that's also partly culturally explicable given the quasi-sacred First Amendment and different conceptions of free speech more generally. Without going full cui bono on it, it does seem to be a scenario that benefits social media companies who don't want to be regulated, or malicious political actors, when this rather wonky perception takes hold in some quarters.
Velr
Switzerland10866 Posts
March 17 2026 14:32 GMT
#111287
On March 17 2026 23:22 dyhb wrote:
It should be concerning that the UK arrests and prosecutes over 1,000 people a year under a legal standard that includes offensive and indecent speech on social media

Should it, though? Why? Not behaving like a giant, usually racist, prick spreading lies and/or stoking hate, including downright trying to incite violence, doesn't seem that high a standard for being allowed to participate in society. Now you could argue that the state sometimes overreaches and prosecutes people it shouldn't... but then I remember that the US has the death penalty, and this argument would be extremely hypocritical... Also, I read about the UK arresting citizens for "free speech" in like 5 reddit threads today, but usually the number is ~13'000 and misquoted as being from 2025 when it's not. Do you get a daily propaganda newsletter to know what bs to spread?
WombaT
Northern Ireland26470 Posts
March 17 2026 14:41 GMT
#111288
On March 17 2026 23:22 dyhb wrote: [nested quotes snipped; see #111285 above]

I mean, some of this would, I think, be mitigated by AIs saying 'hey, I don't know' more frequently and more clearly. I recall trying to use it to answer a really obscure GSL commentary question that I just couldn't find the answer to, and, only really being a TLer myself, I thought it might grab an answer from some other platform. Instead it basically repackaged my own question from here, with an 'I think it might be', as an authoritative answer.

While I did check the source, I didn't really have to, as I recognised my own theory anyway, but it was quite illustrative. But I don't think it's a massive issue; perhaps it is and I'm just wrong on this. People who are bad at Googling, or who use it to find the first link that agrees with their premise to use in an argument, are just going to do the same thing with LLMs, and people who aren't, aren't. Likewise, people who want to pass whatever test will abuse the fuck out of LLMs without actually learning things, but it's a useful tool for those who have an intrinsic interest and enthusiasm for whatever the thing is. In that sense I think such tech is just further exposing and exacerbating existing cultural or institutional structural problems rather than causing any new ones in these domains.
EnDeR_
Spain2832 Posts
March 17 2026 14:45 GMT
#111289
On March 17 2026 23:22 dyhb wrote: [nested quotes snipped; see #111285 above]

Bit of a tough read, that one, but this comment stuck out to me on that piece:

A spokesperson for Leicestershire police (the force the Times reported had the highest rates of arrests for the relevant offences per 100,000) clarified that offences under section 127 and section 1 can include any form of communication and may also be "serious domestic abuse-related crimes".[10]

So the thousands figure also includes domestic abuse, and the number of arrests makes a bit more sense once that gets taken into account. A quick Google yields: "In England and Wales, over 240,000 arrests for domestic abuse were made in 2020." That's from the Office for National Statistics, so hopefully legit. In that context, thousands of arrests related to online communications connected to domestic abuse offences make sense. I mean, the Times piece is borderline "not technically a lie" territory, but pretty close in my view.

EDIT: cleaned the quote.
EDIT2: Link that I clicked in dyhb's post: https://lordslibrary.parliament.uk/select-communications-offences-and-concerns-over-free-speech/#:~:text=The authors reported that police,Wales from 2017 to 2023 but is now removed.
WombaT
Northern Ireland26470 Posts
March 17 2026 14:57 GMT
#111290
On March 17 2026 23:32 Velr wrote: [quote snipped; see #111287 above]

'That includes' can do some heavy lifting for some folks. It's a part of the pie, but some jump to a preconceived notion they have and conclude that the majority of those arrests are simply for unpopular/transgressive social media posts. Quite often it turns out to be the combo of an existing, often long, long-existing criminal offence and some kind of media as the mechanism through which it's committed. So-called revenge porn is a sadly common one. So too harassment, so too libel.

Somebody like Graham Linehan, who's held up by rather odious types as fighting the good fight against trans rights overreach, has to my knowledge never incurred the wrath of the state for talking shit about trans people in general. It's been for frankly deranged levels of consistent harassment of specific individuals. BTW, I'm going from memory, so if I am wrong, I am wrong; just to inb4 that.

The internet isn't some magical, different realm, some alternative reality, but some seem to treat it as if it is. If I posted naked photos of my ex in her workplace, or if I followed some enemy around in their day-to-day and chatted shit about them in earshot of others, I don't think people would object to me being arrested at the very least, or sued, or made subject to a restraining order.

Not that it's all a bed of roses here; we've seen successive governments enact real, actual policies that restrict legitimate political protest. Which, oddly enough, you hear much less about. It appears that for some, the freedom to pejoratively use the n-word online is a bigger infringement on expression than actual infringements on expression. Wonder why that is, eh?
EnDeR_
Spain2832 Posts
March 17 2026 15:03 GMT
#111291
On March 17 2026 23:57 WombaT wrote: [quote snipped; see #111290 above]

I wouldn't actually be all that surprised if the vast majority of prosecutions cited in that Times piece under that particular legislation were domestic abuse cases where that's the only thing they could pin on the person.
WombaT
Northern Ireland26470 Posts
March 17 2026 15:10 GMT
#111292
On March 18 2026 00:03 EnDeR_ wrote: [quote snipped; see #111291 above]

Wouldn't surprise me one iota. Especially as it's just about the easiest thing for police to investigate and subsequently prove was the case.
dyhb
United States204 Posts
March 17 2026 15:56 GMT
#111293
On March 17 2026 23:32 Velr wrote: Show nested quote + It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media

Should it tho? Why? Not behaving like a giant, usually racist, prick spreading lies and/or stoking hate, including downright trying to incite violence, doesn't seem that high of a standard to be allowed to participate in society. Now you could argue that the state sometimes overreaches and prosecutes people it shouldn't... But then I remember that the US has the death penalty and this argument would be extremely hypocritical... Also... I read about the UK arresting citizens for "free speech" in like 5 reddit threads today, but usually the number is ~13'000 and misquoted as from 2025 when it's not. Do you get a daily propaganda newsletter to know what bs to spread?

I'm not in favor of letting the government decide who was enough of a "prick" online to no longer be allowed to participate in society. The only thing I'm willing to concede isn't born out of naivete about governments and backwardness about the relationship between government and society is "incite violence," which has articulable versions that aren't subjective as hell.

One thing you might've missed by quoting out a partial sentence from my post is the statement that "it is an AI issue if it can't cite official crime statistics immediately to back it up." Clearly, in your haste, you must've accidentally skimmed that to mean "and my daily propaganda newsletter told me this is true."

On March 17 2026 23:41 WombaT wrote: Show nested quote + On March 17 2026 23:22 dyhb wrote: On March 17 2026 21:14 EnDeR_ wrote: It should be concerning that the UK arrests and prosecutes over 1,000 people a year for a legal standard that includes offensive and indecent speech on social media, but it is an AI issue if it can't cite official crime statistics immediately to back it up.

I got much the same from my attempts. It gave me a gov.uk link, but it didn't say what the AI said I'd find there. So the better answer was that the specific crimes ought to have received a breakdown at said link, but it didn't have them and was relying on something else.

On March 17 2026 17:41 Liquid`Drone wrote: On March 17 2026 06:34 EnDeR_ wrote: On March 17 2026 05:36 Liquid`Drone wrote: On March 17 2026 03:46 EnDeR_ wrote: On March 17 2026 02:06 Liquid`Drone wrote: On March 16 2026 23:00 LightSpectra wrote: On March 16 2026 15:06 Liquid`Drone wrote: Your reading comprehension is off. I'm not negative towards AI as a tool for learning, and I had no issues with baal posting the summary as a source. I do have issues with people posting chatgpt posts as arguments, but that is different.

LLMs as a learning tool are extremely dubious at best, catastrophic at worst. Aside from the documented fact that they hallucinate up to 40% of their information, the horrific environmental effects, and the predictable outcome of letting the authoritarian billionaire class gatekeep information (remember when Grok would start talking about "white genocide" when asked about literally anything?), they're also extremely sycophantic, which makes overconfident uneducated people even less open to new points of view (Dunning-Kruger effect).

Grok is an outlier and should not be trusted for anything. I can also be on board with being negative towards using ChatGPT because OpenAI - unlike Anthropic - didn't refuse to cooperate with the Pentagon regarding mass surveillance or fully autonomous weapons. I'm not gonna argue against the environmental effects, but this idea that AI is entirely bad and without positive sides is nonsense. If you talk to chatgpt or copilot or gemini about subjects that are uncontroversial or well established, it's good. You'll get solid answers that largely correspond with the truth (or well, the 'most accepted/established information'). When you say they 'hallucinate up to 40% of their information', what does that even mean? You think it's wrong 40% of the time? Or that 'on certain weird, niche subjects where it doesn't have much knowledge, it will still pretend to know what it's talking about and then it can, in those specific situations, confidently make up 40% of what it tells you'? That's an issue - for sure - but if you want to educate yourself on photosynthesis or grammar rules or the consequences of the industrial revolution, LLMs are fantastic. There are many upsides of using genAI tools. I use it regularly.

I should also say that in a scientific context, I find the AI to be more inaccurate than accurate, no matter the model I'm using. Granted, I'm not asking how photosynthesis works. Anecdotally, AI summaries are how many students learn; why bother with the course material when you can just study a summary of it. My PhD students don't read papers, they read AI summaries of papers. In a scientific context, this is bad because to make the AI summary, it's dumbing down the content and giving results that are inaccurate. My students can't tell that they're getting inaccurate information, and this is becoming a serious problem. My point was that AI summaries, especially within search results, have a huge problem: either the tech company decides which information you see (by biasing sources), or it feeds you whatever was in the search results, so you have no way of telling (unless you go and check the sources!) whether the information is coming from a reputable source or not. When people unquestioningly take AI summaries as facts without checking, like baal was doing, this, to me, is a serious problem.

My wife is doing a PhD and for her, AI has been an invaluable asset. In particular for statistical analysis, which she herself did not have the skillset to do. Then she's gotten stuff double- and triple-checked by people with those skills, and overwhelmingly, AI has been spot on. She doesn't use it the way your students do - and I agree it's important that students develop the grit and tenacity to actually read academic papers - but if I want to read a PhD paper about a scientific field where I'm not literate, I will get a more accurate understanding from reading an AI summary than I will from reading the actual paper. If it's a field where I know my shit, nope, still gonna trust myself. For the AI summary part - it's basically like googling something, visiting the first link you get, and using data from that link as your evidence. Can it be wrong? Sure. But it's a) significantly less time consuming and b) not wrong by default, especially if you're talking about a non-controversial topic. Like baal himself said, using an AI summary to get an answer to "Do the Cuban people hate their government" would be awful - but for 'gun ownership rates for different countries', it's fine.

Agree that AI can help you do stuff that you don't have the background for. Especially if the solution involves writing some code (like doing statistical analysis). This is fine if it's a well-defined problem with known solutions. It quickly craps out if you want to apply it to something new, but I digress. To your latter point, this isn't as simple as you are making it out to be. A question like "how many immigrants did ICE deport in 2025" should not be controversial; it should just be a number collected from reports, similar to how numbers for gun ownership for different countries are collected. And yet, would you trust any number that it gives you? Would you trust it if it summarised the White House website? I am not saying that it's wrong by default, but that you can't assume the answer is right without checking it. And people don't check. I don't think this is a good direction of travel.

But the AI summary does give sources. Incidentally - for whatever reason - when I just googled 'how many immigrants did ICE deport in 2025' there wasn't even an AI summary created for me, and I just got a bunch of different sources. However, asking ChatGPT that very same question, I get a pretty nuanced answer, which links to four different sources: + Show Spoiler + The exact number is uncertain, because official U.S. data for 2025 has been incomplete and inconsistent. But based on the best available estimates: ≈ 540,000 people were deported in 2025 (widely cited estimate) (source). Other analyses/projected totals suggest roughly 500,000–600,000 deportations for the year (source). Some government-related figures and reporting also point to totals in that same range (around 540,000) (source). Important context: This total usually includes deportations carried out by ICE + border authorities (CBP), not just ICE alone. Estimates vary because: The government released limited or inconsistent data in 2025 (source). Different sources count deportations slightly differently (e.g., interior removals vs. border removals). Bottom line 👉 A reasonable, evidence-based answer is: About 500,000–600,000 immigrants were deported from the U.S. in 2025, with ~540,000 being the most commonly cited figure. In my opinion this is a very solid answer to the question I asked. It's not overconfident, and it uses a reasonable set of sources as the foundation for its answer. And this corresponds very well with my experience using chatgpt or copilot as a google-substitute - or even looking at the AI summary. When I wrote my previous post where I linked baal a source from Education Weekly about how AI use influences brain activity, that was a source I got from the AI summary of my google search.

I am not saying there is no benefit to AI tools. You can definitely use them in a way that genuinely saves you time. I'm not debating that. What I'm saying is that AI summaries sound authoritative and most people don't question them -- I think you are a clear outlier in that you actually clicked the sources it cited for you. My main beef is that it turns all primary sources into secondary sources. I took Baal's statement and turned it into a question: has anyone gone to prison in western democracies for having an opinion? And the bot answered: Yes, individuals have gone to prison in western democracies for expressing opinions, although such cases are typically prosecuted under specific limitations on free speech, such as hate speech laws, Holocaust denial, or incitement to violence, rather than simply for holding an unpopular opinion. (link to something) Which sounds authoritative, until you click on the link and it goes to: https://en.wikipedia.org/wiki/Political_prisoner Which is a general definition of what a political prisoner is. The bot made that statement sound very authoritative and even added a citation to it! Then it hit me with this: Examples in Western Democracies United Kingdom: The UK has seen thousands of arrests related to online communication offences, with reports suggesting over 30 people are arrested daily for speech crimes. Individuals have been imprisoned for online posts, including a journalist having police at her door over a tweet, and instances of jail time for social media posts deemed threatening or abusive. Which again sounds very authoritative, but there's no link. I then scrolled to the actual search results, and this is what it summarised: https://www.persuasion.community/p/europe-really-is-jailing-people-for <- tell me you don't get creepy vibes from the website https://www.reddit.com/r/ShitAmericansSay/comments/1g8uztr/europeans_go_to_jail_for_stuff_they_say_online/ <- a reddit post https://www.quora.com/Do-any-countries-imprison-their-opponents-for-speaking-against-the-government <- some quora post And the rest of the sources are either not relevant, or actually say the opposite.

I mean, some of this would I think be mitigated by AIs saying 'hey, I don't know' a bit more frequently and clearly. I recall trying to use it to answer a really obscure GSL commentary occurrence that I just couldn't find, and only really being a TLer I thought it might grab an answer from some other platform. Instead it basically repackaged me asking the question here, with an 'I think it might be', as an authoritative answer. While I did check the source, I didn't really have to, as I recognised my own theory anyway, but it was quite illustrative. But I don't think it's a massive issue; perhaps it is and I'm just wrong on this. People who are bad at Googling, or are using it to find the first link that agrees with their premise to use in an argument, are just going to do the same thing with LLMs, and people who aren't, aren't. Likewise, people who want to pass whatever test will abuse the fuck out of LLMs without actually learning things, but it's a useful tool for those who have an intrinsic interest and enthusiasm for whatever the thing is. In that sense I think such tech is just further exposing and exacerbating existing cultural or institutional structural problems rather than causing any new ones in these domains.

Exactly. The AI should respond to its lack of actual high-quality evidence by prefacing the answer with an "I don't know" of some kind. Instead, every week I see an AI cite a source that doesn't say what it's paraphrasing. I am left interpreting the bad AI answer as some kind of fudged evidence, like it internally concluded that it is a supported, high-quality statement but can't locate the evidence for me at that precise time.

On March 17 2026 23:45 EnDeR_ wrote: Show nested quote +
Bit of a tough read that one, but this comment stood out for me on that piece: Show nested quote + A spokesperson for Leicestershire police (the force the Times reported had the highest rates of arrests for the relevant offences per 100,000) clarified that offences under section 127 and section 1 can include any form of communication and may also be "serious domestic abuse-related crimes".[10]

So the 1,000s figure also includes domestic abuse, and the number of arrests makes a bit more sense if that gets taken into account. A quick google yields: "In England and Wales, over 240,000 arrests for domestic abuse were made in 2020". That's from the Office for National Statistics, so hopefully legit. In that context, thousands of arrests related to online communications relating to domestic abuse offences make sense. I mean, the Times piece is borderline "not technically a lie" territory, but pretty close in my view. EDIT: cleaned the quote. EDIT2: Link that I clicked on dyhb's post: https://lordslibrary.parliament.uk/select-communications-offences-and-concerns-over-free-speech/#:~:text=The authors reported that police,Wales from 2017 to 2023 but is now removed.

It sounds like any sound AI model should say, truthfully, that it hears such claims repeated by mainstream sources but cannot find any high-quality primary source itself. If I were to agree with Liquid`Drone a little more than I do, then I'd think AI would easily analyze all crime statistics from western democracies and tell me which nations arrest and prosecute the most citizens for social media posts. The UK could be unexceptional for all I know, and the US might well have more incitement-to-violence prosecutions per capita than the UK has arrests for non-crime hate incidents under the Public Order Act. | ||
|
JimmyJRaynor
Canada17388 Posts
March 17 2026 16:03 GMT
#111294
https://x.com/i/status/2033897242986209689 I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. In other news, evil prevails again as the USA defeated the DR in the WBC semis. | ||
|
Velr
Switzerland10866 Posts
March 17 2026 16:07 GMT
#111295
On March 18 2026 00:56 dyhb wrote: Show nested quote + I'm not in favor of letting the government decide who was enough of a "prick" online to no longer be allowed to participate in society. The only thing I'm willing to concede isn't born out of naivete about governments and backwardness about the relationship between government and society is "incite violence," which has articulable versions that aren't subjective as hell. One thing you might've missed by quoting out a partial sentence from my post is the statement that "it is an AI issue if it can't cite official crime statistics immediately to back it up." Clearly, in your haste, you must've accidentally skimmed that to mean "and my daily propaganda newsletter told me this is true."

That's because I didn't care about the AI part of your posts. I just found it interesting how interlinked you guys seem to be, sharing the same bs half-truths and declaring them to be "facts". I don't care if you got it from asking an AI, from a newsletter, or directly from daddy Trump; that you get it from somewhere and then share it is bad enough on its own. | ||
|
LightSpectra
United States2336 Posts
March 17 2026 16:20 GMT
#111296
Trump says he can do 'anything I want' with Cuba: https://www.reuters.com/world/americas/trump-says-he-thinks-he-will-have-honor-taking-cuba-2026-03-16/ I wish he would stop talking about foreign countries like he does prepubescent girls. | ||
|
GreenHorizons
United States23773 Posts
March 17 2026 16:23 GMT
#111297
On March 18 2026 01:20 LightSpectra wrote: Trump says he can do 'anything I want' with Cuba: https://www.reuters.com/world/americas/trump-says-he-thinks-he-will-have-honor-taking-cuba-2026-03-16/ I wish he would stop talking about foreign countries like he does prepubescent girls. Who is going to stop him? | ||
|
LightSpectra
United States2336 Posts
March 17 2026 16:32 GMT
#111298
On March 18 2026 01:23 GreenHorizons wrote: Show nested quote + On March 18 2026 01:20 LightSpectra wrote: Trump says he can do 'anything I want' with Cuba: https://www.reuters.com/world/americas/trump-says-he-thinks-he-will-have-honor-taking-cuba-2026-03-16/ I wish he would stop talking about foreign countries like he does prepubescent girls. Who is going to stop him?

You know, there was in fact someone in this thread who suggested using firearms to indefinitely resist the military, and implied everyone who wouldn't do so is a coward. Wonder what happened to them. | ||
|
dyhb
United States204 Posts
March 17 2026 16:36 GMT
#111299
On March 18 2026 01:07 Velr wrote: Show nested quote + That's because I didn't care about the AI part of your posts. I just found it interesting how interlinked you guys seem to be, sharing the same bs half-truths and declaring them to be "facts". I don't care if you got it from asking an AI, from a newsletter, or directly from daddy Trump; that you get it from somewhere and then share it is bad enough on its own.

I think a normal person can read this thread and conclude that there's interest in using an AI to get some (ostensibly) publicly available facts. This is a public thread with three people and 6+ posts on specifically this fact or non-fact. I do find it quite humorous to conclude that the TeamLiquid US Politics Thread is a daily propaganda newsletter (or AI, or daddy Trump) according to your logic. Someone posts an AI summary, another analyzes it, and two pages later a third person is a bad person for critiquing the AI responses! TL radicalization confirmed!

If I can talk you down from this ledge, I think you can admit that subjects in general discussion within a thread are not engaged with by only fringe radicals of a particular type. At least, I'm hopeful that you can admit that. Questioning an AI source for how it's making those claims of fact is different than endorsing AI as having correctly found and presented the facts. If none of those distinctions are true for you, then I don't think we can have a conversation on political subjects at all. | ||
|
Biff The Understudy
France8011 Posts
March 17 2026 16:55 GMT
#111300
| ||
| ||