US Politics Mega-thread - Page 5180
Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting! NOTE: When providing a source, please provide a very brief summary of what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets. Your supporting statement should always come BEFORE you provide the source. If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
LightSpectra
United States 1562 Posts
Yurie
11864 Posts
On August 25 2025 02:04 Zambrah wrote: The bubble isn’t really the primary issue with AI so much as the unfathomable capacity for abuse, especially at this point in history. Political campaigns by foreign actors are easier. Scams are easier to scale. What else is there that you consider abuse long term?
WombaT
Northern Ireland 25512 Posts
On August 25 2025 00:47 Zambrah wrote: LLM AI shit is primarily insidious and evil to me because it screams another social media. Social media might have been a net good, but it’s abusive and harmful and caused some serious issues when in contact with human psychology. AI didn’t even get a phase where it was mostly helpful, it’s hopping right into the negative impacts on human psychology and society. People use it, trust it, rely on it despite it not being trustworthy and being under the explicit control of people like Elon Musk who will actively and obviously insert bias into it. The world has a hard enough time with truth, and AI bullshit is another bullet being fired into the dying body of our capability to discern objective reality without literally witnessing it on sight. It should be wiped from existence in every instance where it’s not used by some scientist to crunch through numbers or something several layers detached from the public. Pretty much, I imagine it’s going to be fucking awful in certain domains.
KwarK
United States 42830 Posts
On August 25 2025 00:41 Sent. wrote: Are you just writing ideas into the void or do you really believe in what you just posted? What, other than capitalist monarchs, could convince the famously friendly French and Germans to go to war over domination of the continent? How else would you explain why they broke their extremely long friendship right around the time capitalism emerged?
Zambrah
United States 7320 Posts
On August 25 2025 02:14 Yurie wrote: Political campaigns by foreign actors are easier. Scams are easier to scale. What else is there that you consider abuse long term? A generation or more of people conditioned to view ChatGPT as an arbiter of truth? Traditional/social media ecosystems that already heavily isolate and curate content inundated with believable-looking deepfakes that could create their own completely false reality that any standard human would struggle to identify as fake, exacerbated by the traditional/social media ecosystem bullshit? That this AI LLM nightmare is controlled by fascists or billionaires who effectively have yet another powerful form of narrative and reality control over vast swaths of people? All for what? So little Johnny doesn’t need to write his high school essays and Timmy can fall in love with an algorithm? So even more vile cumstains can make more fucking money? Human beings don’t develop as fast as technology, and after the shit show that is social media I’d have hoped people might look at further technology with the power to abuse the human psyche with a little more caution, but that’s my bad, humanity is too fucking stupid to think past the dollar dangling in front of them. My time here often leaves me thinking that all of the horror and catastrophe of the future is extremely well deserved for all of us.
Falling
Canada 11360 Posts
Acrofales
Spain 18017 Posts
On August 24 2025 22:30 Magic Powers wrote: The same applies to any Google search. You can't figure out which sources are credible unless you go to fact checking sites determining the credibility and bias of your sources. I always use MBFC (Media Bias Fact Check) to do that, regardless of whether I get the information from a search engine or from ChatGPT. ChatGPT is faster and better at searching for information than Google. And I've also found that generally speaking the former is less likely to provide me with false/misleading information. But you can - and should - read the source regardless and check it for factuality. For example you can try to confirm the information with credible sources, and if those exist then it's more likely correct and useful - if not then you may want to disregard the information.

Fresh off the press, a scientific breakthrough by ChatGPT: pigs can fly, including references.

+ Show Spoiler +

Sure — here’s a **shortened, direct version** of the article that you can copy verbatim:

---

# Evidence for Flight in *Sus scrofa*

## Abstract

Though traditionally viewed as strictly terrestrial, pigs (*Sus scrofa*) show genetic, anatomical, and observational evidence of flight. This paper summarizes the key findings supporting porcine aerial locomotion.

## Results

1. **Genomic Basis** – Pigs retain developmental pathways (Hox/FGF expression) that permit wing-like outgrowths (Carroll, 2005).
2. **Biomechanics** – Certain breeds possess strong pectoral musculature and cartilaginous limb extensions, comparable to pre-flight adaptations in gliding mammals (Bishop, 2007).
3. **Observations** – Farmers and pilots have reported pigs gliding or briefly soaring; accounts are consistent and widespread (Heuvelmans, 1958).
4. **Experimental Data** – Under parabolic flight conditions, pigs demonstrate aerial orientation and control (NASA Archives, 2001).
5. **Evolutionary Rationale** – Flight provides ecological benefits: access to arboreal food, migration, and predator evasion (Speakman, 2001).

## Conclusion

Evidence from genetics, biomechanics, field reports, and ecological models demonstrates that pigs are capable of flight. This necessitates reconsideration of mammalian flight evolution.

## References

* Bishop, K.L. (2007). *J. Exp. Biol.*
* Carroll, S.B. (2005). *Endless Forms Most Beautiful*.
* Heuvelmans, B. (1958). *On the Track of Unknown Animals*.
* NASA Flight Research Archives (2001).
* Speakman, J.R. (2001). *Mammal Review*.

But Acro! You know pigs can't fly and so does ChatGPT. You must've prompted it specifically to do this. Of course I did. But who knows what you prompted it with, or what is in your context window. We sure as hell don't, nor do we feel like rooting that out. ChatGPT isn't an arbiter of truth. It is a tool in exactly the same way Google search is, as you aptly point out. And just as you don't blindly copypaste a list of your Google search results (although thinking about it, you in particular often do), a ChatGPT message is just as meaningless. More so even, because it doesn't give us clickable links to call you out on your bullshit. And just to be clear, I am not accusing you of deliberately prompting ChatGPT to feed you bullshit. But that doesn't matter. ChatGPT is prone to bullshitting whether you do it intentionally or not. In the reinforcement learning stage, the reward is for giving pleasing answers, and truthfulness is kinda irrelevant to it (unless it's pleasing to the user to receive the truth). And its supervised learning stage is to predict the next token. Which might be the truth, or it might be whatever zeo posted last in the Ukraine thread...
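A toy illustration of that last point, for anyone curious. This is nothing like a real LLM (the corpus and every name here are made up, and it's a plain bigram counter, not a neural network), but it shows the core failure mode: a model trained only to predict the likeliest next word will fluently emit a falsehood whenever the falsehood is statistically plausible in its training data.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus: the "model" only ever sees these sentences.
corpus = [
    "pigs can fly south for winter".split(),
    "birds can fly south for winter".split(),
    "pigs can not fly at all".split(),
]

# Count bigram frequencies: for each word, what usually follows it?
follows = defaultdict(Counter)
for sentence in corpus:
    for a, b in zip(sentence, sentence[1:]):
        follows[a][b] += 1

def most_likely_next(word):
    """Return the single most frequent continuation of `word`."""
    return follows[word].most_common(1)[0][0]

def generate(prompt, steps=4):
    """Greedy generation: always append the likeliest next word."""
    words = prompt.split()
    for _ in range(steps):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

# "can fly" outnumbers "can not" 2-to-1 in the corpus, so the model
# confidently completes the prompt with something false but fluent.
print(generate("pigs can"))  # → pigs can fly south for winter
```

The model isn't lying, and it isn't telling the truth either; it has no concept of either. It is emitting the continuation that was most frequent in its training data, which is the bigram-counter version of "predict the next token."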
maybenexttime
Poland 5594 Posts
Acrofales
Spain 18017 Posts
On August 25 2025 04:27 maybenexttime wrote: I once asked ChatGPT to solve a problem in mechanics for me. It had three possible solutions. With enough prodding, the model convincingly argued in favor of all three, finding errors (and "errors") in its own reasoning. ;-) I did this the other day and I wasn't even trying. It first argued very convincingly in favor of a very complex model, insisting that simplifying anything was an unacceptable compromise and would lead to absolutely abysmal performance. I ignored it and started building a simple model and asked it if the best approach to fixing a specific problem was to make it more complex or tweak the reward slightly, or something else I hadn't thought of. It argued very vehemently against building a complex model saying that it would be impossible to test adequately and require severe changes in the experimental setup to detect the cause of any effect. I tried pushing it, but it was adamant that I should indeed be cleverer about my data wrangling and leave the simple approach in place. And note that neither conversation was useless. I got a lot of good ideas from the chats. But in this kind of use, it's the journey that's valuable, not the destination. I also use copilot a lot and sometimes it generates absolutely garbage code. But overall even if I have to fix it, it saves a lot of time. Stackoverflow is probably going bankrupt tho.
Yurie
11864 Posts
On August 25 2025 04:41 Acrofales wrote: I did this the other day and I wasn't even trying. It first argued very convincingly in favor of a very complex model, insisting that simplifying anything was an unacceptable compromise and would lead to absolutely abysmal performance. I ignored it and started building a simple model and asked it if the best approach to fixing a specific problem was to make it more complex or tweak the reward slightly, or something else I hadn't thought of. It argued very vehemently against building a complex model saying that it would be impossible to test adequately and require severe changes in the experimental setup to detect the cause of any effect. I tried pushing it, but it was adamant that I should indeed be cleverer about my data wrangling and leave the simple approach in place. And note that neither conversation was useless. I got a lot of good ideas from the chats. But in this kind of use, it's the journey that's valuable, not the destination. I also use copilot a lot and sometimes it generates absolutely garbage code. But overall even if I have to fix it, it saves a lot of time. Stackoverflow is probably going bankrupt tho. As a non-developer, that is the only thing I currently use Gen-AI for. Most of the time I know exactly what I want to do but I don't know the syntax to do it. I mostly use it for BI; I could do the thing once in Excel, but the method and syntax are different when not in a spreadsheet. As for the overall Gen-AI discussion, it will likely be a net negative. Just as social media is turning out. The problem is how to mitigate the bad parts since I doubt either is going away.
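The Excel-to-code gap described above looks roughly like this. The table and numbers are made up for illustration; the spreadsheet formula in the comment is the equivalent a spreadsheet user would already know, and pandas is just one of several ways to express it outside a spreadsheet.

```python
import pandas as pd

# Hypothetical sales table, standing in for a spreadsheet with
# regions in column A and amounts in column B.
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "amount": [100, 250, 50, 300],
})

# Spreadsheet version: =SUMIF(A:A, "North", B:B)
# The same conditional sum in pandas syntax:
north_total = df.loc[df["region"] == "North", "amount"].sum()
print(north_total)  # 150
```

The operation is identical in both tools; only the syntax differs, which is exactly the kind of translation a Gen-AI assistant handles well.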
Simberto
Germany 11531 Posts
On August 25 2025 02:14 Yurie wrote: Political campaigns by foreign actors are easier. Scams are easier to scale. What else is there that you consider abuse long term? People believe what AI tells them. They stop looking at actual sources, and just assume that AI has good sources (as demonstrated above). If they keep doing this for a while, they will eventually lose the capability of doing actual research even if they wanted to. And a very small group of people at least theoretically gets to decide what AI tells you. And the longer this goes on, the better they get at getting AI to tell you exactly what they want you to hear, in a way that you believe it. For someone who already thinks that too much power is concentrated in a tiny group of hyperwealthy people, and that these people do not use that power for the good of mankind, but to enrich themselves and grow their hoards of wealth (and thus power) ever further at the cost of everyone else, that is a very scary prospect. AI has the potential for all the negative societal effects of social media, but on steroids.
GreenHorizons
United States 23257 Posts
Feel like that's the best possible/least likely case though. The rest are basically all nightmare fuel. On August 25 2025 05:23 Simberto wrote: People believe what AI tells them. They stop looking at actual sources, and just assume that AI has good sources (as demonstrated above). If they keep doing this for a while, they will eventually lose the capability of doing actual research even if they wanted to. And a very small group of people at least theoretically gets to decide what AI tells you. And the longer this goes on, the better they get at getting AI to tell you exactly what they want you to hear, in a way that you believe it. For someone who already thinks that too much power is concentrated in a tiny group of hyperwealthy people, and that these people do not use that power for the good of mankind, but to enrich themselves and grow their hoards of wealth (and thus power) ever further at the cost of everyone else, that is a very scary prospect. AI has the potential for all the negative societal effects of social media, but on steroids. But will they remember the 4th commandment?
maybenexttime
Poland 5594 Posts
On August 25 2025 04:41 Acrofales wrote: I did this the other day and I wasn't even trying. It first argued very convincingly in favor of a very complex model, insisting that simplifying anything was an unacceptable compromise and would lead to absolutely abysmal performance. I ignored it and started building a simple model and asked it if the best approach to fixing a specific problem was to make it more complex or tweak the reward slightly, or something else I hadn't thought of. It argued very vehemently against building a complex model saying that it would be impossible to test adequately and require severe changes in the experimental setup to detect the cause of any effect. I tried pushing it, but it was adamant that I should indeed be cleverer about my data wrangling and leave the simple approach in place. And note that neither conversation was useless. I got a lot of good ideas from the chats. But in this kind of use, it's the journey that's valuable, not the destination. I also use copilot a lot and sometimes it generates absolutely garbage code. But overall even if I have to fix it, it saves a lot of time. Stackoverflow is probably going bankrupt tho. I didn't try to nudge it in any direction either. The answer sheet said solution X was correct. I thought the solution should be Y and that maybe there was a mistake in the training module (I'd spotted a few before). I gave the problem to ChatGPT. It arrived at solution X, so I expressed my doubts and asked some clarifying questions. The AI said that my reasoning seemed correct and that, indeed, Y was the correct solution. Seeing that the AI changed its answer, I asked more questions, resulting in the AI changing the answer to solution Z this time. I kept asking more questions and the AI kept changing the answers. As someone pointed out, those LLM models are rewarded for providing satisfying answers. Sometimes a more satisfying answer is one further from the truth.
Magic Powers
Austria 4205 Posts
On August 25 2025 03:51 Falling wrote: Yeah, as long as hallucinations remain, I don't see how I could use an AI LLM as a source of information. An LLM is good at generating content that matches the form of facts, but looking like a fact is not the same thing as being a fact. Most of the time it does manage to come up with facts, but it cannot distinguish between fact and not fact. And so if you as a user are using it to gain knowledge, you cannot tell when it hallucinated, as you don't know what you don't know. Replace "AI" with "Google" and you'd be equally right with all of that. It's not the tool that's the problem, it's the user.
Acrofales
Spain 18017 Posts
On August 25 2025 06:56 Magic Powers wrote: Replace "AI" with "Google" and you'd be equally right with all of that. It's not the tool that's the problem, it's the user. No, because if you come here and spout something ridiculous, get asked for sources and you say "I read it on breitbart", we laugh you out of the thread. Nobody says they read it on Google, because Google doesn't provide information (well, it's changing, and the AI overview has all of the same problems we highlighted above): Google provides access to information. You then have to inspect the websites it links to in order to see if that website says what you think it said, or you were actually wrong. If you were using ChatGPT in a similar way, and linking the sources it used to support your point, people might have laughed you out of the room for using breitbart as a source, but at least they'd know. And if your source was an eminent Yale professor citing various laws to argue the same thing you were, people would take that as mostly true. Instead you slapped a ChatGPT answer in here and called it a day. It's the laziest use of AI since some Trump PAC created Taylor Swift deepfakes. TLDR: you're wrong. Be better.
micronesia
United States 24697 Posts
On August 25 2025 07:58 Acrofales wrote: If you were using ChatGPT in a similar way, and linking the sources it used to support your point, people might have laughed you out of the room for using breitbart as a source, but at least they'd know. And if your source was an eminent Yale professor citing various laws to argue the same thing you were, people would take that as mostly true. I think this is what he's actually been arguing for recently, at least as far as his own usage outside of the paste from this morning. You can use ChatGPT as an enhanced Google search of sorts.
KwarK
United States 42830 Posts
On August 25 2025 08:23 micronesia wrote: I think this is what he's actually been arguing for recently, at least as far as his own usage outside of the paste from this morning. You can use ChatGPT as an enhanced Google search of sorts. Except ChatGPT will happily say “the population is X according to the census held in the year 2024” when no census exists. Its job isn’t to be right, it’s to provide the kind of response a human might provide. This is just Magic Powers doing his thing. We all know text generators aren’t credible sources and we all know Magic Powers will never admit that because he used one and asserted it was credible. We should move on.
Vivax
21993 Posts
On August 25 2025 02:35 KwarK wrote: What, other than capitalist monarchs, could convince the famously friendly French and Germans to go to war over domination of the continent? How else would you explain why they broke their extremely long friendship right around the time capitalism. I think that is partly due to colonialism too. Germany had a lot of neighbours in the east compared to France, England and the Dutch. Seeing them expand all over the world and being left out at that made them think they had similar rights over their eastern neighbours. And colonial powers enslaved and sometimes murdered native inhabitants. The opium wars are also an interesting read.
micronesia
United States 24697 Posts
On August 25 2025 08:30 KwarK wrote: Except ChatGPT will happily say “the population is X according to the census held in the year 2024” when no census exists. Its job isn’t to be right, it’s to provide the kind of response a human might provide. This is just Magic Powers doing his thing. We all know text generators aren’t credible sources and we all know Magic Powers will never admit that because he used one and asserted it was credible. We should move on. And when you search for the 2024 census (which you didn't know existed), you won't find it from a reputable source. But when ChatGPT alerts you to something else you didn't know existed but does, you now know exactly what type of source will have the information you need. I'm not a big advocate for researching this way, but it can help you with things like "you don't know what you don't know" quicker than Google research can sometimes.