US Politics Mega-thread - Page 1694

Forum Index > General Forum
Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please include a very brief summary of what it's about and what it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
Slydie
Profile Joined August 2013
1931 Posts
July 20 2019 11:28 GMT
#33861
On July 20 2019 19:53 Uldridge wrote:
I think capitalism will eat itself once the automation/AI revolution is in full swing. Either that or we'll have a harsh dystopian outcome. The reaction of the demographic at risk during that time needs to be the correct one, because only they will have the power to change their fate.


We'll be OK; we have a lot of experience in inventing new jobs when old ones disappear or are moved to low-cost countries.

Capitalism isn't going anywhere. You can try to limit how much power the richest have, but I fail to see how strict regulations can fully replace market mechanisms. The market will always find a way, as all the failed attempts to replace it have shown so far.

Another unresolvable problem is the potential corruption of a socialist system, and the fact that it is essentially anti-democratic. You can say capitalism exploits human weaknesses, but strict socialism fails to take them into account and is doomed to fail.

(I am not talking about social democratic solutions, which are firmly within regulated capitalism.)
Buff the siegetank
Nebuchad
Profile Blog Joined December 2012
Switzerland12405 Posts
July 20 2019 11:44 GMT
#33862
On July 20 2019 14:37 CosmicSpiral wrote:
While historical accounts give the impression neoliberalism was a natural progression from capitalism and liberalism, it represented a marked break from both traditions. It was the brainchild of a few public intellectuals who aggressively promoted it through think tanks and adviser positions; its key representatives viewed liberalism and democracy as antinomies, although they didn't proclaim so outside of the MPS; the epistemological justification for ceding judgment to the market would have been utterly alien to Smith or any Enlightenment-influenced philosopher.

It's late over here. I'll fully elaborate on each post tomorrow.


I understand that it's a break from tradition, it's just one that I find utterly logical, and one that I would expect to happen given the principles of capitalism. It's on a really basic level; we give the main share of power to a group of people, and after a while, theories of economics that favor this group of people even more start appearing and gaining prominence (as they are backed by some of the people who have the most influence and the most power). That makes a lot of intuitive sense to me regardless of what Smith thought.

You can find the ancestor of trickle-down economics before the 1929 crash. It wasn't full neoliberalism, as there was no aspect of globalization, but that makes sense as well given how nations operated then and how they operate now.

Also I would definitely agree that there is a tension between democracy and liberalism.
No will to live, no wish to die
Gahlo
Profile Joined February 2010
United States35171 Posts
July 20 2019 11:59 GMT
#33863
On July 20 2019 20:28 Slydie wrote:
Show nested quote +

The market has already failed many, many people immensely.
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
Last Edited: 2019-07-20 14:14:22
July 20 2019 13:28 GMT
#33864
--- Nuked ---
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 14:15 GMT
#33865
On July 20 2019 22:28 JimmiC wrote:
Show nested quote +

So have attempts at communism. And saying that the people who have tried are the problem is lazy thinking. There are cultural human issues at play that have consistently caused it to fail the very people it is meant to help. It sounds great to say you are going to change the incentives of society so people strive for different things. But this is a super hard thing to do even on a super small scale; how do you do it on a super large scale? What do you do with the people who don't want the change? Or can't change?
The problem is human nature, so it's not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.
It ignores such insignificant forces as time, entropy, and death
GreenHorizons
Profile Blog Joined April 2011
United States23672 Posts
July 20 2019 14:19 GMT
#33866
On July 20 2019 20:59 Gahlo wrote:
Show nested quote +

The market has already failed many, many people immensely.

There's an ongoing issue of people conflating capitalism and markets as one and the same, and they aren't. You can have markets without capitalism, and I think some people don't make that distinction in their understandings/arguments.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
July 20 2019 14:23 GMT
#33867
--- Nuked ---
GreenHorizons
Profile Blog Joined April 2011
United States23672 Posts
Last Edited: 2019-07-20 14:28:46
July 20 2019 14:27 GMT
#33868
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature, so it's not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature," we discover time and again that it's mostly learned behavior that can be changed by changing the circumstances/learning.

A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.

+ Show Spoiler +
This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?

To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.

The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample.
Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.


www.scientificamerican.com
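A side note on the paradigm itself: the "public goods game" mentioned in the article has a simple payoff structure that can be sketched in a few lines of Python. This is a minimal illustration under assumed parameters (the endowment and multiplier values here are arbitrary, not the ones used in the studies):

```python
# Minimal public goods game: each player contributes some amount of an
# endowment to a shared pot; the pot is multiplied and split evenly.
# Full cooperation maximizes the group's total payoff, while contributing
# nothing maximizes an individual's payoff -- the tension the studies measure.
def public_goods_payoffs(contributions, endowment=10, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone cooperates fully: each nets 10 - 10 + (40 * 1.6) / 4 = 16
print(public_goods_payoffs([10, 10, 10, 10]))
# One free-rider against three cooperators: the free-rider nets 22,
# each cooperator only 12 -- defection pays individually.
print(public_goods_payoffs([0, 10, 10, 10]))
```

Because the multiplier (1.6) is smaller than the group size (4), each contributed unit returns only 0.4 to the contributor, so contributing is individually costly even though it raises the group total.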
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Mohdoo
Profile Joined August 2007
United States15742 Posts
July 20 2019 14:30 GMT
#33869
Seizing a UK tanker feels like a profoundly bad idea for Iran. If Europe gets on board with "Fuck Iran", it is lights out. It would be a huge victory for Trump if Europe were to get legitimately pissed at Iran.
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 14:34 GMT
#33870
On July 20 2019 23:27 GreenHorizons wrote:
Show nested quote +


When we do research looking for the traits people claim to be "human nature," we discover time and again that it's mostly learned behavior that can be changed by changing the circumstances/learning.

Show nested quote +
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.
It ignores such insignificant forces as time, entropy, and death
Nebuchad
Profile Blog Joined December 2012
Switzerland12405 Posts
July 20 2019 14:38 GMT
#33871
On July 20 2019 23:34 Gorsameth wrote:
Show nested quote +
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
No will to live, no wish to die
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 14:40 GMT
#33872
On July 20 2019 23:38 Nebuchad wrote:
Show nested quote +


I don't understand why you think we need to remove selfishness
Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.
It ignores such insignificant forces as time, entropy, and death
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
July 20 2019 14:42 GMT
#33873
--- Nuked ---
Nebuchad
Profile Blog Joined December 2012
Switzerland12405 Posts
July 20 2019 14:45 GMT
#33874
On July 20 2019 23:40 Gorsameth wrote:
Show nested quote +
Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be made illegal.
No will to live, no wish to die
GreenHorizons
Profile Blog Joined April 2011
United States23672 Posts
Last Edited: 2019-07-20 14:46:50
July 20 2019 14:45 GMT
#33875
On July 20 2019 23:34 Gorsameth wrote:
Show nested quote +
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature, so it's not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature", we discover time and again that it's mostly learned behavior that can be changed by changing the circumstances/learning.

A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.

This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?

To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.

The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample.
Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.


www.scientificamerican.com
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


Depends on how literally you mean that?

"all of humanity" isn't going to do anything except maybe go extinct. A selfish/corrupt society doesn't develop an AI that isn't also "selfish" and corrupt.

Banking on AI seems completely irrational from every angle to me. Meanwhile, "all of humanity stops being selfish" isn't happening under any system, and capitalism has failed miserably to mitigate that selfishness: a handful of people hold the majority of the world's resources and are leading us straight into a catastrophe that threatens the species (granted, we're stubborn survivors) so they can be a bit wealthier tomorrow than they were today. So the objection "but there's still selfishness" doesn't strike me as damning.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 14:48 GMT
#33876
On July 20 2019 23:45 Nebuchad wrote:
(nested quotes snipped; see the Scientific American excerpt quoted in full above)


I don't understand why you think we need to remove selfishness
Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.
It ignores such insignificant forces as time, entropy, and death
Nebuchad
Profile Blog Joined December 2012
Switzerland12405 Posts
July 20 2019 14:56 GMT
#33877
On July 20 2019 23:48 Gorsameth wrote:
(nested quotes snipped; see the Scientific American excerpt quoted in full above)


I don't understand why you think we need to remove selfishness
Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.
No will to live, no wish to die
GreenHorizons
Profile Blog Joined April 2011
United States23672 Posts
July 20 2019 14:59 GMT
#33878
On July 20 2019 23:56 Nebuchad wrote:
(nested quotes snipped; see the exchange quoted in full above)


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.


His country does still have a king?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 15:01 GMT
#33879
On July 20 2019 23:45 GreenHorizons wrote:
(nested quotes snipped; GreenHorizons' reply is quoted in full above)
My point is that the system you come up with after your revolution is going to suffer from the same problem: selfish people exploiting it for themselves at the cost of everyone else.
Because it's going to be a system run by people, and it's only a matter of time until those people are selfish and greedy ones (and that time is likely to be immediately).

Capitalism has a lot of big flaws and I would love to change it into something better, but what reason do I have to stand with you on the barricades to bring it toppling down when the replacement is going to be the same or worse?
If society is going to roll the dice, I would want more assurance than a shrug that it's going to do something.

And in my opinion the answer to that is Artificial Intelligence: less chance of greed, selfishness, and corruption (if done properly and with an advanced enough AI, which we don't have yet)
It ignores such insignificant forces as time, entropy, and death
Gorsameth
Profile Joined April 2010
Netherlands22122 Posts
July 20 2019 15:06 GMT
#33880
On July 20 2019 23:56 Nebuchad wrote:
(nested quotes snipped; Nebuchad's representative-democracy argument is quoted in full above)
Sure, but it's a risk every time, and often in the past the situation for people was bad enough that they were willing to take that risk because it couldn't get worse.
I don't feel like that is the case now. Things can get a lot worse, the last few decades offer plenty of examples of it, and for most people it's simply not worth taking that risk at the moment.
It ignores such insignificant forces as time, entropy, and death
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.