US Politics Mega-thread - Page 1694

Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please provide a very brief summary of what it's about and what purpose it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, then please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
Slydie
Profile Joined August 2013
1935 Posts
July 20 2019 11:28 GMT
#33861
On July 20 2019 19:53 Uldridge wrote:
I think capitalism will eat itself once the automation/AI revolution is in full swing. Either that or we'll have a harsh dystopian outcome. The reaction of the demographic at risk during that time needs to be the correct one, because only they will have the power to change their fate.


We'll be okay; we have a lot of experience in inventing new jobs when old ones disappear or are moved to low-cost countries.

Capitalism isn't going anywhere. You can try to limit how much power the richest have, but I fail to see how strict regulations could fully replace market mechanisms. The market will always find a way, as all the failed attempts so far have shown.

Another unresolvable problem is the potential corruption of a socialist system, and the fact that it is essentially anti-democratic. You can say capitalism exploits human weaknesses, but strict socialism fails to take them into account and is doomed to fail.

(I am not talking about social democratic solutions, which are firmly within regulated capitalism.)
Buff the siegetank
Nebuchad
Profile Blog Joined December 2012
Switzerland12450 Posts
July 20 2019 11:44 GMT
#33862
On July 20 2019 14:37 CosmicSpiral wrote:
While historical accounts give the impression neoliberalism was a natural progression from capitalism and liberalism, it represented a marked break from both traditions. It was the brainchild of a few public intellectuals who aggressively promoted it through think tanks and adviser positions; its key representatives viewed liberalism and democracy as antinomies, although they didn't proclaim so outside of the MPS; the epistemological justification for ceding judgment to the market would have been utterly alien to Smith or any Enlightenment-influenced philosopher.

It's late over here. I'll fully elaborate on each post tomorrow.


I understand that it's a break from tradition; it's just one that I find utterly logical, and one that I would expect to happen given the principles of capitalism. It works on a really basic level: we give the main share of power to a group of people, and after a while theories of economics that favor this group even more start appearing and gaining prominence (as they are backed by some of the people who have the most influence and the most power). That makes a lot of intuitive sense to me regardless of what Smith thought.

You can find the ancestor of trickle-down economics before the 1929 crisis. It wasn't full neoliberalism, as there was no aspect of globalization, but that makes sense as well given how nations operated then and how they operate now.

Also I would definitely agree that there is a tension between democracy and liberalism.
No will to live, no wish to die
Gahlo
Profile Joined February 2010
United States35172 Posts
July 20 2019 11:59 GMT
#33863
On July 20 2019 20:28 Slydie wrote:
Show nested quote +

The market has already failed many, many people immensely.
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
Last Edited: 2019-07-20 14:14:22
July 20 2019 13:28 GMT
#33864
--- Nuked ---
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 14:15 GMT
#33865
On July 20 2019 22:28 JimmiC wrote:
Show nested quote +

So have attempts at communism. And saying that the people who have tried it are the problem is lazy thinking. There are cultural, human issues at play that have consistently caused it to fail the very people it is meant to help. It sounds great to say you are going to change the incentives of society so people strive for different things, but this is a super hard thing to do even on a super small scale, so how do you do it on a super large scale? What do you do with the people who don't want the change? Or can't change?
The problem is human nature, so it's not something we are going to fix long-term until we get rid of the problem.
I for one look forward to our AI overlord.
It ignores such insignificant forces as time, entropy, and death
GreenHorizons
Profile Blog Joined April 2011
United States23914 Posts
July 20 2019 14:19 GMT
#33866
On July 20 2019 20:59 Gahlo wrote:
Show nested quote +

The market has already failed many, many people immensely.

There's an ongoing issue of people conflating capitalism and markets as one and the same, and they aren't. You can have markets without capitalism, and I think some people don't make that distinction in their understandings/arguments.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
July 20 2019 14:23 GMT
#33867
--- Nuked ---
GreenHorizons
Profile Blog Joined April 2011
United States23914 Posts
Last Edited: 2019-07-20 14:28:46
July 20 2019 14:27 GMT
#33868
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature, so it's not something we are going to fix long-term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature", we discover time and again that it's mostly learned behavior that can be changed by changing the circumstances/learning.

A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.

+ Show Spoiler +
This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?

To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.

The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample.
Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.


www.scientificamerican.com
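For readers unfamiliar with the paradigm, the "public goods game" the quoted article mentions has a very simple payoff structure, which can be sketched in a few lines. This is a toy illustration only: the endowment, multiplier, and function name below are made up for the sketch and are not the parameters used in the actual studies.

```python
# Toy public goods game: each player either keeps an endowment or
# contributes some of it to a shared pot; the pot is multiplied and
# split evenly among all players, contributors and non-contributors alike.
# (Illustrative numbers only -- not taken from the study.)

def public_goods_payoffs(contributions, endowment=10, multiplier=2.0):
    """Return each player's payoff: what they kept plus their share of the pot."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    return [endowment - c + share for c in contributions]

# Everyone cooperating beats everyone defecting for the group as a whole...
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
# ...but a lone defector among cooperators does best individually.
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

The tension the study measures is visible in the numbers: cooperating maximizes the group's total payoff, while defecting maximizes the individual's, so whichever choice people make under time pressure reveals their "first instinct."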
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Mohdoo
Profile Joined August 2007
United States15743 Posts
July 20 2019 14:30 GMT
#33869
Seizing a UK tanker feels like a profoundly bad idea for Iran. If Europe gets on board with "Fuck Iran", it is lights out. It would be a huge victory for Trump if Europe were to get legitimately pissed at Iran.
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 14:34 GMT
#33870
On July 20 2019 23:27 GreenHorizons wrote:
Show nested quote +
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.
It ignores such insignificant forces as time, entropy, and death
Nebuchad
Profile Blog Joined December 2012
Switzerland12450 Posts
July 20 2019 14:38 GMT
#33871
On July 20 2019 23:34 Gorsameth wrote:
Show nested quote +
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
No will to live, no wish to die
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 14:40 GMT
#33872
On July 20 2019 23:38 Nebuchad wrote:
Show nested quote +
I don't understand why you think we need to remove selfishness
Because it's only a matter of time until those who want to abuse a system get into a position where they can exploit it for themselves at the cost of others.
It ignores such insignificant forces as time, entropy, and death
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
July 20 2019 14:42 GMT
#33873
--- Nuked ---
Nebuchad
Profile Blog Joined December 2012
Switzerland12450 Posts
July 20 2019 14:45 GMT
#33874
On July 20 2019 23:40 Gorsameth wrote:
Show nested quote +
Because it's only a matter of time until those who want to abuse a system get into a position where they can exploit it for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be made illegal.
No will to live, no wish to die
GreenHorizons
Profile Blog Joined April 2011
United States23914 Posts
Last Edited: 2019-07-20 14:46:50
July 20 2019 14:45 GMT
#33875
On July 20 2019 23:34 Gorsameth wrote:
Show nested quote +
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.

This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?

To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.

The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample.
Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.


www.scientificamerican.com
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


Depends on how literally you mean that?

"all of humanity" isn't going to do anything except maybe go extinct. A selfish/corrupt society doesn't develop an AI that isn't also "selfish" and corrupt.

Banking on AI seems completely irrational from every angle to me. Meanwhile, "all of humanity stops being selfish" isn't happening under any system, and capitalism has failed miserably to mitigate that selfishness: a handful of people hold the majority of the world's resources and are leading us straight into a catastrophe that threatens the species (granted, we're stubborn survivors) just so they can be a bit wealthier tomorrow than they were today. So the objection "but there's still selfishness" doesn't strike me as damning.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 14:48 GMT
#33876
On July 20 2019 23:45 Nebuchad wrote:
On July 20 2019 23:40 Gorsameth wrote:
On July 20 2019 23:38 Nebuchad wrote:
On July 20 2019 23:34 Gorsameth wrote:
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

[Scientific American excerpt snipped; quoted in full earlier in the thread]
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
Because its only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.
It ignores such insignificant forces as time, entropy, and death
Nebuchad
Profile Blog Joined December 2012
Switzerland12450 Posts
July 20 2019 14:56 GMT
#33877
On July 20 2019 23:48 Gorsameth wrote:
On July 20 2019 23:45 Nebuchad wrote:
On July 20 2019 23:40 Gorsameth wrote:
On July 20 2019 23:38 Nebuchad wrote:
On July 20 2019 23:34 Gorsameth wrote:
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

[Scientific American excerpt snipped; quoted in full earlier in the thread]
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
Because its only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.
No will to live, no wish to die
GreenHorizons
Profile Blog Joined April 2011
United States23914 Posts
July 20 2019 14:59 GMT
#33878
On July 20 2019 23:56 Nebuchad wrote:
On July 20 2019 23:48 Gorsameth wrote:
On July 20 2019 23:45 Nebuchad wrote:
On July 20 2019 23:40 Gorsameth wrote:
On July 20 2019 23:38 Nebuchad wrote:
On July 20 2019 23:34 Gorsameth wrote:
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

[Scientific American excerpt snipped; quoted in full earlier in the thread]
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
Because its only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.


His country does still have a king?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 15:01 GMT
#33879
On July 20 2019 23:45 GreenHorizons wrote:
On July 20 2019 23:34 Gorsameth wrote:
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

[Scientific American excerpt snipped; quoted in full earlier in the thread]
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


Depends on how literally you mean that?

"all of humanity" isn't going to do anything except maybe go extinct, A selfish/corrupt society doesn't develop an AI that isn't also "selfish" and corrupt.

Banking on AI seems completely irrational from every angle to me, whereas "all of humanity stops being selfish" isn't happening under any system and capitalism has failed miserably to mitigate that selfishness as we have a handful of people with the majority of the worlds resources leading us straight into catastrophe that threatens the species (granted we're stubborn survivors) so they can be a bit wealthier tomorrow than they were today. So the objection "but there's still selfishness" doesn't make sense to me as damning.
My point is that the system you come up with after your revolution is going to suffer from the same problem: selfish people exploiting it for themselves at the cost of everyone else. Because it's going to be a system run by people, and it's only a matter of time until those people are selfish and greedy ones (and that time is likely going to be immediately).

Capitalism has a lot of big flaws and I would love to change it into something better, but what reason do I have to stand with you on the barricades to bring it toppling down when the replacement is going to be the same or worse?
If society is going to roll the dice, I would want more assurance than a shrug that it's going to do something.

And in my opinion the answer to that is Artificial Intelligence: less chance of greed, selfishness, and corruption (if done properly and with an advanced enough AI, which we don't have yet).
It ignores such insignificant forces as time, entropy, and death
Gorsameth
Profile Joined April 2010
Netherlands22298 Posts
July 20 2019 15:06 GMT
#33880
On July 20 2019 23:56 Nebuchad wrote:
On July 20 2019 23:48 Gorsameth wrote:
On July 20 2019 23:45 Nebuchad wrote:
On July 20 2019 23:40 Gorsameth wrote:
On July 20 2019 23:38 Nebuchad wrote:
On July 20 2019 23:34 Gorsameth wrote:
On July 20 2019 23:27 GreenHorizons wrote:
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature so its not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

[Scientific American excerpt snipped; quoted in full earlier in the thread]
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
Because its only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
But that is pretty much where we are now.
And apparently things are not ok.


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.
Sure, but it's a risk every time, and often in the past the situation for people was bad enough that they were willing to take that risk because it couldn't get worse.
I don't feel like that is the case now. Things can get a lot worse, the last few decades have plenty of examples of it, and for most people it's simply not worth taking that risk at the moment.
It ignores such insignificant forces as time, entropy, and death