US Politics Mega-thread - Page 1694

Now that we have a new thread, in order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a complete and thorough read before posting!

NOTE: When providing a source, please provide a very brief summary of what it's about and what it adds to the discussion. The supporting statement should clearly explain why the subject is relevant and needs to be discussed. Please follow this rule especially for tweets.

Your supporting statement should always come BEFORE you provide the source.


If you have any questions, comments, concerns, or feedback regarding the USPMT, please use this thread: http://www.teamliquid.net/forum/website-feedback/510156-us-politics-thread
Slydie
July 20 2019 11:28 GMT
#33861
On July 20 2019 19:53 Uldridge wrote:
I think capitalism will eat itself once the automation/AI revolution is in full swing. Either that or we'll have a harsh dystopian outcome. The reaction of the demographic at risk during that time needs to be the correct one, because only they will have the power to change their fate.


We'll be OK; we have a lot of experience in inventing new jobs when old ones disappear or are moved to low-cost countries.

Capitalism isn't going anywhere. You can try to limit how much power the richest have, but I fail to see how strict regulations can fully replace market mechanisms. The market will always find a way, as all the failed attempts have shown so far.

Another unresolvable problem is the potential corruption of a socialist system, and that it is essentially anti-democratic. You can say capitalism exploits human weaknesses, but strict socialism fails to take them into account and is doomed to fail.

(I am not talking about social democratic solutions which are firmly within regulated capitalism.)
Nebuchad
July 20 2019 11:44 GMT
#33862
On July 20 2019 14:37 CosmicSpiral wrote:
While historical accounts give the impression neoliberalism was a natural progression from capitalism and liberalism, it represented a marked break from both traditions. It was the brainchild of a few public intellectuals who aggressively promoted it through think tanks and adviser positions; its key representatives viewed liberalism and democracy as antinomies, although they didn't proclaim so outside of the MPS; the epistemological justification for ceding judgment to the market would have been utterly alien to Smith or any Enlightenment-influenced philosopher.

It's late over here. I'll fully elaborate on each post tomorrow.


I understand that it's a break from tradition, it's just one that I find utterly logical, and one that I would expect to happen given the principles of capitalism. It's on a really basic level; we give the main share of power to a group of people, and after a while, theories of economics that favor this group of people even more start appearing and gaining prominence (as they are backed by some of the people who have the most influence and the most power). That makes a lot of intuitive sense to me regardless of what Smith thought.

You can find the ancestor of trickle-down economics before the 1929 crisis. It wasn't full neoliberalism, as there was no aspect of globalization, but that makes sense as well given how nations operated then and how they operate now.

Also I would definitely agree that there is a tension between democracy and liberalism.
Gahlo
July 20 2019 11:59 GMT
#33863
On July 20 2019 20:28 Slydie wrote:
Capitalism isn't going anywhere. You can try to limit how much power the richest have, but I fail to see how strict regulations can fully replace market mechanisms. The market will always find a way, as all the failed attempts have shown so far.

The market has already failed many, many people immensely.
JimmiC
Last Edited: 2019-07-20 14:14:22
July 20 2019 13:28 GMT
#33864
--- Nuked ---
Gorsameth
July 20 2019 14:15 GMT
#33865
On July 20 2019 22:28 JimmiC wrote:
So have attempts at communism. And saying the people who have tried are the problem is lazy thinking. There are cultural human issues at play that have consistently caused it to fail the very people it is meant to help. It sounds great to say you are going to change the incentives of society so people strive for different things. This is a super hard thing to do even on a super small scale; how do you do it on a super large scale? What do you do with the people who don't want the change? Or can't change?

The problem is human nature, so it's not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.
GreenHorizons
July 20 2019 14:19 GMT
#33866
On July 20 2019 20:59 Gahlo wrote:
The market has already failed many, many people immensely.

There's an ongoing issue of people conflating capitalism and markets as one and the same, and they aren't. You can have markets without capitalism, and I think some people don't make that distinction in their understandings/arguments.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
JimmiC
July 20 2019 14:23 GMT
#33867
--- Nuked ---
GreenHorizons
Last Edited: 2019-07-20 14:28:46
July 20 2019 14:27 GMT
#33868
On July 20 2019 23:15 Gorsameth wrote:
The problem is human nature, so it's not something we are going to fix long term until we get rid of the problem.
I for one look forward to our AI overlord.


When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.

This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?

To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.

The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample.
Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.


www.scientificamerican.com
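For readers unfamiliar with the "public goods game" mentioned in the excerpt above, here is a minimal illustrative sketch of its payoff structure. This is an editor's example, not code or figures from the study: the function name, endowment, multiplier, and group size are hypothetical values chosen only to show the cooperate-versus-free-ride tension the researchers measured.

```python
# Minimal sketch of a public goods game payoff rule (illustrative only).
# Each player starts with an endowment and chooses how much to contribute
# to a common pool; the pool is multiplied and split equally, so contributing
# helps the group while keeping the endowment helps the individual.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Return each player's payoff: what they kept plus their share of the pool."""
    n = len(contributions)
    pool = sum(contributions) * multiplier
    share = pool / n
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([10, 10, 10, 10]))  # full cooperation -> [16.0, 16.0, 16.0, 16.0]
print(public_goods_payoffs([0, 10, 10, 10]))   # one free-rider   -> [22.0, 12.0, 12.0, 12.0]
```

With these hypothetical numbers, everyone cooperating beats everyone keeping their endowment (16 each versus 10 each), yet the lone free-rider earns the most in their group, which is exactly the individual-versus-group trade-off the quoted study asked participants to resolve either quickly (intuitively) or slowly (reflectively).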
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Mohdoo
July 20 2019 14:30 GMT
#33869
Seizing a UK tanker feels like a profoundly bad idea for Iran. If Europe gets on board with "Fuck Iran", it is lights out. It would be a huge victory for Trump if Europe were to get legitimately pissed at Iran.
Gorsameth
July 20 2019 14:34 GMT
#33870
On July 20 2019 23:27 GreenHorizons wrote:
When we do research looking for the traits people claim to be "human nature" we discover time and again it's mostly learned behavior that can be changed by changing the circumstances/learning.

What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.
Nebuchad
July 20 2019 14:38 GMT
#33871
On July 20 2019 23:34 Gorsameth wrote:
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


I don't understand why you think we need to remove selfishness
Gorsameth
July 20 2019 14:40 GMT
#33872
On July 20 2019 23:38 Nebuchad wrote:
I don't understand why you think we need to remove selfishness

Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.
JimmiC
July 20 2019 14:42 GMT
#33873
--- Nuked ---
Nebuchad
July 20 2019 14:45 GMT
#33874
On July 20 2019 23:40 Gorsameth wrote:
I don't understand why you think we need to remove selfishness
Because it's only a matter of time until those who want to abuse a system get into a position where they can abuse that system for themselves at the cost of others.


But... that's mostly okay. And the few ways that aren't okay can simply be illegal.
GreenHorizons
Last Edited: 2019-07-20 14:46:50
July 20 2019 14:45 GMT
#33875
On July 20 2019 23:34 Gorsameth wrote:
What do you think is more likely?
All of humanity stops being selfish, or we develop an AI advanced enough to govern for us.


Depends on how literally you mean that?

"all of humanity" isn't going to do anything except maybe go extinct, A selfish/corrupt society doesn't develop an AI that isn't also "selfish" and corrupt.

Banking on AI seems completely irrational from every angle to me. "All of humanity stops being selfish" isn't happening under any system, and capitalism has failed miserably to mitigate that selfishness: we have a handful of people with the majority of the world's resources leading us straight into a catastrophe that threatens the species (granted, we're stubborn survivors) so they can be a bit wealthier tomorrow than they were today. So the objection "but there's still selfishness" doesn't make sense to me as damning.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
July 20 2019 14:48 GMT
#33876
On July 20 2019 23:45 Nebuchad wrote:
But... that's mostly okay. And the few ways that aren't okay can simply be illegal.

But that is pretty much where we are now.
And apparently things are not ok.
Nebuchad
July 20 2019 14:56 GMT
#33877
On July 20 2019 23:48 Gorsameth wrote:
But that is pretty much where we are now.
And apparently things are not ok.


I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.
GreenHorizons
July 20 2019 14:59 GMT
#33878
On July 20 2019 23:56 Nebuchad wrote:
I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.


His country does still have a king?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
July 20 2019 15:01 GMT
#33879
On July 20 2019 23:45 GreenHorizons wrote:
Depends on how literally you mean that?

"all of humanity" isn't going to do anything except maybe go extinct, A selfish/corrupt society doesn't develop an AI that isn't also "selfish" and corrupt.

Banking on AI seems completely irrational from every angle to me. "All of humanity stops being selfish" isn't happening under any system, and capitalism has failed miserably to mitigate that selfishness: we have a handful of people with the majority of the world's resources leading us straight into a catastrophe that threatens the species (granted, we're stubborn survivors) so they can be a bit wealthier tomorrow than they were today. So the objection "but there's still selfishness" doesn't make sense to me as damning.

My point is that the system you come up with after your revolution is going to suffer from the same problem: selfish people exploiting it for themselves at the cost of everyone else. Because it's going to be a system run by people, and it's only a matter of time until those people are selfish and greedy ones (and that time is likely going to be immediately).

Capitalism has a lot of big flaws, and I would love to change it into something better, but what reason do I have to stand with you on the barricades to bring it toppling down when the replacement is going to be the same or worse? If society is going to roll the dice, I would want more assurance than a shrug that it's going to do something.

And in my opinion the answer to that is artificial intelligence: less chance of greed, selfishness, and corruption (if done properly and with an advanced enough AI, which we don't have yet).
Gorsameth
July 20 2019 15:06 GMT
#33880
On July 20 2019 23:56 Nebuchad wrote:
I perceive your objection to be similar to saying that in order to move from an authoritarian system of government to a democratic system of government (representative democracy), we need to get rid of power hunger in human nature. Otherwise people who are power hungry will try to game the representative system.

Like... yeah, it's true, they will, and they have. But we can still establish a representative system without eliminating power hunger, and that's still an improvement over an authoritarian system.

Sure, but it's a risk every time, and often in the past the situation for people was bad enough that they were willing to take that risk because it couldn't get worse.
I don't feel like that is the case now. Things can get a lot worse, the last few decades have plenty of examples of it, and for most people it's simply not worth taking that risk at the moment.