The Big Programming Thread - Page 1003

Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20)
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2019-03-13 22:34:48
March 13 2019 22:34 GMT
#20041
Christopher Bishop - Pattern Recognition and Machine Learning

I believe this is regarded as one of the best. It's also hard (at least for me).
Manit0u
Profile Blog Joined August 2004
Poland17450 Posts
March 14 2019 15:15 GMT
#20042
Be Ruby developer.
Get assigned to a Scala project.
Remove some code.
Project now works fine on 2 machines instead of 8 with the same load.
Feel good.
Time is precious. Waste it wisely.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
March 14 2019 16:40 GMT
#20043
Manit0u I just realized I never replied to your post about that graph visualizer.

I have used something like that before actually (not that specifically, that one is for looking at cryptocurrency stuff? I don't know about any of that stuff so if what I just said was dumb then forgive me). Anyways it was very, very interesting to see some of the intuition I had about some of the graphs I was working with come to life in a visual way. However, the really hard graphs were difficult to even tell what was going on when their relationships were inspected visually. They mostly come out looking like repeated intertwined rings of varying lengths.
Manit0u
Profile Blog Joined August 2004
Poland17450 Posts
March 14 2019 18:57 GMT
#20044
On March 15 2019 01:40 travis wrote:
Manit0u I just realized I never replied to your post about that graph visualizer.

I have used something like that before actually (not that specifically, that one is for looking at cryptocurrency stuff? I don't know about any of that stuff so if what I just said was dumb then forgive me). Anyways it was very, very interesting to see some of the intuition I had about some of the graphs I was working with come to life in a visual way. However, the really hard graphs were difficult to even tell what was going on when their relationships were inspected visually. They mostly come out looking like repeated intertwined rings of varying lengths.


Well, those visualizations are actually for function calls and network systems. They could potentially work for anything, since functions within an application and network systems are basically graphs of a sort.
Time is precious. Waste it wisely.
WarSame
Profile Blog Joined February 2010
Canada1950 Posts
March 15 2019 03:19 GMT
#20045
I recently got myself pulled off an old, crappy system program and am now doing some Google Cloud Functions work. I am loving this!
Can it be I stayed away too long? Did you miss these rhymes while I was gone?
Mr. Wiggles
Profile Blog Joined August 2010
Canada5894 Posts
March 15 2019 03:52 GMT
#20046
On March 14 2019 03:34 SC-Shield wrote:
Could you please recommend a few nice books about Machine Learning? If they're about Reinforcement Learning, then it will be even better.

Here's the book I used when I took a course in reinforcement learning from Rich Sutton:

http://incompleteideas.net/book/the-book.html

It's available for free as a PDF. When we did the course he was still working on the second edition, so some stuff was missing, but it looks complete now. It was pretty good from what I remember, and was useful when I was refreshing myself on some concepts recently.

Rich is one of the 'fathers' of reinforcement learning and is currently leading the DeepMind office in Edmonton, so you can consider him a pretty authoritative source on RL.

https://deepmind.com/blog/deepmind-office-canada-edmonton/
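(For a concrete taste of the material, here's a minimal sketch, not taken from the book itself, of the sample-average epsilon-greedy bandit agent that Sutton & Barto open with. The arm means and hyperparameters below are made up for illustration.)

```python
import random

def run_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Sample-average epsilon-greedy agent on a stationary Gaussian bandit."""
    rng = random.Random(seed)
    k = len(true_means)
    q = [0.0] * k  # value estimates, one per arm
    n = [0] * k    # pull counts
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(k)                   # explore
        else:
            a = max(range(k), key=lambda i: q[i])  # exploit current best guess
        r = rng.gauss(true_means[a], 1.0)          # noisy reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]                  # incremental sample average
    return q

# After enough steps the estimates should rank the arms correctly.
q = run_bandit([0.1, 0.8, 0.3])
```

With these settings the agent's estimate for the best arm (index 1) should come out on top, which is essentially the first experiment in the book.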
you gotta dance
Manit0u
Profile Blog Joined August 2004
Poland17450 Posts
March 16 2019 11:54 GMT
#20048
Guys, I need your help. My friend wants to switch jobs and I hooked him up with an entry-level front-end position. For this he'll need to be able to do some basic web app in Angular or React.

Unfortunately I haven't touched front-end for 3 years and I wouldn't know what would be some of the best online resources to learn those technologies. Could you hook the brother up?

Any tips or hints on what to pay attention to and what extra skills (besides Less or Sass) might be required would be greatly appreciated. Should I also teach him a bit about NoSQL stuff like Mongo?
Time is precious. Waste it wisely.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2019-03-17 15:16:51
March 17 2019 15:16 GMT
#20049
neural network question for people with deep learning expertise

So, it's my intuition that what a neural network really does is basically find and weight correlations between (n choose k) inputs within the units of the neural network.

Which makes me wonder: for many networks, is ReLU really a good choice of activation function? Because, if the above is correct, doesn't ReLU only find POSITIVE correlations between inputs? For example, if we have inputs A, B, C, D and are trying to classify a cat, a network using ReLU can find that (A = .5 AND B = .5) is more significant towards our data being a cat than the individual contributions of A = .5 and B = .5 taken separately. However, ReLU should *not* be able to find a negative correlation - that is, maybe A = .5 increases the likelihood of our data being a cat, C = .5 increases the likelihood of our data being a cat, but (A = .5 AND C = .5) makes it LESS likely that our data is a cat.

Am I right that ReLU cannot find that last type of correlation, and that you would need something like leaky ReLU to find it?
Equalizer
Profile Joined April 2010
Canada115 Posts
March 17 2019 17:19 GMT
#20050
It requires more than one layer: passing the output of ReLU through a negative-weight edge allows the second layer to stay inactive when both A and C are active, and to activate when either A or C alone is active.
The person who says it cannot be done, should not interrupt the person doing it.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2019-03-17 17:30:13
March 17 2019 17:29 GMT
#20051
But in the end, it can never value the correlation as negative; at best it can value it at zero, right? So it can't fully capture the relationship in the most accurate way - your network can't truly punish this negative correlation, it can only "not reward" it.


Let's take a network with 26 inputs: A...Z.
If A >= .5 and C >= .5, it is never a cat.
But the rest of the time, if A >= .5 or C >= .5, it is a cat.

It's not realistically ever going to be able to learn this rule exactly, right? Even if some later unit doesn't fire because it solved {A >= .5, C >= .5} -> not a cat, this won't prevent it from thinking it is a cat based on the other inputs.


I mean... I suppose I could see it eventually solving that relationship. But then the number of layers and units would need to be incredibly huge.

Or am I overcomplicating this, and I'm flat out wrong in my conceptualization?
Simberto
Profile Blog Joined July 2010
Germany11641 Posts
March 17 2019 17:39 GMT
#20052
Correlations mathematically range between -1 and 1.

A correlation of -1 between A and B means that if A, then never B (and if B, then never A).
A correlation of 0 means that A doesn't influence whether B or not B.
A correlation of 1 means that if A, then B (and if B, then A).

All other values are in between.

I don't know too much about programming though.
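Those three cases are easy to check numerically. A minimal sketch (the `pearson` helper below is just the textbook formula, not anything from the thread):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a = [1, 2, 3, 4]
print(pearson(a, [2, 4, 6, 8]))    # perfectly aligned -> 1.0
print(pearson(a, [8, 6, 4, 2]))    # perfectly opposed -> -1.0
print(pearson(a, [1, -1, -1, 1]))  # no linear relationship -> 0.0
```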
Acrofales
Profile Joined August 2010
Spain18132 Posts
March 17 2019 18:29 GMT
#20053
On March 18 2019 02:29 travis wrote:
But in the end, it can never value the correlation as negative; at best it can value it at zero, right? So it can't fully capture the relationship in the most accurate way - your network can't truly punish this negative correlation, it can only "not reward" it.


Let's take a network with 26 inputs: A...Z.
If A >= .5 and C >= .5, it is never a cat.
But the rest of the time, if A >= .5 or C >= .5, it is a cat.

It's not realistically ever going to be able to learn this rule exactly, right? Even if some later unit doesn't fire because it solved {A >= .5, C >= .5} -> not a cat, this won't prevent it from thinking it is a cat based on the other inputs.


I mean... I suppose I could see it eventually solving that relationship. But then the number of layers and units would need to be incredibly huge.

Or am I overcomplicating this, and I'm flat out wrong in my conceptualization?

You just need multiple layers, as Equalizer stated. I haven't actually read this blog, but the solution appears to be right, so I assume it's on-point for solving XOR with perceptrons:

https://towardsdatascience.com/perceptrons-logical-functions-and-the-xor-problem-37ca5025790a
zatic
Profile Blog Joined September 2007
Zurich15355 Posts
March 17 2019 23:22 GMT
#20054
I have tried to put this in words, but my attempts at explaining have all been crap, haha. But I am still learning ML myself.

travis, in the end it comes down to what Equalizer said: a negative weight on the edge to the next layer does the job. And it doesn't need a lot of layers and units. Maybe just try it out? You can build your problem (essentially XOR) with 2 layers (an input and an output layer), 2 nodes, and fit it to 1.0 accuracy. If you print out the weights and biases, you will see that one edge is positive and one is negative.

Conceptually, it's not the activation function that does the regression. You can try the above with any activation function, including one that can return negative values, and it will turn out the same.
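To make the "one positive, one negative edge" point concrete, here's a minimal sketch with hand-set weights (an assumed toy layout with two hidden ReLU units, not the exact net described above) that computes XOR exactly:

```python
def relu(x):
    return max(x, 0.0)

def xor_net(a, b):
    """Two ReLU units, then a linear output with one negative weight.

    h1 fires when either input is on; h2 fires only when both are on.
    The -2 on h2 is what lets the net suppress the (A and B) case.
    """
    h1 = relu(a + b)        # input weights (1, 1), bias 0
    h2 = relu(a + b - 1.0)  # input weights (1, 1), bias -1
    return h1 - 2.0 * h2    # output weights (1, -2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # outputs 0, 1, 1, 0 respectively
```

Printing the weights of a trained net should reveal the same pattern: at least one negative edge into the output.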
Moderator | I know Teamliquid is known as a massive building
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2019-03-17 23:48:44
March 17 2019 23:42 GMT
#20055
Well, the problem isn't really equivalent to an XOR, right? (I know that we can make an XOR network.) In the example I gave it is, but that's because I gave a lazy explanation. What I really meant was: A has a positive effect on an outcome of cat, B has a positive effect on an outcome of cat, but (A and B) has a negative effect on an outcome of cat. XOR simplifies this, but as the relationships become more and more complicated, it would seem that that simplification would require a further and further expanded network, because ReLU will just kill every unit that investigates (A and B) heavily, rather than ever actually providing a negative value. If we simply provided a negative output for (A and B) from one of our units, then we could capture that information in the very first hidden layer, rather than having to rely on the summations of previous layers which had a 0 output because they investigated (A and B) heavily.

Anyways, I will just take everyone's word for it that ReLU works efficiently for this. I do know that most papers say the evidence points at there not being much advantage to leaky ReLU, other than addressing the issue of dead units that no longer learn.
zatic
Profile Blog Joined September 2007
Zurich15355 Posts
March 17 2019 23:49 GMT
#20056
It took like 10 years for a consensus to build that relu works better than other activations. To quote a lecture I heard on deep learning some years ago: "We use relu because it works better. Why does it work better? We don't know. If someone tells you that they know, they are lying."
Moderator | I know Teamliquid is known as a massive building
Equalizer
Profile Joined April 2010
Canada115 Posts
March 18 2019 03:17 GMT
#20057
@travis I don't see why what I mentioned doesn't accomplish what you describe. Having a negative weight is equivalent to having a unit that produces a negative output.

Consider the following:
output = w_1 * A + w_2 * B + w_3 * max(A + B - 1, 0) + bias
where A, B in [0, 1]

By setting w_3 to an appropriate negative value, you can apply whatever negative contribution from the joint event of A and B the output needs.

Note: max(A + B - 1, 0) is just ReLU with weights of 1 and a bias of -1.
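Plugging illustrative numbers into that expression (the weight values below are made up; only the formula's shape comes from the post):

```python
def relu(x):
    return max(x, 0.0)

def output(A, B, w1=1.0, w2=1.0, w3=-3.0, bias=0.0):
    """Equalizer's expression: w1*A + w2*B + w3*relu(A + B - 1) + bias."""
    return w1 * A + w2 * B + w3 * relu(A + B - 1.0) + bias

print(output(1, 0))  # only A active -> 1.0
print(output(0, 1))  # only B active -> 1.0
print(output(1, 1))  # both active -> 1 + 1 - 3 = -1.0
```

With w3 negative, the joint (A and B) event drags the output below either single-input case - exactly the "punishment" being asked about.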
The person who says it cannot be done, should not interrupt the person doing it.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
March 18 2019 23:07 GMT
#20058
You're right, I'm dumb, you can. Thank you!
JimmyJRaynor
Profile Blog Joined April 2010
Canada17029 Posts
Last Edited: 2019-03-20 19:24:53
March 20 2019 19:18 GMT
#20060
On February 21 2019 12:21 Manit0u wrote:
Screw math and ML.

to your point...
In general, "Machine Learning" is being oversold by its proponents.

"I apparently have a bit of a reputation as someone who is anti-machine learning or anti-AI when it comes to human research. This is a bit of misrepresentation of my views, and (I'd argue) a misrepresentation of the issues "statistics people" take with AI/ML as a whole."

"I personally think that AI/ML has a lot to bring to the table to enhance science, health and human performance. The problem is that the AI/ML crowd are over-selling their wares and often being disingenuous about what is current state-of-the art"

"Issue 1: CLAIMING EVERYTHING IS MACHINE LEARNING. Just because AI/ML may use algebra or linear regression, doesn't mean it is AI/ML. Same goes for Nonlinear regression, correlation, logistic regression, or everything else that IS STATISTICS (or information theory, etc.) "

"It's cool if you use statistics and statistical concepts properly. Really, we're a big-tent kind of people. Just don't claim you invented something you clearly did not. And no, stringing together multiple correlations in an automated way doesn't make it extra special. "

"Issue 2: OMG THE HYPE MACHINE, MAKE IT STOP. Again, a lot of really good stuff is being trialed with AI/ML. You don't need to oversell the genuinely good work and advances being pioneered. Here's the thing, most "statistics people" are allergic to hype."

"Many human research statisticians work in areas of health where people can die or receive inappropriate treatments if we do our job wrong. It isn't to say we're perfect, but we work hard to be conservative and criticize our models so we're confident in the results."

"This is, I think, the main reason statisticians have issues with the AI/ML crowd: we can smell snake oil. The really good and avant garde AI/ML work gets lumped in with the utter nonsense directed at VC's and the pop media."

"Issue 3: THE SNAKE OIL IS SPREADING (tweet #7) Again, there is good AI/ML work being done, but most of it is just re-branded statistics or 'stuff' hiding behind the term "proprietary". This snake oil is leaking into government, academia, etc in an attempt to be 'cool'"

"We're seeing this salesmanship more-and-more outside of traditional AI/ML technology circles. There are conference presentations or academic papers that call things like principle component analysis AI/ML... it was invented in 1901! https://en.wikipedia.org/wiki/Principal_component_analysis … "

"Finally, many "stats people" are interested in applying AI/ML techniques and seeing where it can compliment our backgrounds and current work. We just get turned off by the bravado, hype and salesmanship that accompanies AI/ML. So yeah, we'll keep giving it a hard time. "

Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"