The Math Thread - Page 2

Nin54545
Profile Joined June 2017
8 Posts
June 09 2017 14:21 GMT
#21
Thank you for the answer, Manit0u. I am not so confident in my ability; I will study equations in a few years...
HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 16:06 GMT
#22
What an odd coincidence: I was actually wondering a few days ago why there was no math thread here on TeamLiquid, and, well, here we are.
Nesserev
Profile Blog Joined January 2011
Belgium, 2760 Posts
June 09 2017 16:18 GMT
#23
--- Nuked ---
Oshuy
Profile Joined September 2011
Netherlands, 529 Posts
June 09 2017 16:27 GMT
#24
On June 09 2017 23:21 Nin54545 wrote:
Thank you for the answer, Manit0u. I am not so confident in my ability; I will study equations in a few years...



Then it is probably too early.

In this specific case, sin(30), cos(60) and log_100(10) are just clever ways of writing 1/2, and most of the sqrt() terms also disappear. Pretty much everything simplifies nicely; it is basically an equation that writes something simple in a complicated way.

The weird ones are:
- 14.661 and 21584, which seem arbitrary. They are probably aimed at producing a given number as the result, but it is not an elegant way to do it.
- D just looks wrong. It is probably 512 and not 5*sqrt(2), but even then it should probably read (512*0.5)² instead of 512*(0.5)² (and then D = 8).
- sqrt(1/2 * sqrt(16)) in C looks strange: it leaves a sqrt(2) in the equation, which is awkward when everything else is just fractions.
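(Editorial aside: those simplifications are easy to check numerically. A minimal Python sketch, assuming the puzzle's angles are in degrees, as is usual for these puzzles:)

```python
import math

# Numeric check of the simplifications mentioned above.
print(math.sin(math.radians(30)))      # ≈ 0.5
print(math.cos(math.radians(60)))      # ≈ 0.5
print(math.log(10, 100))               # log base 100 of 10 = 0.5
print(math.sqrt(0.5 * math.sqrt(16)))  # sqrt(1/2 * 4) = sqrt(2) ≈ 1.414, the awkward leftover
```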
Coooot
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 18:44:08
June 09 2017 18:39 GMT
#25
On June 09 2017 04:13 CecilSunkure wrote:
I've been eager to learn about the Fourier transform. In particular I wanted to use it to do pitch adjustments of sound samples. I have some code here for playing sounds, and want to add some pitch-adjustment stuff.

Would anyone mind chatting with me about the mathematics? I was hoping to find someone knowledgeable about Fourier transforms and their applications that I could bounce a bunch of questions off of. Please PM me if anyone would be so kind!


Doing some biophysical modeling and analyzing the (possible) oscillations in noise, I did write some FT code. It's a bit different from pitch adjustment: I wanted the power spectrum of time-series data. I used the FFTW C library, which seems to be the fastest thing you can get for FTs, unless you use something optimized for a specific architecture/chipset/hardware. It is reasonably straightforward to use, and you can call it from pretty much any language: C, Python, MATLAB, Julia.
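(Editorial illustration of that use case, a sketch only, using numpy's built-in FFT rather than the FFTW bindings mentioned above:)

```python
import numpy as np

# Toy time series: a 5 Hz oscillation buried in noise, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.random.normal(scale=0.5, size=t.size)

# Power spectrum from the real-input FFT; the peak sits near 5 Hz.
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size
print(freqs[1 + np.argmax(power[1:])])  # ≈ 5.0 (skipping the DC bin)
```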

It is very much a black box, and the library is so fast because it divides the problem into chunks and uses a mix of several numerical methods, depending on the nature of the problem and the hardware you are running on. It is completely opaque. But since it is an industry standard, that's OK.

To me the signal-processing side of it all was a bit of a dark art. You need to be an electrical engineer specialized in signal processing to really know how to choose the parameters that most effectively convert time-series data into frequency-series data. Windowing, sampling, spectral leakage, aliasing, frequency resolution, some version of the Heisenberg uncertainty principle saying that an increase in frequency resolution necessarily decreases time resolution, and all kinds of artifacts that might pop up: none of that was easy to understand 'on the fly'. I still remember that 'convolution in the time domain corresponds to multiplication in the frequency domain', but if I had to explain it right now, I'd fail. In the end, I am a chemist by training, working with mathematicians turned biologists. Signal processing using FTs is a big thing in engineering, and scientists mostly just use it as a black box.

The discreteness also doesn't help. The continuous math is 'simple' to understand, as long as you are comfortable with the complex plane, but the implications of discreteness made it all a bit more confusing, especially since I never took a course in discrete math. And I was on a deadline to just get it working, so I didn't have the time to patiently go through a signal-processing textbook and try out simple things step by step.


That said, for what you are doing: if you transform some sound from the time domain to the frequency domain, you can hit it with some function. The frequencies in your signal will then change. When you convert it back to the time domain, it will be a different sound, as it contains different harmonics/overtones. I guess this is how autotune works, in a way.
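(A minimal editorial sketch of that round trip in numpy. Shifting every bin by a constant is far cruder than a real pitch shifter, which would window the signal and use a phase vocoder, but it shows the transform-modify-invert idea:)

```python
import numpy as np

def crude_shift(signal, bin_shift):
    """FFT the signal, move every frequency bin up by bin_shift,
    and inverse-FFT back. Only a demonstration of the round trip."""
    spectrum = np.fft.rfft(signal)
    shifted = np.zeros_like(spectrum)
    if bin_shift > 0:
        shifted[bin_shift:] = spectrum[:-bin_shift]
    else:
        shifted[:] = spectrum
    return np.fft.irfft(shifted, n=signal.size)

fs = 8000
t = np.arange(0, 1, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)         # 440 Hz tone, 1 s at 8 kHz
higher = crude_shift(tone, bin_shift=440)  # 1 Hz per bin here, so ~880 Hz out
```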

For the math, I thought this video was best: [embedded video not preserved in this copy]
In the end it is all about projecting the time data onto the complex plane. That's why it uses the sine and cosine.

As for applications, it is used all over the place. It is probably one of the most commonly used algorithms around: anyone with an electronic device, a phone, an mp3 player, etc., uses it all the time. Sound, spectra, analysis/recording/sampling of data, but also data compression.
As scientists, we usually use it when we record the spectrum of a molecule. Instead of getting how many photons it absorbs at each point in time, we get a fingerprint of which frequencies it absorbs in general. It removes noise, compacts what is happening over a longer period of time, and shows all the info we want in a straightforward manner.
D_lux
Profile Joined March 2009
Hungary, 60 Posts
June 09 2017 19:00 GMT
#26
On June 09 2017 04:13 CecilSunkure wrote:
I've been eager to learn about the Fourier transform. [...]



Check out this guy's youtube channel:
https://www.youtube.com/user/ddorran/playlists

He has some nice playlists explaining the Fourier transform, the discrete Fourier transform, the Z domain, sampling, zero padding, etc., all the things you will need if you are working with sound.

Wish I could help you more, but I have only ever understood Fourier transforms in a fairly superficial way. If you really want to understand them, you will need to go deep into the math, but there are some very good ways to visualize these transforms, which helps with the extremely abstract math.
there is no spoon
CecilSunkure
Profile Blog Joined May 2010
United States, 2829 Posts
June 09 2017 19:16 GMT
#27
Thanks for the links guys! I'll check em out today
mozoku
Profile Joined September 2012
United States, 708 Posts
June 09 2017 20:50 GMT
#28
On June 09 2017 09:22 Poopi wrote:
@JimmyJRaynor: if your university needs more than 3/35 people to pass the year (and it very likely does), your course was very badly designed imho :o.

I have a question, but I'm not sure whether the answer is trivial or whether we don't have the answer yet; it's related to probability, although with some CS in it, so I'll ask it here!

Say we build a Bayesian model that estimates the odds of a real-life event A happening at 95%.
But the event happens in real life only once (the result of an election, for example).
Say A happens as "predicted". So what? Was our estimate accurate? Maybe the event actually had an 80% chance of happening, but it still happened because it's still a likely event. And it didn't necessarily happen by random chance, especially if the event we are trying to predict is an election and not some randomly generated thing.
So how can we judge whether our model was well suited?
I guess if we just want our model to predict what will happen while minimizing loss and so on, like we often do, then there is no problem.
But I feel like there is an inherent philosophical/epistemological problem with Bayesian models :/.
edit: would being able to reproduce the event enough times for the results to be statistically significant allow us to correctly evaluate whether our estimate was right? But it still wouldn't be an absolutely precise estimate, and is it even possible to have such a thing?

(think of FiveThirtyEight and the like, for context)


edit 2:
another "application" of this question would be smartphone weather predictions!
They probably use some kind of Bayesian model for that, and they will tell you:
"there is a 30% chance that it'll rain at this hour." How are we supposed to use this information?
Assuming their estimate is roughly correct, a wise choice would be to take an umbrella if doing so has positive mathematical expectation, since we have the probability of the event... but how can I quantitatively assess how much of a pain not having an umbrella would be?
Like, I can say: "I would feel neutral if I have an umbrella and it is raining, so I assign 0 value to having an umbrella."
But to get a rough idea of how painful it'll be not to have an umbrella if it does rain... I would need to know how much and for how long it would rain! And if they can't tell us that, I can't really put their intel about the weather to good use.
It won't ever be the wisest choice :/.


For your first question, it doesn't matter whether the model is Bayesian or not. Bayesian statistics uses Bayes' theorem to come up with a posterior for the model parameters, but the point estimate from the posterior that you're using as your prediction has the same interpretation as a prediction from a non-Bayesian model. If you want to quantify model "accuracy" (using the term loosely here), a Bayesian model is evaluated with the same metrics as non-Bayesian models (with the exception of metrics that require a posterior).

Of course, it's difficult to evaluate the quality of a probability-prediction model with a single test point. However, with good modeling, good priors, and a number of test points large enough to make evaluation sensible (but not so large that the value of the prior information becomes negligible), Bayesian models will usually outperform most non-Bayesian models. (Disclaimer: I'm making a lot of assumptions here, but trying to speak generally enough to be useful and carefully enough to stay accurate.)

I don't see why you think this is a philosophical problem with Bayesian inference. Bayesian inference isn't really advertised as something that allows you to evaluate models with fewer test points. It's usually advertised as something that allows you to incorporate prior information to build better models when there's little data available; it has nicer interpretations of uncertainty measures, relies somewhat less on parametric assumptions than frequentist statistics, and gives you a full posterior as opposed to point estimates.
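(Editorial aside: the standard metrics for probability predictions include the Brier score and log loss, which apply to Bayesian and non-Bayesian models alike. A minimal sketch:)

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between predicted probabilities p and binary
    outcomes y; lower is better, and a constant 0.5 forecast scores 0.25."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p - y) ** 2))

def log_loss(p, y, eps=1e-12):
    """Negative mean log-likelihood of the outcomes under the forecasts."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    y = np.asarray(y, float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Both metrics need a collection of forecasts; with a single event
# (one election) neither says much, which is exactly the point above.
print(brier_score([0.95, 0.7, 0.3], [1, 1, 0]))  # ≈ 0.061
print(log_loss([0.95, 0.7, 0.3], [1, 1, 0]))     # ≈ 0.255
```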
Nin54545
Profile Joined June 2017
8 Posts
June 09 2017 21:17 GMT
#29
On June 10 2017 01:27 Oshuy wrote:
Then it is probably too early. [...]


ty ))))
Poopi
Profile Blog Joined November 2010
France, 12838 Posts
June 09 2017 22:11 GMT
#30
On June 10 2017 05:50 mozoku wrote:
[...] Bayesian inference isn't really advertised as something that allows you to evaluate models with fewer test points. [...]

But my question is: can you know the real probability of the event?
WriterMaru
Lebesgue
Profile Joined October 2008
4542 Posts
June 09 2017 22:21 GMT
#31
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?


With a finite amount of data you will never be able to learn the "real" probability of an event. What you obtain using statistical methods is always an estimate. Scientific articles that use statistical analysis will always report point estimates together with standard deviations, confidence intervals, or posterior belief distributions to convey how "precise" the reported point estimate is.
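(Editorial sketch of the same idea in miniature: estimating a coin's bias from n flips, reporting the point estimate together with a normal-approximation 95% confidence interval, one of several common interval choices:)

```python
import math

def estimate_bias(heads, n, z=1.96):
    """Point estimate and ~95% normal-approximation confidence interval
    for a coin's probability of heads."""
    p_hat = heads / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, (p_hat - half, p_hat + half)

# The interval shrinks like 1/sqrt(n): more data gives more precision,
# but never exact knowledge of the "real" probability.
print(estimate_bias(53, 100))      # 0.53 ± ~0.10
print(estimate_bias(5300, 10000))  # 0.53 ± ~0.01
```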

HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 22:27 GMT
#32
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?

Suppose we knew everything about how everything works; then yes, in that case we would be able to know the real probability (which would be either 0 or 1).

But at the end of the day, we know very little about how everything works, and we may not even truly realize how little we know.

Trying to predict something like an election or the weather comes down to simplified models of the factors we think influence the outcome; the models can then be reevaluated, and how the factors are weighted can be adjusted, etc.
fishjie
Profile Blog Joined September 2010
United States, 1519 Posts
June 09 2017 22:43 GMT
#33
Yeah, probability is a way to estimate the likelihood of something happening without complete information. If we had precise knowledge of how you flip the coin (initial starting position, angle of the flick, force of the flick, exact dimensions of the coin and its weight, wind conditions, and who knows how many other parameters), the probability of heads would not be 1/2. You'd be able to build a physical model to know the exact answer. It'd be 0 or 1. At that point it's purely deterministic.

Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 22:46:57
June 09 2017 22:45 GMT
#34
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?


In science (i.e., not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything with 100% certainty.
For example, when I throw a die, we cannot know that there is zero chance of the die shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?

In the same way, if a die truly gives 50/50 for odd vs. even, you only converge to exactly 50/50 in the limit of infinitely many throws. In fact, it is completely impossible to get exactly 50/50 if you throw a die an odd number of times. The reason we know a die is 50/50 for odd vs. even is that we know exactly how many sides it has; the assumption is then that the die is perfectly fair, which probably isn't the case. If you are really bored, you can take a bunch of dice or coins, throw/flip each of them an absurd number of times, and then calculate how likely the outcome you got is under the assumption that the die/coin is fair.

In principle, any imperfection or flaw means the die/coin isn't perfectly symmetrical, and thus it can in principle be biased. So in the case of an unfair die/coin, there is no way to know with 100% accuracy what the exact probabilities are. The law of large numbers can get you far enough, far enough for any real-world application. (If you really need a lot of throws, the die/coin may actually wear, and its fairness may change as a function of the number of throws, adding another layer of complexity.) So it is mainly a philosophical debate.
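(Editorial sketch of that "how likely is this outcome under the fairness assumption" calculation, as a chi-square goodness-of-fit test; assumes numpy and scipy are available:)

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# Throw a simulated die 60,000 times and count each face.
throws = rng.integers(1, 7, size=60_000)
counts = np.bincount(throws)[1:]

# Under the fair-die hypothesis each face is expected 10,000 times;
# a very small p-value would be evidence against fairness.
stat, p = chisquare(counts)
print(counts, round(p, 3))
```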

That said, you could probably control the outcome of the die/coin 100% by 'deciding' how you throw it. But I don't know of any magicians who have enough skill to throw a die in a way that gets them the outcome they want.
hypercube
Profile Joined April 2010
Hungary, 2735 Posts
June 09 2017 22:53 GMT
#35
On June 10 2017 07:43 fishjie wrote:
Yeah, probability is a way to estimate the likelihood of something happening without complete information. [...]



And even then it would be an exact answer only under the assumption that the physical model is completely accurate.
"Sending people in rockets to other planets is a waste of money better spent on sending rockets into people on this planet."
fishjie
Profile Blog Joined September 2010
United States, 1519 Posts
Last Edited: 2017-06-09 23:00:44
June 09 2017 22:56 GMT
#36
On June 10 2017 07:45 Ernaine wrote:
In science (i.e., not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything with 100% certainty.

For example, when I throw a die, we cannot know that there is zero chance of the die shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?


Ah, fair point. Related reading on that:
https://en.wikipedia.org/wiki/Sunrise_problem
HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 23:03 GMT
#37
Even if a magician appeared before us and rolled a 1 a billion times in a row, that wouldn't confirm anything other than that rolling a 1 is possible. Now, assuming a uniform distribution (equal likelihood of each outcome, i.e. a 1/6 chance of rolling a 1), what the magician did occurs extremely rarely (with probability one sixth to the billionth power), so for real-world and everyday purposes we would argue that the magician seems able to manipulate the die so that the distribution is not uniform.
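(For scale: (1/6) to the billionth power underflows ordinary floating point, so the number has to be handled in logarithms. A two-line sketch:)

```python
import math

n = 1_000_000_000
# (1/6)**n underflows double precision to 0.0, so work in log space:
print(n * math.log10(1 / 6))  # ≈ -7.78e8, i.e. probability ~10**(-778,151,250)
```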
Poopi
Profile Blog Joined November 2010
France, 12838 Posts
Last Edited: 2017-06-09 23:23:17
June 09 2017 23:20 GMT
#38
On June 10 2017 07:45 Ernaine wrote:
[...] if a die truly gives 50/50 for odd vs. even, you only converge to exactly 50/50 in the limit of infinitely many throws. [...]

You don't only get exactly 50/50 in the limit: you have a 50% chance of a 50/50 split if you throw it just 2 times.
And I know the definition of probability in terms of infinite repetition and such; that's not really my question.
About the die example: it has less chance of giving a 6 than a 1, afaik, because there are fewer holes in the 1 face, but again, that's not my question :/.

And I'm not talking about dice at all, because dice are pretty much random.

What I'm talking about is the probability of real-life events that are a priori not random: are we still stuck on determinism issues?
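(Editorial aside: the 2-throw figure generalizes. The chance of an exact 50/50 split in n fair flips is C(n, n/2)/2^n, which shrinks toward 0 even though the proportion of heads converges to 1/2; a quick check:)

```python
from math import comb

def p_exact_half(n):
    """Probability of exactly n/2 heads in n fair coin flips."""
    return comb(n, n // 2) / 2 ** n if n % 2 == 0 else 0.0

for n in (2, 10, 100, 1000):
    print(n, p_exact_half(n))  # 0.5, ~0.246, ~0.0796, ~0.0252
```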
WriterMaru
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2017-06-10 03:18:51
June 09 2017 23:22 GMT
#39
edit: I solved my problem
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 23:27:48
June 09 2017 23:24 GMT
#40
Well, I don't agree. Yes, dice may not be fair; no real physical die can be perfectly fair.

But if we flip only 2 times and get both results once, the hypothesis that the true nature of the coin is 10% heads and 90% tails is still somewhat in agreement with the data. And that is quite a bit different from 50/50.

Yes, after 2 trials we can get exactly 50/50, and we can get similar results at 4 trials and any other even number of trials, but that doesn't tell us with high probability that the coin is in fact a completely fair 50/50 coin.

So yes, we need to go to infinity to get exactly 50/50. You can try it with a computer. (Yes, it will use pseudo-random numbers, so it is still a bit iffy, just like the idea of a completely fair die.)