The Math Thread - Page 2

Nin54545
Profile Joined June 2017
8 Posts
June 09 2017 14:21 GMT
#21
Thank you for the answer, Manit0u. I am not so confident in my ability; I will study equations in a few years...
HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 16:06 GMT
#22
What an odd coincidence - I was actually wondering a few days ago why there was no math thread here on TeamLiquid - and, well, here we are.
Nesserev
Profile Blog Joined January 2011
Belgium2760 Posts
June 09 2017 16:18 GMT
#23
--- Nuked ---
Oshuy
Profile Joined September 2011
Netherlands529 Posts
June 09 2017 16:27 GMT
#24
On June 09 2017 23:21 Nin54545 wrote:
Thank you for the answer, Manit0u. I am not so confident in my ability; I will study equations in a few years...



Then it is probably too early

In this specific case, sin(30°), cos(60°) and log_100(10) are just clever ways of writing 1/2, and most of the sqrt() terms also disappear. Pretty much everything simplifies nicely; it looks basically like an equation built to write something simple in a complicated way.

The weird ones are:
- 14.661 and 21584, which seem arbitrary. They are probably aimed at producing a given number as the result, but that is not an elegant way to do it.
- D just looks wrong. It is probably 512 and not 5*sqrt(2), but even then it should probably read (512*0.5)² instead of 512*(0.5)² (and then D = 8).
- sqrt(1/2 * sqrt(16)) in C looks strange; it leaves a sqrt(2) in the equation, which is awkward when everything else is just fractions.
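For anyone who wants to sanity-check those identities numerically, a quick Python one-off (my own illustration, not part of the original equation):

import math

print(math.sin(math.radians(30)))   # 0.4999999... i.e. 1/2 up to float rounding
print(math.cos(math.radians(60)))   # 0.5000000...
print(math.log(10, 100))            # 0.5, since 100 ** 0.5 == 10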
Coooot
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 18:44:08
June 09 2017 18:39 GMT
#25
On June 09 2017 04:13 CecilSunkure wrote:
I've been eager to learn about the Fourier transform. In particular I wanted to use it to do pitch adjustments of sound samples. I have some code here for playing sounds, and want to add some pitch-adjustment stuff.

Would anyone mind chatting with me about the mathematics? I was hoping to find someone knowledgeable about Fourier transforms and their applications that I could bounce a bunch of questions off of. Please PM me if anyone would be so kind!


Doing some biophysical modeling and analyzing (possible) oscillations in noise, I did write some FT code. It's a bit different from pitch adjustment: I wanted a power spectrum of time-series data. I used the FFTW C library, which seems to be the fastest thing you can get for FTs, unless you use something optimized for a specific architecture/chipset/hardware. It is reasonably straightforward to use, and you can call it from almost any language: C, Python, MATLAB, Julia.

It is very much a black box, and the library is so fast because it divides the problem into chunks and uses a mix of several numerical methods, depending on the nature of the problem and the hardware you are running on. It is completely opaque, but since it is an industry standard, that's OK.

To me the signal-processing side of it all was a bit of a dark art. You need to be an electrical engineer specialized in signal processing to really know how to choose the parameters that most effectively convert time-series data into frequency-series data. Windowing, sampling, spectral leakage, aliasing, frequency resolution, a version of the Heisenberg uncertainty principle saying that an increase in frequency resolution necessarily decreases time resolution, and all kinds of artifacts that might pop up: none of that was easy to understand 'on the fly'. I still remember that 'convolution in the time domain corresponds to multiplication in the frequency domain', but if I had to explain it right now, I'd fail. In the end, I am a chemist by training, working with mathematicians turned biologists, and signal processing with FTs is a big thing in engineering; scientists mostly just use it as a black box.

The discreteness also doesn't help: the continuous math is 'simple' to understand, as long as you are comfortable with the complex plane, but the implications of discreteness made it all a bit more confusing, especially since I never took a course in discrete math. And I was on a deadline to just get it working, so I didn't have the time to patiently go through a signal-processing textbook and try out simple things step by step.


That said, for what you are doing: if you transform some sound from the time domain to the frequency domain, you can hit it with some function. The frequencies in your signal will then change. When you convert it back to the time domain, it will be a different sound, as it contains different harmonics/overtones. I guess this is how autotune works, in a way.

For the math, I thought this video was best:


In the end it is all about projecting the time-domain data onto the complex plane; that's why it uses sines and cosines.

As for applications, it is used all over the place; it is probably one of the most commonly used algorithms around. Every electronic device, phone, mp3 player, etc. uses it all the time: sound, spectra, analysis/recording/sampling of data, but also data compression.
As scientists, we usually use it when we record the spectrum of a molecule. Instead of getting how many photons it absorbs at each point in time, we get a fingerprint of which frequencies it absorbs in general. It removes noise, compacts what happens over a longer period of time, and shows all the information we want in a straightforward manner.
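To make the 'transform, hit it with a function, transform back' idea concrete, here is a minimal Python/numpy sketch (my own toy illustration, not from the post; real pitch shifting would use an STFT/phase vocoder, and this naive bin-scaling will produce audible artifacts):

import numpy as np

def shift_pitch_naive(samples, factor):
    # FFT to the frequency domain, move each bin's energy to bin k*factor,
    # then inverse FFT back to the time domain. No windowing and no phase
    # handling; the point is only the transform -> modify -> inverse pattern.
    spectrum = np.fft.rfft(samples)
    shifted = np.zeros_like(spectrum)
    for k in range(len(spectrum)):
        j = int(k * factor)
        if j < len(shifted):
            shifted[j] += spectrum[k]
    return np.fft.irfft(shifted, n=len(samples))

# A 1-second 440 Hz tone comes back out as (roughly) a 660 Hz tone.
rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
higher = shift_pitch_naive(tone, 1.5)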
D_lux
Profile Joined March 2009
Hungary60 Posts
June 09 2017 19:00 GMT
#26
On June 09 2017 04:13 CecilSunkure wrote:
[...]



Check out this guy's youtube channel:
https://www.youtube.com/user/ddorran/playlists

He has some nice playlists explaining stuff about the Fourier transform, the discrete Fourier transform, the Z domain, sampling, zero padding, etc. - all the things you will need if you are working with sound.

Wish I could help you more, but I have always understood Fourier transforms in a very superficial way. If you really want to understand them you will need to go deep into the math, but there are some very good ways to visualize these transforms, which helps with the extremely abstract math.
there is no spoon
CecilSunkure
Profile Blog Joined May 2010
United States2829 Posts
June 09 2017 19:16 GMT
#27
Thanks for the links, guys! I'll check them out today.
mozoku
Profile Joined September 2012
United States708 Posts
June 09 2017 20:50 GMT
#28
On June 09 2017 09:22 Poopi wrote:
@JimmyJRaynor: if your university needs more than 3/35 people to pass the year (and it very likely does), your course was very badly designed imho :o.

I have a question, but I'm not sure whether the answer is trivial or whether we don't have an answer yet; it's related to probability, although with some CS in it, so I'll ask it here!

Say we build a Bayesian model that estimates the odds of a real-life event A happening at 95%.
But the event happens in real life only once (the result of an election, for example).
Say A happens as "predicted". So what? Was our estimate accurate? Maybe A actually had an 80% chance of happening, but it still happened because it was still a likely event. And it didn't necessarily happen by random chance, especially if the event we are trying to predict is an election and not some randomly generated thing.
So how can we judge whether our model was well suited?
I guess if we just want our model to predict what will happen while minimizing loss and so on, like we often do, then there is no problem.
But I feel like there is an inherent philosophical/epistemological problem with Bayesian models :/.
edit: would being able to reproduce the event enough times for the results to be statistically significant allow us to correctly evaluate whether our estimate was right? But it still wouldn't be an absolutely precise estimate, and is it even possible to have such a thing?

(think about FiveThirtyEight and the like for context)


edit 2:
another "application" of this question would be smartphone weather predictions!
They probably use some kind of Bayesian model for that, and they will tell you:
"there is a 30% chance that it'll rain at this hour." How are we supposed to use this information?
Assuming their estimate is roughly correct, a wise choice would be to take an umbrella if doing so has a positive mathematical expectation, since we have the probability of the event... but how can I quantitatively assess how much of a pain not having an umbrella would be?
I can say: "I would feel neutral if I have an umbrella and it is raining, so I assign a value of 0 to having an umbrella."
But to have a rough idea of how painful it would be not to have an umbrella if it happens to rain, I would need to know how much and for how long it would rain! And if they can't tell us that, I can't really put their intel about the weather to good use.
It won't ever be the wisest choice :/.


For your first question, it doesn't matter whether the model is Bayesian or not. Bayesian statistics uses Bayes' theorem to come up with a posterior for the model parameters, but the point estimate you take from that posterior as your prediction has the same interpretation as a non-Bayesian model's prediction. If you want to quantify model "accuracy" (using the term loosely here), a Bayesian model is evaluated with the same metrics as non-Bayesian models (with the exception of metrics that require a posterior).

Of course, it's difficult to evaluate the quality of a probability-prediction model with a single test point. However, with good modeling, good priors, and a number of test points large enough to make evaluation sensible (but not so large that the value of the prior information becomes negligible), Bayesian models will usually outperform most non-Bayesian models. (Disclaimer: I'm making a lot of assumptions here, but trying to speak generally enough to be useful and carefully enough to stay accurate.)

I don't see why you think this is a philosophical problem with Bayesian inference. Bayesian inference isn't really advertised as something that allows you to evaluate models with fewer test points. It's usually advertised as something that lets you incorporate prior information to build better models when there's little data available, has nicer interpretations of uncertainty measures, relies somewhat less on parametric assumptions than frequentist statistics, and gives you a full posterior as opposed to point estimates.
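To illustrate the 'same metrics either way' point, a small Python sketch with invented numbers (my own example, not mozoku's): scoring a batch of probability predictions against observed outcomes works identically whether the probabilities came from a posterior or from anything else.

import numpy as np

# Hypothetical predicted probabilities for five one-off events, plus the
# observed outcomes (1 = the event happened). All numbers are made up.
p = np.array([0.95, 0.30, 0.70, 0.85, 0.10])
y = np.array([1, 0, 1, 0, 0])

brier = np.mean((p - y) ** 2)          # 0 is perfect; 0.25 is coin-flip level
eps = 1e-12                            # guard against log(0)
log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

print(f"Brier score {brier:.3f}, log loss {log_loss:.3f}")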
Nin54545
Profile Joined June 2017
8 Posts
June 09 2017 21:17 GMT
#29
On June 10 2017 01:27 Oshuy wrote:
[...]


ty ))))
Poopi
Profile Blog Joined November 2010
France12904 Posts
June 09 2017 22:11 GMT
#30
On June 10 2017 05:50 mozoku wrote:
[...]

But my question is: can you know the real probability of the event?
WriterMaru
Lebesgue
Profile Joined October 2008
4542 Posts
June 09 2017 22:21 GMT
#31
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?


With a finite amount of data you will never be able to learn the "real" probability of an event. What you obtain using statistical methods is always an estimate. Scientific articles that use statistical analysis will always report point estimates together with standard deviations, confidence intervals, or posterior belief distributions, to measure how "precise" the reported point estimate is.
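To see what those interval reports look like in practice, a minimal Python sketch with simulated data (my own illustration; the 'true' probability is only known here because we chose it for the simulation):

import numpy as np

rng = np.random.default_rng(0)
true_p = 0.55                        # unknowable in real life; chosen here
flips = rng.random(1000) < true_p    # 1000 simulated binary outcomes

k, n = flips.sum(), flips.size
p_hat = k / n                                  # point estimate
se = np.sqrt(p_hat * (1 - p_hat) / n)          # standard error
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se  # ~95% normal-approx interval

print(f"estimate {p_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# The interval narrows like 1/sqrt(n): more data sharpens the estimate,
# but no finite n collapses it to the "real" probability.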

HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 22:27 GMT
#32
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?

Suppose we knew everything about how everything works; then yes, in that case we would be able to know the real probability (which would be either 0 or 1).

But at the end of the day we know very little about how everything works, and we may not even truly realize how little we know.

Trying to predict something like an election or the weather comes down to building simplified models over the factors we think influence the outcome; the models can then be reevaluated, how the factors are weighted can be adjusted, etc.
fishjie
Profile Blog Joined September 2010
United States1519 Posts
June 09 2017 22:43 GMT
#33
Yeah, probability is a way to estimate the likelihood of something happening without complete information. If we had precise knowledge of how you flip the coin - initial starting position, angle of the flick, force of the flick, exact dimensions and weight of the coin, wind conditions, and who knows how many other parameters - the probability of heads would not be 1/2. You'd be able to build a physical model to know the exact answer. It'd be 0 or 1. At that point it's purely deterministic.

Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 22:46:57
June 09 2017 22:45 GMT
#34
On June 10 2017 07:11 Poopi wrote:
[...]
But my question is: can you know the real probability of the event?


In science (i.e. not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything 100%.
For example, when I throw a die, we cannot know that there is zero chance of it shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?

In the same way, if a die truly gives 50/50 for odd vs even, you only get to exactly 50/50 going to infinity. In fact, it is completely impossible to get 50/50 if you throw a die an odd number of times. The reason we know a die is 50/50 to be odd or even is because we know exactly how many sides it has. The assumption then is that the die is perfectly fair, which probably isn't the case. If you are really bored, you can take a bunch of dice, or coins, and throw/flip each of them an absurd number of times. Then calculate how likely the outcome you got is under the assumption that the die/coin is fair; a sketch of that calculation follows below.

In principle, any imperfection or flaw means the die/coin isn't perfectly symmetrical, and thus it can in principle be biased. So in the case of an unfair die/coin, there is no way to know with 100% accuracy what the exact probabilities are. The law of large numbers can get you far enough; far enough for any real-world application. (If you really need a huge number of throws, the die/coin may actually wear, and its fairness may change as a function of the number of throws, adding another layer of complexity.) So it is mainly a philosophical debate.

That said, you could probably control the outcome of the die/coin 100% by 'deciding' how you throw it. But I don't know of any magicians with enough skill to throw a die in a certain way that gets them the outcome they want.
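Here is that 'how likely is this outcome if the coin is fair' check as a minimal Python sketch (my own illustration, assuming scipy is available; the flip counts are invented):

from scipy.stats import binomtest

# Say we flipped a coin 10,000 times and saw 5,130 heads. How likely is a
# deviation at least this large if the coin is exactly fair (p = 0.5)?
result = binomtest(k=5130, n=10000, p=0.5)
print(result.pvalue)   # ~0.01: a fair coin does this about 1% of the time,
                       # so fairness looks doubtful -- but it is never
                       # strictly ruled out, matching the point above.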
hypercube
Profile Joined April 2010
Hungary2735 Posts
June 09 2017 22:53 GMT
#35
On June 10 2017 07:43 fishjie wrote:
[...]



And even then it would be an exact answer only under the assumption that the physical model is completely accurate.
"Sending people in rockets to other planets is a waste of money better spent on sending rockets into people on this planet."
fishjie
Profile Blog Joined September 2010
United States1519 Posts
Last Edited: 2017-06-09 23:00:44
June 09 2017 22:56 GMT
#36
On June 10 2017 07:45 Ernaine wrote:
In science (i.e. not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything 100%.

For example, when I throw a die, we cannot know that there is zero chance of it shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?


Ah, fair point. Related reading on that point:
https://en.wikipedia.org/wiki/Sunrise_problem
HKTPZ
Profile Joined May 2017
105 Posts
June 09 2017 23:03 GMT
#37
Even if a magician appeared before us and rolled a 1 a billion times in a row, that wouldn't confirm anything other than that rolling a 1 is possible. Now, assuming a uniform distribution (equal likelihood of each outcome, so 1/6 for rolling a 1), what the magician did occurs vanishingly rarely: with probability (1/6)^1,000,000,000. So for real-world, everyday purposes we would argue that the magician seems able to manipulate the die so that the distribution is not uniform.
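For a sense of scale, that probability is far below anything floating point can represent, so you have to work in logs; a two-line Python aside (my own arithmetic check):

import math

# (1/6) ** 1_000_000_000 underflows straight to 0.0 in floating point:
log10_p = 1_000_000_000 * math.log10(1 / 6)
print(log10_p)   # about -7.78e8, i.e. a 1-in-10^778,151,250 event if fair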
Poopi
Profile Blog Joined November 2010
France12904 Posts
Last Edited: 2017-06-09 23:23:17
June 09 2017 23:20 GMT
#38
On June 10 2017 07:45 Ernaine wrote:
[...]
In the same way, if a die truly gives 50/50 for odd vs even, you only get to exactly 50/50 going to infinity. In fact, it is completely impossible to get 50/50 if you throw a die an odd number of times. [...]

You don't only get to exactly 50/50 going to infinity: you have a 50% chance of getting exactly 50/50 if you throw it 2 times.
And I know the definition of probability in terms of limits and such; that's not really my question.
About the dice example: it has a lower chance of giving a 6 than a 1, afaik, because there are fewer holes in the 1 face, but again, that's not my question :/.

And I'm not talking about dice at all, because dice are pretty much random.

What I'm talking about is the probability of real-life events that are a priori not random: are we still stuck on determinism issues?
WriterMaru
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2017-06-10 03:18:51
June 09 2017 23:22 GMT
#39
edit: i solved my problem
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-06-09 23:27:48
June 09 2017 23:24 GMT
#40
Well, I don't agree. Yes, dice may not be fair. No real physical die can be perfectly fair.

But if we flip only 2 times and get each result once, the hypothesis that the true nature of the coin is 10% heads and 90% tails is still somewhat in agreement with the data. And that is quite a bit different from 50/50.

Yes, after 2 trials we can get exactly 50/50, and we can get the same at 4 trials and all the other even trial counts, but we still don't know with high probability that the coin is in fact a completely fair 50/50 coin.

So yes, we need to go to infinity for the observed frequency to settle at exactly 50/50. You can try it with a computer (yes, it will have pseudo-random numbers, so it is still a bit iffy, just like having a completely fair die).
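In that spirit, a minimal Python sketch of the experiment (my own illustration): watch the running frequency of heads drift toward 0.5 without ever being pinned there.

import numpy as np

rng = np.random.default_rng(42)             # pseudo-random, as conceded above
flips = rng.integers(0, 2, size=1_000_000)  # 0 = tails, 1 = heads
running = flips.cumsum() / np.arange(1, flips.size + 1)

for n in (10, 100, 10_000, 1_000_000):
    print(f"after {n:>9,} flips: heads frequency {running[n - 1]:.5f}")
# The running frequency approaches 0.5, but at any finite n it is almost
# never exactly 0.5.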