[University Statistics] Probability - Bernoulli Trial or something else?

The problem would be the following:

>A poll is being conducted among a population of unknown size in order to find and interview a person with a given disease (let's call it D). The rate of sickness for disease D in said population is p(D) = 0.001 (or 1 in 1000). The poll will run until someone with the given disease is found.
>
>Assuming the trials are independent and identically distributed, what would be the theoretical probability of having to poll 100 individuals before finding a diseased person?

Can this be solved as a binomial experiment for "n" statistically independent Bernoulli trials? Or would it be seen as a negative binomial experiment/distribution?

What approach should I take to solve this?

I ran a simulation in RStudio to obtain an empirical probability and came to a value of 0.09084 for 100,000 hypothetical runs. I'm guessing the theoretical probability has to be close (or I could be completely wrong).
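
Since the poll stops at the first success, the natural model here is the geometric distribution (a negative binomial with r = 1) rather than a fixed-n binomial. Below is a minimal sketch, mine rather than the poster's code, of the three quantities the question could mean; the simulated 0.09084 sits closest to the "success somewhere within the first 100 polls" reading:

```python
# Sketch (not from the post): polls until the first success are geometric.
p = 0.001

exactly_100 = (1 - p) ** 99 * p     # first diseased person on poll 100: ~0.000906
within_100 = 1 - (1 - p) ** 100     # success somewhere in polls 1..100: ~0.0952
beyond_100 = (1 - p) ** 100         # still searching after 100 polls:   ~0.9048
print(exactly_100, within_100, beyond_100)

# Monte Carlo check in the spirit of the poster's RStudio experiment:
import random
runs = 100_000
found = sum(any(random.random() < p for _ in range(100)) for _ in range(runs))
print(found / runs)                 # should land near within_100
```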

👍 3 · u/quinoa_baby · Nov 24 2021
Bernoulli trial homework

Hello, I am stuck on this problem:

"The probability of a basketball player making a free-throw out of 3 attempts is 0.992. What is the probability of the player making a free throw on the first attempt?"

The answer is supposed to be 0.8, but I'm not sure how to actually solve it. All help is appreciated!
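One plausible reading, and this is an assumption since the wording is ambiguous: "making a free throw out of 3 attempts" means at least one success in three independent attempts, each with the same per-attempt probability p. That reading produces the expected answer:

```latex
1 - (1-p)^3 = 0.992
\;\Rightarrow\; (1-p)^3 = 0.008
\;\Rightarrow\; 1-p = \sqrt[3]{0.008} = 0.2
\;\Rightarrow\; p = 0.8
```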

👍 3 · u/OldLadySmell · Oct 21 2021
When to use Bernoulli trials vs. simply multiplying

Something I have failed to reach an intuitive understanding of (or even consistently correct answers about) is when to use the Bernoulli trial formula C(n,k) p^k q^(n-k) (bad format, but we all know it) vs. just multiplying probabilities, assuming independence.

Example: Say we have a pmf of P(X=1)=0.3, P(X=2)=0.6, P(X=3)=0.1

Now consider we have three variables X, Y, Z drawn independently from the pmf above. Is the probability of X=1 AND Y=2 AND Z=3 simply equal to 0.3*0.6*0.1 = 0.018? Or do we use Bernoulli's formula (for example count X=1 as a success, X != 1 as failure), then have

P(X=1) = C(3,1) * (0.3)^1 * (0.7)^2 = 0.441

P(Y=2) = C(3,1) * (0.6)^1 * (0.4)^2 = 0.288

P(Z=3) = C(3,1) * (0.1)^1 * (0.9)^2 = 0.243

then P(X=1 AND Y=2 AND Z=3) = 0.441 * 0.288 * 0.243 = 0.031?

Why?

If the question was not P(X=1 AND Y=2 AND Z=3) but rather "after drawing 3 variables independently, what is the probability that X, Y, and Z take distinct values" (i.e., we no longer require specifically X=1, Y=2, Z=3, just that the three values all differ), would the approach be different?

Sorry for the long-winded question, but this has always tripped me up. Explanations very, very much appreciated. Thanks!
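
For what it's worth, the binomial formula answers "exactly k successes in n repetitions of the same experiment", while a single fully specified joint outcome of independent draws is just the product of the marginals. A brute-force enumeration sketch (mine) confirming both numbers discussed above:

```python
from itertools import product

pmf = {1: 0.3, 2: 0.6, 3: 0.1}

# A specific joint outcome of three independent draws: plain multiplication.
print(pmf[1] * pmf[2] * pmf[3])                    # 0.018

# "All three values distinct" allows any ordering of {1, 2, 3}.
p_distinct = sum(pmf[x] * pmf[y] * pmf[z]
                 for x, y, z in product(pmf, repeat=3)
                 if len({x, y, z}) == 3)
print(p_distinct)                                  # 3! * 0.018 = 0.108
```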

👍 6 · u/blessedrng · Sep 10 2021
Bernoulli Trial, Cumulative Distribution Function, and the Trial master

Yesterday Bex posted that it would take on average ~50 T14+ maps to see the Trialmaster (link: https://www.reddit.com/r/pathofexile/comments/n4ahe0/3141b_patch_notes/gwuktc9?utm_source=share&utm_medium=web2x&context=3), followed by the line: "We've tested this multiple times to make sure this rate is matching the number of encounters actually being done and it does match." I have zero reason to doubt they are seeing this in their data... but based on how the probability is encoded and how it is being tested, there could be a huge discrepancy as to whether a player picked at random feels like they are experiencing the Trialmaster as frequently as once in 50 ultimatums.

The lazy exile's approach:
-Set the Trialmaster to spawn in 1/50 encounters (2%).
-Across all players, record the number of T14 ultimatum trials attempted (for ease of math, 100 million).
-Across all T14 ultimatum trials, record the number of Trialmasters that pop up (for ease of math, 2 million).
-The lazy stats person sees that in fact 1/50 ultimatums lead to the Trialmaster, and so concludes players see the Trialmaster every 50 maps, give or take a few.

The exile with one college course in stats:
-A single ultimatum is a Bernoulli trial (i.e., a weighted coin flip) with a success/failure chance of seeing the Trialmaster. The success probability is 2% (1/50).
-For a given exile, the chance of having seen the Trialmaster is given by the cumulative distribution function, which depends on the number of ultimatums performed and the probability of success in each trial. This means:
--50% of players will have at least one Trialmaster within 35 T14+ maps. (Sounds pretty good!)
--20% of players (a whole 1 in 5 of us) will still be waiting for their first Trialmaster after 80 T14+ maps. (Not so good... 80 maps is a long time.)
--10% of players (1 in 10... enough to easily drown out a reddit forum) will still have none after 114 T14+ maps. (114 T14+ maps is seriously 50% of my entire mapping for a league.)
--5% of unlucky players won't see the Trialmaster even after 148 T14+ maps. Sucks to be you, exile.
--1% of the player base won't see the Trialmaster in 228 T14+ maps. (I loved last league and I didn't get 228 Maven witnesses.)
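
A quick sketch (mine) reproducing the map counts in the list above from the geometric model:

```python
import math

p = 1 / 50   # per-map chance of a Trialmaster encounter

# (1 - p)^n = q  =>  n = ln(q) / ln(1 - p): maps after which a fraction q
# of players are still waiting for their first encounter.
for q in (0.50, 0.20, 0.10, 0.05, 0.01):
    n = math.log(q) / math.log(1 - p)
    print(f"{q:.0%} of players still have nothing after {n:.0f} maps")
# prints ~34, ~80, ~114, ~148, ~228, matching the list above
```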

Anyway. For 90% of the player base to see the Trialmaster at least once within 50 ultimatums, the probability of encountering him would

... keep reading on reddit ➡

👍 96 · u/junyaminty · May 04 2021
Hi guys, I was wondering why you can't use Bernoulli trials for 2017 Q1 part (i). Thanks!
👍 4 · u/dumah17 · Jun 13 2021
Statistics: Bernoulli Trial

Hello!!

I am trying to complete some homework and am confused about a Bernoulli trial question! The question went:

Suppose 10,000 tosses of a coin were performed, with each being an independent Bernoulli trial for Y, the outcome that the coin lands on its edge. Given the random variable Y is Bernoulli distributed with a probability of success 1/6000, what is the expected number of coins that will land on their edge over the 10,000 tosses?

I assumed that since the mean of a Bernoulli distribution is simply the probability of success, I should change the denominator of p to 10,000, thus going from 1/6000 to 1.666667/10,000, and determining that E(X) = 1.6667 (4dp). However, I am unsure if this is correct, am thrown off by the use of "independent Bernoulli trials" in the question, and can't figure out where this fits.

If someone would be able to help guide me on the right path that would be greatly appreciated!!

Thank you! :)
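
The "independent Bernoulli trials" wording is exactly what licenses the shortcut: the total count is a sum of indicator variables, and expectation is linear, so there is no need to rewrite the denominator. A standard one-line derivation confirming the 1.6667:

```latex
Y_{\text{total}} = \sum_{i=1}^{10000} Y_i, \qquad
E[Y_{\text{total}}] = \sum_{i=1}^{10000} E[Y_i] = np = \frac{10000}{6000} \approx 1.6667
```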

👍 2 · u/Squeakizim · Apr 28 2021
Is playing Yahtzee an example of a Bernoulli trial?
👍 2 · u/issaturtleduck · Mar 18 2021
[Stats - Bernoulli Trials] Given 6 women and 3 men, how likely am I to select 0 women given I select 2 people?

Imagine I'm selecting someone at random from a group of 9 people. 6 of them are women, and 3 of them are men. If the binomial random variable is X = # of women selected, and I select 2 random people, what is the likelihood that 0 of them are women?

My understanding of the issue is that you would create a Bernoulli trial where n = 9 and r = 0.

This would result in: Combination(9, 0) * (2/3)^0 * (1/3)^(9-0), which results in 5.080526342529091e-05. This is a value above 1, so it's clearly wrong. Where am I going wrong?

Thank you so much in advance!
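
Two observations, offered as a sketch rather than a verdict: first, 5.080526342529091e-05 is scientific notation for 0.0000508, so it is far below 1, not above it. Second, selecting 2 people together from a group of 9 is sampling without replacement, so the count of women is hypergeometric rather than binomial:

```python
from math import comb

# P(0 women when choosing 2 of 9, with 6 women and 3 men): hypergeometric.
print(comb(6, 0) * comb(3, 2) / comb(9, 2))   # 3/36 = 1/12 ~ 0.0833

# Same value as a chain of conditional probabilities:
print((3 / 9) * (2 / 8))                      # first a man, then another man
```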

👍 2 · u/gotmycarstuck · Feb 23 2021
[University programming/statistics: Bernoulli trials] Is the probability of throwing 6 heads different from the probability of throwing 6 heads and then a tails, in a system where the experiment ends when a tails is thrown?

We haven't had any instructor prompts really other than the question. The question is:

In lectures, on insertion into a skiplist we flipped a coin until a head occurred to determine the height of an element. So for example, if we flipped two tails before a head then we would insert an element of height 3. In all the examples in lectures the probability of flipping a head was exactly 1/2. Now assume that the coin is biased so the probability of flipping a head on a given throw is now 9/10. Derive an expression for the probability of producing an element of height h with this biased coin. Describe in broad terms the consequence of this bias in terms of the average cost of insertion into the skiplist. (2 marks)

Now I know the consequence of doing this with a skiplist and so on, but the probability distribution part is tripping me up because we have NEVER done anything similar to it in class.

So, in terms of the question given:

The system is that we keep throwing a biased coin (1/10 to land tails, 9/10 to land heads) until a heads is thrown. I want to generate a probability distribution of the number of tails (the height) thrown in a row.

My first thought was a Bernoulli distribution,

P(T=h) = (1/10)^h * (9/10)

but after consulting my lecture notes, which do a similar thing for a fair coin, it seems they're saying the probability is

P(T=h) = (1/2)^h

which would imply for this unfair coin that;

P(T=h) = (1/10)^h

I'm unsure because (1) this is a programming class with no statistics involved, so we haven't had any introduction to these ideas or how they expect us to use them, and (2) I'm unsure if throwing 6 tails in a row is equivalent to throwing 6 tails and then a heads in this circumstance.

Thanks.
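
A simulation sketch (mine) can arbitrate between the two candidate formulas. Counting T as the number of tails before the terminating head, the empirical frequencies match (1/10)^h * (9/10) for "exactly h tails", while (1/10)^h on its own is P(T >= h), the chance that the first h flips are all tails. (For the fair coin, the notes' (1/2)^h is consistent with a convention where the height also counts the final head, since (1/2)^(h-1) * (1/2) = (1/2)^h.)

```python
import random

P_HEADS = 9 / 10
RUNS = 100_000

counts = {}
for _ in range(RUNS):
    tails = 0
    while random.random() > P_HEADS:   # each extra tail occurs w.p. 1/10
        tails += 1
    counts[tails] = counts.get(tails, 0) + 1

for h in range(4):
    empirical = counts.get(h, 0) / RUNS
    exactly_h = (1 / 10) ** h * (9 / 10)   # exactly h tails, then the head
    at_least_h = (1 / 10) ** h             # first h flips are all tails
    print(h, round(empirical, 5), exactly_h, at_least_h)
```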

👍 2 · u/Big_Boss_Bob_Ross · May 13 2020
[Q] What distribution is a sequence of Bernoulli trials that have a Poisson payoff? i.e., a payoff of 1, 2, 3, 4, etc. on success.

Hi - I'm trying to calculate the p-value for the click-out rate.

Quick review:

click out rate = number of clicks per session

penetration = 1 if a session had a click, 0 otherwise

The average penetration rate is then based on a sequence of trials whose values are either 0 or 1 ===> so it has a Binomial(N, p) distribution, where N = number of trials and p = success rate.

But what if my statistic gives a payoff other than 1 in the event of a success? What if my random variable pays according to a rate lambda (Poisson) in the event of a success?

example: [0 0 5 0 2 0 0 11 0 0 0 0 8] for [fail fail success fail success fail fail success fail fail fail fail success]

What distribution is this, then? This represents the number of clicks in a sequence of sessions/trials.
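
One common label for the per-session variable is a zero-inflated or compound Bernoulli-Poisson count; the total over N sessions is then a compound binomial, not one of the one-parameter textbook families. A simulation sketch with hypothetical parameters (p = 0.3 and lam = 4 are made up):

```python
import math
import random

p, lam = 0.3, 4.0              # hypothetical success rate and click rate
n_sessions, runs = 1000, 2000

def poisson(rate: float) -> int:
    """Knuth's simple Poisson sampler (fine for small rates)."""
    limit, k, prod = math.exp(-rate), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

totals = [sum(poisson(lam) for _ in range(n_sessions) if random.random() < p)
          for _ in range(runs)]

mean = sum(totals) / runs
var = sum((t - mean) ** 2 for t in totals) / runs
print(mean, var)
# Theory: mean = n*p*lam = 1200; var = n*(p*lam + lam**2*p*(1 - p)) = 4560
```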

👍 7 · u/captnpepsi · Jan 31 2020
A few questions on the implementation of Bernoulli trials in video games design.

This is sort of the intersection of probability and video game design, so I set the flair as applied maths.

My question is as follows: in a game where we repeatedly perform Bernoulli trials, whether it be critical hit chance, drop rate or suchlike, there's always a chance that there exists an anomalous sequence of successes or failures.

From a statistical point of view, those anomalies are normal and hardly surprising. But for the player, they can be discouraging, or grant an unfair advantage out of pure chance.

This leads to my question: in practice, is there any method employed to 'regularise' this Bernoulli sequence? Other issues that come to mind are: what is the resulting distribution? And does it approximate the theoretical distribution of the Bernoulli sequence?

One immediate thought is that we can employ a Bernoulli-type sequence, but after, say, 10 failures, a success is guaranteed. Has such an idea been employed?

If there's a survey paper or textbook that discusses such methods, can someone please point me in the right direction?
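
Yes, such ideas are used; the usual keywords are "pseudo-random distribution" (the mechanism popularized by Dota 2's crit system, where the chance grows after each failure) and "pity timer" or bad-luck protection. A minimal sketch of the increasing-chance variant, with made-up numbers:

```python
import random

class PityBernoulli:
    """Success chance grows by `step` after each failure, resets on success.

    Note the long-run success rate exceeds `base`, so base/step must be
    tuned (numerically) if a specific average rate is the design target.
    """
    def __init__(self, base: float, step: float):
        self.base, self.step = base, step
        self.current = base

    def roll(self) -> bool:
        if random.random() < self.current:
            self.current = self.base                       # reset on success
            return True
        self.current = min(1.0, self.current + self.step)  # failure: ramp up
        return False

rng = PityBernoulli(base=0.05, step=0.05)
rolls = [rng.roll() for _ in range(100_000)]
print(sum(rolls) / len(rolls))   # observed rate; failure streaks are capped
```

The resulting gaps between successes are much more concentrated than the geometric gaps of i.i.d. Bernoulli trials, which is exactly the kind of 'regularisation' being asked about.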

👍 19 · u/Vietdanglse · Oct 30 2019
Question about Bernoulli trials

Forgive me if this question is silly:

Suppose you and I have one-hundred beans in a bag, all of which you think are white. I hypothesize that ninety-nine are white and one is red. Meanwhile, our friend, Silly Billy, hypothesizes that ninety are red and ten are white. In order to determine whose hypothesis (if any) is correct, we begin drawing beans out at random, one by one, not replacing them.

Let us say that we draw out one bean, which is white. Given your theory, there is a 100% (p = 1) chance that this ought to have occurred. Given my theory, there is a 99% (p = .99) chance it ought to have occurred. And given Silly Billy's there is a 10% (p = .1) chance it ought to have occurred.

Obviously a single bean is not a good sample size, but does this single trial give preference to your theory over mine? It seems to discredit Silly Billy's theory (am I wrong?). What do you think about the evidential relevance of this single trial?
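
One way to make "preference" precise is to compare likelihoods. For the single white draw, the likelihood ratios are

```latex
\frac{P(\text{white}\mid \text{yours})}{P(\text{white}\mid \text{mine})}
= \frac{1}{0.99} \approx 1.01,
\qquad
\frac{P(\text{white}\mid \text{mine})}{P(\text{white}\mid \text{Billy's})}
= \frac{0.99}{0.1} = 9.9
```

so the draw barely separates the first two hypotheses but counts almost tenfold against Silly Billy's; how that translates into belief depends on the prior odds assigned to the hypotheses.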

👍 3 · u/LogicalChain5 · Jul 22 2019
Is there an algorithm for picking a random integer uniformly in [1, n] using a bounded number of Bernoulli trials (random bits)?

A practical algorithm for making a uniform pick in [1, n] is to extend the range to the nearest power of two and then pick in that extended range, i.e. using ⌈log2 n⌉ bits. The result is the base-2 representation of a number, which is returned if it falls in the desired range; otherwise, we repeat the procedure. This is fine for practical computing purposes but it's not guaranteed to terminate.
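
On the headline question: with a bounded number of fair bits, every outcome probability is a multiple of 2^-k, so an exactly uniform pick over, say, n = 3 values (probability 1/3 each) cannot be produced by any always-terminating bounded-bit algorithm; rejection with a small expected cost is the standard compromise. A sketch (mine) of the procedure described above:

```python
import random

def uniform_int(n: int) -> int:
    """Uniform draw from [1, n] by rejection; expected fewer than 2 rounds."""
    if n == 1:
        return 1
    bits = (n - 1).bit_length()        # ceil(log2(n)) bits cover [0, n-1]
    while True:
        x = random.getrandbits(bits)   # uniform on [0, 2**bits - 1]
        if x < n:                      # accept only in-range values
            return x + 1

print([uniform_int(6) for _ in range(10)])   # e.g., ten fair die rolls
```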

👍 10 · u/znegva · Jul 08 2018
Comparing the quality of two Bernoulli trial probability estimators (in a betting setting)

Hi!

I'm currently testing a model for betting on the outcomes of sports matches. The outcomes are binary, and bookmakers provide odds for either outcome (which in turn can be converted to probability estimates). Our model also produces probability estimates for the outcomes of each match, and as such, we want to bet on matches where our probability estimates differ favorably from the bookmaker's.

So in theory, each of these matches represents a Bernoulli trial for the outcome, each with its "real" probability of success, for which we have two estimates: the bookmaker's and our model's.

What we ideally want to estimate is the probability that our model's probability estimates are better than the bookmaker's, given a sample of n (say 20, or more) realizations of such matches. Is there any statistically sound way to do this?
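
One statistically sound route is to score both probability streams against the realized outcomes with a proper scoring rule (log loss or Brier score) and then ask whether the mean per-match score difference is reliably positive, e.g., with a paired bootstrap. A sketch under entirely hypothetical data (every number below is made up):

```python
import math
import random

def log_loss(p: float, y: int) -> float:
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Hypothetical inputs: realized outcome plus the two probability estimates.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
model    = [0.7, 0.3, 0.6, 0.8, 0.2, 0.65, 0.4, 0.35, 0.7, 0.6,
            0.3, 0.75, 0.6, 0.25, 0.8, 0.45, 0.55, 0.7, 0.4, 0.3]
book     = [0.6, 0.4, 0.55, 0.7, 0.3, 0.6, 0.45, 0.4, 0.6, 0.55,
            0.35, 0.65, 0.55, 0.3, 0.7, 0.5, 0.5, 0.6, 0.45, 0.35]

diffs = [log_loss(b, y) - log_loss(m, y)
         for m, b, y in zip(model, book, outcomes)]   # > 0 favors the model

# Paired bootstrap for the mean score difference and its 95% interval.
boots = sorted(sum(random.choices(diffs, k=len(diffs))) / len(diffs)
               for _ in range(10_000))
print(sum(diffs) / len(diffs), boots[250], boots[9750])
```

With only ~20 matches the interval will usually straddle zero; the same machinery indicates how many matches would be needed before it reliably does not.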

👍 3 · u/larzinator · Feb 10 2019
[High School probability] Bernoulli trials

Can someone please explain what is going on here? Specifically, I don't understand the part where the author deduces the probability of each of the outcomes as (2/3)^4 * (1/3)^3 in the solution. Can someone explain how, please?
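
Without seeing the linked problem, the usual source of that factor: if each of 7 independent trials succeeds with probability 2/3, then any one particular sequence containing exactly 4 successes and 3 failures has probability

```latex
\underbrace{\tfrac{2}{3}\cdot\tfrac{2}{3}\cdot\tfrac{2}{3}\cdot\tfrac{2}{3}}_{4\text{ successes}}
\cdot
\underbrace{\tfrac{1}{3}\cdot\tfrac{1}{3}\cdot\tfrac{1}{3}}_{3\text{ failures}}
= \left(\tfrac{2}{3}\right)^{4}\left(\tfrac{1}{3}\right)^{3}
```

and the binomial coefficient C(7,4) then counts how many such sequences there are.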

👍 4 · u/Cheeky9 · Jan 15 2019
Why does the chance of at least 1 success in X Bernoulli trials (with p = 1/x) always equate to roughly 0.63?

Bit of a strange question, since the explanation could just be "because that's how numbers work". I remember reading an article relatively recently about the importance of this number (or perhaps, the importance of failing with p = 0.37) and was hoping someone could jog my memory to help me find it.

Basically, is there any special reason why it's 0.63/0.37? Is there some special number in math at work here, in the same vein as e?
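
It is e, in disguise: the failure probability converges to 1/e, so the success probability converges to 1 - 1/e:

```latex
P(\text{no success in } x \text{ trials}) = \left(1 - \tfrac{1}{x}\right)^{x}
\;\xrightarrow{\;x \to \infty\;}\; e^{-1} \approx 0.3679,
\qquad
P(\text{at least one success}) \;\to\; 1 - e^{-1} \approx 0.6321
```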

👍 6 · u/NotJordy · Nov 19 2017
Does the sum of the Poisson approximations to a Bernoulli trial never equal one?

The sum of the Poisson distribution P(X = j) is equal to exactly one only when the sum of P(X = j) is computed from j = 0 to infinity. When using a Poisson distribution to approximate a sequence of Bernoulli trials, where an experiment with probability of success p is performed a finite number n of times, it only makes sense for j to go up to n. So the sum of the probabilities of all possible outcomes is P(X = 0) + ... + P(X = n), which is less than 1, and the difference from 1 is the tail sum P(X = n+1) + P(X = n+2) + ... + P(X = infinity)?
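
That reading is correct, and the shortfall is exactly the Poisson tail beyond n; in the regime where the approximation is used (n large, p small, lambda = np moderate) that tail is negligible:

```latex
\sum_{j=0}^{n} e^{-\lambda}\frac{\lambda^{j}}{j!}
\;=\; 1 \;-\; \sum_{j=n+1}^{\infty} e^{-\lambda}\frac{\lambda^{j}}{j!},
\qquad \lambda = np
```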

👍 3 · Apr 28 2019
Bernoulli Trials and Binomial Distribution

A random variable represents the number of successes in 20 Bernoulli trials, each with probability of success p=0.4.

(A) Find the mean and standard deviation of the random variable. (B) Find the probability that the number of successes lies within 1 standard deviation of the mean.

I've done part A. It's part B I can't seem to grasp. I'm even using the function on my calculator, but I've had to do a similar problem like this 5 different times and it still keeps marking part B wrong.
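
A common trap in part B is the rounding step: the mean is 8 and the standard deviation is about 2.19, so "within 1 standard deviation" is the interval [5.81, 10.19], which contains the integer counts 6 through 10. A sketch (mine) of the exact sum:

```python
from math import comb

n, p = 20, 0.4
mean = n * p                        # 8.0
sd = (n * p * (1 - p)) ** 0.5       # ~2.19

lo, hi = mean - sd, mean + sd       # [5.81, 10.19] -> integers 6..10
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(n + 1) if lo <= k <= hi)
print(prob)
```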

👍 2 · u/GirlOnWheels10 · Nov 26 2015
Bernoulli trials. The essential things to know. dataanalysisclassroom.com…
👍 7 · u/realDevineni · Sep 09 2017
[Statistics] Deriving p of Bernoulli trial from number of successes.

Hi. I'm working on a personal project and need to make sure the work I'm doing is mathematically sound. For the questions below, is my approach correct, or is there a correct/better way of doing things?

I have conducted 20 Bernoulli trials. My observed outcome is 8 successes.

It was proposed that $p_{suc}$ was 0.2. As the number of successes follows a binomial distribution, I have calculated the standard deviation as $\sqrt{npq} = \sqrt{20 \cdot 0.2 \cdot 0.8} = \sqrt{3.2} = 1.79$.

The proposed mean was 4, so my observed value is +2.22 standard deviations from the mean, and so my p-value is 0.0264. So this is significant by my measure (< 0.05). However, is it valid to say that there is a 97.4% chance that $p_{suc} > 0.2$?

Also, I want to define a range of values for $p_{suc}$, given my observed number of successes, in which I can be 95% confident. So my estimate is $8/20 = 0.4 \pm x$.

Is there a way to calculate $x$? Or can I use brute force to simulate 20 trials, say, 10,000+ times for different $p_{suc}$ values, and use an algorithm to find the upper and lower cut-off values for $p_{suc}$ whereby my observed success count of 8 appears in fewer than 5% of simulations?

Thanks for any help.
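
Two hedged side notes before the sketch: the exact one-sided binomial p-value for 8 or more successes at p = 0.2 comes out near 0.032, a bit larger than the 0.0264 from the normal approximation; and "97.4% chance that $p_{suc} > 0.2$" is a Bayesian statement that a p-value does not directly license without a prior. The brute-force plan in the last paragraph is essentially the exact (Clopper-Pearson) interval, and it can be computed from binomial tails rather than simulation:

```python
from math import comb

def binom_tail(n: int, p: float, k_lo: int, k_hi: int) -> float:
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_lo, k_hi + 1))

n, k = 20, 8

# Exact one-sided p-value for H0: p_suc = 0.2 vs. p_suc > 0.2.
print(binom_tail(n, 0.2, k, n))            # P(X >= 8 | p = 0.2) ~ 0.032

# Clopper-Pearson 95% interval: scan p for the exact tail conditions.
grid = [i / 10_000 for i in range(1, 10_000)]
lower = min(p for p in grid if binom_tail(n, p, k, n) >= 0.025)
upper = max(p for p in grid if binom_tail(n, p, 0, k) >= 0.025)
print(lower, upper)                        # roughly 0.19 to 0.64
```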

👍 2 · u/chazwc · May 20 2018
Question about combinations and repeated Bernoulli trials

I've had this problem for some months now, and while normally I can quickly find out why what I'm doing is wrong, I still don't know why this particular way of solving problems fails. I'm a high school student, and in our classes we talked about combinations and repeated Bernoulli trials. Sometimes I face problems like this one:

"There are 10 X cases and 20 Y cases, indistinguishable at touch. From those 30 cases, 4 are selected, simultaneously. What is the probability of all 4 cases being Y?"

I always tend to use the repeated Bernoulli trials formula, since success would be getting a Y case (two thirds) and failure would be getting an X case (one third). So, I went and calculated this probability: https://www.wolframalpha.com/input/?i=probability+of+4+successes+in+4+trials+with+p%3D2%2F3

But it turns out that the right answer is this: https://www.wolframalpha.com/input/?i=(20+choose+4)+%2F+(30+choose+4) and I just can't understand why I can't use Bernoulli trials for this type of exercise. Can you help me, please?
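
The Bernoulli formula assumes the success chance stays at 2/3 for every draw, but simultaneous selection is sampling without replacement: each Y taken makes the next Y less likely, which is the hypergeometric setting. A sketch contrasting the two numbers:

```python
from math import comb

# Binomial (with replacement): p stays 2/3 on every draw.
print((2 / 3) ** 4)                        # ~0.1975

# Hypergeometric (without replacement): all 4 picks are Y cases.
print(comb(20, 4) / comb(30, 4))           # ~0.1768

# The same value as a chain of conditional probabilities:
print((20 / 30) * (19 / 29) * (18 / 28) * (17 / 27))
```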

👍 2 · u/mcbinladen · Jul 14 2016
Binomial distribution with pairs of independent Bernoulli trials

I'm trying to find some guidance on solving this problem involving a binomial probability distribution.

In the normal case, there is a series of Bernoulli trials with probability of success p, and the distribution will tell me the probability of k successes in n trials. I get that just fine.

However, in the situation I'm looking at, each event represents a pair of independent Bernoulli trials, with the first trial succeeding with probability p and the second with probability p/2. I would like a probability mass function and cumulative distribution function that will tell me the probability of k successes in n pairs of trials. I only care about the total number of successes, irrespective of which of the two trials (or whether both) succeed in any particular event.

My current means of solving this is to look at the two trials as a single event, setting p to the probability of at least one succeeding, which in this case is equivalent to (p/2)*(3-p). Then, for whatever k I want the probability of, I first divide it by the average number of successes for events in which there is at least one, which in this situation is equal to 3/(3-p), before applying it to the mass function. The problem is that this generally turns the starting k into a non-integer value to plug into the function, which, while it works, seems incorrect for a probability mass function.

An example with real values just to make my explanation clearer:

There will be 50 pairs of trials, and the probability of success on the first one is 0.4, making the second 0.2. The probability that at least one trial will succeed is 0.52. The average number of successes when there is at least one is about 1.154. If I want to know the probability of achieving, say, 31 successes, I first divide by 1.154, giving me about 26.863 and then plug that into the probability mass function, which in this case gives me 0.109.

However, I'm not sure whether this value is correct. It seems plausible, but there's also the awkwardness of plugging in a fractional value into a probability mass function of a discrete probability distribution. Can anyone offer any insights into how to improve my calculations?
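
A cleaner route than rescaling k: the total is the sum of two independent binomial counts, Binomial(n, p) from the first trials and Binomial(n, p/2) from the second, and their exact convolution avoids fractional arguments entirely. A sketch (mine, not the poster's method):

```python
from math import comb

def binom_pmf(n: int, p: float) -> list[float]:
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

n, p = 50, 0.4
first, second = binom_pmf(n, p), binom_pmf(n, p / 2)

# Convolution: P(total = k) = sum_i P(first = i) * P(second = k - i).
total = [0.0] * (2 * n + 1)
for i, pa in enumerate(first):
    for j, pb in enumerate(second):
        total[i + j] += pa * pb

print(total[31])                                  # exact P(31 successes)
print(sum(k * q for k, q in enumerate(total)))    # mean = n*(p + p/2) = 30.0
```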

👍 2 · u/Arkalius · Aug 02 2014
Simulating n‐Bernoulli Trials in Constant Time

Hey there everyone. I was thinking about ways to make a program of mine run faster, and while I was thinking on it, this question popped into my head. I'm not nearly familiar enough with the math background behind it (I'm a CS guy who relies on math a lot, but who's not very good at it). I was hoping someone better than me could look at it and tell me whether it's the sort of thing that can be done through a mathematical shortcut, or whether there's no way to skip doing all the simulations. Okay, so here goes:

Let's say we have a very large number of coins to toss (pretend it's 10^12). They're unfair coins which land on heads about 2/3 of the time. If we were to flip them all, we would then have the number of heads and the number of tails resulting from these trillion trials. We also know that this will result in a clean binomial distribution, so it will be the case that we can calculate the probability of there being exactly x heads at the end of the trials. Naturally, the probabilities we can calculate for each of the possible results (from 0 to 10^12 heads) must add up to 1. We also know that the expected value of the trials will be (2/3) × 10^12.

My question is this: can we use this information to somehow compute a possible real result of running all these trials without actually running them? I was thinking of it this way: if each of the individual probabilities for a particular result can be thought of as a number of votes, we could essentially put the corresponding number of votes for each result into a hat, and randomly select one. This would essentially simulate running through all the flips, as the probability of pulling a given value out of this hat would be the exact same as achieving that number of heads after n trials. The problem is, this would require calculating the individual probability for every single possible result.

I thought that this could be simplified by just using the integral of the mass function as follows: set one side of an equation equal to a random variable r (range 0–1) and the other side to the definite integral of the mass function from 0 to k, where k is the result (the number of heads achieved). We could then solve for k to get the number of heads for a given random value of r. The problem, of course, is that you cannot integrate the mass function below:

r = ∫ (n choose k) × (2/3)^k × (1/3)^(n−k) dk

as the n choose k bit includes factorials. So that's where I'm stuck. Is it possible to get

... keep reading on reddit ➡
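
Picking up where the truncated post leaves off: there is no closed-form antiderivative, but for n = 10^12 the binomial is overwhelmingly well approximated by a normal with mean np and variance np(1-p), so one Gaussian draw, rounded, stands in for the entire hat of votes. A sketch, offered as an approximation rather than an exact binomial sample:

```python
import random

def approx_binomial(n: int, p: float) -> int:
    """One O(1) draw approximating Binomial(n, p) for large n (CLT-based)."""
    mean = n * p
    sd = (n * p * (1 - p)) ** 0.5
    k = round(random.gauss(mean, sd))
    return min(max(k, 0), n)           # clamp to the valid range [0, n]

heads = approx_binomial(10**12, 2 / 3)
print(heads, heads - 2 * 10**12 // 3)  # a draw and its deviation from the mean
```

Exact constant-expected-time samplers exist as well; NumPy's binomial generator, for example, switches to the BTPE rejection algorithm for large n.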

👍 3 · u/AidenTai · Aug 12 2013
Estimating number of Bernoulli trials given number of successes

(Note: I asked this question before on the Stats StackExchange, but did not get any authoritative answer. I hope my luck is better with Reddit.)

Suppose you have a series of n trials, where the probability of success in each trial is p. The distribution of the number of successful trials follows a Binomial distribution with parameters (n, p). The mean is given by np whereas the variance is np(1-p). So far so good: this is pretty mundane Stats 101 stuff.

But suppose now that I only knew the number m of successful trials, and had no knowledge of the total number n of trials, which is the variable I am interested in estimating. For example, I know I had 100 successful trials, where each trial had a 0.1 chance of success. Is there a known probability distribution that describes the probable outcomes for n, the total number of trials? Estimating the mean is easy: m/p. But what about the variance and other measures?

What if each success had a different (but known) chance of success? Suppose I had the following records:

  • success1 (with p=0.1)
  • success2 (with p=0.1)
  • success3 (with p=0.2)

Again, a good estimate of the total number of trials can be obtained by simply summing 1/p over the successful trials. In this case that number is 10+10+5=25. But what about the variance and other measures?
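
For the constant-p case this has a name: reading the record as "trials continued until the m-th success", the total count n is negative binomial, which supplies a variance to go with the mean m/p; for the unequal-p records, modeling each success as closing its own independent geometric run gives an additive analogue. Both formulas below are sketches under that stopping-rule assumption, not the only possible model:

```latex
n \sim \mathrm{NegBin}(m, p): \quad
E[n] = \frac{m}{p}, \qquad
\mathrm{Var}(n) = \frac{m(1-p)}{p^{2}}
\qquad\bigl(m = 100,\ p = 0.1:\ E[n] = 1000,\ \sigma_n = \sqrt{9000} \approx 94.9\bigr)

n = \sum_{i=1}^{m} G_i,\ \ G_i \sim \mathrm{Geom}(p_i): \quad
E[n] = \sum_i \frac{1}{p_i} = 25, \qquad
\mathrm{Var}(n) = \sum_i \frac{1-p_i}{p_i^{2}} = 90 + 90 + 20 = 200
```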

👍 2 · u/jon_smark · Dec 07 2012
Is this a Bernoulli trial, Poisson sampling, or neither?

I recently found an interest in statistics and I want to apply it to a real-life scenario. My friend plays a game called Magic: The Gathering, and I want to calculate the probability of drawing x copies of the same card after drawing 7 cards, knowing that the deck has a total of 60 cards. As you draw cards, the total number of cards that can be drawn changes, and this affects the probability of future draws. I know how to brute-force this problem, but I want to learn a better way of solving it, since I tend to run into these scenarios very often with my only tool being my own intuition. The problem I have is that I do not know what this type of problem is called. I am not familiar with a lot of the terminology used in statistics, so it is difficult for me to compose an appropriate Google search. Also, is there a tool that I can use in Excel to help me with this type of problem? Thanks for reading.
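
The keyword being searched for is the hypergeometric distribution (sampling without replacement from a finite deck); Excel exposes it as HYPGEOM.DIST. A sketch with a hypothetical playset of 4 copies in a 60-card deck:

```python
from math import comb

def opening_hand_probs(copies: int, deck: int = 60, hand: int = 7) -> list[float]:
    """P(exactly x copies among the first `hand` cards), hypergeometric."""
    return [comb(copies, x) * comb(deck - copies, hand - x) / comb(deck, hand)
            for x in range(copies + 1)]

for x, prob in enumerate(opening_hand_probs(4)):
    print(f"P(exactly {x} copies in 7 cards) = {prob:.4f}")
```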

👍 9 · u/polihayse · Apr 09 2015
[college stats] Express a binomial distribution in terms of Bernoulli trial random variables

I'm not sure how to approach the second question; I'll post both since they are related.

5. Give the mean, variance, and standard deviation of the binomial distribution with n = 120 and p = 0.3. Evaluate numerically.

ok this is easy

mean E[X] = np = 36

var = np(1-p) = 25.2

sd = sqrt(var) = 5.02

6. Per (5), express the r.v. X having the binomial n = 120, p = 0.3 distribution in terms of Bernoulli trial random variables X1, …, Xn. From this, prove your answers to (5), stating the needed assumptions and the rules of E and Var that you use.
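
For (6), the standard argument writes X as a sum of indicator variables and uses linearity of expectation together with additivity of variance for independent variables (independence of the trials is the needed assumption):

```latex
X = \sum_{i=1}^{n} X_i,\quad X_i \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(p)
\;\Rightarrow\;
E[X] = \sum_{i=1}^{n} E[X_i] = np,
\qquad
\mathrm{Var}(X) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = np(1-p)
```

using E[X_i] = p and Var(X_i) = E[X_i^2] - p^2 = p - p^2 = p(1-p).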

👍 2 · u/DJ_Arbor · Oct 30 2015
