Let P be the transition matrix and n the number of random walks. If you rearrange the matrix so that the transient states (those that are eventually left forever) come first and the absorbing states (those that transition back to themselves with probability 1 at each time step) come last, you can partition P into
P = [ Q  R ]
    [ 0  I ]
where Q holds the transition probabilities between transient states and R holds the probabilities of transitioning from each transient state to each absorbing state. The fundamental matrix N gives the expected number of visits to each transient state before absorption and is calculated as N = (I - Q)^-1, where I is an identity matrix of the same dimension as Q. The formula is the matrix version of the geometric series, since it comes from N = I + Q + Q^2 + Q^3 + ... out to infinity. You can then calculate the absorption probabilities A from A = NR, giving the probability of absorption into each absorbing state given each initial transient state.
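As a quick numeric sanity check (the matrices below are invented purely for illustration), the N = (I - Q)^-1 and A = NR computations are straightforward in NumPy:

```python
import numpy as np

# A small absorbing chain with 2 transient and 2 absorbing states.
# These probabilities are made up purely for illustration.
Q = np.array([[0.5, 0.2],
              [0.3, 0.4]])   # transient -> transient
R = np.array([[0.2, 0.1],
              [0.1, 0.2]])   # transient -> absorbing

# Fundamental matrix: expected number of visits to each transient state.
N = np.linalg.inv(np.eye(2) - Q)

# Absorption probabilities from each transient starting state.
A = N @ R

# Each row of A sums to 1: absorption is certain from every transient state.
print(A.sum(axis=1))
```

Checking that each row of A sums to 1 is a useful test that Q and R were partitioned correctly.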
If the elements of Q are uniform random variables and you take n random walks on this Markov chain, what are the distributions of the elements of N and A? Assume that the elements of Q are independently but not necessarily identically distributed.
It's been a while since I've done statistics, and I came across a problem where there are two i.i.d. discrete uniform random variables, both on [1,20], and I have to find the expected value of the maximum of the two. I'm not really sure how order statistics is supposed to work for discrete distributions. Any thoughts?
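In the discrete case you can work directly with the CDF: P(max <= k) = (k/20)^2 for independent uniforms, so P(max = k) = (k^2 - (k-1)^2)/400. A minimal sketch using the [1,20] range from the question:

```python
# E[max(X, Y)] for two i.i.d. discrete uniforms on {1, ..., 20}.
# P(max <= k) = (k/20)^2, so P(max = k) = (k^2 - (k-1)^2) / 400.
n = 20
expected_max = sum(k * (k**2 - (k - 1)**2) / n**2 for k in range(1, n + 1))
print(expected_max)  # -> 13.825
```

The same CDF-difference trick gives the PMF of any order statistic of discrete i.i.d. variables.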
So I'm trying to explore the statistical properties of including vs. excluding a particular predictor 'A' in a classification context (classifying some binary outcome 'B'), specifically to probe the inverse question -- whether the effects of 'B' have been appropriately removed from the predictor 'A' by a colleague's procedure.
My thinking was that if the effect of 'B' still echoes through the "B-corrected" data 'A', we should be able to retrieve information about 'B' from the data using 'A', along with the rest of our data.
Of course, adding variables will always improve retrodictive classification accuracy. I envision two easy ways around this -- cross-validation, and permutation of the predictor 'A'.
I decided to start with the latter: basically, evaluating classification accuracy of 'B' from my data (incl. 'A'), and then computing a "null distribution" of classification accuracy by permuting (i.e. shuffling) 'A' some number of times and re-running the classification algorithm each time.
So in the end, I might get a retrodictive classification accuracy using my real data of X = 75 / 100, and permuted accuracies of Y = (74, 71, 74, 73, 76, 75, 75, 76, 74, ...) / 100. My thinking is that if my colleague adequately removed all trace of 'B' from the feature in question, the quantile of my real accuracy within my permuted accuracies should be uniform(0,1) (my colleague actually ran this "correction" independently on thousands of features, so I could test for uniformity here).
Question is, I'm not really sure how to find the quantile of my discrete random variable, i.e. the count of correct classifications out of 100. The usual way I'd do this would be to do something like n_(X > Y) / n_Y, so if you truncate Y above before the ellipse, you get 5/9 = 0.56-ish. Or should I do n_(X >= Y) / n_Y, in which case I have 7 / 9 = 0.78-ish? In the continuous case it doesn't matter because we never get exact equality, but here it does. CDFs of ordered discrete random variables include the # being evaluated while summing over their PMFs, but IDK if that really provides principled guidance.
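For what it's worth, the three tie-handling conventions can be compared directly; a common compromise is the "mid" quantile, which counts ties at half weight. A sketch using only the nine permuted values quoted before the ellipsis:

```python
# Quantile of the observed accuracy X within the permuted accuracies Y.
X = 75
Y = [74, 71, 74, 73, 76, 75, 75, 76, 74]

strict = sum(y < X for y in Y) / len(Y)   # n_(X > Y) / n_Y  ~ 0.56
weak = sum(y <= X for y in Y) / len(Y)    # n_(X >= Y) / n_Y ~ 0.78
mid = (sum(y < X for y in Y) + 0.5 * sum(y == X for y in Y)) / len(Y)

print(strict, weak, mid)
```

For hypothesis testing specifically, the estimate (1 + #{Y >= X}) / (1 + n) is often recommended, since it can never return a p-value of exactly zero; for testing uniformity across features, mid-quantiles are a natural choice because they have mean 1/2 under the null even with ties.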
I can also imagine calculating the quantile after removing all the proportions in Y that match X exactly. This seems intuitive to me -- e.g. imagine we only have 3 observations, (1,2,3), and our distribution of permuted accuracies contains 50 '1/3's, 100 '2/3's, and 50 '3/3's. Using the real data, we observe a 2/3 -- seems nicer for it to fa
I was reading materials related to convolutions. Most references only cover convolutions of random variables of the same type, so I am asking whether it is actually possible to add random variables of different types.
hey, I have a question, but it's not in English, so I'll try my best to translate it.
Given g: R -> R, define Z = g(X)Y.
Also given: the probability functions P(X=x) and P(Y=y|X=x) are known.
Write an expression for P(Z=z|X=x) in terms of P(X=x) and P(Y=y|X=x).
So I'm assuming I will need to use the definition of conditional probability (or maybe Bayes' theorem),
but before that I will need to express z in terms of x and y, and I'm not sure how to do it.
Maybe I'm not even going in the right direction.
Would love some help.
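One way to sanity-check a candidate answer: conditioned on X = x, g(x) is just a constant, so Z = g(x)Y, and P(Z=z|X=x) should match P(Y = z/g(x) | X=x) whenever g(x) != 0. A toy enumeration (the function g and all probabilities below are invented purely for illustration):

```python
from fractions import Fraction

# Toy discrete example: X in {1, 2}, Y | X=x over {1, 2, 3}, g(x) = x + 1.
g = lambda x: x + 1
pY_given_X = {1: {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)},
              2: {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}}

def pZ_given_X(z, x):
    # Given X = x, Z = g(x) * Y, so Z = z exactly when g(x) * y == z.
    total = Fraction(0)
    for y, p in pY_given_X[x].items():
        if g(x) * y == z:
            total += p
    return total

# Conditioned on X = 1, g(1) = 2, so Z can only take the values 2, 4, 6.
print(pZ_given_X(4, 1))  # equals P(Y = 2 | X = 1)
```

Enumerating like this makes it easy to see which values z can take at all, which is usually the step the formal expression has to capture.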
Should it be p_X(x) or P_X(x)? Are they different?
X is a random variable and x is its numerical value.
Here is one with lowercase p in the MIT 6.14 class
Here is another source using uppercase P
If they are the same, can I also change the y-axis in my PMF plot to lowercase p? Thanks
How does it even make sense to define a random variable Z as the sum of random variables X and Y, where X is, for example, Poisson and Y exponential?
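It does make sense: a draw of Z is just a Poisson draw plus an independent exponential draw, and the resulting Z is a continuous random variable (the exponential part smooths out the Poisson atoms). A simulation sketch, with rate parameters chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 3.0, 1.0  # arbitrary Poisson and exponential rates

# A draw of Z is a Poisson draw plus an independent exponential draw.
z = rng.poisson(lam, size=100_000) + rng.exponential(1 / mu, size=100_000)

# Linearity of expectation still holds: E[Z] = lam + 1/mu = 4.
print(z.mean())
```

The sample mean lands close to 4, matching E[X] + E[Y]; nothing about the mixed discrete/continuous types breaks the usual rules for sums of independent variables.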
I am not asking or implying that one should do this; I am just wondering, since integrals are similar to summations: if I were to model a stochastic variable as discrete when it is arguably continuous, would the predictions/formulations end up being way off?
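Often not, if the grid is fine enough. As a hedged sketch (the exponential/geometric pairing and the step size below are my choice, not from the question): discretizing an exponential waiting time to a grid of width delta gives a geometric distribution, and the means differ by roughly delta/2:

```python
import math

# Discretize an exponential(rate=1) waiting time to a grid of width delta:
# the number of steps until the event is geometric with p = 1 - exp(-delta).
delta = 0.01
p = 1 - math.exp(-delta)

mean_continuous = 1.0       # mean of exponential(1)
mean_discrete = delta / p   # mean of (geometric step count) * delta

# The discretization bias is approximately delta / 2.
print(mean_discrete - mean_continuous)
```

So the error is controlled by the grid resolution, and shrinks to zero as delta does; with a coarse grid, though, the bias can certainly matter.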
I found that p = 1/(3q) makes the variables uncorrelated, but I do not know how to show independence, since I do not know what P(X=i, Y=j) is for i, j not equal to 1.
I've been studying for 5 weeks with another 8-10 weeks of studying left.
I also struggle with anything involving mixed distributions.
I can do continuous random variables no problem since it's integrating.
In discrete / conditional probability, it seems like you have to set up the problem properly or else you have no idea what you're trying to solve. A lot of the time I am just doing trial and error, trying to figure out how to set up the problem. Especially when the problem is a long paragraph full of information, I just get lost.
I took 15 practice questions ranging easy to difficult, and covering chapter 1 to 10 and I did not look at my notes, I did not look at the solutions. I was asked to aim for 50%.
I scored 80% and it took me 6 hours to do 15 problems just trial and error on the problems.
The ones I got wrong were cases where the answer I got appeared among the multiple-choice options but was still incorrect.
Then I did another 15 questions and scored below 50%; this block had way more of what I'm struggling with than the first block of 15 questions.
I'M BACK FRIENDS! Me again, back with updates on my latest video on DISCRETE RANDOM VARIABLE
Discrete Random Variable is honestly the easiest Statistics chapter y'all will ever need to study, so take this video as more of a refresher for those who may be a lil' unsure on how to find your E(X) or Var(X). After this I'll cover Binomial Distribution and Normal Distribution, which are the more crucial ones, so stay tuned friends! Also let me know if there are other topics you guys need desperate help in and I'll craft up something for you :)
Hope y'all are doing good in school, especially with it resuming soon and many on alt days already. Remember to keep practicing, do your own notes, and consult your teachers!! A's are nearing :D
The random variable X has the distribution B(6, p). Given that P(X=5) = P(X=3) + P(X=4), find the value of p. (HINT: use your graphing calculator)
This is one of my hw questions that none of my friends can figure out! I'd really appreciate it if you explained it for us :)
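In case a graphing calculator isn't handy, the condition above reduces to a quadratic in p. Writing out the binomial PMFs with q = 1 - p and dividing through by p^3 q (valid for 0 < p < 1), a sketch of the algebra and a numeric check:

```python
from math import comb, sqrt

# P(X=5) = P(X=3) + P(X=4) for X ~ B(6, p), with q = 1 - p:
#   6 p^5 q = 20 p^3 q^3 + 15 p^4 q^2
# Dividing by p^3 q gives 6 p^2 = 20 q^2 + 15 p q, which, after
# substituting q = 1 - p and simplifying, is p^2 + 25 p - 20 = 0.
p = (-25 + sqrt(625 + 80)) / 2   # positive root of the quadratic
q = 1 - p

# Verify the original condition on the binomial PMFs.
lhs = comb(6, 5) * p**5 * q
rhs = comb(6, 3) * p**3 * q**3 + comb(6, 4) * p**4 * q**2
print(round(p, 4), abs(lhs - rhs) < 1e-12)
```

The other root of the quadratic is negative, so it can't be a probability; only the root near 0.776 is valid.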
How does it even make sense to sum different distributions, one continuous and one discrete? For example, how does it make sense to define a random variable Z which is the sum of rvs X and Y, where X is Poisson and Y exponential?