A list of puns related to "Andrew Gelman"
It looks like academia has actually noticed the speedrunning drama. Here's a link to the Columbia University professor's tweet with his blog post. He has a Wikipedia page and everything, so it seems legit. He's also a Harvard PhD lol

He says that:

>I asked a local expert, who characterized the above-linked paper as “trivial but impressive.” The local expert was not so impressed by the rebuttal offered by the player accused of cheating.

Judging by the comments, other statisticians don't seem to think the response was well done either.
I think a lot of people in this sub have been very sucked into the whole Jaynes school/dogma, and it's maybe even considered settled in some rationalist circles that Bayesianism is clearly the ultimate right way to do things. So I think this is a good read, as it's an eminent statistician - who's a world-leading expert on Bayesian statistics - discussing how good Bayesian inference isn't as “pure” as some rationalists might want it to be.
https://statmodeling.stat.columbia.edu/2020/10/24/reverse-engineering-the-problematic-tail-behavior-of-the-fivethirtyeight-presidential-election-forecast/
https://preview.redd.it/42ysmmyoqo761.png?width=1247&format=png&auto=webp&s=ccb6228805b37677d7d9017fa048ad1524b31dd7
I love the fact that the site uses .edu so it's much more trustworthy than a YouTube or reddit discussion.
>Joe Nadeau writes:
>
>I've followed the issues about p-values, signif. testing et al. both on blogs and in the literature. I appreciate the points raised, and the pointers to alternative approaches. All very interesting, provocative.
>
>My question is whether you and your colleagues can point to real world examples of these alternative approaches. It's somewhat easy to point to mistakes in the literature. It's harder, and more instructive, to learn from good analyses of empirical studies.
>
>My reply:
>
>I have lots of examples of alternative approaches; see the applied papers here.
>
>And here are two particular examples:
>
>The Millennium Villages Project: a retrospective, observational, endline evaluation
>
>Analysis of Local Decisions Using Hierarchical Modeling, Applied to Home Radon Measurement and Remediation
ISBN: 9781107676510
https://andrewgelman.com/2018/09/30/someone-says-quote-exact-quote-misquotes/
Andrew Gelman comments on Sam's debate with Ezra Klein:
>Harris says, “The quote is, this is the exact quote,” and follows up with something that's not the exact quote.
>
>...
>
>I mean, really, what's the point of that? How do you deal with people who do this sort of thing? I guess it's related to the idea we talked about the other day, the distinction between truth and evidence. Presumably, Harris feels that he's been maligned, and he's not so concerned about the details. So when he says “this is the exact quote,” what he means is: This is the essential truth.
The quote in question:
>The quote is, this is the exact quote: “Sam Harris appeared to be ignorant of facts that were well known to everyone in the field of intelligence studies.” Now that's since been quietly removed from the article, but it was there and it's archived.
And as far as I can tell, Gelman (and Klein) are right. But that just sounds bizarre to me. Why would Sam say "this is the exact quote", and then say something random that sounded like what he thought the article said? I would never describe Sam as someone "not so concerned about the details", or as more interested in "essential truth" than "literal truth".
https://statmodeling.stat.columbia.edu/2020/04/07/the-generalizability-crisis-in-the-human-sciences/
Interesting to see a statistician basically back up Yarkoni's take on things and mention a few other statistical challenges that make it even harder to do good science.
One thing I wondered while listening to the episode: if we are to model the space of experimental configurations, and the fact that we have chosen only one (or maybe a few) of the many possible configurations for our actual experiment, how can we set priors on the effect of this? If I a priori think the configuration space makes things very noisy, this will mask small effect sizes and I won't get many (or any) significant results. On the other hand, I can reduce the prior variance until I do start getting significant effects. I don't see how anyone can argue meaningfully for a specific prior probability distribution on this space. Or am I missing something?

I like the concept - it's a really clear way of using a statistical model to highlight a difficult problem. But it seems hard to operationalise into an actual workflow for doing science.
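The sensitivity described above can be sketched in a few lines. This is a minimal toy simulation (all numbers and the conjugate normal-normal setup are my own hypothetical choices, not anything from Yarkoni's paper or the episode): a small true effect is observed through one configuration drawn from a noisy configuration space, and the *assumed* configuration-space sd is treated as a modeling choice that inflates the sampling variance. Cranking that assumed sd up or down changes whether the effect looks "significant".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true effect, observed through a single
# experimental configuration drawn from a noisy configuration space.
true_effect = 0.2
config_sd_true = 0.5          # actual sd of configuration-level shifts
n = 100

config_shift = rng.normal(0, config_sd_true)   # the one config we happened to run
y = rng.normal(true_effect + config_shift, 1.0, size=n)
ybar, se = y.mean(), y.std(ddof=1) / np.sqrt(n)

def posterior(assumed_config_sd, prior_sd=1.0):
    """Conjugate normal-normal update for the effect, where the assumed
    configuration-space sd (a modeling choice) adds to the sampling variance."""
    var_obs = se**2 + assumed_config_sd**2
    post_var = 1 / (1 / prior_sd**2 + 1 / var_obs)
    post_mean = post_var * (ybar / var_obs)
    return post_mean, np.sqrt(post_var)

for assumed in [0.0, 0.5, 2.0]:
    m, s = posterior(assumed)
    print(f"assumed config sd={assumed:.1f}: "
          f"posterior {m:.2f} +/- {s:.2f}, 'significant' = {abs(m) > 2 * s}")
```

The point is only that the conclusion tracks the assumed configuration-space variance, which is exactly the quantity the comment argues nobody can pin down: a skeptical (large) assumed sd washes the effect out, while an optimistic (near-zero) one restores significance.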
https://www.datacamp.com/community/blog/election-forecasting-polling
No idea why I can't just post a link, but I found the interview very interesting.
https://twitter.com/StatModeling/status/1342115215056527362
>I asked a local expert, who characterized the above-linked paper [the MST report] as “trivial but impressive.” The local expert was not so impressed by the rebuttal offered by the player accused of cheating.