Two Georgia poll workers sue One America News, Giuliani over debunked fraud claims reuters.com/world/us/two-…
πŸ‘︎ 9k
πŸ’¬︎
πŸ‘€︎ u/deathcultexmember
πŸ“…︎ Dec 24 2021
🚨︎ report
My daughter wants to eat a woman who shares her birthday

The only way this is going to make sense is if I start at the beginning: August 21, 1982.

A baby girl was born shortly after midnight. I wasn’t the mother’s doctor, but I was the attending on the same labor and delivery floor. Even though the newborn’s Apgar was good, she was clearly in great distress. The on-call pediatrician raced the child to the NICU. Twenty minutes later, I was called to consult.

β€œYou want me to check on the mother?” I’m an obstetrician. I care for pregnant women and deliver their babies. Once they’re born, the infants become pediatric patients. Why was I being called into the neonatal unit?

β€œNo, Dr. Kaizen. It’s the child. Please come to the NICU.” I heard panic creeping into my colleague’s voice.

The baby lay in a NICU incubator, screaming. The nursing staff stood at a distance. None of them were looking at the child. They stared at the floor, or the far wall, or at me. These were experienced neonatal ICU nurses. They had dealt with every horrible condition that could possibly result from birth. But whatever was in the incubator had rattled them.

β€œHow is this an obstetrics case?”

The pediatrician gestured to the incubator. β€œPlease examine the patient, Dr. Kaizen, and tell me what you think.”

The baby girl looked like a healthy birthweight baby – eight pounds or so. But her abdomen was terribly distended. She certainly had a good reason for screaming.

I gently palpated the girl’s bulging belly, expecting to feel signs of fluid or gas. I didn’t. Instead, I felt an enlarged uterus. The fundus was near the infant’s sternum. I gently squeezed the sides of the child’s belly, feeling with my fingertips a miniature version of what I feel with my whole hands in adult patients. I placed my palm on her tiny belly. There was an almost imperceptible flutter, then something gently pushed against my hand.

I turned to the NICU staff. Their eyes were locked on me, hands holding their mouths or touching their foreheads.

I said, β€œThis infant is pregnant. And she is in labor.”

I did my best to remain calm, but I heard my voice crack as I spoke. Something was inside this newborn. Something had grown inside her as she developed in the womb, and it wanted to get out. I have as much experience as the NICU nurses with the terrible effects of abnormal pregnancies. No matter what condition my patients and their fetuses had suffered from, I had never felt what I felt at that moment: fear. Fear of what was inside of this baby.

I delivered the

... keep reading on reddit ➑

πŸ‘︎ 1k
πŸ’¬︎
πŸ‘€︎ u/sarcasonomicon
πŸ“…︎ Jan 15 2022
🚨︎ report
TOZEX Reserve offers several advantages in combination with a decentralized stochastic order book. The TOZEX Reserve will be managed via smart contracts: automated trigger-point rules with a distributed governance mechanism to avoid manipulation as far as possible.

#Tozex #TOZToken #Blockchain #Crypto #Project https://tozex.io/

u/R2TMC_RMT

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/R2TMC_RMT
πŸ“…︎ Aug 20 2020
🚨︎ report
Stochastic gradient descent: from noisy gradients in millions of dimensions for neural network training - how to go to 2nd order methods?

Stochastic gradient descent (SGD) is a basic tool of machine learning - I thought I would try to discuss it with the mathematicians here - a good overview of 1st-order methods, and also an overview of 2nd-order methods.

So a neural network can be seen as an extremely general parametrization, with the number of parameters often in the millions. It is trained by minimizing a function defined as the sum, over the entire dataset, of some evaluation score for each object. This is usually done by gradient descent, usually in a stochastic way: calculating gradients from subsets ("minibatches") of the dataset to better exploit the local situation.

Besides the problems of standard gradient descent, like getting stuck on a plateau, this happens in huge dimension and with noisy gradients - we need to extract their statistical trends.

A basic approach is an (exponential moving) average of such noisy gradients (the momentum method), plus some heuristic modifications - such 1st-order methods currently dominate neural network training.
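
For concreteness, here is a toy sketch (my own, not from the post; all constants are made up) of plain SGD next to the exponential-moving-average momentum variant, on a noisy 1-D quadratic:

```python
import random

random.seed(0)

def noisy_grad(x):
    # gradient of the toy loss 0.5 * x^2, plus minibatch-style noise
    return x + random.gauss(0.0, 1.0)

lr, beta = 0.1, 0.9        # made-up learning rate and averaging constant
x_sgd = x_mom = 5.0
m = 0.0
for _ in range(500):
    x_sgd -= lr * noisy_grad(x_sgd)                 # plain SGD: step along the raw noisy gradient
    m = beta * m + (1 - beta) * noisy_grad(x_mom)   # exponential moving average of gradients
    x_mom -= lr * m                                 # momentum: step along the average
```

Both iterates walk toward the minimum at 0; the moving average damps the per-step noise, which is the statistical-trend extraction described above.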

There is now an ongoing fight to get successful 2nd-order methods for a smarter choice of step size while simultaneously modeling multiple directions, e.g. the recent K-FAC claims convergence multiple times faster. However, it is quite difficult: the full Hessian is huge (like 20TB for 3M parameters), it is very tough to estimate from noisy data, we need to invert the Hessian, and the naive Newton method is attracted to saddles - there is a belief that there are exp(dim) of them ...

Any ideas for practical realization of 2nd order methods? Some basic questions:

  • As full Hessian is too large, we rather need to choose a subspace where we use 2nd order model (we can simultaneously do gradient descent in the remaining) - how to choose this subspace and update this choice?

  • How to effectively estimate Hessian there?

πŸ‘︎ 59
πŸ’¬︎
πŸ‘€︎ u/jarekduda
πŸ“…︎ Apr 18 2019
🚨︎ report
TOZEX Reserve offers several advantages in combination with a decentralized stochastic order book. The TOZEX Reserve will be managed via smart contracts: automated trigger-point rules with a distributed governance mechanism to avoid manipulation as far as possible. /r/tozexofficial/comments…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Chaki364
πŸ“…︎ Aug 21 2020
🚨︎ report
Why NBA star players in foul trouble should not be benched: if fouls are time invariant, then resting them is first-order stochastically dominated by playing them (r/statistics crosspost) mindyourdecisions.com/blo…
πŸ‘︎ 93
πŸ’¬︎
πŸ‘€︎ u/HelloMcFly
πŸ“…︎ May 08 2014
🚨︎ report
Compatibilism is not Absurd

Introduction

Greetings!

I have noticed that whenever free will comes up, most people here will either deny it completely (Hard Determinism) or accept it but deny determinism (Libertarianism). This usually falls along the atheist/theist divide, with atheists being Hard Determinists and theists being Libertarians. The "middle" position, Compatibilism, is unpopular. Many will even declare it absurd or incomprehensible, which I think is a bit unfair. I think this comes from a lack of understanding of what exactly the position encompasses, and what it does and does not assert. My hope in this post is to at the very least convince people that compatibilism isn't absurd, even if I can't convince them to adopt it.

Definitions

By determinism, we mean the claim that 1) the universe follows unchanging, deterministic laws, and 2) all future states of the universe are completely determined by the initial state together with these laws. Both Hard Determinists and Compatibilists accept determinism, which is backed by all our current scientific theories. Where they differ is in their acceptance of free will.

NB. As a quick qualification, determinism is actually a bit of a misnomer. It might be that our universe also has stochastic processes, if certain interpretations of quantum mechanics turn out to be correct. However, I think we can agree that random quantum fluctuations or wave function collapse do not grant us free will. They are stochastic noise. So in the remainder of this discussion I will ignore these small effects and treat the universe as fully deterministic.

Now, there are actually two common definitions of free will:

  1. Free will is the ability to act according to one's wants, unencumbered, and absent external control. I will call this version free-act
  2. Free will is the ability to, at a certain moment in time, have multiple alternative possible futures available from which we can choose. It is the "freedom to do otherwise". I'll call this free-choice

The former is obviously a weaker thesis than the latter. I will argue for them both in turn, with focus on the second.

Argument for Free-act

Free-act is not incompatible with determinism. It may well be that our wants are predetermined. But we still have the ability to carry out those wants. For example, if I am thirsty, I have the ability to get a glass of water. If I am tired, I can sleep. If I want to be kind or be mean, I can do that too. In some sense, we can only do what we w

... keep reading on reddit ➑

πŸ‘︎ 69
πŸ’¬︎
πŸ‘€︎ u/arbitrarycivilian
πŸ“…︎ Jan 04 2022
🚨︎ report
Solving a second order, stochastic differential equation

I posted this question on the math Stack Exchange, but it got no love.

https://math.stackexchange.com/questions/3131898/solving-second-order-nonhomogeneous-ode-where-the-rhs-is-a-random-process

(Feel free to skip to the last equation for what I'm trying to solve)

So I have a second order ODE with constant coefficients whose right hand side is the sum of white noise and its derivative. After some research, it seems there's a field dedicated to such things. I've done some Googling to find lecture notes on Stochastic Differential Equations, but even the introductory notes have been somewhat difficult for me to understand. Part of it is not knowing the notation, but also the "textbook" examples are all based on white noise being the derivative of Brownian motion. I need the derivative of white noise, and I've not yet found anything with higher order derivatives of Brownian motion.

Any guidance or explanation (or an answer) would be greatly appreciated!

I do have these basic questions that will hopefully get me along the right path

  1. Is it still valid to solve the homogeneous equation and get the complementary solution first, which is easy because it's not stochastic, then solve the stochastic one separately and add them together?
  2. The right hand side is actually a white noise process plus its own derivative. From my understanding, this should also be a white noise process. How can I simplify this into a single process, while keeping track of the statistics (i.e. mean and std dev)?
  3. Now suppose the above is correct, and I now have a generic second order constant coefficient SDE, what is the proper technique/tool to solve it?
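
For intuition on question 3, here is a minimal simulation sketch (my own, with made-up coefficients, and ignoring the ΞΎ'(t) term on the right hand side, which needs more careful treatment): rewrite x'' + a x' + b x = Οƒ ΞΎ(t) as a first-order system and apply the Euler-Maruyama scheme:

```python
import math
import random

random.seed(0)

# made-up coefficients for x'' + a x' + b x = sigma * xi(t)
a, b, sigma = 1.0, 1.0, 0.5
dt, n = 0.01, 5000

x, v = 1.0, 0.0          # state: position and velocity (v = x')
xs = [x]
for _ in range(n):
    dW = math.sqrt(dt) * random.gauss(0.0, 1.0)   # Brownian increment
    # Euler-Maruyama on dx = v dt, dv = (-a v - b x) dt + sigma dW
    x, v = x + v * dt, v + (-a * v - b * x) * dt + sigma * dW
    xs.append(x)
```

Note the white noise enters only the velocity equation, as a Brownian increment sigma*dW; the ΞΎ'(t) part would need either state augmentation or a distributional interpretation, which is exactly the gap the question is about.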
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/kayson
πŸ“…︎ Mar 04 2019
🚨︎ report
MΓ©gra is a mini-language to make music with variable-order markov chains and some other stochastic shenanigans github.com/the-drunk-code…
πŸ‘︎ 13
πŸ’¬︎
πŸ‘€︎ u/lispm
πŸ“…︎ Jul 20 2019
🚨︎ report
Weekly Random Discussion Thread

Post about anything and everything related to investing. The place in /r/PHinvest for any questions, rants, advice, or commentary.

Posts that are not discussion-provoking enough for the main page will be pointed toward this weekly thread to help keep the quality of the main page posts as high as possible.

That said, keep it respectful, and enjoy!

πŸ‘︎ 5
πŸ’¬︎
πŸ“…︎ Dec 06 2021
🚨︎ report
Higher-order corrections to Stochastic Gradient Descent in the continuous-time limit

Hi folks,

I posted this to r/MachineLearning but thought that I might also get some interest here.

I've seen a lot of theoretical studies of Stochastic Gradient Descent (SGD) that consider it in the limit as the step size goes to zero, which turns it into a stochastic differential equation of the form dx/dt = alpha*(-grad(loss)(x) + noise). It's a lot easier to compute a lot of useful quantities using this form like e.g. stationary distributions.

However, one of the things that gets lost in this formalism is the intrinsic scales of the problem. In the continuous limit, rescaling the time variable t (or more generally, performing an arbitrary coordinate transformation) leaves the trajectory invariant, because the differential equation is formulated in a covariant fashion. This gives misleading results if you want to analyze something like convergence rates. In this continuous formulation, you can just rescale your time parameter to 'speed up' your dynamics (which is equivalent to increasing alpha), whereas you obviously can't do this in the discrete formulation, because if you rescale alpha arbitrarily you overshoot and you get bad convergence.

The first thing that came to mind when I started thinking about this was that you could amend your differential equation to include higher-order correction terms. Specifically, if we have a differential equation of the form x'(t) = f(x), we can Taylor expand to get x(t + delta) β‰ˆ x(t) + f(x)*delta + 0.5*Df(x)*f(x)*delta^2 + O(delta^3). This tells us that the difference between the continuous trajectory solution x(t + delta) and the discrete trajectory x(t) + f(x)*delta after a time delta will be roughly 0.5*Df(x)*f(x)*delta^2. In order to get a more accurate model for the discrete-time process x(t+delta) = x(t) + f(x)*delta, we can introduce a correction term into our differential equation: x'(t) = f(x) - 0.5*Df(x)*f(x)*delta. When f is -alpha*grad(loss), this becomes x'(t) = -alpha*grad(loss)(x) + 0.5*alpha*Hessian(loss)(x)*grad(loss)(x)*delta. This correction term breaks covariance: when t is rescaled, both alpha and delta are necessarily rescaled, so the correction term transforms differently than the gradient term. It seems to me like this is a natural way to model the breakdown of covariance in the discrete dynamical system in the continuous setting and to study why certain timescales/learning rates are preferred.
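
As a sanity check of the correction term on a toy 1-D quadratic loss (my own example: loss = xΒ²/2, so f(x) = -alpha*x, Df(x) = -alpha, and the corrected drift is -(alpha + 0.5*alphaΒ²*delta)*x):

```python
import math

alpha, delta, x0, steps = 0.1, 1.0, 1.0, 10   # made-up toy values

discrete = x0 * (1 - alpha) ** steps                 # exact discrete GD: x_{k+1} = (1 - alpha) x_k
plain = x0 * math.exp(-alpha * steps * delta)        # naive continuous limit x' = -alpha x
corrected = x0 * math.exp(-(alpha + 0.5 * alpha ** 2 * delta) * steps * delta)  # with correction term
```

The corrected exponent reproduces the first two terms of log(1 - alpha) = -alpha - alphaΒ²/2 - ..., the exact per-step log-contraction of the discrete iteration, so the corrected flow tracks the discrete iterates with O(alphaΒ³) residual per step while the naive limit is off at O(alphaΒ²).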

tl;dr: Does anyone know if this version of the continuous-time limit has

... keep reading on reddit ➑

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/glockenspielcello
πŸ“…︎ May 12 2019
🚨︎ report
Pump coming on 12/31/2021 (8am UTC/ 2am CST) with Quarterly Options contracts expiring. The whales are moving in USDT to market as we speak

You can time the price of the market by the strike price of contracts on options (not futures)

The largest position's strike price will often dictate how the whales will wash/manipulate the market to collect their options contracts (yes, this is illegal).

https://www.investopedia.com/terms/w/washtrading.asp

Strike price definition:

https://www.investopedia.com/terms/s/strikeprice.asp

Options definitions:

https://www.investopedia.com/options-basics-tutorial-4583012#:~:text=Call%20and%20Put%20Options,-Options%20are%20a&text=A%20call%20option%20gives%20the,right%20to%20sell%20a%20stock.

You can watch the movement of assets on whale alerts

https://twitter.com/whale_alert

when an asset gets moved to exchanges in large quantity it will dump.

This often correlates with the Stochastic RSI on the daily candle

and glass node

https://glassnode.com/

Here is the calendar

https://www.marketwatch.com/optionscenter/calendar

3 days before the expiry whales will pump or dump assets and hold them in place via algorithmic trading software.

Once the price is massaged into place the algorithmic bots will buy and sell to hold the price at a certain point

****Ethereum was pushed to 3600 and will now be kept around 3750 until expiry time****

Like magic, the boot comes off the neck of retail investors at 8am UTC/2am CST and the price magically moves.

Most crypto exchanges help whales do this with their OTC desk.

Some more dubious exchanges black out their servers to take assets from retail traders while still processing orders on the back end. This is to help with a liquidity crisis. *****Look at my post history if you want to know who and how......******

Be aware BTC may dump again, as BTC is being moved to exchanges en masse

*****Action items/needs*****

I am looking for a good place to get strike price data.

πŸ‘︎ 45
πŸ’¬︎
πŸ‘€︎ u/HammondXX
πŸ“…︎ Dec 30 2021
🚨︎ report
A Mathematician's Comment on state of TC calculations

Hi everyone

Sorry for the clickbait title, it was a stupid choice and I regret it deeply. These points of mine only apply to "edge-case" (i.e. borderline) theory crafting (TC) results, where averages lose a bit of meaning. I should have been more clear in the title. I personally think the TC community is doing an amazing job. When coming up with the title I was thinking way too literally in my head and not thinking of optics. I'm truly sorry if it sounded disrespectful.

I just wanted to share some discussion points that I don't see mentioned often. These caveats are obvious to those who know math (like theory crafters); however, I want to share these points and a small Monte Carlo demonstration on a very simple example so that hopefully we can all learn some relatively "basic" probability theory that is helpful to know in Genshin. Because when you know this, it may help you make wiser resin and money investments. Note that this does not matter if you want to fully min-max a character and don't mind potentially poor returns on investments. This only applies if you're uncertain whether a small DPS increase is worth the investment for your needs. This is your choice. You determine what is optimal.

Summary (from u/derpkoikoi): What OP means is that inherently, no matter what you do, since you have an insignificant number of actual hits you can make, you will not feel incremental gains in dps, so it's not worth spending resin for a 1% gain in damage.

In my own words, my point is simply: when averages are too close together (like a 3% dps change), comparisons are hard due to the random uncertainty of crit (let alone other sources of uncertainty). So you should always consider your own needs and take external numbers carefully (e.g. know the assumptions used). Below, I demonstrate why comparisons are "hard" and averages don't cut it when average results are "too close" together. Since we have uncertainty we don't know about, we have to relax our discussion to ranges of values, and in this case I think there are some points to simply think about. In these cases we can't order things, saying X weapon is better than Y weapon, because in practice most of the time they will do damage that falls in very similar ranges of values. **Note: I am not saying averages are bad**, but religiously relying on them can be misleading for small % dps changes in practice.
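
Since the post's own demo is cut off below, here is a minimal Monte Carlo sketch of the kind of comparison it describes, with made-up numbers (50% crit rate, 100% crit damage, a 3% base-damage gap, 100 hits per session):

```python
import random

random.seed(0)

def session_damage(base, hits=100, crit_rate=0.5, crit_dmg=1.0):
    # total damage over one session: each hit crits (doubling it) with probability crit_rate
    return sum(base * (1 + crit_dmg) if random.random() < crit_rate else base
               for _ in range(hits))

trials = 5000
# option B averages 3% more damage per hit than option A
b_loses = sum(session_damage(103.0) < session_damage(100.0) for _ in range(trials))
overlap = b_loses / trials   # how often the "better" option actually does less total damage
```

With these made-up numbers the nominally better option loses roughly a quarter of the sessions: the two damage distributions overlap heavily, which is exactly the ranges-of-values point above.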

A small mathematical demo showing this point is below. Please do note I am not talking about crit ra

... keep reading on reddit ➑

πŸ‘︎ 1k
πŸ’¬︎
πŸ“…︎ Sep 22 2021
🚨︎ report
SERIOUS: This subreddit needs to understand what a "dad joke" really means.

I don't want to step on anybody's toes here, but the amount of non-dad jokes here in this subreddit really annoys me. First of all, dad jokes CAN be NSFW, it clearly says so in the sub rules. Secondly, it doesn't automatically make it a dad joke if it's from a conversation between you and your child. Most importantly, the jokes that your CHILDREN tell YOU are not dad jokes. The point of a dad joke is that it's so cheesy only a dad who's trying to be funny would make such a joke. That's it. They are stupid plays on words, lame puns and so on. There has to be a clever pun or wordplay for it to be considered a dad joke.

Again, to all the fellow dads, I apologise if I'm sounding too harsh. But I just needed to get it off my chest.

πŸ‘︎ 17k
πŸ’¬︎
πŸ‘€︎ u/anywhereiroa
πŸ“…︎ Jan 15 2022
🚨︎ report
Why NBA star players in foul trouble should not be benched: if fouls are time invariant, then resting them is first-order stochastically dominated by playing them mindyourdecisions.com/blo…
πŸ‘︎ 38
πŸ’¬︎
πŸ‘€︎ u/strategyguru
πŸ“…︎ May 08 2014
🚨︎ report
Theory and Code for Runge Kutta Methods (Code in the video description) youtu.be/t48a2M27kjM
πŸ‘︎ 16
πŸ’¬︎
πŸ“…︎ Dec 17 2021
🚨︎ report
[Q] Is it better to be 2x luckier or have 2x the chances?

Is it better to have 100 chances to win or 50 chances that are twice as lucky? Seems even, but is it, or is there a theory saying it isn't? Like how a coin flip is not exactly 50/50.

*Edit: Also the lottery runs constantly at the same interval and the tickets never expire.
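
Under one natural reading of the question (independent draws, each ticket wins with probability p), this is directly computable, and "twice as lucky" is in fact weakly better than "twice the chances", since (1-2p) ≀ (1-p)Β² for p in [0, 1/2]:

```python
p = 0.01  # made-up per-ticket win probability

many_small = 1 - (1 - p) ** 100      # 100 independent chances at probability p
fewer_lucky = 1 - (1 - 2 * p) ** 50  # 50 independent chances at probability 2p
# fewer_lucky ~ 0.6358 vs many_small ~ 0.6340: doubled luck wins, slightly
```

The gap grows as p gets larger (at 2p = 1 the lucky ticket is a guaranteed win), and vanishes as p β†’ 0.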

πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/steezy280
πŸ“…︎ Nov 22 2021
🚨︎ report
Blind Girl Here. Give Me Your Best Blind Jokes!

Do your worst!

πŸ‘︎ 5k
πŸ’¬︎
πŸ‘€︎ u/Leckzsluthor
πŸ“…︎ Jan 02 2022
🚨︎ report
Norskk

TLDR: Norskkk is a bunch of Thoraboos being Grifted by a French-Canadian claiming to be Norwegian with a bunch of stuff that will radicalize people to dangerously far-right ideologies.

To begin, an obligatory β€œFuck Norskk” (hereafter to be referred to as Norskkk for connotative reasons)

For reference I will be using their term β€œViking” to describe the wider Norse culture as they use it interchangeably and it makes it easiest should anyone wish to search their several sites for the articles I will be discussing.

It is also ESSENTIAL that you understand this is not a matter of theology. I don't think Norskkk is "worshipping wrong", I don't think they are worshipping at all. Nowhere in any of their sites do they mention reciprocity, the gifting cycle, the Tree and Well, or even basic concepts such as Frith and Wyrd. Instead they simply use the imagery, use the verbiage, use the aesthetic, while projecting their own warped worldview.

Norskkk is a male-centric Norse-pagan grifting network run by Christopher Fragassi (sometimes known by the moniker Bjornsen). I first came across them years ago when I was looking more deeply into Norse and Germanic Paganism. I have taken the dive many times to point out specific flaws in their rhetoric, history, ideology, and general nonsense, today I am giving a short (yes, this is the condensed version) exploration of them so that you can understand some of the problems and be aware of them moving forward. This partial deconstruction, like a night of passion, comes in several parts.

Part 1 – The (a)Historicity

While many of us consider ourselves knowledgeable and well-read, and many more seek to become such themselves, Norskkk claims to be a teacher despite being foolishly ignorant. Unfortunately I will not get a chance to be a grammatical nitpicker of their information as there are just too many poorly translated terms and words to be able to speak on anything else, nor will I get a chance to delve into their hideously twisted and warped projections of Norse Myth.

On their various websites (not linked, because I believe their content to be genuinely less than worthless, though you are more than capable of finding them) you will find a variety of articles covering a range of ancient and modern topics including (but not limited to) Valhol, Vaccines, Horned Helmets, hair and beards, the roles of Disabled peoples, Cheese (I’m not kidding), drugs, shamanism, circumcision, ethnopurity, dietary constraints, marriage, Loki,

... keep reading on reddit ➑

πŸ‘︎ 76
πŸ’¬︎
πŸ‘€︎ u/FarHarbard
πŸ“…︎ Dec 29 2021
🚨︎ report
This subreddit is 10 years old now.

I'm surprised it hasn't decade.

πŸ‘︎ 14k
πŸ’¬︎
πŸ‘€︎ u/frexyincdude
πŸ“…︎ Jan 14 2022
🚨︎ report
Recommended order of learning "stochastic" topics

I'm an actuary and one of the exams I'm studying for involves stochastic calculus and differential equations (if you're not familiar with actuarial exams, they're 3-4 hour long multiple choice exams on finance, probability, models, etc.). The exam doesn't require a deep understanding or even previous knowledge of stochastic calculus and differential equations; the study manual basically just states the formula and how to use it. However, I want to understand where it comes from and why it works. I have found open source texts on stochastic processes, Markov processes, stochastic analysis, and stochastic differential equations, but I don't really know what order to work through them in. Should I know ODEs and PDEs before SDEs? Markov processes before stochastic calculus? Would working through a real analysis text help? Those kinds of questions.

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/NowAnActuary
πŸ“…︎ Jun 03 2014
🚨︎ report
Dropped my best ever dad joke & no one was around to hear it

For context I'm a Refuse Driver (Garbage man) & today I was on food waste. After I'd tipped I was checking the wagon for any defects when I spotted a lone pea balanced on the lifts.

I said "hey look, an escaPEA"

No one near me but it didn't half make me laugh for a good hour or so!

Edit: I can't believe how much this has blown up. Thank you everyone I've had a blast reading through the replies πŸ˜‚

πŸ‘︎ 19k
πŸ’¬︎
πŸ‘€︎ u/Vegetable-Acadia
πŸ“…︎ Jan 11 2022
🚨︎ report
What starts with a W and ends with a T

It really does, I swear!

πŸ‘︎ 6k
πŸ’¬︎
πŸ‘€︎ u/PsychedeIic_Sheep
πŸ“…︎ Jan 13 2022
🚨︎ report
Second Order Stochastic Optimization in Linear Time arxiv.org/abs/1602.03943
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/thvasilo
πŸ“…︎ Feb 15 2016
🚨︎ report
[D] Second-Order (e.g. Newton) Methods for Sparse Networks in Stochastic Environment

Let's say you have a fairly sparse neural network: not more than 50,000 connections, and no more than ~100 connections going into each node.

This is to learn a game (not image processing) so we don't need millions of connections (much less a convolutional layer). Not only that, but such high dimensionality would likely lead to overfitting, when the underlying function could probably be represented (in theory, anyway) with a couple hundred parameters. The challenge is the stochastic environment. Training a heuristic presents noise (especially early in the game) and, of course, good moves can lead to bad outcomes.

(To get into the details, I'll be using standard optimization techniques to train a heuristic, and performance in the game itself for validation, to inform a genetic/evolutionary approach to feature selection and dimensionality reduction. We'll see if that approach works. One challenge is that I can't manually fiddle with learning rates for each network I create.)

You probably don't want to do a full second-order method on 10,000+ weights. (Inverting a 10k-by-10k matrix is expensive, and possibly not worth it in a stochastic environment.) But gradient descent is slow and requires a lot of fiddling with parameters (learning rate, momentum, etc.) and effectively never converges in a stochastic environment. So, I'd like to find something better than SGD if possible.

What about this approach, though: use backpropagation (first-order) to send error signals through the network. With those error signals, use Newton's method at each node (< 100 inputs) to update the weights. If these updates move too fast, then, instead of a learning rate, set a desired norm (say, normalize to |Ξ”w|_2 = 0.1).
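
A minimal sketch of the per-node idea (my own toy, not from the post: the node's local problem is treated as least squares on made-up data; in a real network X would be the node's < 100 inputs and y the backpropagated targets): a damped Newton step using only the node's small local Hessian, with the update norm capped instead of a learning rate:

```python
import random

random.seed(1)

# toy node: 8 inputs, 200 observations (a real node in the post has < 100 inputs)
n_in, n_obs = 8, 200
X = [[random.gauss(0.0, 1.0) for _ in range(n_in)] for _ in range(n_obs)]
w_true = [random.gauss(0.0, 1.0) for _ in range(n_in)]
y = [sum(a * w for a, w in zip(row, w_true)) + 0.05 * random.gauss(0.0, 1.0) for row in X]

def solve(A, b):
    # plain Gaussian elimination with partial pivoting; cheap for < 100 unknowns
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    out = [0.0] * n
    for c in range(n - 1, -1, -1):
        out[c] = (M[c][n] - sum(M[c][k] * out[k] for k in range(c + 1, n))) / M[c][c]
    return out

w = [0.0] * n_in
max_step = 0.5   # cap on |delta w|_2, replacing a learning rate
for _ in range(50):
    r = [sum(a * wi for a, wi in zip(row, w)) - yi for row, yi in zip(X, y)]  # local errors
    g = [sum(X[j][i] * r[j] for j in range(n_obs)) / n_obs for i in range(n_in)]
    H = [[sum(X[k][i] * X[k][j] for k in range(n_obs)) / n_obs + (1e-3 if i == j else 0.0)
          for j in range(n_in)] for i in range(n_in)]  # damped local Hessian, n_in x n_in
    step = solve(H, g)                                 # Newton direction at this node
    norm = sum(s * s for s in step) ** 0.5
    if norm > max_step:
        step = [s * max_step / norm for s in step]     # normalize instead of tuning a rate
    w = [wi - si for wi, si in zip(w, step)]

err = sum((a - b) ** 2 for a, b in zip(w, w_true)) ** 0.5
```

The damping term (1e-3 on the diagonal) keeps the local Hessian invertible and biases the step away from saddle-chasing; the norm cap plays the trust-region role that a learning rate would otherwise.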

Does this sound like a reasonable approach? Has anyone tried something like this? What were the results?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/michaelochurch
πŸ“…︎ Jan 23 2018
🚨︎ report
Question on Stochastic Processes - Higher Order Markov Chains Continuous Analogue?

I'm a PhD student interested in stochastic processes, and am curious if anyone can direct me towards any literature on processes like higher order Markov chains, but in continuous time and with a continuous state space?

With an order n Markov chain, the process depends on its previous n states, rather than a Markov chain which depends only on the present state. So could we have something like a process (X_t)_{tβ‰₯0} where the behaviour at any time s depends, let's say, on the behaviour between time sβˆ’1 and time s?

So it's not quite a Markov process, which has "no memory", as the process has some memory.

I'd be keen to read any material on processes with limited memory - not just restricted to my example of the memory being a unit time interval. So if there are some processes where the memory is an interval which might get bigger or smaller, that would be cool too! :)
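
One concrete family with exactly this kind of window memory (my suggestion, not from a specific reference the post cites) is stochastic delay differential equations, e.g. dX_t = -X_{t-1} dt + dW_t, where the drift reads the state one time unit in the past. A minimal Euler sketch:

```python
import math
import random

random.seed(0)

dt = 0.01
lag = int(1.0 / dt)        # drift reads the state one time unit in the past
n = 5000
path = [0.0] * (n + 1)
for k in range(n):
    delayed = path[k - lag] if k >= lag else 0.0   # X_{t-1}; flat zero history before t = 0
    path[k + 1] = path[k] - delayed * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
```

Note the solution at time s depends on the whole segment of the path over [sβˆ’1, s] (via the history needed to integrate forward), so it is not Markov in the state X_s alone, but it is Markov on the space of path segments, which is how the literature usually recovers Markov tools for these processes.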

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/tulor
πŸ“…︎ Feb 20 2017
🚨︎ report
What is a bisexual person doing when they’re not dating anybody?

They’re on standbi

πŸ‘︎ 11k
πŸ’¬︎
πŸ‘€︎ u/Toby-the-Cactus
πŸ“…︎ Jan 12 2022
🚨︎ report
Too many shares to stuff in my cellar... and Forward Looking TA for 9/13/21 - 9/17/21

Good Evening Apes!

Another weekend...another shitstorm of drama and wild speculation.

I've been a bit busy this weekend getting ready to move, and I want to let you all know your @'s have not gone unseen.

I don't feel that I can write about anything productive this week without first addressing my opinions on these two topics so we will start here and move into the analysis section.

As always I will post a consolidated Video DD of this on my YouTube for those of you that don't have the time to read through this, or have visual impairments/reading comprehension issues. This will be uploaded by...

9pm EDT/UTC-4

Part I: Too many shares to stuff in my cellar...

So I want to take a quick moment to discuss my opinions on the two big pieces of news to come out of this weekend.

Section 1: Cellar Boxing

I'm not sure when it became news to people that this was a strategy that was possibly being employed on GME but a large aspect of the main MOASS thesis has always been that they had excessively naked shorted GameStop in an attempt to drive them out of business. But I understand that this information isn't always the easiest to obtain, especially for newer apes. So here is why I think it's relevant and why it's not.

Pros:

  • Cellar Boxing is a strategy employed by market makers that manipulates the bid/ask spread of a stock with prices lower than .001
  • This is likely what Melvin began doing in 2014 as evidenced here on long-term OBV

Long-Term OBV showing the original short position

  • This is one reason why we think that there is a massive naked short position on GME. I suspect there are additional factors, but, if GME had been brought down to the levels where this strategy would have been effective I'm sure it would have been employed.
  • This is generally performed on stocks of micro-cap companies as it is not easy to do on companies like GameStop with larger floats and market caps.

Cons:

  • GameStop never dropped to the the price levels that are needed to affect this specific type of manipulation
  • This disregards the far more obvious manipulation occurring on GameStop vis-Γ -vis derivatives
  • The SEC is well aware of this form of manipulation, which doesn't necessarily mean they will do anything about it...
  • This was not the strategy used, at least initially, on Toys-R
... keep reading on reddit ➑

πŸ‘︎ 7k
πŸ’¬︎
πŸ‘€︎ u/gherkinit
πŸ“…︎ Sep 12 2021
🚨︎ report
BBIG - Fibonacci Retracement and Trend Analysis - Where I see the Price Going in the future

Good evening Apes,

I've noticed there has been a lot of confusion as to figuring out exactly where the BOTTOM is coming for BBIG as we have been seeing quite a bit of retracement and Red over the past few DAYS and there seems to be a little bit of panic and uncertainty lined up with setting newer LOWS each and every single Day,

Please note, none of this is Financial Advice. Do your own DD--this is strictly for entertainment purposes only. I hope that everyone who has finished reading this whole post reaches a STATE of ZEN that I have already taken on since a couple of weeks ago.

So let's get right into a Fib Retracement of our beloved stock BBIG:

Fibonacci Retracement points

Notice the Yellow LINES and the Arrows pointing at significant Price Points

$2.16 - The Low

$12.49 - The High

$4.77 - The First Bounce---We broke down below $6.11 while holding right above that price range for about a week, down to a low of $4.77, before launching all the way back up to an intraday HIGH close of $8.48.

$8.48 - Notice how we got rejected at the 38% Retracement line (an indicator that we are still Correcting/Consolidating before being Ready to Retest Highs).

$4.41 - This is us Today---We've currently been making New lows almost every day TRYING to find a Bottom-

What everyone needs to keep in mind is we have NOT YET tested the 78% Retracement Price point, which is $4.37---This is where our NEXT Strong level of Support will be (I expect to see us Bounce off $4.37---and CLOSE above $4.37 bare minimum for tomorrow!). A Close above $4.37 (but having it reach slightly below $4.37) would be a VERY BULLISH indicator that the WAVE down has fully completed and we are ready to take on our NEXT WAVE UP---Which I am also going to present and show as far as where we will go for Price Targets...
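
For anyone wanting to reproduce these levels: a retracement level is just high - ratio * (high - low). With the post's low of $2.16 and high of $12.49 (standard ratios assumed):

```python
low, high = 2.16, 12.49
ratios = [0.236, 0.382, 0.5, 0.618, 0.786, 0.886]
levels = {r: round(high - r * (high - low), 2) for r in ratios}
# e.g. levels[0.786] -> 4.37 and levels[0.886] -> 3.34, matching the $4.37 and $3.34 targets above
```

The 38.2% level comes out to about $8.54, close to the $8.48 rejection noted earlier.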

But before I go there, I want to first point out ONE potential bearish outlook, IF and only IF $4.37 does not hold. If we break and close below $4.37, I FULLY expect us to drop down and test the 88.6% retrace at $3.34, with a potential SUPER short-lived dip down to $3.08, which would fill the gap on the bottom end before we RIP all the way up. I truly believe this would be the FLOOR price of BBIG and the ABSOLUTE lowest you could possibly see it go in a very bearish outlook... thankfully we still have the **0
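For anyone who wants to sanity-check these levels, the retracement arithmetic is just the low-to-high range scaled by the standard Fibonacci ratios and measured down from the high. A minimal sketch in Python (the `fib_levels` helper name is mine, not from any charting library; the $2.16/$12.49 prices come from the chart above):

```python
# Sketch of the retracement arithmetic used in this post. Prices
# ($2.16 low, $12.49 high) are taken from the chart; fib_levels
# is a made-up helper, not part of any charting library.

def fib_levels(low, high):
    """Retracement levels of the up-move from `low` to `high`,
    measured down from the high."""
    rng = high - low
    ratios = {"23.6%": 0.236, "38.2%": 0.382, "50.0%": 0.500,
              "61.8%": 0.618, "78.6%": 0.786, "88.6%": 0.886}
    return {name: round(high - r * rng, 2) for name, r in ratios.items()}

levels = fib_levels(2.16, 12.49)
# 61.8% -> 6.11, 78.6% -> 4.37, 88.6% -> 3.34, matching the
# $6.11, $4.37, and $3.34 levels cited above.
```

One honest caveat: the 38.2% level computes to about $8.54, so the $8.48 rejection sits just under that line rather than exactly on it.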

... keep reading on reddit ➑

πŸ‘︎ 408
πŸ‘€︎ u/widowmakerlaser
πŸ“…︎ Nov 05 2021
Geddit? No? Only me?
πŸ‘︎ 6k
πŸ‘€︎ u/shampy311
πŸ“…︎ Dec 28 2021
I wanna hear your best airplane puns.

Pilot on me!!

πŸ‘︎ 3k
πŸ‘€︎ u/Paulie_Felice
πŸ“…︎ Jan 07 2022
$MU Micron Technology, deep-dive Adderall-fueled DD, β€œBack on the rocks! Back on the rocks, baby!”

Part 2: $MU (Micron Technology) DD (part 2, more concise); a look at Friday's and this week's price history, LPDDR5X DRAM, the P/E ratio, NAND flash growth, and the CEO's performance.

Official DD theme music: MEGA NRG MAN - Back on the Rocks, [Manuel - GAS GAS GAS](https://www.youtube.com/watch?v=atuFSv2bLa8), and Ken Blast - The Top. Please listen to these as you read. Eurobeat music will give you Adderall-like effects and help you read/comprehend this DD. The lyrics are also quite stock-y.

β€œWhen you get to the top

You ever been to the top?

Just listen... let me tell ya

Hear what you're missin'

Shut up and listen!

In the beginning you'll get crazy

Spending all the money you got

No more women to love you now

You gotta go and leave townβ€œ

So I’ve taken my daily Adderall, caffeine, and nicotine; I’m in a DD mood. I was requested by a few users to write an $MU (Micron Technology) DD.

Company profile:

β€œThe world is moving to a new economic model, where data is driving value creation in ways we had not imagined just a few years ago. Data is today’s new business currency, and memory and storage are strategic differentiators which will redefine how we extract value from data to expand the global economy and transform our society.

As the leader in innovative memory solutions, Micron is helping the world make sense of data by delivering technology that is transforming how the world uses information to enrich life for all. Through our global brands β€” Micron and Crucial β€” we offer the industry’s broadest portfolio. We are the only company manufacturing today’s major memory and storage technologies: DRAM, NAND, and NOR technology.

By providing foundational capability for innovations like AI and 5G across data center, the intelligent edge, and consumer devices, we unlock innovation across industries including healthcare, automotive and communications. Our technology and expertise are central to maximizing value from cutting-edge computing applications and new business models which disrupt and advance the industry.

*From our roots in Boise, Idaho, Micron has grown into an influential global presence committed to being the best memory company in the world. This means conducting business with integrity, accountability, and prof

... keep reading on reddit ➑

πŸ‘︎ 206
πŸ‘€︎ u/Emony-Dax
πŸ“…︎ Nov 22 2021
E or ß?
πŸ‘︎ 9k
πŸ‘€︎ u/Amazekam
πŸ“…︎ Jan 03 2022
How to start doing β€œapplications” of stochastic processes?

Hello, I’m an undergrad stats major and find stochastic processes interesting. I was reading Ross's probability models book. It was cool and I learned some theory, but now I want to start applying these concepts to solve problems. It's easy to do this in machine learning because there are actually libraries out there to fit random forests and various other algorithms. But with stochastic processes like Markov chains and martingales, it's kind of hard to do so. How can I learn the β€œapplication” of stochastic processes? Where can I actually see this stuff in action on real datasets?
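For what it's worth, the most basic "application" step for a Markov chain is fitting one to data: estimate the transition matrix from an observed state sequence by counting transitions and normalizing each row. A minimal stdlib-only sketch, using a made-up toy sequence rather than a real dataset:

```python
from collections import Counter

# Hypothetical data: a sequence of daily weather states (0 = dry, 1 = wet).
seq = [0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1]

# Maximum-likelihood estimate of the transition probabilities:
# count each i -> j transition, then normalize the counts per
# source state. (Assumes every state occurs at least once as a
# source state, or the normalization divides by zero.)
counts = Counter(zip(seq[:-1], seq[1:]))
states = sorted(set(seq))
P = {i: {j: counts[(i, j)] / sum(counts[(i, k)] for k in states)
         for j in states}
     for i in states}
# Each P[i] is a probability distribution over next states,
# estimating Pr(next state = j | current state = i).
```

Fitting this to a real dataset (e.g. a column of categorical observations over time) is just swapping in the real sequence for `seq`; from there you can simulate forward, compute stationary distributions, and so on.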

πŸ‘︎ 6
πŸ“…︎ Nov 14 2021
Example Second Order Stochastic Dominance without First Order Stochastic Dominance? [Ugrad]

Hello,

Given two gambles A and B and their respective cumulative distribution functions, is it possible for A to second-order stochastically dominate B without first-order stochastically dominating B? If so, what is an example of this? I have tried to come up with one, but for the life of me I can't see how integral(CDF_B(x)) >= integral(CDF_A(x)) can hold for all x without CDF_B(x) >= CDF_A(x) holding for all x as well :/
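One standard construction (my example, not from this thread): let A pay 1 for sure and let B pay 0 or 2 with probability 1/2 each. The means are equal but A is less risky, so A second-order dominates B, while neither first-order dominates the other. A quick numerical check of both conditions:

```python
# A pays 1 for sure; B pays 0 or 2 with probability 1/2 each.

def cdf_a(x):  # degenerate at 1
    return 1.0 if x >= 1 else 0.0

def cdf_b(x):  # 50/50 on {0, 2}
    if x < 0:
        return 0.0
    return 0.5 if x < 2 else 1.0

# SOSD of A over B: the running integral of (F_B - F_A) must stay
# >= 0 everywhere. FOSD would need F_B(x) >= F_A(x) pointwise.
dx = 0.001
grid = [i * dx for i in range(-1000, 3001)]  # x from -1 to 3
running, min_running = 0.0, 0.0
fosd_holds = True
for x in grid:
    running += (cdf_b(x) - cdf_a(x)) * dx
    min_running = min(min_running, running)
    if cdf_b(x) < cdf_a(x):
        fosd_holds = False  # F_B dips below F_A on [1, 2), so no FOSD
# min_running never goes (meaningfully) below zero, so SOSD holds,
# yet fosd_holds comes out False.
```

Intuitively, the integral of F_B - F_A builds up a surplus of 0.5 over [0, 1) and spends exactly that surplus over [1, 2), so it never goes negative even though F_B < F_A on the second interval.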

Thanks in advance.

πŸ‘︎ 2
πŸ‘€︎ u/Homeless101
πŸ“…︎ Feb 19 2018
Pun intended.
πŸ‘︎ 5k
πŸ‘€︎ u/Sharmaji1301
πŸ“…︎ Jan 15 2022
My daughter wants to eat a woman who shares her birthday

The only way this is going to make sense is if I start at the beginning: August 21, 1982.

A baby girl was born shortly after midnight. I wasn’t the mother’s doctor, but I was the attending on the same labor and delivery floor. Even though the newborn’s Apgar was good, she was clearly in great distress. The on-call pediatrician raced the child to the NICU. Twenty minutes later, I was called to consult.

β€œYou want me to check on the mother?” I’m an obstetrician. I care for pregnant women and deliver their babies. Once they’re born, the infants become pediatric patients. Why was I being called into the neonatal unit?

β€œNo, Dr. Kaizen. It’s the child. Please come to the NICU.” I heard panic creeping into my colleague’s voice.

The baby lay in a NICU incubator, screaming. The nursing staff stood at a distance. None of them were looking at the child. They stared at the floor, or the far wall, or at me. These were experienced neonatal ICU nurses. They had dealt with every horrible condition that could possibly result from birth. But whatever was in the incubator had rattled them.

β€œHow is this an obstetrics case?”

The pediatrician gestured to the incubator. β€œPlease examine the patient, Dr. Kaizen, and tell me what you think.”

The baby girl looked like a healthy birthweight baby – eight pounds or so. But her abdomen was terribly distended. She certainly had a good reason for screaming.

I gently palpated the girl’s bulging belly, expecting to feel signs of fluid or gas. I didn’t. Instead, I felt an enlarged uterus. The fundus was near the infant’s sternum. I gently squeezed the sides of the child’s belly, feeling with my fingertips a miniature version of what I feel with my whole hands in adult patients. I placed my palm on her tiny belly. There was an almost imperceptible flutter, then something gently pushed against my hand.

I turned to the NICU staff. Their eyes were locked on me, hands holding their mouths or touching their foreheads.

I said, β€œThis infant is pregnant. And she is in labor.”

I did my best to remain calm, but I heard my voice crack as I spoke. Something was inside this newborn. Something had grown inside her as she developed in the womb, and it wanted to get out. I have as much experience as the NICU nurses with the terrible effects of abnormal pregnancies. But no matter what condition my patients and their fetuses had suffered from, I had never felt what I felt at that moment: fear. Fear of what was inside this baby.

I delivered the i

... keep reading on reddit ➑

πŸ‘︎ 8
πŸ‘€︎ u/sarcasonomicon
πŸ“…︎ Jan 15 2022
