A list of puns related to "Stochastic ordering"
The only way this is going to make sense is if I start at the beginning: August 21, 1982.
A baby girl was born shortly after midnight. I wasn't the mother's doctor, but I was the attending on the same labor and delivery floor. Even though the newborn's Apgar was good, she was clearly in great distress. The on-call pediatrician raced the child to the NICU. Twenty minutes later, I was called to consult.
"You want me to check on the mother?" I'm an obstetrician. I care for pregnant women and deliver their babies. Once they're born, the infants become pediatric patients. Why was I being called into the neonatal unit?
"No, Dr. Kaizen. It's the child. Please come to the NICU." I heard panic creeping into my colleague's voice.
The baby lay in a NICU incubator, screaming. The nursing staff stood at a distance. None of them were looking at the child. They stared at the floor, or the far wall, or at me. These were experienced neonatal ICU nurses. They had dealt with every horrible condition that could possibly result from birth. But whatever was in the incubator had rattled them.
"How is this an obstetrics case?"
The pediatrician gestured to the incubator. "Please examine the patient, Dr. Kaizen, and tell me what you think."
The baby girl looked like a healthy birthweight baby – eight pounds or so. But her abdomen was terribly distended. She certainly had a good reason for screaming.
I gently palpated the girl's bulging belly, expecting to feel signs of fluid or gas. I didn't. Instead, I felt an enlarged uterus. The fundus was near the infant's sternum. I gently squeezed the sides of the child's belly, feeling with my fingertips a miniature version of what I feel with my whole hands in adult patients. I placed my palm on her tiny belly. There was an almost imperceptible flutter, then something gently pushed against my hand.
I turned to the NICU staff. Their eyes were locked on me, hands holding their mouths or touching their foreheads.
I said, "This infant is pregnant. And she is in labor."
I did my best to remain calm, but I heard my voice crack as I spoke. Something was inside this newborn. Something had grown inside her as she developed in the womb, and it wanted to get out. I have as much experience as the NICU nurses with the terrible effects of abnormal pregnancies. No matter what condition my patients and their fetuses had suffered from, I had never felt what I felt at that moment: fear. Fear of what was inside of this baby.
I delivered the
... keep reading on reddit ➡
Stochastic gradient descent (SGD) is a basic tool of machine learning. I thought I would try to discuss it with the mathematicians here: a good overview of 1st-order methods, and of 2nd-order methods as well.
So a neural network can be seen as an extremely general parametrization, with the number of parameters often in the millions. Networks are trained by minimizing a function defined as the sum of some evaluation score over every object in the dataset. This is usually done by gradient descent, usually in a stochastic way: calculating gradients from subsets ("minibatches") of the dataset to better exploit the local situation.
Besides the problems of standard gradient descent, like getting stuck on a plateau, this happens in huge dimension and the gradients are noisy, so we need to extract their statistical trends.
A basic way is an (exponential moving) average of such noisy gradients (the momentum method), plus some heuristic modifications; such 1st-order methods currently dominate neural network training.
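As a minimal sketch of that basic approach (illustrative names and constants, not any particular library's API):

```python
import numpy as np

def sgd_momentum(grad_fn, x0, lr=0.01, beta=0.9, steps=1000):
    """SGD with momentum: keep an exponential moving average of the
    noisy minibatch gradients and step along the averaged direction."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)               # moving average of gradients
    for _ in range(steps):
        g = grad_fn(x)                 # noisy gradient from one minibatch
        m = beta * m + (1 - beta) * g  # exponential moving average
        x -= lr * m                    # descend along the trend, not the noise
    return x
```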
There is now an ongoing fight to get successful 2nd-order methods, for a smarter choice of step size and for simultaneously modeling multiple directions; e.g. the recent K-FAC claims convergence multiple times faster. However, it is quite difficult: the full Hessian is huge (like 20 TB for 3M parameters), it is very tough to estimate from noisy data, it has to be inverted, and the naive Newton method is attracted to saddles, of which there are believed to be exp(dim) many...
Any ideas for the practical realization of 2nd-order methods? Some basic questions:
As the full Hessian is too large, we rather need to choose a subspace in which to use a 2nd-order model (we can simultaneously do gradient descent in the remaining directions). How should we choose this subspace and update that choice?
How can we effectively estimate the Hessian there?
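One possible direction, sketched below under my own assumptions (all names are illustrative): Hessian-vector products need only gradient evaluations, H v ≈ (g(x + εv) − g(x))/ε, so the restriction of the Hessian to a k-dimensional subspace can be estimated without ever forming the full matrix.

```python
import numpy as np

def subspace_hessian(grad_fn, x, V, eps=1e-5):
    """Estimate V^T H V, the Hessian restricted to a subspace.

    grad_fn: returns the (possibly minibatch) gradient at a point.
    V: (dim, k) matrix whose columns span the chosen subspace.
    Uses H v ~ (grad_fn(x + eps*v) - grad_fn(x)) / eps, so the full
    Hessian is never formed.
    """
    g0 = grad_fn(x)
    HV = np.stack([(grad_fn(x + eps * v) - g0) / eps for v in V.T], axis=1)
    return V.T @ HV  # small k-by-k curvature model
```

In a stochastic setting one would average such estimates over minibatches; the columns of V could be, say, recent exponentially-averaged gradients.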
Greetings!
I have noticed that whenever free will comes up, most people here will either deny it completely (Hard Determinists) or accept it but deny determinism (Libertarians). This usually falls along the atheist/theist divide, with atheists being Hard Determinists and theists being Libertarians. The "middle" position, Compatibilism, is unpopular. Many will even declare it absurd or incomprehensible, which I think is a bit unfair. I think this comes from a lack of understanding of what exactly the position encompasses, and what it does and does not assert. My hope in this post is to at the very least convince people that compatibilism isn't absurd, even if I can't convince them to adopt it.
By determinism, we mean the claim that 1) the universe follows unchanging, deterministic laws, and 2) all future states of the universe are completely determined by the initial state together with these laws. Both Hard Determinists and Compatibilists accept determinism, which is backed by all our current scientific theories. Where they differ is in their acceptance of free will.
NB. As a quick qualification, determinism is actually a bit of a misnomer. It might be that our universe also has stochastic processes, if certain interpretations of quantum mechanics turn out to be correct. However, I think we can agree that random quantum fluctuations or wave function collapse do not grant us free will. They are stochastic noise. So in the remainder of this discussion I will ignore these small effects and treat the universe as fully deterministic
Now, there are actually two common definitions of free will:
1. Free-act: the ability to act on one's wants, whatever their source.
2. Free-will: the ability to have wanted, or chosen, otherwise.
The former is obviously a weaker thesis than the latter. I will argue for them both in turn, with a focus on the second.
Free-act is not incompatible with determinism. It may well be that our wants are predetermined. But we still have the ability to carry out those wants. For example, if I am thirsty, I have the ability to get a glass of water. If I am tired, I can sleep. If I want to be kind or be mean, I can do that too. In some sense, we can only do what we w
... keep reading on reddit ➡
I posted this question on the math Stack Exchange, but it got no love.
(Feel free to skip to the last equation for what I'm trying to solve)
So I have a second order ODE with constant coefficients whose right hand side is the sum of white noise and its derivative. After some research, it seems there's a field dedicated to such things. I've done some Googling to find lecture notes on Stochastic Differential Equations, but even the introductory notes have been somewhat difficult for me to understand. Part of it is not knowing the notation, but also the "textbook" examples are all based on white noise being the derivative of Brownian motion. I need the derivative of white noise, and I've not yet found anything with higher order derivatives of Brownian motion.
Any guidance or explanation (or an answer) would be greatly appreciated!
I do have these basic questions that will hopefully get me along the right path
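In case it helps to fix notation, here is a hedged sketch with placeholder constants (α, β, c₀, c₁ are mine, not from the post): the ξ′ term can be absorbed by a change of state variables, so the derivative of white noise never has to be given meaning on its own.

```latex
% Generic second-order ODE driven by white noise \xi = dW/dt and its derivative:
%   x'' + \alpha x' + \beta x = c_0\,\xi(t) + c_1\,\xi'(t).
% Substituting u = x' - c_1\,\xi gives
%   u' = x'' - c_1\,\xi' = -\alpha(u + c_1\xi) - \beta x + c_0\,\xi,
% i.e. a standard two-dimensional linear SDE with no derivative of noise:
\begin{aligned}
dx &= u\,dt + c_1\,dW_t,\\
du &= (-\alpha u - \beta x)\,dt + (c_0 - \alpha c_1)\,dW_t.
\end{aligned}
```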
Hi folks,
I posted this to r/MachineLearning but thought that I might also get some interest here.
I've seen a lot of theoretical studies of Stochastic Gradient Descent (SGD) that consider it in the limit as the step size goes to zero, which turns it into a stochastic differential equation of the form dx/dt = alpha*(-grad(loss)(x) + noise). It's a lot easier to compute useful quantities, e.g. stationary distributions, using this form.
However, one of the things that get lost in this formalism are the intrinsic scales of the problem. In the continuous limit, rescaling the time variable t (or more generally, performing an arbitrary coordinate transformation) leaves the trajectory invariant, because the differential equation is formulated in a covariant fashion. This gives misleading results if you want to analyze something like convergence rates. In this continuous formulation, you can just rescale your time parameter to 'speed up' your dynamics (which is equivalent to increasing alpha), whereas you obviously can't do this in the discrete formulation, because if you rescale alpha arbitrarily you overshoot and you get bad convergence.
The first thing that came to mind when I started thinking about this was that you could amend your differential equation to include higher-order correction terms. Specifically, if we have a differential equation of the form x'(t) = f(x), we can Taylor expand to get x(t + delta) ≈ x(t) + f(x)*delta + 0.5*Df(x)*f(x)*delta^2 + O(delta^3). This tells us that the difference between the continuous trajectory solution x(t + delta) and the discrete trajectory x(t) + f(x)*delta after a time delta will be roughly 0.5*Df(x)*f(x)*delta^2. In order to get a more accurate model for the discrete-time process x(t+delta) = x(t) + f(x)*delta, we can introduce a correction term into our differential equation: x'(t) = f(x) - 0.5*Df(x)*f(x)*delta. When f is -alpha*grad(loss), this becomes x'(t) = -alpha*grad(loss)(x) + 0.5*alpha*Hessian(loss)(x)*grad(loss)(x)*delta. This correction term breaks covariance: when t is rescaled, both alpha and delta are necessarily rescaled, so the correction term transforms differently than the gradient term. It seems to me like this is a natural way to model the breakdown of covariance in the discrete dynamical system in the continuous setting and to study why certain timescales/learning rates are preferred.
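A quick numerical sanity check of that correction term on a deterministic 1-D quadratic (a toy example of my own, assuming loss(x) = 0.5*a*x^2, so f(x) = -a*x and Df(x) = -a):

```python
import numpy as np

a, delta, steps, x0 = 1.0, 0.2, 20, 1.0

# Discrete gradient descent: x_{k+1} = x_k + delta*f(x_k) = (1 - delta*a)*x_k
discrete = x0 * (1 - delta * a) ** steps

# Naive continuous limit x' = -a*x, evaluated at t = steps*delta
naive = x0 * np.exp(-a * steps * delta)

# Corrected equation x' = f - 0.5*Df(x)*f(x)*delta = -(a + 0.5*delta*a**2)*x
corrected = x0 * np.exp(-(a + 0.5 * delta * a**2) * steps * delta)

print(discrete, naive, corrected)  # ~0.0115 vs ~0.0183 vs ~0.0123
```

The corrected trajectory tracks the discrete iterates far better than the naive limit, and the gap grows with delta, which is exactly the loss of covariance described above.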
tl;dr: Does anyone know if this version of the continuous-time limit has
... keep reading on reddit ➡
Post about anything and everything related to investing. The place in /r/PHinvest for any questions, rants, advice, or commentary.
Posts that are not discussion-provoking enough for the main page will be pointed toward this weekly thread to help keep the quality of the main page posts as high as possible.
That said, keep it respectful, and enjoy!
You can time the price of the market by the strike price of contracts on options (not futures).
The largest position's strike price will often dictate how the whales will wash/manipulate the market to collect on their options contracts (yes, this is illegal).
https://www.investopedia.com/terms/w/washtrading.asp
Strike price definition:
https://www.investopedia.com/terms/s/strikeprice.asp
Options definitions:
You can watch the movement of assets on whale alerts
https://twitter.com/whale_alert
When an asset gets moved to exchanges in large quantity, it will dump.
This often correlates with the Stochastic RSI on the daily candle,
and with Glassnode.
Here is the calendar
https://www.marketwatch.com/optionscenter/calendar
3 days before the expiry, whales will pump or dump assets and hold them in place via algorithmic trading software.
Once the price is massaged into place, the algorithmic bots will buy and sell to hold the price at a certain point.
**Ethereum was pushed to 3600 and will now be kept around 3750 until expiry time**
Most crypto exchanges help whales do this with their OTC desk.
Some more dubious exchanges black out their servers to take assets from retail traders while still processing orders on the back end. This is to help with a liquidity crisis. **Look at my post history if you want to know who and how...**
Be aware BTC may dump again, as BTC is being moved to exchanges en masse.
**Action items / needs**
I am looking for a good place to get strike price data.
Hi everyone
Sorry for the clickbait title, it was a stupid choice and I regret it deeply. These points of mine only apply to "edge-case" (i.e. borderline) theory crafting (TC) results, where averages lose a bit of meaning. I should have been more clear in the title. I personally think the TC community is doing an amazing job. When coming up with the title I was thinking way too literally in my head and not thinking of optics. I'm truly sorry if it sounded disrespectful.
I just wanted to share some discussion points that I don't see mentioned often. These caveats are obvious to those who know math (like theory crafters), but I want to share these points and a small Monte Carlo demonstration on a very simple example so that hopefully we can all learn some relatively "basic" probability theory that is helpful to know in Genshin. Knowing this may help you make wiser resin and money investments. Note that this does not matter if you want to fully min-max a character and don't mind potentially poor returns on investments. This only applies if you're uncertain whether a small DPS increase is worth the investment for your needs. It is your choice. You determine what is optimal.
Summary (from u/derpkoikoi): What OP means is that inherently, no matter what you do, since you have an insignificant number of actual hits you can make, you will not feel incremental gains in dps, so it's not worth spending resin for a 1% gain in damage.
In my own words, my point is simply: when averages are too close together (like a 3% dps change), comparisons are hard due to the random uncertainty of crit (let alone other sources of uncertainty). So you should always consider your own needs and take external numbers carefully (e.g. know the assumptions used). Below, I demonstrate why comparisons are "hard" and averages don't cut it when average results are "too close" together. Since we have uncertainty we don't know about, we have to relax our discussion to ranges of values, and in this case I think there are some points to simply think about. In these cases we can't order things, saying weapon X is better than weapon Y, because in practice most of the time they will do damage that falls in very similar ranges of values. **Note: I am not saying averages are bad**, but religiously relying on them can be misleading for small % dps changes in practice.
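Before the fuller demo below, a quick appetizer: a minimal Monte Carlo sketch with made-up stats (50% crit rate, +100% crit damage, a flat 3% base-damage gap between two hypothetical weapons):

```python
import numpy as np

rng = np.random.default_rng(0)
N, hits = 100_000, 20           # simulated fights, hits per fight (made up)
crit_rate, crit_dmg = 0.5, 1.0  # 50% crit rate, +100% crit damage

def total_damage(base):
    crits = rng.random((N, hits)) < crit_rate   # each hit crits independently
    return (base * (1 + crit_dmg * crits)).sum(axis=1)

worse, better = total_damage(100.0), total_damage(103.0)
print((worse > better).mean())  # ~0.39: the "worse" weapon wins ~40% of fights
```

The averages differ by 3%, yet over a realistic number of hits the damage distributions overlap so much that the ordering flips almost half the time.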
A small mathematical demo showing this point is below. Please do note I am not talking about crit ra
... keep reading on reddit ➡
I don't want to step on anybody's toes here, but the amount of non-dad jokes here in this subreddit really annoys me. First of all, dad jokes CAN be NSFW, it clearly says so in the sub rules. Secondly, it doesn't automatically make it a dad joke if it's from a conversation between you and your child. Most importantly, the jokes that your CHILDREN tell YOU are not dad jokes. The point of a dad joke is that it's so cheesy only a dad who's trying to be funny would make such a joke. That's it. They are stupid plays on words, lame puns and so on. There has to be a clever pun or wordplay for it to be considered a dad joke.
Again, to all the fellow dads, I apologise if I'm sounding too harsh. But I just needed to get it off my chest.
Is it better to have 100 chances to win or 50 chances that are twice as lucky? It seems even, but is it? Or is there a theory if it isn't? Like how a coin flip is not exactly 50/50.
*Edit: Also the lottery runs constantly at the same interval and the tickets never expire.
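A quick check, under one reading of the question (my assumptions, not the OP's): each chance is an independent win with probability p, and "twice as lucky" means probability 2p.

```python
# P(at least one win) from n independent chances of probability p each
def p_win(n, p):
    return 1 - (1 - p) ** n

p = 0.01
print(p_win(100, p))     # 100 ordinary chances    -> ~0.634
print(p_win(50, 2 * p))  # 50 doubly-lucky chances -> ~0.636
```

The expected number of wins is identical (100p in both cases), but since (1 − p)² = 1 − 2p + p² > 1 − 2p, we get (1 − p)^100 > (1 − 2p)^50: the fewer, luckier tickets win at least once slightly more often.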
Do your worst!
TLDR: Norskkk is a bunch of Thoraboos being grifted by a French-Canadian claiming to be Norwegian, with a bunch of stuff that will radicalize people toward dangerously far-right ideologies.
To begin, an obligatory "Fuck Norskk" (hereafter referred to as Norskkk for connotative reasons).
For reference, I will be using their term "Viking" to describe the wider Norse culture, as they use it interchangeably and it makes it easiest should anyone wish to search their several sites for the articles I will be discussing.
It is also ESSENTIAL that you understand this is not a matter of theology. I don't think Norskkk is "worshipping wrong"; I don't think they are worshipping at all. Nowhere in any of their sites does it mention reciprocity, the gifting cycle, the Tree and Well, or even basic concepts such as Frith and Wyrd. Instead they simply use the imagery, the verbiage, and the aesthetic, while projecting their own warped worldview.
Norskkk is a male-centric Norse-pagan grifting network run by Christopher Fragassi (sometimes known by the moniker Bjornsen). I first came across them years ago when I was looking more deeply into Norse and Germanic Paganism. I have taken the dive many times to point out specific flaws in their rhetoric, history, ideology, and general nonsense; today I am giving a short (yes, this is the condensed version) exploration of them so that you can understand some of the problems and be aware of them moving forward. This partial deconstruction, like a night of passion, comes in several parts.
Part 1 – The (a)Historicity
While many of us consider ourselves knowledgeable and well-read, and many more seek to become such themselves, Norskkk claims to be a teacher despite being foolishly ignorant. Unfortunately I will not get a chance to be a grammatical nitpicker of their information as there are just too many poorly translated terms and words to be able to speak on anything else, nor will I get a chance to delve into their hideously twisted and warped projections of Norse Myth.
On their various websites (not linked, because I believe their content to be genuinely less than worthless, though you are more than capable of finding them) you will find a variety of articles covering a range of ancient and modern topics including (but not limited to) Valhol, Vaccines, Horned Helmets, hair and beards, the roles of Disabled peoples, Cheese (I'm not kidding), drugs, shamanism, circumcision, ethnopurity, dietary constraints, marriage, Loki,
... keep reading on reddit ➡
I'm surprised it hasn't decade.
I'm an actuary, and one of the exams I'm studying for involves stochastic calculus and differential equations (if you're not familiar with actuarial exams, they're 3-4 hour multiple-choice exams on finance, probability, models, etc.). The exam doesn't require a deep understanding or even previous knowledge of stochastic calculus and differential equations; the study manual basically just states the formula and how to use it. However, I want to understand where it comes from and why it works. I have found open-source texts on stochastic processes, Markov processes, stochastic analysis, and stochastic differential equations, but I don't really know what order to work through them in. Should I know ODEs and PDEs before SDEs? Markov processes before general stochastic processes? Would working through a real analysis text help? Those kinds of questions.
For context I'm a Refuse Driver (Garbage man) & today I was on food waste. After I'd tipped I was checking the wagon for any defects when I spotted a lone pea balanced on the lifts.
I said "hey look, an escaPEA"
No one near me but it didn't half make me laugh for a good hour or so!
Edit: I can't believe how much this has blown up. Thank you everyone, I've had a blast reading through the replies!
It really does, I swear!
Let's say you have a fairly sparse neural network: not more than 50,000 connections, and no more than ~100 connections going into each node.
This is to learn a game (not image processing), so we don't need millions of connections (much less a convolutional layer). Not only that, but such high dimensionality would likely lead to overfitting, when the underlying function could probably be represented (in theory, anyway) with a couple hundred parameters. The challenge is the stochastic environment. Training a heuristic presents noise (especially early in the game) and, of course, good moves can lead to bad outcomes.
(To get into the details, I'll be using standard optimization techniques to train a heuristic, and performance in the game itself for validation, to inform a genetic/evolutionary approach to feature selection and dimensionality reduction. We'll see if that approach works. One challenge is that I can't manually fiddle with learning rates for each network I create.)
You probably don't want to do a full second-order method on 10,000+ weights. (Inverting a 10k-by-10k matrix is expensive, and possibly not worth it in a stochastic environment.) But gradient descent is slow and requires a lot of fiddling with parameters (learning rate, momentum, etc.) and effectively never converges in a stochastic environment. So, I'd like to find something better than SGD if possible.
What about this approach, though: use backpropagation (first-order) to send error signals through the network. With those error signals, use Newton's method at each node (< 100 inputs) to update the weights. If these updates move too fast, then, instead of a learning rate, set a desired norm (say, normalize to |Δw|_2 = 0.1).
Does this sound like a reasonable approach? Has anyone tried something like this? What were the results?
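For concreteness, a minimal sketch of the proposed per-node update (hypothetical names; it assumes the node's local gradient g and its small Hessian H are already available from the backpropagated error signals):

```python
import numpy as np

def newton_step(H, g, max_norm=0.1, damping=1e-4):
    """Newton update for one node's incoming weights (< 100 of them).

    Solves (H + damping*I) dw = -g, then rescales so that
    ||dw||_2 <= max_norm, replacing a learning rate with a norm cap.
    """
    dw = np.linalg.solve(H + damping * np.eye(len(g)), -g)
    norm = np.linalg.norm(dw)
    if norm > max_norm:
        dw *= max_norm / norm
    return dw
```

One caveat worth flagging: if the local Hessian is indefinite, a raw Newton step can move uphill, so some damping (or flipping negative eigenvalues) is probably needed on top of the norm cap.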
I'm a PhD student interested in stochastic processes, and am curious if anyone can direct me towards any literature on processes like higher order Markov chains, but in continuous time and with a continuous state space?
With an order-n Markov chain, the process depends on its previous n states, rather than a Markov chain which depends only on the present state. So could we have something like a process (X_t)_{t≥0} where the behaviour at any time s depends, let's say, on the behaviour between time s−1 and time s?
So it's not quite a Markov process, which has "no memory", as the process has some memory.
I'd be keen to read any material on processes with limited memory - not just restricted to my example of the memory being a unit time interval. So if there are some processes where the memory is an interval which might get bigger or smaller, that would be cool too! :)
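For what it's worth, the continuous-time analogue described here sounds like a stochastic delay (or functional) differential equation; a sketch in my own notation:

```latex
% Single discrete delay \tau:
dX_t = f\bigl(X_t,\; X_{t-\tau}\bigr)\,dt + \sigma\,dW_t,
% or dependence on the whole path over the last unit interval:
dX_t = f\bigl(\{X_s : s \in [t-1,\,t]\}\bigr)\,dt + \sigma\,dW_t.
```

Searching for "stochastic delay differential equations" or "stochastic functional differential equations" (Mohammed's book on the latter is, I believe, the classic reference) should turn up literature, including memory windows that vary in length.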
They're on standbi
Good Evening Apes!
Another weekend...another shitstorm of drama and wild speculation.
I've been a bit busy this weekend getting ready to move, and I want to let you all know your @'s have not gone unseen.
I don't feel that I can write about anything productive this week without first addressing my opinions on these two topics so we will start here and move into the analysis section.
As always I will post a consolidated Video DD of this on my YouTube for those of you that don't have the time to read through this, or have visual impairments/reading comprehension issues. This will be uploaded by...
9pm EDT/UTC-4
So I want to take a quick moment to discuss my opinions on the two big pieces of news to come out of this weekend.
Section 1: Cellar Boxing
I'm not sure when it became news to people that this was a strategy that was possibly being employed on GME but a large aspect of the main MOASS thesis has always been that they had excessively naked shorted GameStop in an attempt to drive them out of business. But I understand that this information isn't always the easiest to obtain, especially for newer apes. So here is why I think it's relevant and why it's not.
Pros:
Long-Term OBV showing the original short position
Cons:
Good evening Apes,
I've noticed there has been a lot of confusion as to figuring out exactly where the BOTTOM is coming for BBIG, as we have been seeing quite a bit of retracement and Red over the past few DAYS, and there seems to be a little bit of panic and uncertainty lined up with setting newer LOWS each and every single Day.
Please note, none of this is Financial Advice. Do your own DD--this is strictly for entertainment purposes only. I hope that everyone who finishes reading this whole post reaches the STATE of ZEN that I took on a couple of weeks ago.
So let's get right into a Fib Retracement of our beloved stock BBIG:
Notice the Yellow LINES and the Arrows pointing at significant Price Points
$2.16 - The Low
$12.49 - The High
$4.77 - The First Bounce---We broke down below $6.11, holding right above that price range for about a week, down to a low of $4.77 before launching all the way back up to an intraday HIGH close of $8.48.
$8.48 - Notice how we got rejected at the 38% Retracement line (an indicator that we are still Correcting/Consolidating before being Ready to Retest Highs).
$4.41 - This is us Today---We've currently been making New lows almost every day TRYING to find a Bottom-
What everyone needs to keep in mind is we have NOT YET tested the 78% Retracement Price point, which is $4.37---This is where our NEXT Strong level of Support will be. I expect to see us Bounce off $4.37 and CLOSE above $4.37 bare minimum for tomorrow! A Close above $4.37 (even after reaching slightly below $4.37) would be a VERY BULLISH indicator that the WAVE down has fully completed and we are ready to take on our NEXT WAVE UP---Which I am also going to present and show as far as where we will go for Price Targets...
But before I go there---I want to first point out ONE potential Bearish OUTLOOK IF and only IF $4.37 does not hold... If we break and Close below $4.37----I FULLY expect us to drop down and test the 88.6% retrace @ $3.34, with a potential SUPER short-lived DIP down to $3.08, which would fill the gap on the bottom end before we RIP all the way up---I truly believe this would be the FLOOR Price of BBIG and the ABSOLUTE Lowest you could possibly see it go in a very bearish outlook... thankfully we still have the **0
... keep reading on reddit ➡
Official DD theme music: MEGA NRG MAN - BACK ON THE ROCKS and [Manuel - GAS GAS GAS](https://www.youtube.com/watch?v=atuFSv2bLa8) and Ken Blast - The Top. Please listen to these as you read. Eurobeat music will give you Adderall-like effects and help you read/comprehend this DD. The lyrics are also quite stock-y.
"When you get to the top
You ever been to the top?
Just listen... let me tell ya
Hear what you're missin'
Shut up and listen!
In the beginning you'll get crazy
Spending all the money you got
No more women to love you now
You gotta go and leave town"
So I've taken my daily Adderall, caffeine, and nicotine; I'm in a DD mood. I was requested by a few users to write an $MU (Micron Technology) DD.
Company profile:
"The world is moving to a new economic model, where data is driving value creation in ways we had not imagined just a few years ago. Data is today's new business currency, and memory and storage are strategic differentiators which will redefine how we extract value from data to expand the global economy and transform our society.
As the leader in innovative memory solutions, Micron is helping the world make sense of data by delivering technology that is transforming how the world uses information to enrich life for all. Through our global brands – Micron and Crucial – we offer the industry's broadest portfolio. We are the only company manufacturing today's major memory and storage technologies: DRAM, NAND, and NOR technology.
By providing foundational capability for innovations like AI and 5G across data center, the intelligent edge, and consumer devices, we unlock innovation across industries including healthcare, automotive and communications. Our technology and expertise are central to maximizing value from cutting-edge computing applications and new business models which disrupt and advance the industry.
From our roots in Boise, Idaho, Micron has grown into an influential global presence committed to being the best memory company in the world. This means conducting business with integrity, accountability, and prof
... keep reading on reddit ➡
Pilot on me!!
Hello, I'm an undergrad stats major and find stochastic processes interesting. I was reading the Ross probability models book. It was cool, I learned some theory, but now I want to start applying these concepts to solve problems. It's easy to do this with machine learning because there are actually libraries out there to fit random forests and various algorithms. But with stochastic processes like Markov chains and martingales, things like that, it's kind of hard to do so. How can I learn the "application" of stochastic processes? Where can I actually see this stuff in action on real datasets?
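For a first taste of application, here is a minimal sketch (made-up data; any categorical sequence, like daily weather states, works the same way) of fitting a first-order Markov chain:

```python
import numpy as np

states = "AABABBBAABAA"                 # stand-in for a real categorical series
labels = sorted(set(states))
idx = {s: i for i, s in enumerate(labels)}

counts = np.zeros((len(labels), len(labels)))
for a, b in zip(states, states[1:]):
    counts[idx[a], idx[b]] += 1         # count observed transitions

P = counts / counts.sum(axis=1, keepdims=True)  # rows -> transition probabilities
print(labels)
print(P)
```

Classic real datasets where this shows up: credit-rating transitions, weather records, and text; on the continuous-time side, claim arrivals in insurance data are a natural place to see Poisson processes.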
Because she wanted to see the task manager.
Hello,
Given two gambles A and B and their respective cumulative distribution functions, is it possible for A to second-order stochastically dominate B without first-order stochastically dominating B? If so, what is an example of this? I have tried to come up with one, but just can't for the life of me see how integral(CDF_B(x)) > integral(CDF_A(x)) for all x without CDF_B(x) > CDF_A(x) for all x as well :/
Thanks in advance.
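Yes, it is possible; the classic construction is a mean-preserving spread. A worked example (my own):

```latex
Let $A$ pay $1$ with certainty and let $B$ pay $0$ or $2$ with probability $\tfrac12$ each,
so the means are equal and $B$ is a mean-preserving spread of $A$. The CDFs are
\[
F_A(x)=\mathbf{1}\{x\ge 1\},\qquad
F_B(x)=\tfrac12\,\mathbf{1}\{0\le x<2\}+\mathbf{1}\{x\ge 2\}.
\]
First-order dominance fails in both directions: $F_A(1.5)=1>\tfrac12=F_B(1.5)$,
while $F_B(0.5)=\tfrac12>0=F_A(0.5)$. Yet
\[
\int_{-\infty}^{x}\bigl[F_B(t)-F_A(t)\bigr]\,dt=
\begin{cases}
0, & x\le 0,\\
x/2, & 0\le x\le 1,\\
1-x/2, & 1\le x\le 2,\\
0, & x\ge 2,
\end{cases}
\]
which is $\ge 0$ for all $x$, so $A$ second-order stochastically dominates $B$.
```

The second-order condition only needs the area between the CDFs to stay nonnegative cumulatively; the pointwise comparison may fail on an interval (here $[1,2)$) as long as the area accumulated earlier covers it.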