Custom Point Estimator

While nailing down the points for an Army Card takes a lot of playtesting, I've developed a spreadsheet calculator that provides a starting point for making customs.

One of the challenges was estimating the extreme units, such as Isamu or Krug, but with this system I can get within about +/- 20 points for either of them.

There is a lot of estimation in this calculator, especially when it comes to special powers, and certain powers escalate in combination: having Flying on a Ranged unit pushes the points up, the more Defense a unit has the more powerful Stealth Dodge is, and so on.

https://docs.google.com/spreadsheets/d/1du6i-ro4HKYm7jxFsLaxFtNU1mtUXdDdHlfEi919RBs/edit?usp=sharing

https://preview.redd.it/ucieytm7c6m51.png?width=834&format=png&auto=webp&s=e8e924a6bf910f1f4e0344d5c8f8b27952805f9a
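
For anyone who wants to tinker before opening the sheet, here is a minimal Python sketch of the kind of heuristic such a calculator might encode. Every weight, power name, and interaction multiplier below is a hypothetical placeholder, not the spreadsheet's actual formula.

# Hypothetical custom-point heuristic; all weights here are made up for illustration.
def estimate_points(life, move, range_, attack, defense, powers=()):
    # Base cost from the core stats (placeholder weights).
    points = 5 * life + 2 * move + 3 * range_ + 6 * attack + 6 * defense

    # Flat surcharges for special powers (placeholder values).
    power_cost = {"Flying": 15, "Stealth Dodge": 10, "Double Attack": 25}
    points += sum(power_cost.get(p, 10) for p in powers)

    # Escalating interactions, as described above:
    # Flying is worth more on a ranged unit, and Stealth Dodge scales with Defense.
    if "Flying" in powers and range_ >= 4:
        points *= 1.15
    if "Stealth Dodge" in powers:
        points += 3 * defense

    # Round to a multiple of 5, like most official Army Cards.
    return int(round(points / 5.0) * 5)

print(estimate_points(life=1, move=8, range_=1, attack=3, defense=3,
                      powers=("Stealth Dodge",)))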

👍 25 · u/Jvosika · Sep 09 2020

[Q] Relative Efficiency of the mean vs. median in point estimators

I was reading in a textbook that, for samples drawn from a normal distribution, estimating the center with the sample median produces a standard error about 25% larger than using the sample mean. I'm not sure how worthwhile it would be to dive into the details of why this occurs (I found a very brief article on Asymptotic Relative Efficiency, which more or less said "it's complicated").

Could someone give me the TL;DR on why this happens? Is there a quick, down-and-dirty bit of math or intuition behind it?
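
A short sketch of where the 25% comes from, for i.i.d. data from a normal distribution with mean \(\mu\) and variance \(\sigma^2\):

\[
\operatorname{Var}(\bar X) = \frac{\sigma^2}{n},
\qquad
\operatorname{Var}(\tilde X) \approx \frac{1}{4\,n\,f(\mu)^2} = \frac{\pi\sigma^2}{2n},
\]

since the asymptotic variance of a sample median is \(1/\bigl(4 n f(m)^2\bigr)\) and the normal density at its median is \(f(\mu)=1/(\sigma\sqrt{2\pi})\). The ratio of standard errors is therefore \(\sqrt{\pi/2}\approx 1.25\), i.e. the median's standard error is about 25% larger, and its asymptotic relative efficiency against the mean is \(2/\pi\approx 0.64\).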

👍 18 · u/BlackJack5027 · Aug 20 2020

TC Spreadsheet with Min/Max EC and point estimator

NEW AND IMPROVED! I shared this spreadsheet during the Owl TC, but I have updated the characters for the current TC AND I have added a NEW tab that will calculate min and MAX MC and points. All you need to do is go to FILE > Make a Copy > save your own copy, then plug in your characters' levels.

Gord TC Spreadsheet

There are two options for calculating: individual tabs for each chapter that only calculate MIN, and a tab that includes ALL chapters that calculates min/max (but the layout might be confusing).

The MAX calculations are based on my observations over several TCs (meaning, I put time and effort into it). If you notice an error in the MAX amount, please let me know. I have tried to figure out the formula, but I'm a word person, not a math person.

The tab that includes MAX calculations incorporates some formulas I do not use often, so please let me know if a formula appears to be missing or incorrect.

*Note: The Min/Max calculation assumes one run per character per day, except for the featured character (calculated at 8 runs per day). I don't have anything built into the Min/Max Calculator to account for using cocoa. There are definitely ways to tweak it to include extra runs and use of cocoa, but I tried to keep it as simple as possible.

*Also note: If you comment here with an issue, please give me some time to respond. Life is busy.

👍 22 · u/Vampireladybug · Aug 28 2020

Point Estimate vs Point Estimator: are they two different things? What's the difference then?

My textbook seems to suggest a point estimate and a point estimator are two different things. I'm confused about when to use which one.

From my textbook:

>For example, if we use a value of X̄ to estimate the mean of a population, an observed sample proportion to estimate the parameter θ of a binomial population, or a value of S^(2) to estimate a population variance, we are in each case using a point estimate of the parameter in question. These estimates are called point estimates because in each case a single number, or a single point on the real axis, is used to estimate the parameter.
>
>Correspondingly, we refer to the statistics themselves as point estimators. For instance, X̄ may be used as a point estimator of μ, in which case x̄ is a point estimate of this parameter. Similarly, S^(2) may be used as a point estimator of σ^(2), in which case s^(2) is a point estimate of this parameter. Here we used the word "point" to distinguish between these estimators and estimates and the interval estimators and interval estimates.

Does that mean S^(2) and s^(2) are different? What's the difference?
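
Roughly: the estimator S^(2) is the rule, a random variable defined as a function of the sample before it is observed, while the estimate s^(2) is the particular number that rule yields once your data are plugged in. A small illustration in Python (NumPy, with made-up data):

import numpy as np

rng = np.random.default_rng(0)

# The estimator: a rule that maps any sample to a number.
def S2(sample):
    return sample.var(ddof=1)  # sample variance with the n-1 denominator

# One observed sample from a population whose true variance is 4.
x = rng.normal(loc=10, scale=2, size=25)
s2 = S2(x)       # the estimate: a single realized number
print(s2)

# Across repeated samples the estimator has a sampling distribution;
# any one estimate is just one draw from that distribution.
print([round(S2(rng.normal(10, 2, 25)), 2) for _ in range(5)])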

👍 2 · u/shiningmatcha · Nov 13 2020

PSA: Even in quiet yose, a good score estimator can be 20 points mistaken

For a while in the past, I became a little bit dependent on using a score estimator (Sabaki seemed to be the most reliable) upon entering an unclear yose. Typically, this was because I was anxious about where the dust was settling, and because I didn't want to hold up my opponent by counting manually, even if I had the time. Eventually, though, I came to realize this was an addiction that made me even more anxious in yose, and might be considered cheating by some standards. So, I ceased using it outside of reviewing purposes; but a recent game I had makes me question using it even then.

I wasn't a complete fool; I knew that any automatic score estimation had to be adjusted for small mistakes it made. But look at the following game and estimation of B+15.5. It actually looks pretty accurate, doesn't it? It's even Black's move! But something's not right... In every contested boundary on the board, Sabaki happens to be a little too optimistic for Black. Lizzie is 99% confident of a White victory. This is how the game finishes, at W+5.5, even after letting Black win two one-point kos at the end (this is the game sequence if you're curious, starting at 204). I've overlaid the two boards and tabulated the differences between them (Japanese scoring; this was actually a pretty meticulous operation to get right), and believe it or not, the thin Black territory in the bottom left isn't even where the score estimator was most off.

Even as a 2k with experience accounting for Sabaki's shortcomings, I never would've guessed a 20+ point swing. If I had used a score estimator at this point, I might've just resigned to save me and my opponent the trouble. Worse yet, if I was Black and had seen this estimation, and went on to lose, I would've been very frustrated.

All this has left me disillusioned, feeling that counting, whether manual or automatic, is kind of pointless. Unless you're a Rainman of fractional-point yose sequences and quick arithmetic, your count is prone to swings as large as you'd get from simply eyeballing empty areas, or from your "internal compass" during the game. Thoughts?

👍 10 · u/Feryll · May 29 2019

Last night, Basketball Reference's positional estimator had Dwight Howard's position as Point Guard 100% of the time imgur.com/a/1srmz3y
👍 61 · u/PormanNowell · Nov 06 2018

[Statistics] What is the point of estimators?

From my understanding, estimators are used whenever you do not know the true value of what you are looking for. In that case, how do you know if an estimator is close to the true value if you do not know what the true value actually is? How do you determine if an estimator is biased if you do not know the value of the true parameter? If you do know the value of the true parameter, then what is the point of the estimator?
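
One way this circle gets squared in practice: properties like bias are statements about an estimator's behavior under an assumed model, so they can be derived mathematically, or checked by simulation where the true parameter is known because we chose it. A minimal sketch of the simulation route, assuming normal data (NumPy):

import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0   # known here only because we picked the simulation model
n = 10

# Two competing estimators of the population variance.
unbiased, mle = [], []
for _ in range(100_000):
    x = rng.normal(0.0, np.sqrt(true_var), n)
    unbiased.append(x.var(ddof=1))  # divides by n-1
    mle.append(x.var(ddof=0))       # divides by n

print(np.mean(unbiased) - true_var)  # ~0: unbiased
print(np.mean(mle) - true_var)       # ~ -true_var/n: biased low

With real data you never observe the bias directly; you rely on these model-based guarantees (or on resampling diagnostics) to pick an estimator whose sampling distribution is centered near the truth.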

👍 2 · u/ADDMYRSN · Apr 11 2019

Standard deviation is a biased point estimator right?

I sat through a seminar recently and raised my eyebrow a bit when the lecturer said that s was an unbiased estimator of sigma, because I was almost certain that's wrong. I googled it quite a bit and was not surprised to see that I'm right. But I went back and forth on it with the lecturer a bit and he said it's not really biased as long as n is large enough, which... is sort of true, but that's not the same thing.

My understanding is that s will, on average, underestimate sigma, which is why it needs to be corrected with the c4 factor (or c2, or whichever one it is).

I'm really concerned, because the rest of this course is going to be using point estimators for sigma, and if they are treated as unbiased the whole way through, every bit of the math will be off unless there's just a standing assumption that n > 40 or whatever, where s is only barely offset from sigma.

Anyway, am I taking crazy pills here? I've never had a lecturer I disagreed with like this before, and I really like this guy, so I don't want to get in his face about it.
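
For reference, the poster's recollection matches the standard result: for normal samples, the expected value of s falls short of sigma by a factor c4 that depends only on n,

\[
E[s] = c_4(n)\,\sigma, \qquad
c_4(n) = \sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma\!\left((n-1)/2\right)},
\]

e.g. c4(5) ≈ 0.9400 and c4(25) ≈ 0.9896. The bias is real but shrinks quickly as n grows, which is why "unbiased for large n" sounds almost right while still not being the same thing as unbiased.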

👍 17 · u/Hellkyte · Jun 14 2016

Future Points Estimator Spreadsheet

Hey all! I wanted to share the spreadsheet I've been using for the last couple of months to estimate my future points. Save the document so that it's on your personal account, fill in the two highlighted cells, and enter when you think you might buy something.

Here it is! Google Doc

The first highlighted cell is "Estimated Points/day". Enter how many points you expect to earn every day on average. I use 300, but if you're crazy you can probably put 330 :)

The second is your current point balance as of today. This should be today's balance AFTER your total earned points.

I update the sheet with the correct amount about twice a month, usually around the 15th and the last day of the month (unless I forget).

Hope you get some use from it.
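
The projection behind the sheet is straight-line arithmetic, which makes it easy to sanity-check by hand; here is a rough sketch in Python (the 300/day rate is the example from the post, and the balance is a made-up number):

from datetime import date

points_per_day = 300     # your estimated average daily earn rate
balance_today = 12_500   # hypothetical current balance

def projected_balance(target_date, today=None):
    # Straight-line projection: current balance plus rate times days remaining.
    today = today or date.today()
    days = max((target_date - today).days, 0)
    return balance_today + points_per_day * days

print(projected_balance(date(2018, 12, 25), today=date(2018, 8, 31)))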

👍 5 · u/Squirrel09 · Aug 31 2018

Are t distributions just another way of unbiasing point estimators of sigma?

I feel really stupid right now.

So s is a biased point estimator of sigma, right, but it's effectively unbiased at a high enough n. At a low n you would need a correction factor like c4 or something.

But the t distribution already accounts for that, doesn't it? This is the most basic crap and I can't believe I missed it. If the t distribution accounts for the low n, then why even use the c4 correction factor? Or, if the c4 term corrects the point estimator, why not just use the Z distribution?

What's the difference between these two approaches? Can I just correct s with c4 and use a Z distribution? Or can I build a Shewhart control chart with a t distribution and just skip all the correction factors?

This is wrinkling my brain a bit here.

👍 10 · u/Hellkyte · Jun 29 2016

Struggling with more advanced point estimators

I have been struggling with more advanced point estimation, consistency, sufficiency, method of moments, minimum variance unbiased, and maximum likelihood. Any resources or other tips for understanding these topics better?
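
A compact worked example that ties several of these ideas together (exponential data, chosen purely for illustration): for \(X_1,\dots,X_n\) i.i.d. Exponential with rate \(\lambda\), so \(E[X_i]=1/\lambda\),

\[
\text{MoM: } \bar X = \frac{1}{\lambda} \;\Rightarrow\; \hat\lambda_{\mathrm{MM}} = \frac{1}{\bar X},
\qquad
\text{MLE: } \ell(\lambda) = n\log\lambda - \lambda\sum_i x_i,\;\; \ell'(\lambda)=0 \;\Rightarrow\; \hat\lambda_{\mathrm{MLE}} = \frac{1}{\bar X}.
\]

Here the two methods happen to coincide; the factorization theorem shows \(\sum_i X_i\) is sufficient, the law of large numbers makes \(\bar X\) a consistent estimator of \(1/\lambda\), and since \(\bar X\) is unbiased for \(1/\lambda\) and a function of a complete sufficient statistic, Lehmann–Scheffé makes it the minimum variance unbiased estimator of the mean. Working one small family end to end like this is often the fastest way to see how the concepts fit; Casella and Berger's Statistical Inference is a standard reference for all of them.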

👍 3 · u/brookswilliams · Mar 12 2019

The Remarkably Unclutch Oakland A's -- BaseRuns is the best public estimator of how well a team's individual events should add up to runs and wins ... Instead of being 12-21, BaseRuns actually expects that the A's would be 18-15 at this point. fangraphs.com/blogs/the-r…
👍 32 · u/nowhathappenedwas · May 11 2015

What is the intuition behind more observations, or explaining more of the variation in the dependent variable, increasing the efficiency of the point estimator?

If I regress y on x, and assuming x is exogenous, I get some coefficient. I hear people saying 'I can add more covariates for precision', and it makes sense that if x is exogenous, covariates won't influence my estimate, and I understand mathematically why they lower the standard errors. But I'm confused about why, intuitively, explaining more of y, or using more observations, makes my estimate of beta more efficient, especially if the point estimate doesn't change. To me, it seems like the underlying sampling distribution of my coefficient stays the same (just as the consistency and unbiasedness properties should remain the same).

I hope this question makes sense!
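
The intuition is visible directly in the variance of the OLS slope (simple regression with homoskedastic errors):

\[
\operatorname{Var}\!\left(\hat\beta \mid x\right) \;=\; \frac{\sigma^2}{\sum_{i=1}^{n}\left(x_i-\bar x\right)^2}.
\]

More observations add positive terms to the denominator, and covariates that soak up variation in y shrink the residual variance \(\sigma^2\) in the numerator. The point estimate itself may barely move, but the sampling distribution of \(\hat\beta\) does change: it tightens around the same center, which is exactly what greater efficiency means; consistency and unbiasedness are statements about the center, not the spread.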

👍 2 · u/Whynvme · Nov 11 2018

Domination Mode - MyTeam Points Estimator

What's up, fellow 2Kers. I was bored and decided to create a Windows program that estimates the amount of MyTeam Points you have earned so far in the match you are currently playing. I posted a guide on Steam that explains more about it:

http://steamcommunity.com/sharedfiles/filedetails/?id=340815519

It isn't only for PC players, though, because the game uses the same calculations across platforms. I am not releasing the program to everyone yet, because you would have to download and install it on your computer, and I think it would be better to have it run on a website. I will probably convert it to a web application if people actually want to use this tool.

Screenshots are in the guide, so check the guide out if you want to see what I created.

Thoughts appreciated!!

*Edit: I re-coded the application so it can run on my website!! It works on my iPhone and it should work in most browsers and on mobile devices! Here is the short link to the site :D http://bit.ly/MyTeamEstimator

👍 20 · u/originalwill23 · Nov 12 2014

How to find the point estimator for degrees of freedom from a chi square using Method of Moments

A random sample X1, ..., X53 from the chi-square distribution with m degrees of freedom has sample mean X̄ = 100.8 and sample standard deviation s = 12.4. (a) Find the point estimator of m using the method of moments. (b) Find a one-sided 90% lower confidence interval for m.

I need to understand how to rearrange the chi-square formulas to solve for the degrees of freedom m, but the websites I am looking at treat the df as a given parameter in the equations, so I am a bit confused.

Websites I referenced: https://www.statlect.com/probability-distributions/chi-square-distribution https://onlinecourses.science.psu.edu/stat414/node/148

Thanks!
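
For part (a), the key fact is that a chi-square random variable with m degrees of freedom has mean m (and variance 2m), so the method of moments just equates the first sample and population moments:

\[
E[X_i] = m \quad\Longrightarrow\quad \hat m_{\mathrm{MM}} = \bar X = 100.8 .
\]

For part (b), one common route, assuming the course's large-sample normal approximation with the sample standard deviation, is \(\bar X - z_{0.10}\, s/\sqrt{n} = 100.8 - 1.282 \times 12.4/\sqrt{53} \approx 98.6\) as the 90% lower confidence bound; check your notes for the exact recipe your class expects.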

👍 3 · u/onilgaparat · Oct 04 2017

I want to make a point spread estimator for next season. I have an idea of what stats I want to use and how I want to incorporate and weight them. But I need to learn how to compile these stats in an automated fashion so I'm not compiling 20 stats for 128 teams per week by hand

Like I said, I'm looking for tools to automatically update an Excel spreadsheet with new stats every week. All of these stats can be found on the NCAA's FBS stats page, plus perhaps the Sagarin SOS ratings.

What tools/skills should I learn in order to automate the data pulling process?
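
For context, a common starting point for this kind of weekly pull is Python with pandas, which reads HTML tables straight into dataframes and writes them to Excel. The URL below is a placeholder, not the actual NCAA stats endpoint, and the sheet name is arbitrary:

import pandas as pd

# Placeholder URL; point this at the real stats page you want to scrape.
STATS_URL = "https://example.com/fbs/team-stats"

def pull_weekly_stats(url=STATS_URL):
    # read_html returns every <table> on the page as a DataFrame.
    tables = pd.read_html(url)
    return tables[0]

if __name__ == "__main__":
    df = pull_weekly_stats()
    # Write to the workbook your model reads from (needs openpyxl installed).
    df.to_excel("team_stats.xlsx", sheet_name="week_latest", index=False)

Scheduling that script once a week (cron on Linux/macOS, Task Scheduler on Windows) removes the by-hand step; if a page doesn't expose plain HTML tables, requests plus BeautifulSoup, or a site's CSV export, fills the same role.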

👍 3 · u/radil · Nov 18 2015

This polling is very encouraging - but under this estimate, @UKLabour need a 13 point lead for a majority of *one*, while the Tories' 12 point lead in 2019 got them an *EIGHTY* seat majority. Never let anyone tell you that First Past The Post benefits Labour. #Labour4PR

https://twitter.com/joeotooie/status/1483131143906107397?s=21 Always worth reiterating: FPTP only benefits Labour enough to keep us in second place, but hardly ever enough to win. It is far easier for the Tories to win, as is evident.

👍 120 · u/Ranger447 · Jan 18 2022

How do you feel about using man hours/days for estimation? Story points are too vague for us to use them consistently. Thanks!
👍 37 · u/writer-old-writer · Jan 11 2022

[Riske] Speaking of a slump, here is why the Cardinals failed to win the division despite having been estimated with 96% odds to do just that at some point during the season. Meanwhile the Rams climbed out of it just at the right moment. But lightyears away from their peak performance twitter.com/PFF_Moo/statu…
👍 127 · u/Stauce52 · Jan 11 2022

On to 7 years of lifting. Started at 18 years old in the first pic, 25 now. Weight has fluctuated a lot from my starting point of 125 to my peak dirty bulk/fat phase of 190, to a comfortable 161-163ish after a year of dialing in my nutrition. Any body fat % estimates are appreciated! reddit.com/gallery/r4haeq
👍 271 · u/KingofPancakes7 · Nov 28 2021

Does there come a point, if a tradesman runs over his estimate, that he should absorb some of it himself?

I'm having work done on my windows right now; just over half the job is done so far and it's already running 1k over the estimate. It's renovation work on original sash windows that have been in place for 140 years, so finding things you didn't anticipate can happen, but part of me thinks this should have been anticipated if he does this as his only line of work.

Trying to work out if it's worth saying something.

👍 5 · u/Superdudeo · Jan 15 2022

We will be getting 8-10 inches of snow tonight. If I wake up to a level 3, do I attempt to go into work or do I call off? Will I get pointed for calling off during a level 3? BTW, this isn't just an estimate; we already have a good amount and the surrounding states have already been hit hard.
👍 6 · u/eivy0037 · Jan 17 2022

Proper Story point Estimation
👍 150 · u/jcolumbe · Dec 12 2021

Body fat estimate? Currently 5'10" 160 pounds. I'd guess around 15 percent. Also, what do you think my genetic strong point is? reddit.com/gallery/rpuabl
👍 27 · u/Amigors · Dec 27 2021

Hi all! What point color is my Suebug? I rescued her a year ago and the shelter said she is a Siamese mix and estimated she is around 10 years old 💟 reddit.com/gallery/qnm1wc
👍 2k · u/caroalien2 · Nov 05 2021

In the BBREF Play-by-Play data for Greg Monroe his position estimate is 100% Point Guard. Has positionless basketball gone too far?

Maybe the weird Iowa Wolves lineups we've had the past few games broke BBREF. He is averaging 5 assists in 23.5 MPG, though.

Point Moose

👍 51 · u/Carth_Onasi_AMA · Dec 30 2021

Odd question, but I've been wondering for a bit: do we have an estimate of roughly how many elites have died since first making contact with them? And what is their official population in the universe at this point?

I mean, they've been at constant war since that point.

• First, the immediate war with humanity following first contact (which, granted, they were absolutely dominating for the most part, though I assume it still cost a significant number of elites).

• Immediately followed by literal attempted genocide by the order they had just been protecting and dying for.

• Which itself almost immediately led to civil wars against equally technologically advanced forces, first the Covenant and then the elites themselves.

• Tonnes of elites are contracted by the Banished to help do their dirty work, resulting in them still fighting the UNSC (and so, Chief) to this day.

They're also extremely honor- and duty-bound to combat, and Halo 5 showed us that they'd rather die of battle wounds than seek medical attention. This all screams worrying numbers of dead elites to me. I wonder if their population is starting to reach near-threatened species levels.

👍 216 · u/JcraftY2K · Nov 22 2021

I'm sorry, everyone taking part in the hei point giveaway; it's going to take longer than originally estimated
👍 36 · Dec 21 2021

I am on day 8 of 100 Days of Code on Udemy. I made a project where I've inputted all my receipts from grocery store purchases. The point of this project is to input my grocery list before I go shopping and have it give me an estimated cost. This is my first project, so please don't make me cry :)
# In this program the user (me) inputs their grocery list prior to going shopping and is able to view an estimate of the total cost. The program assumes they are shopping at the same stores each week. A dictionary of items and their prices is the natural structure here, and I am willing to input all my own weekly grocery items along with their costs.




# Check to see if a key exists in the dictionary
# Add key and value to the dictionary
from itertools import repeat

grocery_index = {"instant oatmeal": 2.49, "buffalo chicken tenders": 6.23, "banza pasta shells": 3.00, "spoon roast": 25.00, "chia seeds": 4.99, "olive oil": 7.49, "mini peppers": 3.99,  "balsamic vinegar": 2.29,  "vitamin d": 4.99, "almonds": 5.99, "riced cauliflower": 1.99, "rice": 4.49, "frozen blueberries": 2.49,
"chicken breasts": 9.79, "eggs": 3.79, "gf bread": 4.49, "grassfed milk": 4.99, "kerrygold butter": 3.49, "carrots": .89, "dish soap": 2.99,
"honey": 5.99, "potatos": 3.49, "celery": 2.69, "red onion": .99, "bag of onions": 2.99, "onion": .69, "tomatos": 2.99, "artichoke hearts": 2.49,
"multivitamin": 11.99, "jalapeno peppers": 1.69,"gf chicken nuggets": 3.69, "mandarins": 3.29, "lemon": .39, "asparagus": 2.29, "avocado": 1.49, "salmon": 9.39,
"iceberg lettuce": 1.69, "whole chicken": 15.22, "hummus": 3.69, "garlic": 3.18, "snow peas": 2.49, "kale": 3.69, "bananas": 1.75,
"heavy cream": 2.29, "peppers": 3.99, "mini cucumbers": 1.99, "beef broth": 2.49, "broccoli": 2.49, "ground beef": 6.99,
"chicken pot pie": 11.99, "baby spinach": 5.29, "pepper jack cheese": 2.5, "df ice cream": 4.69, "hard goat chedder": 2.99,
"kefir": 2.99, "cottage cheese": 2.39, "milk": 1.79, "raos sauce": 8.99, "chick pea pasta": 2.5, "dark chocolate chips": 4.99,
"banza pasta shells": 3.49, "gatorade powder": 2.99, "macaroni and cheese": 2.59, "breakfast sausage": 2.5, "df butter": 4.39,
"true lemon": 2.00, "tea tree soap": 4.99, "paper towels": 12.99}

# I open the program and am assigned a list for input
my_list=[]

# Haven't purchased anything yet
cost = []

# Function to estimate the total cost of the grocery list
def grocery_total(grocery_index, my_list, cost):
    for item in my_list:
        if item in grocery_index:
            # Known item: add its price to the running cost list.
            cost.append(grocery_index[item])
        else:
            # No price on file yet; skip it so the rest of the list still totals up.
            print(f"No price recorded for {item!r}")
    amount = sum(cost)
    return amount
... keep reading on reddit ➡

👍 285 · u/kitestalker · Nov 10 2021

18M 5'9 205lb. BF % estimate? Weak points? reddit.com/gallery/rx5m0d
👍 13 · u/Zefyyre · Jan 06 2022

Can we relax with the "Guess my ethnicity" posts? It's starting to feel like spam at this point. If you want to post pictures of yourself that badly, just show us your ethnicity estimate. That would at least be interesting.
👍 64 · u/bbooth5 · Nov 24 2021

Driver Pay per Points (Estimated)
👍 2k · u/ImBaffledYT · Sep 01 2021

Fully speculative question: Estimate for Model Y price tipping point

I fully understand no one on this sub has the answer to this question… just curious for y'all's opinions on when you think we may see prices start to return to earth.

I have an overnight test drive next week and was fully planning on placing my order, but after seeing a chart of the price changes for 2021 alone (I hadn't realized it had gone up by 10,000 since February), I'm not inclined to pay that premium at this point.

👍 21 · u/cblackwe93 · Nov 22 2021

PSLV today ... $11.8 million into the Trust and 200,000 oz INTO THE VAULT, bought at about $25.91 per oz, or $1.85 (7.7%) above the COMEX mid point. That premium to COMEX is my estimate; however, it is the highest estimate in the post-squeeze era.

Here is a chart of my estimated purchase price for silver compared to the COMEX mid point price:

https://preview.redd.it/fwu7bbzduou71.png?width=768&format=png&auto=webp&s=60d44a5156d08cf9a9fda929c942fc6419f1a263

End of day cash is $24.5 million, the second highest in the post-squeeze era. That's enough to buy almost 1,000,000 oz. Here's the recent history of end-of-day cash:

https://preview.redd.it/8zwa664ptou71.png?width=811&format=png&auto=webp&s=46bf49a9794a0007f336e62e8f003fa8d455381d
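
As a quick sanity check on the numbers above, taking the mid point to be the purchase price minus the stated premium:

\[
\$25.91 - \$1.85 = \$24.06, \qquad \frac{1.85}{24.06} \approx 7.7\%,
\]

and $24.5 million of end-of-day cash at roughly $25.91/oz covers about 945,000 oz, consistent with "almost 1,000,000 oz".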

SLV's ledger was flat.

COMEX silver vaults were nearly flat.

COMEX gold vaults saw a lot of movement. Brinks and HSBC Bank combined for 264,000 oz of gold moved OUT OF THE VAULT. That is 0.8% of the total vault inventory. It is almost $500 million worth of gold.

👍 508 · Oct 20 2021

Point estimates for derivatives

I'm struggling a little with numerical evaluation. I have a model that depends on two variables, f(x,y). I need to evaluate the quantities

https://preview.redd.it/pu39kfmxgua81.png?width=1108&format=png&auto=webp&s=a86069f86030e43b83de2052a9110c051c835878

as well as

https://preview.redd.it/sfejjcqzgua81.png?width=1160&format=png&auto=webp&s=6d7d60fd8ff8120c66d7cd8fdf5b74d271560871

each evaluated at the point (\tilde x,\tilde y).

So far so good; my model cannot be expressed analytically, but I can produce point estimates f(x*, y*) by running a simulation. In principle I would create a grid of x and y values, evaluate the model function at each grid point, and compute a numerical derivative from it. The problem is that each simulation takes some time, and I need to reduce the number of evaluations without losing too much information (e.g., I have to assume that f is non-linear...).

I'm asking here for references on strategies, since I have no idea where to even start. Specifically, I want to know:

  • How can I justify a certain choice of grid size?
  • How can I tell that my grid size is too small?
  • Should I sample the input space by means other than a parameter grid? (Especially as I might not use uniformly distributed input spaces at some point.)

Thank you in advance for any interesting Wikipedia pages, book recommendations, and whatnot!
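
As a concrete starting point, here is a minimal central-difference sketch for first and second partials at a single point. `simulate` stands in for the expensive black-box model, and the step sizes hx and hy are the knobs the grid-size questions above are really about: too large biases the derivative, too small amplifies simulation noise.

import numpy as np

def simulate(x, y):
    # Placeholder for the expensive simulation; a smooth toy function here.
    return np.sin(x) * np.exp(-0.5 * y**2)

def central_derivatives(f, x0, y0, hx=1e-2, hy=1e-2):
    # First partials via central differences (O(h^2) truncation error).
    dfdx = (f(x0 + hx, y0) - f(x0 - hx, y0)) / (2 * hx)
    dfdy = (f(x0, y0 + hy) - f(x0, y0 - hy)) / (2 * hy)
    # Second partials reuse the center value to save one evaluation each.
    fc = f(x0, y0)
    d2fdx2 = (f(x0 + hx, y0) - 2 * fc + f(x0 - hx, y0)) / hx**2
    d2fdy2 = (f(x0, y0 + hy) - 2 * fc + f(x0, y0 - hy)) / hy**2
    return dfdx, dfdy, d2fdx2, d2fdy2

print(central_derivatives(simulate, x0=0.3, y0=0.7))

A cheap diagnostic for the step size is to halve hx and hy and see whether the estimates move by more than the noise level of a repeated simulation at the same point; if evaluations are very expensive, the usual next step is to fit a local surrogate (a low-order polynomial or a Gaussian process through a handful of sampled points) and differentiate that instead of the raw simulator.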

👍 2 · u/LtSmash5 · Jan 10 2022

16, 5'7, 164lbs; what are my weak points and estimated bf%? reddit.com/gallery/rzks1m
👍 14 · Jan 09 2022
