Some towns have nice people. Other towns are known to be full of donkeyholes, where donkey = ass.
I am trying to form a "jerk index" of entities. Those entities could be communities, towns or countries. But I suppose they could also be organizations, disciplines/fields, subreddits (wallstreetbets), or anything else. However, for the model I am focusing on towns, cities, states and countries.
The most important thing is that the parameters used in the jerk index should be readily available from secondary data alone.
What easily-observable-from-secondary-data parameters would you consider if you had to form a "jerk index" of towns, cities or states? Without visiting a town, you want to be able to tell from the data alone, whether the people in that town are major jerks. Any ideas of the variables I should consider in the model?
Hello,
I have panel data comprising three waves:
Based on this information, I have calculated the following trajectories of people:
I cannot figure out how to determine the impact of gender, education, age, and experience on labor market exit during the economic shock period, conditional on being employed in the first period.
Hello
I'm looking to test whether Irish stock returns on a given index are correlated with macroeconomic variables, e.g. GDP.
What steps would I take to build and test this model?
My previous CMV was successful so I am hoping for the best.
I am new to social science, especially economics. I come from a physics background and honestly, I have never encountered such a subject in my life (I never had economics in school) - a subject so mired in controversy, where the very basics, like the concept of utility, fall apart in reality and every single approach faces huge criticism.
The moment I studied 'utility' - both cardinal and ordinal - I became partially convinced this subject is messed up. What even is this 'utility'? How can you say that a person gets this many 'utils' from consuming a product, and also claim that it is 'independent'?? This kind of mathematical approach to human behaviour made me very, very uncomfortable, and I searched for it on Google. And what did I find? A 'util' is a hypothetical concept that is not even measurable in real life - just a tool for building models. Like what? I have never seen such a thing in physics. In physics, everything - from force to electron spin - is real and measurable. So you mean to say that something as basic as the 'util' - on which very important economic fields like Welfare Economics, Game Theory (for which they even award Nobel Prizes), Utility Theory, and Choice Theory are built - is hypothetical and realistically cannot even be measured? That's like doing physics and saying force can't be measured.
Not just that, but apparently the entire advanced mathematical approach to economics is shrouded in controversy, with both heterodox schools like the Austrian and Marxist schools, as well as mainstream economists - even father figures like Keynes - criticizing the overuse of advanced calculus, optimization, and the like. So for many people, all these Advanced Mathematical Methods for Econ classes are a complete waste of time! There are thousands of articles on the internet, research papers, theses, and the Wiki on how mathematical economics faces strong criticism.
In physics we mostly study non-living things - electrons, protons, molecules, billiard ball trajectories, bullets, pulleys, etc. - things that don't have sentience and that follow simple universal laws. Even the most complicated thing, like climate, is just a complex system of non-sentient fluids like air and ocean. And we are applying the same approach to a system of humans? We are applying complicated game theory and optimization techniques to human behaviour? When was the last time you calculated an integral to purchase something?
I'm conducting an ex-ante forecasting experiment by constructing forecasts for the log weekly closing prices of the S&P 500 using data from 2000-2021 (forecasts are being constructed for the year 2020). The assignment asks me to construct a simple deterministic time trend model defined as: y(t) = alpha + beta*t + epsilon. Alpha, beta and epsilon are all parameters that I've already defined, and t is said to take the value of 1 for 2000:1, 2 for 2000:2, 3 for 2000:3, and so on. I'm not really clear on what that means, since that intuitively seems like it would eventually have me setting t equal to values around 1000. Forecasting that way would be pointless since the forecast would be really erroneous; I know I'm misinterpreting somehow. Any pointers for how I could figure out how to define t?
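To make the indexing concrete, here is a rough Python sketch of the trend fit, with a noiseless toy series standing in for the real log prices (the exact week count is my assumption, not from the assignment). Large values of t are not a problem in themselves: the forecast just extrapolates the fitted line alpha + beta*t.

```python
# Deterministic time trend y_t = alpha + beta*t fitted by closed-form OLS.
def fit_trend(y):
    """OLS fit of y_t = alpha + beta*t with t = 1, 2, ..., len(y)."""
    n = len(y)
    t = list(range(1, n + 1))
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    beta = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) \
           / sum((ti - t_bar) ** 2 for ti in t)
    alpha = y_bar - beta * t_bar
    return alpha, beta

# Toy series: weeks 1..1043 play the role of 2000:1 through the end of 2019.
y = [7.0 + 0.001 * t for t in range(1, 1044)]
alpha, beta = fit_trend(y)

# Forecasts for 2020 then simply use the next values of t (1044, 1045, ...);
# t being near 1000 is fine -- the fitted line is just extrapolated.
forecast_week_1044 = alpha + beta * 1044
```

With real data the trend would be estimated on the in-sample weeks and the 2020 forecasts read off at the corresponding (large) values of t.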
Predict the AUD/EUR exchange rate on 31 December 2021 by developing your own theoretical model (combining any factors that you deem important in determining the AUD/EUR exchange rate). You will need to estimate an econometric model to receive credit for this part. You will need to provide justification for your model.
I live in a developing country where 1/2 dollars an hour is a good salary. So that's my average price. Anyone interested in developing a model either for their own benefit or for a university project I'm in. Pay is going to be discussed depending on complexity of the model, difficulty in doing and time worked.
I've done ADL, MA(n), ARMA(p,n), AR(p) models.
For those with no understanding of these models: you can tell me "I want to see how BTC/ETH/another major coin's price is affected by X variable (S&P, Dow Jones, other crypto prices, etc.)" and I will build a model in Stata that estimates the effect it has on price.
I am currently taking Econometrics I and I have to make a project - by project I mean a model with theory supporting the regression. I looked for several but could not find any that was simple, had theory supporting it, or had easy-to-access data, and the ones I was able to make had disappointing results.
1st: I estimated car sales based on WTI price, interest rates, and personal income. Problems: low R2, heteroskedasticity. 2nd: GDP based on unemployment, population, and corporate profit. Low R2.
3rd: I estimated alcohol consumption based on alcohol CPI, personal income, population, and spending on leisure goods. The coefficient on population was negative.
I don't know what else to do or look for. I don't know if I have to change my model and start from zero.
Any advice, recommendations, ideas will be highly appreciated
So far I regressed the model with eststo: probit [variables], vce(oim) and computed the partial effect of a specific variable at the average of other regressors like this:
> margins, dydx(balance) atmeans
Is that the right way of doing it? How do I interpret the results?
Then, for 2., I'm not sure how to include additional units of a variable - in this case, an additional 5 years of age. I assume I would have to generate a new variable, transforming age, and then do
> margins, dydx(age_new)
Any help is appreciated!
Hi y'all,
Can someone help me understand the results from this paper -- The impact of problem drinking on employment, Feng et al. 2001? https://sci.bban.top/pdf/10.1002/hec.611.pdf
The authors use a univariate probit and a bivariate probit to investigate the effect of problem drinking on employment status. With the univariate probit, they find a statistically significant positive correlation between problem drinking and employment for women only (insignificant neutral effect for men), but when they do the bivariate probit they find a negative correlation for women (although not statistically significant) and a significant and positive correlation for men. The correlation between the error terms in the bivariate model is quite large, and is significant for women and approaching significance for men, and the correlation is positive for women and negative for men.
My smoothbrain interpretation of this is that, because the magnitude of the correlation of the error terms is large in both cases (and significant for women), there is an unobserved variable (an 'endogeneity', and/or an 'unobserved heterogeneity'?) which correlates positively with both employment and problem drinking for women, and an unobserved variable (possibly a different variable) which correlates positively with employment and negatively with problem drinking for men. Thus we can conclude that the univariate probit is insufficient for demonstrating the relationship between problem drinking and employment for either men or women; that there are unobserved variables which have opposite effects for men and women; and that we cannot from this analysis alone conclude that there is any relationship or correlation between problem drinking and employment.
Are my interpretation and conclusions correct or have I misunderstood? Thank you!
Sorry if this is not the right community to ask this in, but I'm currently a second-year finance and economics student at a quality university with a strong focus on quantitative methods. I have already taken an elementary econometrics course covering regression, multiple regression, binary models, logit and probit, etc. In the next year or so I will have the option to choose some intermediate-to-advanced econometrics courses. I'm just wondering how applicable the models I'll learn in econometrics are to issues outside the social science spectrum - that is, the application of econometric models to data for non-formal purposes, call it "recreational analysis". For example, I know that certain models, when certain conditions are met, can be applied to social-science-type experiments, but what if I wanted to leverage econometric modelling to analyse more niche things such as sports (as in Moneyball, for example), card games, or stocks (though I know econometrics is already prevalent there)? I love finance and have an avid interest in statistics, Stata, and other programming languages, and I want to deepen my statistical modelling capacity beyond regressions. I am seriously considering chasing this pathway; however, I am worried that a lack of applicability outside social-science-type experiments, plus a heavy focus on theoretical content over practical implementation, will lead to growing boredom with the subject if I study it. Hence I ask whether what I learn can at least to some extent be applied to recreational "experiments", where my knowledge feels more tangible because I can actively apply it to things that interest me and can be analysed with statistics - outside of my studies, of course.
This is in the last steps of a basic Fischer contracting model to find the aggregate supply function, but our prof didn't give us a very comprehensive rundown of lag polynomials. I'm just trying to understand how the equality that cancels out E_{t-1}v_t is derived.
https://preview.redd.it/v9vl3w9fbtw61.jpg?width=1443&format=pjpg&auto=webp&s=31bab3d393f00a799e057d7241d90fd598452d32
Suppose we are working with three I(1) variables and that the result below is the Johansen cointegration test:
Unrestricted Cointegration Rank Test (Trace)
Hypothesized No. of CE(s) | Eigenvalue | Trace statistic | 0.05 critical value | Prob.** |
---|---|---|---|---|
None* | 0.394297 | 59.23088 | 42.91525 | 0.0006 |
At most 1* | 0.203773 | 20.12442 | 25.87211 | 0.2197 |
At most 2* | 0.029684 | 2.350447 | 12.51798 | 0.9432 |
Answer:
a) According to this test, how many cointegration vectors are significant? Which model should we use? Explain your answer.
According to the test results, we reject the null of no cointegration vectors in favor of the alternative of at least one vector. However, we fail to reject both "At most 1" and "At most 2", implying that the correct number of cointegration vectors is 1.
How can I decide which model to use from this information alone? Does "model" mean the deterministic specification - trend, trend + constant, etc.?
b) What would the equations of this model be?
I have the same doubt; how would I model the equations based only on the test results?
Thanks a lot!
Hi guys. I hope this place is also for asking for advice.
While developing the econometric model, I ran into an autocorrelation problem. I solved it with the Cochrane-Orcutt transformation and the autocorrelation indeed disappeared. The first observation also disappeared, which I later found out is a normal side effect of this procedure. I am building the project in RStudio, and my question is specifically about how to recover the first observation. I need it for later analysis (for example, a runs test), and I don't know how to reconstruct it.
Anyway, my model has 72 observations. If you know of a way to check the randomness and symmetry of the model's residuals despite the missing first observation, let me know - without it I don't know how to do this, even though I have all the other residuals.
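For reference, the Prais-Winsten variant of the AR(1) transformation keeps the first observation by rescaling it with sqrt(1 - rho^2) instead of dropping it; a minimal sketch of the transform (in Python for illustration, though my project is in R):

```python
import numpy as np

def prais_winsten_transform(y, X, rho):
    """AR(1) GLS transform that keeps the first observation: rows 2..n are
    quasi-differenced as in Cochrane-Orcutt, while row 1 is rescaled by
    sqrt(1 - rho**2) instead of being dropped."""
    y_star = np.empty_like(y, dtype=float)
    X_star = np.empty_like(X, dtype=float)
    scale = np.sqrt(1.0 - rho ** 2)
    y_star[0] = scale * y[0]
    X_star[0] = scale * X[0]
    y_star[1:] = y[1:] - rho * y[:-1]
    X_star[1:] = X[1:] - rho * X[:-1]
    return y_star, X_star

# Toy example: 5 observations, intercept + one regressor
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(5), np.arange(5.0)])
y_star, X_star = prais_winsten_transform(y, X, rho=0.5)
```

With this variant all 72 observations survive, so residual diagnostics such as the runs test can use the full sample.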
Julie
Hey guys. I am a beginner in the field of econometrics and I need help creating an econometric model
I am writing a dissertation on the impact of Quantitative Easing on gilt yields, and I want to include independent variables which affect bond yields such as QE, expected inflation, Bank of England base interest rates, and debt as % of GDP. The dependent variable is gilt yields. I want to do an OLS regression.
The issue is that the data I have for all the variables is quarterly, except for QE, where the data is irregular (i.e. not quarterly). How would I go about overcoming this difficulty? Would a dummy variable for QE work (where the dummy = 1 after QE is implemented)?
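To make the options concrete, here is a pandas sketch of aligning irregular QE dates to a quarterly sample - the announcement dates and amounts below are illustrative placeholders, not the actual Bank of England operations:

```python
import pandas as pd

# Hypothetical QE announcement dates and amounts (GBP bn) -- placeholders only
qe = pd.Series(
    [200.0, 75.0, 50.0],
    index=pd.to_datetime(["2009-03-05", "2011-10-06", "2012-02-09"]),
)

# Quarterly sample matching the other variables
quarters = pd.period_range("2008Q1", "2012Q4", freq="Q")

# Option 1: cumulative QE stock, carried forward to every quarter
qe_by_quarter = qe.groupby(qe.index.to_period("Q")).sum()
qe_stock = qe_by_quarter.cumsum().reindex(quarters).ffill().fillna(0.0)

# Option 2: simple post-QE dummy (1 from the first announcement onward)
qe_dummy = (qe_stock > 0).astype(int)
```

A cumulative stock variable arguably carries more information than a 0/1 dummy, since it distinguishes the size of successive programmes rather than only their existence.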
I have tried reading other papers on this topic but the models seem confusing. I saw another post on this topic on this subreddit but I couldn't find the answer I needed.
Apologies if I have not explained the issue well!
Is there overlap between the subjects? Is it worth taking both, or is it better to choose one and pick another subject based on one's goals?
Hi-
If I wanted to simply measure the effect of a TV campaign by predicting what sales would have looked like business as usual, would building an econometric model with marketing spends as the regressors make sense? And would I want to exclude TV from the model, and only use other marketing channel spends as regressors? Additionally, would I want to use the entire data set (redo analysis as new data comes in) or use a training/holdout data set prior to TV starting to predict sales?
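To sketch the pre-period idea I have in mind: fit the baseline model on data from before TV starts, with TV spend excluded, then predict sales during the TV period and take the difference. Everything below is simulated, including the single made-up non-TV channel:

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = 104
search = rng.uniform(10, 20, weeks)            # made-up non-TV channel spend
sales = 100 + 3.0 * search + rng.normal(0, 2, weeks)
tv_start = 78                                   # TV campaign begins here
sales[tv_start:] += 15                          # "true" lift, unknown in practice

# Fit the baseline on pre-TV weeks only, with TV spend excluded entirely
X = np.column_stack([np.ones(weeks), search])
coef, *_ = np.linalg.lstsq(X[:tv_start], sales[:tv_start], rcond=None)

# Business-as-usual counterfactual for the TV period, and the implied lift
counterfactual = X[tv_start:] @ coef
incremental = float((sales[tv_start:] - counterfactual).sum())
```

Training only on the pre-TV window keeps the campaign's own effect out of the baseline, which is the point of the holdout approach.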
Any help is appreciated!
Hey everyone! One of my assignments requires that I grab some data that my professor provides and then choose the correct form to graph it out. So far only level-level, level-log, log-log, log-level, and the quadratic models have been discussed so I feel like I should use one of those. Anyways, there are four variables x, y, z, w. And I am supposed to use y as the dependent variable. The problem is that when I plot y against each variable individually, it shows up as a sideways teardrop shape and I haven't encountered data like this before so I don't know what to do. Any help would be appreciated.
Here's the link to the scatter plots I mentioned: https://imgur.com/gallery/oJMVq5x
Hi,
I am trying to forecast EUR/USD using an econometric model as part of an assignment.
The following are the independent variables I am considering:
growth differential, yield differential (10 year), inflation differential, crude oil and M1 growth differential
Please see the output
Call:
lm(formula = eur_usd ~ yield_diff + crude_oil + infl_diff + growth_diff +
m1_diff, data = train)
Residuals:
Min 1Q Median 3Q Max
-0.202599 -0.080471 -0.000001 0.072920 0.231584
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.8756421 0.0214444 40.833 < 2e-16 ***
yield_diff 0.0591806 0.0134197 4.410 1.71e-05 ***
crude_oil 0.0056936 0.0003068 18.560 < 2e-16 ***
infl_diff -0.0007536 0.0102356 -0.074 0.94138
growth_diff -0.0177423 0.0055277 -3.210 0.00156 **
m1_diff 0.0042970 0.0013537 3.174 0.00175 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.09543 on 194 degrees of freedom
Multiple R-squared: 0.7261, Adjusted R-squared: 0.719
F-statistic: 102.8 on 5 and 194 DF, p-value: < 2.2e-16
Can someone please tell me why the coefficient on infl_diff is insignificant? Also, if I run the regression with just crude oil and the yield differential, the adjusted R2 is still around 0.68.
I ran a regression on the residuals and there is no autocorrelation; also, the independent variables and the residuals are not correlated.
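One diagnostic I have not shown is multicollinearity: if infl_diff moves closely with yield_diff, its standard error will be inflated even if the variable matters. A quick variance inflation factor (VIF) check in Python, with simulated data standing in for mine (the strong yield/inflation correlation here is an assumption for illustration):

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 200
yield_diff = rng.normal(size=n)
# Assumed for illustration: inflation differential tracks the yield differential
infl_diff = 0.9 * yield_diff + 0.1 * rng.normal(size=n)
crude_oil = rng.normal(size=n)

X = np.column_stack([np.ones(n), yield_diff, infl_diff, crude_oil])
# One VIF per regressor, skipping the constant in column 0
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
```

A VIF well above 10 for infl_diff, alongside an adjusted R2 that barely drops when the variable is removed, would be consistent with the pattern in the output above.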
If you have any suggestions on how to improve this model, please drop a comment, that would be really helpful.
Thanks.
Hi guys!
I want to find intergenerational mobility in terms of education and occupation. Basically I am trying to analyze intergenerational mobility of Muslims with respect to Scheduled Castes/Scheduled Tribes and Other Forward Religions in India.
My model is (B = beta):
x_i'B = B0 + B1*age_i + B2*age_i^2 + B3*Muslim_i + B4*OtherCommunities_i + B5*rural_i + B6*HouseholdSize_i + RegionDummies
Other Communities is a dummy variable with 1=SC/ST and 0=Forward Religions
I don't know if I am doing it correctly or should I adopt an intergenerational elasticity approach?
Hello!
For my Econometrics (I) course we have to develop a multiple linear regression model for an economics-related topic and I'm having some trouble coming up with good ideas that I can actually find information on.
Starting an econometrics model from scratch can be quite overwhelming, and I've run into one of two problems each time I attempt a model:
It's honestly thwarting my creativity at this point and I'm stuck. I want a cool, fun, out-of-the-box topic but I just don't know how/where to start researching or find resources.
Any help/ideas would be much appreciated!
The basic criteria for the project is: multiple linear regression model with at least 3 significant explanatory variables. Must be related to economics, of course.
Edit: I am SO grateful to everyone who contributed! Every reply has been incredibly useful, I think Iβve got a pretty good idea of what I need to do.
I need it for a class presentation. Please help!
Edit: Thank you so much! Although I missed one thing. The model does need to have regression in it. Is there any other one with regression in it?
This would probably cover a group of developing countries, so data is quite hard to come by. I was considering regressing GDP on capital, labor, etc., plus renewable energy consumption or supply, and measuring the significance of the effect. But I am curious whether there is a macroeconomic model I can adapt for my research.
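For instance, the log-log (Cobb-Douglas style) specification I have in mind, sketched in Python on simulated country data - all magnitudes are made up, and the coefficients are then elasticities:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
# Simulated country-level data; all magnitudes are invented
capital = rng.lognormal(3.0, 0.5, n)
labor = rng.lognormal(2.0, 0.4, n)
renew = rng.lognormal(1.0, 0.6, n)
log_gdp = (1.0 + 0.4 * np.log(capital) + 0.5 * np.log(labor)
           + 0.1 * np.log(renew) + rng.normal(0, 0.05, n))

# Log-log regression: each slope is the elasticity of GDP with respect
# to that input, holding the others fixed
X = np.column_stack([np.ones(n), np.log(capital), np.log(labor), np.log(renew)])
coef, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
```

The coefficient on log renewable energy then answers "by what percent does GDP change when renewable energy use rises by one percent", which is the kind of effect I want to measure.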
Hi Guys!
I want to compare the following:
The intergenerational educational mobility of Muslims with respect to Forward Castes/Religions and SC/STs. Should I use Fixed Effects/Within-Group Estimation or something else? I cannot figure it out.
Can anyone recommend some books to read next?
What I'm interested in is learning about what economic models the Federal Reserve or any person would be using to try and understand Macroeconomics.
I have read Mankiw's Macroeconomics, Krugman on microeconomics, and Econometrics For Dummies so far. The last one was really much more about statistics and was a good refresher for me, having studied math before, but it didn't really have any examples of economic models - like examples of DSGE models and such.
I've been trying to look for books on amazon but don't actually know which subject of economics I ought to be putting in the search box.
Thanks
I'm thinking of taking Econ 490 International Finance (Prof. Gregory Howard) and Econ 490 Nonlinear Econometrics Models (Prof. Russell Weinstein) in Spring 2020. Can anybody provide insights/comments/advice on these classes and professors? There are no ratings for them on ratemyprofessors.com, so comments about the professors in general, their grading style, difficulty level, and any kind of tips will be very helpful!
I'm studying for a Masters in Energy, and in one of my assignments I am asked to research asymmetry in transport fuel demand.
My data set includes:
fuel consumption per state per month, 2005 to 2018,
fuel price per state per quarter, 2005 to 2018,
GDP per year,
population per year,
total vehicle fleet per state per year.
I come from an engineering background, so I really don't know where to begin with econometrics.
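From what I have read so far, one standard starting point for demand asymmetry is the Wolffram/Gately decomposition of the price series into cumulative increases and cumulative decreases, which then enter the demand equation as separate regressors. A minimal Python sketch (toy prices, not my data set):

```python
import numpy as np

def decompose_price(p):
    """Wolffram/Gately-style decomposition: split a price series into
    cumulative increases and cumulative decreases, so that demand can
    respond asymmetrically to price rises vs. price falls."""
    dp = np.diff(p, prepend=p[0])
    p_up = np.cumsum(np.maximum(dp, 0.0))      # cumulative price rises
    p_down = np.cumsum(np.minimum(dp, 0.0))    # cumulative price falls
    return p_up, p_down

p = np.array([1.0, 1.2, 1.1, 1.3])
p_up, p_down = decompose_price(p)
# By construction, p[0] + p_up + p_down reproduces the original series
```

Regressing fuel consumption on p_up and p_down separately (plus income, fleet, etc.) and testing whether the two coefficients differ is one common way to test for asymmetry.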
thanks
"The loss which America has sustained since the peace, from the pestilent effects of paper money on the necessary confidence between man and man, on the necessary confidence in the public councils, on the industry and morals of the people, and on the character of republican government, constitutes an enormous debt against the States chargeable with this unadvised measure."
Federalist No. 44, January 25, 1788
Explain how your econometric models account for this, Keynesian dipshits.