[Q] Calculating standardized coefficient/Beta in Excel

In the Excel Analysis ToolPak for regression, the results do not show the standardized coefficient (Beta). Is there a way to calculate it in Excel?
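In case it helps, the standardized coefficient can be recovered from the ToolPak output by rescaling each unstandardized slope with the sample standard deviations (which Excel's STDEV.S function gives you):

`[;\beta_j^{std} = b_j \cdot \frac{s_{x_j}}{s_y};]`

i.e. multiply each coefficient by the standard deviation of its predictor and divide by the standard deviation of the dependent variable.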

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/catalanz
πŸ“…︎ Sep 24 2021
🚨︎ report
What is the difference between selecting the "Standardized Coefficients" option in a mediation analysis (PROCESS v3.5 by Andrew Hayes) and not selecting it?

Hello Everyone,
I hope you are well! I am carrying out a mediation analysis. When I select "standardized coefficients", the results are significant. When I don't, they are not significant. What does this mean?

The variables I am using in the mediation analysis model are not standardized. I am only adjusting for age and sex as confounders.

Thank you!!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/hz123456789
πŸ“…︎ Jun 11 2021
🚨︎ report
How do you compute confidence intervals for standardized regression coefficients in R?

I'm stunned that there doesn't seem to be a function in any of the major linear modeling packages to calculate CIs for standardized (beta) coefficients!

How would you go about doing it manually?
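One quick way (a rough sketch, not an exact method: it treats the sample standard deviations as fixed, and the built-in mtcars data stands in for real data) is to z-score all variables and let confint() do the work:

    # Refit the model on z-scored variables; the slopes are then standardized
    # (beta) coefficients and confint() gives approximate CIs on that scale.
    d_std   <- as.data.frame(scale(mtcars[, c("mpg", "wt", "hp")]))
    fit_std <- lm(mpg ~ wt + hp, data = d_std)
    coef(fit_std)     # standardized coefficients
    confint(fit_std)  # approximate 95% confidence intervals

A bootstrap over the whole standardize-and-fit procedure would also capture the uncertainty in the standard deviations themselves.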

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/EntropyGoAway
πŸ“…︎ Sep 22 2020
🚨︎ report
How to interpret standardized regression coefficients?

Hi all,

A pretty basic question: I have a linear regression model with standardized independent variables but a non-standardized dependent variable. How should I interpret the coefficients?

For example, if I have

y = 400 + 30x1

and x1 has been standardized, would I interpret this as: if x1 increases by one standard deviation (in x1's units), then y will increase by 30 of y's units?

Thanks all.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/SpurtThrow
πŸ“…︎ Nov 24 2020
🚨︎ report
Standardized coefficients (STB) with proc mixed? Proc Standard as an alternative?

I am working on analyses for an R&R. I was running OLS models and producing standardized beta coefficients (STB) to compare effect sizes.

Reviewers also wanted to see country-level variables, so I created them and ran HLM (proc mixed). It does not accept the STB option, as far as I can tell. One approach I have seen to get around this is to standardize the data with proc standard before using the mixed procedure. However, the examples I see take different approaches to setting the mean and standard deviation. Is there a simple way to move forward with this?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/KaleMunoz
πŸ“…︎ Aug 07 2020
🚨︎ report
[Q] Is it appropriate to use standardized coefficients in lieu of back-transforming coefficients for transformed data?

I am using Stata 15.1

My issue is that I have a number of variables in my model which I have transformed. The transformations are diverse and include log, cube, square, and square root transformations as appropriate. I have read that I have to back transform my confidence intervals and standard error, which is fine. I'm unclear about my coefficients however. It would seem to me that back transforming them would negate the use of transformations in the first place.

Would it be appropriate to use the standardized coefficient instead? I have multiply imputed data, so I had to use the mibeta command for the standardized coefficients. As a result it's the mean standardized coefficient over the number of imputations I used.

Thanks for any help you can provide. Please let me know if more information is necessary.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Gold4Schiff
πŸ“…︎ Sep 16 2019
🚨︎ report
Pearson's correlation coefficient or Standardized beta coefficient in multiple regression analysis?

Hi! I am doing a quantitative study on the impact of aspects of online reviews on purchase intention. I have conducted a survey, which resulted in a valid sample of 180. This measured 4 concepts as independent variables (predictors), and 1 dependent variable (purchase intention), all using validated measures and a 5-point Likert scale for all questions. I have calculated both the Pearson's correlation coefficient and the standardized beta coefficient using a multiple regression analysis. They are similar, but give a slightly different ranking for the 4 predictors. Can someone tell me why they result in different rankings, and perhaps which analysis to use primarily (or even to remove one)? I currently do not know if SCRAVG or VALAVG has more impact on PURAVG. The goal of the study is to identify which aspect is most important for reviews in order to have a high purchase intention. Thank you! SPSS output: (https://imgur.com/7h3dTnV)

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/noudus
πŸ“…︎ Jan 05 2020
🚨︎ report
Question about interpreting Adjusted R squared values and standardized coefficients.

Hello, I'm writing my first big medical research paper, and I have been trying to understand statistics along the way.

So, I have results from two regression models I am trying to write about. IV 1 includes 2/3 options. IV 2 includes access to 3/3 options.

IV1 has a higher adjusted R squared, but a lower standardized coefficient. I know what to write when both are higher/lower, but what do I state when R squared is higher but standardized coefficient is lower?

For context, IV2 should be stronger as it is all inclusive. If you need more context I am happy to provide it, I just didn't want to bore or distract from the question.

Thank you all in advance, and if there is another sub I should post to please let me know.

Edit: I am a noob and said DV everywhere I meant IV. My bad.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/ResearchRelated
πŸ“…︎ Apr 24 2018
🚨︎ report
Meaning of intercept in SPSS analysis for standardized coefficients

I'm helping someone with their thesis where they've used SPSS for their analysis. They use the standardized coefficients for their analysis, but in that column you also have an intercept. What is the meaning of an intercept in the standardized analysis?

It's clear what it means in the non-standardized analysis (the predicted value of y when x = 0), but I have no idea how to interpret it when the coefficients are based on standard deviations.

Thanks in advance!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/JosVermeulen
πŸ“…︎ Jul 31 2019
🚨︎ report
Etiquette for discussing both coefficients and standardized betas in a findings section?

My tables have OLS coefficients, standard errors, and standardized estimates to compare effect strength. Is it acceptable, in a findings section, to refer to the OLS coefficients when discussing everything in the models, and then introduce standardized coefficients only for the key independent variables, in order to highlight their strength in the models? Or should I refer to the standardized coefficients exclusively for every variable? Most articles I'm aware of focus on one.

This is a second draft. I got comments that expressed interest in the coefficients, so I want to keep these, but they were concerned about the size of the coefficients for independent variables. Standardized estimates reveal these to be the strongest effects in the model.

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/fieldworkfroggy
πŸ“…︎ Nov 04 2018
🚨︎ report
Log-linear model: standardized beta coefficient interpretation?

In a regular linear model, the interpretation would be: a 1 SD increase in X is associated with a B SD change in Y. In a log-linear model, is the correct interpretation that a 1 SD increase in X is associated with a 100*(e^B - 1) SD unit change in Y? That seems difficult to interpret and awkward, but is it technically correct?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/throwuhwhay
πŸ“…︎ Feb 21 2017
🚨︎ report
How to get 95% confidence intervals for standardized coefficients for lmer model in lme4

I am running a mixed model (lmer) in lme4.

Then getting standardized coefficients with:

    lm.beta.lmer <- function(mod) {
      b    <- fixef(mod)[-1]
      sd.x <- apply(getME(mod, "X")[, -1], 2, sd)
      sd.y <- sd(getME(mod, "y"))
      b * sd.x / sd.y
    }

But how do I get 95% CI for those standardized coefficients?
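One rough way (a sketch under the assumption that the sd.x/sd.y ratios can be treated as fixed) is to rescale the Wald confidence limits of the unstandardized fixed effects by the same ratios used above:

    library(lme4)

    std_ci <- function(mod, level = 0.95) {
      sd.x <- apply(getME(mod, "X")[, -1, drop = FALSE], 2, sd)
      sd.y <- sd(getME(mod, "y"))
      # Wald CIs for the fixed effects (drop the intercept row), then rescale
      ci <- confint(mod, parm = "beta_", method = "Wald", level = level)[-1, , drop = FALSE]
      ci * (sd.x / sd.y)
    }

    fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    std_ci(fit)

For intervals that also reflect the uncertainty in the standard deviations, bootstrapping the whole calculation with bootMer() is an alternative.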

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/dipcupdipcup
πŸ“…︎ Jun 29 2018
🚨︎ report
Unstandardized coefficients, standardized coefficients, and semi partial correlations.

Hi there stats monkeys!

I've been running some multiple regression analyses and I've come across an issue regarding my unstandardized coefficients, standardized coefficients, and semi partial correlations.

Basically, I want to see which factors (number of previous sessions, psychological distress, life satisfaction, client expectations) predict client hope (goal oriented thinking, e.g. "I can solve my problems").

As can be seen below, the number of sessions a client has had does not predict Hope, nor do expectations of therapy. I would have thought both would, so I'm not sure how to explain that. Any suggestions would be helpful.

However, the main problem I have is the difference between B and β for psychological distress. The sample size is large (700+), and everything I've read suggests that it is mainly sample size differences that will affect β. Any ideas why my B and β scores are so different for psychological distress, but not for prior sessions?

Finally, the semi partial correlations are 5.1% for psychological distress and 12.8% for life satisfaction, but the overall model predicts 35.2%. Where did the other 17.3% go?

Numbers are B -> β -> sr²:

Prior Sessions: -.029, -.016, .001

Psychological Distress: -.029, -.261, .051

Life Satisfaction: .500, .417, .128

Expectations: -.006, -.003, .001

If anyone could help, I would really appreciate it. Trawling through textbooks and Google for answers has been driving me insane.
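For anyone who wants to check the arithmetic, here is a rough sketch (the built-in mtcars data stands in for the real data) of how B, β, and the squared semipartial correlation relate for a single predictor:

    fit  <- lm(mpg ~ wt + hp, data = mtcars)
    B    <- unname(coef(fit)["wt"])                  # unstandardized coefficient
    beta <- B * sd(mtcars$wt) / sd(mtcars$mpg)       # standardized coefficient
    wt_unique <- resid(lm(wt ~ hp, data = mtcars))   # part of wt not shared with hp
    sr2  <- cor(mtcars$mpg, wt_unique)^2             # squared semipartial correlation
    c(B = B, beta = beta, sr2 = sr2)

Note that the sr² values only sum to the model R² when the predictors are uncorrelated; with correlated predictors the remaining variance is explained jointly rather than uniquely by any one of them.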

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Mitchimus
πŸ“…︎ Jan 09 2014
🚨︎ report
Comparing standardized beta coefficients across models

I am running two OLS models on the same sample with the same dependent variable but different predictors with different scales (x1 for model 1 and x2 for model 2). Since I want to compare the usability of both predictors in the given context, I was thinking about using standardized beta coefficients to compare the influence of both predictors on the dependent variable.

Is this a valid way of comparing the two predictors or should I rather rely on measures of overall model fit? I cannot include both predictors in one model as they measure the same concept with few differences and are therefore highly correlated.

I would be glad if anybody could point me in the right direction. Thank you!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/statxta
πŸ“…︎ Jul 02 2017
🚨︎ report
[Q] Determining the formula for standard error of coefficients in Poisson Regression?

I see that for logistic regression the standard error can be computed as in "How to compute the standard errors of a logistic regression's coefficients", which amounted to taking

`[;\sqrt{(X^TVX)^{-1}};]` where V is a diagonal matrix whose diagonal entries are `[;\pi_a (1-\pi_a);]`, with `[;\pi_a;]` the predicted probability of being in class A

___

Looking at the same for linear regression (based on my understanding of "Standard errors for multiple regression coefficients?"), we can compute the standard error of the coefficients by

`[;\sqrt{\sigma^2(X^TX)^{-1}};]`

where `[;\sigma^2;]` is the variance of the residuals.

___

From the above I have 2 questions:

  1. It seems like from the above we are using more or less the same form (the square root of the inverse of something). Am I on to something? How do we determine that "something"? In logistic regression it was V, a diagonal matrix, and in linear regression it is the variance of the residuals. It seems like we're encompassing a notion of "how wrong" our prediction is compared to some label.
  2. How might I derive the same for a Poisson regression?

___

Normally I'd just use R or statsmodels, but I'm building a custom library for encrypted ML/stats and I need to build all of this from scratch.
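For question 2, a sketch of where the "something" comes from (assuming the canonical log link): for GLMs the asymptotic covariance of the coefficients is `[;(X^TWX)^{-1};]` with a diagonal weight matrix W, and for Poisson regression the weights are simply the fitted means `[;\mu_i = e^{x_i^T\beta};]`. A quick check against R's glm():

    set.seed(1)
    n   <- 200
    x   <- rnorm(n)
    y   <- rpois(n, exp(0.3 + 0.5 * x))
    fit <- glm(y ~ x, family = poisson)

    X   <- model.matrix(fit)
    mu  <- fitted(fit)                          # Poisson weights: w_i = mu_i
    V   <- solve(t(X) %*% (X * mu))             # (X^T W X)^{-1}
    sqrt(diag(V))                               # manual standard errors
    summary(fit)$coefficients[, "Std. Error"]   # matches glm's output

The common thread is the Fisher information: in each case the weights are the variance of the response at each observation under the fitted model (π(1 − π) for logistic, μ for Poisson, a constant σ² for linear regression).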

πŸ‘︎ 21
πŸ’¬︎
πŸ‘€︎ u/iamquah
πŸ“…︎ Nov 14 2021
🚨︎ report
Should I be playing on relative or legacy sensitivity? And for the transition timing, should I choose instant, gradual, or after zoom? Last question: should I just leave the coefficient at the standard 1.33?
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/purpps_
πŸ“…︎ Oct 30 2021
🚨︎ report
ISO 2020 Point Average, Games Played, and Consistency Ratings (Standard Deviation or Coefficient of Variation) for Standard and PPR

Hello,

I’m looking for a spreadsheet with some stats that I can’t seem to find anywhere. The total point average, games played, and consistency ratings (standard deviation or coefficient of variation) for standard and PPR. Thank you!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/hippysmuggler
πŸ“…︎ Aug 12 2021
🚨︎ report
Why does collinearity reduce the accuracy of regression coefficient estimates and cause their standard errors to grow?

The title is the question. It was mentioned in the ISLR textbook, and I am having a difficult time understanding it. Can anyone explain it?

πŸ‘︎ 25
πŸ’¬︎
πŸ‘€︎ u/lone_lonely
πŸ“…︎ May 16 2021
🚨︎ report
What's the difference between a coefficient of variation and a standard deviation?

I don't even know what these words mean 😅
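For reference, the two are directly related: the coefficient of variation is just the standard deviation expressed relative to the mean,

`[;CV = \frac{\sigma}{\mu};]`

(often reported as a percentage), which makes spread comparable across variables measured on different scales.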

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/TaehyungCupcake
πŸ“…︎ Aug 04 2021
🚨︎ report
The standard error for Fisher's transformation with partial correlation coefficients

So I'm vaguely trying to follow this tutorial for meta-analysis: https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/effects.html#pearson-cors

The problem I'm having is that I'm using partial correlation coefficients rather than ordinary correlations. In the link above, the standard error for a Fisher's z is 1/SQRT(N-3). Since I'm using partial correlation coefficients, should I use 1/SQRT(N-C-3), where C is the number of control variables, or do I just use that equation? Thanks.
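For what it's worth, the adjustment commonly quoted for the Fisher's z of a partial correlation is the second form,

`[;SE_z = \frac{1}{\sqrt{N - C - 3}};]`

where C is the number of variables partialled out; it reduces to 1/SQRT(N-3) when C = 0.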

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Unknwon_To_All
πŸ“…︎ Aug 31 2021
🚨︎ report
[Question] How to calculate the standard error of the difference between regression coefficients from different regressions.

I am performing analysis for my master's thesis and found a paper that gave me a good idea of how to analyze my data. I am performing an FF3 (Fama-French three-factor) regression in order to check whether two portfolios are significantly different from each other. I have excess returns for the two portfolios over the same time frame, and the table I would like to create looks like this:

                          Alpha       RMRF        SMB        HML
    High ESG Portfolio    0,0029*     1,1882***   -0,0069    -0,2420***
                          (0,0015)    (0,0376)    (0,0698)   (0,0582)
    Low ESG Portfolio     0,0013      1,086***    0,0725     -0,1324**
                          (0,0016)    (0,0392)    (0,0728)   (0,0607)
    Difference            0,0016      0,1017      -0,0795    -0,1096

What I am looking for now is how to calculate the standard error of the differences between these coefficients based on the regression outputs. How would one go about this to see whether the differences in returns and factor loadings are significant?
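A quick shortcut is to combine the two standard errors in quadrature, but this is only valid if the two estimates can be treated as independent:

`[;SE_{diff} = \sqrt{SE_1^2 + SE_2^2};]`

Because both portfolios are regressed on the same factors over the same period, their errors are generally correlated, so a cleaner route is to run one extra regression of the return difference (High ESG minus Low ESG) on the same factors: its coefficients are exactly the differences in the table above, and its standard errors are the ones needed for the significance test.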

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/schriene
πŸ“…︎ Jun 18 2021
🚨︎ report
What's the intuition behind computing the standard errors of OLS coefficients with the variance-covariance matrix?

I understand that you can obtain the standard errors of the coefficients of a linear regression model by taking the square roots of the diagonal elements of the variance-covariance matrix. For some reason, I just can't seem to see why that works. My intuition tells me that this should be really easy to grasp, but apparently my brain is currently MIA. Can anybody ELI5 this to me please? Thanks!
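For what it's worth, a compact way to see it (assuming the usual OLS setup with homoskedastic, uncorrelated errors): starting from `[;\hat{\beta} = (X^TX)^{-1}X^Ty = \beta + (X^TX)^{-1}X^T\varepsilon;]`, taking the variance gives

`[;\operatorname{Var}(\hat{\beta}) = (X^TX)^{-1}X^T \operatorname{Var}(\varepsilon)\, X (X^TX)^{-1} = \sigma^2 (X^TX)^{-1};]`

So the variance-covariance matrix is simply the covariance of the estimator `[;\hat{\beta};]`: its diagonal entries are the variances of the individual coefficient estimates (square roots give the standard errors), and the off-diagonal entries are the covariances between them.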

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Excusemyvanity
πŸ“…︎ Apr 22 2021
🚨︎ report
Intuitive explanation of why collinearity blows up the standard errors of the regression coefficients

In linear regression, why does collinearity (or multicollinearity) increase the standard errors of the coefficient estimators? Mathematically it is because of the Variance Inflation Factor, I know that, but intuitively, without the maths, why?
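For reference, the formula the Variance Inflation Factor comes from, for predictor j in an OLS model, is

`[;\operatorname{Var}(\hat{\beta}_j) = \frac{\sigma^2}{(1 - R_j^2)\sum_i (x_{ij} - \bar{x}_j)^2};]`

where `[;R_j^2;]` is the R² from regressing predictor j on all the other predictors. As collinearity pushes `[;R_j^2;]` toward 1, the denominator shrinks and the variance of `[;\hat{\beta}_j;]` blows up.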

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/Skouwo
πŸ“…︎ Oct 10 2020
🚨︎ report
How can I use more robust standard errors for Cox models and adjust for the interaction between time and coefficients?

In my thesis I found multiple violations of the Schoenfeld residuals assumption in Cox proportional hazards models, and I was lucky to find this one answered on Stack Exchange. But I don't know how to implement this advice using the survival package:

>1. Use robust standard errors.
>
>2. Adjust for the interaction between (the log of) time (at risk) and coefficients

I'd appreciate any input or advice.
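A minimal sketch of both pieces with the survival package (the veteran data and the karno covariate are stand-ins for the actual variables, and x*log(t) is just one common choice of time interaction):

    library(survival)

    fit <- coxph(Surv(time, status) ~ karno + tt(karno),
                 data   = veteran,
                 tt     = function(x, t, ...) x * log(t),  # covariate-by-log(time) interaction
                 robust = TRUE)                            # robust (sandwich) standard errors
    summary(fit)

A significant tt() term indicates that the covariate's effect changes with time, which is one way of both diagnosing and modelling the proportional-hazards violation.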

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/Naj_md
πŸ“…︎ Jan 26 2021
🚨︎ report
No intercept in a linear regression model equation with standardized coefficients?

Why is there no intercept in a linear regression model equation with standardized coefficients?
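A short way to see it: with an intercept, OLS always gives `[;\hat{\beta}_0 = \bar{y} - \sum_j \hat{\beta}_j \bar{x}_j;]`. When every variable has been standardized, all the means are zero, so the intercept is exactly zero and is simply dropped from the equation.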

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Jeffpatat
πŸ“…︎ Jun 25 2018
🚨︎ report
Is it possible to find the standard deviation in a y-value at a given x-value in a bivariate distribution with only the variance and the correlation coefficient?

For example if Data point 1 says

x = -3σ and

y = -3σ then

y = 6,

And data point 2 says

x = 3σ and

y = 3σ then

y = 7,

with a correlation of 0.5333

Is there enough information here to find what y would be at x = -3σ, y = +3σ and at x = +3σ, y = -3σ?

To be clear, y = 7 at y = 3σ conditional on x = 3σ, and y = 6 at y = -3σ conditional on x = -3σ.

I'm not a math student, I just like to explore statistics for fun, so any help would be much appreciated as I am a layman and don't know what I'm doing.
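One possibly useful fact, valid only under the assumption that the two variables are jointly (bivariate) normal: the conditional distribution of y given x then has a mean that moves linearly with x and a standard deviation that does not depend on x at all,

`[;SD(y \mid x) = \sigma_y \sqrt{1 - \rho^2};]`

so with ρ = 0.5333 the conditional SD is about 0.85·σ_y. Without some distributional assumption like this, the variance and correlation alone do not pin down the conditional spread.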

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/ChamberKeeper
πŸ“…︎ Mar 29 2021
🚨︎ report
