https://doi.org/10.1037/pas0000809
It's unavailable on Sci-Hub and ResearchGate :(
This is a dissertation and a post-Hunter & Schmidt meta-analysis (Schmidt was the supervisor) that analyzes the relationship between three measures of intelligence (fluid, crystallized, and g) and occupational and academic outcomes. I'll summarize some of the findings here. General cognitive ability correlates highly with academic performance, especially in college (around .70), and crystallized ability is close behind. Crystallized ability is the best of the three at predicting training performance. Contrary to popular belief, fluid ability probably isn't as useful as the other two at predicting outcomes; sample sizes are usually too small to draw firm conclusions, and it's clearly inferior for academic performance and for occupational success in medium- and low-complexity jobs. The tests used for the analysis are listed at the bottom of the PDF, before the references. You can probably find the "job zones" used in the study with some googling, and via Sci-Hub if you can't access the full article.
A friend of mine was very into undergrad psych, and last night we were talking about assessments; all he seemed to know about was reliability. I've seen students post stories on blogs misunderstanding personality assessments. I wonder whether better psychological literacy would make I/O work in the field easier when it comes to things like assessments.
The modern social constructionist claim is that whilst sex is a biological reality, gender is entirely a social construct and is an arbitrary division almost solely defined by how someone chooses to identify (i.e. gender has no psychological or biological reality).
Despite disagreeing with this, that isn't my unpopular opinion.
My opinion is that the aforementioned view is logically incoherent with the view that changing your gender is a legitimate and logically sound choice.
When I say changing your gender, I don't mean identifying differently; I mean surgery, hormones, etc.: the full transition.
Wouldn't this mean that gender does in fact have biological and/or psychological reality, thus contradicting the assertion that it is a social construct?
I want to know more about the construct validity of the structured job interview. I remember reading a paper suggesting that we can't really explain yet why structured interviews have relatively high predictive validity, but I'd like to know what that means in practice. For example, interview questions are derived from a job analysis, but what's the use if the questions don't necessarily measure the constructs we intended to assess? Or are interviews just verbal assessments of intelligence (and what would that mean for the other constructs we intend to assess)?
I'd also be interested in reading something on evaluation criteria that go with structured interviewing methods.
I'm having trouble finding recent literature on this and would welcome any suggestions!
Doctoral Thesis from Columbia University
DOI: 10.7916/D8HX1BRC, No PMID, URL: https://academiccommons.columbia.edu/doi/10.7916/D8HX1BRC
I've been getting confused between the two. Could anyone please give me an example of each? Thank you.
I am a self-studier, and the book I'm reading has the two switched compared to what I find when I search on Google.
From what I understand, the book defines it this way:
construct validity: correlating scores on a test with scores on another test. The higher the correlation, the more construct validity the new measure has.
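As a quick illustration of the book's definition (correlating a new measure with an established one), here is a minimal sketch. The scores below are entirely made up, and a correlation like this is usually treated as one piece of convergent evidence rather than proof of construct validity:

```python
import numpy as np

# Hypothetical scores from 10 examinees on an established measure
# and on a newly developed one; all numbers are invented.
established = np.array([12, 18, 9, 22, 15, 30, 7, 25, 14, 20])
new_measure = np.array([10, 20, 8, 25, 13, 28, 9, 24, 15, 19])

# Pearson correlation between the two score sets.
r = np.corrcoef(established, new_measure)[0, 1]
print(round(r, 3))
```

A high value here only tells you the two tests rank people similarly; it cannot, by itself, tell you what either test actually measures.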
Could someone please explain to me how the view of validity has shifted? I'm reading in Schmidt and Sinha (2010) that there used to be a tripartite view of validity (content, construct, criterion-related) that has now given way to the view that everything is about relationships between constructs and the evidence that can be found to support construct validity. It then says that the Standards list eight different ways that evidence can be found for construct validity. How is everything now construct validity? Why are criterion-related validity and content validity now just versions of construct validity? It doesn't make sense to me. I've included a quote from the chapter below.
"This tripartite approach to validity has been replaced by the view that validity always concerns relationships between constructs and that there are various ways in which evidence can be marshaled to support construct validity. The most recent edition of the Standards (American Educational Research Association et al., 1999) identified eight different types of construct evidence. These include gathering information on what have traditionally been referred to as content validity (1) and criterion-related validity (2). In addition, the Standards suggested that validity can be assessed by investigating the processes engaged in when an examinee responds to a test item (3). For example, examination of eye movement or physiological activity while taking a test may reveal information consistent or inconsistent with the hypothesized attribution about the examinee's standing on a construct. These indices have been used to assess integrity of responses and the attention required to give an answer. Correlational and factor analyses (4) and item analyses (5) can be used to determine the nature of the dimensions underlying test performance, that is, the test's internal structure. Evidence based on relationships with other variables (6), including studies of convergent and discriminant validity (D. T. Campbell & Fiske, 1959), yields important information. These latter types of information would all have been included in earlier discussions of construct validity. Validity generalization (7) usually involves a summary of predictor–criterion relationships across multiple settings and can be considered a form of construct validity in that differences in situations, sample sizes, and artifacts are removed from the observed relationship to provide estimates of underlying predictor–criterion …
So, I am struggling to understand the distinction between the two. This is what I (think I) understand:
Internal consistency is a form of reliability: it tests whether the items on my questionnaire measure parts of the same construct, by virtue of responses to these items correlating with one another.
Construct validity is (of course) a form of validity, and it measures the extent to which "a test measures what it claims to theoretically measure". EFA explores the data structure and returns the number of factors emergent in the data, as well as the loadings of each question on those factor(s).
But...I still don't understand how this is different from each other?
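One way to see the difference is to compute both kinds of evidence on the same data. Below is a minimal sketch with made-up questionnaire responses: Cronbach's alpha for internal consistency, plus the eigenvalues of the inter-item correlation matrix as a rough stand-in for the factor-structure question that a full EFA addresses. The data and item count are purely illustrative:

```python
import numpy as np

# Hypothetical responses of 6 people to a 4-item questionnaire
# (rows = respondents, columns = items); all numbers are invented.
X = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
], dtype=float)

k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)       # variance of each item
total_var = X.sum(axis=1).var(ddof=1)   # variance of the sum score

# Cronbach's alpha: a reliability (internal consistency) index.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# A first look at internal structure (the validity side): eigenvalues
# of the inter-item correlation matrix, largest first. One dominant
# eigenvalue suggests a single underlying factor.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

print(round(alpha, 3), np.round(eigvals, 2))
```

The contrast is that alpha summarizes how tightly the items hang together, while the eigenvalue pattern (and, more formally, EFA loadings) speaks to *how many* things the items measure; items can hang together tightly yet still measure the wrong construct.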
Different papers I have read do different things.
Sometimes composite reliability is classed as a measure of construct reliability, and sometimes as evidence of convergent validity / construct validity. AVE is more consistently placed under convergent validity. Which is it?
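For what it's worth, the usual formulas make the two statistics concrete. Here is a minimal sketch using hypothetical standardized loadings for one latent construct; both the loadings and the customary cutoffs are illustrative conventions, not settled law:

```python
import numpy as np

# Hypothetical standardized factor loadings for four indicators of
# a single latent construct; the numbers are made up.
loadings = np.array([0.78, 0.82, 0.70, 0.85])

# Composite reliability (often called construct reliability):
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each error variance is 1 - loading^2 for standardized loadings.
errors = 1 - loadings**2
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

# Average variance extracted: the mean squared loading.
ave = (loadings**2).mean()

print(round(cr, 3), round(ave, 3))
```

Under the Fornell and Larcker conventions, CR ≥ .70 is typically read as adequate construct reliability and AVE ≥ .50 as convergent validity evidence, which may be part of why different papers file the same statistic under different headings.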
What's the difference between construct and face validity?
I understand that face validity is whether the test is measuring what it claims to, but I thought that was what construct validity meant....
It's in the KA notes, but I'm not really seeing the difference.
I posted about this in /r/AskStatistics as well, but I'm really struggling to understand the difference between construct and criterion validity. I've googled it and read many different pages about it, and I've read the chapters in my textbook, but I can't understand what the difference is. Can anybody explain it to me simply?
Curious - for those who are interested enough to arrive here: what approach to validity and theory of measurement do you subscribe to, if any? When you were trained, were you exposed to multiple ideas about these topics, or shown the received wisdom, so to speak?
Hello all. Sorry for the throwaway account, but I'd hate to be doxxed on this one.
Please let me preface my question by saying I'm not upset about this reviewer. The reviews of my manuscript were generally positive. I'm posting this question because I'm genuinely wondering whether I understand this issue correctly.
One of the reviewers criticizes my manuscript's implication that construct validity is an attribute or property of the measure. The reviewer says construct validity is a property of the construct, not a property of the measure.
I'm bewildered by that. The reviewer's assertion doesn't jibe with my conceptualization of construct validity (or, indeed, psychometrics as a whole). To my way of thinking, if we have empirical evidence of a lack of construct validity, it's the measure that lacks construct validity, not the construct. We need to improve the measure because it lacks construct validity.
How could a construct lack construct validity?!? It seems to me that we could measure any construct, no matter how it's defined; the challenge is to find a measure with construct validity.
Maybe I've developed a misunderstanding of something. What do you think?
Does anyone have a trick to quickly recall what the different types of validity are? I always get tricked when it comes to those questions!
I'm having a really hard time understanding the difference between construct and criterion validity. I've been googling it and reading my textbook over and over for the past few days, and I still can't quite grasp it. I'm reviewing a test manual for a project and trying to write the validity section. The manual gave some evidence of validity but didn't specify what kind, and I can't decide what kind it is. Can someone clearly differentiate the two for me in an ELI5 manner?
https://uclpsych.eu.qualtrics.com/SE/?SID=SV_5Bhp8hi4GSGAzrv
Above is a link to a set of straightforward self-report questionnaires that I'm using in my research at University College London. The aim of the study is to further consolidate trait social intelligence as a distinct and valid construct, in the same way that trait emotional intelligence has been validated as a distinct construct. I need to gather as much data as possible, so I'm asking any bored, like-minded, or philanthropic individuals reading this to complete the questionnaires. It's all out in the open: no blinding in the procedure, no debriefing needed, and it uses Qualtrics, so all information is confidential by default. If you could participate, it would really help me and the academics above me at UCL, and broaden our understanding of individual differences in psychology!