The Operated Heart at Autopsy
(Language: English)
Unfortunately already sold out
Free shipping
Book
139.99 €
Product details
Product information on "The Operated Heart at Autopsy"
Blurb for "The Operated Heart at Autopsy"
After 17 years of private practice as a cardiovascular surgeon, my partners questioned the rationality of my decision to leave clinical practice behind and become a cardiovascular pathologist. In fact, their disbelief of my intention to make the "leap of faith" was understandable. For a surgeon, the operating room is where the action is. It is as simple as that. And when a cardiac surgeon can hold in his hand a beating heart, now off-bypass and improved by an operation just completed, satisfaction is real and profound. However, life is complex. Throughout my surgical career, questions regarding the pathogenesis of atherosclerotic cardiovascular disease arose; curiosities of various phenotypes of the disease piqued my interest. I became aware of the power of investigative techniques that might address these questions. I then began to realize that my career in the operating room left me little time to address them. I needed to study the disease full time in order to contribute to my understanding of it.
Ironically, my first autopsy as a pathology resident was on an individual with a past history of coronary artery bypass surgery. When it came to examining the heart, the dissection, as all pathologists know, was complex. However, I found it to be straightforward and enjoyable. But I subsequently learned that my fellow residents and mentors did not share my intrigue and comfort in defining the nuances of the operated heart.
Reading sample from "The Operated Heart at Autopsy"
Psychometrics and the Measurement of Emotional Intelligence, by Gilles E. Gignac
It may be suggested that the measurement of emotional intelligence (EI) has been met with a non-negligible amount of scepticism and criticism within academia, with some commentators suggesting that the area has suffered from a general lack of psychometric and statistical rigour (Brody, 2004). To potentially help ameliorate this noted lack of sophistication, as well as to facilitate an understanding of many of the research strategies and findings reported in the various chapters of this book, this chapter will describe and elucidate several of the primary psychometric considerations in the evaluation of an inventory or test purported to measure a particular attribute or construct. To this effect, two central elements of psychometrics, reliability and validity, will be discussed in detail. Rather than assert a position as to whether the scores derived from putative measures of EI may or may not be associated with adequate levels of reliability and/or validity, this chapter will focus primarily on the description of contemporary approaches to the assessment of reliability and validity. However, in many cases, comments specifically relevant to the area of EI will be made within the context of reliability and/or validity assessment.
Test Score Reliability
Introduction
Overwhelmingly, the concept of reliability in psychology tends to be interpreted within the context of composite scores. In practice, a composite score usually consists of an aggregation of equally weighted smaller unit scores, where those unit scores are typically derived from item responses or subtest scores within an inventory. While any group of scores can technically be aggregated to form a composite score, a psychometrically defensible composite will be associated with item/subtest scores that exhibit a particular level of ‘‘inter-connectedness’’. Throughout the history of psychometrics, various concepts and methods have been formulated to represent and estimate the degree of inter-connectedness between the corresponding item scores. While the various methods of reliability estimation are associated with conspicuous differences, all forms of test score reliability may be argued to be based on the notion of repeated measurements (Brennan, 2001). In its purest Classical Test Theory (CTT) form, the reliability of measurement represents the hypothetical distribution of scores expected from repeated measurements derived from the same individual, under the pretence that the individual’s memory of the previous testing session is erased (from this perspective, the notion of test score reliability may be considered to be based on a ‘‘thought experiment’’; Borsboom, 2005). The wider the distribution of scores (i.e., the larger the standard deviation), the less reliability one would ascribe to the scores as an indicator of a particular dimension or attribute. As the prospect of erasing the minds of individuals is not exactly practical, various other methods of estimating reliability have been devised to approximate the scores that would be expected to be derived from the ‘‘thought experiment’’. From this perspective, the most well-known are ‘‘parallel forms reliability’’ and ‘‘test–retest reliability’’. 
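The CTT ‘‘thought experiment’’ described above can be made concrete with a small simulation (a minimal sketch; the true-score and error variances are invented purely for illustration): two error-perturbed measurements of the same individuals are generated, and their correlation approximates the theoretical reliability, the ratio of true-score variance to observed-score variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                     # simulated examinees
true_var, err_var = 4.0, 1.0    # hypothetical true-score and error variances

true = rng.normal(0.0, np.sqrt(true_var), n)

# Two "repeated measurements" of the same individuals with independent
# error terms (memory erased between sessions, as in the thought experiment):
x1 = true + rng.normal(0.0, np.sqrt(err_var), n)
x2 = true + rng.normal(0.0, np.sqrt(err_var), n)

# Theoretical reliability: proportion of observed variance due to true scores
theoretical = true_var / (true_var + err_var)

# The correlation between two such parallel measurements estimates that ratio
estimated = np.corrcoef(x1, x2)[0, 1]

print(round(theoretical, 3), round(estimated, 3))
```

The wider the error distribution relative to the true-score distribution, the lower both the theoretical ratio and the observed correlation, matching the verbal account of reliability above.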
Within the context of reliability estimation via a single testing session, the most well-known reliability methods are ‘‘split-half reliability’’ and ‘‘Cronbach’s alpha’’ (α). Less well-known methods of estimating internal consistency reliability are based directly upon latent variable model solutions. The most well-established method of estimating the internal consistency reliability of a composite score via a latent variable model solution is known as ‘‘McDonald’s omega’’ (ω). Prior to describing the above methods of reliability estimation in detail, it should be emphasized that reliability should not be viewed as a property of a test, per se. Instead, reliability should be interpreted as a property of scores derived from a test within a particular sample (Thompson & Vacha-Haase, 2000). This issue is not merely semantic, as the implications are directly relevant to the practice of testing and measurement in psychology. Specifically, because reliability is not a property of a test, researchers cannot rely upon previous estimates of reliability to support the use of a test in their own work. Consequently, researchers are responsible for estimating and reporting the reliability of their scores based on their own data. The possibility that a particular test will yield scores of a particular level of reliability across samples and settings is a hypothesis to be tested, rather than an assumption to be made. The generalizability of a reliability estimate may be tested within a ‘‘reliability generalization’’ framework, a concept and method which will not be described in any further detail in this chapter (interested readers may consult Shavelson, Webb, & Rowley, 1989, for an accessible discussion of reliability generalization).
Types of Reliability Estimation
Parallel Forms Reliability
In contemporary psychometric practice, parallel forms reliability (a.k.a. alternative forms reliability) is rarely reported, despite contentions that it may be the most fundamentally sound method of estimating reliability (e.g., Brennan, 2001). Parallel forms reliability is based on the premise of creating two tests or two inventories which yield composite scores associated with the same parameters (i.e., means and variances) and are justifiably regarded to measure the same construct. In practice, participants would complete form A and form B during two different testing sessions separated by approximately two weeks (Nunnally & Bernstein, 1994). The squared correlation between the composite scores obtained from the two forms would represent an estimate of the reliability of the scores derived from the inventories, individually. The methodology of parallel forms reliability can be applied in such a way as to offer the opportunity to identify three sources of ‘‘error’’ variance: (1) systematic differences in item content between tests (which, realistically, are expected because items are not random samples drawn from a population of items); (2) systematic differences in scoring (more common in scenarios where a rating is made by a test administrator); and (3) systematic changes in the actual attribute of interest (Nunnally & Bernstein, 1994). Thus, the capacity to segregate these three sources of measurement error via parallel forms reliability may be viewed as particularly valuable. However, the procedure is rarely observed in the applied literature. To my knowledge, there has yet to be a published instance of parallel forms reliability in the emotional intelligence literature. Thus, the temporal variation in EI (source #3), as distinct from ‘‘pure’’ measurement error, has yet to be determined.
Perhaps the primary reason why parallel forms reliability is so rarely reported in the applied literature is due to the difficulties of creating a second parallel test with the same mean and variance characteristics as the first test, not to mention the same validity. A less onerous reliability procedure that may (justifiably or unjustifiably) be viewed as sharing some properties of parallel forms reliability is known as test–retest reliability.
Test–Retest Reliability
Rather than create two separate forms considered to measure the same attribute and have participants respond to the separate forms at two different testing sessions (i.e., parallel forms reliability), an alternative reliability methodology consists of creating a single test and having participants respond to the items at two different points in time. The correlation between the corresponding time 1 and time 2 scores represents a type of reliability methodology known as ‘‘test–retest reliability’’. Test–retest reliability is indicated when the correlation between the scores is positive, although no widely acknowledged guidelines for interpretation appear to exist. In its purest form, the premise of test–retest reliability may still be considered predicated upon the Classical Test Theory notion of a ‘‘thought experiment’’ (see Borsboom, 2005), as the participants are assumed to have largely forgotten the questions and responses once the second testing session takes place. Such an assumption may be plausibly challenged, however, particularly given that the time interval between testing sessions may be as little as two weeks. For this reason, the utility of the test–retest method as an indicator of measurement error has been seriously challenged (e.g., Nunnally & Bernstein, 1994). Despite these criticisms, the use of the test–retest method appears to continue unabated in most disciplines in psychology, including EI. It remains to be determined what reliability-related information may be drawn from this type of research. Despite the problems associated with the interpretation of a test–retest reliability coefficient as an indicator of reliability, the observation of ‘‘stability’’ (as the method of test–retest reliability is often preferentially called, e.g., Matarazzo & Herman, 1984) in trait scores across time may be suggested to be important in practice.
That is, if people’s levels of EI are shown to fluctuate widely across time (in the absence of any systematic treatment effects), it is doubtful that the scores could ever be found to correlate with any external attribute of interest that would be expected to be relatively stable (e.g., wellbeing, job performance, etc.). Thus, although the supposed importance of test–retest reliability may be questioned, the importance of test–retest stability probably cannot be. Consequently, an examination of test–retest stability should nonetheless be considered when evaluating the scores of a psychometric inventory.
Internal Consistency Reliability
In contrast to parallel forms reliability and test–retest reliability, internal consistency reliability can be conceptualized and estimated within the context of a single administration of a single set of test items. Consequently, it is much more convenient to estimate, which may explain its popularity. The two most popular methods of estimating internal consistency reliability are the split-half method and Cronbach’s alpha (α). A more sophisticated approach to internal consistency reliability has also been established within a latent variable framework, known as McDonald’s omega (ω), which is beginning to gain some popularity, as it is more flexible in accommodating data that do not satisfy the rather strict assumptions associated with Cronbach’s α.
Split-Half Reliability
Split-half reliability may be the simplest method of internal consistency estimation. In effect, a particular inventory is split into two halves and the summed scores from those two halves are correlated with each other. The correlation between the two summed halves may be considered conceptually equivalent to the correlation between two parallel forms. However, the correlation between the two halves would be expected to underestimate the reliability of the scores derived from the entire test. Consequently, split-half reliability is often formulated with the Spearman–Brown correction, r_SB = 2r_hh / (1 + r_hh), where r_hh is the correlation between the two half-test scores (Nunnally & Bernstein, 1994).
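The split-half procedure just described, stepped up to full test length with the Spearman–Brown correction, can be sketched as follows (the item-response data are simulated and purely illustrative; the standard Cronbach’s alpha formula is computed on the same items for comparison):

```python
import numpy as np

# Hypothetical item-response matrix: 200 respondents x 6 items,
# all items reflecting one common trait plus independent error
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
items = trait + rng.normal(scale=1.0, size=(200, 6))

# Split-half: correlate the summed odd-item and even-item halves...
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)
r_hh = np.corrcoef(half_a, half_b)[0, 1]

# ...then step the half-test correlation up to the full test length
# with the Spearman-Brown correction
r_sb = 2 * r_hh / (1 + r_hh)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)

print(round(r_hh, 3), round(r_sb, 3), round(alpha, 3))
```

Note that the corrected split-half coefficient always exceeds the raw half-test correlation, illustrating the underestimation point made above; with items like these, which contribute equally to the composite, the corrected coefficient and alpha land close together.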
Table of contents for "The Operated Heart at Autopsy"
- Preface
Chapter 1: External Evidence of Open Heart Surgery
Chapter 2: Exposing the Cardiopulmonary Block
Chapter 3: The Post-mortem Coronary Injection
Chapter 4: The Cardiac Dissection
Chapter 5: Putting it All Together
Chapter 6: Footprints and Congenital Heart Disease
Chapter 7: A Matter of Mindset
- Glossary
- Abbreviations
Bibliographic details
- Author: Stuart Lair Houser
- 2010, 187 pages, dimensions: 16.1 x 24.2 cm, hardcover, English
- Publisher: Humana Press
- ISBN-10: 1603278079
- ISBN-13: 9781603278072
- Publication date: 17.06.2009
Language:
English