Whose IQ Is It?—Assessor Bias Variance in High-Stakes Psychological Assessment

Paul A. McDermott, University of Pennsylvania
Marley W. Watkins, Baylor University
Anna M. Rhoad, University of Pennsylvania

Assessor bias variance exists for a psychological measure when some appreciable portion of the score variation that is assumed to reflect examinees' individual differences (i.e., the relevant phenomena in most psychological assessments) instead reflects differences among the examiners who perform the assessment. Ordinary test reliability estimates and standard errors of measurement do not inherently encompass assessor bias variance. This article reports on the application of multilevel linear modeling to examine the presence and extent of assessor bias in the administration of the Wechsler Intelligence Scale for Children—Fourth Edition (WISC–IV) for a sample of 2,783 children evaluated by 448 regional school psychologists for high-stakes special education classification purposes. It was found that nearly all WISC–IV scores conveyed significant and nontrivial amounts of variation that had nothing to do with children's actual individual differences and that the Full Scale IQ and Verbal Comprehension Index scores evidenced quite substantial assessor bias. Implications are explored.

Keywords: measurement bias, assessment, assessor variance, WISC–IV

The Wechsler scales are among the most popular and respected intelligence tests worldwide (Groth-Marnat, 2009). The many scores extracted from a given Wechsler test administration have purported utility for a multitude of applications.
For example, as pertains to the contemporary version for school-age children (the Wechsler Intelligence Scale for Children—Fourth Edition [WISC–IV]; Wechsler, 2003), the publisher recommends that resultant scores be used to (a) assess general intellectual functioning; (b) assess performance in each major domain of cognitive ability; (c) discover strengths and weaknesses in each domain of cognitive ability; (d) interpret clinically meaningful score patterns associated with diagnostic groups; (e) interpret the scatter of subtests both diagnostically and prescriptively; (f) suggest classroom modifications and teacher accommodations; (g) analyze score profiles from both an interindividual and intraindividual perspective; and (h) statistically contrast and then interpret differences between pairs of component scores and between individual scores and subsets of multiple scores (Prifitera, Saklofske, & Weiss, 2008; Wechsler, 2003; Weiss, Saklofske, Prifitera, & Holdnack, 2006).

The publisher and other writers offer interpretations for the unique underlying construct meaning (as distinguished from the actual nominal labels) for every WISC–IV composite score, subscore, and many combinations thereof (Flanagan & Kaufman, 2009; Groth-Marnat, 2009; Mascolo, 2009). Moreover, the Wechsler Full Scale IQ (FSIQ) is routinely used to differentially classify mental disability (Bergeron, Floyd, & Shands, 2008; Spruill, Oakland, & Harrison, 2005) and giftedness (McClain & Pfeiffer, 2012), to discover appreciable discrepancies between expected and observed school achievement as related to learning disabilities (Ahearn, 2009; Kozey & Siegel, 2008), and to exclude ability problems as an etiological alternative in the identification of noncognitive disorders (emotional disturbance, communication disabilities, etc.; Kamphaus, Worrell, & Harrison, 2005).
As Kane (2013) has reminded test publishers and users, "the validity of a proposed interpretation or use depends on how well the evidence supports the claims being made" and "more-ambitious claims require more support than less-ambitious claims" (p. 1). At the most fundamental level, the legitimacy of every claim is entirely dependent on the accuracy of test scores in reflecting individual differences. Such accuracy is traditionally assessed through measures of content sampling error (internal consistency estimates) and temporal sampling error (test–retest stability estimates; Allen & Yen, 2001; Wasserman & Bracken, 2013). These estimates are commonplace in test manuals, as incorporated in a standard error of measurement index. It is sometimes assumed that such indexes fully represent the major threats to test score interpretation and use, but they do not (Hanna, Bradley, & Holen, 1981; Oakland, Lee, & Axelrad, 1975; Thorndike & Thorndike-Christ, 2010; Viswanathan, 2005). Tests administered individually by psychologists or other specialists (in contrast to paper-and-pencil test administrations) are highly vulnerable to error sources beyond content and time sampling. For example, substantial portions of error variance in scores are rooted in the systematic and erratic errors of those who administer and score the tests (Terman, 1918). This is referred to as assessor bias (Hoyt & Kerns, 1999; Raudenbush & Sadoff, 2008).

Assessor bias is manifest where, for example, a psychologist will tend to drift from the standardized protocol for test administration (altering or ignoring stopping rules or verbal prompts, mishandling presentation of items and materials, etc.) and erroneously score test responses (failure to query ambiguous answers, giving too much or too little credit for performance, erring on time limits, etc.).

Author note: This article was published Online First November 4, 2013. Paul A. McDermott, Graduate School of Education, Quantitative Methods Division, University of Pennsylvania; Marley W. Watkins, Department of Educational Psychology, Baylor University; Anna M. Rhoad, Graduate School of Education, Quantitative Methods Division, University of Pennsylvania. This research was supported in part by U.S. Department of Education's Institute of Education Sciences Grant R05C050041-05. Correspondence concerning this article should be addressed to Paul A. McDermott, Graduate School of Education, Quantitative Methods Division, University of Pennsylvania, 3700 Walnut Street, Philadelphia, PA 19104-6216. E-mail: drpaul4@verizon.net

Psychological Assessment, 2014, Vol. 26, No. 1, 207–214. © 2013 American Psychological Association. DOI: 10.1037/a0034832
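The reliability-based standard error of measurement discussed above captures only content and time sampling error; assessor variance sits outside it. A minimal sketch of the standard computation follows. The SD of 15 matches the Wechsler composite metric, but the reliability value is an illustrative assumption, not a figure from the WISC–IV manual:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def confidence_band(observed: float, sd: float, reliability: float, z: float = 1.96):
    """Approximate 95% band around an observed score (z = 1.96)."""
    half_width = z * sem(sd, reliability)
    return observed - half_width, observed + half_width

# Hypothetical reliability of .97 on an IQ metric (mean 100, SD 15):
print(round(sem(15, 0.97), 2))         # about 2.6 points of error
print(confidence_band(100, 15, 0.97))  # roughly 94.9 to 105.1
```

Whatever the exact inputs, a band built this way presumes the error reflects child-level sampling error only; any score variance contributed by the examiner adds error this interval does not represent.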
Sometimes these errors appear sporadically and are limited to a given testing session, whereas other errors will tend to reside more systematically with given psychologists and generalize over a more pervasive mode of unconventional, error-bound testing practice. Administration and scoring biases, most especially pervasive types, undermine the purpose of testing. Their corrupting effects are exponentially more serious when testing purposes are high stakes, and there is abundant evidence that such biases will operate to distort major score interpretations, to change results of clinical trials, and to alter clinical diagnoses and special education classifications (Allard, Butler, Faust, & Shea, 1995; Allard & Faust, 2000; Franklin, Stillman, Burpeau, & Sabers, 1982; Mrazik, Janzen, Dombrowski, Barford, & Krawchuk, 2012; Schafer, De Santi, & Schneider, 2011).

Recently, Waterman, McDermott, Fantuzzo, and Gadsden (2012) demonstrated research designs to estimate the amount of systematic assessor bias variance carried by cognitive ability scores in early childhood. Well-trained assessors applying individually administered tests were randomly assigned to child examinees, whereafter each assessor tested numerous children. Conventional test-score internal consistency, stability, and generalizability were first supported (McDermott et al., 2009), and thereafter hierarchical linear modeling (HLM) was used to partition score variance into that part conveying children's actual individual differences (the relevant target phenomena in any high-stakes psychological assessment) and that part conveying assessor bias (also known as assessor variance; Waterman et al., 2012).
The technique was repeated for other high-stakes assessments in elementary school and on multiple occasions, each application revealing whether assessor variance was relatively trivial or substantial.

This article reports on the application of the Waterman et al. (2012) technique to WISC–IV assessments by regional school psychologists over a period of years. The sample comprises child examinees who were actually undergoing assessment for high-stakes special education classification and related clinical purposes. Whereas the study was designed to investigate the presence and extent of assessor bias variance, it was not designed to pinpoint the exact causes of that bias. Rather, multilevel procedures are used to narrow the scope of probable primary causes, and ancillary empirical analyses and interpretations are used to shed light on the most likely sources of WISC–IV score bias.

Method

Participants

Two large southwestern public school districts were recruited for this study by university research personnel, as regulated by Institutional Review Board (IRB) and respective school district confidentiality and procedural policies. School District 1 had an enrollment of 32,500 students and included 31 elementary, eight middle, and six high schools. Ethnic composition for the 2009–2010 academic year was 67.2% Caucasian, 23.8% Hispanic, 4.0% African American, 3.9% Asian, and 1.1% Native American. District 2 served 26,000 students in 2009–2010, with 16 elementary schools, three kindergarten-through-eighth-grade schools, six middle schools, five high schools, and one alternative school. Caucasian students comprised 83.1% of enrollments, Hispanic 10.5%, Asian 2.9%, African American 1.7%, and other ethnic minorities 1.8%.

Eight trained school psychology doctoral students examined approximately 7,500 student special education files and retrieved pertinent information from all special education files spanning the years 2003–2010, during which psychologists had administered the WISC–IV.
Although some special education files contained multiple periodic WISC–IV assessments, only those data pertaining to the first (or only) WISC–IV assessment for a given child were applied for this study; this was used as a measure to enhance comparability of assessment conditions and to avert sources of within-child temporal variance. Information was collected for a total of 2,783 children assessed for the first time via WISC–IV, that information having been provided by 448 psychologists over the study years, with 2,044 assessments collected through District 1 files and 739 through District 2 files. The assessments ranged from one to 86 per psychologist (M = 6.5, SD = 13.2). Characteristics of the examining psychologists were not available through school district files, nor was such information necessary for the statistical separation of WISC–IV score variance attributable to psychologists versus children.

Sample constituency for the 2,783 first-time assessments included 66.0% male children, 78.3% Caucasian, 13.0% Hispanic, 5.4% African American, and 3.3% other less represented ethnic minorities. Ages ranged from 6 to 16 years (M = 10.3 years, SD = 2.5), where English was the home language for 95.0% of children (Spanish the largest exception at 3.8%) and English was the primary language for 96.7% of children (Spanish the largest exception at 2.3%).

Whereas all children were undergoing special education assessment for the first time using the WISC–IV, 15.7% of those children had undergone prior psychological assessments not involving the WISC–IV (periodic assessments were obligatory under state policy). All assessments were deemed as high stakes, with a primary diagnosis of learning disability rendered for 57.6% of children, emotional disturbance for 11.6%, attention-deficit/hyperactivity disorder for 8.0%, intellectual disability for 2.6%, 12.1% with other diagnoses, and 8.0% receiving no diagnosis. Secondary diagnoses included 10.3% of children with speech impairments and 3.7% with learning disabilities.
Instrumentation

The WISC–IV features 10 core and five supplemental subtests, each with an age-blocked population mean of 10 and standard deviation of 3. The core subtests are used to form four factor indexes, where the Verbal Comprehension Index (VCI) is based on the Similarities, Vocabulary, and Comprehension subtests; the Perceptual Reasoning Index is based on the Block Design, Matrix Reasoning, and Picture Concepts subtests; the Working Memory Index (WMI) on the Digit Span and Letter–Number Sequencing subtests; and the Processing Speed Index (PSI) on the Coding and Symbol Search subtests. The FSIQ is also formed from the 10 core subtests. The factor indexes and FSIQ each retain an age-blocked population mean of 100 and standard deviation of 15. The supplemental subtests were not included in this study because their infrequent application precluded requisite statistical power for multilevel analyses.
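The two score metrics just described (subtests: mean 10, SD 3; factor indexes and FSIQ: mean 100, SD 15) can be related through simple standardization, sketched below. This is purely a metric conversion for orientation; actual WISC–IV composites are derived from normed sums of scaled scores, not from this arithmetic:

```python
def to_z(score: float, mean: float, sd: float) -> float:
    """Express a score as standard deviations from its metric's mean."""
    return (score - mean) / sd

def from_z(z: float, mean: float, sd: float) -> float:
    """Map a z value onto another metric."""
    return mean + z * sd

# A subtest scaled score of 13 lies one SD above the mean (z = 1.0),
# the same relative standing as 115 on the composite metric.
z = to_z(13, 10, 3)
print(z, from_z(z, 100, 15))  # 1.0 115.0
```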
Analyses

The eight school psychology doctoral students examined each special education case file and collected WISC–IV scores, assessment date, child demographics, consequent psychological diagnoses, and identity of the examining psychologist. Following IRB and school district requirements, the identity of participating children and psychologists was concealed before data were released to the researchers. Because test protocols were not accessible, nor had standardized observations of test sessions been conducted, it was not possible to determine whether specific scoring errors were present, nor to associate psychologists with specific error types. Rather, test score variability was analyzed via multilevel linear modeling as conducted through SAS PROC MIXED (SAS Institute, 2011).

As a preliminary step to identify the source(s) of appreciable score nesting, a three-level unconditional one-way random-effects HLM model was tested for the FSIQ score and each respective factor index and subtest score, where Level 1 modeled score variance between children within psychologists, Level 2 modeled score variance between psychologists within school districts, and Level 3 modeled variance between school districts. This series of analyses sought to determine whether sufficient score variation existed between psychologists and whether this was related to school district affiliation.
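The variance-partitioning logic of these unconditional models can be illustrated with a two-level sketch. The authors fit such models in SAS PROC MIXED; the pure-Python version below instead uses the classical one-way random-effects ANOVA (method-of-moments) estimator on simulated balanced data, with made-up variance components chosen so that roughly 16% of the variance lies between "psychologists":

```python
import random
import statistics

def icc_oneway(groups):
    """Intraclass correlation from balanced nested data via one-way
    random-effects ANOVA: var_between / (var_between + var_within)."""
    k = len(groups)                # clusters (psychologists)
    n = len(groups[0])             # observations per cluster (children)
    grand = statistics.mean(x for g in groups for x in g)
    means = [statistics.mean(g) for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    var_between = max((msb - msw) / n, 0.0)
    return var_between / (var_between + msw)

# Simulated caseloads: assessor effects with variance 36, child-level
# variance of about 189, so the true ICC is near .16.
random.seed(1)
data = [[random.gauss(100, 13.75) + b for _ in range(20)]
        for b in [random.gauss(0, 6) for _ in range(200)]]
icc = icc_oneway(data)
print(round(icc * 100, 1), "% of variance between psychologists")
```

In the article's notation, this ICC multiplied by 100 is the "percentage of variance between psychologists" reported for each score.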
A second series of multilevel models examined the prospect that, because all data had been filtered through a process involving eight different doctoral students, perhaps score variation was affected by the data collection mechanism as distinguished from the psychologists who produced the data. Here, an unconditional cross-classified model was constructed for FSIQ and each factor index and subtest score, with score variance dually nested within doctoral student data collectors and examining psychologists.

Setting aside alternative hypotheses regarding influence of data collectors and school districts, each IQ measure was examined through a two-level unconditional HLM model in which Level 1 represented variation between children within examining psychologists and Level 2 variation between psychologists. The intraclass correlation was derived from the random coefficient for intercepts associated with each model and thereafter converted to a percentage of score variation between psychologists and between children within psychologists.

Because psychologists were not assigned randomly to assess given children (assignment will normally vary as a function of random events, but also as related to which psychologists may more often be affiliated with certain child age cohorts, schools, educational levels, etc.), it seemed reasonable to hypothesize that such nonrandom assignment would potentially result in some systematic characterization of those students assessed by given psychologists. Thus, any systematic patterns of assignments by child demographics could somehow homogenize IQ score variation within psychologists. To ameliorate this potential, each two-level unconditional model was augmented by addition of covariates including child age, sex, ethnicity (minority vs. Caucasian), child primary language (English as a secondary language vs. English as a primary language), and their interactions.
The binary covariates were transformed to reflect the percentage of children manifesting a given demographic characteristic as associated with each psychologist, and all the covariates were grand-mean recentered to capture (and control) differences between psychologists (Hofmann & Gavin, 1998). Covariates were added systematically to the model for each IQ score so as to minimize Akaike's information criterion (AIC; as recommended by Burnham & Anderson, 2004), and only statistically significant effects were permitted to remain in final models (although nonsignificant main effects were permitted to remain in the presence of their significant interactions). Whereas final models were tested under restricted maximum-likelihood estimation, and are so reported, the overall statistical consequence of the covariate augmentation for each model was tested through likelihood ratio deviance tests contrasting each respective unconditional and final conditional model under full maximum-likelihood estimation (per Littell, Milliken, Stroup, Wolfinger, & Schabenberger, 2006). In essence, the conditional models operated to correct estimates of between-psychologists variance (obtained through the initial unconditional models) for the prospect that some of that variance was influenced by the nonrandom assignment of psychologists to children.

Results

A preliminary unconditional HLM model was applied for FSIQ and each respective factor index and subtest score, where children were nested within psychologists and psychologists within school districts. The coefficient for random intercepts of children nested within psychologists was statistically significant for almost all models, but the coefficient for psychologists nested within districts was nonsignificant for every model. Similarly, a preliminary multilevel model for each IQ score measured cross-classified children nested within data collectors as well as psychologists.
No model produced a statistically significant effect for collectors, whereas most models evinced a significant effect for psychologists. Therefore, school district and data collection effects were deemed inconsequential, and subsequent HLM models tested a random intercept for nesting within psychologists only.

For each IQ score, two-level unconditional and conditional HLM models were constructed, initially testing the presence of psychologist assessor variance and thereafter controlling for differences in child age, sex, ethnicity, language status, and their interactions. Table 1 reports the statistical significance of the assessor variance effect for each IQ score and the estimated percentage of variance associated exclusively with psychologists versus children's individual differences. The last column indicates the statistical significance of the improvement of the conditional model (controlling for child demographics) over the unconditional model for each IQ measure. Where these values are nonsignificant, understanding is enhanced by interpreting percentages associated with the unconditional model, and where values are significant, interpretation is enhanced by percentages associated with the conditional model. Following this logic, the percentages preferred for interpretation are those of whichever model the last column favors.

The conditional models (which control for child demographics) make a difference for FSIQ, VCI (especially its Similarities subtest), WMI, and PSI (especially its Coding subtest) scores. This suggests at least that the nonrandom assignment of school psychologists to children may result in imbalanced distributions of children by their age, sex, ethnicity, and language status. This in itself is not problematic and likely reflects the realities of requisite quasi-systematic case assignment within school districts. Thus, psychologists will be assigned partly on the basis of their familiarity with given schools, levels of expertise with age cohorts, travel convenience, and school district administrative divisions—all factors that would tend to militate demographic differences across case loads. The conditional models accommodate for that prospect. At the same time, it should be recognized that the control mechanisms in the conditional models are also probably overly conservative because they will inadvertently control for assessor bias arising as a function of children's demographic characteristics (race, sex, etc.) unrelated to case assignment methods.

Considering the major focus of the study (identification of that portion of IQ score variation that, without mitigation, has nothing to do with children's actual individual differences), the FSIQ and all four factor index scores convey significant and nontrivial (viz., ≥ 5%) assessor bias. More troubling, bias for FSIQ (12.5%) and VCI (10.0%) is substantial (≥ 10%). Within VCI, the Vocabulary subtest (14.3% bias variance) and Comprehension subtest (10.7% bias variance) are the primary culprits, each conveying substantial bias.
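As the Table 1 footnotes state, these percentages are simply the model ICC rescaled; a trivial helper makes the arithmetic explicit:

```python
def variance_split(icc: float):
    """Table 1 arithmetic: ICC x 100 is the percentage of score variance
    between psychologists; (1 - ICC) x 100 is the percentage between children."""
    return round(icc * 100, 1), round((1 - icc) * 100, 1)

# FSIQ under the conditional model: a residual ICC of .125 puts 12.5% of
# score variance between psychologists and 87.5% between children.
print(variance_split(0.125))  # (12.5, 87.5)
```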
Further problematic, under PSI, the Symbol Search subtest is laden with substantial bias variance (12.7%). On the positive side, the Matrix Reasoning subtest involves no statistically significant bias (2.8%). Additionally, the Coding subtest, although retaining a statistically significant amount of assessor variance, essentially yields a trivial (< 5%) amount of such variance (4.4%). (Note that the 5% criterion for deeming hierarchical cluster variance as practically inconsequential comports with the convention recommended by Snijders & Bosker, 1999, and Waterman et al., 2012.)

Table 1
Percentages of Score Variance Associated With Examiner Psychologists Versus Children's Individual Differences on the Wechsler Intelligence Scale for Children—Fourth Edition

                                         Unconditional models (a)   Conditional models (b)
IQ score                        N        % psych.     % children    % psych.    % children    Difference (p) (c)
Full Scale IQ                 2,722        16.2          83.8         12.5         87.5            .0049
Verbal Comprehension Index    2,783        14.0          86.0         10.0         90.0           <.0001
Similarities                  2,551        10.6          89.4          7.4         92.6            .0069
Vocabulary                    2,538        14.3          85.7         10.4         89.6             ns
Comprehension                 2,524        10.7          89.3          9.9         90.1             ns
Perceptual Reasoning Index    2,783         7.1          92.9          5.7         94.3             ns
Block Design                  2,544         5.3          94.7          3.8         96.2             ns
Matrix Reasoning              2,520         2.8          97.2          2.4         97.6             ns
Picture Concepts              2,540         5.4          94.6          4.9         95.1             ns
Working Memory Index          2,782         9.8          90.2          8.3         91.7            .002
Digit Span                    2,548         7.8          92.2          7.5         92.5             ns
Letter–Number Sequencing      2,486         5.2          94.8          4.2         95.8             ns
Processing Speed Index        2,778        12.6          87.4          7.6         92.4           <.0001
Coding                        2,528         9.2          90.8          4.4         95.6           <.0001
Symbol Search                 2,521        12.7          87.3          9.9         90.1             ns

(a) Entries for percentage of variance between psychologists equal ICC × 100 as derived in hierarchical linear modeling. Percentages of variance between children equal (1 − ICC) × 100. The entries preferred for interpretation are those under the unconditional model when the difference in the last column is nonsignificant, and those under the conditional model when it is significant. Model specification is Y_ij = γ00 + u0j + r_ij, where i indexes children within psychologists and j indexes psychologists. Significance tests indicate statistical significance of the random coefficient for psychologists, where p values > .01 are considered nonsignificant. ICC = intraclass correlation coefficient.
(b) Entries for percentage of variance between psychologists equal residual ICC × 100 as derived in hierarchical linear modeling, incorporating statistically significant fixed effects for child age, sex, ethnicity, language status, and their interactions. Percentages of variance between children equal (1 − residual ICC) × 100. Model specification is Y_ij = γ00 + γ01(MeanAge_j) + γ02(MeanPercentMale_j) + γ03(MeanPercentMinority_j) + γ04(MeanPercentESL_j) + γ05(MeanAge_j)(MeanPercentMale_j) + . . . + u0j + r_ij, where i indexes children within psychologists, j indexes psychologists, and nonsignificant terms are dropped from models. Significance tests indicate statistical significance of the residualized random coefficient for psychologists, where p values > .01 are considered nonsignificant.
(c) Values are based on tests of the deviance between −2 log likelihood estimates for respective unconditional and conditional models under full maximum-likelihood estimation. ps > .01 are considered nonsignificant (ns).
* p < .01. ** p < .001. *** p < .0001.

Discussion

The degree of assessor bias variance conveyed by FSIQ and VCI scores effectively vitiates the usefulness of those measures for differential diagnosis and classification, particularly in the vicinity of the critical cut points ordinarily applied for decision making. That is, to the extent that decisions on mental deficiency and intellectual giftedness will depend on discovery of FSIQs ≤ 70 or ≥ 130, respectively, or that ability-achievement discrepancies (whether based on regression modeling or not) will depend on accurate measurement of the FSIQ, those decisions cannot be