Evaluation of Competency to Stand Trial – Revised (ECST-R)

The ECST-R is a semistructured interview designed to assess psycholegal domains relevant to the legal standard for competency to stand trial as propounded in Dusky v. United States (1960). In addition, the ECST-R provides a systematic screening for feigned incompetency to stand trial. There are four scales: 1) Factual Understanding of the Courtroom Proceedings (FAC), 2) Rational Understanding of the Courtroom Proceedings (RAC), 3) Consult with Counsel (CWC), and 4) Overall Rational Ability (Rational).

The ECST-R is appropriate for use with individuals ages 18 and older who are involved in adult proceedings. It was validated on defendants with a range of cognitive abilities; most defendants with functional intelligence in the borderline range or the upper level of mild mental retardation (i.e., IQs of 60-69) can be tested with the ECST-R. The ECST-R also contains 28 items yielding scores on five Atypical Presentation scales that screen for feigned incompetency: Realistic (ATP-R), Psychotic (ATP-P), Nonpsychotic (ATP-N), Impairment (ATP-I), and Both Psychotic and Nonpsychotic (ATP-B).

Benefits of the ECST-R include congruence with the Dusky standard, established construct validity, admissibility under the Daubert standard, and systematic screening for feigned incompetency.

Dr. Marvin Acklin comments: “The ECST-R, developed by Richard Rogers and his colleagues, is a hybrid interview organized into separate semistructured and unstructured components, designed for use ‘as a validated psychological measure for competency to stand trial and closely related psycholegal constructs’ (Rogers, Tillbrook, & Sewell, 2004).  The measure is designed for individuals 18 years of age or older, for individuals with IQs greater than 60, with English-speaking populations.  The measure provides a number of scales derived from the Dusky standard: ability to consult with counsel, factual understanding of court proceedings, rational understanding of courtroom proceedings, and reflecting Rogers’s ongoing interest, atypical presentation, which assesses response style and potential attempts to feign incompetence. Here too the psychometric properties of the measure are quite strong and the manual details the research foundations of the measure.”  Marvin W. Acklin, Ph.D. (December 13, 2010, www.hawaiiforensicpsychology.com)

Review of the Evaluation of Competency to Stand Trial-Revised
Leark, R. A. (2005).
In R. A. Spies & B. S. Plake (Eds.), The sixteenth mental measurements yearbook. Retrieved from http://marketplace.unl.edu/buros/

DESCRIPTION
The Evaluation of Competency to Stand Trial-Revised (ECST-R) is an 18-item semistructured interview with an additional 28 items that provide a screen for feigned incompetency. The semistructured interview items yield scores for three scales that form the basis of the decision as to whether the individual is competent to stand trial: Factual Understanding of the Courtroom Proceedings (FAC), Rational Understanding of the Courtroom Proceedings (RAC), and Consult with Counsel (CWC), plus a composite Overall Rational Ability (Rational) scale. The additional 28 items yield five scales measuring Atypical Presentation response style: Realistic (ATP-R), Psychotic (ATP-P), Nonpsychotic (ATP-N), Impairment (ATP-I), and Both Psychotic and Nonpsychotic (ATP-B).

The target population for the ECST-R is persons age 18 and older. The test is to be used by professionals who are licensed to practice independently within their state and who have specialized training in forensic evaluations. The test authors stress that the test user must have undergone supervised training in conducting forensic evaluations. In addition, the authors urge the user to remain current in the legal standards for the jurisdiction to which the evaluation will apply. Further, the ECST-R has not been validated for defendants with tested IQ scores of less than 60. Finally, the ECST-R was standardized with English-speaking defendants.

The administration and scoring of the ECST-R rest upon the information gleaned from the initial semistructured interview (18 items) and the responses to the structured interview format for the response style measures (28 items). For the competency scales, the evaluator poses specific standardized questions; following each standardized question, the evaluator may ask further questions to probe or clarify information. The CWC scale is composed of six scores based upon 15 ratings that address the nature and quality of the attorney-client relationship. Each of the six test items has multiple rating questions. These rating questions use a Likert-type format of 0 (not observed), 1 (questionable clinical significance), 2 (mild impairment, unrelated to competency), 3 (moderate impairment, peripherally related to competency; will affect but not impair competency), and 4 (severe impairment, directly related to competency; will substantially impair competency). The CWC scale score is the sum of the 15 specific ratings. The FAC scale uses six scores based upon 16 ratings that focus on the defendant’s knowledge of the courtroom proceedings and the specific roles of the persons within the courtroom. The specific item ratings vary by question, and the total FAC scale score is the sum of these ratings. The RAC scale comprises seven scores based on 11 ratings that measure the defendant’s decision-making capacity on a variety of matters that could arise over the course of a trial. The total RAC score equals the sum of these ratings. The Rational score is found by adding the CWC and RAC scores.

The 28 response style items are scored as 0 (no), 1 (sometimes, a qualified yes), or 2 (yes) for the ATP-R, ATP-P, and ATP-N scales. The ATP-I items are scored as either 0 (nonimpaired) or 1 (impaired). The total for each response style scale is simply the sum of its ratings.
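
Because each of the scale scores described in the preceding two paragraphs reduces to simple summation, the arithmetic can be illustrated with a brief sketch. The Python fragment below is only an illustration of that summation logic, not the publisher’s scoring materials; the helper name and the item ratings shown are hypothetical, and the rating counts follow the review’s description (15 CWC, 16 FAC, and 11 RAC ratings).

    # Illustrative sketch of the ECST-R raw-score arithmetic described above.
    # Not the publisher's scoring software; the ratings below are hypothetical.

    def scale_raw_score(ratings):
        """Sum the individual item ratings for one scale."""
        return sum(ratings)

    # Competency scale ratings use the 0-4 format
    # (0 = not observed ... 4 = severe impairment, directly related to competency).
    cwc_ratings = [0, 1, 0, 2, 0, 0, 1, 0, 0, 0, 3, 0, 0, 1, 0]     # 15 CWC ratings
    fac_ratings = [0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0]  # 16 FAC ratings
    rac_ratings = [0, 0, 2, 0, 0, 1, 0, 0, 0, 0, 0]                 # 11 RAC ratings

    cwc = scale_raw_score(cwc_ratings)   # CWC raw score
    fac = scale_raw_score(fac_ratings)   # FAC raw score
    rac = scale_raw_score(rac_ratings)   # RAC raw score
    rational = cwc + rac                 # Overall Rational Ability = CWC + RAC

    # ATP (response style) items are rated 0 (no), 1 (a qualified yes), or 2 (yes),
    # except ATP-I items, which are rated 0 (nonimpaired) or 1 (impaired).
    atp_p = scale_raw_score([0, 1, 0, 2, 0, 0])   # hypothetical ATP-P ratings

    print(cwc, fac, rac, rational, atp_p)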

Each of the summated scores is then plotted on the ECST-R profile form, which converts the raw scores into linear T-scores. Each T-score falls into one of four levels of impairment: Moderate (60 to 69T), Severe (70 to 79T), Extreme (80 to 89T), and Very Extreme (90T and higher). The competency scales can further be scored for four levels of certitude: Preponderant (more likely than not, >50% likelihood), Probable (84.1% likelihood), Very Probable (95.0% likelihood), and Definite (98.0% likelihood). The linear T-scores can also be converted into percentile ranks if desired.
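
As a reading aid, the impairment bands just described can be expressed as a simple threshold lookup. The sketch below assumes a scale T-score has already been obtained from the profile form; the raw-score-to-T conversion itself depends on the manual’s normative tables and is not reproduced here, and the function name is hypothetical.

    # Illustrative mapping of an ECST-R scale T-score onto the impairment bands
    # described above; a reading aid only, not a substitute for the profile form.

    def impairment_level(t_score):
        """Return the impairment band for a linear T-score."""
        if t_score >= 90:
            return "Very Extreme"
        if t_score >= 80:
            return "Extreme"
        if t_score >= 70:
            return "Severe"
        if t_score >= 60:
            return "Moderate"
        return "Below the moderate-impairment cutoff"

    print(impairment_level(73))   # prints "Severe"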

DEVELOPMENT
The ECST-R is based upon congruence with the Dusky standard articulated in the United States Supreme Court’s ruling (Dusky v. United States, 1960). This standard rests upon a single sentence: “The test must be whether he has sufficient present ability to consult with his lawyer with a reasonable degree of rational understanding – and whether he has a rational as well as factual understanding of the proceedings against him” (Dusky v. United States, 1960, p. 789). The initial version of the test, the ECST, was developed to meet the two-prong objectives of the Dusky standard and to provide a standardized format for this assessment (Rogers, 1995). An expert-based rating was initially used to score the original test items, which were derived from Rogers’s review of appellate decisions and the forensic literature. This prototypical analysis allowed for the selection of items most consistent with the prongs of the Dusky standard. The authors then used confirmatory factor analysis (CFA) to test three competing models: discrete abilities, domains, and cognitive complexity (Rogers, Jackson, Sewell, Tillbrook, & Martin, 2003). The authors further tested their model using samples of competency cases, feigned incompetency cases, mentally disordered offenders, and jail detainees. The use of these samples permitted the refinement of the items as well as analyses of the predictive accuracy of the scales. These analyses yielded the current revised version of the test.

TECHNICAL
The manual provides extensive information concerning the standardization of the ECST-R. Following the derivation of the original test items and the initial analyses, the authors further refined the instrument. The standardized semistructured interview questions were reformulated into simpler, easier-to-understand items; for example, the CWC questions have an average length of 7.73 words, the FAC questions 7.22 words, and the RAC questions 8.09 words. The CWC items were then reviewed to assure low face validity by keeping them intentionally general. The FAC items assess whether the defendant can accurately identify the courtroom personnel and their functions. The RAC items were written to apply to most criminal cases and to address potential trial outcomes (Rogers, Tillbrook, & Sewell, 2004). The items comprising the feigned-incompetency screen were evaluated using the known-groups comparison method (Rogers, 1997). The Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1992) was used to classify 87 participants, drawn from a consecutive referral sample from a jail mental health unit, into a probable feigning group (n = 22) and a clinical group (n = 65). The Atypical Presentation scales yielded overall classification hit rates of .67 (overall ATP score), .63 (ATP-P), .62 (ATP-N), and .70 (ATP-I) (Rogers, Sewell, Grandjean, & Vitacco, 2002). The ATP scales were further validated in a separate study using participants from a competency restoration program (n = 96) and a clinical comparison group (n = 56) from a different prison mental health unit. Improved hit rates were reported: ATP-P (.77), ATP-N (.67), ATP-B (.77), and ATP-I (.86) (Rogers, Jackson, Sewell, & Harrison, 2004).

The manual provides tables of alpha reliability coefficients as follows: FAC (.87), RAC (.89), CWC (.83), Rational (.93), ATP-R (.63), ATP-P (.79), ATP-N (.70), ATP-I (.87), and ATP-B (.86), based upon a sample of 411. Given that the instrument is a semistructured interview, interrater reliabilities are of critical importance. The manual reports the following interrater reliability coefficients for the ECST-R scales: CWC (.91), FAC (.96), RAC (.91), Rational (.96), ATP-R (1.0), ATP-P (1.0), ATP-N (.98), ATP-I (1.0), and ATP-B (1.0), using a sample of 99. Average interrater reliability coefficients for the individual ECST-R items are also reported: CWC (.69), FAC (.90), RAC (.72), and Rational (.77), using the same sample. Test-retest reliability poses a distinct issue given the nature of the construct being measured, namely current competency (or incompetency), and the question of how stable such estimates are over any specific time period. The authors cite a limited sample of 29 detainees who were retested at a 1-week interval; a professional, blind to the fact that the detainee had been previously evaluated, conducted the second administration of the ECST-R. The overall concordance between evaluations was reported as CWC (.98), FAC (.83), and RAC (.99), indicating seemingly stable estimates. The ATP scales pose a separate matter, as they are designed to detect feigned presentations; the concordance rates reported were ATP-P (.84), ATP-N (.79), ATP-I (.86), and ATP-B (.79). Overall, the ECST-R scales demonstrate good internal consistency, superior interrater reliability, and stable test-retest reliability.

The content validity of the ECST-R was initially documented by the use of test items gleaned from appellate reviews and the forensic psychology literature. The use of the known-experts method of analyzing items further aided this process, and ancillary analyses using trial judges and forensic experts further increased item content validity.

Construct validity is a demanding task for most measures, and more so for one that purports to measure behavior with a low base rate (i.e., incompetency). To address this, Rogers, Grandjean, Tillbrook, Vitacco, and Sewell (2001) used an exploratory factor analysis that yielded two factors. A second analysis used confirmatory factor analysis to test how well the items fit the three Dusky prongs. The authors note that a three-factor discrete abilities model evidenced a good fit in this multivariate analysis. The authors also note that the factor loadings were “very robust (i.e., >= .60) with an overall mean loading of .72” (Rogers, Tillbrook & Sewell, 2004, p. 136). Criterion-related validity was analyzed using the independent-experts method, which resulted in an overall mean hit rate of .82. In summary, the ECST-R has demonstrated evidence for construct and criterion-related validity.

COMMENTARY
The test authors have taken on a complex issue: the standardization of legal terms into traditional, psychometrically sound instrumentation. In addition, the authors have attempted to assess a behavior that has a low base rate in the general population, namely incompetence. For this effort the authors must be applauded.

Because the ECST-R is designed to be used in a limited forensic arena to address specific legal issues, the normative data do not meet, and are not intended to meet, the sampling expectations applied to the majority of psychological tests. Specifically, the normative sample consists of jail and prison detainees who were either referred for competency evaluations or drawn from other offender samples that could be referred for such evaluations. This yielded an aggregate sample of 444, of whom 355 were male. The normative data used for the linear T-score transformations come from a more restricted sample: offenders with “genuine impairment” (Rogers, Tillbrook & Sewell, 2004, p. 118), totaling only 356 offenders. Thus, some may view the test as less than adequate in matching a United States Census-based sampling model. Given the rather specific nature of the instrument, however, this may not be an overwhelming problem.

SUMMARY
Overall, the authors have done a rather thorough job in creating test items that focus on key legal issues. They have added to this by developing the items into meaningful scales that address very specific legal issues (although these are still framed in rather generalized legal terms). Finally, the authors have attempted to develop this method into a test that meets standards of technical quality.

Users must be careful not to generalize findings beyond the original intent of this instrument. It has a limited range of use (i.e., specifically for competency-based issues), and generalization to other psychological constructs is not warranted. Further, it is clearly meant to be used with individuals, primarily male, who have demonstrated IQ scores above 60.

REVIEWER’S REFERENCES

  • Dusky v. United States, 362 U.S. 402 (1960).
  • Rogers, R. (1995). Evaluation of Competency to Stand Trial (ECST). Unpublished test, University of North Texas, Denton.
  • Rogers, R. (Ed.). (1997). Clinical assessment of malingering and deception (2nd ed.). New York: Guilford.
  • Rogers, R., Bagby, R. M., & Dickens, S. E. (1992). Structured Interview of Reported Symptoms (SIRS). Odessa, FL: Psychological Assessment Resources.
  • Rogers, R., Grandjean, N. R., Tillbrook, C. E., Vitacco, M. J., & Sewell, K. W. (2001). Recent interview-based measures of competency to stand trial: A critical review augmented with research data. Behavioral Sciences and the Law, 19, 503-518.
  • Rogers, R., Sewell, K. W., Grandjean, N. R., & Vitacco, M. J. (2002). The detection of feigned mental disorders on specific competency measures. Psychological Assessment, 14, 177-183.
  • Rogers, R., Jackson, R. L., Sewell, K. W., Tillbrook, C. E., & Martin, M. A. (2003). Assessing dimensions of competency to stand trial: Construct validation of the ECST-R. Assessment, 10(4), 344-351.
  • Rogers, R., Tillbrook, C. E., & Sewell, K. W. (2004). Evaluation of Competency to Stand Trial-Revised professional manual. Lutz, FL: Psychological Assessment Resources, Inc.
  • Rogers, R., Jackson, R. L., Sewell, K. W., & Harrison, K. S. (2004). An examination of the ECST-R as a screen for feigned incompetency to stand trial. Psychological Assessment, 16(2), 139-145.

Research on the Evaluation of Competency to Stand Trial – Revised

The Forensic Clinician’s Toolbox I: A review of competency to stand trial (CST) instruments.
Acklin, Marvin W.
Journal of Personality Assessment, Vol 94(2), Mar, 2012. pp. 220-222.
Abstract:
This article presents a review of competency to stand trial (CST) instruments. There are a number of measures informally and commercially available for evaluating CST. Basic competence includes understanding of charges, the concept of a criminal defense, knowledge of judicial concepts and procedures, and general ability to work with defense counsel. Decisional competence, on the other hand, assesses the quality of the defendant’s reasoning process. CST evaluations typically involve a clinical and forensic interview, and administration of clinical, forensically relevant, and forensic assessment instruments depending on the evaluator’s strategy. The author focuses on three instruments: the MacArthur Competence Assessment Tool–Criminal Adjudication (MacCAT–CA), the Evaluation of Competency to Stand Trial–Revised (ECST–R), and the Inventory of Legal Knowledge (ILK). They are complex measures that require a thorough understanding of the underlying legal standards, constructs, rationale, and administration, scoring, and interpretation procedures. Integration of these measures into the forensic clinician’s regular practice will require study of the measures, repeat administrations, and careful consideration of their role in the clinician’s assessment strategy and reporting.

An investigation of the ECST-R as a measure of competence and feigning.
Norton, Kerri A. and Ryba, Nancy L.
Journal of Forensic Psychology Practice, Vol 10(2), Mar, 2010. pp. 91-106.
Abstract:
Competency to stand trial is the most commonly raised psycholegal issue. Evaluations of a defendant’s competency must be as accurate and complete as possible, and clinicians must be careful to screen for feigned incompetence. The Evaluation of Competency to Stand Trial-Revised (ECST-R), a recently developed competency assessment instrument, assesses the constructs of both competence and feigning. The present study provides further validation research on the ECST-R by comparing the performance of honest responders and coached feigners. Results support the discriminant validity of the ECST-R and homogeneity of individual scales. This study supports use of the competency scales and provides some support for the use of the feigning scales, although some caution is advised.

Evaluating competency to stand trial with evidence-based practice.
Rogers, Richard, and Johansson-Love, Jill
Journal of the American Academy of Psychiatry and the Law, Vol 37(4), Dec, 2009. Special issue: Daubert and EBP. pp. 450-460.
Abstract:
Evaluations for competency to stand trial are distinguished from other areas of forensic consultation by their long history of standardized assessment beginning in the 1970s. As part of a special issue of the Journal on evidence-based forensic practice, this article examines three published competency measures: the MacArthur Competence Assessment Tool-Criminal Adjudication (MacCAT-CA), the Evaluation of Competency to Stand Trial-Revised (ECST-R), and the Competence Assessment for Standing Trial for Defendants with Mental Retardation (CAST-MR). Using the Daubert guidelines as a framework, we examined each competency measure regarding its relevance to the Dusky standard and its error and classification rates. The article acknowledges the past polarization of forensic practitioners on acceptance versus rejection of competency measures. It argues that no valuable information, be it clinical acumen or standardized data, should be systematically ignored. Consistent with the American Academy of Psychiatry and the Law Practice Guideline, it recommends the integration of competency interview findings with other sources of data in rendering evidence-based competency determinations.

An investigation of the ECST-R in male pretrial patients: Evaluating the effects of feigning on competency evaluations.
Vitacco, Michael J., et al.
Abstract:
Forensic clinicians have the option of employing well-validated structured interviews when conducting competency to stand trial (CST) evaluations to ensure adequate coverage of the three prongs delineated in Dusky v. United States. This study evaluates the effects of feigning on the Evaluation of Competency to Stand Trial—Revised (ECST-R) in a sample of 100 male defendants undergoing CST evaluations. The ECST-R competency scales are reliable, with good alpha coefficients and interrater reliabilities, and differentiate patients found competent from those found not competent. The current study suggests that feigning may bridge both psychopathology and cognitive abilities and that clinicians should consider each when conducting CST evaluations. These results are discussed in the context of conducting comprehensive evaluations integrating response style assessments in CST evaluations.

The effects of test-strategy coaching on measures of competency to stand trial.
Springman, Rachael E. and Vandenberg, Brian R.
Journal of Forensic Psychology Practice, Vol 9(3), Jul, 2009. pp. 179-198.
Abstract:
This study examined whether individuals who are coached to malinger can elude detection on measures assessing competency to stand trial. Participants consisted of 92 undergraduates (65 females and 27 males; mean age, 23.5 years) who were randomly assigned to control (honest responders), uncoached malingerer (feign incompetency without tips), and coached malingerer (feign incompetency with tips) groups. Participants were presented with a hypothetical criminal case scenario that required them to undergo an evaluation of their competency to stand trial, assessed by the Georgia Court Competency Test and the Evaluation of Competency to Stand Trial-Revised. The results indicated that the two malingering groups appeared significantly impaired on overall competency scores in comparison to the control group. Furthermore, the two malingering groups appeared significantly elevated on malingering scale scores in comparison to the control group. No differences were found between the uncoached and coached malingering groups on the competency and malingering scale scores. Both malingering scales effectively discriminated between malingerers and honest responders.

An evaluation of malingering screens with competency to stand trial patients: A known-groups comparison.
Vitacco, Michael J., et al.
Law and Human Behavior, Vol 31(3), Jun, 2007. pp. 249-260.
Abstract:
The assessment of malingering is a fundamental component of forensic evaluations that should be considered with each referral. In systematizing the evaluation of malingering, one option is the standardized administration of screens as an initial step. The current study assessed the effectiveness of three common screening measures: the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001), the Structured Inventory of Malingered Symptomatology (SIMS; Widows & Smith, 2004), and the Evaluation of Competency to Stand Trial-Revised Atypical Presentation Scale (ECST-R ATP; Rogers, Tillbrook, & Sewell, 2004). Using the Structured Interview of Reported Symptoms (SIRS) as the external criterion, 100 patients involved in competency to stand trial evaluations were categorized as either probable malingerers (n = 21) or nonmalingerers (n = 79). Each malingering scale produced robust effect sizes in this known-groups comparison. Results are discussed in relation to the comprehensive assessment of malingering within a forensic context.

An Examination of the ECST-R as a Screen for Feigned Incompetency to Stand Trial.
Rogers, Richard, et al.
Psychological Assessment, Vol 16(2), Jun, 2004. pp. 139-145.
Abstract:
Psychological assessments of competency-to-stand-trial (CST) referrals must consider whether the defendants’ impairment is genuine or feigned. This study addressed feigning on the Evaluation of Competency to Stand Trial–Revised (ECST-R), a standardized interview designed for assessing dimensions of CST and screening for feigned CST. In particular, this study examined the effectiveness of the ECST-R’s Atypical Presentation (ATP) scales as screens for feigned incompetency. It examined ATP scales for (a) jail detainees (n=96) in simulation and control conditions and (b) inpatient competency cases (n=56) in clinical comparison and probable malingering groups. Comparisons of ATP scales yielded very large effect sizes for feigners when compared with jail controls (mean d=2.50) and genuine inpatient competency cases (mean d=1.83). Several cut scores were established with very few false negatives and robust sensitivity estimates. In summary, the ECST-R ATP scales appear to be homogeneous scales with established clinical use as feigning screens in CST evaluations.

Assessing dimensions of competency to stand trial: Construct validation of the ECST-R.
Rogers, Richard, et al.
Assessment, Vol 10(4), Dec, 2003. Special issue: Psychological and neuropsychological assessment in the forensic setting. pp. 344-351.
Abstract:
Four decades of forensic research have left unanswered a fundamental issue regarding the best conceptualization of competency to stand trial vis-à-vis the Dusky standard. The current study investigated three competing models (discrete abilities, domains, and cognitive complexity) on combined data (N=411) from six forensic and correctional samples. Using the Evaluation of Competency to Stand Trial-Revised (ECST-R), items representative of the Dusky prongs were used to test the three models via maximum-likelihood confirmatory factor analyses (CFA). Of the three, only the discrete abilities model evidenced a good fit, indicating that competency to stand trial should consider separately each defendant’s factual understanding of the proceedings, rational understanding of the proceedings, and ability to consult with counsel. ECST-R competency scales, based on the current CFA, have excellent alphas (.83 to .89) and interrater reliabilities (.97 to .98).

The detection of feigned mental disorders on specific competency measures.
Rogers, Richard, et al.
Psychological Assessment, Vol 14(2), Jun, 2002. pp. 177-183.
Abstract:
Psychologists have standardized competency-to-stand-trial (CST) assessments through the development of specialized CST measures. However, their research has largely neglected the possibility that CST measures may be stymied by feigning mental disorders and concomitant impairment. The current study is the first systematic examination of (a) how feigned mental disorders may affect CST measures and (b) which scales are effective at identifying feigned cases. Bona fide patients (n=65) were compared with suspected malingerers (n=22) on 3 CST measures: the Georgia Court Competency Test (GCCT), the MacArthur Competence Assessment Tool-Criminal Adjudication, and the Evaluation of Competency to Stand Trial-Revised (ECST-R). Results indicated that these CST measures are vulnerable to feigning. The development of specialized GCCT and ECST-R scales yielded moderately effective screens for feigned mental disorders in the context of CST evaluations.

Recent interview-based measures of competency to stand trial: A critical review augmented with research data.
Rogers, Richard, et al.
Behavioral Sciences & the Law, Vol 19(4), 2001. pp. 503-518.
Abstract:
Forensic experts are frequently asked to conduct competency-to-stand-trial evaluations and address the substantive prongs propounded in Dusky v. United States (1960). In understanding its application to competency evaluations, alternative conceptualizations of Dusky are critically examined. With Dusky providing the conceptual framework, three interview-based competency measures are reviewed: the Georgia Court Competency Test, the MacArthur Competence Assessment Tool—Criminal Adjudication (MacCAT-CA), and the Evaluation of Competency to Stand Trial—Revised (ECST-R). This review has a twin focus on reliability of each measure and its correspondence to Dusky prongs. The current review is augmented by new factor analytic data on the MacCAT-CA and ECST-R. The article concludes with specific recommendations for competency evaluations.