Monitoring Variations in the Use of Automated Contouring Software.
Nealon, Kelly A; Han, Eun Young; Kry, Stephen F; Nguyen, Callistus; Pham, Mary; Reed, Valerie K; Rosenthal, David; Simiele, Samantha; Court, Laurence E.
Affiliation
  • Nealon KA; Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts. Electronic address: KANealon@mgh.harvard.edu.
  • Han EY; Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Kry SF; Radiation Physics Outreach, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Nguyen C; Department of Radiation Physics - Research, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Pham M; Department of Radiation Physics - Research, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Reed VK; Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Rosenthal D; Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Simiele S; Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas.
  • Court LE; Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas.
Pract Radiat Oncol; 14(1): e75-e85, 2024.
Article in English | MEDLINE | ID: mdl-37797883
ABSTRACT

PURPOSE:

Our purpose was to identify variations in the clinical use of automatically generated contours that could be attributed to software error, off-label use, or automation bias.

METHODS AND MATERIALS:

For 500 head and neck patients contoured by an in-house automated contouring system, the Dice similarity coefficient and added path length were calculated between the contours generated by the automated system and the final contours after editing for clinical use. Statistical process control was applied, and control charts were generated with control limits at 3 standard deviations; contours that exceeded these limits were investigated to determine the cause. Moving mean control plots were then generated to identify dosimetrists who were editing less over time, which could indicate automation bias.
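As a rough illustration of the metrics and flagging logic described above, the following Python sketch computes a Dice similarity coefficient, a simplified added path length, 3-standard-deviation control limits, and a moving mean. The function names, the binary-mask inputs, and the edge-based path-length simplification are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def dice_coefficient(auto_mask: np.ndarray, edited_mask: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks (1.0 = identical)."""
        intersection = np.logical_and(auto_mask, edited_mask).sum()
        total = auto_mask.sum() + edited_mask.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    def added_path_length(auto_edge: np.ndarray, edited_edge: np.ndarray, voxel_mm: float) -> float:
        """Simplified added path length (mm): edited-contour edge voxels that do not
        coincide with the automated-contour edge. Assumes precomputed edge masks."""
        added_voxels = np.logical_and(edited_edge, np.logical_not(auto_edge)).sum()
        return added_voxels * voxel_mm

    def control_limits(metric_history: np.ndarray) -> tuple[float, float]:
        """Shewhart-style control limits at 3 standard deviations around the mean."""
        mean, sd = metric_history.mean(), metric_history.std(ddof=1)
        return mean - 3.0 * sd, mean + 3.0 * sd

    def moving_mean(values: np.ndarray, window: int = 10) -> np.ndarray:
        """Moving mean of an editing metric over successive cases, used to look
        for a downward trend in editing (a possible sign of automation bias)."""
        return np.convolve(values, np.ones(window) / window, mode="valid")

    # Flag a new case whose Dice value falls outside the control limits.
    history = np.array([0.92, 0.95, 0.93, 0.94, 0.91, 0.96, 0.93])
    lower, upper = control_limits(history)
    new_case_dice = 0.78
    if not (lower <= new_case_dice <= upper):
        print(f"Flag for review: DSC {new_case_dice:.2f} outside [{lower:.2f}, {upper:.2f}]")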

RESULTS:

Major contouring edits were flagged for 1.0% of brain, 3.1% of brain stem, 3.5% of left cochlea, 2.9% of right cochlea, 4.8% of esophagus, 4.1% of left eye, 4.0% of right eye, 2.2% of left lens, 4.9% of right lens, 2.5% of mandible, 11% of left optic nerve, 6.1% of right optic nerve, 3.8% of left parotid, 5.9% of right parotid, and 3.0% of spinal cord contours. Identified causes of editing included unexpected patient positioning, deviation from standard clinical practice, and disagreement between dosimetrist preference and the automated contouring style. A statistically significant (P < .05) difference was identified between the contour-editing practices of the dosimetrists, with 1 dosimetrist editing more across all organs at risk. Eighteen percent (27/150) of the moving mean control plots created for 5 dosimetrists indicated that the amount of contour editing was decreasing over time, possibly corresponding to automation bias.

CONCLUSIONS:

The developed system was used to detect statistically significant edits caused by software error, unexpected clinical use, and automation bias. The increased ability to detect systematic errors that occur when editing automatically generated contours will improve the safety of the automatic treatment planning workflow.

Full text: 1 Collections: 01-international Database: MEDLINE Limit: Humans Language: En Publication year: 2024 Document type: Article