Results 1 - 5 of 5
1.
Pract Radiat Oncol ; 14(1): e75-e85, 2024.
Article in English | MEDLINE | ID: mdl-37797883

ABSTRACT

PURPOSE: Our purpose was to identify variations in the clinical use of automatically generated contours that could be attributed to software error, off-label use, or automation bias.

METHODS AND MATERIALS: For 500 head and neck patients contoured by an in-house automated contouring system, the Dice similarity coefficient and added path length were calculated between the contours generated by the automated system and the final contours after editing for clinical use. Statistical process control was used, and control charts were generated with control limits at 3 standard deviations. Contours that exceeded the thresholds were investigated to determine the cause. Moving mean control plots were then generated to identify dosimetrists who were editing less over time, which could be indicative of automation bias.

RESULTS: Major contouring edits were flagged for: 1.0% brain, 3.1% brain stem, 3.5% left cochlea, 2.9% right cochlea, 4.8% esophagus, 4.1% left eye, 4.0% right eye, 2.2% left lens, 4.9% right lens, 2.5% mandible, 11% left optic nerve, 6.1% right optic nerve, 3.8% left parotid, 5.9% right parotid, and 3.0% of spinal cord contours. Identified causes of editing included unexpected patient positioning, deviation from standard clinical practice, and disagreement between dosimetrist preference and automated contouring style. A statistically significant (P < .05) difference was identified between the contour editing practices of dosimetrists, with 1 dosimetrist editing more across all organs at risk. Eighteen percent (27/150) of the moving mean control plots created for 5 dosimetrists indicated that the amount of contour editing was decreasing over time, possibly corresponding to automation bias.

CONCLUSIONS: The developed system was used to detect statistically significant edits caused by software error, unexpected clinical use, and automation bias. The increased ability to detect systematic errors that occur when editing automatically generated contours will improve the safety of the automated treatment planning workflow.
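The monitoring approach described above rests on two building blocks: a contour-agreement metric (Dice similarity coefficient) and control-chart limits at 3 standard deviations. A minimal sketch in Python; the function names and the binary-mask representation are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary contour masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

def flag_out_of_control(values, n_sigma=3.0):
    """Indices of values outside mean +/- n_sigma * std (control-chart limits)."""
    v = np.asarray(values, dtype=float)
    lower = v.mean() - n_sigma * v.std()
    upper = v.mean() + n_sigma * v.std()
    return [i for i, x in enumerate(v) if x < lower or x > upper]
```

In this scheme, a per-organ Dice value computed between the automated and clinically edited contour would be appended to the chart for each new patient, and any flagged index would trigger the kind of manual investigation the abstract describes.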


Subjects
Neck; Software; Humans; Esophagus; Parotid Gland; Radiotherapy Planning, Computer-Assisted; Organs at Risk
2.
Front Oncol ; 13: 1204323, 2023.
Article in English | MEDLINE | ID: mdl-37771435

ABSTRACT

Purpose: Variability in contouring structures of interest for radiotherapy continues to be challenging. Although training can reduce such variability, having radiation oncologists provide feedback can be impractical. We developed a contour training tool that provides real-time feedback to trainees, thereby reducing variability in contouring.

Methods: We developed a novel metric, termed localized signed square distance (LSSD), to provide feedback to the trainee on how their contour compares with a reference contour, which is generated in real time by combining the trainee contour with multiple expert radiation oncologist contours. Nine trainees performed contour training using six randomly assigned training cases that included one test case of the heart and left ventricle (LV). The test case was repeated 30 days later to assess retention. The distribution of LSSD maps of the initial contours for the training cases was combined and compared with the distribution of LSSD maps of the final contours for all training cases. The difference in standard deviations from the initial to final LSSD maps, ΔLSSD, was computed both on a per-case basis and for the entire group.

Results: For every training case, statistically significant ΔLSSD were observed for both the heart and LV. When all initial and final LSSD maps were aggregated for the training cases, before training the mean LSSD ([range], standard deviation) was -0.8 mm ([-37.9, 34.9], 4.2) and 0.3 mm ([-25.1, 32.7], 4.8) for the heart and LV, respectively. These were reduced to -0.1 mm ([-16.2, 7.3], 0.8) and 0.1 mm ([-6.6, 8.3], 0.7) for the final LSSD maps during the contour training sessions. For the retention case, the aggregated initial and final LSSD maps were -1.5 mm ([-22.9, 19.9], 3.4) and -0.2 mm ([-4.5, 1.5], 0.7) for the heart, and 1.8 mm ([-16.7, 34.5], 5.1) and 0.2 mm ([-3.9, 1.6], 0.7) for the LV.

Conclusions: A tool that uses real-time contouring feedback was developed and successfully used for contour training of nine trainees. In all cases, the tool was able to guide the trainee and ultimately reduce the variability of the trainee's contouring.
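The abstract does not publish the LSSD formula, so the following is only a generic illustration of the kind of signed surface-distance summary it reports (mean, [range], standard deviation, in mm; positive outside the reference, negative inside), using a circular reference contour as a stand-in. All names are hypothetical and this is not the authors' metric:

```python
import numpy as np

def signed_distance_to_circle(points, center=(0.0, 0.0), radius=1.0):
    """Signed distance from 2D contour points to a circular reference contour:
    positive outside the reference, negative inside."""
    points = np.asarray(points, dtype=float)
    radial = np.linalg.norm(points - np.asarray(center, dtype=float), axis=1)
    return radial - radius

def summarize_distances(distances):
    """Summary in the abstract's style: mean, (min, max), standard deviation."""
    d = np.asarray(distances, dtype=float)
    return d.mean(), (d.min(), d.max()), d.std()
```

A narrowing of this distribution after training, i.e., a smaller standard deviation and a tighter range, is what the reported ΔLSSD captures.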

3.
J Appl Clin Med Phys ; 24(8): e13995, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37073484

ABSTRACT

PURPOSE: Hazard scenarios were created to assess and reduce the risk of planning errors in automated planning processes. This was accomplished through iterative testing and improvement of the examined user interfaces.

METHODS: Automated planning requires three user inputs: a computed tomography (CT) scan, a prescription document (known as the service request), and contours. We investigated the ability of users to catch errors that were intentionally introduced into each of these three stages, according to an FMEA analysis. Five radiation therapists each reviewed 15 patient CTs containing three errors: inappropriate field of view, incorrect superior border, and incorrect identification of isocenter. Four radiation oncology residents reviewed 10 service requests containing two errors: incorrect prescription and incorrect treatment site. Four physicists reviewed 10 contour sets containing two errors: missing contour slices and an inaccurate target contour. Reviewers underwent video training before reviewing and providing feedback on the various mock plans.

RESULTS: Initially, 75% of the hazard scenarios were detected during service request approval. The visual display of prescription information was then updated to improve the detectability of errors based on user feedback. The change was validated with five new radiation oncology residents, who detected 100% of the errors present. Eighty-three percent of the hazard scenarios were detected in the CT approval portion of the workflow. For the contour approval portion of the workflow, none of the errors were detected by physicists, indicating that this step will not be used for quality assurance of contours. To mitigate the risk from errors that could occur at this step, radiation oncologists must perform a thorough review of contour quality before final plan approval.

CONCLUSIONS: Hazard testing was used to pinpoint the weaknesses of an automated planning tool, and subsequent improvements were made as a result. This study identified that not all workflow steps should be used for quality assurance and demonstrated the importance of performing hazard testing to identify points of risk in automated planning tools.


Subjects
Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Radiotherapy Planning, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
4.
J Appl Clin Med Phys ; 23(9): e13694, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35775105

ABSTRACT

PURPOSE: To develop a checklist that improves the rate of error detection during the plan review of automatically generated radiotherapy plans.

METHODS: A custom checklist was developed using guidance from American Association of Physicists in Medicine Task Groups 275 and 315 and the results of a failure modes and effects analysis of the Radiation Planning Assistant (RPA), an automated contouring and treatment planning tool. The preliminary checklist contained 90 review items for each automatically generated plan. In the first study, eight physicists familiar with the RPA were recruited from our institution. Each physicist reviewed 10 artificial intelligence-generated treatment plans from the RPA for safety and plan quality, five of which contained errors. Physicists performed plan checks, recorded errors, and rated each plan's clinical acceptability. Following a 2-week break, physicists reviewed 10 additional plans with a similar distribution of errors using our customized checklist. Participants then provided feedback on the usability of the checklist, and it was modified accordingly. In a second study, this process was repeated with 14 senior medical physics residents who were randomly assigned to use the checklist or no checklist for their reviews. Each reviewed 10 plans, five of which contained errors, and completed the corresponding survey.

RESULTS: In the first study, the checklist significantly improved the rate of error detection, from 3.4 ± 1.1 errors per participant without the checklist to 4.4 ± 0.74 with it (p = 0.02). Error detection increased by 20% when the custom checklist was utilized. In the second study, 2.9 ± 0.84 and 3.5 ± 0.84 errors per participant were detected without and with the revised checklist, respectively (p = 0.08). Despite the lack of statistical significance for this cohort, error detection increased by 18% when the checklist was utilized.

CONCLUSION: Our results indicate that the use of a customized checklist when reviewing automated treatment plans will result in improved patient safety.
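The reported gains follow directly from the means: with five seeded errors per review set, 3.4 and 4.4 errors found correspond to 68% and 88% detection, a gain of roughly 20 percentage points. A small arithmetic sketch (the helper name is hypothetical):

```python
def mean_detection_rate(mean_errors_found, errors_seeded=5):
    """Mean fraction of seeded errors detected across reviewers."""
    return mean_errors_found / errors_seeded

# First study: five seeded errors per 10-plan review set.
rate_without = mean_detection_rate(3.4)         # ~0.68
rate_with = mean_detection_rate(4.4)            # ~0.88
gain_points = (rate_with - rate_without) * 100  # ~20 percentage points
```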


Subjects
Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Artificial Intelligence; Checklist; Humans; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
5.
Pract Radiat Oncol ; 12(4): e344-e353, 2022.
Article in English | MEDLINE | ID: mdl-35305941

ABSTRACT

PURPOSE: In this study, we applied the failure mode and effects analysis (FMEA) approach to an automated radiation therapy contouring and treatment planning tool to assess, and subsequently limit, the risk of deploying automated tools.

METHODS AND MATERIALS: Using an FMEA, we quantified the risks associated with the Radiation Planning Assistant (RPA), an automated contouring and treatment planning tool currently under development. A multidisciplinary team identified and scored each failure mode, using a combination of RPA plan data and experience for guidance. A 1-to-10 scale for the severity, occurrence, and detectability of potential errors was used, following American Association of Physicists in Medicine Task Group 100 recommendations. High-risk failure modes were further explored to determine how the workflow could be improved to reduce the associated risk.

RESULTS: Of 290 possible failure modes, we identified 126 errors that were unique to the RPA workflow, with a mean risk priority number (RPN) of 56.3 and a maximum RPN of 486. The top 10 failure modes were caused by automation bias, operator error, and software error. Twenty-one failure modes were above the action threshold of RPN = 125, leading to corrective actions. The workflow was modified to simplify the user interface, and better training resources were developed that highlight the importance of thoroughly reviewing the output of automated systems. After these changes, we rescored the high-risk errors, resulting in a final mean and maximum RPN of 33.7 and 288, respectively.

CONCLUSIONS: We identified 126 errors specific to the automated workflow, most of which were caused by automation bias or operator error, which emphasized the need to simplify the user interface and ensure adequate user training. As a result of changes made to the software and the enhancement of training resources, the RPNs subsequently decreased, showing that FMEA is an effective way to assess and reduce the risk associated with deploying automated planning tools.
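The RPN scoring described here follows the TG-100 scheme: each failure mode receives severity, occurrence, and detectability scores from 1 to 10, and their product is compared against an action threshold (125 in this study). A minimal sketch (function names are hypothetical):

```python
def risk_priority_number(severity, occurrence, detectability):
    """RPN = severity x occurrence x detectability, each scored 1-10 (AAPM TG-100)."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detectability

ACTION_THRESHOLD = 125  # threshold used in this study

def needs_corrective_action(rpn):
    """Failure modes above the threshold trigger workflow changes."""
    return rpn > ACTION_THRESHOLD
```

Under this scheme the study's maximum RPN of 486 sits well above the threshold, while the mean of 56.3 does not, which is why only 21 of the 126 failure modes required corrective action.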


Subjects
Healthcare Failure Mode and Effect Analysis; Automation; Humans; Software