Results 1 - 10 of 10
1.
AJR Am J Roentgenol ; 214(3): 613-617, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31846375

ABSTRACT

OBJECTIVE. The objective of this article is to assess the impact of integrating peer review in PACS on the reporting of discrepancies. Our hypothesis is that a PACS-integrated machine-randomized and semiblinded peer review tool leads to an increase in discrepancies reported. MATERIALS AND METHODS. A PACS tool was implemented to prompt radiologists to perform peer review of prior comparison studies in a randomized fashion. The reviewed radiologist's name was omitted from the prior report in PACS. Before this implementation, radiologists entered peer reviews directly on the RADPEER website. Three academic subspecialty sections comprising 24 radiologists adopted the tool (adopters group). Three sections comprising 14 radiologists did not adopt the tool (nonadopters group). Peer review submissions were analyzed for 4 months before and 4 months after the implementation. The mean rate of significant discrepancies (RADPEER score 2b or higher) reported per radiologist was calculated and the discrepancy rates of the periods before and after the implementation were compared. RESULTS. The mean significant discrepancy rate reported per radiologist in the adopters group increased from 0.19% ± 0.46% (SD) before the implementation to 0.93% ± 1.45% after implementation (p = 0.01). No significant discrepancies were reported by the nonadopters group in either period. CONCLUSION. In this single institutional retrospective analysis, integrating peer review in PACS resulted in a fivefold increase in reported significant discrepancies. These results suggest that peer review data are influenced by the design of the tool used including PACS integration, randomization, and blinding.


Subjects
Diagnostic Errors/prevention & control ; Diagnostic Errors/statistics & numerical data ; Peer Review/methods ; Professional Competence/statistics & numerical data ; Radiology Information Systems ; Humans ; Quality Assurance, Health Care ; Retrospective Studies
2.
Pediatr Radiol ; 49(4): 526-530, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30923885

ABSTRACT

Peer learning represents a shift away from traditional peer review. Peer learning focuses on improvement of diagnostic performance rather than on suboptimal performance. The shift in focus away from random selection and toward identification of cases with valuable teaching points can encourage more active radiologist engagement in the learning process. An effective peer learning program relies on a trusting environment that lessens the fear of embarrassment or punitive action. Here we describe the shortcomings of traditional peer review, and the benefits of peer learning. We also provide tips for a successful peer learning program and examples of implementation.


Subjects
Clinical Competence ; Diagnostic Errors/prevention & control ; Patient Safety ; Pediatrics/education ; Peer Review ; Quality Assurance, Health Care ; Radiology/education ; Humans ; Learning ; Quality Improvement
3.
AJR Am J Roentgenol ; 207(6): 1215-1222, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27533881

ABSTRACT

OBJECTIVE: Peer review is an important and necessary part of radiology. There are several options to perform the peer review process. This study examines the reproducibility of peer review by comparing two scoring systems. MATERIALS AND METHODS: American Board of Radiology-certified radiologists from various practice environments and subspecialties were recruited to score deidentified examinations on a web-based PACS with two scoring systems, RADPEER and Cleareview. Quantitative analysis of the scores was performed for interrater agreement. RESULTS: Interobserver variability was high for both the RADPEER and Cleareview scoring systems. The interobserver correlations (kappa values) were 0.17-0.23 for RADPEER and 0.10-0.16 for Cleareview. Interrater correlation was not statistically significantly different when comparing the RADPEER and Cleareview systems (p = 0.07-0.27). The kappa values were low for the Cleareview subscores when we evaluated for missed findings (0.26), satisfaction of search (0.17), and inadequate interpretation of findings (0.12). CONCLUSION: Our study confirms the previous report of low interobserver correlation when using the peer review process. There was low interobserver agreement seen when using both the RADPEER and the Cleareview scoring systems.


Subjects
Image Interpretation, Computer-Assisted/standards ; Observer Variation ; Peer Review/standards ; Radiology Information Systems/classification ; Radiology Information Systems/standards ; Radiology/standards ; Image Interpretation, Computer-Assisted/methods ; Peer Review/methods ; Reproducibility of Results ; Sensitivity and Specificity ; United States
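As a reader's aid, the low interrater agreement (kappa) values reported in the abstract above can be illustrated with a minimal Cohen's kappa calculation. The rater scores below are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of cases where the two raters assigned the same score
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal score frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical RADPEER-style scores from two reviewers (1 = concordant)
a = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1]
b = [1, 2, 2, 1, 1, 1, 1, 3, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.35
```

Values in the 0.10-0.26 range reported by the study indicate only slight-to-fair agreement beyond chance on this scale.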
4.
Eur J Radiol ; 148: 110162, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35065484

ABSTRACT

PURPOSE: We hypothesized that procedural images from CT-guided interventions may contain diagnostic findings not present in the reference images. METHOD: We retrospectively reviewed CT-guided interventions performed at our hospital from 1 April 2017 to 8 May 2020. Two radiologists independently reviewed the procedural CT images for diagnostic findings not present in the reference images (CT, MRI, or PET/CT). An ACR RADPEER score was assigned to each finding. Findings were categorized as new findings, characterization of a prior finding, or change in a prior finding. The results of biopsy and drainage samples were also reviewed. RESULTS: The prevalence of diagnostic findings in procedural CT images was 6.1% (81/1336): 32 new findings, 8 characterizations, and 41 changes. A CT reference image, a chest procedure, and drainage procedures were associated with the presence of findings (p < 0.05). A longer interval between the reference image and the procedure increased the odds of diagnostic findings (p < 0.001). Age, sex, inpatient versus outpatient status, malignant pathology results, and infectious collections were not related to the presence of findings (p > 0.05). The majority of findings were likely clinically significant (73%), and the majority were not documented in the procedure report (63%). CONCLUSION: Clinically relevant diagnostic findings in procedural images of CT-guided interventions are not uncommon and are underreported. The time delay between the reference image and the procedure is the factor most strongly associated with the presence of diagnostic findings.


Subjects
Positron Emission Tomography Computed Tomography ; Tomography, X-Ray Computed ; Humans ; Image-Guided Biopsy/methods ; Magnetic Resonance Imaging ; Radiologists ; Retrospective Studies ; Tomography, X-Ray Computed/methods
5.
J Am Coll Radiol ; 17(6): 779-785, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31991118

ABSTRACT

ACR RADPEER® is the leading method of radiologic peer review in the United States. The program has evolved since its inception in 2002 and was most recently updated in 2016. In 2018, a survey was sent to RADPEER participants to gauge the current state of the program and explore opportunities for continued improvement. A total of 26 questions were included, and more than 300 practices responded. In this report, the ACR RADPEER Committee authors summarize the survey results and discuss opportunities for future iterations of the RADPEER program.


Subjects
Quality Assurance, Health Care ; Radiology ; Clinical Competence ; Humans ; Peer Review ; Radiology/education ; Surveys and Questionnaires ; United States
6.
J Am Coll Radiol ; 14(8): 1080-1086, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28551339

ABSTRACT

The ACR's RADPEER program is currently the leading method for peer review in the United States. To date, more than 18,000 radiologists and more than 1,100 groups participate in the program. The ABR accepted RADPEER as a practice quality improvement in 2009, which can be applied toward maintenance of certification; there are currently over 2,200 practice quality improvement participants. There have been ongoing deliberations regarding the utility of RADPEER, its goals, and its scoring system since the preceding 2009 white paper. This white paper reviews the history and evolution of RADPEER and eRADPEER, the 2016 ACR Peer Review Committee's discussions, the updated recommended scoring system and lexicon for RADPEER, and updates to eRADPEER including the study type, age, and discrepancy classifications. The central goal of RADPEER to aid in nonpunitive peer learning is discussed.


Subjects
Advisory Committees ; Peer Review ; Quality Improvement ; Radiology ; Societies, Medical ; Certification ; Humans ; Quality Assurance, Health Care ; Radiology/education ; United States
7.
J Am Coll Radiol ; 13(9): 1111-7, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27338216

ABSTRACT

PURPOSE: To determine whether resident abdominopelvic CT reports considered prospectively concordant with the final interpretation are also considered concordant by other blinded specialists and abdominal radiologists. METHODS: In this institutional review board-approved retrospective cohort study, 119 randomly selected urgent abdominopelvic CT examinations with a resident preliminary report deemed prospectively "concordant" by the signing faculty were identified. Nine blinded specialists from Emergency Medicine, Internal Medicine, and Abdominal Radiology reviewed the preliminary and final reports and scored the preliminary report with respect to urgent findings as follows: (1) concordant; (2) discordant with minor differences; (3) discordant with major differences that do not alter patient management; or (4) discordant with major differences that do alter patient management. Predicted management resulting from scores of 4 was recorded. Consensus was defined as majority agreement within a specialty. Consensus major discrepancy rates (ie, scores of 3 or 4) were compared with the original major discrepancy rate of 0% (0/119) using the McNemar test. RESULTS: Consensus scores of 4 were assigned in 18% (21/119, P < .001, Emergency Medicine), 5% (6/119, P = .03, Internal Medicine), and 13% (16/119, P < .001, Abdominal Radiology) of examinations. Consensus scores of 3 or 4 were assigned in 31% (37/119, P < .001, Emergency Medicine), 14% (17/119, P < .001, Internal Medicine), and 18% (22/119, P < .001, Abdominal Radiology). Predicted management alterations included hospital status (0%-4%), medical therapy (1%-4%), imaging (1%-10%), subspecialty consultation (3%-13%), nonsurgical procedure (3%), operation (1%-3%), and other (0%-3%). CONCLUSIONS: The historically low major discrepancy rate for urgent findings between resident and faculty radiologists is likely underestimated.


Subjects
Diagnostic Errors/statistics & numerical data ; Internship and Residency/statistics & numerical data ; Pelvis/diagnostic imaging ; Radiography, Abdominal/statistics & numerical data ; Radiology/statistics & numerical data ; Tomography, X-Ray Computed/statistics & numerical data ; Diagnostic Errors/prevention & control ; Humans ; Michigan/epidemiology ; Observer Variation ; Referral and Consultation/statistics & numerical data ; Reproducibility of Results ; Retrospective Studies ; Sensitivity and Specificity
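The McNemar comparison used in the study above (consensus discrepancy rates vs. the original 0/119 rate) can be sketched on the reported discordant-pair counts. This is a minimal illustration assuming the exact binomial form of the test; the paper does not state which variant was used:

```python
import math

def mcnemar_exact_p(b, c):
    """Exact (binomial) McNemar test on the discordant pair counts b and c.

    Under the null hypothesis, each discordant pair is equally likely to
    fall in either cell, so min(b, c) ~ Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    tail = sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)  # two-sided

# Emergency Medicine consensus: 37/119 examinations newly scored 3 or 4,
# vs 0/119 scored discrepant in the original faculty review
p = mcnemar_exact_p(37, 0)
print(p < 0.001)  # → True, consistent with the reported P < .001
```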
8.
J Am Coll Radiol ; 13(12 Pt A): 1519-1524, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28233533

ABSTRACT

The current practice of peer review within radiology is well developed and widely implemented compared with other medical specialties. However, there are many factors that limit current peer review practices from reducing diagnostic errors and improving patient care. The development of "meaningful peer review" requires a transition away from compliance toward quality improvement, whereby the information and insights gained facilitate education and drive systematic improvements that reduce the frequency and impact of diagnostic error. The next generation of peer review requires significant improvements in IT functionality and integration, enabling features such as anonymization, adjudication by multiple specialists, categorization and analysis of errors, tracking, feedback, and easy export into teaching files and other media that require strong partnerships with vendors. In this article, the authors assess various peer review practices, with focused discussion on current limitations and future needs for meaningful peer review in radiology.


Subjects
Diagnostic Errors/prevention & control ; Peer Review, Health Care/standards ; Quality Assurance, Health Care/standards ; Radiology/standards ; Clinical Competence/standards ; Forecasting ; Humans ; Quality Improvement
9.
J Am Coll Radiol ; 13(6): 656-62, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26908200

ABSTRACT

PURPOSE: The objective of this study was to evaluate the feasibility of the consensus-oriented group review (COGR) method of radiologist peer review within a large subspecialty imaging department. METHODS: This study was institutional review board approved and HIPAA compliant. Radiologist interpretations of CT, MRI, and ultrasound examinations at a large academic radiology department were subject to peer review using the COGR method from October 2011 through September 2013. Discordance rates and sources of discordance were evaluated on the basis of modality and division, with group differences compared using a χ² test. Potential associations between peer review outcomes and the time after the initiation of peer review or the number of radiologists participating in peer review were tested by linear regression analysis and the t test, respectively. RESULTS: A total of 11,222 studies reported by 83 radiologists were peer reviewed using COGR during the two-year study period. The average radiologist participated in 112 peer review conferences and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed. The rate of discordance was 2.7% (95% confidence interval [CI], 2.4%-3.0%), with significant differences in discordance rates on the basis of division and modality. Discordance rates were highest for MR (3.4%; 95% CI, 2.8%-4.1%), followed by ultrasound (2.7%; 95% CI, 2.0%-3.4%) and CT (2.4%; 95% CI, 2.0%-2.8%). Missed findings were the most common overall cause for discordance (43.8%; 95% CI, 38.2%-49.4%), followed by interpretive errors (23.5%; 95% CI, 18.8%-28.3%), dictation errors (19.0%; 95% CI, 14.6%-23.4%), and recommendation (10.8%; 95% CI, 7.3%-14.3%). Discordant cases, compared with concordant cases, were associated with a significantly greater number of radiologists participating in the peer review process (5.9 vs 4.7 participating radiologists, P < .001) and were significantly more likely to lead to an addendum (62.9% vs 2.7%, P < .0001). CONCLUSIONS: COGR permits departments to collect highly contextualized peer review data to better elucidate sources of error in diagnostic imaging reports, while reviewing a sufficient case volume to comply with external standards for ongoing performance review.


Subjects
Peer Review, Health Care/methods ; Quality Assurance, Health Care/organization & administration ; Radiology Department, Hospital/standards ; Consensus ; Feasibility Studies ; Humans
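The confidence intervals quoted in the COGR abstract above can be approximately reproduced from the reported rate and sample size. The authors do not state their interval method; the sketch below assumes a simple normal-approximation (Wald) interval:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion p observed in n trials."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Reported overall discordance: 2.7% of 11,222 peer-reviewed studies
lo, hi = wald_ci(0.027, 11222)
print(f"{lo:.1%} - {hi:.1%}")  # → 2.4% - 3.0%, matching the reported CI
```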
10.
J Am Coll Radiol ; 11(9): 899-904, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24842585

ABSTRACT

RADPEER is a product developed by the ACR that aims to assist radiologists with quality assessment and improvement through peer review. The program opened in 2002, was initially offered to physician groups in 2003, developed an electronic version in 2005 (eRADPEER), revised the scoring system in 2009, and first surveyed the RADPEER membership in 2010. In 2012, a survey was sent to 16,000 ACR member radiologists, both users and nonusers of RADPEER, with the goal of understanding how to make RADPEER more relevant to its members. A total of 31 questions were used, some of which were repeated from the 2010 survey. The ACR's RADPEER committee has published 3 papers on the program since its inception. In this report, the authors summarize the survey results and suggest future opportunities for making RADPEER more useful to its membership.


Subjects
Peer Review, Health Care ; Quality Assurance, Health Care/organization & administration ; Radiology/standards ; Clinical Competence ; Diagnostic Errors/statistics & numerical data ; Humans ; Societies, Medical ; Surveys and Questionnaires ; United States