Results 1 - 10 of 10
1.
AJR Am J Roentgenol; 219(3): 509-519, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35441532

ABSTRACT

BACKGROUND. Improved communication between radiologists and patients is a key component of patient-centered radiology. OBJECTIVE. The purpose of this study was to create patient-centered video radiology reports using simple-to-understand language and annotated images and to assess the effect of these reports on patients' experience and understanding of their imaging results. METHODS. During a 4-month study period, faculty radiologists created video radiology reports using a tool integrated within the diagnostic viewer that allows both image and voice capture. To aid patients' understanding of cross-sectional images, cinematically rendered images were automatically created and made immediately available to radiologists at the workstation, allowing their incorporation into video radiology reports. Video radiology reports were made available to patients via the institutional health portal along with the written radiology report and the examination images. Patient views of the video report were recorded, and descriptive analyses were performed on radiologist and examination characteristics as well as patient demographics. A survey was sent to patients to obtain feedback on their experience. RESULTS. During the study period, 105 of 227 faculty radiologists created 3763 video radiology reports (mean number of reports per radiologist, 36 ± 27 [SD] reports). Mean time to create a video report was 238 ± 141 seconds. Patients viewed 864 unique video reports. The mean overall video radiology report experience rating based on 101 patient surveys was 4.7 of 5. The mean rating for how well the video report helped patients understand their findings was also 4.7 of 5. Of the patients who responded to the survey, 91% preferred having both written and video reports together over having written reports alone. CONCLUSION. Patient-centered video radiology reports are a useful tool to help improve patient understanding of imaging results. The mechanism of creating the video reports and delivering them to patients can be integrated into existing informatics infrastructure. CLINICAL IMPACT. Video radiology reports can play an important role in patient-centered radiology, increasing patient understanding of imaging results, and they may improve the visibility of radiologists to patients and highlight the radiologist's important role in patient care.


Subjects
Radiology , Communication , Humans , Patient-Centered Care , Radiography , Radiologists
2.
Abdom Radiol (NY); 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39292278

ABSTRACT

PURPOSE: To retrospectively compare image quality, radiologist diagnostic confidence, and time for images to reach PACS for contrast-enhanced abdominopelvic CT examinations created on the scanner console by technologists versus those generated automatically by thin-client artificial intelligence (AI) mechanisms. METHODS: A retrospective PACS search identified adults who underwent an emergency department contrast-enhanced abdominopelvic CT in July 2022 (Console cohort) and July 2023 (Server cohort). Coronal and sagittal multiplanar reformatted (MPR) images were created by AI software in the Server cohort. Time to completion of MPR images was compared using two-sample t-tests for all patients in both cohorts. Two radiologists qualitatively assessed image quality and diagnostic confidence on 5-point Likert scales for 50 consecutive examinations from each cohort. Additionally, they assessed for acute abdominopelvic findings. Continuous variables and qualitative scores were compared with the Mann-Whitney U test. A p < .05 indicated statistical significance. RESULTS: Mean[SD] time to examination completion in PACS was 8.7[11.1] minutes in the Console cohort (n = 728) and 4.6[6.6] minutes in the Server cohort (n = 892), p < .001. Fifty examinations each from the Console cohort (28 women, 22 men; 51[19] years) and the Server cohort (27 women, 23 men; 57[19] years) were included for radiologist review. Age, sex, CTDIvol, and DLP were not statistically different between the cohorts (all p > .05). There was no significant difference in image quality or diagnostic confidence for either reader when comparing the Console and Server cohorts (all p > .05). CONCLUSION: Examinations using AI-generated MPRs on a thin-client architecture were completed approximately 50% faster than those using reconstructions generated at the console, with no statistically significant difference in diagnostic confidence or image quality.
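
The statistical comparisons described in this abstract are straightforward to reproduce. The following Python sketch, which assumes the per-examination completion times and Likert scores have already been exported (all values below are simulated placeholders, not study data), shows the two-sample t-test and Mann-Whitney U comparisons with SciPy.

```python
"""Sketch of the cohort comparisons described above (SciPy).
All numbers here are simulated placeholders, not study data."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for time-to-completion in PACS (minutes) per cohort.
console_minutes = rng.gamma(shape=2.0, scale=4.4, size=728)   # Console cohort
server_minutes = rng.gamma(shape=2.0, scale=2.3, size=892)    # Server cohort

# Two-sample t-test on time to MPR completion, as described for all patients.
t_stat, t_p = stats.ttest_ind(console_minutes, server_minutes, equal_var=False)
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3g}")

# Mann-Whitney U test, as used for the 5-point Likert scores; shown here on
# hypothetical image-quality ratings for 50 examinations per cohort.
console_quality = rng.integers(3, 6, size=50)
server_quality = rng.integers(3, 6, size=50)
u_stat, u_p = stats.mannwhitneyu(console_quality, server_quality)
print(f"Mann-Whitney U: U = {u_stat:.0f}, p = {u_p:.3g}")
```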

3.
Sci Data; 11(1): 254, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38424079

ABSTRACT

Resection and whole brain radiotherapy (WBRT) are standard treatments for brain metastases (BM) but are associated with cognitive side effects. Stereotactic radiosurgery (SRS) uses a targeted approach with fewer side effects than WBRT. SRS requires precise identification and delineation of BM. While artificial intelligence (AI) algorithms have been developed for this task, their clinical adoption has been limited by poor model performance in the clinical setting. These limitations often stem from the quality of the datasets used to train the AI network. The purpose of this study was to create a large, heterogeneous, annotated BM dataset for training and validation of AI models. We present a BM dataset of 200 patients with pretreatment T1, T1 post-contrast, T2, and FLAIR MR images. The dataset includes contrast-enhancing and necrotic 3D segmentations on T1 post-contrast and peritumoral edema 3D segmentations on FLAIR. Our dataset contains 975 contrast-enhancing lesions, many of which are subcentimeter, along with clinical and imaging information. We used a streamlined approach to database building through a PACS-integrated segmentation workflow.
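
As a worked illustration of how such a segmentation dataset can be queried, the sketch below counts discrete contrast-enhancing lesions in one case and flags subcentimeter ones. The file name, mask encoding, and use of connected-component labeling are assumptions for illustration, not the authors' pipeline.

```python
"""Sketch: counting discrete contrast-enhancing lesions in one case, assuming
the T1 post-contrast segmentation is a binary NIfTI mask (path hypothetical)."""
import nibabel as nib
import numpy as np
from scipy import ndimage

seg = nib.load("case_001_enhancing_seg.nii.gz")      # hypothetical file name
mask = np.asarray(seg.dataobj) > 0

# Label 3D connected components; each component is treated as one lesion.
labels, n_lesions = ndimage.label(mask)

# Lesion volumes in mm^3, using voxel dimensions from the NIfTI header.
voxel_mm3 = float(np.prod(seg.header.get_zooms()[:3]))
voxel_counts = ndimage.sum(mask, labels, index=range(1, n_lesions + 1))
volumes_mm3 = np.asarray(voxel_counts) * voxel_mm3

# Rough subcentimeter count: equivalent-sphere diameter below 10 mm.
diameters_mm = (6.0 * volumes_mm3 / np.pi) ** (1.0 / 3.0)
print(n_lesions, "lesions;", int((diameters_mm < 10).sum()), "subcentimeter")
```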


Subjects
Brain Neoplasms , Humans , Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/secondary , Cranial Irradiation/adverse effects , Cranial Irradiation/methods , Magnetic Resonance Imaging , Radiosurgery
4.
Clin Imaging; 101: 200-205, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37421715

ABSTRACT

OBJECTIVE: To test the performance of a novel machine learning-based breast density tool. The tool uses a convolutional neural network to predict the BI-RADS-based density assessment of a study. The clinical density assessments of 33,000 mammographic examinations (164,000 images) from one academic medical center (Site A) were used for training. MATERIALS AND METHODS: This was an IRB-approved, HIPAA-compliant study performed at two academic medical centers. The validation data set was composed of 500 studies from one site (Site A) and 700 from another (Site B). At Site A, each study was assessed by three breast radiologists, and the majority (consensus) assessment was used as truth. At Site B, if the tool agreed with the clinical reading, it was considered to have correctly predicted the clinical reading. In cases where the tool and the clinical reading disagreed, the study was evaluated by three radiologists and the consensus reading was used as the clinical reading. RESULTS: For classification into the four categories of the Breast Imaging Reporting and Data System (BI-RADS®), the AI classifier had an accuracy of 84.6% at Site A and 89.7% at Site B. For binary classification (dense vs. non-dense), the AI classifier had an accuracy of 94.4% at Site A and 97.4% at Site B. In no case did the classifier disagree with the consensus reading by more than one category. CONCLUSIONS: The automated breast density tool showed high agreement with radiologists' assessments of breast density.
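
The two accuracy figures reported above (four-category and dense vs. non-dense) follow directly from the per-study labels. The sketch below, with a hypothetical label encoding and toy data, shows how the binary metric collapses BI-RADS categories a/b into non-dense and c/d into dense.

```python
"""Sketch of the two reported metrics, assuming per-study BI-RADS density
labels encoded as the letters 'a'-'d' (encoding and data are hypothetical)."""
import numpy as np

def density_accuracies(predicted, consensus):
    predicted = np.asarray(predicted)
    consensus = np.asarray(consensus)
    # Four-category agreement with the consensus reading.
    four_way = float(np.mean(predicted == consensus))
    # Binary agreement: categories c/d count as dense, a/b as non-dense.
    binary = float(np.mean(np.isin(predicted, ["c", "d"]) ==
                           np.isin(consensus, ["c", "d"])))
    return four_way, binary

# Toy usage with made-up labels.
pred = ["a", "b", "b", "c", "d", "c"]
truth = ["a", "b", "a", "c", "d", "b"]
print(density_accuracies(pred, truth))   # (0.67, 0.83) for this toy set
```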


Subjects
Breast Density , Breast Neoplasms , Humans , Female , Mammography/methods , Breast/diagnostic imaging , Machine Learning , Breast Neoplasms/diagnostic imaging
5.
ArXiv; 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37744461

ABSTRACT

Resection and whole brain radiotherapy (WBRT) are the standards of care for the treatment of patients with brain metastases (BM) but are often associated with cognitive side effects. Stereotactic radiosurgery (SRS) involves a more targeted treatment approach and has been shown to avoid the side effects associated with WBRT. However, SRS requires precise identification and delineation of BM. While many AI algorithms have been developed for this purpose, their clinical adoption has been limited by poor model performance in the clinical setting. A major reason for non-generalizable algorithms is the limitation of the datasets used for training the AI network. The purpose of this study was to create a large, heterogeneous, annotated BM dataset for training and validation of AI models to improve generalizability. We present a BM dataset of 200 patients with pretreatment T1, T1 post-contrast, T2, and FLAIR MR images. The dataset includes contrast-enhancing and necrotic 3D segmentations on T1 post-contrast and whole-tumor (including peritumoral edema) 3D segmentations on FLAIR. Our dataset contains 975 contrast-enhancing lesions, many of which are subcentimeter, along with clinical and imaging feature information. We used a streamlined approach to database building, leveraging a PACS-integrated segmentation workflow.

6.
ArXiv; 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37396600

ABSTRACT

Clinical monitoring of metastatic disease to the brain can be a laborious and time-consuming process, especially in cases involving multiple metastases when the assessment is performed manually. The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) guideline, which uses the unidimensional longest diameter, is commonly applied in clinical and research settings to evaluate response to therapy in patients with brain metastases. However, accurate volumetric assessment of the lesion and surrounding peri-lesional edema holds significant importance in clinical decision-making and can greatly enhance outcome prediction. The unique challenge in segmenting brain metastases lies in their common occurrence as small lesions. Detection and segmentation of lesions smaller than 10 mm have not demonstrated high accuracy in prior publications. The brain metastases challenge sets itself apart from previously conducted MICCAI challenges on glioma segmentation because of the significant variability in lesion size. Unlike gliomas, which tend to be larger on presentation scans, brain metastases exhibit a wide range of sizes and tend to include small lesions. We hope that the BraTS-METS dataset and challenge will advance the field of automated brain metastasis detection and segmentation.
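
To make the contrast between the unidimensional RANO-BM-style measurement and a volumetric assessment concrete, the sketch below computes both from a binary lesion mask. The axis ordering, spacing values, and toy lesion are assumptions for illustration only.

```python
"""Sketch contrasting a unidimensional (longest in-plane diameter) measurement
with a volumetric one, from a binary lesion mask (axis order assumed z, y, x)."""
import numpy as np
from scipy.spatial.distance import pdist

def longest_axial_diameter(mask, spacing_zyx):
    """Longest in-plane diameter (mm) over all axial slices of a 3D mask."""
    sz, sy, sx = spacing_zyx
    best = 0.0
    for z in range(mask.shape[0]):
        pts = np.argwhere(mask[z])            # (row, col) voxel coordinates
        if len(pts) < 2:
            continue
        pts_mm = pts * np.array([sy, sx])     # convert to millimetres
        # Pairwise distances; fine for small lesions, O(n^2) for large ones.
        best = max(best, float(pdist(pts_mm).max()))
    return best

def lesion_volume_mm3(mask, spacing_zyx):
    return float(mask.sum()) * float(np.prod(spacing_zyx))

# Toy lesion: a 10 x 6 x 6 voxel block at 1 mm isotropic spacing.
mask = np.zeros((20, 64, 64), dtype=bool)
mask[5:15, 20:26, 20:26] = True
print(longest_axial_diameter(mask, (1.0, 1.0, 1.0)))  # about 7.07 mm (slice diagonal)
print(lesion_volume_mm3(mask, (1.0, 1.0, 1.0)))       # 360.0 mm^3
```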

7.
Front Neurosci; 16: 860208, 2022.
Article in English | MEDLINE | ID: mdl-36312024

ABSTRACT

Purpose: Personalized interpretation of medical images is critical for optimum patient care, but the tools currently available to physicians for real-time quantitative analysis of a patient's medical images are significantly limited. In this work, we describe a novel platform within PACS for volumetric analysis of images and thus for developing, in parallel with the radiologist's reading, the large expert-annotated datasets that are critically needed for development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction. Materials and methods: An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional BraTS 2021 glioma dataset. The algorithm was validated using an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient [DSC]) with radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor was amenable to manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations. Results: UNETR brain tumor segmentation took on average 4 s, and the median DSC was 86%, which is similar to the published literature but lower than results from the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within PACS took on average 5.8 ± 0.01 s. The extracted radiomic features did not vary with the time of extraction or with whether they were extracted within or outside of PACS. The workflow made segmentation and feature extraction available before the radiologist opens the study; opening the study in PACS then allows the radiologist to verify the segmentation and thus annotate the study. Conclusion: Integration of image-processing algorithms for tumor auto-segmentation and feature extraction into PACS allows curation of large datasets of annotated medical images and can accelerate the translation of research into personalized medicine applications in the clinic. The ability to revise the AI segmentations with familiar clinical tools, and the native embedding of the segmentation and radiomic feature extraction tools on the diagnostic workstation, accelerate the generation of ground-truth data.
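
The Dice similarity coefficient used here to compare the UNETR output with the manual segmentation is simple to compute from two binary masks; a minimal sketch (with a toy example, not study data) follows.

```python
"""Minimal sketch of the Dice similarity coefficient (DSC) between an automatic
and a manual binary segmentation mask; the toy volumes below are hypothetical."""
import numpy as np

def dice(a, b, eps=1e-8):
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

# Toy example: two equal-sized blocks offset by a few voxels.
auto = np.zeros((64, 64, 64), dtype=bool)
auto[20:40, 20:40, 20:40] = True
manual = np.zeros_like(auto)
manual[24:44, 20:40, 20:40] = True
print(f"DSC = {dice(auto, manual):.2f}")   # 0.80 for this toy overlap
```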

8.
IEEE Trans Med Imaging; 25(10): 1319-28, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17024835

ABSTRACT

The study of cerebral microvascular networks requires high-resolution images. However, to obtain statistically relevant results, a large area of the brain (several square millimeters) must be analyzed. This leads us to consider huge images, too large to be loaded and processed at once in the memory of a standard computer. To consider a large area, a compact representation of the vessels is required. The medial axis is the preferred tool for this application. To extract it, a dedicated skeletonization algorithm is proposed. Numerous approaches already exist which focus on computational efficiency. However, they all implicitly assume that the image can be completely processed in the computer memory, which is not realistic with the large images considered here. We present in this paper a skeletonization algorithm that processes data locally (in subimages) while preserving global properties (i.e., homotopy). We then show some results obtained on a mosaic of three-dimensional images acquired by confocal microscopy.
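
A very rough sketch of the memory constraint motivating this work is shown below: the volume stays on disk and only one sub-image is loaded at a time. Note that, unlike the algorithm in the paper, naive per-block skeletonization does not preserve homotopy across block boundaries; the file name, volume shape, block size, and use of scikit-image are assumptions.

```python
"""Rough sketch of block-wise processing of a volume too large for memory.
The file, shape, and block size are hypothetical, and per-block skeletonization
(unlike the paper's algorithm) does not preserve homotopy across block borders."""
import numpy as np
from skimage.morphology import skeletonize

shape = (2048, 2048, 512)           # full mosaic: too large to copy freely in RAM
volume = np.memmap("vessels_binary.raw", dtype=np.uint8, mode="r", shape=shape)

block = 256
skeleton_voxels = 0
for z0 in range(0, shape[0], block):
    sub = np.asarray(volume[z0:z0 + block]) > 0      # load one sub-image only
    skeleton_voxels += int(skeletonize(sub).sum())   # 3D-capable in recent scikit-image
print(skeleton_voxels)
```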


Subjects
Brain/blood supply , Brain/cytology , Computer-Assisted Image Interpretation/methods , Three-Dimensional Imaging/methods , Microcirculation/cytology , Confocal Microscopy/methods , Automated Pattern Recognition/methods , Algorithms , Artificial Intelligence , Cerebrovascular Circulation , Computing Methodologies , Humans , Image Enhancement/methods , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
9.
J Comp Neurol; 492(1): 1-19, 2005 Nov 07.
Article in English | MEDLINE | ID: mdl-16175557

ABSTRACT

The anatomical substrates of neural nets are usually composed from reconstructions of neurons that were stained in different preparations. Realistic models of the structural relationships between neurons require a common framework. Here we present 3-D reconstructions of single projection neurons (PN) connecting the antennal lobe (AL) with the mushroom body (MB) and lateral horn, groups of intrinsic mushroom body neurons (type 5 Kenyon cells), and a single mushroom body extrinsic neuron (PE1), aiming to compose components of the olfactory pathway in the honeybee. To do so, we constructed a digital standard atlas of the bee brain. The standard atlas was created as an average-shape atlas of 22 neuropils, calculated from 20 individual immunostained whole-mount bee brains. After correction for global size and positioning differences by repeatedly applying an intensity-based nonrigid registration algorithm, a sequence of average label images was created. The results were qualitatively evaluated by generating average gray-value images corresponding to the average label images and judging the level of detail within the labeled regions. We found that the first affine registration step in the sequence results in a blurred image because of considerable local shape differences. However, already the first nonrigid iteration in the sequence corrected for most of the shape differences among individuals, resulting in images rich in internal detail. A second iteration improved on that somewhat and was selected as the standard. Registering neurons from different preparations into the standard atlas reveals 1) that the m-ACT neuron occupies the entire glomerulus (cortex and core) and overlaps with a local interneuron in the cortical layer; 2) that, in the MB calyces and the lateral horn of the protocerebral lobe, the axon terminals of two identified m-ACT neurons arborize in separate but close areas of the neuropil; and 3) that MB-intrinsic clawed Kenyon cells (type 5), with somata outside the calycal cups, project to the peduncle and lobe output system of the MB and contact (proximate) the dendritic tree of the PE1 neuron at the base of the vertical lobe. Thus the standard atlas and the procedures applied for registration serve the function of creating realistic neuroanatomical models of parts of a neural net. The Honeybee Standard Brain is accessible at www.neurobiologie.fu-berlin.de/beebrain.
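
For readers who want a feel for the iterative average-atlas procedure described above (affine initialization followed by nonrigid refinement and averaging), the sketch below outlines one such iteration in SimpleITK. The file names, similarity metric, optimizer settings, and B-spline grid are assumptions for illustration, not the authors' settings.

```python
"""Sketch of one iteration of average-atlas building in the spirit described
above (affine pass, averaging, then a nonrigid pass), using SimpleITK.
File names, metric, and optimizer settings are assumptions."""
import SimpleITK as sitk

def register(fixed, moving, nonrigid=False):
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    if nonrigid:
        # Coarse B-spline grid as a stand-in for the nonrigid step.
        tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    else:
        tx = sitk.CenteredTransformInitializer(fixed, moving,
                                               sitk.AffineTransform(3))
    reg.SetInitialTransform(tx, inPlace=False)
    transform = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

# Hypothetical whole-mount brain scans; the first scan is the initial reference.
brains = [sitk.ReadImage(f"bee_brain_{i:02d}.nii.gz", sitk.sitkFloat32)
          for i in range(20)]
reference = brains[0]

aligned = [register(reference, b) for b in brains]          # affine pass
average = sum(aligned[1:], aligned[0]) / float(len(aligned)) # average gray-value image
aligned = [register(average, b, nonrigid=True) for b in brains]  # first nonrigid pass
```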


Subjects
Artistic Anatomy , Bees/anatomy & histology , Brain/anatomy & histology , Medical Illustration , Olfactory Pathways/anatomy & histology , Animals , Drosophila/anatomy & histology , Female , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging , Confocal Microscopy , Anatomic Models , Mushroom Bodies/anatomy & histology , Neuroanatomy/instrumentation , Neuroanatomy/methods , Neurons/cytology , Neuropil/cytology , Organ Size
10.
Stud Health Technol Inform; 94: 171-3, 2003.
Article in English | MEDLINE | ID: mdl-15455885

ABSTRACT

This work presents the first quantitative results of a method for automatic liver segmentation from CT data. It is based on a 3D deformable model approach using a priori statistical information about the shape of the liver gained from a training set. The model is adapted to the data in an iterative process by analyzing the grey-value profiles along its surface normals after nonlinear diffusion filtering. Leave-one-out experiments over 26 CT data sets reveal an accuracy of 2.4 mm with respect to the manual segmentation.
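
The 2.4 mm figure is a surface-distance measure between the automatic and manual segmentations. The sketch below computes a symmetric mean surface distance between two binary masks using distance transforms; the masks, spacing, and metric details are assumptions for illustration, and the paper's exact accuracy definition may differ.

```python
"""Sketch of a symmetric mean surface distance (mm) between an automatic and a
manual segmentation, via distance transforms (masks and spacing hypothetical)."""
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_surface_distance(auto, manual, spacing):
    auto_surf, manual_surf = surface(auto), surface(manual)
    # Distance (mm) from every voxel to the nearest surface voxel of the other mask.
    dt_manual = ndimage.distance_transform_edt(~manual_surf, sampling=spacing)
    dt_auto = ndimage.distance_transform_edt(~auto_surf, sampling=spacing)
    d_auto = dt_manual[auto_surf]      # auto surface -> manual surface
    d_manual = dt_auto[manual_surf]    # manual surface -> auto surface
    return (d_auto.sum() + d_manual.sum()) / (len(d_auto) + len(d_manual))

# Toy example: two offset boxes on a 1.5 mm isotropic grid.
auto = np.zeros((50, 50, 50), dtype=bool)
auto[10:30, 10:30, 10:30] = True
manual = np.zeros_like(auto)
manual[12:32, 10:30, 10:30] = True
print(f"{mean_surface_distance(auto, manual, (1.5, 1.5, 1.5)):.2f} mm")
```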


Subjects
Liver/surgery , Preoperative Care , Automation , Humans , Liver/anatomy & histology , X-Ray Computed Tomography