Results 1 - 20 of 33
1.
Sensors (Basel); 19(4), 2019 Feb 22.
Article in English | MEDLINE | ID: mdl-30813245

ABSTRACT

The main goal of brain cancer surgery is to perform an accurate resection of the tumor, preserving as much normal brain tissue as possible for the patient. The development of a non-contact and label-free method to provide reliable support for tumor resection in real-time during neurosurgical procedures is a current clinical need. Hyperspectral imaging is a non-contact, non-ionizing, and label-free imaging modality that can assist surgeons during this challenging task without using any contrast agent. In this work, we present a deep learning-based framework for processing hyperspectral images of in vivo human brain tissue. The proposed framework was evaluated on our human image database, which includes 26 in vivo hyperspectral cubes from 16 different patients, among which 258,810 pixels were labeled. The proposed framework is able to generate a thematic map in which the parenchymal area of the brain is delineated and the location of the tumor is identified, providing guidance to the operating surgeon for a successful and precise tumor resection. The deep learning pipeline achieves an overall accuracy of 80% for multiclass classification, improving on the results obtained with traditional support vector machine (SVM)-based approaches. In addition, a visualization aid is presented in which the final thematic map can be adjusted by the operating surgeon to find the optimal classification threshold for the situation at hand during the surgical procedure.
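
As an illustration of the pixel-wise classification step described above, the following Python sketch trains an SVM baseline (the comparison method named in the abstract) on a handful of labeled pixel spectra and produces a thematic map for a synthetic hyperspectral cube; the cube size, class names, and data are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: pixel-wise spectral classification of a
# hyperspectral cube into a thematic map, using an SVM baseline.
# The cube, labels, and class names below are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
H, W, BANDS = 64, 64, 128            # assumed cube dimensions
cube = rng.random((H, W, BANDS))     # stand-in for a calibrated HS cube

# Hypothetical sparse ground-truth pixels:
# 0 = normal tissue, 1 = tumor, 2 = blood vessel, 3 = background
n_labeled = 500
ys = rng.integers(0, H, n_labeled)
xs = rng.integers(0, W, n_labeled)
labels = rng.integers(0, 4, n_labeled)

clf = SVC(kernel="rbf", C=1.0)       # SVM baseline referenced in the abstract
clf.fit(cube[ys, xs, :], labels)

# Classify every pixel spectrum and fold the result back into image shape
thematic_map = clf.predict(cube.reshape(-1, BANDS)).reshape(H, W)
print(thematic_map.shape, np.unique(thematic_map))
```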


Subjects
Deep Learning , Glioblastoma/diagnostic imaging , Algorithms , Brain/diagnostic imaging , Computational Biology , Humans , Image Processing, Computer-Assisted , Precision Medicine , Support Vector Machine
2.
J Digit Imaging; 30(6): 782-795, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28342043

ABSTRACT

Three-dimensional (3D) manual segmentation of the prostate on magnetic resonance imaging (MRI) is a laborious and time-consuming task that is subject to inter-observer variability. In this study, we developed a fully automatic segmentation algorithm for T2-weighted endorectal prostate MRI and evaluated its accuracy within different regions of interest using a set of complementary error metrics. Our dataset contained 42 T2-weighted endorectal MRI scans from prostate cancer patients. The prostate was manually segmented by one observer on all of the images and by two other observers on a subset of 10 images. The algorithm first coarsely localizes the prostate in the image using a template matching technique. Then, it defines the prostate surface using shape and appearance information learned from a set of training images. To evaluate the algorithm, we assessed the error metric values in the context of measured inter-observer variability and compared performance to that of our previously published semi-automatic approach. The automatic algorithm needed an average execution time of ∼60 s to segment the prostate in 3D. When compared to a single-observer reference standard, the automatic algorithm had an average mean absolute distance of 2.8 mm, Dice similarity coefficient of 82%, recall of 82%, precision of 84%, and volume difference of 0.5 cm3 in the mid-gland. Concordant with other studies, accuracy was highest in the mid-gland and lower in the apex and base. The loss of accuracy with respect to the semi-automatic algorithm was less than the measured inter-observer variability in manual segmentation for the same task.
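
For readers unfamiliar with the error metrics reported here, the following sketch computes the Dice similarity coefficient, recall, precision, and volume difference for a pair of binary 3D masks; the masks and voxel spacing are made up, and this is not the study's evaluation code.

```python
# Illustrative sketch: overlap and volume metrics for a binary 3D segmentation
# compared against a reference-standard mask. Data and spacing are synthetic.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def recall_precision(pred, ref):
    tp = np.logical_and(pred, ref).sum()
    return tp / ref.sum(), tp / pred.sum()

def volume_cm3(mask, voxel_mm3):
    return mask.sum() * voxel_mm3 / 1000.0      # mm^3 -> cm^3

rng = np.random.default_rng(1)
ref = rng.random((32, 128, 128)) > 0.7          # toy reference mask
pred = rng.random((32, 128, 128)) > 0.7         # toy predicted mask
voxel_mm3 = 0.5 * 0.5 * 3.0                     # assumed voxel size

print("Dice:", dice(pred, ref))
print("recall, precision:", recall_precision(pred, ref))
print("volume difference (cm^3):",
      volume_cm3(pred, voxel_mm3) - volume_cm3(ref, voxel_mm3))
```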


Subjects
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Prostatic Neoplasms/diagnostic imaging , Algorithms , Humans , Male , Observer Variation , Prostate/diagnostic imaging , Reproducibility of Results
3.
Article in English | MEDLINE | ID: mdl-38501056

ABSTRACT

Magnetic resonance imaging (MRI) has gained popularity in the field of prenatal imaging due to the ability to provide high quality images of soft tissue. In this paper, we presented a novel method for extracting different textural and morphological features of the placenta from MRI volumes using topographical mapping. We proposed polar and planar topographical mapping methods to produce common placental features from a unique point of observation. The features extracted from the images included the entire placenta surface, as well as the thickness, intensity, and entropy maps displayed in a convenient two-dimensional format. The topography-based images may be useful for clinical placental assessments as well as computer-assisted diagnosis, and prediction of potential pregnancy complications.
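
The sketch below shows one simple way a polar "topographic" view could be formed: resampling a 2D slice onto an angle-by-radius grid around a seed point. This is only an assumption about the general idea, not the paper's method, and the slice, center, and sampling extents are placeholders.

```python
# Illustrative sketch: unwrap a 2D slice onto a polar (angle x radius) grid
# around a chosen center, producing a flat map of the surrounding tissue.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(2)
slice_2d = rng.random((256, 256))        # stand-in for one MRI slice
cy, cx = 128, 128                        # hypothetical placental center

n_theta, n_r = 360, 100
theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
radius = np.linspace(0, 100, n_r)        # sample out to 100 pixels
rr, tt = np.meshgrid(radius, theta)      # (n_theta, n_r) sampling grids

rows = cy + rr * np.sin(tt)              # image coordinates of each sample
cols = cx + rr * np.cos(tt)
polar_map = map_coordinates(slice_2d, [rows, cols], order=1)
print(polar_map.shape)                   # (360, 100) angle-by-radius image
```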

4.
Med Phys; 49(2): 1153-1160, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34902166

ABSTRACT

PURPOSE: The goal of this study is to examine the performance improvement of a deep learning algorithm for three-dimensional (3D) image segmentation when minimal user interaction is incorporated into a fully convolutional neural network (CNN). METHODS: A U-Net CNN was trained and tested for 3D prostate segmentation on computed tomography (CT) images. To improve the segmentation accuracy, the CNN's input images were annotated with a set of border landmarks to supervise the network for segmenting the prostate. The network was then trained and tested again with images annotated using 5, 10, 15, 20, or 30 landmark points. RESULTS: Compared to fully automatic segmentation, the Dice similarity coefficient increased by up to 9% when 5-30 sparse landmark points were involved, with the segmentation accuracy improving as more border landmarks were used. CONCLUSIONS: When a limited number of sparse border landmarks are used on the input image, the CNN's performance approaches the inter-expert observer difference observed in manual segmentation.
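
One plausible way to feed sparse border landmarks to a CNN (an assumption about the general idea, not the published implementation) is to convert the points into a Euclidean distance map and stack it with the CT volume as a second input channel:

```python
# Illustrative sketch: encode a few border landmarks as a normalized distance
# map and stack it with the CT volume as a two-channel network input.
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(3)
ct = rng.random((64, 96, 96)).astype(np.float32)   # stand-in CT volume

# Hypothetical 10 sparse border landmarks (z, y, x), e.g. user clicks
z = rng.integers(0, 64, 10)
y = rng.integers(0, 96, 10)
x = rng.integers(0, 96, 10)

seeds = np.zeros(ct.shape, dtype=bool)
seeds[z, y, x] = True
dist_map = distance_transform_edt(~seeds).astype(np.float32)
dist_map /= dist_map.max()                         # normalize to [0, 1]

# Channel-first input: (channels, depth, height, width)
net_input = np.stack([ct, dist_map], axis=0)
print(net_input.shape)
```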


Subjects
Image Processing, Computer-Assisted , Prostate , Data Curation , Humans , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Tomography, X-Ray Computed
5.
Article in English | MEDLINE | ID: mdl-36793657

ABSTRACT

Ultrasound-guided biopsy is widely used for disease detection and diagnosis. We plan to register preoperative imaging, such as positron emission tomography / computed tomography (PET/CT) and/or magnetic resonance imaging (MRI), with real-time intraoperative ultrasound imaging for improved localization of suspicious lesions that may not be seen on ultrasound but are visible on other imaging modalities. Once image registration is completed, we will combine the images from two or more modalities and use the Microsoft HoloLens 2 augmented reality (AR) headset to display three-dimensional (3D) segmented lesions and organs from previously acquired images alongside real-time ultrasound images. In this work, we are developing a multi-modal, 3D augmented reality system for potential use in ultrasound-guided prostate biopsy. Preliminary results demonstrate the feasibility of combining images from multiple modalities into an AR-guided system.

6.
Article in English | MEDLINE | ID: mdl-36793655

ABSTRACT

Given the prevalence of cardiovascular diseases (CVDs), segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variability yields inconsistent and inaccurate results. Computer-assisted and, in particular, deep learning approaches to segmentation offer a potentially accurate and efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. Thus, we focus on a semi-automated deep learning approach to cardiac segmentation that bridges the divide between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Points-distance maps were then generated from these point selections, and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained using the points-distance maps to provide a segmentation prediction. Testing our method with different numbers of selected points, we achieved Dice scores from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent, deep learning segmentation approach demonstrated promising performance for chamber-by-chamber delineation of the heart in CT images.
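
A rough sketch of how points-distance maps might be produced (an interpretation of the description above, not the study's code): sample a few voxels from the surface of a chamber mask to mimic user clicks, then take a distance transform from those seeds.

```python
# Illustrative sketch: simulate sparse surface-point selections on a chamber
# mask and convert them into a points-distance map for a 3D FCNN input.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

rng = np.random.default_rng(4)
mask = np.zeros((64, 64, 64), dtype=bool)
mask[16:48, 16:48, 16:48] = True                 # stand-in chamber mask

surface = mask & ~binary_erosion(mask)           # boundary voxels of the mask
surface_idx = np.argwhere(surface)
picks = surface_idx[rng.choice(len(surface_idx), size=8, replace=False)]

seeds = np.zeros_like(mask)
seeds[tuple(picks.T)] = True                     # 8 simulated "clicks"
points_distance_map = distance_transform_edt(~seeds)
print(points_distance_map.shape, picks.shape)
```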

7.
Article in English | MEDLINE | ID: mdl-36793656

ABSTRACT

Phantoms are invaluable tools, broadly used for research and training purposes, that are designed to mimic tissues and structures in the body. In this paper, polyvinyl chloride (PVC)-plasticizer and silicone rubbers were explored as economical materials for reliably creating long-lasting, realistic kidney phantoms with contrast under both ultrasound (US) and X-ray imaging. The radiodensity properties of varying formulations of soft PVC-based gels were characterized to allow adjustable image intensity and contrast. Using these data, a phantom creation workflow was established that can be easily adapted to match the radiodensity values of other organs and soft tissues in the body. Internal kidney structures such as the medulla and ureter were created using a two-part molding process to allow greater phantom customization. The kidney phantoms were imaged under US and X-ray scanners to compare the contrast enhancement of a PVC-based medulla versus a silicone-based medulla. Silicone was found to have higher attenuation than plastic under X-ray imaging, but poor image quality under US imaging. PVC was found to exhibit good contrast under X-ray imaging and excellent performance for US imaging. Finally, the durability and shelf life of our PVC-based phantoms were observed to be vastly superior to those of common agar-based phantoms. The work presented here allows extended periods of usage and storage for each kidney phantom while simultaneously preserving anatomical detail, contrast under dual-modality imaging, and a low cost of materials.

8.
Article in English | MEDLINE | ID: mdl-36798628

ABSTRACT

Hyperspectral imaging (HSI) and radiomics have the potential to improve the accuracy of tumor malignancy prediction and assessment. In this work, we extracted radiomic features from fresh surgical papillary thyroid carcinoma (PTC) specimens that were imaged with HSI. A total of 107 unique radiomic features were extracted. This study includes 72 ex vivo tissue specimens from 44 patients with pathology-confirmed PTC. With the dilated hyperspectral images, the least axis length shape feature was able to predict tumor aggressiveness with high accuracy. The HSI-based radiomic method may provide a useful tool to aid oncologists in identifying tumors with intermediate to high risk and in clinical decision making.
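
As background for readers, the "least axis length" shape feature can be illustrated with a simple eigen-decomposition of the voxel coordinates of a tumor mask; the definition below follows a common radiomics convention and may not match the exact feature implementation used in the study.

```python
# Illustrative sketch: estimate the least axis length of a tumor mask from the
# eigenvalues of its physical-coordinate covariance. Mask and spacing are toy.
import numpy as np

rng = np.random.default_rng(5)
mask = np.zeros((40, 60, 60), dtype=bool)
mask[10:30, 20:40, 25:35] = True                 # stand-in tumor mask
spacing = np.array([1.0, 0.5, 0.5])              # assumed voxel spacing in mm

coords = np.argwhere(mask) * spacing             # voxel centers in mm
cov = np.cov(coords, rowvar=False)               # 3 x 3 covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(cov))       # ascending eigenvalues
least_axis_length = 4.0 * np.sqrt(eigvals[0])    # smallest principal axis
print(f"least axis length ~ {least_axis_length:.1f} mm")
```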

9.
Article in English | MEDLINE | ID: mdl-36794092

ABSTRACT

Hyperspectral endoscopy can offer multiple advantages compared to conventional endoscopy. Our goal is to design and develop a real-time hyperspectral endoscopic imaging system for the diagnosis of gastrointestinal (GI) tract cancers using a micro-LED array as an in-situ illumination source. The wavelengths of the system range from ultraviolet to visible and near infrared. To evaluate the use of the LED array for hyperspectral imaging, we designed a prototype system and conducted ex vivo experiments using normal and cancerous tissues of mice, chicken, and sheep. We compared the results of our LED-based approach with those of our reference hyperspectral camera system. The results confirm the similarity between the LED-based hyperspectral imaging system and the reference HSI camera. Our LED-based hyperspectral imaging system can be used not only as an endoscope but also as a laparoscopic or handheld device for cancer detection and surgery.

10.
Article in English | MEDLINE | ID: mdl-36844110

ABSTRACT

In women with placenta accreta spectrum (PAS), patient management may involve cesarean hysterectomy at delivery. Magnetic resonance imaging (MRI) has been used for further evaluation of PAS and surgical planning. This work tackles two prediction problems: predicting the presence of PAS and predicting hysterectomy using MR images of pregnant patients. First, we extracted approximately 2,500 radiomic features from MR images with two regions of interest: the placenta and the uterus. In addition to analyzing the two regions of interest, we dilated the placenta and uterus masks by 5, 10, 15, and 20 mm to gain insights from the myometrium, where the uterus and placenta overlap in the case of PAS. The study cohort includes 241 pregnant women. Of these women, 89 underwent hysterectomy while 152 did not; 141 had suspected PAS and 100 did not. We obtained an accuracy of 0.88 for predicting hysterectomy and an accuracy of 0.92 for classifying suspected PAS. Once further validated, this radiomic analysis tool could be useful for aiding clinicians in decision making on the care of pregnant women.
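
A minimal sketch of the mask-dilation step follows; the mm-to-voxel conversion, spacing, and masks are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch: approximate a millimetre dilation of an ROI mask by
# repeated one-voxel dilations, converting mm to iterations with the smallest
# voxel spacing (this ignores anisotropy and is only a rough approximation).
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(6)
placenta = rng.random((32, 128, 128)) > 0.95      # stand-in binary mask
spacing_mm = np.array([4.0, 1.0, 1.0])            # assumed slice/row/col spacing

def dilate_mm(mask, mm, spacing):
    iterations = int(round(mm / spacing.min()))   # mm -> voxel steps
    return binary_dilation(mask, iterations=iterations)

for mm in (5, 10, 15, 20):
    dilated = dilate_mm(placenta, mm, spacing_mm)
    print(mm, "mm dilation ->", int(dilated.sum()), "voxels in ROI")
```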

11.
Article in English | MEDLINE | ID: mdl-36798450

ABSTRACT

Magnetic resonance imaging (MRI) is useful for the detection of abnormalities affecting maternal and fetal health. In this study, we used a fully convolutional neural network for simultaneous segmentation of the uterine cavity and placenta on MR images. We used MR images of 181 patients, with 157 for training and 24 for validation. The segmentation performance of the algorithm was evaluated using MR images of 60 additional patients that were not involved in training. The average Dice similarity coefficients achieved for the uterine cavity and placenta were 92% and 80%, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of less than 1.1% compared to manual estimates. Automated segmentation, when incorporated into clinical use, has the potential to quantify, standardize, and improve placental assessment, resulting in improved outcomes for mothers and fetuses.

12.
Article in English | MEDLINE | ID: mdl-36798853

ABSTRACT

In severe cases, placenta accreta spectrum (PAS) requires emergency hysterectomy, endangering the lives of both mother and fetus. Early prediction may reduce complications and aid in management decisions in these high-risk pregnancies. In this work, we developed a novel convolutional network architecture that combines MRI volumes, radiomic features, and custom feature maps to predict PAS severe enough to result in hysterectomy after fetal delivery in pregnant women. We trained, optimized, and evaluated the networks using data from 241 patients, in groups of 157, 24, and 60 for training, validation, and testing, respectively. We found that the network using all three paths produced the best performance, with an AUC of 87.8%, an accuracy of 83.3%, a sensitivity of 85.0%, and a specificity of 82.5%. This deep learning algorithm, if deployed in clinical settings, may identify women at risk before birth, resulting in improved patient outcomes.
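
To make the three-path idea concrete, here is a small, hypothetical PyTorch module that fuses an MRI volume, a radiomic feature vector, and a 2D custom feature map into one prediction; the layer sizes and structure are illustrative assumptions, not the published architecture.

```python
# Illustrative sketch (assuming PyTorch is available): a three-branch network
# that fuses a 3D volume, a 2D feature map, and a radiomic vector into one logit.
import torch
import torch.nn as nn

class ThreePathNet(nn.Module):
    def __init__(self, n_radiomic=100):
        super().__init__()
        self.vol_path = nn.Sequential(          # 3D image branch
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.map_path = nn.Sequential(          # 2D feature-map branch
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rad_path = nn.Sequential(          # radiomic-vector branch
            nn.Linear(n_radiomic, 16), nn.ReLU())
        self.head = nn.Linear(8 + 8 + 16, 1)    # fused prediction head

    def forward(self, vol, fmap, rad):
        z = torch.cat([self.vol_path(vol), self.map_path(fmap),
                       self.rad_path(rad)], dim=1)
        return self.head(z)

net = ThreePathNet()
logit = net(torch.randn(2, 1, 32, 64, 64),      # toy MRI volume batch
            torch.randn(2, 1, 64, 64),          # toy custom feature maps
            torch.randn(2, 100))                # toy radiomic features
print(torch.sigmoid(logit).shape)               # per-case risk probabilities
```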

13.
Article in English | MEDLINE | ID: mdl-35784009

ABSTRACT

We designed a compact, real-time LED-based endoscopic imaging system for the detection of various diseases, including cancer. In gastrointestinal applications, conventional endoscopy cannot reliably differentiate tumor from normal tissue, and current hyperspectral imaging systems are too slow for real-time endoscopic use. We are investigating real-time spectral imaging for different tissue types. Our objective is to develop a catheter for real-time hyperspectral gastrointestinal endoscopy. The endoscope uses multiple wavelengths within the UV, visible, and IR light spectra generated by a micro-LED array. We capture images with a monochrome micro camera, which is cost-effective and smaller than current hyperspectral imagers. A wireless transceiver sends the captured images to a workstation for further processing, such as tumor detection. The spatial resolution of the system is defined by the camera resolution and the distance to the object, while the number of LEDs in the multi-wavelength light source determines the spectral resolution. To investigate the properties and limitations of our high-speed spectral imaging approach, we designed a prototype system and conducted two experiments to measure the optimal forward voltages and lighting durations of the LEDs; these factors affect the maximum feasible imaging rate and resolution. The lighting duration of each LED can be shorter than 10 ms while still producing an image with a high signal-to-noise ratio and no illumination interference. These results support the idea of using a high-speed camera and an LED array for real-time hyperspectral endoscopic imaging.

14.
Article in English | MEDLINE | ID: mdl-35177877

ABSTRACT

Cardiac catheterization is a delicate procedure often used during various heart interventions. However, it carries a myriad of risks, including damage to the vessel or the heart itself, blood clots, and arrhythmias. Many of these risks become more likely as the length of the operation increases, creating demand for a more accurate procedure that also reduces the overall time required. To this end, we developed an adaptable virtual reality simulation and visualization method to provide essential information to the physician ahead of time, with the goal of reducing potential risks, decreasing operation time, and improving the accuracy of cardiac catheterization procedures. We additionally conducted a phantom study to evaluate the impact of using our virtual reality system prior to a procedure.

15.
Article in English | MEDLINE | ID: mdl-35755405

ABSTRACT

Accurate segmentation of the prostate on computed tomography (CT) has many diagnostic and therapeutic applications. However, manual segmentation is time-consuming and suffers from high inter- and intra-observer variability. Computer-assisted approaches are useful for speeding up the process and increasing the reproducibility of the segmentation. Deep learning-based segmentation methods have shown potential for quick and accurate segmentation of the prostate on CT images, but difficulties in obtaining manual, expert segmentations on a large quantity of images limit further progress. Thus, we proposed an approach that trains a base model on a small, manually labeled dataset and fine-tunes the model using unannotated images from a large dataset without any manual segmentation. The datasets used for pre-training and fine-tuning the base model were acquired in different centers with different CT scanners and imaging parameters. Our fine-tuning method increased the validation and testing Dice scores. A paired, two-tailed t-test shows a significant change in test score (p = 0.017), demonstrating that unannotated images can be used to increase the performance of automated segmentation models.
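
The statistical comparison reported above can be reproduced in form (with made-up Dice scores) using SciPy's paired, two-tailed t-test:

```python
# Illustrative sketch: paired, two-tailed t-test comparing per-case test Dice
# scores before and after fine-tuning. All numbers here are synthetic.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
dice_baseline = rng.normal(0.82, 0.05, 30)               # hypothetical scores
dice_finetuned = dice_baseline + rng.normal(0.02, 0.03, 30)

t_stat, p_value = ttest_rel(dice_finetuned, dice_baseline)  # two-tailed default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```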

16.
Article in English | MEDLINE | ID: mdl-35755403

ABSTRACT

Surgery is a major treatment method for squamous cell carcinoma (SCC). During surgery, an insufficient tumor margin may lead to local recurrence of cancer. Hyperspectral imaging (HSI) is a promising optical imaging technique for in vivo cancer detection and tumor margin assessment. In this study, a fully convolutional network (FCN) was implemented for tumor classification and margin assessment on hyperspectral images of SCC. The FCN was trained and validated with hyperspectral images of 25 ex vivo SCC surgical specimens from 20 different patients. The network was evaluated per patient and achieved pixel-level tissue classification with an average area under the curve (AUC) of 0.88, as well as an accuracy of 0.83, a sensitivity of 0.84, and a specificity of 0.70 across all 20 patients. The 95% Hausdorff distance of the assessed tumor margin in 17 patients was less than 2 mm, and classification of each tissue specimen took less than 10 seconds. The proposed methods can potentially facilitate intraoperative tumor margin assessment and improve surgical outcomes.
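
For reference, one common formulation of the 95% Hausdorff distance between predicted and reference boundaries (not necessarily the exact implementation used here) is sketched below with placeholder point sets.

```python
# Illustrative sketch: 95th-percentile Hausdorff distance between two boundary
# point sets, using nearest-neighbour distances in both directions.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a, points_b):
    d_ab = cKDTree(points_b).query(points_a)[0]   # A -> B surface distances
    d_ba = cKDTree(points_a).query(points_b)[0]   # B -> A surface distances
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

rng = np.random.default_rng(8)
pred_boundary = rng.random((500, 2)) * 10.0       # toy margin coordinates (mm)
ref_boundary = pred_boundary + rng.normal(0, 0.5, (500, 2))
print(f"HD95 ~ {hd95(pred_boundary, ref_boundary):.2f} mm")
```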

17.
J Med Imaging (Bellingham); 8(5): 054001, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34589556

ABSTRACT

Purpose: Magnetic resonance imaging has recently been used to examine abnormalities of the placenta during pregnancy. Segmentation of the placenta and uterine cavity allows quantitative measurement and further analysis of the organs. The objective of this study is to develop a segmentation method with minimal user interaction. Approach: We developed a fully convolutional neural network (CNN) for simultaneous segmentation of the uterine cavity and placenta in three dimensions (3D) while minimal operator interaction was incorporated for training and testing of the network. The user interaction guided the network to localize the placenta more accurately. In the experiments, we trained two CNNs, one using 70 normal training cases and the other using 129 training cases that included normal cases as well as cases with suspected placenta accreta spectrum (PAS). We evaluated the performance of the segmentation algorithms on two test sets: one with 20 normal cases and the other with 50 images from both normal women and women with suspected PAS. Results: For the normal test data, the average Dice similarity coefficient (DSC) was 92% and 82% for the uterine cavity and placenta, respectively. For the combination of normal and abnormal cases, the DSC was 88% and 83% for the uterine cavity and placenta, respectively. The 3D segmentation algorithm estimated the volume of the normal and abnormal uterine cavity and placenta with average volume estimation errors of 4% and 9%, respectively. Conclusions: The deep learning-based segmentation method provides a useful tool for volume estimation and analysis of the placenta and uterine cavity in human placental imaging.

18.
Article in English | MEDLINE | ID: mdl-35784397

ABSTRACT

A deep learning (DL)-based segmentation tool was applied to a new magnetic resonance imaging dataset of pregnant women with suspected placenta accreta spectrum (PAS). Radiomic features from the DL segmentation were compared to those from expert manual segmentation via intraclass correlation coefficients (ICC) to assess reproducibility. An additional imaging marker quantifying the placental location within the uterus (PLU) was included. Features with an ICC > 0.7 were used to build logistic regression models to predict hysterectomy. Of 2,059 features, 781 (37.9%) had an ICC > 0.7. The AUC was 0.69 (95% CI 0.63-0.74) for manually segmented data and 0.78 (95% CI 0.73-0.83) for DL-segmented data.
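
A minimal sketch of the feature-selection and modeling steps described above, with placeholder data and ICC values (not the study's code):

```python
# Illustrative sketch: keep features whose reproducibility ICC exceeds 0.7,
# then fit a logistic regression for hysterectomy and report a training AUC.
# Feature matrix, labels, and ICC values are all synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n_cases, n_features = 120, 200
features = rng.normal(size=(n_cases, n_features))   # radiomic feature matrix
hysterectomy = rng.integers(0, 2, n_cases)          # outcome labels
icc = rng.uniform(0.3, 1.0, n_features)             # precomputed ICC per feature

keep = icc > 0.7                                    # reproducible features only
model = LogisticRegression(max_iter=1000).fit(features[:, keep], hysterectomy)
auc = roc_auc_score(hysterectomy,
                    model.predict_proba(features[:, keep])[:, 1])
print(f"{int(keep.sum())} features kept, training AUC = {auc:.2f}")
```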

19.
Article in English | MEDLINE | ID: mdl-32606488

ABSTRACT

Wearable augmented reality (AR) is an emerging technology with enormous potential for use in the medical field, from training and procedure simulations to image-guided surgery. Medical AR seeks to enable surgeons to see tissue segmentations in real time. The emphasis on real-time guidance creates the need for a fast method of imaging and classification. Hyperspectral imaging (HSI) is a non-contact, optical imaging modality that rapidly acquires hundreds of images of tissue at different wavelengths, which can be used to generate spectral data of the tissue. Combining HSI information and machine-learning algorithms allows for effective tissue classification. In this paper, we constructed a brain tissue phantom with porcine blood, yellow-dyed gelatin, and colorless gelatin to represent blood vessels, tumor, and normal brain tissue, respectively. Using a segmentation algorithm, hundreds of hyperspectral images were compiled to classify each of the pixels. Three segmentation labels were generated from the data, each corresponding to a different type of tissue. Our system virtually superimposes the HSI channels and segmentation labels of the brain tumor phantom onto the real scene using the HoloLens AR headset. The user can manipulate and interact with the segmentation labels and HSI channels by repositioning, rotating, changing visibility, and switching between them. All actions can be performed through either hand or voice controls. This creates a convenient and multifaceted visualization of brain tissue in real time with minimal user restrictions. We demonstrate the feasibility of a fast and practical HSI-AR technique for potential use in image-guided brain surgery.

20.
Article in English | MEDLINE | ID: mdl-32476701

ABSTRACT

Computer-assisted image segmentation techniques could help clinicians perform the border delineation task faster and with lower inter-observer variability. Recently, convolutional neural networks (CNNs) have been widely used for automatic image segmentation. In this study, we used a technique that incorporates observer inputs to supervise CNNs and improve their segmentation accuracy. We added a set of sparse surface points as an additional input to supervise the CNNs for more accurate image segmentation. We tested our technique by applying minimal interactions to supervise the networks for segmentation of the prostate on magnetic resonance images. We used U-Net and a new network architecture based on U-Net (dual-input path [DIP] U-Net), and showed that our supervising technique could significantly increase the segmentation accuracy of both networks compared to fully automatic segmentation using U-Net. We also showed that DIP U-Net outperformed U-Net for supervised image segmentation. We compared our results to the measured inter-expert observer difference in manual segmentation. This comparison suggests that applying about 15 to 20 selected surface points can achieve performance comparable to that of manual segmentation.
