Results 1 - 20 of 33
1.
Article in English | MEDLINE | ID: mdl-38501056

ABSTRACT

Magnetic resonance imaging (MRI) has gained popularity in prenatal imaging due to its ability to provide high-quality images of soft tissue. In this paper, we present a novel method for extracting textural and morphological features of the placenta from MRI volumes using topographical mapping. We propose polar and planar topographical mapping methods to produce common placental features from a unique point of observation. The extracted features include the entire placental surface, as well as thickness, intensity, and entropy maps displayed in a convenient two-dimensional format. These topography-based images may be useful for clinical placental assessment, computer-assisted diagnosis, and prediction of potential pregnancy complications.
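As one concrete example of the kind of texture map mentioned above, the short sketch below (illustrative Python, not the authors' implementation) computes a local Shannon-entropy map over a 2D placental intensity map; the window size and bin count are arbitrary choices.

```python
import numpy as np

def local_entropy_map(intensity_map, window=7, bins=32):
    """Shannon entropy computed over a sliding window of a 2D intensity map."""
    h, w = intensity_map.shape
    pad = window // 2
    padded = np.pad(intensity_map, pad, mode="reflect")
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                      # ignore empty bins
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```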

2.
Article in English | MEDLINE | ID: mdl-36793655

ABSTRACT

Given the prevalence of cardiovascular diseases (CVDs), segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variability yields inconsistent and inaccurate results. Computer-assisted and, in particular, deep learning approaches to segmentation offer a potentially accurate, efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. Thus, we focus on a semi-automated deep learning approach to cardiac segmentation that bridges the gap between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Points-distance maps were then generated from these point selections, and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained on the points-distance maps to provide a segmentation prediction. Testing our method with different numbers of selected points, we achieved Dice scores ranging from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent deep learning segmentation approach demonstrated promising performance for chamber-by-chamber delineation of the heart in CT images.
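To illustrate the point-guided input described above, the minimal sketch below (hypothetical Python, not the authors' code) builds a distance map from a handful of selected surface points; such a map can be stacked with the CT volume as an additional input channel to a 3D FCNN.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def points_distance_map(volume_shape, points_ijk, spacing=(1.0, 1.0, 1.0)):
    """Distance (in mm) from every voxel to the nearest selected surface point."""
    seeds = np.ones(volume_shape, dtype=bool)
    for i, j, k in points_ijk:
        seeds[i, j, k] = False            # zeros mark the user-selected points
    # EDT measures the distance of each voxel to the nearest zero entry
    return distance_transform_edt(seeds, sampling=spacing)

# Example: a few hypothetical surface points on a small volume
dist_map = points_distance_map((64, 128, 128), [(30, 60, 60), (32, 70, 55), (28, 62, 72)])
# two_channel_input = np.stack([ct_volume, dist_map])  # fed to the network
```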

3.
Article in English | MEDLINE | ID: mdl-36793656

ABSTRACT

Phantoms are invaluable tools, designed to mimic tissues and structures in the body, that are broadly used for research and training. In this paper, polyvinyl chloride (PVC)-plasticizer and silicone rubbers were explored as economical materials for reliably creating long-lasting, realistic kidney phantoms with contrast under both ultrasound (US) and X-ray imaging. The radiodensity properties of varying formulations of soft PVC-based gels were characterized to allow adjustable image intensity and contrast. Using these data, a phantom creation workflow was established that can be easily adapted to match the radiodensity values of other organs and soft tissues in the body. Internal kidney structures such as the medulla and ureter were created using a two-part molding process to allow greater phantom customization. The kidney phantoms were imaged with US and X-ray scanners to compare the contrast enhancement of a PVC-based medulla versus a silicone-based medulla. Silicone showed higher attenuation than PVC under X-ray imaging but poor image quality under US, whereas PVC exhibited good contrast under X-ray imaging and excellent performance under US. Finally, the durability and shelf life of our PVC-based phantoms were vastly superior to those of common agar-based phantoms. The work presented here allows extended periods of use and storage for each kidney phantom while preserving anatomical detail, contrast under dual-modality imaging, and low material cost.

4.
Article in English | MEDLINE | ID: mdl-36793657

ABSTRACT

Ultrasound-guided biopsy is widely used for disease detection and diagnosis. We plan to register preoperative imaging, such as positron emission tomography/computed tomography (PET/CT) and/or magnetic resonance imaging (MRI), with real-time intraoperative ultrasound imaging for improved localization of suspicious lesions that may not be seen on ultrasound but are visible on other imaging modalities. Once the image registration is completed, we will combine the images from two or more imaging modalities and use the Microsoft HoloLens 2 augmented reality (AR) headset to display three-dimensional (3D) segmented lesions and organs from previously acquired images alongside real-time ultrasound images. In this work, we are developing a multi-modal, 3D augmented reality system for potential use in ultrasound-guided prostate biopsy. Preliminary results demonstrate the feasibility of combining images from multiple modalities into an AR-guided system.
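The multimodal registration step described above could, for instance, be prototyped with a standard intensity-based rigid registration; the sketch below uses SimpleITK with mutual information (a generic approach with hypothetical file names, not the authors' pipeline).

```python
import SimpleITK as sitk

# Hypothetical inputs: an intraoperative US volume (fixed) and a preoperative MRI (moving)
fixed = sitk.ReadImage("us_volume.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("mri_volume.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal similarity
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
aligned_mri = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```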

5.
Article in English | MEDLINE | ID: mdl-36794092

ABSTRACT

Hyperspectral endoscopy can offer multiple advantages compared to conventional endoscopy. Our goal is to design and develop a real-time hyperspectral endoscopic imaging system for the diagnosis of gastrointestinal (GI) tract cancers using a micro-LED array as an in-situ illumination source. The wavelengths of the system range from ultraviolet to visible and near infrared. To evaluate the use of the LED array for hyperspectral imaging (HSI), we designed a prototype system and conducted ex vivo experiments using normal and cancerous tissues from mice, chicken, and sheep. We compared the results of our LED-based approach with those of our reference hyperspectral camera system. The results confirm the similarity between the LED-based hyperspectral imaging system and the reference HSI camera. Our LED-based hyperspectral imaging system can be used not only as an endoscope but also as a laparoscopic or handheld device for cancer detection and surgery.

6.
Article in English | MEDLINE | ID: mdl-36798450

ABSTRACT

Magnetic resonance imaging (MRI) is useful for the detection of abnormalities affecting maternal and fetal health. In this study, we used a fully convolutional neural network for simultaneous segmentation of the uterine cavity and placenta on MR images. We developed the network using MR images of 181 patients: 157 for training and 24 for validation. The segmentation performance of the algorithm was evaluated using MR images of 60 additional patients that were not involved in training. The average Dice similarity coefficients achieved for the uterine cavity and placenta were 92% and 80%, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of less than 1.1% compared to manual estimations. Automated segmentation, when incorporated into clinical use, has the potential to quantify, standardize, and improve placental assessment, resulting in improved outcomes for mothers and fetuses.
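For reference, the Dice similarity coefficient reported above can be computed from two binary masks as in the minimal sketch below (plain NumPy, for illustration only).

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between a predicted and a reference binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```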

7.
Article in English | MEDLINE | ID: mdl-36798628

ABSTRACT

Hyperspectral imaging (HSI) and radiomics have the potential to improve the accuracy of tumor malignancy prediction and assessment. In this work, we extracted radiomic features from fresh surgical papillary thyroid carcinoma (PTC) specimens that were imaged with HSI. A total of 107 unique radiomic features were extracted. The study includes 72 ex vivo tissue specimens from 44 patients with pathology-confirmed PTC. With the dilated hyperspectral images, the shape feature of least axis length predicted tumor aggressiveness with high accuracy. The HSI-based radiomic method may provide a useful tool to aid oncologists in identifying intermediate- to high-risk tumors and in clinical decision making.
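Shape features such as least axis length can be extracted with an off-the-shelf radiomics toolkit; the sketch below uses pyradiomics on hypothetical image and mask files (the abstract does not state which toolkit was used, so treat this as an assumption).

```python
import SimpleITK as sitk
from radiomics import featureextractor   # pyradiomics

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")        # shape class includes LeastAxisLength

# Hypothetical file names for a specimen image and its tumor mask
image = sitk.ReadImage("ptc_specimen.nii.gz")
mask = sitk.ReadImage("ptc_tumor_mask.nii.gz")

features = extractor.execute(image, mask)
print(features.get("original_shape_LeastAxisLength"))
```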

8.
Article in English | MEDLINE | ID: mdl-36798853

ABSTRACT

In severe cases, placenta accreta spectrum (PAS) requires emergency hysterectomy, endangering the lives of both mother and fetus. Early prediction may reduce complications and aid management decisions in these high-risk pregnancies. In this work, we developed a novel convolutional network architecture that combines MRI volumes, radiomic features, and custom feature maps to predict PAS severe enough to result in hysterectomy after fetal delivery. We trained, optimized, and evaluated the networks using data from 241 patients, in groups of 157, 24, and 60 for training, validation, and testing, respectively. The network using all three paths produced the best performance, with an AUC of 87.8%, an accuracy of 83.3%, a sensitivity of 85.0%, and a specificity of 82.5%. This deep learning algorithm, if deployed in clinical settings, may identify at-risk women before delivery, resulting in improved patient outcomes.
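A three-path design like the one described could be organized as in the sketch below (a minimal, hypothetical PyTorch example with arbitrary layer sizes and feature counts, not the authors' architecture): one branch encodes the MRI volume, one the custom 2D feature maps, and one the radiomic feature vector, with the branch outputs concatenated for binary prediction.

```python
import torch
import torch.nn as nn

class ThreePathFusionNet(nn.Module):
    """Fuse a 3D MRI volume, a custom 2D feature map, and a radiomic vector."""
    def __init__(self, n_radiomic=100):
        super().__init__()
        self.vol_path = nn.Sequential(                       # 3D image branch
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.map_path = nn.Sequential(                       # custom feature-map branch
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rad_path = nn.Sequential(nn.Linear(n_radiomic, 32), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(16 + 8 + 32, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, volume, feature_map, radiomics):
        fused = torch.cat([self.vol_path(volume),
                           self.map_path(feature_map),
                           self.rad_path(radiomics)], dim=1)
        return torch.sigmoid(self.classifier(fused))

# Smoke test with hypothetical tensor sizes (batch of 2)
net = ThreePathFusionNet(n_radiomic=100)
p = net(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 64, 64), torch.randn(2, 100))
```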

9.
Article in English | MEDLINE | ID: mdl-36844110

ABSTRACT

In women with placenta accreta spectrum (PAS), patient management may involve cesarean hysterectomy at delivery. Magnetic resonance imaging (MRI) has been used for further evaluation of PAS and surgical planning. This work tackles two prediction problems: predicting the presence of PAS and predicting hysterectomy using MR images of pregnant patients. First, we extracted approximately 2,500 radiomic features from MR images with two regions of interest: the placenta and the uterus. In addition to analyzing the two regions of interest, we dilated the placenta and uterus masks by 5, 10, 15, and 20 mm to gain insights from the myometrium, where the uterus and placenta overlap in the case of PAS. The study cohort includes 241 pregnant women: 89 underwent hysterectomy while 152 did not, and 141 had suspected PAS while 100 did not. We obtained an accuracy of 0.88 for predicting hysterectomy and an accuracy of 0.92 for classifying suspected PAS. These results further validate the radiomic analysis tool, which may aid clinicians in decision making on the care of pregnant women.
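Dilating a mask by a physical distance can be done by thresholding a Euclidean distance transform, which respects anisotropic voxel spacing; the sketch below is an illustrative Python implementation, not necessarily the method used in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dilate_mask_mm(mask, spacing_mm, radius_mm):
    """Dilate a binary mask by a physical radius in mm, honoring voxel spacing."""
    # Distance of every background voxel to the nearest mask voxel (mask voxels get 0)
    dist_to_mask = distance_transform_edt(~mask.astype(bool), sampling=spacing_mm)
    return dist_to_mask <= radius_mm

# Example: dilate a placenta mask by 5, 10, 15, and 20 mm
# rings = [dilate_mask_mm(placenta_mask, (3.0, 1.0, 1.0), r) for r in (5, 10, 15, 20)]
```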

10.
Med Phys ; 49(2): 1153-1160, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34902166

ABSTRACT

PURPOSE: The goal is to study the performance improvement of a deep learning algorithm for three-dimensional (3D) image segmentation achieved by incorporating minimal user interaction into a fully convolutional neural network (CNN). METHODS: A U-Net CNN was trained and tested for 3D prostate segmentation on computed tomography (CT) images. To improve segmentation accuracy, the CNN's input images were annotated with a set of border landmarks to supervise the network in segmenting the prostate. The network was then retrained and tested with images annotated using 5, 10, 15, 20, or 30 landmark points. RESULTS: Compared to fully automatic segmentation, the Dice similarity coefficient increased by up to 9% when 5-30 sparse landmark points were involved, with segmentation accuracy improving as more border landmarks were used. CONCLUSIONS: When a limited number of sparse border landmarks are used on the input image, the CNN performance approaches the inter-expert observer difference observed in manual segmentation.
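To give a flavor of how a fixed number of border landmarks might be obtained for experiments like this, the sketch below (hypothetical Python, not the authors' procedure) samples roughly evenly spaced points from the surface of a reference mask as a stand-in for interactive point selection.

```python
import numpy as np
from skimage import measure

def sample_border_landmarks(mask, n_points=20):
    """Pick n roughly evenly spaced points on the surface of a 3D binary mask."""
    verts, faces, normals, values = measure.marching_cubes(mask.astype(np.uint8), level=0.5)
    idx = np.linspace(0, len(verts) - 1, n_points).astype(int)
    return verts[idx]          # (n_points, 3) coordinates in voxel space
```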


Subject(s)
Image Processing, Computer-Assisted , Prostate , Data Curation , Humans , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Tomography, X-Ray Computed
11.
J Med Imaging (Bellingham) ; 8(5): 054001, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34589556

ABSTRACT

Purpose: Magnetic resonance imaging has recently been used to examine abnormalities of the placenta during pregnancy. Segmentation of the placenta and uterine cavity allows quantitative measures and further analyses of these organs. The objective of this study is to develop a segmentation method with minimal user interaction. Approach: We developed a fully convolutional neural network (CNN) for simultaneous segmentation of the uterine cavity and placenta in three dimensions (3D), with minimal operator interaction incorporated for training and testing of the network. The user interaction guided the network to localize the placenta more accurately. In the experiments, we trained two CNNs, one using 70 normal training cases and the other using 129 training cases including normal cases as well as cases with suspected placenta accreta spectrum (PAS). We evaluated the performance of the segmentation algorithms on two test sets: one with 20 normal cases and the other with 50 images from both normal women and women with suspected PAS. Results: For the normal test data, the average Dice similarity coefficient (DSC) was 92% and 82% for the uterine cavity and placenta, respectively. For the combination of normal and abnormal cases, the DSC was 88% and 83% for the uterine cavity and placenta, respectively. The 3D segmentation algorithm estimated the volume of the normal and abnormal uterine cavity and placenta with average volume estimation errors of 4% and 9%, respectively. Conclusions: The deep learning-based segmentation method provides a useful tool for volume estimation and analysis of the placenta and uterine cavity in human placental imaging.
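The volume estimates referenced above follow directly from the predicted mask and the voxel spacing; the minimal sketch below (illustrative only) computes a mask volume in milliliters and the percent error against a manual reference.

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation in milliliters (voxel count x voxel volume)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def percent_volume_error(pred_mask, ref_mask, spacing_mm):
    """Absolute volume estimation error of the prediction, in percent of the reference."""
    v_pred = mask_volume_ml(pred_mask, spacing_mm)
    v_ref = mask_volume_ml(ref_mask, spacing_mm)
    return 100.0 * abs(v_pred - v_ref) / v_ref
```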

12.
Article in English | MEDLINE | ID: mdl-35755403

ABSTRACT

Surgery is a major treatment method for squamous cell carcinoma (SCC). During surgery, an insufficient tumor margin may lead to local recurrence of cancer. Hyperspectral imaging (HSI) is a promising optical imaging technique for in vivo cancer detection and tumor margin assessment. In this study, a fully convolutional network (FCN) was implemented for tumor classification and margin assessment on hyperspectral images of SCC. The FCN was trained and validated with hyperspectral images of 25 ex vivo SCC surgical specimens from 20 different patients. The network was evaluated per patient and achieved pixel-level tissue classification with an average area under the curve (AUC) of 0.88, as well as an accuracy of 0.83, a sensitivity of 0.84, and a specificity of 0.70 across all 20 patients. The 95% Hausdorff distance of the assessed tumor margin was less than 2 mm in 17 patients, and classification of each tissue specimen took less than 10 seconds. The proposed methods can potentially facilitate intraoperative tumor margin assessment and improve surgical outcomes.
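For reference, the 95% Hausdorff distance used above can be computed from the two mask boundaries as in the sketch below (an illustrative NumPy/SciPy implementation, not the authors' evaluation code).

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred, truth, spacing=(1.0, 1.0)):
    """95th-percentile Hausdorff distance (in mm) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_border = pred ^ binary_erosion(pred)      # boundary pixels of each mask
    truth_border = truth ^ binary_erosion(truth)
    # Distance from each boundary pixel to the nearest boundary pixel of the other mask
    d_to_truth = distance_transform_edt(~truth_border, sampling=spacing)[pred_border]
    d_to_pred = distance_transform_edt(~pred_border, sampling=spacing)[truth_border]
    return np.percentile(np.concatenate([d_to_truth, d_to_pred]), 95)
```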

13.
Article in English | MEDLINE | ID: mdl-35755405

ABSTRACT

Accurate segmentation of the prostate on computed tomography (CT) has many diagnostic and therapeutic applications. However, manual segmentation is time-consuming and suffers from high inter- and intra-observer variability. Computer-assisted approaches are useful for speeding up the process and increasing the reproducibility of the segmentation. Deep learning-based segmentation methods have shown potential for quick and accurate segmentation of the prostate on CT images. However, the difficulty of obtaining manual, expert segmentations for a large quantity of images limits further progress. Thus, we proposed an approach that trains a base model on a small, manually labeled dataset and fine-tunes the model using unannotated images from a large dataset without any manual segmentation. The datasets used for pre-training and fine-tuning the base model were acquired in different centers with different CT scanners and imaging parameters. Our fine-tuning method increased the validation and testing Dice scores. A paired, two-tailed t-test shows a significant change in test score (p = 0.017), demonstrating that unannotated images can be used to increase the performance of automated segmentation models.
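The significance test mentioned above compares per-case Dice scores before and after fine-tuning; a minimal sketch with hypothetical scores is shown below (SciPy's paired, two-tailed t-test).

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-case test Dice scores, before and after fine-tuning (illustration only)
dice_base = np.array([0.82, 0.79, 0.85, 0.80, 0.83, 0.78, 0.81, 0.84])
dice_finetuned = np.array([0.84, 0.81, 0.86, 0.83, 0.84, 0.80, 0.83, 0.86])

t_stat, p_value = ttest_rel(dice_finetuned, dice_base)   # paired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```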

14.
Article in English | MEDLINE | ID: mdl-35784009

ABSTRACT

We designed a compact, real-time LED-based endoscopic imaging system for the detection of various diseases, including cancer. In gastrointestinal applications, conventional endoscopy cannot reliably differentiate tumor from normal tissue, and current hyperspectral imaging systems are too slow for real-time endoscopic use. We are investigating real-time spectral imaging for different tissue types. Our objective is to develop a catheter for real-time hyperspectral gastrointestinal endoscopy. The endoscope uses multiple wavelengths within the UV, visible, and IR spectra generated by a micro-LED array. We capture images with a monochrome micro camera, which is cost-effective and smaller than current hyperspectral imagers. A wireless transceiver sends the captured images to a workstation for further processing, such as tumor detection. The spatial resolution of the system is defined by the camera resolution and the distance to the object, while the number of LEDs in the multi-wavelength light source determines the spectral resolution. To investigate the properties and limitations of our high-speed spectral imaging approach, we designed a prototype system. We conducted two experiments to measure the optimal forward voltages and lighting durations of the LEDs, factors that affect the maximum feasible imaging rate and resolution. The lighting duration of each LED can be shorter than 10 ms while producing an image with a high signal-to-noise ratio and no illumination interference. These results support the idea of using a high-speed camera and an LED array for real-time hyperspectral endoscopic imaging.
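Since the wavelengths are illuminated sequentially, the acquisition time for one spectral cube scales with the number of LEDs times the per-LED lighting duration; the short sketch below illustrates the arithmetic with a hypothetical LED count (the abstract does not state how many LEDs the array contains).

```python
# Back-of-the-envelope timing for LED-sequenced hyperspectral capture (illustrative only)
n_leds = 16                 # hypothetical number of wavelengths in the micro-LED array
exposure_s = 0.010          # <= 10 ms lighting duration per LED, per the abstract

cube_time_s = n_leds * exposure_s
print(f"One spectral cube in {cube_time_s * 1000:.0f} ms "
      f"-> about {1 / cube_time_s:.1f} cubes per second")
```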

15.
Article in English | MEDLINE | ID: mdl-35784397

ABSTRACT

A deep learning (DL)-based segmentation tool was applied to a new magnetic resonance imaging dataset of pregnant women with suspected placenta accreta spectrum (PAS). Radiomic features from the DL segmentation were compared to those from expert manual segmentation via intraclass correlation coefficients (ICC) to assess reproducibility. An additional imaging marker quantifying the placental location within the uterus (PLU) was included. Features with an ICC > 0.7 were used to build logistic regression models to predict hysterectomy. Of 2059 features, 781 (37.9%) had an ICC > 0.7. The AUC was 0.69 (95% CI 0.63-0.74) for manually segmented data and 0.78 (95% CI 0.73-0.83) for DL-segmented data.
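The feature-filtering and modeling step described above can be prototyped as in the sketch below (scikit-learn, with the per-feature ICC values assumed to be precomputed; an illustrative sketch, not the study's exact pipeline).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def auc_from_reproducible_features(X, y, icc, icc_threshold=0.7):
    """X: (n_patients, n_features) radiomic matrix; y: hysterectomy labels (0/1);
    icc: per-feature ICC between manual and DL segmentations (precomputed)."""
    X_kept = X[:, icc > icc_threshold]                 # keep only reproducible features
    model = LogisticRegression(max_iter=1000)
    probs = cross_val_predict(model, X_kept, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)
```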

16.
Article in English | MEDLINE | ID: mdl-35177877

ABSTRACT

Cardiac catheterization is a delicate procedure used in a variety of heart interventions. However, it carries a myriad of risks, including damage to the vessel or the heart itself, blood clots, and arrhythmias. Many of these risks become more likely as the length of the operation increases, creating demand for procedures that are both more accurate and shorter overall. To this end, we developed an adaptable virtual reality simulation and visualization method to provide essential information to the physician ahead of time, with the goals of reducing potential risks, decreasing operation time, and improving the accuracy of cardiac catheterization procedures. We additionally conducted a phantom study to evaluate the impact of using our virtual reality system prior to a procedure.

17.
Article in English | MEDLINE | ID: mdl-32606488

ABSTRACT

Wearable augmented reality (AR) is an emerging technology with enormous potential for use in the medical field, from training and procedure simulation to image-guided surgery. Medical AR seeks to enable surgeons to see tissue segmentations in real time. With the objective of achieving real-time guidance, the emphasis on speed creates the need for fast imaging and classification. Hyperspectral imaging (HSI) is a non-contact optical imaging modality that rapidly acquires hundreds of images of tissue at different wavelengths, which can be used to generate spectral data of the tissue. Combining HSI information with machine-learning algorithms allows for effective tissue classification. In this paper, we constructed a brain tissue phantom with porcine blood, yellow-dyed gelatin, and colorless gelatin to represent blood vessels, tumor, and normal brain tissue, respectively. Hundreds of hyperspectral images were compiled, and a segmentation algorithm was used to classify each pixel. Three segmentation labels were generated from the data, each corresponding to a different tissue type. Our system virtually superimposes the HSI channels and segmentation labels of the brain tumor phantom onto the real scene using the HoloLens AR headset. The user can manipulate and interact with the segmentation labels and HSI channels by repositioning, rotating, changing visibility, and switching between them. All actions can be performed through either hand or voice controls. This creates a convenient and multifaceted visualization of brain tissue in real time with minimal user restrictions. We demonstrate the feasibility of a fast and practical HSI-AR technique for potential use in image-guided brain surgery.
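The pixel-wise tissue classification step could be prototyped as below (a minimal scikit-learn sketch operating on a hyperspectral cube with hypothetical labeled training spectra; the abstract does not specify which classifier the system uses).

```python
import numpy as np
from sklearn.svm import SVC

def classify_hypercube(cube, train_spectra, train_labels):
    """Per-pixel tissue classification of a hyperspectral cube of shape (H, W, bands).
    train_spectra: (n_samples, bands) spectra with known labels (e.g., vessel/tumor/normal)."""
    h, w, bands = cube.shape
    clf = SVC(kernel="rbf").fit(train_spectra, train_labels)
    labels = clf.predict(cube.reshape(-1, bands))   # classify every pixel's spectrum
    return labels.reshape(h, w)
```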

18.
Article in English | MEDLINE | ID: mdl-32528216

ABSTRACT

Guided biopsy of soft tissue lesions can be challenging in the presence of sensitive organs or when the lesion itself is small. Computed tomography (CT) is the most frequently used modality to target soft tissue lesions. To aid physicians, small field-of-view (FOV), low-dose, non-contrast CT volumes are acquired prior to intervention, while the patient is on the procedure table, to localize the lesion and plan the best approach. However, patient motion between the end of the scan and the start of the biopsy can make it difficult for a physician to translate the lesion location from the CT onto the patient's body, especially for a deep-seated lesion. In addition, the needle must follow a well-planned three-dimensional trajectory to reach the lesion and avoid vital structures, which is especially challenging for less experienced interventionists. These difficulties usually result in multiple additional image acquisitions during the procedure to ensure accurate needle placement, especially when multiple core biopsies are required. In this work, we present an augmented reality (AR)-guided biopsy system and procedure for soft tissue and lung lesions and quantify the results using a phantom study. For soft tissue lesions, we found an average error of 0.75 cm from the center of the lesion with AR guidance, compared to 1.52 cm for unguided biopsy; for lung lesions, the average error was 0.62 cm from the center of the tumor with AR guidance versus 1.12 cm for unguided biopsies. The AR-guided system improves accuracy and could be useful in clinical applications.

19.
Article in English | MEDLINE | ID: mdl-32476701

ABSTRACT

Computer-assisted image segmentation techniques can help clinicians perform border delineation faster and with lower inter-observer variability. Recently, convolutional neural networks (CNNs) have been widely used for automatic image segmentation. In this study, we used a technique that incorporates observer inputs to supervise CNNs and improve segmentation performance. We added a set of sparse surface points as an additional input to supervise the CNNs for more accurate image segmentation. We tested our technique by applying minimal interactions to supervise the networks for segmentation of the prostate on magnetic resonance images. We used U-Net and a new U-Net-based architecture (dual-input path [DIP] U-Net), and showed that our supervising technique could significantly increase the segmentation accuracy of both networks compared to fully automatic segmentation using U-Net. We also showed that DIP U-Net outperformed U-Net for supervised image segmentation. We compared our results to the measured inter-expert observer difference in manual segmentation. This comparison suggests that applying about 15 to 20 selected surface points can achieve performance comparable to manual segmentation.

20.
Article in English | MEDLINE | ID: mdl-32476702

ABSTRACT

Segmentation of the uterine cavity and placenta in fetal magnetic resonance (MR) imaging is useful for the detection of abnormalities that affect maternal and fetal health. In this study, we used a fully convolutional neural network for 3D segmentation of the uterine cavity and placenta, with minimal operator interaction incorporated for training and testing the network. The user interaction guided the network to localize the placenta more accurately. We trained the network with 70 training and 10 validation MRI cases and evaluated its segmentation performance on 20 test cases. The average Dice similarity coefficient was 92% and 82% for the uterine cavity and placenta, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of 2% and 9%, respectively. The results demonstrate that deep learning-based segmentation and volume estimation are feasible and can potentially be useful for clinical applications of human placental imaging.
