1.
Med Phys ; 51(3): 2044-2056, 2024 Mar.
Article En | MEDLINE | ID: mdl-37708456

BACKGROUND: Ultrasound (US) has been demonstrated to be an effective guidance technique for lumbar spine injections, enabling precise needle placement without exposing the surgeon or the patient to ionizing radiation. However, noise and acoustic shadowing artifacts make US data interpretation challenging. To mitigate these problems, many authors have suggested using computed tomography (CT)-to-US registration to align the spine in pre-operative CT to intra-operative US data, thus providing localization of spinal landmarks. PURPOSE: In this paper, we propose a deep learning (DL) pipeline for CT-to-US registration and address the scarcity of annotated medical data for network training. First, we design a data generation method that produces paired CT-US data in which the spine is deformed in a physically consistent manner. Second, we train a point cloud (PC) registration network using anatomy-aware losses to enforce anatomically consistent predictions. METHODS: Our proposed pipeline relies on training the network on realistically generated data. In our data generation method, we model the properties of the joints and disks between vertebrae based on biomechanical measurements from previous studies. We simulate the deformation between the supine and prone positions by applying forces on the spine models. We use spine models from 35 patients in the VerSe dataset. Each spine is deformed 10 times to create noise-free data with ground-truth segmentations at hand. In our experiments, we use a leave-one-out cross-validation strategy to measure the performance and stability of the proposed method. For each experiment, we choose generated PCs from three spines as the test set; data from three further spines act as the validation set, and the rest of the data are used for training. To train our network, we introduce anatomy-aware losses and constraints on the movement to match the physics of the spine, namely, a rigidity loss and a biomechanical loss.
First, we define the rigidity loss based on the fact that each vertebra can only transform rigidly, while the disks and the surrounding tissue are deformable. Second, the biomechanical loss stops the network from inferring extreme movements by penalizing the force needed to reach a given pose. RESULTS: To validate the effectiveness of our fully automated data generation pipeline, we qualitatively assess the fidelity of the generated data. This assessment involves verifying the realism of the spinal deformation and subsequently confirming the plausibility of the simulated ultrasound images. Next, we demonstrate that introducing the anatomy-aware losses brings us closer to the state of the art (SOTA) and yields a reduction of 0.25 mm in target registration error (TRE) compared to using only a mean squared error (MSE) loss on the generated dataset. Furthermore, when the proposed losses are used, the rigidity loss at inference decreases, which shows that the inferred deformation respects the rigidity of the vertebrae and only introduces deformations in the soft tissue area to compensate for the difference to the target PC. We also show that our results are close to the SOTA on the simulated US dataset, with TREs of 3.89 mm and 3.63 mm for the proposed method and the SOTA, respectively. In addition, we show that our method is more robust against initialization errors than the SOTA and achieves significantly better results (TRE of 4.88 mm compared to 5.66 mm) in this experiment. CONCLUSIONS: In conclusion, we present a pipeline for spine CT-to-US registration and explore the potential benefits of utilizing anatomy-aware losses to enhance registration results. Additionally, we propose a fully automatic method to synthesize paired CT-US data with physically consistent deformations, which offers the opportunity to generate extensive datasets for network training.
The generated dataset and the source code for data generation and registration pipeline can be accessed via https://github.com/mfazampour/medphys_ct_us_registration.
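The rigidity loss described above can be sketched as fitting the best rigid (Kabsch/Procrustes) transform per vertebra and penalizing the residual. This is a minimal NumPy illustration, not the authors' implementation; the per-vertebra point grouping and squared-residual form are assumptions:

```python
import numpy as np

def rigidity_residual(src, deformed):
    """Residual after fitting the best rigid transform (Kabsch/Procrustes)
    mapping the original vertebra points `src` to their `deformed` positions.
    A truly rigid motion leaves a ~0 residual; soft-tissue-like deformation
    leaves a positive residual that a rigidity loss could penalize."""
    src_c = src - src.mean(axis=0)
    def_c = deformed - deformed.mean(axis=0)
    U, _, Vt = np.linalg.svd(def_c.T @ src_c)          # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))                  # guard against reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt                 # optimal rotation src -> deformed
    residual = def_c - src_c @ R.T
    return float(np.mean(np.sum(residual**2, axis=1)))  # mean squared residual

# A rigid motion of a vertebra should incur (near-)zero rigidity penalty.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                          # one vertebra's points
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rigid = pts @ Rz.T + np.array([1.0, 2.0, 3.0])
print(rigidity_residual(pts, rigid))                    # ~0 for a rigid transform
```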


Spine , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Spine/diagnostic imaging , Algorithms , Lumbar Vertebrae , Software , Radiation, Ionizing , Image Processing, Computer-Assisted/methods
2.
J Orthop Res ; 42(4): 729-736, 2024 Apr.
Article En | MEDLINE | ID: mdl-37874323

This study aimed to create a conversion equation that accurately predicts cartilage magnetic resonance imaging (MRI) T2 relaxation times using ultrasound echo-intensity and common participant demographics. We recruited 15 participants with a primary anterior cruciate ligament reconstruction (ACLR) between the ages of 18 and 35 years at 1-5 years after surgery. A single investigator completed a transverse suprapatellar scan with the ACLR limb in maximum knee flexion to image the femoral trochlea cartilage. A single reader manually segmented the femoral cartilage cross-sectional area to assess the echo-intensity (i.e., mean gray-scale pixel value). At a separate visit, a T2 mapping sequence with the MRI beam set to an oblique angle was used to image the femoral trochlea cartilage. A single reader manually segmented the cartilage cross-sectional area on a single MRI slice to assess the T2 relaxation time. A stepwise multiple linear regression was used to predict T2 relaxation time from cartilage echo-intensity and common demographic variables. We created a conversion equation using the regression betas and then used an intraclass correlation coefficient (ICC) and a Bland-Altman plot to assess agreement between the estimated and true T2 relaxation times. Cartilage ultrasound echo-intensity and age significantly predicted T2 relaxation time (F = 7.33, p = 0.008, R2 = 0.55). When using the new conversion equation to estimate T2 relaxation time from cartilage echo-intensity and age, there was strong agreement between the estimated and true T2 relaxation times (ICC2,k = 0.84). This study provides promising preliminary data that cartilage echo-intensity combined with age can be used as a clinically accessible tool for evaluating cartilage composition.
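The conversion-equation idea can be sketched as an ordinary-least-squares fit of T2 on echo-intensity and age. All data and coefficients below are synthetic illustrations, not the study's actual betas:

```python
import numpy as np

# Hypothetical illustration: fit T2 = b0 + b1*echo_intensity + b2*age by
# ordinary least squares, mirroring a regression-derived conversion equation.
rng = np.random.default_rng(1)
n = 15
echo = rng.uniform(30, 90, n)            # mean gray-scale pixel value (synthetic)
age = rng.uniform(18, 35, n)             # years (synthetic)
true_b = np.array([20.0, 0.25, 0.4])     # made-up coefficients
t2 = true_b[0] + true_b[1] * echo + true_b[2] * age + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), echo, age])
beta, *_ = np.linalg.lstsq(X, t2, rcond=None)
t2_hat = X @ beta                        # T2 estimated by the conversion equation

# R^2 of the fitted conversion equation
ss_res = np.sum((t2 - t2_hat) ** 2)
ss_tot = np.sum((t2 - t2.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(beta.round(2), round(r2, 3))
```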


Anterior Cruciate Ligament Injuries , Anterior Cruciate Ligament Reconstruction , Cartilage, Articular , Humans , Adolescent , Young Adult , Adult , Knee Joint/pathology , Cartilage, Articular/pathology , Femur/diagnostic imaging , Femur/surgery , Anterior Cruciate Ligament Injuries/surgery , Anterior Cruciate Ligament Reconstruction/methods , Magnetic Resonance Imaging/methods
3.
Med Image Anal ; 88: 102846, 2023 08.
Article En | MEDLINE | ID: mdl-37295311

Denoising diffusion models, a class of generative models, have garnered immense interest lately in various deep-learning problems. A diffusion probabilistic model defines a forward diffusion stage where the input data is gradually perturbed over several steps by adding Gaussian noise and then learns to reverse the diffusion process to retrieve the desired noise-free data from noisy data samples. Diffusion models are widely appreciated for their strong mode coverage and quality of the generated samples in spite of their known computational burdens. Capitalizing on the advances in computer vision, the field of medical imaging has also observed a growing interest in diffusion models. With the aim of helping the researcher navigate this profusion, this survey intends to provide a comprehensive overview of diffusion models in the discipline of medical imaging. Specifically, we start with an introduction to the solid theoretical foundation and fundamental concepts behind diffusion models and the three generic diffusion modeling frameworks, namely, diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. Then, we provide a systematic taxonomy of diffusion models in the medical domain and propose a multi-perspective categorization based on their application, imaging modality, organ of interest, and algorithms. To this end, we cover extensive applications of diffusion models in the medical domain, including image-to-image translation, reconstruction, registration, classification, segmentation, denoising, 2/3D generation, anomaly detection, and other medically-related challenges. Furthermore, we emphasize the practical use case of some selected approaches, and then we discuss the limitations of the diffusion models in the medical domain and propose several directions to fulfill the demands of this field. 
Finally, we gather the reviewed studies with their available open-source implementations in our GitHub repository, which we aim to update regularly with the latest relevant papers.
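The forward diffusion stage described above has a standard closed form, q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I). A minimal NumPy sketch with an assumed linear noise schedule:

```python
import numpy as np

# Closed-form forward diffusion: x_t = sqrt(alpha_bar_t)*x0 + sqrt(1-alpha_bar_t)*eps,
# with eps ~ N(0, I) and alpha_bar_t the cumulative product of (1 - beta_t).
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64))        # stand-in for an image
x_mid = q_sample(x0, 500, rng)            # partially perturbed sample
x_end = q_sample(x0, T - 1, rng)
# By t = T-1, alpha_bar is ~0 and x_t is essentially pure Gaussian noise,
# which is what the learned reverse process starts from.
print(round(float(alpha_bar[-1]), 6))
```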


Diagnostic Imaging , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Models, Statistical
4.
Article En | MEDLINE | ID: mdl-37028313

Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US is often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the imaging modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep learning algorithms in applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.


Deep Learning , Machine Learning , Image Processing, Computer-Assisted/methods , Algorithms , Ultrasonography, Interventional
5.
Neurophotonics ; 9(2): 025002, 2022 Apr.
Article En | MEDLINE | ID: mdl-35651869

Significance: The interaction of neurons with their extracellular environment and the mechanical forces at focal adhesions and synaptic junctions play important roles in neuronal development. Aim: To advance studies of mechanotransduction, we demonstrate the use of the vinculin tension sensor (VinTS) in primary cultures of cortical neurons. VinTS consists of the tension sensor module (TSMod), a Förster resonance energy transfer (FRET)-based tension sensor, inserted between vinculin's head and tail. FRET efficiency decreases with increased tension across vinculin. Approach: Primary cortical neurons cultured on glass coverslips coated with poly-d-lysine and laminin were transfected with plasmids encoding untargeted TSMod, VinTS, or tail-less vinculinTS (VinTL) lacking the actin-binding domain. The neurons were imaged between day in vitro (DIV) 5 and 8. We detail the image processing steps for calculation of FRET efficiency and use this system to investigate the expression and FRET efficiency of VinTS in growth cones. Results: The distribution of fluorescent constructs was similar within growth cones at DIV 5 to 8. The mean FRET efficiency of TSMod (28.5 ± 3.6%) in growth cones was higher than the mean FRET efficiency of VinTS (24.6 ± 2%) and VinTL (25.8 ± 1.8%) (p < 10⁻⁶). While small, the difference between the FRET efficiency of VinTS and VinTL was statistically significant (p < 10⁻³), suggesting that vinculin is under low tension in growth cones. Two-hour treatment with the Rho-associated kinase inhibitor Y-27632 did not affect the mean FRET efficiency. Growth cones exhibited dynamic changes in morphology as observed by time-lapse imaging. VinTS FRET efficiency showed greater variance than TSMod FRET efficiency as a function of time, suggesting a greater dependence of VinTS FRET efficiency on growth cone dynamics compared with TSMod.
Conclusions: The results demonstrate the feasibility of using VinTS to probe the function of vinculin in neuronal growth cones and provide a foundation for studies of mechanotransduction in neurons using this tension probe.
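As a rough illustration of a ratiometric FRET readout, the sketch below computes a simplified pixelwise index E = I_A / (I_A + I_D) over a growth-cone mask. This is not the paper's calibrated FRET-efficiency pipeline, and all intensities are synthetic; it only shows the direction of the effect (higher tension lowers FRET, lowering the index):

```python
import numpy as np

def mean_fret_index(donor, acceptor, mask):
    """Mean ratiometric FRET index E = I_A / (I_A + I_D) within a ROI mask.
    A simplified stand-in for calibrated FRET efficiency."""
    d = donor[mask].astype(float)
    a = acceptor[mask].astype(float)
    return float((a / (a + d)).mean())

rng = np.random.default_rng(0)
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True                          # growth-cone-like ROI
donor = rng.uniform(100, 110, (32, 32))
acceptor_hi = rng.uniform(100, 110, (32, 32))    # ~0.5 index: low tension
acceptor_lo = rng.uniform(30, 40, (32, 32))      # lower index: higher tension
print(round(mean_fret_index(donor, acceptor_hi, mask), 2))
print(round(mean_fret_index(donor, acceptor_lo, mask), 2))
```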

6.
Cartilage ; 13(2): 19476035221093069, 2022.
Article En | MEDLINE | ID: mdl-35438030

OBJECTIVE: To validate a semi-automated technique for segmenting ultrasound-assessed femoral cartilage, without compromising accuracy relative to a traditional manual segmentation technique, in participants with an anterior cruciate ligament (ACL) injury. DESIGN: We recruited 27 participants with a primary unilateral ACL injury at a pre-operative clinic visit. One investigator performed a transverse suprapatellar ultrasound scan with the participant's ACL-injured knee in maximum flexion. Three femoral cartilage ultrasound images were recorded. A single expert reader manually segmented the femoral cartilage cross-sectional area in each image. In addition, we created a semi-automatic program to segment the cartilage using a random walker-based method. We quantified the average cartilage thickness and echo-intensity for the manual and semi-automated segmentations. Intraclass correlation coefficients (ICC2,k) and Bland-Altman plots were used to validate the semi-automated technique against manual segmentation for assessing average cartilage thickness and echo-intensity. A Dice coefficient was used to quantify the overlap between the segmentations created with the semi-automated and manual techniques. RESULTS: For average cartilage thickness, there was excellent reliability (ICC2,k = 0.99) and a small mean difference (+0.8%) between the manual and semi-automated segmentations. For average echo-intensity, there was excellent reliability (ICC2,k = 0.97) and a small mean difference (-2.5%) between the manual and semi-automated segmentations. The average Dice coefficient between the manual and semi-automated segmentations was 0.90, indicating high overlap between techniques. CONCLUSIONS: Our novel semi-automated segmentation technique is a valid method that requires less technical expertise and time than manual segmentation in patients after ACL injury.
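The two outcome measures compared between the manual and semi-automated segmentations can be sketched from a binary mask as follows. Function names, the column-wise thickness definition, and the pixel spacing are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cartilage_metrics(image, mask, mm_per_px):
    """Average thickness (column-wise mask height x pixel spacing, mm) and
    echo-intensity (mean gray level inside the mask) of a cartilage mask."""
    cols = mask.sum(axis=0)                      # cartilage pixels per image column
    thickness_mm = float(cols[cols > 0].mean()) * mm_per_px
    echo_intensity = float(image[mask].mean())   # mean gray-scale value
    return thickness_mm, echo_intensity

# Toy image: a uniform 8-pixel-thick cartilage band of gray level 90.
img = np.full((100, 50), 60.0)
mask = np.zeros((100, 50), bool)
mask[40:48, :] = True
img[mask] = 90.0
t, e = cartilage_metrics(img, mask, mm_per_px=0.1)
print(round(t, 2), round(e, 1))                  # 0.8 mm thick, echo-intensity 90.0
```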


Anterior Cruciate Ligament Injuries , Cartilage, Articular , Anterior Cruciate Ligament Injuries/diagnostic imaging , Cartilage, Articular/diagnostic imaging , Cartilage, Articular/injuries , Humans , Knee Joint/diagnostic imaging , Reproducibility of Results , Ultrasonography
7.
IEEE Trans Med Imaging ; 41(4): 965-976, 2022 04.
Article En | MEDLINE | ID: mdl-34813472

Most methods for medical image segmentation use U-Net or its variants, as they have been successful in most applications. After a detailed analysis of these "traditional" encoder-decoder based approaches, we observed that they perform poorly in detecting smaller structures and are unable to segment boundary regions precisely. This issue can be attributed to the increase in receptive field size as we go deeper into the encoder. The extra focus on learning high-level features causes U-Net based approaches to learn less information about low-level features, which are crucial for detecting small structures. To overcome this issue, we propose using an overcomplete convolutional architecture where we project the input image into a higher dimension such that we constrain the receptive field from increasing in the deep layers of the network. We design a new architecture for image segmentation, KiU-Net, which has two branches: (1) an overcomplete convolutional network, Kite-Net, which learns to capture fine details and accurate edges of the input, and (2) U-Net, which learns high-level features. Furthermore, we also propose KiU-Net 3D, a 3D convolutional architecture for volumetric segmentation. We perform a detailed study of KiU-Net through experiments on five different datasets covering various image modalities. We achieve good performance with the additional benefits of fewer parameters and faster convergence. We also demonstrate that extensions of KiU-Net based on residual blocks and dense blocks result in further performance improvements. Code: https://github.com/jeya-maria-jose/KiU-Net-pytorch.
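The receptive-field argument motivating the overcomplete branch can be illustrated with simple receptive-field arithmetic, treating 2x upsampling as a stride of 1/2. The layer configurations below are assumptions for illustration, not KiU-Net's exact architecture:

```python
# Receptive field (RF) of a layer stack grows with the cumulative stride
# ("jump"): each kernel adds (k-1)*jump, and each stride multiplies the jump.
# Max-pooling (stride 2) makes the RF explode in a U-Net-style encoder;
# upsampling (stride 1/2, as in an overcomplete Kite-Net-style branch)
# shrinks the jump and keeps the RF small, preserving fine detail.
def receptive_field(layers):
    rf, jump = 1, 1.0
    for kernel, stride in layers:
        rf = rf + (kernel - 1) * jump
        jump *= stride
    return rf

# Assumed encoder: (3x3 conv, then 2x2 pool with stride 2), three times.
unet_like = [(3, 1), (2, 2)] * 3
# Assumed overcomplete branch: (3x3 conv, then 2x upsample), three times.
kite_like = [(3, 1), (2, 0.5)] * 3
print(receptive_field(unet_like), receptive_field(kite_like))
```

The downsampling stack's receptive field is several times larger than the overcomplete stack's, which is the stated reason small structures get lost in deep encoder layers.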


Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2618-2621, 2021 11.
Article En | MEDLINE | ID: mdl-34891790

The global pandemic of the novel coronavirus disease 2019 (COVID-19) has put tremendous pressure on the medical system. Imaging plays a complementary role in the management of patients with COVID-19. Computed tomography (CT) and chest X-ray (CXR) are the two dominant screening tools. However, the difficulty of eliminating the risk of disease transmission, radiation exposure, and limited cost-effectiveness are challenges for CT and CXR imaging. These facts motivate the use of lung ultrasound (LUS) for evaluating COVID-19 due to its practical advantages of noninvasiveness, repeatability, and bedside availability. In this paper, we utilize a deep learning model to perform classification of COVID-19 from LUS data, which could produce objective diagnostic information for clinicians. Specifically, all LUS images are processed to obtain their corresponding local phase filtered images and radial symmetry transformed images before being fed into a multi-scale residual convolutional neural network (CNN). Combinations of these images are used as network input to explore rich and reliable features. Feature fusion at different levels is adopted to investigate the relationship between the depth of feature aggregation and classification accuracy. Our proposed method is evaluated on the point-of-care US (POCUS) dataset together with the Italian COVID-19 Lung US database (ICLUS-DB) and shows promising performance for COVID-19 prediction.


COVID-19 , Humans , Lung/diagnostic imaging , Neural Networks, Computer , SARS-CoV-2
9.
Int J Comput Assist Radiol Surg ; 16(9): 1537-1548, 2021 Sep.
Article En | MEDLINE | ID: mdl-34097226

PURPOSE: Ultrasound (US) is the preferred modality for fatty liver disease diagnosis due to its noninvasive, real-time, and cost-effective imaging capabilities. However, traditional B-mode US is qualitative, and therefore, the assessment is very subjective. Computer-aided diagnostic tools can improve the specificity and sensitivity of US and help clinicians to perform uniform diagnoses. METHODS: In this work, we propose a novel deep learning model for nonalcoholic fatty liver disease classification from US data. We design a multi-feature guided multi-scale residual convolutional neural network (CNN) architecture to capture features of different receptive fields. B-mode US images are combined with their corresponding local phase filtered images and radial symmetry transformed images as multi-feature inputs for the network. Various fusion strategies are studied to improve prediction accuracy. We evaluate the designed network architectures on B-mode in vivo liver US images collected from 55 subjects. We also provide quantitative results by comparing our proposed multi-feature CNN architecture against traditional CNN designs and machine learning methods. RESULTS: Quantitative results show an average classification accuracy above 90% over tenfold cross-validation. Our proposed method achieves a 97.8% area under the ROC curve (AUC) for the patient-specific leave-one-out cross-validation (LOOCV) evaluation. Comprehensive validation results further demonstrate that our proposed approaches achieve significant improvements compared to training mono-feature CNN architectures ([Formula: see text]). CONCLUSIONS: Feature combination is valuable for the traditional classification methods, and the use of multi-scale CNN can improve liver classification accuracy. Based on the promising performance, the proposed method has the potential in practical applications to help radiologists diagnose nonalcoholic fatty liver disease.
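The AUC reported for the leave-one-out evaluation can be computed with the rank-based (Mann-Whitney) formulation. A minimal sketch on toy labels and scores (ties ignored for brevity):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    AUC = (sum of positive ranks - n_pos*(n_pos+1)/2) / (n_pos * n_neg).
    Assumes no tied scores, for brevity."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)     # 1-based ranks by score
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy binary labels (1 = fatty liver) and classifier scores.
y = [0, 0, 1, 1, 1, 0]
s = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(auc(y, s))
```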


Liver Diseases , Neural Networks, Computer , Humans , Liver Diseases/diagnostic imaging , Machine Learning , Ultrasonography
10.
Front Bioeng Biotechnol ; 9: 678048, 2021.
Article En | MEDLINE | ID: mdl-34178967

The association between blood viscosity and pathological conditions involving a number of organ systems is well known. However, how the body measures and maintains appropriate blood viscosity is not well-described. The literature endorsing the function of the carotid sinus as a site of baroreception can be traced back to some of the earliest descriptions of digital pressure on the neck producing a drop in blood delivery to the brain. For the last 30 years, improved computational fluid dynamic (CFD) simulations of blood flow within the carotid sinus have demonstrated a more nuanced understanding of the changes in the region as it relates to changes in conventional metrics of cardiovascular function, including blood pressure. We suggest that the unique flow patterns within the carotid sinus may make it an ideal site to transduce flow data that can, in turn, enable real-time measurement of blood viscosity. The recent characterization of the PIEZO receptor family in the sinus vessel wall may provide a biological basis for this characterization. When coupled with other biomarkers of cardiovascular performance and descriptions of the blood rheology unique to the sinus region, this represents a novel venue for bioinspired design that may enable end-users to manipulate and optimize blood flow.

11.
Int J Comput Assist Radiol Surg ; 16(5): 819-827, 2021 May.
Article En | MEDLINE | ID: mdl-33840037

PURPOSE: Accurate placement of the needle is critical in interventions like biopsies and regional anesthesia, during which incorrect needle insertion can lead to procedure failure and complications. Therefore, ultrasound guidance is widely used to improve needle placement accuracy. However, at steep and deep insertions, the visibility of the needle is lost. Computational methods for automatic needle tip localization could improve the clinical success rate in these scenarios. METHODS: We propose a novel algorithm for needle tip localization during challenging ultrasound-guided insertions in which the shaft may be invisible and the tip has low intensity. There are two key steps in our approach. First, we enhance the needle tip features in consecutive ultrasound frames using a detection scheme which recognizes subtle intensity variations caused by needle tip movement. We then employ a hybrid deep neural network comprising a convolutional neural network and long short-term memory recurrent units. The input to the network is a sequence of consecutive fused enhanced frames together with the corresponding original B-mode frames, and this spatiotemporal information is used to predict the needle tip location. RESULTS: We evaluate our approach on an ex vivo dataset collected with in-plane and out-of-plane insertion of 17G and 22G needles in bovine, porcine, and chicken tissue, acquired using two different ultrasound systems. We train the model with 5000 frames from 42 video sequences. Evaluation on 600 frames from 30 sequences yields a tip localization error of [Formula: see text] mm and an overall inference time of 0.064 s (15 fps). Comparison against prior art on challenging datasets reveals a 30% improvement in tip localization accuracy. CONCLUSION: The proposed method automatically models the temporal dynamics associated with needle tip motion and is more accurate than state-of-the-art methods.
Therefore, it has the potential for improving needle tip localization in challenging ultrasound-guided interventions.
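The first step, enhancing the subtle intensity variations caused by tip movement, can be approximated by simple consecutive-frame differencing. This is a hedged toy sketch; the paper's actual enhancement scheme and CNN-LSTM predictor are not reproduced here:

```python
import numpy as np

def enhance_tip(frames):
    """Enhance moving low-intensity structures by differencing consecutive
    frames and keeping only brightening pixels (a crude motion detector)."""
    diffs = np.diff(frames.astype(float), axis=0)   # frame-to-frame change
    return np.clip(diffs, 0, None)                  # keep positive changes only

# Toy sequence: a faint tip appears and moves slightly between frames.
frames = np.zeros((3, 64, 64))
frames[1, 30, 40] = 50.0        # tip visible at (30, 40) in frame 1
frames[2, 31, 42] = 55.0        # tip moved to (31, 42) in frame 2
enh = enhance_tip(frames)
tip = np.unravel_index(np.argmax(enh[1]), enh[1].shape)
print(tip)                      # brightest enhanced pixel tracks the moving tip
```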


Motion , Neural Networks, Computer , Surgery, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Ultrasonography/methods , Algorithms , Animals , Artifacts , Biopsy , Cattle , Chickens , Needles , Reproducibility of Results , Swine
12.
Int J Comput Assist Radiol Surg ; 16(2): 197-206, 2021 Feb.
Article En | MEDLINE | ID: mdl-33420641

PURPOSE: The outbreak of the novel coronavirus disease 2019 (COVID-19) pandemic has seriously endangered human health and life. In fighting COVID-19, effective diagnosis of infected patients is critical for preventing the spread of the disease. Due to the limited availability of test kits, the need for auxiliary diagnostic approaches has increased. Recent research has shown that radiographs of COVID-19 patients, such as CT and X-ray images, contain salient information about the virus and could be used as an alternative diagnostic method. Chest X-ray (CXR), owing to its fast imaging time, wide availability, low cost, and portability, has gained much attention and become very promising. To reduce intra- and inter-observer variability during radiological assessment, computer-aided diagnostic tools have been used to supplement medical decision making and subsequent management. Computational methods with high accuracy and robustness are required for rapid triaging of patients and for aiding radiologists in the interpretation of the collected data. METHOD: In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for improved multi-class classification of COVID-19 from CXR images. CXR images are enhanced using a local phase-based image enhancement method. The enhanced images, together with the original CXR data, are used as input to our proposed CNN architecture. Using ablation studies, we show the effectiveness of the enhanced images in improving diagnostic accuracy. We provide quantitative evaluation on two datasets and qualitative results for visual inspection. Quantitative evaluation is performed on data consisting of 8851 normal (healthy), 6045 pneumonia, and 3323 COVID-19 CXR scans. RESULTS: On Dataset-1, our model achieves 95.57% average accuracy for three-class classification and 99% precision, recall, and F1-scores for COVID-19 cases.
On Dataset-2, we obtain 94.44% average accuracy and 95% precision, recall, and F1-scores for detection of COVID-19. CONCLUSIONS: Our proposed multi-feature-guided CNN achieves improved results compared to a single-feature CNN, demonstrating the importance of the local phase-based CXR image enhancement. Future work will involve further evaluation of the proposed method on larger COVID-19 datasets as they become available.
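The per-class precision, recall, and F1 metrics reported above can be sketched directly from predicted and true labels. The labels below are toy values for illustration:

```python
import numpy as np

def prf1(y_true, y_pred, positive):
    """Precision, recall, and F1 for one class in a multi-class problem,
    computed one-vs-rest from true/false positive and false negative counts."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy classes: 0 = normal, 1 = pneumonia, 2 = COVID-19
y_true = [2, 2, 2, 2, 0, 1, 0, 1]
y_pred = [2, 2, 2, 0, 0, 1, 2, 1]
p, r, f = prf1(y_true, y_pred, positive=2)
print(round(p, 2), round(r, 2), round(f, 2))
```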


COVID-19/diagnostic imaging , Neural Networks, Computer , Pneumonia/diagnostic imaging , Radiography, Thoracic/methods , Thorax/diagnostic imaging , Algorithms , Deep Learning , Humans , Pandemics , Tomography, X-Ray Computed/methods
13.
Int J Comput Assist Radiol Surg ; 15(9): 1477-1485, 2020 Sep.
Article En | MEDLINE | ID: mdl-32656685

PURPOSE: Real-time two-dimensional (2D) and three-dimensional (3D) ultrasound (US) has been investigated as a potential alternative to fluoroscopy imaging in various surgical and non-surgical orthopedic procedures. However, low signal-to-noise ratio, imaging artifacts, and bone surfaces appearing several millimeters (mm) in thickness have hindered the widespread adoption of this safe imaging modality. A limited field of view and manual data collection cause additional problems during US-based orthopedic procedures. To overcome these limitations, various bone segmentation and registration methods have been developed. The acoustic bone shadow is an important image artifact used to identify the presence of bone boundaries in collected US data. Information about the bone shadow region can be used (1) to guide the orthopedic surgeon or clinician to a standardized diagnostic viewing plane with minimal artifacts and (2) as a prior feature to improve bone segmentation and registration. METHOD: In this work, we propose a computational method, based on a novel generative adversarial network (GAN) architecture, to segment bone shadow images from in vivo US scans in real time. We also show how these segmented shadow images can be incorporated, as a proxy, into a multi-feature guided convolutional neural network (CNN) architecture for real-time and accurate bone surface segmentation. Quantitative and qualitative evaluation studies are performed on 1235 scans collected from 27 subjects using two different US machines. Finally, we provide qualitative and quantitative comparison results against state-of-the-art GANs. RESULTS: We obtained a mean Dice coefficient (± standard deviation) of [Formula: see text] ([Formula: see text]) for bone shadow segmentation, showing that the method is in close range with manual expert annotation. Statistically significant improvements over state-of-the-art GAN methods (paired t-test, [Formula: see text]) are also obtained.
Using the segmented bone shadow features, an average bone localization accuracy of 0.11 mm ([Formula: see text]) was achieved. CONCLUSIONS: The reported accurate and robust results make the proposed method promising for various orthopedic procedures. Although we did not investigate it in this work, the segmented bone shadow images could also be used as an additional feature to improve the accuracy of US-based registration methods. Further extensive validation is required to fully understand the clinical utility of the proposed method.
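The Dice coefficient used to score the shadow segmentations against expert annotation is straightforward to compute for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks;
    defined as 1.0 when both masks are empty."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two overlapping 4x4 squares on an 8x8 grid: 16 px each, 9 px overlap.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), bool);   gt[3:7, 3:7] = True
print(dice(pred, gt))   # 2*9 / (16+16) = 0.5625
```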


Bone Diseases/diagnostic imaging , Bone and Bones/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Fluoroscopy , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Ultrasonography , Acoustics , Algorithms , Artifacts , Computer Simulation , Humans , Image Processing, Computer-Assisted , Orthopedic Procedures , Orthopedics , Reproducibility of Results
14.
Int J Comput Assist Radiol Surg ; 15(7): 1127-1135, 2020 Jul.
Article En | MEDLINE | ID: mdl-32430694

PURPOSE: Automatic bone surface segmentation is one of the fundamental tasks of ultrasound (US)-guided computer-assisted orthopedic surgery procedures. However, due to various US imaging artifacts, manual operation of the transducer during acquisition, and different machine settings, many existing methods cannot deal with the large variations of bone surface responses in the collected data without manual parameter selection. Even fully automatic methods, such as deep learning-based methods, suffer from dataset bias, performing poorly on US data that differ from the training set. METHODS: In this work, an intensity-invariant convolutional neural network (CNN) architecture is proposed for robust segmentation of bone surfaces from US data obtained from two different US machines with varying acquisition settings. The proposed CNN takes a US image as input and simultaneously generates two intermediate output images, denoted local phase tensor (LPT) and global context tensor (GCT), from two branches which are invariant to intensity variations. LPT and GCT are fused to generate the final segmentation map. In the training process, the LPT network branch is supervised by precalculated ground truth without manual annotation. RESULTS: The proposed method is evaluated on 1227 in vivo US scans, collected using two US machines (including a portable handheld ultrasound scanner) by scanning various bone surfaces from 28 volunteers. Validation of the proposed method on both US machines not only shows statistically significant improvements in cross-machine segmentation of bone surfaces compared to state-of-the-art methods but also achieves a computation time of 30 milliseconds per image, a [Formula: see text] improvement over the state of the art. CONCLUSION: The encouraging results obtained in this initial study suggest that the proposed method is promising enough for further evaluation.
Future work will include extensive validation of the method on new US data collected from various machines using different acquisition settings. We will also evaluate the potential of using the segmented bone surfaces as an input to a point set-based registration method.


Bone and Bones/surgery , Image Processing, Computer-Assisted/methods , Surgery, Computer-Assisted , Ultrasonography, Interventional/methods , Artifacts , Bone and Bones/diagnostic imaging , Deep Learning , Humans , Young Adult
16.
Article En | MEDLINE | ID: mdl-31158269

Computational image analysis is one means of evaluating digitized histopathology specimens that can increase the reproducibility and reliability with which cancer diagnoses are rendered while simultaneously providing insight into the underlying mechanisms of disease onset and progression. A major challenge when analyzing samples prepared at disparate laboratories and institutions is that the digitized specimens often exhibit heterogeneous staining characteristics because of slight differences in incubation times and the protocols used to prepare the samples. Unfortunately, such variations can render a prediction model learned from one batch of specimens ineffective for characterizing an ensemble originating from another site. In this work, we propose to adopt unsupervised domain adaptation to transfer the discriminative knowledge obtained from a given source domain to the target domain without requiring any additional labeling or annotation of images at the target site. We investigate two approaches for performing the adaptation: (1) color normalization and (2) adversarial training. The adversarial training strategy uses convolutional neural networks to find an invariant feature space, along with a Siamese architecture on the target domain that adds a regularization appropriate for whole-slide images. The adversarial adaptation results in significant classification improvement over the baseline models under a wide range of experimental settings.
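Of the two approaches, color normalization is the simpler to illustrate. Below is a minimal Reinhard-style sketch in NumPy that matches the per-channel mean and standard deviation of a source image to a reference; real stain normalization typically works in a Lab or stain-density color space, which this toy version deliberately skips.

```python
import numpy as np

def color_normalize(source, reference):
    """Shift and scale each channel of `source` so that its mean and standard
    deviation match those of `reference`. Inputs: float arrays (H, W, 3)."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[2]):
        s = source[..., c].astype(float)
        r = reference[..., c].astype(float)
        s_std = s.std() + 1e-8               # guard against flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

After normalization, every channel of the source image has (up to numerical precision) the same first- and second-order statistics as the reference slide, which is the basic mechanism such methods use to reduce cross-site stain variation.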

17.
Int J Comput Assist Radiol Surg ; 14(5): 775-783, 2019 May.
Article En | MEDLINE | ID: mdl-30868478

PURPOSE: Ultrasound (US) provides safe, real-time two-/three-dimensional imaging. Due to these capabilities, it is considered a safe alternative to intra-operative fluoroscopy in various computer-assisted orthopedic surgery (CAOS) procedures. However, interpretation of the collected bone US data is difficult due to high levels of noise, various imaging artifacts, and bone surface responses appearing several millimeters (mm) thick. For US-guided CAOS procedures, a segmentation mechanism that is both robust and computationally inexpensive is essential. METHOD: In this paper, we present a convolutional neural network-based technique for segmentation of bone surfaces from in vivo US scans. The novelty of our design is that it fuses feature maps and employs multi-modal images to reduce sensitivity to variations caused by imaging artifacts and low-intensity bone boundaries. B-mode US images and their corresponding local phase filtered images are used as the multi-modal inputs for the proposed fusion network. Different fusion architectures are investigated for combining the B-mode US image and the local phase features. RESULTS: The proposed method was quantitatively and qualitatively evaluated on 546 in vivo scans collected from 14 healthy subjects. We achieved an average F-score above 95% with an average bone surface localization error of 0.2 mm. The reported results are statistically significant improvements over the state of the art. CONCLUSIONS: The accurate and robust segmentation results make the proposed method promising for CAOS applications. Further extensive validation is required to fully understand its clinical utility.
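The local phase filtered image used as the second input modality can be approximated per scanline from the analytic signal. The sketch below is a hypothetical NumPy/SciPy stand-in: the paper uses 2D local phase features, whereas a per-column Hilbert transform is a much cruder 1D proxy.

```python
import numpy as np
from scipy.signal import hilbert

def local_phase_image(bmode):
    """Crude local-phase map: apply the Hilbert transform down each axial
    scanline (column) and take the phase of the analytic signal."""
    analytic = hilbert(bmode.astype(float), axis=0)
    return np.angle(analytic)                # values in [-pi, pi]

def make_multimodal_input(bmode):
    """Stack B-mode and its local-phase map as a 2-channel network input."""
    phase = local_phase_image(bmode)
    return np.stack([bmode.astype(float), phase], axis=0)
```

The resulting 2-channel array is what a fusion network would consume: one intensity channel and one intensity-invariant phase channel.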


Bone and Bones/diagnostic imaging , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Bone and Bones/surgery , Humans , Orthopedic Procedures/methods
18.
Int J Comput Assist Radiol Surg ; 14(6): 1017-1026, 2019 Jun.
Article En | MEDLINE | ID: mdl-30911878

PURPOSE: This paper addresses localization of needles inserted both in-plane and out-of-plane in challenging ultrasound-guided interventions where the shaft and tip have low intensity. Our approach combines a novel digital subtraction scheme, which enhances the low-level intensity changes caused by tip movement in the ultrasound image, with a state-of-the-art deep learning scheme for tip detection. METHODS: As the needle tip moves through tissue, it causes subtle spatiotemporal variations in intensity. Relying on these intensity changes, we formulate a foreground detection scheme for enhancing the tip from consecutive ultrasound frames. The tip is further enhanced by solving a spatial total variation regularization problem using the split Bregman method. Lastly, we filter irrelevant motion events with a deep learning-based end-to-end data-driven method that models the appearance of the needle tip in ultrasound images, resulting in needle tip detection. RESULTS: The detection model is trained and evaluated on an extensive ex vivo dataset collected with 17G and 22G needles inserted in-plane and out-of-plane in bovine, porcine and chicken phantoms. We use 5000 images extracted from 20 video sequences for training and 1000 images from 10 sequences for validation. The overall framework is evaluated on 700 images from 20 sequences not used in training and validation, and achieves a tip localization error of 0.72 ± 0.04 mm and an overall processing time of 0.094 s per frame (~ 10 frames per second). CONCLUSION: The proposed method is faster and more accurate than the state of the art and is resilient to spatiotemporal redundancies. The promising results demonstrate its potential for accurate needle localization in challenging ultrasound-guided interventions.
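The digital subtraction step can be illustrated with simple frame differencing. This is a hypothetical NumPy sketch: the threshold value and picking the tip as the single brightest difference pixel are illustrative shortcuts, whereas the paper additionally applies total-variation regularization and a learned appearance model.

```python
import numpy as np

def enhance_tip_motion(prev_frame, curr_frame, thresh=0.1):
    """Foreground map of intensity changes between consecutive frames;
    changes at or below the threshold are suppressed as background."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return np.where(diff > thresh, diff, 0.0)

def locate_tip(prev_frame, curr_frame):
    """Return (row, col) of the strongest motion response as a tip estimate."""
    motion = enhance_tip_motion(prev_frame, curr_frame)
    return np.unravel_index(np.argmax(motion), motion.shape)
```

In the toy example below, a faint residual at the old tip position falls under the threshold, so the estimate lands on the new position.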


Biopsy/methods , Needles , Ultrasonography, Interventional/methods , Animals , Cattle , Chickens , Motion , Phantoms, Imaging , Swine
19.
J Imaging ; 5(4)2019 Apr 02.
Article En | MEDLINE | ID: mdl-34460481

Ultrasound (US) could become a standard-of-care imaging modality for the quantitative assessment of femoral cartilage thickness in the early diagnosis of knee osteoarthritis. However, low contrast, high levels of speckle noise, and various imaging artefacts hinder the analysis of the collected data. Accurate, robust, and fully automatic US image-enhancement and cartilage-segmentation methods are needed to support the widespread deployment of this imaging modality for knee-osteoarthritis diagnosis and monitoring. In this work, we propose a method based on local-phase image processing for automatic knee-cartilage image enhancement, segmentation, and thickness measurement. A local-phase feature-guided dynamic-programming approach is used for the fully automatic localization of knee-bone surfaces. The localized bone surfaces are then used as seed points for seed-guided segmentation of the cartilage. We evaluated Random Walker (RW), watershed, and graph-cut-based segmentation methods on 200 scans obtained from ten healthy volunteers. Validation against manual expert segmentation achieved a mean Dice similarity coefficient of 0.90, 0.86, and 0.84 for the RW, watershed, and graph-cut methods, respectively. Automatically segmented cartilage regions achieved 0.18 mm localization accuracy compared to manual expert thickness measurement.
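The two reported quantities, Dice overlap and cartilage thickness, are straightforward to compute from binary masks. A minimal NumPy sketch follows; the pixel spacing value and the per-column thickness definition are illustrative assumptions, not the paper's exact measurement protocol.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def mean_thickness(cartilage_mask, pixel_spacing_mm):
    """Mean cartilage thickness: cartilage pixels counted down each axial
    column, averaged over columns containing cartilage, times pixel spacing."""
    per_col = cartilage_mask.astype(bool).sum(axis=0)
    cols = per_col[per_col > 0]
    return cols.mean() * pixel_spacing_mm
```

For example, two 3-pixel-thick horizontal bands shifted by one row overlap in 2 of 3 rows, giving a Dice score of about 0.67.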

20.
Med Image Comput Comput Assist Interv ; 11071: 201-209, 2018 Sep.
Article En | MEDLINE | ID: mdl-30465047

Automatic and accurate Gleason grading of histopathology tissue slides is crucial for prostate cancer diagnosis, treatment, and prognosis. Histopathology tissue slides from different institutions often show heterogeneous appearances because of differing tissue preparation and staining procedures; thus, a prediction model learned from one domain may not be directly applicable to a new domain. Here we propose to adopt unsupervised domain adaptation to transfer the discriminative knowledge obtained from the source domain to the target domain without requiring labeling of images at the target domain. The adaptation is achieved through adversarial training to find an invariant feature space, along with the proposed Siamese architecture on the target domain that adds a regularization appropriate for whole-slide images. We validate the method on two prostate cancer datasets and obtain significant classification improvement of Gleason scores compared with the baseline models.
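The Siamese regularization on the target domain can be illustrated with a contrastive loss over feature pairs. This is a hypothetical NumPy sketch; the margin value and the use of plain Euclidean distance are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Pull features of 'same' pairs (e.g. patches from one whole-slide image)
    together; push 'different' pairs apart, up to the margin."""
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d ** 2                     # penalize distance for positives
    return 0.5 * max(0.0, margin - d) ** 2      # penalize closeness for negatives
```

Identical positive pairs incur zero loss, negative pairs already separated by more than the margin incur zero loss, and everything in between is penalized, which is the regularizing effect a Siamese branch contributes during adaptation.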

...