Results 1 - 12 of 12
1.
J Biomech ; 166: 111967, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38388222

ABSTRACT

Spine biomechanics is undergoing a transformation with the advent and integration of machine learning and computer vision technologies. These novel techniques facilitate the estimation of 3D body shapes, anthropometrics, and kinematics from inputs as simple as a single-camera image, making them more accessible and practical for a diverse range of applications. This study introduces a framework that merges these methodologies with traditional musculoskeletal modeling, enabling comprehensive analysis of spinal biomechanics during complex activities from a single camera. Additionally, we aim to evaluate their performance and limitations in spine biomechanics applications. The real-world applications explored in this study include assessment of workplace lifting, evaluation of whiplash injuries in car accidents, and biomechanical analysis in professional sports. Our results demonstrate the potential and limitations of various algorithms in estimating body shape and kinematics and in conducting in-field biomechanical analyses. In industrial settings, the potential to utilize these new technologies for biomechanical risk assessment offers a pathway for preventive measures against back injuries. In sports activities, the proposed framework provides new opportunities for performance optimization, injury prevention, and rehabilitation. The application in the forensic domain further underscores the wide-reaching implications of this technology. While certain limitations were identified, particularly in the accuracy of predictions, complex interactions, and external load estimation, this study demonstrates the potential of these techniques to advance spine biomechanics, heralding an optimistic future in both research and practical applications.


Subject(s)
Spine , Sports , Biomechanical Phenomena
2.
BMC Bioinformatics ; 23(1): 38, 2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35026982

ABSTRACT

BACKGROUND: Accurate cancer classification is essential for correct treatment selection and better prognostication. microRNAs (miRNAs) are small RNA molecules that negatively regulate gene expression, and their dysregulation is a common disease mechanism in many cancers. Through a clearer understanding of miRNA dysregulation in cancer, improved mechanistic knowledge and better treatments can be sought. RESULTS: We present a topology-preserving deep learning framework to study miRNA dysregulation in cancer. Our study comprises miRNA expression profiles from 3685 cancer and non-cancer tissue samples and hierarchical annotations on organ and neoplasticity status. Using unsupervised learning, a two-dimensional topological map is trained to cluster similar tissue samples. Labelled samples are used after training to assess clustering accuracy in terms of tissue-of-origin and neoplasticity status. In addition, an approach using activation gradients is developed to determine the network's attention to the miRNAs that drive the clustering. Using this deep learning framework, we classify the neoplasticity status of held-out test samples with an accuracy of 91.07%, the tissue-of-origin with 86.36%, and combined neoplasticity status and tissue-of-origin with an accuracy of 84.28%. The topological maps display the ability of miRNAs to recognize tissue types and neoplasticity status. Importantly, when our approach identifies samples that do not cluster well with their respective classes, activation gradients provide further insight into cancer subtypes or grades. CONCLUSIONS: An unsupervised deep learning approach is developed for cancer classification and interpretation. This work provides an intuitive approach for understanding molecular properties of cancer and has significant potential for cancer classification and treatment selection.
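The topology-preserving map trained here belongs to the family of self-organizing maps (SOMs): a 2-D grid of nodes whose weight vectors are pulled toward input samples so that similar expression profiles land on nearby nodes. A minimal NumPy sketch of that idea, with illustrative grid size and schedules (not the authors' implementation):

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing map: nodes on a 2-D grid move
    toward input samples, so similar samples map to nearby nodes."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1])) * 0.1
    ys, xs = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(data):
            # best-matching unit: the node closest to this sample
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls the BMU and its neighbors toward x
            dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            nb = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * nb * (x - weights)
    return weights

def bmu(weights, x):
    """Grid coordinates of the best-matching unit for a sample."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, labelled samples can be dropped onto the map via `bmu` to check whether clusters align with tissue-of-origin and neoplasticity status, as done in the paper.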


Subject(s)
MicroRNAs , Neoplasms , Cluster Analysis , Gene Expression Profiling , Gene Expression Regulation, Neoplastic , Humans , MicroRNAs/genetics , Neoplasms/genetics
3.
Ultrasound Med Biol ; 46(10): 2846-2854, 2020 10.
Article in English | MEDLINE | ID: mdl-32646685

ABSTRACT

Effective epidural needle placement and injection involves accurate identification of the midline of the spine. Ultrasound, as a safe pre-procedural imaging modality, is suitable for epidural guidance because it offers adequate visibility of the vertebral anatomy. However, image interpretation remains a key challenge, especially for novices. A deep neural network is proposed to automatically classify the transverse ultrasound images of the vertebrae and identify the midline. To distinguish midline images from off-center frames, the proposed network detects the left-right symmetric anatomic landmarks. To assess the feasibility of the proposed method for midline detection, a data set of ultrasound images was collected from 20 volunteers whose body mass indices were less than 30. The data were split into two subsets for training and testing. The performance of the proposed method was further evaluated using fourfold cross validation and compared against a state-of-the-art deep neural network. Compared with the gold standard provided by an expert sonographer, the proposed trained network correctly classified 88% of the transverse planes from unseen test patients. This capability supports the first step of guiding the placement of an epidural needle.


Subject(s)
Anatomic Landmarks/diagnostic imaging , Epidural Space/diagnostic imaging , Neural Networks, Computer , Humans , Ultrasonography
4.
Int J Comput Assist Radiol Surg ; 15(6): 1023-1031, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32356095

ABSTRACT

PURPOSE: Ultrasound imaging is routinely used in prostate biopsy, which involves obtaining prostate tissue samples using a systematic, yet non-targeted, approach. This approach is blind to individual patient intraprostatic pathology and, unfortunately, has a high rate of false negatives. METHODS: In this paper, we propose a deep network for improved detection of prostate cancer in systematic biopsy. We address several challenges associated with training such a network: (1) Statistical labels: since a biopsy core's pathology report represents only a statistical distribution of cancer within the core, we use multiple instance learning (MIL) networks to enable learning from the ultrasound image regions associated with those data; (2) Limited labels: the number of biopsy cores is limited to at most 12 per patient, so the number of samples available for training a deep network is limited. We alleviate this issue by effectively combining Independent Conditional Variational Auto Encoders (ICVAE) with MIL. We train the ICVAE to learn label-invariant features of RF data, which are subsequently used to generate synthetic data for improved training of the MIL network. RESULTS: Our in vivo study includes data from 339 prostate biopsy cores of 70 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.68, 0.77, 0.55, and 0.66, respectively. CONCLUSION: The proposed approach is generic and can be applied to several other scenarios where unlabeled data and noisy labels are present in the training samples.
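The MIL framing treats each biopsy core as a "bag" of image regions ("instances") sharing one pathology label. A common way to aggregate instance-level predictions into a bag-level one is noisy-OR or max pooling: the core is suspicious if any region is. The abstract does not specify the pooling used, so the sketch below is illustrative:

```python
import numpy as np

def noisy_or_pool(instance_probs):
    """Bag-level probability under the MIL assumption that a bag
    (biopsy core) is positive if at least one instance (region) is."""
    p = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

def max_pool(instance_probs):
    """Alternative: score the bag by its most suspicious region."""
    return float(np.max(instance_probs))
```

Either pooling lets the network learn from region-level features even though pathology only labels the whole core.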


Subject(s)
Image-Guided Biopsy/methods , Prostate/pathology , Prostatic Neoplasms/pathology , Ultrasonography, Interventional/methods , Feasibility Studies , Humans , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Sensitivity and Specificity
5.
IEEE Trans Med Imaging ; 38(12): 2807-2820, 2019 12.
Article in English | MEDLINE | ID: mdl-31059432

ABSTRACT

Current deep supervised learning methods typically require large amounts of labeled data for training. Since there is a significant cost associated with clinical data acquisition and labeling, medical datasets used for training these models are relatively small in size. In this paper, we aim to alleviate this limitation by proposing a variational generative model along with an effective data augmentation approach that utilizes the generative model to synthesize data. In our approach, the model learns the probability distribution of image data conditioned on a latent variable and the corresponding labels. The trained model can then be used to synthesize new images for data augmentation. We demonstrate the effectiveness of the approach on two independent clinical datasets consisting of ultrasound images of the spine and magnetic resonance images of the brain. For the spine dataset, a baseline and a residual model achieve an accuracy of 85% and 92%, respectively, using our method, compared to 78% and 83% using a conventional training approach for the image classification task. For the brain dataset, a baseline and a U-net achieve Dice coefficients of 84% and 88%, respectively, in tumor segmentation, compared to 80% and 83% for the conventional training approach.
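Two ingredients underlie this kind of label-conditioned generative augmentation: the reparameterization trick used while training the variational model, and label-conditioned sampling from the prior afterwards to synthesize new images. A framework-free sketch under those assumptions, where `decoder` is a placeholder for whatever trained conditional decoder is available (names are illustrative, not the paper's API):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps; keeps the sampling step
    differentiable with respect to mu and log_var during training."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps

def synthesize(decoder, label, n, latent_dim, rng):
    """Draw latents from the prior N(0, I), pair each with the target
    class label, and decode into synthetic training samples."""
    z = rng.standard_normal((n, latent_dim))
    labels = np.full(n, label)
    return decoder(z, labels)
```

The synthetic samples are then simply appended to the real training set for the downstream classifier or segmenter.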


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Algorithms , Brain/diagnostic imaging , Databases, Factual , Humans , Magnetic Resonance Imaging , Neoplasms/diagnostic imaging , Spine/diagnostic imaging , Ultrasonography
6.
Int J Comput Assist Radiol Surg ; 14(6): 1009-1016, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30905010

ABSTRACT

Prostate cancer (PCa) is the most frequent noncutaneous cancer in men. Early detection of PCa is essential for clinical decision making, and reducing metastasis and mortality rates. The current approach for PCa diagnosis is histopathologic analysis of core biopsies taken under transrectal ultrasound guidance (TRUS-guided). Both TRUS-guided systematic biopsy and MR-TRUS-guided fusion biopsy have limitations in accurately identifying PCa, intraoperatively. There is a need to augment this process by visualizing highly probable areas of PCa. Temporal enhanced ultrasound (TeUS) has emerged as a promising modality for PCa detection. Prior work focused on supervised classification of PCa verified by gold standard pathology. Pathology labels are noisy, and data from an entire core have a single label even when significantly heterogeneous. Additionally, supervised methods are limited by data from cores with known pathology, and a significant portion of prostate data is discarded without being used. We provide an end-to-end unsupervised solution to map PCa distribution from TeUS data using an innovative representation learning method, deep neural maps. TeUS data are transformed to a topologically arranged hyper-lattice, where similar samples are closer together in the lattice. Therefore, similar regions of malignant and benign tissue in the prostate are clustered together. Our proposed method increases the number of training samples by several orders of magnitude. Data from biopsy cores with known labels are used to associate the clusters with PCa. Cancer probability maps generated using the unsupervised clustering of TeUS data help intuitively visualize the distribution of abnormal tissue for augmenting TRUS-guided biopsies.


Subject(s)
Image-Guided Biopsy/methods , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Biopsy, Large-Core Needle , Early Detection of Cancer , Humans , Magnetic Resonance Imaging/methods , Male , Neoplasm Grading , Prostate/pathology , Prostatic Neoplasms/pathology , Ultrasonography/methods
7.
Ultrasound Med Biol ; 45(5): 1081-1093, 2019 05.
Article in English | MEDLINE | ID: mdl-30685076

ABSTRACT

Attenuation coefficient estimation has the potential to be a useful tool for placental tissue characterization. A current challenge is the presence of inhomogeneities in biological tissue that result in a large variance in the attenuation coefficient estimate (ACE), restricting its clinical utility. In this work, we propose a new Attenuation Estimation Region Of Interest (AEROI) selection method for computing the ACE based on the (i) envelope signal-to-noise ratio deviation and (ii) coefficient of variation of the transmit pulse bandwidth. The method was first validated on a tissue-mimicking phantom, for which an 18%-21% reduction in the standard deviation of ACE and a 14%-24% reduction in the ACE error, expressed as a percentage of reported ACE, were obtained. A study on 59 post-delivery clinically normal placentas was then performed. The proposed AEROI selection method reduced the intra-subject standard deviation of ACE from 0.72 to 0.39 dB/cm/MHz. The measured ACE of 59 placentas was 0.77 ± 0.37 dB/cm/MHz, which establishes a baseline for future studies on placental tissue characterization.
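The first AEROI criterion exploits the fact that the envelope of fully developed speckle is Rayleigh distributed, for which the envelope SNR (mean over standard deviation) is approximately 1.91; regions deviating from that value contain inhomogeneities and are poor candidates for attenuation estimation. A minimal sketch of that check, with an illustrative acceptance threshold (the paper's exact criterion combines this with the bandwidth test):

```python
import numpy as np

RAYLEIGH_SNR = 1.91  # mean/std of a Rayleigh-distributed envelope

def envelope_snr_deviation(envelope):
    """How far the ROI's envelope SNR is from the fully developed
    speckle value; large deviations flag inhomogeneous regions."""
    env = np.asarray(envelope, dtype=float)
    return abs(env.mean() / env.std() - RAYLEIGH_SNR)

def is_valid_aeroi(envelope, tol=0.3):
    """Accept the ROI for attenuation estimation only if its speckle
    statistics look fully developed (illustrative threshold)."""
    return envelope_snr_deviation(envelope) < tol
```

Restricting the fit to such regions is what drives down the variance of the resulting ACE.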


Subject(s)
Placenta/anatomy & histology , Signal Processing, Computer-Assisted , Ultrasonography/methods , Adult , Female , Humans , Middle Aged , Placenta/diagnostic imaging , Pregnancy , Reference Values , Young Adult
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 3477-3480, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441130

ABSTRACT

Multiparametric Quantitative Ultrasound (QUS) holds promise for characterizing placental tissue and detecting placental disorders. In this study, we simultaneously extract two qualitatively different QUS parameters, namely attenuation coefficient estimate (ACE) and shear wave speed from ultrasound radio frequency data acquired using a shear wave vibro elastography (SWAVE) method. The study comprised data from 59 post-delivery clinically normal placentas. The shear wave speed was found to be equal to 1.74 ± 0.13 m/s whereas the attenuation coefficient estimate was 0.57 ± 0.48 dB/cm-MHz. This provides a baseline for future studies of placental disorders.


Subject(s)
Placenta/diagnostic imaging , Elasticity Imaging Techniques , Female , Humans , Pregnancy , Ultrasonography
9.
IEEE Trans Med Imaging ; 37(1): 81-92, 2018 01.
Article in English | MEDLINE | ID: mdl-28809679

ABSTRACT

Accurate identification of the needle target is crucial for effective epidural anesthesia. Currently, epidural needle placement is administered by a manual technique relying on the sense of feel, which has a significant failure rate. Moreover, misplacing the needle may lead to inadequate anesthesia, post-dural puncture headaches, and other potential complications. Ultrasound offers guidance to the physician for identification of the needle target, but accurate interpretation and localization remain challenges. A hybrid machine learning system is proposed to automatically localize the needle target for epidural needle placement in ultrasound images of the spine. In particular, a deep network architecture along with a feature augmentation technique is proposed for automatic identification of the anatomical landmarks of the epidural space in ultrasound images. Experimental results of the target localization on planes of 3-D as well as 2-D images have been compared against an expert sonographer. When compared with the expert annotations, the average lateral and vertical errors on the planes of the 3-D test data were 1 and 0.4 mm, respectively. On the 2-D test data set, an average lateral error of 1.7 mm and a vertical error of 0.8 mm were obtained.


Subject(s)
Anesthesia, Epidural/methods , Epidural Space/diagnostic imaging , Image Processing, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Adult , Algorithms , Deep Learning , Humans , Lumbosacral Region/diagnostic imaging , Needles , Young Adult
10.
Proc SPIE Int Soc Opt Eng ; 10135, 2017 Feb 11.
Article in English | MEDLINE | ID: mdl-28615794

ABSTRACT

Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often involves a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into clinical research workflows, creating a gap between state-of-the-art machine learning in medical applications and its evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

11.
Ultrasound Med Biol ; 43(6): 1112-1124, 2017 06.
Article in English | MEDLINE | ID: mdl-28392000

ABSTRACT

The placenta is the interface between the fetus and the mother and is vital for fetal development. Ultrasound elastography provides a non-invasive way to examine in vivo the stiffness of the placenta; increased stiffness has previously been linked to fetal growth restriction. This study used a previously developed dynamic elastography method, called shear wave absolute vibro-elastography, to study 61 post-delivery clinically normal placentas. The shear wave speeds in the placenta were recorded under five different low-frequency mechanical excitations. The elasticity and viscosity were estimated through rheological modeling. The shear wave speeds at excitation frequencies of 60, 80, 90, 100 and 120 Hz were measured to be 1.23 ± 0.44, 1.67 ± 0.76, 1.74 ± 0.72, 1.80 ± 0.78 and 2.25 ± 0.80 m/s. The shear wave speed values we obtained are consistent with previous studies. In addition, our multi-frequency acquisition approach enables us to provide viscosity estimates that have not been previously reported.
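The rheological modeling step maps the multi-frequency shear wave speeds to elasticity and viscosity estimates. Under a Kelvin-Voigt model (a common choice; the abstract does not name the model used, so this is an assumption), the phase speed at angular frequency ω for elasticity μ and viscosity η is c(ω) = sqrt( 2(μ² + ω²η²) / ( ρ(μ + sqrt(μ² + ω²η²)) ) ). A brute-force fitting sketch with illustrative grid ranges:

```python
import numpy as np

def voigt_speed(freq_hz, mu, eta, rho=1000.0):
    """Kelvin-Voigt shear wave phase speed (m/s) at frequency f (Hz),
    for elasticity mu (Pa), viscosity eta (Pa*s), density rho (kg/m^3)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    g = np.sqrt(mu ** 2 + (omega * eta) ** 2)  # |G*| complex modulus magnitude
    return np.sqrt(2.0 * g ** 2 / (rho * (mu + g)))

def fit_voigt(freq_hz, speeds, mu_grid, eta_grid):
    """Grid search for the (mu, eta) pair minimizing the squared error
    between the model and the measured multi-frequency speeds."""
    best, best_err = (None, None), np.inf
    for mu in mu_grid:
        for eta in eta_grid:
            err = np.sum((voigt_speed(freq_hz, mu, eta) - speeds) ** 2)
            if err < best_err:
                best, best_err = (mu, eta), err
    return best
```

Because speed grows with frequency when η > 0, the observed dispersion across 60-120 Hz is exactly what makes the viscosity identifiable.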


Subject(s)
Elastic Modulus/physiology , Elasticity Imaging Techniques/methods , Image Interpretation, Computer-Assisted/methods , Placenta/diagnostic imaging , Placenta/physiology , Pregnancy/physiology , Adult , Feasibility Studies , Female , Humans , In Vitro Techniques , Middle Aged , Pilot Projects , Reproducibility of Results , Sensitivity and Specificity , Shear Strength/physiology , Stress, Mechanical , Tensile Strength/physiology , Viscosity , Young Adult
12.
Int J Comput Assist Radiol Surg ; 10(6): 901-12, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26026697

ABSTRACT

PURPOSE: Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. METHODS: A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. RESULTS: The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94% for epidural and 90% for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image.
CONCLUSION: A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
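A sequency-ordered Hadamard transform is a Sylvester Hadamard matrix with its rows re-ordered by the number of sign changes, so coefficients run from coarse to fine, analogous to frequency ordering in a DFT; its ±1 entries are what make the feature extraction fast enough for real time. A minimal sketch of the transform core (function names are illustrative; the paper's multi-scale, directional grouping of coefficients is omitted):

```python
import numpy as np

def sequency_hadamard(n):
    """Sylvester Hadamard matrix of power-of-two order n, with rows
    re-ordered by sequency (number of sign changes per row)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])  # Sylvester doubling
    changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(changes)]

def hadamard_features(patch):
    """Project a square power-of-two image patch onto the
    sequency-ordered Hadamard basis (separable: rows, then columns)."""
    n = patch.shape[0]
    H = sequency_hadamard(n)
    return (H @ patch @ H.T) / n
```

Since the matrix contains only ±1, the projection needs no multiplications, which is consistent with the 20 ms per-image feature extraction time reported above.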


Subject(s)
Anesthesia, Spinal/methods , Back Pain/drug therapy , Injections, Epidural , Ultrasonography, Interventional/methods , Zygapophyseal Joint/diagnostic imaging , Back Pain/diagnostic imaging , Humans