Results 1 - 20 of 78
1.
Med Image Anal ; 93: 103096, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38301347

ABSTRACT

We present a fully automated method for integrating intraoral scan (IOS) and dental cone-beam computed tomography (CBCT) images into a single image by compensating for each modality's weaknesses. Dental CBCT alone may not delineate precise details of the tooth surface because of limited image resolution and various CBCT artifacts, including metal-induced artifacts. IOS is highly accurate when scanning narrow areas, but it accumulates stitching errors during full-arch scanning. The proposed method is intended not only to compensate for the low quality of CBCT-derived tooth surfaces with IOS, but also to correct the cumulative stitching errors of IOS across the entire dental arch. Moreover, the integration provides both the gingival structure of IOS and the tooth roots of CBCT in one image. The proposed fully automated method consists of four parts: (i) an individual tooth segmentation and identification module for IOS data (TSIM-IOS); (ii) an individual tooth segmentation and identification module for CBCT data (TSIM-CBCT); (iii) global-to-local tooth registration between IOS and CBCT; and (iv) stitching-error correction for the full-arch IOS. Experimental results show that the proposed method achieved landmark and surface distance errors of 112.4 µm and 301.7 µm, respectively.
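
As a concrete illustration of the global stage of the global-to-local registration, the sketch below rigidly aligns matched tooth centroids from the two modalities with the Kabsch (Procrustes) algorithm. This is a generic building block, not the paper's exact pipeline; the arrays ios_pts and cbct_pts are hypothetical outputs of TSIM-IOS and TSIM-CBCT, and the synthetic data are for demonstration only.

    import numpy as np

    def rigid_align(src, dst):
        """Kabsch/Procrustes: rigid (R, t) minimizing ||R @ src_i + t - dst_i||."""
        src_c, dst_c = src.mean(0), dst.mean(0)
        H = (src - src_c).T @ (dst - dst_c)                   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # ios_pts, cbct_pts: (N, 3) matched per-tooth centroids (hypothetical inputs)
    ios_pts = np.random.rand(14, 3)
    cbct_pts = ios_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [5.0, 2.0, 1.0]
    R, t = rigid_align(ios_pts, cbct_pts)
    residual = np.linalg.norm((ios_pts @ R.T + t) - cbct_pts, axis=1).mean()
    print(f"mean residual after global alignment: {residual:.6f}")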


Subject(s)
Spiral Cone-Beam Computed Tomography , Humans , Artifacts , Cone-Beam Computed Tomography
2.
PLoS One ; 17(9): e0275114, 2022.
Article in English | MEDLINE | ID: mdl-36170279

ABSTRACT

Identification of 3D cephalometric landmarks that serve as a proxy for the shape of the human skull is the fundamental step in cephalometric analysis. Since manual landmarking from 3D computed tomography (CT) images is a cumbersome task even for trained experts, an automatic 3D landmark detection system is greatly needed. Recently, automatic landmarking of 2D cephalograms using deep learning (DL) has achieved great success, but 3D landmarking for more than 80 landmarks has not yet reached a satisfactory level, because of factors that hinder machine learning, such as the high dimensionality of the input data and the limited amount of training data due to ethical restrictions on the use of medical data. This paper presents a semi-supervised DL method for 3D landmarking that takes advantage of an anonymized landmark dataset from which the paired CT data have been removed. The proposed method first detects a small number of easy-to-find reference landmarks and then uses them to provide a rough estimate of all the landmarks by exploiting the low-dimensional representation learned by a variational autoencoder (VAE). The anonymized landmark dataset is used to train the VAE. Finally, coarse-to-fine detection is applied to the small bounding box provided by the rough estimate, using separate strategies suited to the mandible and the cranium. For mandibular landmarks, a patch-based 3D CNN is applied to the segmented image of the mandible (separated from the maxilla) in order to capture 3D morphological features of the mandible associated with the landmarks. We detect the 6 landmarks around the condyle all at once rather than one by one, because they are closely related to each other. For cranial landmarks, we again use the VAE-based latent representation for more accurate annotation. In our experiments, the proposed method achieved a mean detection error of 2.88 mm for 90 landmarks using only 15 paired training datasets.
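
The sketch below illustrates the idea of roughly estimating all landmarks from a few reference landmarks via a low-dimensional representation, using PCA as a simple linear stand-in for the paper's VAE. All sizes, the landmark layout (x, y, z triplets concatenated landmark by landmark), and the synthetic data are assumptions for demonstration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_lm, k = 200, 90, 8                 # training subjects, landmarks, latent dimension

    # hypothetical anonymized training set: concatenated 3-D landmark vectors, shape (n_train, 3*n_lm)
    G = rng.normal(size=(k, 3 * n_lm))            # unknown generative directions (simulation only)
    X = rng.normal(size=(n_train, k)) @ G + 0.1 * rng.normal(size=(n_train, 3 * n_lm))

    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    B = Vt[:k]                                    # (k, 3*n_lm) PCA basis, linear stand-in for the VAE decoder

    # a new subject with only 6 easy-to-find reference landmarks detected
    x_true = rng.normal(size=k) @ G
    ref_cols = np.arange(6 * 3)                   # coords of the first 6 landmarks, assuming (x1, y1, z1, x2, ...)
    obs = x_true[ref_cols]

    # least-squares fit of the latent code to the references, then decode all 90 landmarks
    z, *_ = np.linalg.lstsq(B[:, ref_cols].T, obs - mean[ref_cols], rcond=None)
    x_rough = mean + z @ B
    print("rough-estimate RMS error:", np.sqrt(np.mean((x_rough - x_true) ** 2)))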


Subject(s)
Anatomic Landmarks , Imaging, Three-Dimensional , Anatomic Landmarks/anatomy & histology , Anatomic Landmarks/diagnostic imaging , Cephalometry/methods , Humans , Imaging, Three-Dimensional/methods , Reproducibility of Results , Supervised Machine Learning , Tomography, X-Ray Computed
3.
Phys Med Biol ; 67(17)2022 08 25.
Article in English | MEDLINE | ID: mdl-35944531

ABSTRACT

Objective. Recently, dental cone-beam computed tomography (CBCT) methods have been improved to significantly reduce radiation dose while maintaining image resolution with minimal equipment cost. In low-dose CBCT environments, metallic inserts such as implants, crowns, and dental fillings cause severe artifacts, which result in a significant loss of the morphological structures of teeth in reconstructed images. Such metal artifacts prevent accurate 3D bone-teeth-jaw modeling for diagnosis and treatment planning. However, the performance of existing metal artifact reduction (MAR) methods in recovering the morphological structures of teeth in reconstructed CT images remains limited. In this study, we developed a MAR method designed to restore these anatomical details. Approach. The proposed MAR approach is a two-stage deep learning-based method. In the first stage, we employ a deep learning network that uses intra-oral scan data as a side input and performs multi-task learning with auxiliary tooth segmentation. The network is designed to capture teeth-related features effectively while mitigating metal artifacts. In the second stage, a 3D bone-teeth-jaw model is constructed by weighted thresholding, where the weighting region is determined by the geometry of the intra-oral scan data. Main results. Numerical simulations and clinical experiments demonstrate the feasibility of the proposed approach. Significance. We propose, for the first time, a MAR method that uses radiation-free intra-oral scan data as supplemental information on the morphological structures of teeth, enabling accurate 3D bone-teeth-jaw modeling in low-dose CBCT environments.
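
One possible reading of the second-stage weighted thresholding is sketched below: the binarization threshold is lowered near the registered intra-oral scan surface and relaxed back to a global value away from it. The threshold values, the distance-based weighting, and the input arrays are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def weighted_threshold(volume, ios_surface_mask, base_thr=400.0,
                           near_thr=150.0, influence_mm=3.0, voxel_mm=0.3):
        """Binarize a CBCT volume with a threshold lowered near the IOS surface.

        volume           : (D, H, W) CBCT intensities (HU-like values)
        ios_surface_mask : boolean mask of voxels crossed by the registered IOS surface
        All numeric values are illustrative, not the paper's settings.
        """
        dist_mm = distance_transform_edt(~ios_surface_mask) * voxel_mm
        w = np.clip(dist_mm / influence_mm, 0.0, 1.0)        # 0 at the surface, 1 far away
        thr = near_thr * (1.0 - w) + base_thr * w            # spatially varying threshold
        return volume >= thr

    # toy usage with random data
    vol = np.random.uniform(0, 1000, size=(32, 64, 64))
    surf = np.zeros_like(vol, dtype=bool)
    surf[16, 30:34, :] = True
    print(weighted_threshold(vol, surf).shape)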


Subject(s)
Artifacts , Deep Learning , Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Metals , Prostheses and Implants
4.
Med Phys ; 49(8): 5195-5205, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35582909

ABSTRACT

PURPOSE: Dental cone-beam computed tomography (CBCT) is increasingly used for dental and maxillofacial imaging. However, the presence of metallic inserts, such as implants, crowns, and dental braces, violates the CT model assumption, which leads to severe metal artifacts in the reconstructed CBCT image and degrades diagnostic performance. In this study, we used deep learning to reduce metal artifacts. METHODS: The metal artifacts, which appear as streaks and shadows, are nonlocal and highly associated with various factors, including the geometry of the metallic inserts, energy-dependent attenuation, and the energy spectrum of the incident X-ray beam, making it difficult to learn their complicated structure directly. To provide a step-by-step environment in which deep learning can be trained, we propose an iterative learning approach in which the network at each iteration learns the correction error left by the previous network while enforcing data fidelity in the projection domain. To generate a realistic paired training dataset, metal-free CBCT scans were collected from patients without metallic inserts, and simulated metal projection data were then added to generate the corresponding metal-corrupted projection data. RESULTS: The feasibility of the proposed method was investigated on simulated as well as clinical metal-affected CBCT scans. The results show that the proposed method significantly reduces metal artifacts while preserving the morphological structures near metallic objects, and it outperforms direct image-domain learning. CONCLUSION: The proposed fidelity-embedded learning effectively reduces metal artifacts in dental CBCT compared with direct image-domain learning.
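
The loop below sketches the iterative, fidelity-embedded idea described in METHODS: each stage's network corrects the error left by the previous stage, and measured projection data are re-imposed outside the metal trace before the next reconstruction. skimage's parallel-beam radon/iradon stands in for the cone-beam projector, and nets is a list of placeholder trained networks; none of this reproduces the paper's actual architecture.

    import numpy as np
    from skimage.transform import radon, iradon

    def fidelity_embedded_loop(measured_sino, metal_trace, nets, theta):
        """Iteratively refine an image; each net learns the previous stage's correction error.

        measured_sino : (n_det, n_angles) metal-corrupted sinogram
        metal_trace   : boolean mask of the metal trace in the sinogram
        nets          : list of callables image -> corrected image (placeholder trained networks)
        theta         : projection angles in degrees
        """
        image = iradon(measured_sino, theta=theta)            # initial reconstruction
        for net in nets:
            image = net(image)                                # learned correction
            sino = radon(image, theta=theta)
            sino[~metal_trace] = measured_sino[~metal_trace]  # keep trustworthy measured data
            image = iradon(sino, theta=theta)
        return image

    # toy usage with an identity "network"
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    phantom = np.zeros((128, 128)); phantom[40:90, 40:90] = 1.0
    sino = radon(phantom, theta=theta)
    trace = np.zeros_like(sino, dtype=bool)
    print(fidelity_embedded_loop(sino, trace, [lambda x: x], theta).shape)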


Subject(s)
Artifacts , Spiral Cone-Beam Computed Tomography , Algorithms , Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted/methods , Metals , Phantoms, Imaging
5.
IEEE Trans Pattern Anal Mach Intell ; 44(10): 6562-6568, 2022 10.
Article in English | MEDLINE | ID: mdl-34077356

ABSTRACT

Accurate and automatic segmentation of three-dimensional (3D) individual teeth from cone-beam computed tomography (CBCT) images is challenging because of the difficulty of separating an individual tooth from adjacent teeth and its surrounding alveolar bone. This paper therefore proposes a fully automated method for identifying and segmenting 3D individual teeth from dental CBCT images. The proposed method addresses this difficulty with a deep learning-based hierarchical multi-step model. First, it automatically generates panoramic images of the upper and lower jaws to overcome the computational complexity caused by high-dimensional data and the curse of dimensionality associated with a limited training dataset. The obtained 2D panoramic images are then used to identify 2D individual teeth and to capture loose and tight regions of interest (ROIs) of the 3D individual teeth. Finally, accurate 3D individual tooth segmentation is achieved using both the loose and tight ROIs. Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation. These results demonstrate that the proposed method provides an effective clinical and practical framework for digital dentistry.
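
A simplified version of the panoramic-image generation step is sketched below as a curved-planar ray sum along an assumed dental arch curve; the arch fitting, jaw separation, and all sizes are illustrative assumptions rather than the paper's procedure.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def panoramic_projection(volume, arch_yx, thickness=10):
        """Curved-planar 'panoramic' image: for each arch-curve point, average the volume
        over a small band normal to the curve, for every axial slice.

        volume  : (Z, Y, X) CBCT array
        arch_yx : (N, 2) points (y, x) of a fitted arch curve in the axial plane
        """
        z_len, n = volume.shape[0], len(arch_yx)
        tang = np.gradient(arch_yx, axis=0)
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        normals = np.stack([-tang[:, 1], tang[:, 0]], axis=1)   # rotate tangents by 90 degrees

        offsets = np.arange(-(thickness // 2), thickness // 2 + 1)
        pano = np.zeros((z_len, n))
        for off in offsets:
            pts = arch_yx + off * normals                       # (N, 2) sample points (y, x)
            zz = np.repeat(np.arange(z_len), n)
            yy = np.tile(pts[:, 0], z_len)
            xx = np.tile(pts[:, 1], z_len)
            vals = map_coordinates(volume, np.vstack([zz, yy, xx]), order=1)
            pano += vals.reshape(z_len, n)
        return pano / len(offsets)

    # toy usage: a random volume and a semicircular arch
    vol = np.random.rand(64, 128, 128)
    t = np.linspace(0, np.pi, 200)
    arch = np.stack([40 + 60 * np.sin(t), 64 + 50 * np.cos(t)], axis=1)
    print(panoramic_projection(vol, arch).shape)                # (64, 200)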


Subject(s)
Spiral Cone-Beam Computed Tomography , Tooth , Algorithms , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Tooth/diagnostic imaging
6.
Med Image Anal ; 69: 101951, 2021 04.
Article in English | MEDLINE | ID: mdl-33515982

ABSTRACT

Estimation of the antenatal amniotic fluid (AF) volume (AFV) is important because it offers crucial information about fetal development, fetal well-being, and perinatal prognosis. However, AFV measurement is cumbersome and patient-specific. Moreover, it is heavily sonographer-dependent, with measurement accuracy varying greatly with the sonographer's experience. Therefore, accurate, robust, and easily adoptable methods for evaluating AFV are highly desirable; automation is expected to reduce user-based variability and sonographers' workload. However, automating AFV measurement is very challenging, because accurate detection of AF pockets is hindered by various confounding factors, such as reverberation artifacts, AF-mimicking regions, and floating matter. Furthermore, AF pockets exhibit a wide variety of shapes and sizes, and ultrasound images often show missing or incomplete structural boundaries. To overcome these difficulties, we develop a hierarchical deep learning-based method that reflects clinicians' anatomical-knowledge-based approach. The key step is segmentation of the AF pocket using our proposed deep learning network, AF-net. AF-net is a variation of U-net combined with three complementary components: atrous convolution, a multi-scale side-input layer, and a side-output layer. Experimental results demonstrate that the proposed method measures the amniotic fluid index (AFI) as robustly and precisely as clinicians. The proposed method achieved a Dice similarity of 0.877±0.086 for AF segmentation, with a mean absolute error of 2.666±2.986 and a mean relative error of 0.018±0.023 for the AFI value. To the best of our knowledge, our method provides the first automated measurement of the AFI.
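
The block below sketches one ingredient of AF-net named in the abstract, an atrous (dilated) convolution block, in PyTorch. The parallel dilation rates, channel counts, and fusion layer are illustrative choices, not the published architecture.

    import torch
    import torch.nn as nn

    class AtrousBlock(nn.Module):
        """Parallel dilated convolutions, concatenated and fused (illustrative rates)."""
        def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True),
                )
                for r in rates
            ])
            self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

    # toy usage: a single-channel ultrasound patch
    x = torch.randn(1, 1, 128, 128)
    print(AtrousBlock(1, 16)(x).shape)   # torch.Size([1, 16, 128, 128])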


Subject(s)
Amniotic Fluid , Deep Learning , Amniotic Fluid/diagnostic imaging , Female , Humans , Pregnancy , Ultrasonography
7.
Med Image Anal ; 69: 101967, 2021 04.
Article in English | MEDLINE | ID: mdl-33517242

ABSTRACT

Recently, with significant developments in deep learning techniques, solving underdetermined inverse problems has become a major concern in medical imaging. Underdetermined problems arise from the desire to provide high-resolution medical images with as little data as possible, by optimizing data collection for minimal acquisition time, cost-effectiveness, and low invasiveness. Typical examples include undersampled magnetic resonance imaging (MRI), interior tomography, and sparse-view computed tomography (CT), where deep learning techniques have achieved excellent performance. However, mathematical analysis of why deep learning methods perform well is lacking. This study aims to explain which structures of training data are suitable for deep learning to solve highly underdetermined problems. We present a particular low-dimensional solution model to highlight the advantage of deep learning methods over conventional methods, where the two approaches use prior information about the solution in completely different ways. We also analyze whether deep learning methods can learn the desired reconstruction map from training data in three models (undersampled MRI, sparse-view CT, and interior tomography). This paper also discusses the nonlinear nature of solving underdetermined linear systems and conditions for learning (the so-called M-RIP condition).
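
A toy numerical illustration of the low-dimensional solution model is given below: an underdetermined system y = Ax cannot be recovered by a generic minimum-norm solve, but it can be recovered exactly once the solution is known to lie in a low-dimensional subspace x = Dz with the latent dimension well below the number of measurements. Dimensions and matrices are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 40, 10                       # signal size, measurements (m << n), latent dimension

    D = rng.normal(size=(n, k))                 # low-dimensional solution model x = D z
    x_true = D @ rng.normal(size=k)
    A = rng.normal(size=(m, n))                 # underdetermined measurement operator
    y = A @ x_true

    # minimum-norm solution ignores the prior and fails
    x_pinv = np.linalg.pinv(A) @ y

    # exploiting the model: solve the m x k system (A D) z = y, well-posed since m >= k
    z, *_ = np.linalg.lstsq(A @ D, y, rcond=None)
    x_model = D @ z

    rel = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error, minimum-norm: {rel(x_pinv):.3f}   with low-dim model: {rel(x_model):.2e}")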


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Tomography, X-Ray Computed
8.
Comput Methods Programs Biomed ; 200: 105833, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33250283

ABSTRACT

For compression fracture detection and evaluation, we propose an automatic X-ray image segmentation technique that combines deep learning and level-set methods. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because X-ray images contain overlapping shadows of thoracoabdominal structures, including the lungs, bowel gas, and other bony structures such as the ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebrae, inter-patient variability, and variations in image contrast. Accordingly, we present a structured hierarchical segmentation method that combines the advantages of two deep learning methods. Pose-driven learning is used to identify the five lumbar vertebrae accurately and robustly. With knowledge of the vertebral positions, M-net is employed to segment each individual vertebra. Finally, fine-tuning of the segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated on 160 lumbar X-ray images, yielding a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of individual vertebrae.


Subject(s)
Fractures, Compression , Algorithms , Fractures, Compression/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Lumbar Vertebrae/diagnostic imaging , Tomography, X-Ray Computed , X-Rays
9.
Phys Med Biol ; 65(8): 085018, 2020 04 23.
Article in English | MEDLINE | ID: mdl-32101805

ABSTRACT

The annotation of three-dimensional (3D) cephalometric landmarks in 3D computed tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. Automating 3D landmarking with high precision remains challenging owing to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of the full set of landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The VAE enables 2D-image-based learning of 3D morphological features and similarity/dissimilarity representation learning of the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using a small number of training CT datasets. Notably, the VAE captures variations in craniofacial structural characteristics.


Subject(s)
Anatomic Landmarks , Cephalometry , Imaging, Three-Dimensional/standards , Machine Learning , Automation , Humans , Reproducibility of Results , Skull/anatomy & histology , Skull/diagnostic imaging , Tomography, X-Ray Computed
10.
Article in English | MEDLINE | ID: mdl-31221614

ABSTRACT

OBJECTIVE: The purpose of this study was to evaluate the accuracy of an optical tracking system during reference point localization, measurement, and registration of skull models for navigational maxillary orthognathic surgery. STUDY DESIGN: Accuracy was first evaluated on the basis of the position recording discrepancy at a static point and at 2 points of fixed lengths. Ten reference points were measured on a skull model at 7 different locations, and their measurements were compared with predicted positions by using 4 registration methods. Finally, positional tracking of reference points for simulated maxillary surgery was performed and compared with laser scan data. RESULTS: The average linear measurement discrepancy was 0.28 mm, and the mean measurement discrepancy with the 5 registered cranial points was 1.53 mm. The average measurement discrepancy after maxillary surgery was 1.91 mm (for impaction) and 1.56 mm (for advancement). The registration discrepancy in jitter and point registration on the y-axis was significantly greater than on the other axes. CONCLUSIONS: The optical tracking system seems clinically acceptable for precise tracking of the maxillary position during navigational orthognathic surgery, notwithstanding the chance of greater measurement error on the y-axis.


Subject(s)
Orthognathic Surgical Procedures , Surgery, Computer-Assisted , Imaging, Three-Dimensional , Maxilla , Orthognathic Surgery
11.
Physiol Meas ; 40(6): 065009, 2019 07 01.
Article in English | MEDLINE | ID: mdl-31091515

ABSTRACT

OBJECTIVE: Ultrasound-based fetal biometric measurements, such as head circumference (HC) and biparietal diameter (BPD), are frequently used to evaluate gestational age and diagnose fetal central nervous system pathology. Because manual measurements are operator-dependent and time-consuming, automated methods are being actively researched. However, existing automated methods are still not satisfactory in terms of accuracy and reliability, owing to the difficulty of dealing with various artefacts in ultrasound images. APPROACH: A labeled dataset containing 102 ultrasound images was used to train the proposed method, and validation was performed on 70 ultrasound images. MAIN RESULTS: The method achieved success rates of 91.43% and 100% for HC and BPD estimation, respectively, and an accuracy of 87.14% for the plane acceptance check. SIGNIFICANCE: This paper focuses on fetal head biometry and proposes a deep learning-based method for estimating HC and BPD with a high degree of accuracy and reliability.
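
In practice, HC is often read off a fitted ellipse; the helper below uses Ramanujan's perimeter approximation to turn assumed semi-axes into a circumference, with BPD taken here, loosely, as the minor-axis diameter. The numbers are illustrative and this is not the paper's measurement protocol.

    import math

    def ellipse_circumference(a_mm, b_mm):
        """Ramanujan's approximation of an ellipse perimeter (semi-axes in mm)."""
        h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
        return math.pi * (a_mm + b_mm) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

    # illustrative semi-axes of a fitted fetal-head ellipse
    a, b = 50.0, 40.0                      # mm
    print(f"HC ~ {ellipse_circumference(a, b):.1f} mm, BPD ~ {2 * b:.1f} mm")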


Subject(s)
Biometry , Head/diagnostic imaging , Image Processing, Computer-Assisted , Machine Learning , Ultrasonography, Prenatal , Automation , Cephalometry , Humans , Regression Analysis
12.
Phys Med Biol ; 64(5): 055002, 2019 02 20.
Article in English | MEDLINE | ID: mdl-30669128

ABSTRACT

This paper presents a new approach to automatic three-dimensional (3D) cephalometric annotation for diagnosis, surgical planning, and treatment evaluation. There has long been considerable demand for automated cephalometric landmarking, since manual landmarking requires considerable time and experience, as well as objectivity and scrupulous error avoidance. Owing to the inherent limitations of two-dimensional (2D) cephalometry and the 3D nature of surgical simulation, there is a trend away from current 2D cephalometry toward 3D cephalometry. Deep learning approaches to cephalometric landmarking appear highly promising, but serious difficulties remain in handling high-dimensional 3D CT data, where dimension refers to the number of voxels. To address this issue of dimensionality, this paper proposes a shadowed 2D image-based machine learning method that uses multiple shadowed 2D images with various lighting and view directions to capture 3D geometric cues. The proposed method, based on VGG-net, was trained and tested on 2700 shadowed 2D images and the corresponding manual landmarkings. Evaluation on the test data shows that our method achieved an average point-to-point error of 1.5 mm for the seven major landmarks.
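
The sketch below shows one simple way to produce a "shadowed 2D image" from a binary 3D mask: take the first-hit depth along a view axis and Lambert-shade it with a chosen light direction. It is a toy stand-in for the paper's rendering with various lighting and view directions; the sphere volume and light vector are assumptions.

    import numpy as np

    def shadowed_projection(mask, light=(0.5, 0.5, 1.0)):
        """Shaded 2-D view of a binary mask seen along the z-axis:
        first-hit depth per (y, x) ray, Lambert-shaded by the given light direction."""
        z_first = np.argmax(mask, axis=0).astype(float)          # first True along z (0 if none)
        hit = mask.any(axis=0)
        depth = np.where(hit, z_first, mask.shape[0])            # background pushed to the far plane
        gy, gx = np.gradient(depth)
        normals = np.dstack([-gx, -gy, np.ones_like(depth)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        light = np.asarray(light, dtype=float)
        light /= np.linalg.norm(light)
        shade = np.clip(normals @ light, 0.0, 1.0)
        return np.where(hit, shade, 0.0)

    # toy volume: a sphere standing in for a skull
    z, y, x = np.mgrid[:64, :96, :96]
    sphere = (z - 32) ** 2 + (y - 48) ** 2 + (x - 48) ** 2 < 28 ** 2
    img = shadowed_projection(sphere)
    print(img.shape, float(img.max()))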


Subject(s)
Anatomic Landmarks , Cephalometry/methods , Imaging, Three-Dimensional/standards , Machine Learning , Automation , Humans , Reproducibility of Results
13.
Med Phys ; 45(12): 5376-5384, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30238586

ABSTRACT

PURPOSE: This paper proposes a sinogram-consistency learning method to deal with beam hardening-related artifacts in polychromatic computed tomography (CT). The presence of highly attenuating materials in the scan field causes an inconsistent sinogram that does not match the range space of the Radon transform. When the mismatched data are entered into the range space during CT reconstruction, streaking and shading artifacts are generated owing to the inherent nature of the inverse Radon transform. METHODS: The proposed learning method aims to repair the inconsistent sinogram by removing the primary metal-induced beam hardening factors along the metal trace in the sinogram. Taking into account the fundamental difficulty of obtaining sufficient training data in a medical environment, the learning method is designed to use simulated training data, and a learning model specific to the patient's implant type is used to simplify the learning process. RESULTS: The feasibility of the proposed method is investigated using a dataset consisting of real CT scans of pelvises containing simulated hip prostheses. The anatomical areas in the training and test data are different, in order to demonstrate that the proposed method selectively extracts beam hardening features. The results show that our method successfully corrects sinogram inconsistency by extracting beam hardening sources by means of deep learning. CONCLUSION: This paper proposes, for the first time, a deep learning method of sinogram correction for beam hardening reduction in CT. Conventional methods for beam hardening reduction are based on regularization and have the fundamental drawback of not being able to easily exploit the manifold of CT images, whereas a deep learning approach has the potential to do so.
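
For context, the classical non-learning baseline that the learned sinogram repair improves upon is simple interpolation across the metal trace; a minimal version is sketched below. This is not the proposed method, and the sinogram and trace here are random placeholders.

    import numpy as np

    def interpolate_metal_trace(sinogram, metal_trace):
        """Replace sinogram values inside the metal trace by 1-D linear interpolation
        along the detector axis, view by view (classical baseline, not the learned repair)."""
        repaired = sinogram.copy()
        det = np.arange(sinogram.shape[0])
        for v in range(sinogram.shape[1]):
            bad = metal_trace[:, v]
            if bad.any() and (~bad).any():
                repaired[bad, v] = np.interp(det[bad], det[~bad], sinogram[~bad, v])
        return repaired

    # toy usage
    sino = np.random.rand(180, 360)
    trace = np.zeros_like(sino, dtype=bool)
    trace[80:100, :] = True
    print(np.abs(interpolate_metal_trace(sino, trace) - sino)[trace].mean())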


Subject(s)
Artifacts , Image Processing, Computer-Assisted/methods , Machine Learning , Metals , Tomography, X-Ray Computed , Humans , Pelvis/diagnostic imaging
14.
Physiol Meas ; 39(10): 105007, 2018 10 22.
Article in English | MEDLINE | ID: mdl-30226815

ABSTRACT

OBJECTIVE: Obstetricians mainly use ultrasound imaging for fetal biometric measurements. However, such measurements are cumbersome, so there is an urgent need for automatic biometric estimation. Automated analysis of ultrasound images is complicated by the patient-specific, operator-dependent, and machine-specific characteristics of such images. APPROACH: This paper proposes a method for automatic fetal biometry estimation from 2D ultrasound data through several processes, each handled by a specially designed convolutional neural network (CNN) or U-Net. These machine learning techniques take clinicians' decisions, anatomical structures, and the characteristics of ultrasound images into account. The proposed method is divided into three steps: initial abdominal circumference (AC) estimation, AC measurement, and plane acceptance checking. MAIN RESULTS: A CNN is used to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein), and a Hough transform is used to obtain an initial estimate of the AC. These data are applied to other CNNs to estimate the spine position and bone regions, and the obtained information is then used to determine the final AC. After the AC is determined, a U-Net and a classification CNN check whether the image is suitable for AC measurement. Finally, the efficacy of the proposed method is validated on clinical data. SIGNIFICANCE: Our method achieved a Dice similarity metric of [Formula: see text] for AC measurement and an accuracy of 87.10% for the acceptance check of the fetal abdominal standard plane.
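
The Hough-transform step that yields the initial circular AC estimate might look like the sketch below, which uses scikit-image's circular Hough transform on a synthetic edge map standing in for the detected abdominal boundary; the candidate radii and the image are illustrative.

    import numpy as np
    from skimage.draw import circle_perimeter
    from skimage.transform import hough_circle, hough_circle_peaks

    # synthetic edge map standing in for the detected abdominal boundary
    edges = np.zeros((256, 256), dtype=bool)
    rr, cc = circle_perimeter(128, 130, 70)
    edges[rr, cc] = True

    radii = np.arange(50, 90, 2)                     # candidate radii in pixels (illustrative)
    accum = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
    circumference_px = 2 * np.pi * r[0]
    print(f"initial AC estimate: center (col, row) = ({cx[0]}, {cy[0]}), radius = {r[0]} px, "
          f"circumference = {circumference_px:.1f} px")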


Subject(s)
Abdomen/diagnostic imaging , Abdomen/embryology , Biometry/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Ultrasonography, Prenatal/methods , Abdomen/anatomy & histology , Female , Humans , Pattern Recognition, Automated/methods , Pregnancy
15.
IEEE J Biomed Health Inform ; 22(5): 1512-1520, 2018 09.
Article in English | MEDLINE | ID: mdl-29990257

ABSTRACT

Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because the process is time-consuming, there has been great demand for automatic estimation. However, automated analysis of ultrasound images is complicated because they are patient-specific, operator-dependent, and machine-specific. Among the various types of fetal biometry, accurate estimation of abdominal circumference (AC) is especially difficult to automate because the abdomen has low contrast against its surroundings, nonuniform contrast, and an irregular shape compared with other biometric structures. We propose a method for automatic estimation of the fetal AC from two-dimensional ultrasound data through a specially designed convolutional neural network (CNN) that takes into account doctors' decision processes, anatomical structure, and the characteristics of ultrasound images. The proposed method uses the CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transform to measure the AC. We tested the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively few training samples, the proposed CNN provides classification results sufficient for AC estimation via the Hough transform. The proposed method automatically estimates the AC from ultrasound images; it is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images degraded by shadowing artifacts. In our acceptance-check experiments, the accuracies were 0.809 and 0.771 against expert 1 and expert 2, respectively, whereas the accuracy between the two experts was 0.905. However, for cases with an oversized fetus, or when the amniotic fluid is not observed or the abdominal area is distorted, the method could not correctly estimate the AC.


Subject(s)
Abdomen/diagnostic imaging , Fetus/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Ultrasonography, Prenatal/methods , Female , Humans , Neural Networks, Computer , Pregnancy
16.
Phys Med Biol ; 63(13): 135007, 2018 06 25.
Article in English | MEDLINE | ID: mdl-29787383

ABSTRACT

This paper presents a deep learning method for faster magnetic resonance imaging (MRI) that reduces k-space data with sub-Nyquist sampling strategies, and it provides a rationale for why the proposed approach works well. Uniform subsampling is used in the time-consuming phase-encoding direction to capture high-resolution image information, while accepting the image-folding problem dictated by the Poisson summation formula. To deal with the localization uncertainty due to image folding, a small amount of low-frequency k-space data is added. Training the deep learning network involves input and output images that are pairs of Fourier transforms of the subsampled and fully sampled k-space data. Our experiments show the remarkable performance of the proposed method; only 29[Formula: see text] of the k-space data can generate images whose quality matches that of standard MRI reconstruction with fully sampled data.
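
The sampling pattern described here, uniform subsampling along the phase-encoding direction plus a small fully sampled low-frequency band, can be sketched as below; the zero-filled inverse FFT of the masked k-space would be the network input. Matrix size, acceleration factor, and the number of low-frequency lines are illustrative.

    import numpy as np

    def subsampling_mask(n_pe, accel=4, n_low_freq=16):
        """Uniform phase-encoding subsampling plus a small fully sampled low-frequency band."""
        mask = np.zeros(n_pe, dtype=bool)
        mask[::accel] = True                                   # uniform subsampling -> fold-over
        center = n_pe // 2
        mask[center - n_low_freq // 2: center + n_low_freq // 2] = True  # resolves the ambiguity
        return mask

    # toy fully sampled k-space of a 2-D "image"
    img = np.zeros((128, 128)); img[32:96, 40:88] = 1.0
    kspace = np.fft.fftshift(np.fft.fft2(img))

    mask = subsampling_mask(128)
    under = kspace * mask[None, :]                             # subsample along the phase-encoding axis
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(under)))
    print(f"sampled fraction: {mask.mean():.2%}, zero-filled recon shape: {zero_filled.shape}")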


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Algorithms , Fourier Analysis , Humans , Uncertainty
17.
Int J Numer Method Biomed Eng ; 34(7): e2980, 2018 07.
Article in English | MEDLINE | ID: mdl-29521020

ABSTRACT

The electrical properties of human tissues are usually linked to the structure of thin insulating membranes and thereby reflect the physiological function of the tissues or organs, so it is clinically important to characterize the electrical properties of tissues in vivo. Electrical impedance tomography is a recently developed medical imaging technique that has been exploited to characterize the electrical properties (conductivity and permittivity) of human tissues by injecting currents and measuring the resulting voltages at boundary electrodes. The electrical characteristics of many human tissues, such as bone, muscle, and brain white matter, are anisotropic, and this anisotropy is frequency-dependent, vanishing at high frequencies. Previous electrical impedance tomography studies aimed at reconstructing anisotropic tissues have focused on the theoretical analysis of uniqueness up to a diffeomorphism or on establishing an accurate forward model using an anisotropic conductivity tensor. However, the effect of the current frequency on the accuracy of reconstructions of anisotropic subjects remains poorly studied. The goal of this study is to examine, in a simulation study, the feasibility of multifrequency electrical impedance tomography for recovering the frequency-dependent anisotropic properties of a phantom composed of alternating insulating and conductive layers. The anisotropic properties of the subject were analyzed through an effective admittivity tensor, and the current flow pathways and voltage responses were investigated at various applied current frequencies in the forward model. Linear reconstruction was performed with the sensitivity-matrix approach at multiple frequencies. Simulation results at various frequencies revealed that the anisotropy of the model was effectively reconstructed at low frequencies and disappeared at high frequencies, validating the feasibility of the multifrequency electrical impedance tomography method for reconstructing the anisotropic directions of the considered object.
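
A small calculation hints at why the layered phantom's anisotropy fades with frequency: treating the stack of conductive and insulating layers as an effective medium, the admittivity parallel to the layers is the volume-weighted arithmetic mean and the perpendicular one is the harmonic mean, and the two converge once the insulating layers' capacitive admittance (i*omega*epsilon) becomes large. The material values below are illustrative, not the phantom's.

    import numpy as np

    eps0 = 8.854e-12
    f = np.logspace(1, 8, 8)                       # 10 Hz .. 100 MHz
    omega = 2 * np.pi * f

    # illustrative alternating layers: conductive saline-like and thin insulating sheets
    sig_c, eps_c, frac_c = 1.0, 80 * eps0, 0.9     # conductivity (S/m), permittivity, volume fraction
    sig_i, eps_i, frac_i = 1e-7, 3 * eps0, 0.1

    y_c = sig_c + 1j * omega * eps_c               # complex admittivities of the two layers
    y_i = sig_i + 1j * omega * eps_i

    y_par = frac_c * y_c + frac_i * y_i            # parallel to the layers: arithmetic mean
    y_perp = 1.0 / (frac_c / y_c + frac_i / y_i)   # perpendicular to the layers: harmonic mean

    for fk, a in zip(f, np.abs(y_par / y_perp)):
        print(f"{fk:12.0f} Hz   anisotropy ratio |y_par/y_perp| = {a:12.2f}")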


Subject(s)
Computer Simulation , Electric Impedance , Tomography , Anisotropy , Models, Theoretical , Numerical Analysis, Computer-Assisted
18.
Phys Med Biol ; 63(4): 045011, 2018 02 13.
Article in English | MEDLINE | ID: mdl-29345626

ABSTRACT

We sought to improve the efficiency of magnetic resonance electrical impedance tomography (MREIT) data acquisition so that fast conductivity changes or electric field variations could be monitored. Undersampling of k-space was used to halve acquisition times in spin-echo-based sequences. Full MREIT data were reconstructed using continuity assumptions and preliminary scans gathered without current. We found that phase data were reconstructed faithfully from the undersampled data, and conductivity reconstructions of phantom data were also possible. Therefore, undersampled k-space methods can potentially be used to accelerate MREIT acquisition, which could be an advantage for imaging real-time conductivity changes with MREIT.


Subject(s)
Algorithms , Electric Conductivity , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Magnetic Resonance Imaging/methods , Phantoms, Imaging , Tomography/methods , Humans
19.
IEEE Trans Med Imaging ; 37(9): 1970-1977, 2018 09.
Article in English | MEDLINE | ID: mdl-29035213

ABSTRACT

Electrical impedance tomography (EIT) provides functional images of an electrical conductivity distribution inside the human body. Since the 1980s, many potential clinical applications have arisen using inexpensive portable EIT devices. EIT acquires multiple trans-impedance measurements across the body from an array of surface electrodes around a chosen imaging slice. The conductivity image reconstruction from the measured data is a fundamentally ill-posed inverse problem notoriously vulnerable to measurement noise and artifacts. Most available methods invert the ill-conditioned sensitivity or the Jacobian matrix using a regularized least-squares data-fitting technique. Their performances rely on the regularization parameter, which controls the trade-off between fidelity and robustness. For clinical applications of EIT, it would be desirable to develop a method achieving consistent performance over various uncertain data, regardless of the choice of the regularization parameter. Based on the analysis of the structure of the Jacobian matrix, we propose a fidelity-embedded regularization (FER) method and a motion artifact reduction filter. Incorporating the Jacobian matrix in the regularization process, the new FER method with the motion artifact reduction filter offers stable reconstructions of high-fidelity images from noisy data by taking a very large regularization parameter value. The proposed method showed practical merits in experimental studies of chest EIT imaging.
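
For reference, the standard baseline the abstract contrasts FER with, a one-step Tikhonov-regularized least-squares inversion of the Jacobian, is sketched below with a random matrix standing in for the real sensitivity matrix; it is not the FER method itself, and the regularization parameter is hand-picked.

    import numpy as np

    rng = np.random.default_rng(2)
    n_meas, n_pix = 208, 576                      # e.g. 16-electrode EIT measurements, coarse pixel grid
    J = rng.normal(size=(n_meas, n_pix))          # placeholder for the real sensitivity (Jacobian) matrix

    d_sigma_true = np.zeros(n_pix)
    d_sigma_true[100:140] = 1.0                                      # conductivity change
    d_v = J @ d_sigma_true + 0.01 * rng.normal(size=n_meas)          # noisy voltage-difference data

    lam = 1e-1 * np.trace(J.T @ J) / n_pix        # regularization parameter (hand-tuned here)
    d_sigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ d_v)

    corr = np.corrcoef(d_sigma, d_sigma_true)[0, 1]
    print(f"correlation with the true conductivity change: {corr:.2f}")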


Subject(s)
Electric Impedance , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Tomography/methods , Algorithms , Humans , Respiratory Function Tests/methods
20.
Med Phys ; 44(9): e147-e152, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28901618

ABSTRACT

PURPOSE: This study proposes a physics-based method of reducing beam-hardening artifacts induced by high-attenuation materials such as metal stents or other metallic implants. METHODS: The proposed approach derives a sinogram inconsistency formula representing the energy dependence of the attenuation coefficient of high-attenuation materials. This formula represents the inconsistencies of the sinogram more accurately than a previously reported formula (the MAC-BC method), by accounting for the properties of the high-attenuation materials, including their shapes, locations, attenuation coefficients, and effects on the incident X-ray spectrum. RESULTS: Numerical simulations and a phantom experiment demonstrate that the modeling errors of the MAC-BC method are nearly completely removed by the proposed method. CONCLUSION: The proposed method reduces beam-hardening artifacts arising from high-attenuation materials by relaxing the assumptions of the MAC-BC method and, in doing so, outperforms the original MAC-BC method. Further research is required to address other sources of metal artifacts, such as photon starvation, scattering, and noise.
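
A two-energy toy calculation below shows the kind of sinogram inconsistency the formula models: with an energy-dependent attenuation coefficient, -log of the polychromatic measurement is no longer linear in the metal path length, and the gap grows with path length. The spectrum weights and attenuation values are illustrative only.

    import numpy as np

    # two-bin X-ray spectrum (weights sum to 1) and energy-dependent attenuation of a metal (1/mm)
    weights = np.array([0.6, 0.4])                # low-energy, high-energy fractions (illustrative)
    mu_metal = np.array([1.20, 0.45])             # attenuation at the two energies (illustrative)

    path_mm = np.linspace(0.0, 5.0, 6)            # metal path lengths intersected by a ray

    # polychromatic measurement: I/I0 = sum_E w(E) * exp(-mu(E) * L)
    poly = (weights[None, :] * np.exp(-np.outer(path_mm, mu_metal))).sum(axis=1)
    p_measured = -np.log(poly)                    # what the scanner effectively records

    # monochromatic (Radon-consistent) model with the effective attenuation at L -> 0
    mu_eff = weights @ mu_metal
    p_linear = mu_eff * path_mm

    for L, pm, pl in zip(path_mm, p_measured, p_linear):
        print(f"L = {L:3.1f} mm   measured {pm:6.3f}   linear model {pl:6.3f}   inconsistency {pm - pl:7.3f}")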


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Humans , Metals , Phantoms, Imaging , Tomography, X-Ray Computed