Results 1 - 20 of 32
1.
J Xray Sci Technol ; 31(1): 13-26, 2023.
Article in English | MEDLINE | ID: mdl-36278390

ABSTRACT

Several limitations of the algorithms and datasets used in X-ray security inspection result in low accuracy of X-ray image inspection. In the literature, few studies and datasets have addressed the segmentation of dangerous objects. In this work, we contribute a purely manual segmentation labeling of the existing X-ray security inspection dataset SIXRay with pixel-level semantic information of dangerous objects. We also propose a composition method for X-ray security inspection images to effectively augment the positive samples; it quickly generates positive sample images using affine transformations and the HSV features of X-ray images. Furthermore, to improve recognition accuracy, especially for adjacent and overlapping dangerous objects, we propose combining a target-detection component, softer non-maximum suppression (Softer-NMS), with Mask R-CNN, yielding a model termed Softer-Mask R-CNN. Compared with the original Mask R-CNN, Softer-Mask R-CNN improves accuracy (mAP) by 3.4%, and by 6.2% when the synthetic data are added. The results indicate that our proposed method can effectively improve the recognition of dangerous objects depicted in X-ray security inspection images.
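Softer-NMS replaces the hard deletion step of standard NMS with score softening and variance-weighted box refinement. As a rough, illustrative sketch of the soft-suppression idea it builds on (not the paper's exact algorithm; the Gaussian decay rule, `sigma`, and the thresholds below are assumptions), consider:

```python
import math

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Gaussian soft suppression: instead of deleting boxes that overlap the
    # current top box, decay their scores by exp(-IoU^2 / sigma).
    scores = list(scores)
    order = list(range(len(boxes)))
    keep = []
    while order:
        order.sort(key=lambda i: -scores[i])  # rescore-aware ordering
        i = order.pop(0)
        if scores[i] <= score_thresh:
            break
        keep.append(i)
        for j in order:
            scores[j] *= math.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
keep = soft_nms(boxes, scores)  # overlapping box 1 is down-weighted, not dropped
```

Unlike hard NMS, heavily overlapping boxes survive with reduced scores, which helps exactly the adjacent/overlapping-object case the abstract targets.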


Subjects
Deep Learning, X-Rays, Radiography, Algorithms
2.
J Xray Sci Technol ; 30(4): 805-822, 2022.
Article in English | MEDLINE | ID: mdl-35599528

ABSTRACT

The X-ray tube of a computed tomography (CT) system emits a polychromatic spectrum of photons, which leads to beam-hardening artifacts such as cupping and streaks, while metal implants in the imaged object produce metal artifacts in the reconstructed images. The simultaneous presence of these beam-hardening artifacts degrades the diagnostic accuracy of clinical CT images, so their suppression deserves thorough investigation. In this study, a data consistency condition is exploited to construct an objective function, and a non-convex optimization algorithm is employed to solve for the optimal scaling factors. Finally, an optimal bone correction is obtained that simultaneously corrects for cupping, streak, and metal artifacts. Experimental results from a realistic computer simulation demonstrate that the proposed method can adaptively determine the optimal scaling factors and then correct for various beam-hardening artifacts in the reconstructed CT images. In particular, compared with nonlinear least squares before variable substitution, the running time of the new CT image reconstruction algorithm decreases by 82.36% and the residual error by 55.95%; compared with nonlinear least squares after variable substitution, the running time decreases by 67.54% at the same residual error.


Subjects
Artifacts, X-Ray Computed Tomography, Algorithms, Computer Simulation, Computer-Assisted Image Processing, Imaging Phantoms
3.
Opt Express ; 27(3): 2056-2073, 2019 Feb 04.
Article in English | MEDLINE | ID: mdl-30732250

ABSTRACT

Precisely evaluating the geometrical attenuation factor is critical for constructing a more complete bidirectional reflectance distribution function (BRDF) model. Conventional theories for determining the geometrical attenuation factor neglect the correlation between height and slope as well as the self-shadowing and self-masking effects on microsurfaces, leading to results that clearly deviate from reality. This paper presents a three-dimensional (3D) geometrical attenuation factor formulation on 3D Gaussian random rough surfaces. The proposed numerical analysis of the 3D geometrical attenuation factor is considerably more precise in practical applications, especially near grazing angles, and can precisely evaluate the BRDF model.

4.
J Xray Sci Technol ; 26(3): 435-448, 2018.
Article in English | MEDLINE | ID: mdl-29562580

ABSTRACT

Optimization-based image reconstruction methods have been thoroughly investigated in medical imaging. The Chambolle-Pock (CP) algorithm may be employed to solve such convex optimization image reconstruction programs, and the preconditioned CP (PCP) algorithm has been shown to converge much faster than the ordinary CP (OCP) algorithm. The PCP algorithm uses a preconditioner-parameter, ranging from 0 to 2 but often simply set to 1, to tune the implementation to the specific application. In this work, we investigated the impact of the preconditioner-parameter on the convergence rate of the PCP algorithm when applied to TV-constrained, data-divergence-minimization (TVDM) optimization-based image reconstruction. The investigations were performed in the context of 2D computed tomography (CT) and 3D electron paramagnetic resonance imaging (EPRI). For 2D CT, we used the Shepp-Logan and two FORBILD phantoms; for 3D EPRI, a simulated 6-spheres phantom and a physical phantom. The results show that the optimal preconditioner-parameter depends on the specific imaging conditions: simply setting the parameter to 1 cannot guarantee a fast convergence rate. Thus, the preconditioner-parameter should be tuned adaptively to obtain the optimal convergence rate of the PCP algorithm.
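The preconditioner-parameter enters through the diagonal step sizes of Pock and Chambolle's preconditioned scheme, where the dual and primal steps are built from row and column sums of |K| raised to the powers α and 2 − α. Below is a minimal pure-Python sketch on a toy least-squares problem (the problem, matrix, and iteration count are illustrative assumptions; the paper applies PCP to TV-constrained reconstruction, not to this toy):

```python
# Toy preconditioned Chambolle-Pock (PCP) iteration for min_x 0.5*||K x - b||^2,
# i.e. F(y) = 0.5*||y - b||^2 (prox_{s F*}(p) = (p - s*b) / (1 + s)) and G = 0.
# Diagonal preconditioners (Pock & Chambolle, 2011):
#   sigma_i = 1 / sum_j |K_ij|^alpha,   tau_j = 1 / sum_i |K_ij|^(2 - alpha),
# with the preconditioner-parameter alpha in [0, 2] (commonly set to 1).

def pcp_least_squares(K, b, alpha=1.0, iters=2000):
    m, n = len(K), len(K[0])
    sigma = [1.0 / sum(abs(K[i][j]) ** alpha for j in range(n)) for i in range(m)]
    tau = [1.0 / sum(abs(K[i][j]) ** (2.0 - alpha) for i in range(m)) for j in range(n)]
    x, xbar, p = [0.0] * n, [0.0] * n, [0.0] * m
    for _ in range(iters):
        for i in range(m):  # dual ascent followed by prox of F*
            q = p[i] + sigma[i] * sum(K[i][j] * xbar[j] for j in range(n))
            p[i] = (q - sigma[i] * b[i]) / (1.0 + sigma[i])
        # primal descent (prox of G is the identity) and over-relaxation
        x_new = [x[j] - tau[j] * sum(K[i][j] * p[i] for i in range(m)) for j in range(n)]
        xbar = [2.0 * x_new[j] - x[j] for j in range(n)]
        x = x_new
    return x

K = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 2.0]
x = pcp_least_squares(K, b, alpha=1.0)  # converges to the least-squares solution
```

Varying `alpha` changes the step-size balance between primal and dual updates, which is precisely the knob whose optimal setting the study finds to be problem-dependent.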


Assuntos
Algoritmos , Processamento de Imagem Assistida por Computador/métodos , Humanos , Processamento de Imagem Assistida por Computador/instrumentação , Imageamento Tridimensional/instrumentação , Imagens de Fantasmas , Tomografia Computadorizada por Raios X/instrumentação
5.
Sensors (Basel) ; 16(12)2016 Dec 15.
Article in English | MEDLINE | ID: mdl-27983680

ABSTRACT

When confronting a malicious rumor or an infectious disease outbreak, immunizing k nodes of the relevant network with limited resources is widely treated as an extremely effective method. The key challenge is how to select a limited set of nodes so as to minimize the propagation of the contagion. In previous work, the best k nodes to immunize are selected from the initial status and strategies of the nodes, with no feedback from the propagation process, which ultimately makes those solutions ineffective. In this paper, we design a novel vaccine placement strategy that protects many more healthy nodes from being infected. The main idea is that we not only use the changing status of nodes as auxiliary knowledge to adjust our scheme, but also compare the performance of vaccines across transmission time slots; our solution therefore has a better chance of gaining more benefit from a limited number of vaccines. Extensive experiments on several real-world datasets show that our algorithm outperforms previous work.
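To make the feedback idea concrete, here is a deliberately simplified pure-Python sketch: a deterministic SI spread in which, at each time slot, the vaccine goes to the susceptible node with the most infected neighbours. This heuristic and the toy graph are illustrative assumptions, not the paper's algorithm:

```python
def adaptive_immunize(adj, seeds, budget_per_slot=1, slots=5):
    # Feedback-driven placement on a deterministic SI process: each time slot,
    # vaccinate the susceptible node with the most infected neighbours (ties
    # broken by node id), then let every infected-susceptible edge transmit.
    infected, vaccinated = set(seeds), set()
    for _ in range(slots):
        # Feedback step: rank candidates by the *observed* infection status.
        candidates = [v for v in adj if v not in infected and v not in vaccinated]
        candidates.sort(key=lambda v: (-sum(u in infected for u in adj[v]), v))
        vaccinated.update(candidates[:budget_per_slot])
        # Deterministic spread along the remaining infected->susceptible edges.
        infected |= {v for v in adj
                     if v not in infected and v not in vaccinated
                     and any(u in infected for u in adj[v])}
    return infected, vaccinated

# Hub node 0 sits between the seed and the rest of a small toy network.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4, 6], 6: [5]}
infected, vaccinated = adaptive_immunize(adj, seeds={1})
# Vaccinating the hub in the first slot contains the outbreak at the seed node.
```

A status-blind strategy with the same budget could spend its first vaccine elsewhere and let the infection cross the hub, which is the gap the feedback-aware placement closes.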


Subjects
Immunization, Algorithms, Communicable Diseases/immunology, Disease Resistance, Disease Susceptibility, Humans, Biological Models, Time Factors, Vaccines/immunology, Virus Diseases/immunology, Virus Diseases/transmission
6.
Physica A ; 420: 85-97, 2015 Feb 15.
Article in English | MEDLINE | ID: mdl-32288091

ABSTRACT

Immunizing important nodes has been shown to be an effective way to suppress epidemic spreading. Most studies focus on the globally important nodes in a network but neglect the locally important nodes within communities. We argue that, given the temporal community structure of opportunistic social networks (OSNs), this global strategy reflects a biased understanding of the epidemic dynamics, leading us to conjecture that "the more central, the better" does not hold for control strategies. In this paper, we track the evolution of community structure and study the effect of a community-based immunization strategy on epidemic spreading. We first decompose OSN traces into communities and find that community structure helps delay the outbreak of an epidemic. We then evaluate the local importance of nodes within communities and show that immunizing nodes with high local importance can remarkably suppress the epidemic. More interestingly, we find that nodes with high local importance but low global centrality play a large role in the spreading process: removing them improves immunization efficiency by 25% to 150% across different scenarios.

7.
Sensors (Basel) ; 14(7): 11605-28, 2014 Jun 30.
Article in English | MEDLINE | ID: mdl-24984062

ABSTRACT

Seat-level positioning of a smartphone in a vehicle can provide fine-grained context for many interesting in-vehicle applications, including driver distraction prevention, driving behavior estimation, and in-vehicle service customization. However, most existing work on in-vehicle positioning relies on special infrastructure, such as a stereo system, cigarette lighter adapter, or OBD (on-board diagnostics) adapter. In this work, we propose iLoc, an infrastructure-free, in-vehicle, cooperative positioning system based on smartphones. iLoc requires no extra devices, using only the sensors embedded in smartphones to determine the phones' seat-level locations in a car. In iLoc, in-vehicle smartphones automatically collect data during certain kinds of events and cooperatively determine their relative left/right and front/back locations. In addition, iLoc is tolerant of noisy data and possible sensor errors. We evaluate the performance of iLoc in experiments conducted in real driving scenarios. Results show that the positioning accuracy reaches 90% in the majority of cases and around 70% even in the worst cases.

8.
Article in English | MEDLINE | ID: mdl-37278039

ABSTRACT

INTRODUCTION: To understand the risk factors of asthma, we combined genome-wide association study (GWAS) risk loci and clinical data to predict asthma using machine-learning approaches. METHODS: A case-control study with 123 asthmatics and 100 controls was conducted in the Zhuang population of Guangxi. GWAS risk loci were detected using polymerase chain reaction, and clinical data were collected. Machine-learning approaches were used to identify the major factors that contribute to asthma. RESULTS: A total of 14 GWAS risk loci together with clinical data were analyzed using 10 repetitions of 10-fold cross-validation for all machine-learning models. Using GWAS risk loci or clinical data alone, the best models achieved area under the curve (AUC) values of 64.3% and 71.4%, respectively. Combining GWAS risk loci and clinical data, XGBoost established the best model, with an AUC of 79.7%, indicating that combining genetic and clinical data improves performance. We then ranked feature importance and found the top six risk factors for predicting asthma to be rs3117098, rs7775228, family history, rs2305480, rs4833095, and body mass index. CONCLUSION: Asthma-prediction models based on GWAS risk loci and clinical data can accurately predict asthma and thus provide insights into the disease's pathogenesis.
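AUC figures like those above can be computed directly from predicted scores via the rank-based (Mann-Whitney) formulation; a small illustrative pure-Python helper (not the study's code; the example labels and scores are made up):

```python
def auc(labels, scores):
    # Mann-Whitney U formulation: AUC = P(score_pos > score_neg),
    # with ties counted as 1/2.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
value = auc(labels, scores)  # 3 of the 4 positive-negative pairs ranked correctly
```

This pairwise-ranking view makes clear why AUC is insensitive to class imbalance, a useful property for a 123-vs-100 case-control split.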

9.
Med Phys ; 50(12): 7415-7426, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37860998

ABSTRACT

BACKGROUND: Functional assessment of the right ventricle (RV) using gated myocardial perfusion single-photon emission computed tomography (MPS) relies heavily on the precise extraction of right ventricular contours. PURPOSE: In this paper, we present a new deep-learning model that integrates both the spatial and temporal features of gated MPS images to segment the RV epicardium and endocardium. METHODS: By integrating the spatial features of each cardiac frame and the temporal features across the sequential cardiac frames of gated MPS, we developed a Spatial-Temporal V-Net (ST-VNet) for automatic extraction of RV endocardial and epicardial contours. In ST-VNet, a V-Net hierarchically extracts spatial features, and convolutional long short-term memory (ConvLSTM) units added to the skip-connection pathways extract temporal features. The input of ST-VNet is the ECG-gated sequence of MPS frames, and the output is the probability map of the epicardial or endocardial mask. A Dice similarity coefficient (DSC) loss, which penalizes the discrepancy between model prediction and manual annotation, was adopted to optimize the segmentation model. RESULTS: The segmentation model was trained and validated on a retrospective dataset of 45 subjects, with the cardiac cycle of each subject divided into eight gates. The proposed ST-VNet achieved DSCs of 0.8914 and 0.8157 for RV epicardium and endocardium segmentation, respectively. The mean absolute error, the mean squared error, and the Pearson correlation coefficient of the RV ejection fraction (RVEF) between manual annotation and model prediction were 0.0609, 0.0830, and 0.6985, respectively. CONCLUSION: Our proposed ST-VNet is an effective model for RV segmentation and holds great promise for clinical use in RV functional assessment.
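For binary masks, the Dice similarity coefficient driving the loss is 2|A∩B| / (|A| + |B|), and a soft variant on probability maps is commonly used for training. A minimal pure-Python sketch (the smoothing term `eps` is a common implementation convention assumed here, not taken from the paper):

```python
def soft_dice_loss(pred, target, eps=1e-6):
    # pred: predicted probabilities in [0, 1]; target: binary ground truth.
    # Loss = 1 - (2*sum(p*t) + eps) / (sum(p) + sum(t) + eps).
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

perfect = soft_dice_loss([1.0, 1.0, 0.0, 0.0], [1, 1, 0, 0])   # full overlap
disjoint = soft_dice_loss([1.0, 0.0, 0.0, 0.0], [0, 1, 0, 0])  # no overlap
```

The overlap-based normalization is what makes Dice far less sensitive than plain pixel accuracy to the large background area around a small structure such as the RV wall.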


Subjects
Heart Ventricles, Heart, Humans, Heart Ventricles/diagnostic imaging, Retrospective Studies, Heart/diagnostic imaging, Single Photon Emission Computed Tomography/methods, Perfusion, Computer-Assisted Image Processing/methods
10.
Comput Biol Med ; 160: 106954, 2023 06.
Article in English | MEDLINE | ID: mdl-37130501

ABSTRACT

Accurate segmentation of the left ventricle (LV) is crucial for evaluating myocardial perfusion SPECT (MPS) and assessing LV function. In this study, a novel method combining deep learning with shape priors was developed and validated to extract the LV myocardium and automatically measure LV functional parameters. The method integrates a three-dimensional (3D) V-Net with a shape deformation module that incorporates shape priors, generated by a dynamic programming (DP) algorithm, to guide its output during training. A retrospective analysis was performed on an MPS dataset comprising 31 subjects with no or mild ischemia, 32 subjects with moderate ischemia, and 12 subjects with severe ischemia. Myocardial contours were manually annotated as the ground truth. Five-fold stratified cross-validation was used to train and validate the models. Clinical performance was evaluated by measuring LV end-systolic volume (ESV), end-diastolic volume (EDV), left ventricular ejection fraction (LVEF), and scar burden from the extracted myocardial contours. There was excellent agreement between the segmentation results of our proposed model and the ground truth, with Dice similarity coefficients (DSC) of 0.9573 ± 0.0244, 0.9821 ± 0.0137, and 0.9903 ± 0.0041, and Hausdorff distances (HD) of 6.7529 ± 2.7334 mm, 7.2507 ± 3.1952 mm, and 7.6121 ± 3.0134 mm for the LV endocardium, myocardium, and epicardium, respectively. Furthermore, the correlation coefficients between LVEF, ESV, EDV, stress scar burden, and rest scar burden measured from our model results and from the ground truth were 0.92, 0.958, 0.952, 0.972, and 0.958, respectively. The proposed method achieved high accuracy in extracting LV myocardial contours and assessing LV function.
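The Hausdorff distance reported above is the largest of the nearest-point distances between two contours; a minimal pure-Python version for finite point sets (illustrative only; real evaluations typically use voxel-spacing-aware implementations):

```python
import math

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two finite point sets: the largest
    # distance from any point of one set to its nearest point in the other.
    def directed(X, Y):
        return max(min(math.dist(p, q) for q in Y) for p in X)
    return max(directed(A, B), directed(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 3.0)]
d = hausdorff(A, B)  # dominated by the point (0, 3), far from everything in A
```

Because it is a maximum rather than an average, HD complements Dice by flagging isolated outlier points on an otherwise well-matched contour.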


Assuntos
Aprendizado Profundo , Ventrículos do Coração , Humanos , Volume Sistólico , Estudos Retrospectivos , Ventrículos do Coração/diagnóstico por imagem , Ventrículos do Coração/patologia , Cicatriz , Função Ventricular Esquerda , Isquemia , Tomografia Computadorizada de Emissão de Fóton Único/métodos , Perfusão
11.
Med Phys ; 39(9): 5498-512, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22957617

ABSTRACT

PURPOSE: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose savings. Toward this purpose, statistical noise reduction methods in either the image or the projection domain have been proposed which employ a multiscale decomposition to enhance noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection-domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of the inter-view sampling rate in advanced clinical or preclinical applications. METHODS: The projection-domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation from the image domain into the projection domain, wherein a multiscale decomposition is carried out. Adopting a Markov random field or soft-thresholding objective function, the method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the noise reduction, an edge enhancement is carried out afterwards. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. RESULTS: The preliminary results show that the proposed projection-domain multiscale PWLS method outperforms the projection-domain single-scale PWLS method and the image-domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method preserves image sharpness very well while avoiding "salt-and-pepper" noise and mosaic artifacts.
CONCLUSIONS: Since the inter-view sampling rate is taken into account in the projection-domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the inter-view sampling rate varies.
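The soft-thresholding objective mentioned above corresponds to the shrinkage operator, which reduces each multiscale coefficient's magnitude by a threshold t and zeroes out small, noise-dominated coefficients. A one-function pure-Python sketch (the threshold choice is application-dependent and assumed here):

```python
def soft_threshold(coeffs, t):
    # Shrink each coefficient's magnitude by t; coefficients with |c| <= t,
    # assumed noise-dominated, are set to zero.
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

denoised = soft_threshold([3.0, -0.5, 0.2, -2.0], 1.0)
```

Applying this per scale suppresses the small coefficients where noise concentrates while only shrinking, not removing, the large edge-carrying coefficients.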


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Tomografia Computadorizada por Raios X/métodos , Análise dos Mínimos Quadrados
12.
Med Phys ; 39(7): 4467-82, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22830779

ABSTRACT

PURPOSE: Differential phase contrast CT (DPC-CT) is emerging as a new technology to improve the contrast sensitivity of conventional attenuation-based CT. The noise equivalent quanta as a function of spatial frequency, i.e., the spectrum of noise equivalent quanta NEQ(k), is a decisive indicator of the signal and noise transfer properties of an imaging system. In this work, we derive the functional form of NEQ(k) in DPC-CT. Via system modeling, analysis, and computer simulation, we evaluate and verify the derived NEQ(k) and compare it with that of conventional attenuation-based CT. METHODS: The DPC-CT is implemented with an x-ray tube and gratings. The x-ray propagation and data acquisition are modeled and simulated through Fresnel and Fourier analysis. A monochromatic x-ray source (30 keV) is assumed to exclude any system imperfection and any interference caused by scatter and beam hardening, while a 360° full scan is carried out in data acquisition to avoid any weighting scheme that might disrupt noise randomness. Adequate upsampling is implemented to simulate the x-ray beam's propagation through the gratings G1 and G2 with periods 8 and 4 µm, respectively, while the inter-grating distance is 193.6 mm (1/16 of the Talbot distance). The dimensions of the detector cell for data acquisition are 32 × 32, 64 × 64, 96 × 96, and 128 × 128 µm², corresponding to a 40.96 × 40.96 mm² field of view. An air phantom is employed to obtain the noise power spectrum NPS(k), the spectrum of noise equivalent quanta NEQ(k), and the detective quantum efficiency DQE(k). A cylindrical water phantom of 5.1 mm diameter and complex refractive index n = 1 - δ + iβ = 1 - 2.5604 × 10^-7 + i·1.2353 × 10^-10 is placed in air to measure the edge transfer function, the line spread function, and then the modulation transfer function MTF(k) of both DPC-CT and conventional attenuation-based CT.
The x-ray flux is set at 5 × 10^6 photons/cm² per projection and follows the Poisson distribution, consistent with that of a micro-CT for preclinical applications. Approximately 360 regions, each a 128 × 128 matrix, are used to calculate the NPS(k) via 2D Fourier transform, with adequate zero padding carried out to avoid aliasing in the noise. RESULTS: The preliminary data show that DPC-CT possesses a signal transfer property MTF(k) comparable to that of conventional attenuation-based CT. Meanwhile, although there is a radical difference in their noise power spectra NPS(k) (a 1/|k| trait in DPC-CT but |k| in conventional attenuation-based CT), the NEQ(k) and DQE(k) of DPC-CT and conventional attenuation-based CT are in principle identical. CONCLUSIONS: Under the framework of the ideal observer study, the joint signal and noise transfer property NEQ(k) and the detective quantum efficiency DQE(k) of DPC-CT are essentially the same as those of conventional attenuation-based CT. The findings reported in this paper may provide insightful guidelines for the research, development, and performance optimization of DPC-CT for extensive preclinical and clinical applications in the future.


Subjects
Algorithms, Radiographic Image Enhancement/methods, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Humans, Reproducibility of Results, Sensitivity and Specificity, Signal-to-Noise Ratio
13.
J Xray Sci Technol ; 20(4): 405-22, 2012.
Article in English | MEDLINE | ID: mdl-23324782

ABSTRACT

PURPOSE: The interior tomography problem can be solved using the differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can be of any shape, but most existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may span a large area of the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering, so that the small area containing the a priori knowledge can be roughly round (e.g., a sinus or a ventricle among other anatomic cavities in the human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. METHODS: We re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to ensure that only a small round area with the a priori knowledge is needed (henceforth the radial DBP-POCS method). The performance of the radial DBP-POCS method is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. RESULTS: The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way and at high accuracy. CONCLUSIONS: In comparison to implementations of the DBP-POCS method demanding a priori knowledge in a horizontal or vertical strip, the radial DBP-POCS method requires a priori knowledge only within a small round area.
Such a relaxed requirement on the availability of a priori knowledge can readily be met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in the human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.


Subjects
Algorithms, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Animals, Computer Simulation, Humans, Theoretical Models, Imaging Phantoms, Sheep
14.
J Xray Sci Technol ; 20(1): 45-68, 2012.
Article in English | MEDLINE | ID: mdl-22398587

ABSTRACT

In this paper, we discuss the mathematical equivalence of four consistency conditions in divergent-beam computed tomography (CT). The first is the consistency condition derived by Levine et al. by degenerating John's equation; the second is the integral invariant derived by Wei et al. using symmetric group theory; the third is the so-called parallel-fan-beam Hilbert projection equality derived by Hamaker et al.; and the fourth is the fan-beam data consistency condition (FDCC) derived by Chen et al. using complex analysis. Historically, most of these consistency conditions were derived by their respective authors using complicated mathematical strategies that are not easy to understand precisely for researchers with a general engineering mathematics background. In this paper, we re-derive all of these consistency conditions in a unified, accessible mathematical language. Based on this theoretical derivation, we find that all of these consistency conditions can be viewed as necessary conditions for the specific solution to John's equation. From the physical point of view, all of them essentially express a similar constraint on the projection data acquired from any two x-ray source points. Numerical simulations are carried out to experimentally evaluate and verify their merits.


Assuntos
Algoritmos , Tomografia Computadorizada por Raios X/métodos , Simulação por Computador , Cabeça/diagnóstico por imagem , Humanos , Imagens de Fantasmas , Reprodutibilidade dos Testes
15.
Med Biol Eng Comput ; 60(5): 1417-1429, 2022 May.
Article in English | MEDLINE | ID: mdl-35322343

ABSTRACT

Automatic CT segmentation of the proximal femur has great potential for use in orthopedic diseases, especially in imaging-based assessments of hip fracture risk. In this study, we proposed a deep-learning approach for the fast and automatic extraction of the periosteal and endosteal contours of the proximal femur in order to differentiate the cortical and trabecular bone compartments. A three-dimensional (3D) end-to-end fully convolutional neural network (CNN), which can better combine the information among neighboring slices and thus obtain more accurate segmentation results, was developed for our segmentation task. The separation of cortical and trabecular bone derived from the QCT software MIAF-Femur was used as the segmentation reference. Two models with the same network structure were trained, achieving Dice similarity coefficients (DSC) of 97.82% and 96.53% for the periosteal and endosteal contours, respectively. MIAF-Femur takes half an hour to segment a case, whereas our CNN model takes only a few minutes. To verify the performance of our model for proximal femoral segmentation, we measured the volumes of different parts of the proximal femur and compared them with the ground truth; the relative errors of femur volume between the predicted results and the ground truth are all less than 5%. This approach is expected to help measure the bone mineral density of cortical and trabecular bone and to evaluate bone strength based on finite element analysis (FEA).


Assuntos
Aprendizado Profundo , Osso Esponjoso , Fêmur/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Redes Neurais de Computação , Tomografia Computadorizada por Raios X
16.
IEEE/ACM Trans Comput Biol Bioinform ; 19(3): 1459-1471, 2022.
Article in English | MEDLINE | ID: mdl-33471766

ABSTRACT

Magnetic resonance imaging (MRI) is providing increased access to neuropsychiatric disorders that can be made available for advanced data analysis. However, a single type of data limits the ability of psychiatrists to distinguish the subclasses of these disorders. In this paper, we propose an ensemble hybrid feature selection method for neuropsychiatric disorder classification. The method consists of a 3D DenseNet and an XGBoost model, which select the image features from structural MRI images and the phenotypic features from phenotypic records, respectively. The hybrid feature set is composed of image features and phenotypic features. The proposed method is validated on the Consortium for Neuropsychiatric Phenomics (CNP) dataset, where samples are classified into one of four classes: healthy controls (HC), attention deficit hyperactivity disorder (ADHD), bipolar disorder (BD), and schizophrenia (SD). Experimental results show that the hybrid features can improve the performance of classification methods; the best accuracies of binary and multi-class classification reach 91.22 and 78.62 percent, respectively. We analyze the importance of phenotypic features and image features in different classification tasks. The importance of the structural MRI images is highlighted by incorporating phenotypic features with image features to generate the hybrid features. We also visualize the features of the three neuropsychiatric disorders and analyze their locations in the brain.


Assuntos
Transtorno do Deficit de Atenção com Hiperatividade , Esquizofrenia , Transtorno do Deficit de Atenção com Hiperatividade/diagnóstico por imagem , Transtorno do Deficit de Atenção com Hiperatividade/genética , Encéfalo/diagnóstico por imagem , Encéfalo/patologia , Humanos , Imageamento por Ressonância Magnética/métodos , Esquizofrenia/diagnóstico por imagem , Esquizofrenia/genética
17.
Med Phys ; 38(7): 4386-95, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21859039

ABSTRACT

PURPOSE: Differential phase contrast CT is emerging as a new technology to improve the contrast sensitivity of conventional CT. Via system analysis, modeling, and computer simulation, the authors study the noise power spectrum (NPS), an imaging performance indicator, of differential phase contrast CT and compare it with that of conventional CT. METHODS: The differential phase contrast CT is implemented with an x-ray tube and gratings. The x-ray propagation and data acquisition are modeled and simulated with Fourier analysis and Fresnel analysis. To avoid any interference caused by scatter and beam hardening, a monochromatic x-ray source (30 keV) is assumed, which irradiates the object to be imaged over 360 degrees so that no weighting scheme is needed. A 20-fold upsampling is assumed to simulate the x-ray beam's propagation through the gratings G1 and G2 with periods 8 and 4 µm, respectively, while the inter-grating distance is 193.6 mm (1/16 of the Talbot distance). The dimension of the detector cell for data acquisition ranges from 32 × 32 to 128 × 128 µm², while the field of view in data acquisition is 40.96 × 40.96 mm². A uniform water phantom with a diameter of 37.68 mm is employed to study the NPS, with complex refractive index n = 1 - δ + iβ = 1 - 2.5604 × 10^-7 + i·1.2353 × 10^-10. The x-ray flux ranges from 10^6 to 10^8 photons/cm² per projection and follows the Poisson distribution, consistent with that of micro-CT in preclinical applications. The image matrix of the reconstructed water phantom is 1280 × 1280, and a total of 180 regions of 128 × 128 pixels are used for NPS calculation via 2D Fourier transform, with adequate zero padding applied to avoid aliasing. RESULTS: The preliminary data show that differential phase contrast CT manifests an NPS with a 1/|k| trait, while the conventional CT's NPS follows |k|.
This accounts for the significant difference in their noise granularity and for the substantial noise advantage of differential phase contrast CT over conventional CT, particularly where the detector cell size for data acquisition is smaller than 100 µm. CONCLUSIONS: Differential phase contrast CT detects the projection of the derivative of the refractive coefficient and uses the Hilbert filter for image reconstruction, which leads to the radical difference in its NPS and to its noise advantage in comparison to conventional CT.


Subjects
Algorithms , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity , Tomography, X-Ray Computed/instrumentation
18.
J Xray Sci Technol ; 19(2): 173-98, 2011.
Article in English | MEDLINE | ID: mdl-21606581

ABSTRACT

An algorithm is proposed to directly reconstruct a CT gradient image in a region of interest (ROI). First, the central slice theorem is generalized and a differential constraint condition (DCC) is introduced in parallel-beam geometry. Then, an algorithm is developed to reconstruct the gradient images in both Cartesian and polar coordinate systems based on a two-step Hilbert transform method. Finally, the reconstruction algorithm is extended to the equi-distant fan-beam geometry. Meanwhile, a conditional truncation of the projection data acquisition is permitted by using a one-dimensional (1-D) finite Hilbert transform in the image domain. Because the reconstructed gradient image is defined in terms of a local operator, it has better performance in CT image analysis and other CT applications compared to the global Calderón operator in Lambda tomography.
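The 1-D Hilbert transform at the core of such two-step methods can be sketched as a frequency-domain filter −i·sgn(k). The snippet below is a generic periodic-signal illustration of that filter, not the paper's finite Hilbert transform on a truncated interval:

```python
import numpy as np

def hilbert_transform(f):
    """Periodic 1-D Hilbert transform via FFT.

    Multiplies the spectrum by -i*sgn(k), the Hilbert filter's frequency
    response, so that e.g. H[cos](x) = sin(x).
    """
    n = f.size
    k = np.fft.fftfreq(n)         # signed frequencies in FFT ordering
    H = -1j * np.sign(k)          # Hilbert filter; sgn(0) = 0 removes DC
    return np.real(np.fft.ifft(np.fft.fft(f) * H))

# Sanity check on a grid where cos(x) occupies exactly one FFT bin:
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
g = hilbert_transform(np.cos(x))  # should closely match sin(x)
```

In the image-domain variants referenced above, a *finite* (interval-limited) Hilbert inversion replaces this periodic filter, which is what permits the conditionally truncated projection data.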


Subjects
Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed , Artifacts , Fourier Analysis , Humans , Models, Statistical
19.
Med Image Anal ; 69: 101985, 2021 04.
Article in English | MEDLINE | ID: mdl-33588117

ABSTRACT

Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond currently available medical datasets. Traditional approaches generally leverage the information from natural images via transfer learning. More recent works utilize the domain knowledge of medical doctors to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the different kinds of medical domain knowledge that have been utilized and their corresponding integration methods. We also discuss current challenges and directions for future research.
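The transfer-learning idea mentioned above (pretrain on a plentiful source task, then fine-tune on a scarce target task) can be illustrated with a deliberately tiny numpy-only sketch; the synthetic tasks, model, and hyperparameters are all hypothetical stand-ins for "natural images" and "small medical dataset":

```python
import numpy as np

rng = np.random.default_rng(42)

def make_task(n, w_true, noise=0.5):
    """Synthetic binary task: labels from a noisy linear rule."""
    X = rng.normal(size=(n, 2))
    y = (X @ w_true + noise * rng.normal(size=n) > 0).astype(float)
    return X, y

def train_logistic(X, y, w=None, lr=0.5, steps=200):
    """Logistic regression by gradient descent; `w` is the initial weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on log-loss
    return w

# "Source" task: plentiful data (standing in for natural images).
Xs, ys = make_task(2000, np.array([2.0, -1.0]))
w_src = train_logistic(Xs, ys)

# "Target" task: scarce data with a related decision rule (standing in
# for a small medical dataset). Fine-tune from the pretrained weights
# for only a few steps instead of training from scratch.
Xt, yt = make_task(40, np.array([1.8, -1.2]))
w_ft = train_logistic(Xt, yt, w=w_src.copy(), steps=20)
```

The design choice mirrored here is the one the survey describes: the pretrained weights encode a related decision rule, so a few fine-tuning steps on 40 samples suffice where training from scratch would be data-starved.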


Subjects
Deep Learning , Humans , Image Processing, Computer-Assisted
20.
Curr Med Imaging ; 16(10): 1323-1331, 2020.
Article in English | MEDLINE | ID: mdl-33461446

ABSTRACT

BACKGROUND: Osteonecrosis of the Femoral Head (ONFH) is a common complication in orthopaedics, wherein femoral structures are damaged due to the impairment or interruption of the femoral head blood supply. AIM: In this study, an automatic approach for the classification of early ONFH with deep learning is proposed. METHODS: First, all femoral CT slices are classified according to their spatial locations with a Convolutional Neural Network (CNN), so that each CT slice is assigned to the upper, middle, or lower segment of the femoral head. Then the femoral head areas are segmented with a Conditional Generative Adversarial Network (CGAN) for each part. A Convolutional Autoencoder is employed to reduce dimensions and extract features of the femoral head, and finally K-means clustering is used for an unsupervised classification of early ONFH. RESULTS: To validate the effectiveness of the proposed approach, experiments are carried out on a dataset with 120 patients. The experimental results show that the segmentation accuracy is higher than 95%. The Convolutional Autoencoder can reduce the dimensionality of the data; the Peak Signal-to-Noise Ratios (PSNRs) between inputs and outputs are better than 34 dB. Meanwhile, there is great intra-category similarity and a significant inter-category difference. CONCLUSION: The research on the classification of early ONFH has valuable clinical merit, and hopefully it can assist physicians in applying more individualized treatment for patients.
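The final unsupervised step of the pipeline above, K-means clustering of the autoencoder's low-dimensional features, can be sketched with a minimal implementation; the feature dimension, cluster count, and synthetic "feature" clouds are illustrative assumptions, not the paper's data:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal K-means with deterministic farthest-point initialization."""
    centroids = [X[0]]
    for _ in range(k - 1):
        # Next seed: the point farthest from all current centroids.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated synthetic 8-D "feature" clouds, standing in for
# autoencoder features of two femoral-head categories.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.3, size=(60, 8))
B = rng.normal(3.0, 0.3, size=(60, 8))
X = np.vstack([A, B])
centroids, labels = kmeans(X, k=2)
```

The "great intra-category similarity and significant inter-category difference" reported above is exactly the regime in which such clustering separates the categories cleanly.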


Subjects
Deep Learning , Femur Head Necrosis , Femur/diagnostic imaging , Femur Head/diagnostic imaging , Femur Head Necrosis/diagnostic imaging , Humans , Neural Networks, Computer