Results 1 - 20 of 67
1.
Glob Chang Biol ; 30(1): e17005, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37905717

ABSTRACT

Climate change has induced substantial shifts in vegetation boundaries such as alpine treelines and shrublines, with widespread ecological and climatic influences. However, spatial and temporal changes in the upper elevational limit of alpine grasslands ("alpine grasslines") are still poorly understood due to a lack of field observations and remote sensing estimates. In this study, taking the Tibetan Plateau as an example, we propose a novel method for automatically identifying alpine grasslines from multi-source remote sensing data and determining their positions at 30-m spatial resolution. We first identified 2895 mountains potentially having alpine grasslines. On each mountain, we identified a narrow area around the upper elevational limit of alpine grasslands where the alpine grassline was potentially located. Then, we used linear discriminant analysis to adaptively generate from Landsat reflectance features a synthetic feature that maximized the difference between vegetated and unvegetated pixels in each of these areas. After that, we designed a graph-cut algorithm that integrates the advantages of the Otsu and Canny approaches and used it to determine the precise position of the alpine grassline from the synthetic feature image. Validation against alpine grasslines visually interpreted from a large number of high-spatial-resolution images showed a high level of accuracy (R², 0.99 and 0.98; mean absolute error, 22.6 and 36.2 m, vs. drone and PlanetScope images, respectively). Across the Tibetan Plateau, the alpine grassline elevation ranged from 4038 to 5380 m (5th-95th percentile), lower in the northeast and southeast and higher in the southwest. This study provides a method for remotely sensing alpine grasslines for the first time at a large scale and lays a foundation for investigating their responses to climate change.
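The boundary detector above fuses Otsu thresholding with Canny edge cues inside a graph cut. As a rough illustrative sketch of the Otsu component alone (a toy version, not the authors' implementation), one can maximize the between-class variance over candidate split points of the synthetic-feature values:

```python
def otsu_threshold(values):
    """Return the value at which splitting `values` into a low and a high class
    maximizes the between-class variance (Otsu's criterion)."""
    vals = sorted(values)
    n = len(vals)
    best_t, best_var = vals[0], -1.0
    for i in range(1, n):
        w0, w1 = i / n, (n - i) / n          # class weights
        m0 = sum(vals[:i]) / i               # low-class mean
        m1 = sum(vals[i:]) / (n - i)         # high-class mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, vals[i]
    return best_t
```

In this toy setting, pixels with synthetic-feature values at or above the returned threshold would be labeled vegetated; the actual method refines such a threshold with edge information inside the graph cut.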


Subject(s)
Climate Change , Remote Sensing Technology , Tibet , Grassland , Ecosystem
2.
Sensors (Basel) ; 24(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38894303

ABSTRACT

The most critical aspect of panorama generation is maintaining local semantic consistency. Objects may be projected from different depths in the captured image, and when the images are warped to a unified canvas, pixels at the semantic boundaries of the different views become significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grids to preserve the structure of each cell, and we propose effective cost functions to generate the warp matrix for each superpixel. The warp matrix varies progressively to give a smooth projection, which contributes to a more faithful reconstruction of object structures. Second, to deal with artifacts introduced by stitching, we use a seam line method tailored to superpixels. The algorithm takes into account the feature similarity of neighboring superpixels, including color difference, structure, and entropy, and also considers semantic information to avoid semantic misalignment. The optimal solution constrained by the cost functions is obtained under a graph model. The resulting stitched images exhibit improved naturalness. The algorithm was extensively tested on common panorama stitching datasets, and experimental results show that it effectively mitigates artifacts, preserves semantic completeness, and produces panoramic images whose subjective quality is superior to that of alternative methods.

3.
Mol Phylogenet Evol ; 178: 107636, 2023 01.
Article in English | MEDLINE | ID: mdl-36208695

ABSTRACT

Phylogenetic trees are essential tools in evolutionary biology that present information on evolutionary events among organisms and molecules. For a dataset of n sequences, (2n-5)!! possible tree topologies exist, so determining the optimal topology by brute force is infeasible. Recently, a recursive graph cut on a graph-represented similarity matrix has proven accurate in reconstructing a phylogenetic tree containing distantly related sequences. However, identifying the optimal graph cut is challenging, and approximate solutions are currently utilized. Here, a phylogenetic tree was reconstructed with an improved graph cut using a quantum-inspired computer, the Fujitsu Digital Annealer (DA), and the algorithm was named the "Normalized-Minimum cut by Digital Annealer (NMcutDA)" method. First, the criterion for the graph cut, the normalized cut value, was compared with existing clustering methods. Based on the cut, we verified that the simulated phylogenetic tree could be reconstructed with the highest accuracy when sequences were diverged. Moreover, for some actual data from the structure-based protein classification database, only NMcutDA could cluster sequences into correct superfamilies. In conclusion, NMcutDA reconstructed better phylogenetic trees than other methods by optimizing the graph cut. We anticipate that when the diversity of sequences is sufficiently high, NMcutDA can be utilized with high efficiency.
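The normalized cut criterion being optimized above can be stated compactly as Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V). The following toy sketch (hypothetical code, not the Digital Annealer formulation) evaluates it for a candidate bipartition of a similarity matrix:

```python
def normalized_cut(W, part_a):
    """Ncut value for the bipartition (A, B) of a symmetric similarity matrix W."""
    n = len(W)
    a = set(part_a)
    b = set(range(n)) - a
    cut = sum(W[i][j] for i in a for j in b)             # weight crossing the cut
    assoc_a = sum(W[i][j] for i in a for j in range(n))  # total weight touching A
    assoc_b = sum(W[i][j] for i in b for j in range(n))  # total weight touching B
    return cut / assoc_a + cut / assoc_b
```

On a similarity matrix with two tight clusters, splitting along the clusters yields a much lower Ncut than a mixed split, which is why minimizing this value recovers meaningful sequence groups.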


Subject(s)
Algorithms , Computers , Phylogeny , Cluster Analysis , Databases, Protein
4.
Eng Appl Artif Intell ; 116: 105398, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36158870

ABSTRACT

Background: The coronavirus disease 2019 (COVID-19) has recently caused many deaths globally, so there is a pressing need to detect the disease and prevent its further spread. Hence, this study aims to predict COVID-19-infected patients using deep learning (DL) and image processing. Objectives: The study classifies normal and abnormal COVID-19 cases across three medical imaging modalities, namely ultrasound, X-ray, and CT scan images, using the introduced attention bottleneck residual network (AB-ResNet). It also segments the abnormal infected area from normal images, localising the disease-infected area through the proposed edge-based graph cut segmentation (E-GCS). Methodology: AB-ResNet classifies the images, whereas E-GCS segments the abnormal ones. The approach relies on DL and can accelerate the training of deep networks. It also increases network depth with few parameters, minimising the impact of the vanishing gradient issue and attaining effective network performance with better accuracy. Results/Conclusion: Performance and comparative analyses were undertaken to evaluate the introduced system, and the results demonstrate its efficiency in COVID-19 detection with high accuracy (99%).

5.
Sensors (Basel) ; 21(2)2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33435554

ABSTRACT

In this paper, we propose a novel guided normal filtering followed by vertex updating for mesh denoising. We introduce a two-stage scheme to construct adaptive consistent neighborhoods for guided normal filtering. In the first stage, we design a new consistency measure to select a coarse consistent neighborhood for each face in a patch-shift manner; the neighborhoods selected in this step may still contain some features. Then, a graph-cut based scheme is iteratively performed to construct different adaptive neighborhoods that match the corresponding local shapes of the mesh. The local neighborhoods constructed in this step, called the adaptive consistent neighborhoods, avoid containing geometric features. Using the constructed adaptive consistent neighborhoods, we compute a more accurate guide normal field to match the underlying surface, which improves the results of the guided normal filtering. With the help of the adaptive consistent neighborhoods, our guided normal filtering preserves geometric features well and is robust against complex surface shapes. Intensive experiments on various meshes show the superiority of our method both visually and quantitatively.

6.
Sensors (Basel) ; 19(17)2019 Sep 02.
Article in English | MEDLINE | ID: mdl-31480745

ABSTRACT

Recent developments in laser scanning systems have inspired substantial interest in indoor modeling. Semantically rich indoor models are required in many fields. Despite the rapid development of 3D indoor reconstruction methods for building interiors from point clouds, the indoor reconstruction of multi-room environments with curved walls remains unresolved. This study proposes a novel straight and curved line tracking method followed by a straight line test. Robust parameters are used, and a novel straight line regularization method is achieved using constrained least squares. The method constructs a cell complex with both straight lines and curved lines, and the indoor reconstruction is transformed into a labeling problem that is solved based on a novel Markov Random Field formulation. The optimal labeling is found by minimizing an energy function with a minimum graph cut approach. Detailed experiments were conducted, and the results indicate that the proposed method is well suited for 3D indoor modeling in multi-room indoor environments with curved walls.
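The labeling step above minimizes a Markov Random Field energy with a minimum graph cut. As a hedged illustration of the kind of energy being minimized (a Potts-style toy on a handful of cells, solved here by brute force rather than by min-cut), consider:

```python
from itertools import product

def mrf_energy(labels, unary, edges, smoothness):
    """E(L) = sum_i unary[i][L_i] + smoothness * #{(i, j) in edges : L_i != L_j}."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(smoothness for i, j in edges if labels[i] != labels[j])
    return e

def minimize_energy(unary, edges, smoothness, n_labels=2):
    """Exhaustive search over all labelings; real systems use a min graph cut,
    which finds the same optimum for submodular binary energies."""
    n = len(unary)
    best = min(product(range(n_labels), repeat=n),
               key=lambda L: mrf_energy(L, unary, edges, smoothness))
    return list(best)
```

With data terms favoring label 0 on the first two cells and label 1 on the third, and a chain of smoothness edges, the optimum pays one boundary penalty rather than overriding any data term.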

7.
Sensors (Basel) ; 19(2)2019 Jan 18.
Article in English | MEDLINE | ID: mdl-30669363

ABSTRACT

Segmentation of human bodies in images is useful for a variety of applications, including background substitution, human activity recognition, security, and video surveillance applications. However, human body segmentation has been a challenging problem, due to the complicated shape and motion of a non-rigid human body. Meanwhile, depth sensors with advanced pattern recognition algorithms provide human body skeletons in real time with reasonable accuracy. In this study, we propose an algorithm that projects the human body skeleton from a depth image to a color image, where the human body region is segmented in the color image by using the projected skeleton as a segmentation cue. Experimental results using the Kinect sensor demonstrate that the proposed method provides high quality segmentation results and outperforms the conventional methods.


Subject(s)
Algorithms , Human Body , Image Interpretation, Computer-Assisted , Skeleton/anatomy & histology , Color , Humans
8.
J Med Syst ; 43(12): 336, 2019 Nov 13.
Article in English | MEDLINE | ID: mdl-31724076

ABSTRACT

Early diagnosis from retinal OCT images has been shown to curtail blindness and visual impairment. However, advances in ophthalmic imaging produce an ever-growing volume and variety of retinal images, overwhelming ophthalmologists' ability to segment them. While many automated methods exist, speckle noise and intensity inhomogeneity degrade their performance. We present a comprehensive, fully automatic method for annotating retinal layers in OCT images, comprising fuzzy histogram hyperbolisation (FHH) and graph cut methods to segment 7 retinal layers across 8 boundaries. FHH handles speckle noise and inhomogeneity in the preprocessing step. The normalised vertical image gradient and its inverse are then used to represent image intensity in calculating two adjacency matrices; FHH reassigns the edge weights so that edges along retinal boundaries have a low cost, and the graph cut method identifies the shortest paths (the layer boundaries). The method was evaluated on 150 B-scan images, 50 each from the temporal, foveal and nasal regions. Promising experimental results were achieved, with high tolerance and adaptability to contour variance and pathological inconsistency of the retinal layers in all (temporal, foveal and nasal) regions. The method also achieves high accuracy, sensitivity, and Dice score (0.98360, 0.9692 and 0.9712, respectively) in segmenting the retinal nerve fibre layer. The annotation can facilitate eye examination by providing accurate results. The integration of the vertical gradients into the graph cut framework, which captures the unique characteristics of retinal structures, is particularly useful in finding the true minimum paths across multiple retinal layer boundaries. Prior knowledge plays an integral role in image segmentation.
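The shortest-path idea — assign low edge weights along a boundary, then trace a minimum-cost path across the image — can be sketched with a generic Dijkstra search on a small cost grid (illustrative only; the paper builds its adjacency matrices from FHH-adjusted gradients):

```python
import heapq

def min_cost_path(grid, start, goal):
    """Dijkstra over a 4-connected grid; grid[r][c] is the cost of entering (r, c).
    Returns the total cost of the cheapest path from start to goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

With boundary pixels given low cost (here the middle row), the cheapest left-to-right path traces the boundary, which is the essence of shortest-path layer segmentation.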


Subject(s)
Image Processing, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Humans , Pattern Recognition, Automated/methods
9.
J Med Syst ; 43(3): 43, 2019 Jan 16.
Article in English | MEDLINE | ID: mdl-30649629

ABSTRACT

Apparent diffusion coefficient (ADC), derived from diffusion-weighted magnetic resonance images (DW-MRI), measures the motion of water molecules in vivo and can be used to quantify tumor response so as to determine the best therapy approach. In this paper, our goal was to determine whether DW-MRI can be used for qualitative and quantitative liver cancer analysis, and we propose an automated method for improving the accuracy of liver segmentation in DW-MRI to improve disease diagnosis. We first analyzed the research status of liver cancer diagnosis, especially the issues of liver image segmentation in MRI. Then, the imaging mechanism and image features of DW-MRI were analyzed, and the initial DW-MRI slice was segmented by a graph-cut algorithm. Finally, the result obtained from the liver DW-MRI image was analyzed quantitatively and qualitatively. Experimental results show that DW-MRI has a great advantage in diagnosis: the DWI values of the benign lesion group were lower than those of the malignant lesion group, and thus DW-MRI segmented by the graph-cut algorithm can provide important additional information for the differential diagnosis of specific liver cancers to some extent.


Subject(s)
Diffusion Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnosis , Liver Neoplasms/pathology , Algorithms , Diagnosis, Differential , Humans , Liver Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods
10.
J Digit Imaging ; 31(4): 490-504, 2018 08.
Article in English | MEDLINE | ID: mdl-29352385

ABSTRACT

Aortic aneurysm segmentation remains a challenge. Manual segmentation is a time-consuming process that is not practical for routine use. To address this limitation, several automated segmentation techniques for aortic aneurysm have been developed, such as edge detection-based methods, partial differential equation methods, and graph partitioning methods. However, automatic segmentation of aortic aneurysm is difficult due to high pixel similarity to adjacent tissue and a lack of color information in the medical image, preventing previous work from being applicable to difficult cases. This paper uses a variable neighborhood search that alternates between intensity-based and gradient-based segmentation techniques. By alternating between the intensity and gradient spaces, the search can escape the local optima of each space. The experimental results demonstrate that the proposed method outperforms the other existing segmentation methods in the literature, based on the Dice similarity coefficient and Jaccard similarity coefficient at the pixel level. In addition, it is shown to perform well for cases that are difficult to segment.


Subject(s)
Aortic Aneurysm, Abdominal/diagnostic imaging , Aortic Aneurysm, Abdominal/pathology , Computed Tomography Angiography/methods , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Algorithms , Female , Humans , Male , Pattern Recognition, Automated/methods , Reproducibility of Results
11.
Magn Reson Med ; 73(3): 1289-99, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24604689

ABSTRACT

PURPOSE: This article focuses on developing a novel noniterative fat-water decomposition algorithm more robust to fat-water swaps and related ambiguities. METHODS: Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field-map-induced frequency shift between adjacent voxels are constrained to lie in a finite range. Discretization of the above problem yields a graph optimization scheme, where each node of the graph is connected with only a few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes, and quantitative comparisons are made against reference data. RESULTS: The proposed algorithm is observed to yield more robust fat-water estimates with fewer fat-water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. CONCLUSION: The proposed algorithm is capable of considerably reducing the swaps in challenging fat-water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and of solving the problem with a globally convergent graph-cut optimization algorithm.
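The key constraint above — adjacent voxels' field values may differ only within a finite range — can be illustrated in one dimension, where the constrained optimization reduces to simple dynamic programming (a hypothetical sketch; the paper solves the multidimensional case with a noniterative graph cut):

```python
def smooth_field_map_1d(data_cost, max_jump=1):
    """data_cost[i][k]: cost of assigning discrete field value k at voxel i.
    Adjacent voxels may differ by at most max_jump levels (hard constraint).
    Returns the globally minimum-cost labelling via dynamic programming."""
    n, K = len(data_cost), len(data_cost[0])
    INF = float("inf")
    dp = [list(data_cost[0])]   # dp[i][k]: best cost ending at voxel i with value k
    back = []                   # backpointers for path reconstruction
    for i in range(1, n):
        row, brow = [], []
        for k in range(K):
            best, arg = INF, -1
            for kp in range(max(0, k - max_jump), min(K, k + max_jump + 1)):
                if dp[-1][kp] < best:
                    best, arg = dp[-1][kp], kp
            row.append(best + data_cost[i][k])
            brow.append(arg)
        dp.append(row)
        back.append(brow)
    k = min(range(K), key=lambda k: dp[-1][k])
    labels = [k]
    for brow in reversed(back):
        k = brow[k]
        labels.append(k)
    return labels[::-1]
```

The smoothness constraint forces the recovered field to ramp gradually rather than jump, which is exactly what suppresses fat-water swap ambiguities in the full problem.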


Subject(s)
Adipose Tissue/anatomy & histology , Body Water , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Algorithms , Humans , Image Enhancement/methods , Imaging, Three-Dimensional/methods , Reproducibility of Results , Sensitivity and Specificity
12.
J Magn Reson Imaging ; 41(2): 517-24, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24338961

ABSTRACT

PURPOSE: To develop and evaluate an automatic segmentation method that extracts the 3D configuration of the ablation zone, the iceball, from images acquired during the freezing phase of MRI-guided cryoablation. MATERIALS AND METHODS: Intraprocedural images at 63 timepoints from 13 kidney tumor cryoablation procedures were examined retrospectively. The images were obtained using a 3 Tesla wide-bore MRI scanner and axial HASTE sequence. Initialized with semiautomatically localized cryoprobes, the iceball was segmented automatically at each timepoint using the graph cut (GC) technique with adapted shape priors. RESULTS: The average Dice Similarity Coefficients (DSC), compared with manual segmentations, were 0.88, 0.92, 0.92, 0.93, and 0.93 at 3, 6, 9, 12, and 15 min timepoints, respectively, and the average DSC of the total 63 segmentations was 0.92 ± 0.03. The proposed method improved the accuracy significantly compared with the approach without shape prior adaptation (P = 0.026). The number of probes involved in the procedure had no apparent influence on the segmentation results using our technique. The average computation time was 20 s, which was compatible with an intraprocedural setting. CONCLUSION: Our automatic iceball segmentation method demonstrated high accuracy and robustness for practical use in monitoring the progress of MRI-guided cryoablation.


Subject(s)
Cryosurgery/methods , Kidney Neoplasms/surgery , Magnetic Resonance Imaging, Interventional/methods , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Retrospective Studies
13.
J Magn Reson Imaging ; 41(4): 924-34, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24803305

ABSTRACT

PURPOSE: To develop an automatic segmentation algorithm to classify abdominal adipose tissues into visceral fat (VAT), deep (DSAT), and superficial (SSAT) subcutaneous fat compartments and evaluate its performance against manual segmentation. MATERIALS AND METHODS: Data were acquired from 44 normal (BMI 18.0-22.9 kg/m²) and 38 overweight (BMI 23.0-29.9 kg/m²) subjects at 3T using a two-point Dixon sequence. A fully automatic segmentation algorithm was developed to segment the fat depots. The first part of the segmentation used graph cuts to separate the subcutaneous and visceral adipose tissues and the second step employed a modified level sets approach to classify deep and superficial subcutaneous tissues. The algorithmic results of segmentation were validated against the ground truth generated by manual segmentation. RESULTS: The proposed algorithm showed good performance with Dice similarity indices of VAT/DSAT/SSAT: 0.92/0.82/0.88 against the ground truth. The study of the fat distribution showed that there is a steady increase in the proportion of DSAT and a decrease in the proportion of SSAT with increasing obesity. CONCLUSION: The presented technique provides an accurate approach for the segmentation and quantification of abdominal fat depots.


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Intra-Abdominal Fat/pathology , Magnetic Resonance Imaging/methods , Obesity/pathology , Subcutaneous Fat, Abdominal/pathology , Adiposity , Adult , Humans , Image Enhancement/methods , Male , Pattern Recognition, Automated/methods , Reference Values , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique , Young Adult
14.
Magn Reson Med ; 72(6): 1775-84, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24347347

ABSTRACT

PURPOSE: Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. METHODS: A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. RESULTS: The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using as reference manually delineated contours. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method was also compared favorably to a three-dimensional Active Shape Model approach. CONCLUSION: The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility.


Subject(s)
Algorithms , Heart Ventricles/pathology , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging, Cine/methods , Pattern Recognition, Automated/methods , Ventricular Dysfunction, Left/pathology , Artificial Intelligence , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
15.
J Microsc ; 253(1): 42-53, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24237576

ABSTRACT

With the rapid advancement of 3D confocal imaging technology, more and more 3D cellular images will be available. However, robust and automatic extraction of nuclei shape may be hindered by a highly cluttered environment, as, for example, in fly eye tissues. In this paper, we present a novel and efficient nuclei segmentation algorithm based on the combination of graph cut and a convex shape assumption. The main characteristic of the algorithm is that it segments the nuclei foreground using a graph-cut algorithm with our proposed new initialization method and splits overlapping or touching cell nuclei by simple convexity and concavity analysis. Experimental results show that the proposed algorithm can segment complicated nuclei clumps effectively in our fluorescent fruit fly eye images. Evaluation on a public hand-labelled 2D benchmark demonstrates substantial quantitative improvement over other methods. For example, the proposed method achieves a 3.2 decrease in Hausdorff distance and a 1.8 decrease in merged-nuclei errors per slice.


Subject(s)
Automation, Laboratory/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Microscopy, Confocal/methods , Animals , Cell Nucleus , Drosophila , Eye/cytology
16.
Sci Rep ; 14(1): 7403, 2024 03 28.
Article in English | MEDLINE | ID: mdl-38548805

ABSTRACT

Quantitative computed tomography (QCT)-based in silico models have demonstrated improved accuracy in predicting hip fractures with respect to the current gold standard, areal bone mineral density. These models require that the femur bone is segmented as a first step. This task can be challenging, and in fact it is often almost fully manual, which is time-consuming, operator-dependent, and hard to reproduce. This work proposes a semi-automated procedure for femur bone segmentation from CT images, based on the bone and joint enhancement filter and graph-cut algorithms. The performance of the semi-automated procedure was assessed on 10 subjects through comparison with standard manual segmentation, considering metrics based on the femur geometries and on the risk of fracture assessed in silico from the two segmentation procedures. The average Hausdorff distance (0.03 ± 0.01 mm) and difference union ratio (0.06 ± 0.02) computed between the manual and semi-automated segmentations were significantly higher than those computed within the manual segmentations (0.01 ± 0.01 mm and 0.03 ± 0.02). Moreover, a blind qualitative evaluation revealed that the semi-automated procedure was significantly superior (p < 0.001) to the manual one in terms of fidelity to the CT. As for the hip fracture risk assessed in silico starting from both segmentations, no significant difference emerged between the two (R² = 0.99). The proposed semi-automated segmentation procedure outperforms the manual one, shortening the segmentation time and providing better segmentations. The method could be employed within CT-based in silico methodologies and to segment large volumes of images to train and test fully automated, supervised segmentation methods.
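The Hausdorff distance used for validation above is a generic surface-agreement metric; a minimal point-set version (illustrative only, not the paper's mesh-based computation) is:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets
    (tuples of coordinates): the farthest any point of one set lies
    from its nearest neighbour in the other."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(xs, ys):
        return max(min(dist(p, q) for q in ys) for p in xs)
    return max(directed(a, b), directed(b, a))
```

A small value indicates the two segmentation surfaces lie everywhere close to each other; a single outlying point inflates the metric, which is why it complements overlap measures such as Dice.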


Subject(s)
Femur , Hip Fractures , Humans , Femur/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Lower Extremity , Hip Fractures/diagnostic imaging , Image Processing, Computer-Assisted/methods
17.
Data Brief ; 47: 108970, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36875213

ABSTRACT

Phylogenetic trees provide insight into the evolutionary trajectories of species and molecules. However, because (2n-5)!! phylogenetic trees can be constructed from a dataset containing n sequences, determining the optimal tree by brute force is impractical owing to combinatorial explosion. Therefore, we developed a method for constructing a phylogenetic tree using a Fujitsu Digital Annealer, a quantum-inspired computer that solves combinatorial optimization problems at high speed. Specifically, phylogenetic trees are generated by repeating the process of partitioning a set of sequences into two parts (i.e., the graph-cut problem). Here, the optimality of the solution (normalized cut value) obtained by the proposed method was compared with that of existing methods using simulated and real data. The simulation dataset contained 32-3200 sequences, and the average branch length, drawn from a normal distribution or the Yule model, ranged from 0.125 to 0.750, covering a wide range of sequence diversity. In addition, the statistical properties of the dataset are described in terms of two indices: transitivity and average p-distance. As phylogenetic tree construction methods are expected to continue to improve, we believe that this dataset can serve as a reference for comparing methods and confirming the validity of their results. Further interpretation of these analyses is given in W. Onodera, N. Hara, S. Aoki, T. Asahi, N. Sawamura, Phylogenetic tree reconstruction via graph cut presented using a quantum-inspired computer, Mol. Phylogenet. Evol. 178 (2023) 107636.

18.
Comput Biol Med ; 154: 106512, 2023 03.
Article in English | MEDLINE | ID: mdl-36701964

ABSTRACT

BACKGROUND: Accurate retinal layer segmentation in optical coherence tomography (OCT) images is crucial for quantitatively analyzing age-related macular degeneration (AMD) and monitoring its progression. However, previous retinal segmentation models depend on experienced experts, and manually annotating retinal layers is time-consuming. At the same time, the accuracy of AMD diagnosis is directly related to the segmentation model's performance. To address these issues, we aimed to improve AMD detection using optimized retinal layer segmentation and deep ensemble learning. METHOD: We integrated a graph-cut algorithm with a cubic spline to automatically annotate 11 retinal boundaries. The refined images were fed into a deep ensemble mechanism that combined a Bagged Tree and end-to-end deep learning classifiers. We tested the developed deep ensemble model on internal and external datasets. RESULTS: The total error rates for our segmentation model using the boundary refinement approach were significantly lower than those of OCT Explorer segmentations (1.7% vs. 7.8%, p-value = 0.03). We utilized the refinement approach to quantify 169 imaging features using Zeiss SD-OCT volume scans. The presence of drusen and the thicknesses of the total retina, neurosensory retina, and ellipsoid zone to inner-outer segment (EZ-ISOS) contributed more to AMD classification than the other features. The developed ensemble learning model obtained higher diagnostic accuracy in a shorter time than two human graders. The area under the curve (AUC) for normal vs. early AMD was 99.4%. CONCLUSION: Testing results showed that the developed framework is repeatable and effective as a potentially valuable tool in retinal imaging research.


Subject(s)
Macular Degeneration , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Macular Degeneration/diagnostic imaging , Algorithms , Machine Learning
19.
Front Artif Intell ; 5: 782225, 2022.
Article in English | MEDLINE | ID: mdl-35252849

ABSTRACT

In computer-aided diagnosis systems for lung cancer, segmentation of lung nodules is important for analyzing image features of lung nodules on computed tomography (CT) images and distinguishing malignant nodules from benign ones. However, it is difficult to accurately and robustly segment lung nodules attached to the chest wall or with ground-glass opacities using conventional image processing methods. Therefore, this study aimed to develop a method for robust and accurate three-dimensional (3D) segmentation of lung nodule regions using deep learning. A nested 3D fully connected convolutional network with residual unit structures was proposed, together with a new loss function. Compared with annotated images obtained under the guidance of a radiologist, the Dice similarity coefficient (DS) and intersection over union (IoU) were 0.845 ± 0.008 and 0.738 ± 0.011, respectively, for 332 lung nodules (lung adenocarcinoma) obtained from 332 patients. By contrast, for 3D U-Net and 3D SegNet, the DS was 0.822 ± 0.009 and 0.786 ± 0.011, and the IoU was 0.711 ± 0.011 and 0.660 ± 0.012, respectively. These results indicate that the proposed method is significantly superior to these well-known deep learning models. Moreover, we compared the results of the proposed method with those of conventional image processing methods, watersheds and graph cuts. The DS and IoU for the watershed method were 0.628 ± 0.027 and 0.494 ± 0.025, and those for the graph cut method were 0.566 ± 0.025 and 0.414 ± 0.021, respectively, indicating that the proposed method is also significantly superior to these conventional methods. The proposed method may be useful for accurate and robust segmentation of lung nodules to assist radiologists in diagnosing lung nodules such as lung adenocarcinoma on CT images.
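The DS and IoU scores reported throughout these entries follow the standard overlap definitions on binary masks; as a quick sketch over voxel index sets (generic metric code, not any paper's implementation):

```python
def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for masks given as sets of voxel indices."""
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a, b):
    """Intersection over union (Jaccard index) = |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)
```

Dice always exceeds IoU for partial overlap (they are related by Dice = 2·IoU/(1+IoU)), which is why the two columns of numbers above move together but differ in magnitude.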

20.
Med Phys ; 48(12): 7837-7849, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34653274

ABSTRACT

PURPOSE: Accurate segmentation of the pulmonary arteries and aorta is important due to the association of the diameter and shape of these vessels with several cardiovascular diseases and with the risk of exacerbations and death in patients with chronic obstructive pulmonary disease. We propose a fully automatic method based on an optimal surface graph-cut algorithm to quantify the full 3D shape and the diameters of the pulmonary arteries and aorta in noncontrast computed tomography (CT) scans. METHODS: The proposed algorithm first extracts seed points in the right and left pulmonary arteries, the pulmonary trunk, and the ascending and descending aorta by using multi-atlas registration. Subsequently, the centerlines of the pulmonary arteries and aorta are extracted by a minimum cost path tracking between the extracted seed points, with a cost based on a combination of lumen intensity similarity and multiscale medialness in three planes. The centerlines are refined by applying the path tracking algorithm to curved multiplanar reformatted scans and are then smoothed and dilated nonuniformly according to the local vessel radius extracted from the medialness filter. The resulting coarse estimates of the vessels are used as initialization for a graph-cut segmentation. Once the vessels are segmented, the diameters of the pulmonary artery (PA) and the ascending aorta (AA) and the PA:AA ratio are automatically calculated, both in a single axial slice and in a 10 mm volume around the automatically extracted PA bifurcation level. The method is evaluated on noncontrast CT scans from the Danish Lung Cancer Screening Trial (DLCST). Segmentation accuracy is determined by comparison with manual annotations on 25 CT scans. Intraclass correlation (ICC) between manual and automatic diameters, both measured in axial slices at the PA bifurcation level, is computed on an additional 200 CT scans. Repeatability of the automated 3D volumetric diameter and PA:AA ratio calculations (perpendicular to the vessel axis) is evaluated on 118 scan-rescan pairs with an average in-between time of 3 months. RESULTS: We obtained a Dice segmentation overlap of 0.94 ± 0.02 for the pulmonary arteries and 0.96 ± 0.01 for the aorta, with a mean surface distance of 0.62 ± 0.33 mm and 0.43 ± 0.07 mm, respectively. ICC between manual and automatic in-slice diameter measures was 0.92 for PA, 0.97 for AA, and 0.90 for the PA:AA ratio; for automatic diameters in 3D volumes around the PA bifurcation level between scan and rescan, it was 0.89, 0.95, and 0.86, respectively. CONCLUSION: The proposed automatic segmentation method can reliably extract diameters of the large arteries in non-ECG-gated noncontrast CT scans such as are acquired in lung cancer screening.


Subject(s)
Lung Neoplasms , Pulmonary Artery , Algorithms , Aorta/diagnostic imaging , Early Detection of Cancer , Humans , Pulmonary Artery/diagnostic imaging , Tomography, X-Ray Computed