Results 1 - 20 of 34
1.
Comput Biol Med ; 174: 108458, 2024 May.
Article in English | MEDLINE | ID: mdl-38631114

ABSTRACT

Macular edema, a prevalent ocular complication observed in various retinal diseases, can lead to significant vision loss or blindness, necessitating accurate and timely diagnosis. Despite the potential of deep learning for macular edema segmentation, challenges persist in accurately identifying lesion boundaries, especially in low-contrast and noisy regions, and in distinguishing between Inner Retinal Fluid (IRF), Sub-Retinal Fluid (SRF), and Pigment Epithelial Detachment (PED) lesions. To address these challenges, we present a novel approach, termed Semantic Uncertainty Guided Cross-Transformer Network (SuGCTNet), for the simultaneous segmentation of multi-class macular edema. Our proposed method comprises two key components: the semantic uncertainty guided attention module (SuGAM) and the Cross-Transformer module (CTM). The SuGAM module uses semantic uncertainty to allocate additional attention to regions with semantic ambiguity, improving segmentation performance in these challenging areas. The CTM module, in turn, capitalizes on both uncertainty information and multi-scale image features to enhance the overall continuity of the segmentation, effectively minimizing feature confusion among different lesion types. Rigorous evaluation on public datasets and on data from various OCT imaging devices demonstrates the superior performance of the proposed method compared with state-of-the-art approaches, highlighting its potential as a tool for improving the accuracy and reproducibility of macular edema segmentation in clinical settings and, ultimately, for aiding the early detection and diagnosis of macular edema-related diseases and associated retinal conditions.
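
The abstract gives no implementation details for SuGAM; as a rough sketch of the general idea of uncertainty-guided attention, the snippet below derives a per-pixel semantic-uncertainty map from the entropy of softmax probabilities and uses it to re-weight feature responses. Function names, tensor shapes and the residual re-weighting scheme are assumptions for illustration, not the authors' module.

```python
import torch
import torch.nn.functional as F

def semantic_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel entropy of the class posterior, normalized to [0, 1].

    logits: (B, C, H, W) raw segmentation scores for C lesion classes.
    """
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1, keepdim=True)
    return entropy / torch.log(torch.tensor(float(logits.shape[1])))   # (B, 1, H, W)

def uncertainty_guided_attention(features: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Emphasize feature responses where the coarse prediction is ambiguous."""
    u = semantic_uncertainty(logits)          # high near unclear lesion boundaries
    return features * (1.0 + u)               # simple residual re-weighting

# toy usage
feats = torch.randn(2, 64, 128, 128)
coarse_logits = torch.randn(2, 4, 128, 128)   # background + IRF/SRF/PED
refined = uncertainty_guided_attention(feats, coarse_logits)
```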


Subject(s)
Macular Edema , Tomography, Optical Coherence , Humans , Macular Edema/diagnostic imaging , Tomography, Optical Coherence/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Semantics
2.
Natl Sci Rev ; 11(1): nwad294, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38288367

ABSTRACT

To investigate the circuit-level neural mechanisms of behavior, simultaneous imaging of neuronal activity in multiple cortical and subcortical regions is highly desired. Miniature head-mounted microscopes offer the capability of calcium imaging in freely behaving animals. However, implanting multiple microscopes on a mouse brain remains challenging due to space constraints and the cumbersome weight of the equipment. Here, we present TINIscope, a Tightly Integrated Neuronal Imaging microscope optimized for electronic and opto-mechanical design. With its compact and lightweight design of 0.43 g, TINIscope enables unprecedented simultaneous imaging of behavior-relevant activity in up to four brain regions in mice. Proof-of-concept experiments with TINIscope recorded over 1000 neurons in four hippocampal subregions and revealed concurrent activity patterns spanning across these regions. Moreover, we explored potential multi-modal experimental designs by integrating additional modules for optogenetics, electrical stimulation or local field potential recordings. Overall, TINIscope represents a timely and indispensable tool for studying the brain-wide interregional coordination that underlies unrestrained behaviors.

3.
IEEE Trans Med Imaging ; 43(3): 1237-1246, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37956005

ABSTRACT

Retinal arteriovenous nicking (AVN) manifests as a reduced venular caliber at an arteriovenous crossing. AVNs are signs of many systemic, particularly cardiovascular, diseases. Studies have shown that people with AVN are twice as likely to have a stroke. However, AVN classification faces two challenges. One is the lack of data, especially of AVNs relative to normal arteriovenous (AV) crossings. The other is the significant intra-class variation and minute inter-class differences: AVNs may differ in shape, scale, pose, and color, whereas an AVN may differ from a normal AV crossing only by a slight thinning of the vein. To address these challenges, we first develop a data synthesis method to generate AV crossings, both normal and AVNs. Second, to mitigate the domain shift between the synthetic and real data, an edge-guided unsupervised domain adaptation network is designed to guide the transfer of domain-invariant information. Third, a semantic contrastive learning branch (SCLB) is introduced, and a set of semantically related images, as a semantic triplet, is input to the network simultaneously to guide the network to focus on subtle differences in venular width and to ignore differences in appearance. These strategies effectively mitigate the lack of data, the domain shift between synthetic and real data, and the significant intra- but minute inter-class differences. Extensive experiments demonstrate the outstanding performance of the proposed method.
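
The SCLB is not specified beyond the description above; the sketch below illustrates a generic semantic-triplet setup with PyTorch's built-in triplet margin loss, where an AVN patch is pulled toward another AVN patch and pushed away from a normal AV crossing. The backbone, embedding size and batch shapes are placeholders, not the published network.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Shared encoder producing an embedding per AV-crossing patch (backbone choice is an assumption).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 128)

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

# anchor: an AVN patch; positive: another AVN patch; negative: a normal AV crossing
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet_loss(backbone(anchor), backbone(positive), backbone(negative))
loss.backward()
```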


Subject(s)
Cardiovascular Diseases , Retinal Diseases , Retinal Vein , Humans
4.
Inflamm Bowel Dis ; 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38011673

ABSTRACT

BACKGROUND: The purpose of this article is to develop a deep learning model for automatic segmentation of Crohn's disease (CD) lesions in computed tomography enterography (CTE) images, to analyze the radiomics features extracted from the segmented lesions, and to build multiple machine learning classifiers to distinguish CD activity. METHODS: This was a retrospective study with 2 sets of CTE image data. The segmentation dataset was used to establish the nnU-Net automatic segmentation model. The classification dataset was processed with this model to obtain segmentation results and extract radiomics features. The optimal features were then selected to build 5 machine learning classifiers to distinguish CD activity. The performance of the automatic segmentation model was evaluated using the Dice similarity coefficient, while the performance of the machine learning classifiers was evaluated using the area under the curve, sensitivity, specificity, and accuracy. RESULTS: The segmentation dataset comprised 84 CTE examinations of CD patients (mean age 31 ± 13 years, 60 males), and the classification dataset comprised 193 (mean age 31 ± 12 years, 136 males). The deep learning segmentation model achieved a Dice similarity coefficient of 0.824 on the testing set. The logistic regression model showed the best performance among the 5 classifiers on the testing set, with an area under the curve, sensitivity, specificity, and accuracy of 0.862, 0.697, 0.840, and 0.759, respectively. CONCLUSION: The automated model accurately segments CD lesions, and the machine learning classifiers distinguish CD activity well. This method can assist radiologists in promptly and precisely evaluating CD activity.
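
As a minimal illustration of the classification stage (radiomics features in, activity label out), the sketch below fits a logistic regression with scikit-learn and reports the same four metrics. The feature matrix is synthetic; in the study the features come from the segmented CD lesions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

# Synthetic stand-ins for radiomics features (rows = CTE exams) and activity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(193, 30))
y = rng.integers(0, 2, size=193)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC        :", roc_auc_score(y_te, prob))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", accuracy_score(y_te, pred))
```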


The automatic segmentation and radiomics of computed tomography enterography images can assist radiologists in accurately and quickly identifying Crohn's disease lesions and evaluating Crohn's disease activity.

5.
Front Pediatr ; 11: 1252875, 2023.
Article in English | MEDLINE | ID: mdl-37691773

ABSTRACT

Purpose: The purpose of this study was to investigate the quantitative retinal vascular morphological characteristics of Retinopathy of Prematurity (ROP) and Familial Exudative Vitreoretinopathy (FEVR) in newborns by applying a deep learning network. Methods: Standard 130-degree fundus photographs centered on the optic disc were taken in the newborns. The deep learning network provided segmentation of the retinal vessels and the optic disc (OD). Based on the vessel segmentation, vascular morphological characteristics, including avascular area, vessel angle, vessel density, fractal dimension (FD), and tortuosity, were evaluated automatically. Results: 201 eyes with FEVR, 289 eyes with ROP, and 195 eyes of healthy individuals were included in this study. The deep learning blood vessel segmentation had a sensitivity of 72% and a specificity of 99%. The vessel angle in the FEVR group was significantly smaller than in the normal and ROP groups (37.43 ± 5.43 vs. 39.40 ± 5.61 and 39.50 ± 5.58; P = 0.001 and P < 0.001, respectively). The normal group had the lowest vessel density, the ROP group was in between, and the FEVR group had the highest (2.64 ± 0.85, 2.97 ± 0.92, and 3.37 ± 0.88, respectively). The FD was smaller in controls than in the FEVR and ROP groups (0.984 ± 0.039, 1.018 ± 0.039, and 1.016 ± 0.044, respectively; P < 0.001). The ROP group had the most tortuous vessels, the FEVR group had the stiffest vessels, and the controls were in the middle (11.61 ± 3.17, 8.37 ± 2.33, and 7.72 ± 1.57, respectively; P < 0.001). Conclusions: The deep learning technology used in this study performed well in the quantitative analysis of vascular morphological characteristics in fundus photography. Vascular morphology in newborns with FEVR and ROP differed from that in healthy individuals, which is of clinical value for the differential diagnosis of ROP and FEVR.
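
Of the morphological measures listed, fractal dimension is perhaps the least self-explanatory; the sketch below estimates it from a binary vessel mask by standard box counting. This is a generic implementation for illustration, not necessarily the exact FD definition used in the study.

```python
import numpy as np

def box_counting_fd(mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary vessel mask by box counting.

    The slope of log(box count) vs. log(1/box size) approximates Df.
    """
    mask = mask.astype(bool)
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]                                 # crop to a power-of-two square
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)
    counts = []
    for s in sizes:
        blocks = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# toy example: a diagonal "vessel"
img = np.zeros((512, 512), dtype=bool)
np.fill_diagonal(img, True)
print(box_counting_fd(img))   # a straight line has Df close to 1
```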

6.
Quant Imaging Med Surg ; 13(8): 5242-5257, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37581055

ABSTRACT

Background: Recent advances in artificial intelligence and digital image processing have inspired the use of deep neural networks for segmentation tasks in multimodal medical imaging. Unlike natural images, multimodal medical images contain much richer information regarding different modal properties and therefore present more challenges for semantic segmentation. However, there has been no systematic research integrating multi-scale and structured analysis of single-modal and multimodal medical images. Methods: We propose a deep neural network, named Modality Preserving U-Net (MPU-Net), for modality-preserving analysis and segmentation of medical targets in multimodal medical images. The proposed MPU-Net consists of a modality preservation encoder (MPE) module that preserves feature independence among the modalities and a modality fusion decoder (MFD) module that performs multiscale feature fusion for each modality in order to provide a rich feature representation for the final task. The effectiveness of this single-modal preservation and multimodal fusion feature extraction approach is verified by multimodal segmentation experiments and an ablation study using the brain tumor and prostate datasets from the Medical Segmentation Decathlon (MSD). Results: The segmentation experiments demonstrated the superiority of MPU-Net over other methods in segmentation tasks for multimodal medical images. In the brain tumor segmentation tasks, the Dice scores (DSCs) for the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) regions were 89.42%, 86.92%, and 84.59%, respectively, and the 95% Hausdorff distance (HD95) results were 3.530, 4.899 and 2.555, respectively. In the prostate segmentation tasks, the DSCs for the peripheral zone (PZ) and the transitional zone (TZ) were 71.20% and 90.38%, respectively, and the HD95 results were 6.367 and 4.766, respectively. The ablation study showed that the combination of single-modal preservation and multimodal fusion improved the performance of multimodal medical image feature analysis. Conclusions: In segmentation tasks on the brain tumor and prostate datasets, MPU-Net achieved improved performance compared with conventional methods, indicating its potential application to other segmentation tasks in multimodal medical images.
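
As a toy illustration of the modality-preserving idea (separate per-modality encoders, fusion only in the decoder), the sketch below builds a deliberately small network in PyTorch. Layer choices, widths and the late-concatenation fusion are assumptions; the real MPU-Net is a full U-Net-style architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class ModalityPreservingNet(nn.Module):
    """Toy sketch: one encoder per modality (kept separate), features fused only in the decoder."""
    def __init__(self, n_modalities: int = 4, n_classes: int = 3, width: int = 16):
        super().__init__()
        # one independent single-channel encoder per modality
        self.encoders = nn.ModuleList(conv_block(1, width) for _ in range(n_modalities))
        self.decoder = nn.Sequential(conv_block(n_modalities * width, width),
                                     nn.Conv2d(width, n_classes, 1))

    def forward(self, x):                      # x: (B, M, H, W), one channel per modality
        feats = [enc(x[:, m:m + 1]) for m, enc in enumerate(self.encoders)]
        return self.decoder(torch.cat(feats, dim=1))

logits = ModalityPreservingNet()(torch.randn(2, 4, 64, 64))   # e.g. four MRI modalities
print(logits.shape)                                           # (2, 3, 64, 64)
```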

7.
Int Ophthalmol ; 43(4): 1215-1228, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36207566

ABSTRACT

PURPOSE: To achieve an accurate diagnosis of idiopathic epiretinal membrane (iERM) by analyzing retinal blood vessel oxygen saturation (SO2) and vascular morphological features in fundus images. METHODS: A dual-modal fundus camera was used to obtain color fundus, 570-nm, and 610-nm images. Because iERM affects the macular area, a macula-centered semicircular region of interest (MROI) was selected, and SO2 and vascular morphology within it were analyzed. Finally, random forest (RF) and support vector machine (SVM) classifiers were used to diagnose iERM patients. RESULTS: The arterial and venous SO2 levels of the iERM group were significantly higher than those of the control group. For vascular morphology, arterial vessel density and fractal dimension differed significantly between groups, as did venous tortuosity. Feeding both the SO2 and the vascular morphological features into the classifiers yielded an accuracy of over 82%, significantly better than either input used separately. CONCLUSION: Significant differences in SO2 and vascular morphology between the control and iERM groups confirmed that the occurrence of iERM may affect blood supply and vascular structure. Benefiting from the dual-modal fundus camera and machine learning models, accurate judgments can be made from fundus images. Extensive experiments demonstrated the importance of blood vessel SO2 and vascular morphology for diagnosis, which is of great significance for clinical screening.


Subject(s)
Epiretinal Membrane , Humans , Epiretinal Membrane/diagnosis , Oxygen Saturation , Fundus Oculi , Retinal Vessels/diagnostic imaging , Fluorescein Angiography/methods , Oxygen
8.
Front Med (Lausanne) ; 9: 956179, 2022.
Article in English | MEDLINE | ID: mdl-36874950

ABSTRACT

Purpose: The purpose of this study was to investigate the retinal vascular morphological characteristics of high myopia patients of different severity. Methods: 317 eyes of high myopia patients and 104 eyes of healthy control subjects were included in this study. The severity of high myopia was classified as C0-C4 according to the Meta-Analysis of Pathologic Myopia (META-PM) classification, and vascular morphological characteristics in ultra-wide-field imaging were analyzed using transfer learning methods and RU-net. Correlations with axial length (AL), best corrected visual acuity (BCVA) and age were analyzed. In addition, the vascular morphological characteristics of myopic choroidal neovascularization (mCNV) patients and their matched high myopia patients were compared. Results: The RU-net and transfer learning system for blood vessel segmentation had an accuracy of 98.24%, a sensitivity of 71.42%, a specificity of 99.37%, a precision of 73.68% and an F1 score of 72.29%. Compared with the healthy control group, the high myopia group had a smaller vessel angle (31.12 ± 2.27 vs. 32.33 ± 2.14), smaller fractal dimension (Df) (1.383 ± 0.060 vs. 1.424 ± 0.038), smaller vessel density (2.57 ± 0.96 vs. 3.92 ± 0.93) and fewer vascular branches (201.87 ± 75.92 vs. 271.31 ± 67.37), all P < 0.001. With increasing myopic maculopathy severity, vessel angle, Df, vessel density and vascular branches decreased significantly (all P < 0.001). These characteristics correlated significantly with AL, BCVA and age. Patients with mCNV tended to have greater vessel density (P < 0.001) and more vascular branches (P = 0.045). Conclusion: The RU-net and transfer learning technology used in this study achieved an accuracy of 98.24% and thus performs well for quantitative analysis of vascular morphological characteristics in ultra-wide-field images. As myopic maculopathy severity increases and the eyeball elongates, vessel angle, Df, vessel density and vascular branches decrease. Myopic CNV patients have greater vessel density and more vascular branches.
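
The reported accuracy, sensitivity, specificity, precision and F1 score can all be derived from pixel-wise confusion counts between the predicted and reference vessel masks, as in the short sketch below (a generic computation, independent of the RU-net itself).

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise accuracy, sensitivity, specificity, precision and F1 for binary vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sens = tp / (tp + fn)
    prec = tp / (tp + fp)
    return {
        "accuracy":    (tp + tn) / pred.size,
        "sensitivity": sens,
        "specificity": tn / (tn + fp),
        "precision":   prec,
        "f1":          2 * prec * sens / (prec + sens),
    }

# toy masks
rng = np.random.default_rng(1)
truth = rng.random((256, 256)) < 0.1
pred = truth ^ (rng.random((256, 256)) < 0.02)   # corrupt 2% of pixels
print(binary_segmentation_metrics(pred, truth))
```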

9.
Vet Comp Oncol ; 19(4): 624-631, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34173314

ABSTRACT

Soft tissue sarcoma (STS) is a locally aggressive and infiltrative tumour in dogs. Surgical resection is the treatment of choice for local tumour control. Currently, post-operative pathology is performed for surgical margin assessment. Spectral-domain optical coherence tomography (OCT) has recently been evaluated for surgical margin assessment in some canine tumour types. The purpose of this study was to develop an automatic diagnosis system that can assist clinicians in real time with OCT image interpretation of tissues at surgical margins. We utilized a ResNet-50 network to classify healthy and cancerous tissues. A patch-based approach was adopted to achieve accurate classification with limited training data (80 cancer images, 80 normal images) and a validation set (20 cancer images, 20 normal images). The proposed method achieved an average accuracy of 97.1% with an excellent sensitivity of 94.3% on the validation set; the quadratic weighted κ was 0.94 for the STS diagnosis. On an independent test set of 20 OCT images (10 cancer images, 10 normal images), the proposed method correctly differentiated all the STS images. Furthermore, we proposed a diagnostic curve, which can be evaluated in real time to assist clinicians in detecting the specific location of a lesion. In short, the proposed method is accurate, operates in real time and is non-invasive, which could be helpful for future surgical guidance.
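
A minimal sketch of the patch-based inference described above: tile an OCT image into patches, classify each with a ResNet-50, and aggregate the votes. The patch size, stride and majority-vote aggregation are assumptions for illustration; the published pipeline and its diagnostic curve are not reproduced here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Binary classifier head on a standard ResNet-50 backbone (patch size and threshold are assumptions).
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

def classify_oct_image(image: torch.Tensor, patch: int = 224, stride: int = 224) -> float:
    """Split a (3, H, W) OCT image into patches and return the fraction predicted cancerous."""
    patches = image.unfold(1, patch, stride).unfold(2, patch, stride)   # (3, nH, nW, patch, patch)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, patch, patch)
    with torch.no_grad():
        votes = model(patches).argmax(dim=1)                            # 1 = cancer
    return votes.float().mean().item()

score = classify_oct_image(torch.randn(3, 448, 896))
print("cancerous" if score > 0.5 else "healthy", score)
```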


Subject(s)
Deep Learning , Dog Diseases , Sarcoma , Animals , Dog Diseases/diagnostic imaging , Dog Diseases/surgery , Dogs , Margins of Excision , Sarcoma/diagnostic imaging , Sarcoma/surgery , Sarcoma/veterinary , Tomography, Optical Coherence/veterinary
10.
Math Biosci Eng ; 18(3): 2331-2356, 2021 03 08.
Article in English | MEDLINE | ID: mdl-33892548

ABSTRACT

Collagen alignment has shown clinical significance in a variety of diseases. For instance, vulvar lichen sclerosus (VLS) is characterized by homogenization of collagen fibers with an increasing risk of malignant transformation. To date, a variety of imaging techniques have been developed to visualize collagen fibers, but few works have focused on quantifying their alignment quality. To assess the level of disorder of local fiber orientation, the homogeneity index (HI), based on limiting entropy, is proposed as an indicator of disorder. The proposed method is validated by experiments on polylactic acid (PLA) filament phantoms with controlled fiber alignment quality. A case study on 20 VLS tissue biopsies and 14 normal tissue biopsies shows that HI can effectively distinguish VLS tissue from normal tissue (P < 0.01). The classification results are very promising, with a sensitivity of 93% and a specificity of 95%, indicating that our method can provide a quantitative assessment of the alignment quality of collagen fibers in VLS tissue and aid in improving the histopathological examination of VLS.
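
The exact HI formulation is not given in the abstract; as a rough analogue, the sketch below measures disorder as the normalized Shannon entropy of the gradient-orientation histogram of an image, which is low for well-aligned fibers and high for disordered ones. The bin count and gradient-based orientation estimation are illustrative choices.

```python
import numpy as np

def orientation_entropy(image: np.ndarray, n_bins: int = 18) -> float:
    """Shannon entropy of the local gradient-orientation distribution (0 = perfectly aligned fibers).

    Entropy is normalized by log(n_bins), so the value lies in [0, 1].
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), np.pi)          # fiber orientations are pi-periodic
    hist, _ = np.histogram(theta, bins=n_bins, range=(0, np.pi), weights=mag)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(n_bins))

# toy check: horizontal stripes (aligned) vs. random noise (disordered)
y = np.arange(128)[:, None]
stripes = np.sin(y / 3.0) * np.ones((1, 128))
noise = np.random.default_rng(0).random((128, 128))
print(orientation_entropy(stripes), orientation_entropy(noise))   # low vs. high
```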


Subject(s)
Collagen , Extracellular Matrix , Diagnostic Imaging , Entropy , Skin
11.
IEEE J Biomed Health Inform ; 24(12): 3374-3383, 2020 12.
Article in English | MEDLINE | ID: mdl-32750919

ABSTRACT

Cataracts are the leading cause of visual impairment worldwide. Examination of the retina through cataracts using a fundus camera is challenging and error-prone due to degraded image quality. We sought to develop an algorithm to dehaze such images to support diagnosis by either ophthalmologists or computer-aided diagnosis systems. Based on the generative adversarial network (GAN) concept, we designed two neural networks: CataractSimGAN and CataractDehazeNet. CataractSimGAN was intended for the synthesis of cataract-like images from unpaired clear retinal images and cataract images. CataractDehazeNet was trained using pairs of synthesized cataract-like images and the corresponding clear images through supervised learning. With the two networks trained independently, the number of hyper-parameters was reduced, leading to better performance. We collected 400 retinal images without cataracts and 400 hazy images from cataract patients as the training dataset. Fifty cataract images and the corresponding clear images from the same patients after surgery comprised the test dataset. The clear images after surgery were used as a reference to evaluate the performance of our method. CataractDehazeNet was able to substantially enhance the degraded images from cataract patients and to visualize blood vessels and the optic disc, while actively suppressing the artifacts common to similar methods. Thus, we developed an algorithm to improve the quality of retinal images acquired from cataract patients, achieving high structural similarity and fidelity between the processed images and images from the same patients after cataract surgery.
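
CataractDehazeNet's architecture is not described here; the sketch below only illustrates the supervised training setup, i.e. one optimization step of a stand-in image-to-image network on a paired batch of synthesized hazy images and their clear counterparts, using an L1 reconstruction loss. Network layers, loss choice and hyper-parameters are placeholders.

```python
import torch
import torch.nn as nn

# Stand-in dehazing network; the real CataractDehazeNet architecture is not reproduced here.
dehaze_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(dehaze_net.parameters(), lr=1e-4)
l1 = nn.L1Loss()

# One supervised step on a paired batch: synthesized cataract-like input vs. the original clear image.
hazy = torch.rand(4, 3, 256, 256)     # would come from a CataractSimGAN-style simulator (placeholder data)
clear = torch.rand(4, 3, 256, 256)

optimizer.zero_grad()
loss = l1(dehaze_net(hazy), clear)    # adversarial/perceptual terms could be added on top
loss.backward()
optimizer.step()
print(float(loss))
```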


Subject(s)
Cataract/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Retina/diagnostic imaging , Aged , Aged, 80 and over , Algorithms , Cataract Extraction , Deep Learning , Female , Humans , Male , Middle Aged
12.
Acta Ophthalmol ; 98(3): e339-e345, 2020 May.
Article in English | MEDLINE | ID: mdl-31559701

ABSTRACT

BACKGROUND: The purpose of this study was to develop an automated diagnosis and quantitative analysis system for plus disease. The system not only provides a diagnostic decision but also performs quantitative analysis of the typical pathological features of the disease, which helps physicians make the best judgement and communicate decisions. METHODS: The deep learning network provided segmentation of the retinal vessels and the optic disc (OD). Based on the vessel segmentation, plus disease was classified, and tortuosity, width, fractal dimension and vessel density were evaluated automatically. RESULTS: The trained network achieved a sensitivity of 95.1% with 97.8% specificity for the diagnosis of plus disease. For detection of preplus or worse, the sensitivity and specificity were 92.4% and 97.4%. The quadratic weighted κ was 0.9244. The tortuosities for the normal, preplus and plus groups were 3.61 ± 0.08, 5.95 ± 1.57 and 10.67 ± 0.50 (10⁴ cm⁻³). The widths of the blood vessels were 63.46 ± 0.39, 67.21 ± 0.70 and 68.89 ± 0.75 µm. The fractal dimensions were 1.18 ± 0.01, 1.22 ± 0.01 and 1.26 ± 0.02. The vessel densities were 1.39 ± 0.03, 1.60 ± 0.01 and 1.64 ± 0.09 (%). All values differed significantly among the groups. After treatment of plus disease with ranibizumab injection, quantitative analysis showed significant changes in the pathological features. CONCLUSIONS: Our system achieved high diagnostic accuracy for plus disease in retinopathy of prematurity and provided a quantitative analysis of the dynamic features of disease progression. This automated system can assist physicians by providing a classification decision with auxiliary quantitative evaluation of the typical pathological features of the disease.
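
Tortuosity in this study is reported in 10⁴ cm⁻³, suggesting a curvature-based index; the sketch below instead shows the simpler and widely used arc-to-chord tortuosity of a vessel centerline, purely to illustrate how such a metric is computed from segmented vessels. It is not the exact index used in the paper.

```python
import numpy as np

def arc_chord_tortuosity(points: np.ndarray) -> float:
    """Arc length divided by chord length for an ordered vessel centerline (N, 2) in pixels.

    1.0 means a perfectly straight vessel; larger values mean a more tortuous vessel.
    """
    seg = np.diff(points, axis=0)
    arc = np.sqrt((seg ** 2).sum(axis=1)).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc / chord)

# toy centerlines: a straight segment vs. a wavy one
t = np.linspace(0, 100, 200)
straight = np.column_stack([t, np.zeros_like(t)])
wavy = np.column_stack([t, 5 * np.sin(t / 4)])
print(arc_chord_tortuosity(straight), arc_chord_tortuosity(wavy))
```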


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Retinopathy of Prematurity/diagnosis , Angiogenesis Inhibitors/administration & dosage , Diagnosis, Computer-Assisted , Humans , Infant, Extremely Low Birth Weight , Infant, Newborn , Infant, Premature , Intravitreal Injections , ROC Curve , Ranibizumab/administration & dosage , Retinal Vessels/diagnostic imaging , Retinal Vessels/pathology , Retinopathy of Prematurity/drug therapy
13.
Clin Exp Ophthalmol ; 48(2): 220-229, 2020 03.
Article in English | MEDLINE | ID: mdl-31648403

ABSTRACT

BACKGROUND: To define a new quantitative grading criterion for retinal haemorrhages in term newborns based on the segmentation results of a deep convolutional neural network. METHODS: We constructed a dataset of 1543 retinal images acquired from 847 term newborns and developed a deep convolutional neural network to segment retinal haemorrhages, blood vessels and optic discs and to locate the macular region. Based on the ratio of the retinal haemorrhage area to the optic disc area and on the location of the retinal haemorrhages relative to the macular region, we defined a new criterion to grade the degree of retinal haemorrhage in term newborns. RESULTS: The F1 scores of the proposed network for segmenting retinal haemorrhages, blood vessels and optic discs were 0.84, 0.73 and 0.94, respectively. Compared with two commonly used retinal haemorrhage grading criteria, the new method is more accurate, objective and quantitative, with the location of the haemorrhages relative to the macula as an important factor. CONCLUSIONS: Based on a deep convolutional neural network, we can segment retinal haemorrhages, blood vessels and optic discs with high accuracy. The proposed grading criterion considers not only the area of the haemorrhages but also their location relative to the macular region, providing a more objective and comprehensive evaluation criterion. The developed network offers an end-to-end solution that can assist doctors in grading retinal haemorrhages in term newborns.
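
As an illustration of how such a criterion can be applied to segmentation output, the sketch below combines the haemorrhage-to-disc area ratio with macular involvement in a toy grading rule. The thresholds and grade labels are invented for the example and do not reproduce the published criterion.

```python
import numpy as np

def grade_haemorrhage(haem_mask: np.ndarray, disc_mask: np.ndarray, macula_mask: np.ndarray) -> str:
    """Toy grading rule from binary segmentation masks.

    Combines the haemorrhage-to-optic-disc area ratio with macular involvement;
    the thresholds and grade names here are illustrative, not the published criterion.
    """
    if haem_mask.sum() == 0:
        return "grade 0 (none)"
    ratio = haem_mask.sum() / max(disc_mask.sum(), 1)
    macular = bool((haem_mask & macula_mask).any())
    if macular or ratio > 4:
        return "grade 3 (severe)"
    return "grade 2 (moderate)" if ratio > 1 else "grade 1 (mild)"

# toy masks
h, w = 512, 512
haem = np.zeros((h, w), bool); haem[100:140, 100:140] = True
disc = np.zeros((h, w), bool); disc[200:240, 400:440] = True
macula = np.zeros((h, w), bool); macula[220:320, 200:300] = True
print(grade_haemorrhage(haem, disc, macula))
```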


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Retinal Hemorrhage/classification , Retinal Hemorrhage/diagnostic imaging , Deep Learning , Humans , Infant, Newborn , Optic Disk/pathology , Retinal Vessels/pathology , Term Birth
14.
Appl Opt ; 58(14): 3877-3885, 2019 May 10.
Article in English | MEDLINE | ID: mdl-31158206

ABSTRACT

Retinal vessel oxygen supply is important for retinal tissue metabolism. Commonly used retinal vessel oximetry devices are based on dual-wavelength spectral measurement of oxyhemoglobin and deoxyhemoglobin. However, there is no traceable standard for reliable calibration of these devices. In this study, we developed a fundus-simulating phantom that closely mimics the optical properties of human fundus tissues. Microchannels with precisely controlled topological structures were produced by soft lithography to simulate the retinal vasculature. The optical properties of the phantom were adjusted by adding scattering and absorption agents to simulate different concentrations of fundus pigments. The developed phantom was used to calibrate the linear correlation between oxygen saturation (SO2) level and optical density ratio in a dual-wavelength oximetry device. The obtained calibration factors were used to calculate the retinal vessel SO2 in both eyes of five volunteers aged between 24 and 27 years. The test results showed that the mean arterial and venous SO2 levels after phantom calibration were consistent with those obtained using empirical calibration values, indicating the potential clinical utility of the phantom as a calibration standard.
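
The calibration rests on the standard two-wavelength oximetry model: the optical density ratio (ODR) between an oxygen-sensitive band (610 nm) and a near-isosbestic band (570 nm) is approximately linear in SO2. A minimal sketch, with made-up intensities and calibration constants:

```python
import numpy as np

def optical_density_ratio(i_vessel_610, i_bg_610, i_vessel_570, i_bg_570) -> float:
    """Two-wavelength retinal oximetry: OD at the oxygen-sensitive 610 nm band
    divided by OD at the (near-isosbestic) 570 nm band."""
    od_610 = np.log10(i_bg_610 / i_vessel_610)
    od_570 = np.log10(i_bg_570 / i_vessel_570)
    return od_610 / od_570

def so2_from_odr(odr: float, a: float, b: float) -> float:
    """Linear calibration SO2 = a + b * ODR; a and b come from phantom (or empirical) calibration."""
    return a + b * odr

# example with made-up vessel/background intensities and calibration constants
odr = optical_density_ratio(80.0, 140.0, 60.0, 150.0)
print(so2_from_odr(odr, a=1.25, b=-1.1))
```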

15.
J Biophotonics ; 12(9): e201800410, 2019 09.
Article in English | MEDLINE | ID: mdl-31081258

ABSTRACT

Manual counting of parasites in fecal samples requires costly components and substantial expertise, limiting its use in resource-constrained settings and encouraging overuse of prophylactic medication. To address this issue, a cost-effective, automated parasite diagnostic system that does not require special sample preparation or a trained user was developed. It is composed of an inexpensive (~US$350), portable, robotic microscope that can scan the area of an entire McMaster chamber (100 mm²) and capture high-resolution (~1 µm lateral resolution) bright-field images without user intervention. Fecal samples prepared using the McMaster flotation method were imaged, with the imaging region comprising the entire McMaster chamber. These images are then automatically segmented and analyzed using a trained convolutional neural network (CNN) to robustly separate eggs from background debris. Simple postprocessing of the CNN output yields both egg species and egg counts. The system was validated by comparing its results with hand counts by a trained operator, with excellent performance. As a further demonstration of utility, the system was used to conveniently quantify drug response over time in a single animal, showing residual disease due to anthelmintic resistance after 2 weeks.
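
A plausible version of the "simple postprocessing" step, converting a CNN probability map into an egg count by thresholding and connected-component filtering with SciPy; the threshold and minimum blob area are illustrative values, not those of the published system.

```python
import numpy as np
from scipy import ndimage

def count_eggs(prob_map: np.ndarray, threshold: float = 0.5, min_area: int = 200) -> int:
    """Post-process a CNN egg-probability map into an egg count.

    Threshold, label connected components, and discard blobs smaller than a plausible egg
    (threshold and min_area are illustrative values only).
    """
    binary = prob_map > threshold
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))

# toy probability map with two egg-sized blobs and one speck of debris
pm = np.zeros((500, 500))
pm[50:80, 50:80] = 0.9
pm[300:340, 200:240] = 0.8
pm[400:402, 400:402] = 0.95
print(count_eggs(pm))   # -> 2
```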


Subject(s)
Deep Learning , Feces/parasitology , Microscopy/methods , Parasitemia/diagnostic imaging , Pattern Recognition, Automated , Animals , Anthelmintics/pharmacology , Dogs , Drug Resistance , Eimeria , Goats , Haplorhini , Image Processing, Computer-Assisted/methods , Machine Learning , Microscopy/economics , Microscopy/veterinary , Neural Networks, Computer , Parasitemia/economics , Parasitemia/veterinary , Robotics , Sheep , Specimen Handling
16.
Biomed Opt Express ; 9(10): 4863-4878, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-30319908

ABSTRACT

Diabetic retinopathy (DR) is a leading cause of blindness worldwide, yet 90% of DR-caused blindness can be prevented if diagnosed and treated early. Retinal exudates can be observed at the early stage of DR and can serve as signs for early DR diagnosis. Deep convolutional neural networks (DCNNs) have been applied to exudate detection with promising results. However, two main challenges arise when applying DCNN-based methods to exudate detection. One is the very limited amount of labeled data available from medical experts, and the other is the severely imbalanced class distribution. First, there are many more images of normal eyes than of eyes with exudates, particularly in screening datasets. Second, the number of normal pixels (non-exudates) is much greater than the number of abnormal pixels (exudates) in images containing exudates. To tackle the small sample set problem, an ensemble convolutional neural network (MU-net) based on a U-net structure is presented in this paper. To alleviate the imbalanced data problem, a conditional generative adversarial network (cGAN) is adopted to generate label-preserving minority-class data for data augmentation. The network was trained on one dataset (e_ophtha_EX) and tested on three other public datasets (DiaReTDB1, HEI-MED and MESSIDOR). cGAN, as a data augmentation method, significantly improves network robustness and generalization, achieving F1-scores of 92.79%, 92.46%, 91.27%, and 94.34%, respectively, measured at the lesion level; without cGAN, the corresponding F1-scores were 92.66%, 91.41%, 90.72%, and 90.58%. Measured at the image level, with cGAN we achieved accuracies of 95.45%, 92.13%, 88.76%, and 89.58%, compared with 86.36%, 87.64%, 76.33%, and 86.42% without cGAN.

17.
Mol Biol Cell ; 27(16): 2528-41, 2016 08 15.
Article in English | MEDLINE | ID: mdl-27385337

ABSTRACT

Rho GAPs are important regulators of Rho GTPases, which are involved in various steps of cytokinesis and other processes. However, regulation of Rho-GAP cellular localization and function is not fully understood. Here we report the characterization of a novel coiled-coil protein Rng10 and its relationship with the Rho-GAP Rga7 in fission yeast. Both rng10Δ and rga7Δ result in defective septum and cell lysis during cytokinesis. Rng10 and Rga7 colocalize on the plasma membrane at the cell tips during interphase and at the division site during cell division. Rng10 physically interacts with Rga7 in affinity purification and coimmunoprecipitation. Of interest, Rga7 localization is nearly abolished without Rng10. Moreover, Rng10 and Rga7 work together to regulate the accumulation and dynamics of glucan synthases for successful septum formation in cytokinesis. Our results show that cellular localization and function of the Rho-GAP Rga7 are regulated by a novel protein, Rng10, during cytokinesis in fission yeast.


Subject(s)
GTPase-Activating Proteins/physiology , Schizosaccharomyces/physiology , Cell Division/physiology , Cell Wall/metabolism , Cytokinesis , GTPase-Activating Proteins/genetics , GTPase-Activating Proteins/metabolism , Glucosyltransferases/metabolism , Guanine Nucleotide Exchange Factors/genetics , Guanine Nucleotide Exchange Factors/metabolism , Protein Structural Elements , Schizosaccharomyces/cytology , Schizosaccharomyces/metabolism , rho GTP-Binding Proteins/metabolism
18.
J Biol Chem ; 290(40): 24592-603, 2015 Oct 02.
Article in English | MEDLINE | ID: mdl-26306047

ABSTRACT

Cell membrane repair is an important aspect of physiology, and disruption of this process can result in pathophysiology in a number of different tissues, including wound healing, chronic ulcer and scarring. We have previously identified a novel tripartite motif family protein, MG53, as an essential component of the cell membrane repair machinery. Here we report the functional role of MG53 in the modulation of wound healing and scarring. Although MG53 is absent from keratinocytes and fibroblasts, remarkable defects in skin architecture and collagen overproduction are observed in mg53(-/-) mice, and these animals display delayed wound healing and abnormal scarring. Recombinant human MG53 (rhMG53) protein, encapsulated in a hydrogel formulation, facilitates wound healing and prevents scarring in rodent models of dermal injuries. An in vitro study shows that rhMG53 protects against acute injury to keratinocytes and facilitates the migration of fibroblasts in response to scratch wounding. During fibrotic remodeling, rhMG53 interferes with TGF-ß-dependent activation of myofibroblast differentiation. The resulting down-regulation of α smooth muscle actin and extracellular matrix proteins contributes to reduced scarring. Overall, these studies establish a trifunctional role for MG53 as a facilitator of rapid injury repair, a mediator of cell migration, and a modulator of myofibroblast differentiation during wound healing. Targeting the functional interaction between MG53 and TGF-ß signaling may present a potentially effective means for promoting scarless wound healing.


Subject(s)
Carrier Proteins/physiology , Cell Membrane/metabolism , Muscle Proteins/physiology , Vesicular Transport Proteins/physiology , Wound Healing/physiology , 3T3 Cells , Actins/metabolism , Animals , Cell Differentiation , Cell Movement , Cicatrix/pathology , Collagen Type I/metabolism , Fibroblasts/cytology , Fibronectins/metabolism , Fibrosis/pathology , Gene Expression Regulation , Humans , Hydrogels/chemistry , Keratinocytes/metabolism , Membrane Proteins , Mice , Muscle, Smooth/metabolism , Myofibroblasts/metabolism , Rabbits , Rats , Rats, Sprague-Dawley , Recombinant Proteins/metabolism , Skin/pathology , Tripartite Motif Proteins
19.
J R Soc Interface ; 12(109): 20150049, 2015 Aug 06.
Article in English | MEDLINE | ID: mdl-26246416

ABSTRACT

The formation of a collectively moving group benefits individuals within a population in a variety of ways. The surface-dwelling bacterium Myxococcus xanthus forms dynamic collective groups both to feed on prey and to aggregate during times of starvation. The latter behaviour, termed fruiting-body formation, involves a complex, coordinated series of density changes that ultimately lead to three-dimensional aggregates comprising hundreds of thousands of cells and spores. How a loose, two-dimensional sheet of motile cells produces a fixed aggregate has remained a mystery as current models of aggregation are either inconsistent with experimental data or ultimately predict unstable structures that do not remain fixed in space. Here, we use high-resolution microscopy and computer vision software to spatio-temporally track the motion of thousands of individuals during the initial stages of fruiting-body formation. We find that cells undergo a phase transition from exploratory flocking, in which unstable cell groups move rapidly and coherently over long distances, to a reversal-mediated localization into one-dimensional growing streams that are inherently stable in space. These observations identify a new phase of active collective behaviour and answer a long-standing open question in Myxococcus development by describing how motile cell groups can remain statistically fixed in a spatial location.


Subject(s)
Models, Biological , Myxococcus xanthus/physiology
20.
Opt Lett ; 40(13): 2989-92, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26125349

ABSTRACT

Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that the image reconstruction algorithms can handle. Multiple algorithms have been developed to accurately locate emitters even when they overlap significantly. Currently, the compressive-sensing-based algorithm CSSTORM achieves the highest emitter density, but it is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and makes online super-resolution reconstruction of high-density emitters possible.
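
For context on what the compressive-sensing formulation solves, the sketch below runs plain ISTA on a 1-D toy problem: recover sparse emitter positions on a fine grid from a PSF-blurred, coarsely sampled signal via ℓ1-regularized least squares. This illustrates the CSSTORM-style problem setup only; MempSTORM's two-dimensional spectral approach is not reproduced here, and all sizes and constants are made up for the example.

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.05, n_iter: int = 2000) -> np.ndarray:
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# Toy 1-D example: two emitters on a fine grid, observed through a Gaussian PSF at coarse pixels.
grid = np.linspace(0, 1, 200)                      # candidate emitter positions
samples = np.linspace(0, 1, 40)                    # camera pixels
psf = lambda d: np.exp(-d ** 2 / (2 * 0.03 ** 2))
A = psf(samples[:, None] - grid[None, :])          # measurement matrix: PSF sampled at each pixel
truth = np.zeros(200)
truth[[80, 120]] = 1.0                             # two true emitters
y = A @ truth + 0.01 * np.random.default_rng(0).normal(size=40)

x_hat = ista(A, y)
print(np.nonzero(x_hat > 0.2 * x_hat.max())[0])    # recovered support concentrates near indices 80 and 120
```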


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Microscopy