Results 1 - 20 of 32
1.
IEEE Trans Med Imaging ; PP, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557622

ABSTRACT

Ophthalmic diseases such as central serous chorioretinopathy (CSC) significantly impair the vision of millions of people globally. Precise segmentation of choroid and macular edema is critical for diagnosing and treating these conditions. However, existing 3D medical image segmentation methods often fall short due to the heterogeneous nature and blurry features of these conditions, compounded by medical image clarity issues and noise interference arising from equipment and environmental limitations. To address these challenges, we propose the Spectrum Analysis Synergy Axial-Spatial Network (SASAN), an approach that innovatively integrates spectrum features using the Fast Fourier Transform (FFT). SASAN incorporates two key modules: the Frequency Integrated Neural Enhancer (FINE), which mitigates noise interference, and the Axial-Spatial Elementum Multiplier (ASEM), which enhances feature extraction. Additionally, we introduce the Self-Adaptive Multi-Aspect Loss (LSM), which balances image regions, distribution, and boundaries, adaptively updating weights during training. We compiled and meticulously annotated the Choroid and Macular Edema OCT Mega Dataset (CMED-18k), currently the world's largest dataset of its kind. Comparative analysis against 13 baselines shows our method surpasses these benchmarks, achieving the highest Dice scores and lowest HD95 in the CMED and OIMHS datasets. Our code is publicly available at https://github.com/IMOP-lab/SASAN-Pytorch.
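The spectrum-integration idea behind FINE can be illustrated with a plain FFT filter. The following is a minimal sketch assuming a fixed radial low-pass mask; the actual FINE module is learned, and the names `fft_lowpass` and `keep_ratio` are illustrative, not from the paper.

```python
import numpy as np

def fft_lowpass(image, keep_ratio=0.25):
    """Suppress high-frequency noise by masking the FFT spectrum.

    Generic illustration of spectrum-domain denoising; SASAN's FINE
    module is a learned enhancer, not a fixed radial mask.
    """
    f = np.fft.fftshift(np.fft.fft2(image))  # centre the DC component
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = keep_ratio * min(h, w) / 2
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return np.real(filtered)

# A constant image has all its energy at DC, so it passes through
# the low-pass mask essentially unchanged.
img = np.ones((32, 32))
out = fft_lowpass(img)
```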

2.
Artif Intell Med ; 150: 102837, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38553151

ABSTRACT

The thickness of the choroid is considered to be an important indicator of clinical diagnosis. Therefore, accurate choroid segmentation in retinal OCT images is crucial for monitoring various ophthalmic diseases. However, this is still challenging due to the blurry boundaries and interference from other lesions. To address these issues, we propose a novel prior-guided and knowledge diffusive network (PGKD-Net) to fully utilize retinal structural information to highlight choroidal region features and boost segmentation performance. Specifically, it is composed of two parts: a Prior-mask Guided Network (PG-Net) for coarse segmentation and a Knowledge Diffusive Network (KD-Net) for fine segmentation. In addition, we design two novel feature enhancement modules, Multi-Scale Context Aggregation (MSCA) and Multi-Level Feature Fusion (MLFF). The MSCA module captures the long-distance dependencies between features from different receptive fields and improves the model's ability to learn global context. The MLFF module integrates the cascaded context knowledge learned from PG-Net to benefit fine-level segmentation. Comprehensive experiments are conducted to evaluate the performance of the proposed PGKD-Net. Experimental results show that our proposed method achieves superior segmentation accuracy over other state-of-the-art methods. Our code is made publicly available at: https://github.com/yzh-hdu/choroid-segmentation.


Subjects
Choroid , Learning , Choroid/diagnostic imaging , Retina/diagnostic imaging , Image Processing, Computer-Assisted
3.
Sensors (Basel) ; 24(3)2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38339491

ABSTRACT

Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis.


Subjects
Retinal Vein Occlusion , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Reproducibility of Results , Retinal Vein Occlusion/diagnosis , Retinal Vessels/diagnostic imaging , Angiography
4.
IEEE J Biomed Health Inform ; 28(1): 66-77, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37368799

ABSTRACT

Deep learning methods are nowadays frequently used to segment histopathology images with high-quality annotations. Compared with well-annotated data, coarse, scribble-like labelling is more cost-effective and easier to obtain in clinical practice. The coarse annotations provide limited supervision, so employing them directly for segmentation network training remains challenging. We present a sketch-supervised method, called DCTGN-CAM, based on a dual CNN-Transformer network and a modified global normalised class activation map. By modelling global and local tumour features simultaneously, the dual CNN-Transformer network produces accurate patch-based tumour classification probabilities by training only on lightly annotated data. With the global normalised class activation map, more descriptive gradient-based representations of the histopathology images can be obtained, and inference of tumour segmentation can be performed with high accuracy. Additionally, we collect a private skin cancer dataset named BSS, which contains fine and coarse annotations for three types of cancer. To facilitate reproducible performance comparison, experts are also invited to label coarse annotations on the public liver cancer dataset PAIP2019. On the BSS dataset, our DCTGN-CAM segmentation outperforms the state-of-the-art methods and achieves 76.68% IoU and 86.69% Dice scores on the sketch-based tumour segmentation task. On the PAIP2019 dataset, our method achieves a Dice gain of 8.37% compared with U-Net as the baseline network.
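The reported 76.68% IoU and 86.69% Dice are mutually consistent, since for the same pair of masks Dice = 2J/(1 + J), where J is the IoU (Jaccard index). A minimal sketch of both metrics for binary masks; the function name and toy masks are illustrative:

```python
import numpy as np

def iou_and_dice(pred, target):
    """Compute IoU (Jaccard) and Dice for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice

# Dice = 2*IoU / (1 + IoU): 76.68% IoU implies roughly 86.8% Dice.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_and_dice(pred, target)
```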


Subjects
Liver Neoplasms , Skin Neoplasms , Humans , Electric Power Supplies , Probability , Image Processing, Computer-Assisted
6.
Sci Data ; 10(1): 769, 2023 11 06.
Article in English | MEDLINE | ID: mdl-37932307

ABSTRACT

Macular holes, one of the most common macular diseases, require timely treatment. The morphological changes on optical coherence tomography (OCT) images provide an opportunity for direct observation of the disease, and accurate segmentation is needed to identify and quantify the lesions. Development of such algorithms has been obstructed by a lack of high-quality datasets (OCT images and the corresponding gold-standard macular hole segmentation labels), especially for supervised learning-based segmentation algorithms. In this context, we established a large OCT image macular hole segmentation (OIMHS) dataset with 3859 B-scan images of 119 patients, and each image provides four segmentation labels: retina, macular hole, intraretinal cysts, and choroid. This dataset offers an excellent opportunity for investigating the accuracy and reliability of different segmentation algorithms for macular holes, and a new research insight into the further development of clinical research for macular diseases, which includes the retina, lesions, and choroid in quantitative analyses.


Subjects
Retinal Perforations , Tomography, Optical Coherence , Humans , Algorithms , Retinal Perforations/diagnostic imaging , Retinal Perforations/pathology , Tomography, Optical Coherence/methods
7.
Front Cardiovasc Med ; 10: 1250800, 2023.
Article in English | MEDLINE | ID: mdl-37868778

ABSTRACT

Introduction: Changes in coronary artery luminal dimensions during the cardiac cycle can impact the accurate quantification of volumetric analyses in intravascular ultrasound (IVUS) image studies. Accurate ED-frame detection is pivotal for guiding interventional decisions, optimizing therapeutic interventions, and ensuring standardized volumetric analysis in research studies. Images acquired at different phases of the cardiac cycle may also lead to inaccurate quantification of atheroma volume due to the longitudinal motion of the catheter in relation to the vessel. As IVUS images are acquired throughout the cardiac cycle, end-diastolic frames are typically identified retrospectively by human analysts to minimize motion artefacts and enable more accurate and reproducible volumetric analysis. Methods: In this paper, a novel neural network-based approach for accurate end-diastolic frame detection in IVUS sequences is proposed, trained using electrocardiogram (ECG) signals acquired synchronously during IVUS acquisition. The framework integrates dedicated motion encoders and a bidirectional attention recurrent network (BARNet) with a temporal difference encoder to extract frame-by-frame motion features corresponding to the phases of the cardiac cycle. In addition, a spatiotemporal rotation encoder is included to capture the IVUS catheter's rotational movement with respect to the coronary artery. Results: With a prediction tolerance range of 66.7 ms, the proposed approach was able to find 71.9%, 67.8%, and 69.9% of end-diastolic frames in the left anterior descending, left circumflex and right coronary arteries, respectively, when tested against ECG estimations. When compared with the estimates of two expert analysts, the approach achieved superior performance.
Discussion: These findings indicate that the developed methodology is accurate and fully reproducible and therefore it should be preferred over experts for end-diastolic frame detection in IVUS sequences.
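The 66.7 ms tolerance above means a detected frame counts as correct when it falls within that window of an ECG-derived reference time. A hedged sketch of such an evaluation, assuming millisecond timestamps for predictions and references; the function and data are hypothetical, not the paper's evaluation code:

```python
import numpy as np

def hit_rate(predicted_ms, reference_ms, tol_ms=66.7):
    """Fraction of reference end-diastolic times matched by at least
    one prediction within +/- tol_ms (mirrors the paper's 66.7 ms
    tolerance window; purely illustrative)."""
    predicted_ms = np.asarray(predicted_ms, dtype=float)
    hits = sum(
        np.min(np.abs(predicted_ms - ref)) <= tol_ms
        for ref in reference_ms
    )
    return hits / len(reference_ms)

ref = [0.0, 800.0, 1600.0]    # ECG-derived ED times (ms)
pred = [30.0, 790.0, 1750.0]  # detector output; last one misses
rate = hit_rate(pred, ref)
```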

8.
Sensors (Basel) ; 23(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836972

ABSTRACT

This paper designs a fast image-based indoor localization method based on an anchor control network (FILNet) to improve localization accuracy and shorten the duration of feature matching. In particular, two stages are developed for the proposed algorithm. The offline stage constructs an anchor feature fingerprint database based on the concept of an anchor control network. Detailed surveys are introduced to infer anchor features from the information of control anchors, using visual-inertial odometry (VIO) based on Google ARCore. In addition, an affine invariance enhancement algorithm based on feature multi-angle screening and supplementation is developed to solve the image perspective transformation problem and complete the feature fingerprint database construction. In the online stage, a fast spatial indexing approach is adopted to improve the feature matching speed by searching for active anchors and matching only anchor features around the active anchors. Further, to improve the correct matching rate, a homography matrix filter model is used to verify the correctness of feature matching, and the correct matching points are selected. Extensive experiments in real-world scenarios are performed to evaluate the proposed FILNet. The experimental results show that in terms of affine invariance, compared with the initial local features, FILNet significantly improves the recall of feature matching from 26% to 57% when the angular deviation is less than 60 degrees. In the image feature matching stage, compared with the initial K-D tree algorithm, FILNet significantly improves the efficiency of feature matching, and the average time on the test image dataset is reduced from 30.3 ms to 12.7 ms. In terms of localization accuracy, compared with the benchmark method based on image localization, FILNet significantly improves the localization accuracy, and the percentage of images with a localization error of less than 0.1 m increases from 31.61% to 55.89%.
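The homography matrix filter step can be pictured as a reprojection-error test: matches inconsistent with the estimated homography are discarded. A simplified numpy sketch, assuming the homography H is already known (in practice it would be estimated robustly, e.g. with RANSAC); the function name and threshold are illustrative:

```python
import numpy as np

def filter_matches(H, src_pts, dst_pts, thresh=3.0):
    """Keep matches whose reprojection error under homography H is
    below thresh pixels. A stand-in for the paper's homography filter
    model; not the authors' implementation."""
    pts = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                       # dehomogenise
    err = np.linalg.norm(proj - dst_pts, axis=1)
    return err < thresh

H = np.eye(3)                              # identity: no transform
src = np.array([[0.0, 0.0], [10.0, 5.0]])
dst = np.array([[0.5, 0.0], [40.0, 5.0]])  # second match is an outlier
keep = filter_matches(H, src, dst)
```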

9.
Med Image Anal ; 89: 102922, 2023 10.
Article in English | MEDLINE | ID: mdl-37598605

ABSTRACT

Intravascular ultrasound (IVUS) is recommended in guiding coronary intervention. The segmentation of coronary lumen and external elastic membrane (EEM) borders in IVUS images is a key step, but the manual process is time-consuming and error-prone, and suffers from inter-observer variability. In this paper, we propose a novel perceptual organisation-aware selective transformer framework that can achieve accurate and robust segmentation of the vessel walls in IVUS images. In this framework, temporal context-based feature encoders extract efficient motion features of vessels. Then, a perceptual organisation-aware selective transformer module is proposed to extract accurate boundary information, supervised by a dedicated boundary loss. The obtained EEM and lumen segmentation results are fused in a temporal constraining and fusion module, to determine the most likely correct boundaries with robustness to morphology. Our proposed methods are extensively evaluated in unselected IVUS sequences, including normal, bifurcated, and calcified vessels with shadow artifacts. The results show that the proposed methods outperform the state-of-the-art, with a Jaccard measure of 0.92 for lumen and 0.94 for EEM on the IVUS 2011 open challenge dataset. This work has been integrated into the software QCU-CMS to automatically segment IVUS images in a user-friendly environment.


Subjects
Artifacts , Heart , Humans , Motion (Physics) , Software , Ultrasonography, Interventional
10.
Med Image Anal ; 89: 102929, 2023 10.
Article in English | MEDLINE | ID: mdl-37598606

ABSTRACT

Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists in coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, due to the high variety in scale and appearance of blood vessels and the high similarity in visual features between lesions and retinal vessels. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that can extract local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure capturing the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight the maximum and average pooling to enrich the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize the local details and global context and fuse them in the latent space to adjust the receptive field (RF) based on the task. The evaluation is implemented on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance compared with other existing methods, with a clear advantage in avoiding errors that commonly happen in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT.


Subjects
Face , Retinal Vessels , Humans , Retinal Vessels/diagnostic imaging , Fundus Oculi
11.
Med Phys ; 50(9): 5398-5409, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37490302

ABSTRACT

BACKGROUND: Myopic traction maculopathy (MTM) is a group of retinal disorders caused by traction forces on the macula, which can lead to varying degrees of vision loss in eyes with high myopia. Optical coherence tomography (OCT) is an effective imaging technique for diagnosing, detecting and classifying retinopathy. MTM has been classified into different patterns by OCT, corresponding to different clinical strategies. PURPOSE: We aimed to engineer a deep learning model that can automatically identify MTM in highly myopic (HM) eyes using OCT images. METHODS: A five-class classification model was developed using 2837 OCT images from 958 HM patients. We adopted a ResNet-34 architecture to train the model to identify MTM: no MTM (class 0), extra-foveal maculoschisis (class 1), inner lamellar macular hole (class 2), outer foveoschisis (class 3), and discontinuity or detachment of foveal outer hyperreflective layers (class 4). An independent test set of 604 images from 173 HM patients was used to evaluate the model's performance. Classification performance was assessed according to the area under the curve (AUC), accuracy, sensitivity, and specificity. RESULTS: Our model exhibited a high training performance for classification (F1-score of 0.953; AUCs of 0.961 to 0.998). In the test set, it achieved sensitivities (91.67%-97.78%) and specificities (98.33%-99.17%) as good as, or better than, those of experienced clinicians. Heatmaps were generated to provide visual explanations. CONCLUSIONS: We established a deep learning model for MTM classification using OCT images. This model performed equally well or better than retinal specialists and is suitable for large-scale screening and identifying MTM in HM eyes.
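Per-class sensitivity and specificity for a multi-class classifier like the one above follow directly from the confusion matrix, treating each class one-vs-rest. A generic sketch; the toy 3-class matrix is illustrative, not the paper's data:

```python
import numpy as np

def sens_spec(conf):
    """Per-class sensitivity and specificity from a KxK confusion
    matrix (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fn = conf.sum(axis=1) - tp   # true class, predicted elsewhere
    fp = conf.sum(axis=0) - tp   # other classes predicted as this one
    tn = conf.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Toy 3-class example (the paper uses five MTM classes).
conf = [[8, 1, 1],
        [0, 9, 1],
        [1, 0, 9]]
sens, spec = sens_spec(conf)
```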


Subjects
Macular Degeneration , Myopia, Degenerative , Humans , Myopia, Degenerative/diagnosis , Tomography, Optical Coherence/methods , Traction , Visual Acuity , Macular Degeneration/diagnostic imaging , Retrospective Studies
12.
Sci Data ; 10(1): 380, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37316638

ABSTRACT

When dentists see pediatric patients, whose tooth development during tooth replacement is more complex than that of adults, they need to manually determine the patient's disease with the help of preoperative dental panoramic radiographs. To the best of our knowledge, there is no international public dataset for children's teeth and only a few datasets for adults' teeth, which limits the development of deep learning algorithms for segmenting teeth and automatically analyzing diseases. Therefore, we collected dental panoramic radiographs and case records from 106 pediatric patients aged 2 to 13 years and, with the help of the efficient and intelligent interactive segmentation annotation software EISeg (Efficient Interactive Segmentation) and the image annotation software LabelMe, we propose the world's first dataset of children's dental panoramic radiographs for caries segmentation and dental disease detection, with both segmentation and detection annotations. In addition, another 93 dental panoramic radiographs of pediatric patients, together with our three internationally published adult dental datasets totalling 2,692 images, were collected and made into a segmentation dataset suitable for deep learning.


Subjects
Dental Caries Susceptibility , Stomatognathic Diseases , Adolescent , Child , Child, Preschool , Humans , Algorithms , Knowledge , Radiography, Panoramic
13.
Exp Dermatol ; 32(6): 831-839, 2023 06.
Article in English | MEDLINE | ID: mdl-37017196

ABSTRACT

Basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) are the two most common skin cancers and impose a huge medical burden on society. Histopathological examination based on whole-slide images (WSIs) remains the confirmatory diagnostic method for skin tumors. Accurate segmentation of tumor tissue in WSIs by deep-learning (DL) models can reduce the workload of pathologists and help surgeons ensure the complete removal of tumors. We aimed to accurately segment the tumor areas in WSIs of BCC, SCC and squamous cell papilloma (SCP, homologous to SCC) with robust models. We established a dataset (ZJU-NMSC) containing 151 WSIs of BCC, SCC and SCP in total. Seven models were used to segment the WSIs, including state-of-the-art models, models proposed by us and other models. Dice score, intersection over union, accuracy, sensitivity and specificity were used to evaluate and compare the performance of the different models. Heatmaps and tumor tissue masks were generated to reflect the segmentation results. The processing times of the models were also recorded and compared. Although the Dice score of most models is higher than 0.85, DeepLab v3+ has the best performance, and the corresponding tumor tissue mask is more consistent with the ground-truth tumor areas, even for complex and small lobular lesions. This study broadens the use of DL-based segmentation models in WSIs of skin tumors in terms of tumor types and computational approaches. Segmenting tumor areas can simplify the process of histopathological inspection and benefit the diagnosis and subsequent management of the diseases in practice.


Subjects
Carcinoma, Basal Cell , Carcinoma, Squamous Cell , Deep Learning , Skin Neoplasms , Humans , Semantics , Skin Neoplasms/diagnostic imaging , Carcinoma, Basal Cell/diagnostic imaging , Carcinoma, Squamous Cell/diagnostic imaging , Image Processing, Computer-Assisted/methods
14.
Quant Imaging Med Surg ; 13(3): 1592-1604, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36915314

ABSTRACT

Background: We aimed to propose a deep learning-based approach to automatically measure eyelid morphology in patients with thyroid-associated ophthalmopathy (TAO). Methods: This prospective study consecutively included 74 eyes of patients with TAO and 74 eyes of healthy volunteers visiting the ophthalmology department in a tertiary hospital. Patients diagnosed with TAO and age- and gender-matched healthy volunteers met the eligibility criteria for recruitment. Facial images were taken under the same light conditions. Comprehensive eyelid morphological parameters, such as palpebral fissure (PF) length, margin reflex distance (MRD), eyelid retraction distance, eyelid length, scleral area, and mid-pupil lid distance (MPLD), were automatically calculated using our deep learning-based analysis system. MRD1 and MRD2 were manually measured. Bland-Altman plots and intraclass correlation coefficients (ICCs) were used to assess the agreement between automatic and manual measurements of MRDs. The asymmetry of the eyelid contour was analyzed using the temporal:nasal ratio of the MPLD. All eyelid features were compared between TAO eyes and control eyes using the independent samples t-test. Results: A strong agreement between automatic and manual measurement was indicated. Biases of MRDs in TAO eyes and control eyes ranged from -0.01 mm [95% limits of agreement (LoA): -0.64 to 0.63 mm] to 0.09 mm (LoA: -0.46 to 0.63 mm). ICCs ranged from 0.932 to 0.980 (P<0.001). Eyelid features were significantly different between TAO eyes and control eyes, including MRD1 (4.82±1.59 vs. 2.99±0.81 mm; P<0.001), MRD2 (5.89±1.16 vs. 5.47±0.73 mm; P=0.009), upper eyelid length (UEL) (27.73±4.49 vs. 25.42±4.35 mm; P=0.002), lower eyelid length (LEL) (31.51±4.59 vs. 26.34±4.72 mm; P<0.001), and total scleral area (SATOTAL) (96.14±34.38 vs. 56.91±14.97 mm2; P<0.001). The MPLDs at all angles showed significant differences between the two groups of eyes (P=0.008 at temporal 180°; P<0.001 at other angles). The greatest temporal-nasal asymmetry appeared at 75° from the midline in TAO eyes. Conclusions: Our proposed system allowed automatic, comprehensive, and objective measurement of eyelid morphology using only facial images, which has potential application prospects in TAO. Future work with a large sample of patients that contains different TAO subsets is warranted.
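The Bland-Altman bias and 95% limits of agreement quoted above are the mean of the paired differences and that mean ± 1.96 times their standard deviation. A minimal sketch with hypothetical MRD1 values (the data are invented for illustration):

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between two measurement
    methods: bias +/- 1.96 * SD of the paired differences."""
    diff = np.asarray(auto) - np.asarray(manual)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

auto = np.array([4.8, 3.1, 5.0, 2.9])    # hypothetical MRD1 values (mm)
manual = np.array([4.7, 3.2, 4.9, 3.0])
bias, lo, hi = bland_altman(auto, manual)
```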

15.
Quant Imaging Med Surg ; 13(1): 329-338, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36620142

ABSTRACT

Background: Inferior oblique overaction (IOOA) is a common ocular motility disorder. This study aimed to propose a novel deep learning-based approach to automatically evaluate the amount of IOOA. Methods: This prospective study included 106 eyes of 72 consecutive patients attending the strabismus clinic in a tertiary referral hospital. Patients were eligible for inclusion if they were diagnosed with IOOA. IOOA was clinically graded from +1 to +4. Based on photographs in the adducted position, the height difference between the inferior corneal limbus of both eyes was manually measured using ImageJ and automatically measured by our deep learning-based image analysis system with human supervision. Correlation coefficients, Bland-Altman plots and mean absolute deviation (MAD) were analyzed between the different measurements of IOOA. Results: There were significant correlations between automated photographic measurements and clinical gradings (Kendall's tau: 0.721; 95% confidence interval: 0.652 to 0.779; P<0.001), between automated and manual photographic measurements [intraclass correlation coefficients (ICCs): 0.975; 95% confidence interval: 0.963 to 0.983; P<0.001], and between repeated automated photographic measurements (ICCs: 0.998; 95% confidence interval: 0.997 to 0.999; P<0.001). The biases and MADs were 0.10 [95% limits of agreement (LoA): -0.45 to 0.64] mm and 0.26±0.14 mm between automated and manual photographic measurements, and 0.01 (95% LoA: -0.14 to 0.16) mm and 0.07±0.04 mm between repeated automated photographic measurements, respectively. Conclusions: The automated photographic measurements of IOOA using a deep learning technique were in excellent agreement with manual photographic measurements and clinical gradings. This new approach allows objective, accurate and repeatable measurement of IOOA and could be easily implemented in clinical practice using only photographs.
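The ICCs reported here are commonly computed in the two-way random-effects, absolute-agreement, single-measurement form, ICC(2,1). A compact sketch of that computation for an n-subjects × k-raters table; the abstract does not state which ICC variant was used, so this form is an assumption:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, for an (n subjects) x (k raters) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = (np.sum((data - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters gives ICC = 1.
scores = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
icc = icc_2_1(scores)
```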

16.
Ophthalmol Sci ; 2(3): 100169, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36245755

ABSTRACT

Purpose: To automatically predict the postoperative appearance of blepharoptosis surgeries and evaluate the generated images both objectively and subjectively in a clinical setting. Design: Cross-sectional study. Participants: This study involved 970 pairs of images of 450 eyes from 362 patients undergoing blepharoptosis surgeries at our oculoplastic clinic between June 2016 and April 2021. Methods: Preoperative and postoperative facial images were used to train and test the deep learning-based postoperative appearance prediction system (POAP) consisting of 4 modules, including the data processing module (P), ocular detection module (O), analyzing module (A), and prediction module (P). Main Outcome Measures: The overall and local performance of the system were automatically quantified by the overlap ratio of eyes and by lid contour analysis using midpupil lid distances (MPLDs). Four ophthalmologists and 6 patients were invited to complete a satisfaction scale and a similarity survey with the test set of 75 pairs of images on each scale. Results: The overall performance (mean overlap ratio) was 0.858 ± 0.082. The corresponding multiple radial MPLDs showed no significant differences between the predictive results and the real samples at any angle (P > 0.05). The absolute error between the predicted marginal reflex distance-1 (MRD1) and the actual postoperative MRD1 ranged from 0.013 mm to 1.900 mm (95% within 1 mm, 80% within 0.75 mm). The participating experts and patients were "satisfied" with 268 pairs (35.7%) and "highly satisfied" with most of the outcomes (420 pairs, 56.0%). The similarity score was 9.43 ± 0.79. Conclusions: The fully automatic deep learning-based method can predict postoperative appearance for blepharoptosis surgery with high accuracy and satisfaction, thus offering the patients with blepharoptosis an opportunity to understand the expected change more clearly and to relieve anxiety. 
In addition, this system could be used to assist patients in selecting surgeons and in planning the recovery phase of daily living, and it may offer guidance for inexperienced surgeons as well.

17.
Sci Data ; 9(1): 475, 2022 08 04.
Article in English | MEDLINE | ID: mdl-35927290

ABSTRACT

Retinal vasculature provides an opportunity for direct observation of vessel morphology, which is linked to multiple clinical conditions. However, objective and quantitative interpretation of the retinal vasculature relies on precise vessel segmentation, which is time-consuming and labor-intensive. Artificial intelligence (AI) has demonstrated great promise in retinal vessel segmentation. The development and evaluation of AI-based models require large numbers of annotated retinal images. However, the public datasets that are usable for this task are scarce. In this paper, we collected a color fundus image vessel segmentation (FIVES) dataset. The FIVES dataset consists of 800 high-resolution multi-disease color fundus photographs with pixelwise manual annotation. The annotation process was standardized through crowdsourcing among medical experts. The quality of each image was also evaluated. To the best of our knowledge, this is the largest retinal vessel segmentation dataset, and we believe this work will be beneficial to the further development of retinal vessel segmentation.


Subjects
Fundus Oculi , Retinal Vessels , Algorithms , Artificial Intelligence , Crowdsourcing , Humans , Image Processing, Computer-Assisted , Retinal Vessels/anatomy & histology , Retinal Vessels/diagnostic imaging
18.
Appl Bionics Biomech ; 2022: 2629140, 2022.
Article in English | MEDLINE | ID: mdl-36032045

ABSTRACT

Objective: To analyze the application effect of Jiawei Sanyu Shengjing decoction combined with high ligation of the internal spermatic vein in male infertility patients with varicocele (VC). Methods: 106 male infertility patients with VC treated in our hospital from December 2018 to March 2019 were selected. According to the length of stay, they were divided into a control group and an observation group, with 53 cases in each group. High ligation of the internal spermatic vein was performed in both groups. On this basis, the observation group was treated with modified Sanyu Shengjing decoction, and the therapeutic effects of the two groups were compared. Results: The effective rate of 94.34% in the observation group was higher than the 79.25% in the control group (P < 0.05). After treatment, the serum index levels and sperm deformity rate in the observation group were lower than those in the control group, and the semen density and sperm activity were higher than those in the control group (P < 0.05). Conclusion: Treating male infertility with VC using modified Sanyu Shengjing decoction combined with high ligation of the internal spermatic vein can effectively increase the sperm count and semen density.

19.
Curr Eye Res ; 47(9): 1346-1353, 2022 09.
Article in English | MEDLINE | ID: mdl-35899319

ABSTRACT

PURPOSE: Clinical assessment of ocular movements is essential for the diagnosis and management of ocular motility disorders. This study aimed to propose a deep learning-based image analysis to automatically measure ocular movements based on photographs and to investigate the relationship between ocular movements and age. METHODS: 207 healthy volunteers (414 eyes) aged 5-60 years were enrolled in this study. Photographs were taken in the cardinal gaze positions. Ocular movements were manually measured based on a modified limbus test using ImageJ and automatically measured by our deep learning-based image analysis. Correlation analyses and Bland-Altman analyses were conducted to assess the agreement between manual and automated measurements. The relationship between ocular movements and age were analyzed using generalized estimating equations. RESULTS: The intraclass correlation coefficients between manual and automated measurements of six extraocular muscles ranged from 0.802 to 0.848 (P < 0.001), and the bias ranged from -0.63 mm to 0.71 mm. The average measurements were 8.62 ± 1.07 mm for superior rectus, 7.77 ± 1.24 mm for inferior oblique, 6.99 ± 1.23 mm for lateral rectus, 6.71 ± 1.22 mm for medial rectus, 6.81 ± 1.20 mm for inferior rectus, and 6.63 ± 1.37 mm for superior oblique, respectively. Ocular movements in each cardinal gaze position were negatively related to age (P < 0.05). CONCLUSIONS: The automated measurements of ocular movements using a deep learning-based approach were in excellent agreement with the manual measurements. This new approach allows objective assessment of ocular movements and shows great potential in the diagnosis and management of ocular motility disorders.


Subjects
Deep Learning , Ocular Motility Disorders , Eye Movements , Healthy Volunteers , Humans , Ocular Motility Disorders/diagnosis , Oculomotor Muscles
20.
J Healthc Eng ; 2022: 8331688, 2022.
Article in English | MEDLINE | ID: mdl-35360482

ABSTRACT

Objective: To investigate the clinical efficacy and possible mechanism of electroacupuncture in the treatment of premature ejaculation. Methods: 50 premature ejaculation patients who met the diagnostic criteria were randomly divided into 2 groups, with 25 cases in each group. The observation group was treated with electroacupuncture, and the control group was treated with Longdan Xiegan decoction. The treatment period was 4 weeks. Intravaginal ejaculation latency time (IELT), sexual satisfaction score of patients, sexual satisfaction score of partners, testosterone testing, and drug safety assessment were performed in both groups before and after treatment. Results: IELT was prolonged in both groups after treatment; the difference was statistically significant (P < 0.05). At the same time, the IELT of the observation group was significantly higher than that of the control group after treatment. Satisfaction scores of patients and spouses in both groups improved after treatment compared with before treatment; the difference was statistically significant (P < 0.05). After treatment, the satisfaction scores of patients and spouses in the observation group were higher than those in the control group, and the difference was statistically significant (P < 0.05). Before treatment, there was no significant difference in serum testosterone levels between the groups (P > 0.05). Serum testosterone levels in both groups were decreased after treatment compared with before treatment, with statistical significance (P < 0.05). After treatment, the serum testosterone level of the observation group was lower than that of the control group, and the difference was statistically significant (P < 0.05). During the treatment, the adverse reactions in each group disappeared after treatment, and no obvious abnormality was observed in the safety indicators.
Conclusion: Electroacupuncture can improve the symptoms of premature ejaculation, which may be related to the regulation of serum testosterone by acupuncture.


Subjects
Acupuncture Therapy , Premature Ejaculation , Ejaculation , Humans , Male , Premature Ejaculation/drug therapy , Testosterone/therapeutic use , Treatment Outcome