Results 1-20 of 47,193
1.
Article in English | MEDLINE | ID: mdl-39144408

ABSTRACT

Objectives: We aimed to conduct a systematic review and meta-analysis assessing the value of image-enhanced endoscopy, including blue laser imaging (BLI), linked color imaging, narrow-band imaging (NBI), and texture and color enhancement imaging, for detecting and diagnosing gastric cancer (GC) compared with white-light imaging (WLI). Methods: Studies meeting the inclusion criteria were identified through searches of the PubMed, Cochrane Library, and Japan Medical Abstracts Society databases. The pooled risk ratio for dichotomous variables was calculated using a random-effects model to compare GC detection between WLI and image-enhanced endoscopy. A random-effects model was also used to calculate the overall diagnostic performance of WLI and magnifying image-enhanced endoscopy for GC. Results: Sixteen studies met the inclusion criteria. The detection rate of GC was significantly higher with linked color imaging than with WLI (risk ratio, 2.20; 95% confidence interval [CI], 1.39-3.25; p < 0.01) with mild heterogeneity. Magnifying endoscopy with NBI (ME-NBI) yielded a pooled sensitivity, specificity, and area under the summary receiver operating characteristic curve of 0.84 (95% CI, 0.80-0.88), 0.96 (95% CI, 0.94-0.97), and 0.92, respectively. Similarly, ME-BLI showed a pooled sensitivity, specificity, and area under the curve of 0.81 (95% CI, 0.77-0.85), 0.85 (95% CI, 0.82-0.88), and 0.95, respectively. The diagnostic efficacy of ME-NBI/BLI for GC was evidently higher than that of WLI; however, significant heterogeneity remained among the NBI studies. Conclusions: Our meta-analysis showed a high detection rate for linked color imaging and high diagnostic performance of ME-NBI/BLI for GC compared with WLI.
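
The pooled risk ratio reported above comes from a random-effects model; as an illustrative sketch (not the authors' code), the DerSimonian-Laird estimator can be applied to study-level log risk ratios as follows, with the toy effect sizes and standard errors being assumptions.

```python
import numpy as np

def pooled_risk_ratio(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    w = 1.0 / se**2                           # inverse-variance (fixed-effect) weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mu_fe) ** 2)     # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)   # between-study variance
    w_re = 1.0 / (se**2 + tau2)               # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(mu), (np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu))

# Toy data: three hypothetical studies (log RR and standard errors).
rr, ci = pooled_risk_ratio([0.9, 0.7, 0.6], [0.30, 0.25, 0.40])
print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```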

2.
J Med Imaging (Bellingham) ; 12(Suppl 1): S13004, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39281664

ABSTRACT

Purpose: Chest tomosynthesis (CTS) has a longer acquisition time than chest X-ray, which increases the risk of motion artifacts in the reconstructed images. Artifacts induced by breathing motion adversely impact image quality. This study aims to reduce these artifacts by excluding projection images identified as containing breathing motion prior to the reconstruction of section images, and to assess whether this motion compensation improves overall image quality. Approach: In this study, 2969 CTS examinations were analyzed to identify examinations in which breathing motion had occurred, using a method based on localizing the diaphragm border in each projection image. A trajectory over diaphragm positions was estimated from a second-order polynomial curve fit, and projection images in which the diaphragm border deviated from the trajectory were removed before reconstruction. The image quality of motion-compensated and uncompensated examinations was evaluated against image quality criteria for anatomical structures and image artifacts in a visual grading characteristic (VGC) study. The resulting rating data were statistically analyzed using the software VGC Analyzer. Results: A total of 58 examinations were included, with breathing motion occurring either at the beginning or end (n = 17) or throughout the entire acquisition (n = 41). Overall, no significant difference in image quality or presence of motion artifacts was found between the motion-compensated and uncompensated examinations. However, motion compensation significantly improved image quality and reduced motion artifacts in cases where motion occurred at the beginning or end. In examinations where motion occurred throughout the acquisition, motion compensation led to a significant increase in ripple artifacts and noise. Conclusions: Compensating for respiratory motion in CTS by excluding projection images may improve image quality if the motion occurs mainly at the beginning or end of the examination. Otherwise, the disadvantages of excluding projections may outweigh the benefits of motion compensation.
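
The projection-selection step lends itself to a compact illustration. Below is a hedged sketch of the described approach: fit a second-order polynomial to the diaphragm border positions across projections and drop projections that deviate from the fitted trajectory. The deviation threshold and toy positions are assumptions, not values from the study.

```python
import numpy as np

def select_projections(diaphragm_pos_mm, threshold_mm=2.0):
    """Keep projections whose diaphragm border follows a smooth trajectory."""
    idx = np.arange(len(diaphragm_pos_mm))
    coeffs = np.polyfit(idx, diaphragm_pos_mm, deg=2)        # second-order fit
    residuals = np.abs(diaphragm_pos_mm - np.polyval(coeffs, idx))
    return idx[residuals <= threshold_mm]                    # indices to reconstruct from

positions = np.array([50.1, 50.3, 50.2, 53.8, 50.4, 50.5, 50.3])  # toy data (mm)
print(select_projections(positions))   # projection 3, a motion outlier, is excluded
```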

3.
Article in English | MEDLINE | ID: mdl-38746904

ABSTRACT

Image-enhanced endoscopy (IEE) has advanced the diagnosis and treatment of gastrointestinal disease. Traditional white-light imaging has limitations in detecting all gastrointestinal diseases, prompting the development of IEE. In this review, we explore the utility of IEE, including texture and color enhancement imaging and red dichromatic imaging, in pancreatobiliary (PB) diseases. IEE encompasses chromoendoscopy, optical-digital methods, and digital methods. Chromoendoscopy, using dyes such as indigo carmine, aids in delineating lesions and structures, including pancreato-/cholangio-jejunal anastomoses. Optical-digital methods such as narrow-band imaging enhance mucosal details and vessel patterns, aiding in ampullary tumor evaluation and peroral cholangioscopy. Moreover, red dichromatic imaging, with its specific color allocation, improves the visibility of thick blood vessels in deeper tissues and enhances bleeding points with different colors and see-through effects, proving beneficial in managing bleeding complications after endoscopic sphincterotomy. Color enhancement imaging, a novel digital method, enhances tissue texture, brightness, and color, improving visualization of PB structures such as PB orifices, anastomotic sites, ampullary tumors, and intraductal PB lesions. Advances in IEE hold substantial potential for improving the accuracy of PB disease diagnosis and treatment. These innovative techniques offer advantages that pave the way for enhanced clinical management of PB diseases. Further research is warranted to establish their standard clinical utility and explore new frontiers in PB disease management.

4.
J Biomed Opt ; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics can be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
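
Since the paper's point is precisely that these metrics lack consensus definitions, the sketch below shows only one common variant of each (mean signal over background standard deviation for SNR, Weber contrast for contrast); these formulas are assumptions for illustration, not the benchmarked systems' definitions.

```python
import numpy as np

def snr_db(signal_roi, background_roi):
    """One common SNR definition: mean signal / std of background, in dB."""
    return 20.0 * np.log10(np.mean(signal_roi) / np.std(background_roi))

def weber_contrast(signal_roi, background_roi):
    """Weber contrast: (S - B) / B with S, B the mean ROI intensities."""
    s, b = np.mean(signal_roi), np.mean(background_roi)
    return (s - b) / b

rng = np.random.default_rng(0)
sig = rng.normal(200.0, 5.0, size=(32, 32))   # toy fluorescent inclusion ROI
bkg = rng.normal(20.0, 4.0, size=(32, 32))    # toy background ROI
print(f"SNR = {snr_db(sig, bkg):.1f} dB, contrast = {weber_contrast(sig, bkg):.1f}")
```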


Subjects
Benchmarking; Molecular Imaging; Optical Imaging; Phantoms, Imaging; Signal-to-Noise Ratio; Molecular Imaging/methods; Molecular Imaging/standards; Optical Imaging/methods; Optical Imaging/standards; Image Processing, Computer-Assisted/methods
5.
Methods Mol Biol ; 2852: 159-170, 2025.
Article in English | MEDLINE | ID: mdl-39235743

ABSTRACT

The functional properties of biofilms are intimately related to their spatial architecture. Structural data are therefore of prime importance for dissecting the complex social and survival strategies of biofilms and, ultimately, for improving their control. Confocal laser scanning microscopy (CLSM) is the most widespread microscopic tool for deciphering biofilm structure, enabling noninvasive three-dimensional investigation of biofilm dynamics down to the single-cell scale. The emergence of fully automated high-content screening (HCS) systems, associated with large-scale image analysis, has radically amplified the flow of available biofilm structural data. In this contribution, we present an HCS-CLSM protocol used to analyze the four-dimensional structural dynamics of biofilms at high throughput. Meta-analysis of the quantitative variables extracted from HCS-CLSM will contribute to a better biological understanding of biofilm traits.


Subjects
Biofilms; Microscopy, Confocal; Biofilms/growth & development; Microscopy, Confocal/methods; Food Microbiology/methods; Imaging, Three-Dimensional/methods; Foodborne Diseases/microbiology; High-Throughput Screening Assays/methods; Image Processing, Computer-Assisted/methods
6.
Front Physiol ; 15: 1408832, 2024.
Article in English | MEDLINE | ID: mdl-39219839

ABSTRACT

Introduction: Lung image segmentation plays an important role in computer-aided pulmonary disease diagnosis and treatment. Methods: This paper explores lung CT image segmentation using generative adversarial networks. We employ a variety of generative adversarial networks and use their image-translation capability to perform segmentation: the network translates the original lung image into the segmented image. Results: The generative adversarial network-based segmentation method is tested on a real lung image dataset. Experimental results show that the proposed method outperforms the state-of-the-art method. Discussion: The generative adversarial network-based method is effective for lung image segmentation.
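
As a minimal sketch of the translation idea, the toy PyTorch generator below stands in for a trained image-to-image GAN generator; the architecture, shapes, and the 0.5 binarization threshold are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a trained translation generator: CT slice -> soft mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

generator = ToyGenerator().eval()        # in practice, load trained GAN weights
ct_slice = torch.rand(1, 1, 256, 256)    # toy lung CT slice scaled to [0, 1]
with torch.no_grad():
    soft_mask = generator(ct_slice)      # "translated" image acts as segmentation
binary_mask = (soft_mask > 0.5).float()  # final lung mask
print(binary_mask.shape)
```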

7.
Front Plant Sci ; 15: 1380306, 2024.
Article in English | MEDLINE | ID: mdl-39220010

ABSTRACT

Introduction: Individual leaves in an image are partly veiled by other leaves, which cast shadows on the leaves beneath them. To eliminate the interference of soil and leaf shadows with cotton spectra and enable reliable monitoring of cotton nitrogen content, a classification method for unmanned aerial vehicle (UAV) image pixels is proposed. Methods: In this work, green-light (550 nm) reflectance is divided into 10 levels to limit the influence of soil and leaf shadows (LS) on the cotton spectrum. The extent to which shadows affect cotton spectra can be gauged by the strength of the correlation between the vegetation index (VI) and leaf nitrogen content (LNC). Several machine learning methods were used to predict LNC from the less-disturbed VI. R-square (R2), root mean square error (RMSE), and mean absolute error (MAE) were used to evaluate model performance. Results: (i) After the spectra were preprocessed with a Gaussian filter (GF), Savitzky-Golay smoothing (SG), or their combination (GF&SG), the relationship between VI and LNC improved significantly, and the standard deviation of the datasets decreased greatly; (ii) the image pixels were classified twice in sequence: the first classification reduced the influence of soil on the VI, and the second minimized the influence of both soil and LS, further strengthening the relationship between VI and LNC; (iii) after pixel classification, the VIs at levels 2-3, 2-4, and 2-5 had correspondingly stronger relationships with LNC, with correlation coefficients (r) reaching 0.5. Combined with GF&SG preprocessing, this optimizes monitoring performance; support vector machine regression (SVMR) performed best, with R2, RMSE, and MAE of 0.86, 1.01, and 0.71, respectively. The UAV image classification technique in this study can minimize the negative effects of soil and LS on the cotton spectrum, allowing efficient and timely LNC prediction.
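
A hedged sketch of this pipeline is shown below: bin green-band reflectance into 10 levels to exclude soil and shadow pixels, compute a vegetation index on the retained pixels, and regress LNC with SVR. The band names, the NDVI choice, the retained-level range, and all data are assumptions standing in for the study's UAV imagery.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(1)
green = rng.uniform(0.0, 1.0, size=(100, 100))   # toy 550 nm reflectance band
red = rng.uniform(0.0, 1.0, size=(100, 100))
nir = rng.uniform(0.0, 1.0, size=(100, 100))

levels = np.digitize(green, bins=np.linspace(0, 1, 11))  # 10 green-light levels
retained = (levels >= 3) & (levels <= 5)                 # assumed sunlit-leaf levels
ndvi = (nir - red) / (nir + red + 1e-9)                  # vegetation index
print(f"plot-level VI = {ndvi[retained].mean():.3f}")

# Toy plot-level training data: VI feature vs. leaf nitrogen content (LNC).
X = rng.uniform(0.2, 0.8, size=(50, 1))
y = 40 * X.ravel() + rng.normal(0, 1.0, size=50)         # synthetic LNC values
model = SVR(kernel="rbf").fit(X, y)
pred = model.predict(X)
print(f"RMSE = {mean_squared_error(y, pred) ** 0.5:.2f}, "
      f"MAE = {mean_absolute_error(y, pred):.2f}")
```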

8.
Biomed Eng Lett ; 14(5): 1023-1035, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220023

ABSTRACT

Deep learning-based methods for fast target segmentation of computed tomography (CT) images have become increasingly popular. The success of current deep learning methods usually depends on large amounts of labeled data, and labeling medical data is a time-consuming and laborious task. This paper therefore aims to enhance CT image segmentation using a semi-supervised learning method. To exploit the valid information in unlabeled data, we design a semi-supervised contrastive learning network model based on entropy constraints. We use a CNN and a Transformer to capture the image's local and global feature information, respectively. In addition, the pseudo-labels generated by the teacher networks are unreliable and will degrade model performance if added directly to training; therefore, unreliable samples with high entropy values are discarded to prevent the model from extracting wrong features. In the student network, we also introduce a residual squeeze-and-excitation module to learn the connections between the channels of each layer's features, yielding better segmentation performance. We demonstrate the effectiveness of the proposed method on the COVID-19 CT public dataset, considering three evaluation metrics: DSC, HD95, and JC. Compared with several existing state-of-the-art semi-supervised methods, our method improves DSC by 2.3% and JC by 2.5%, and reduces HD95 by 1.9 mm. In summary, this paper designs a semi-supervised medical image segmentation method that fuses CNN and Transformer and utilizes an entropy-constrained contrastive learning loss, improving the utilization of unlabeled medical images.
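
The entropy-based filtering step can be sketched compactly. Below, pixels whose teacher prediction has high entropy are masked out of the pseudo-label; the percentile cutoff is an assumption, as the paper's exact thresholding rule is not given here.

```python
import torch

def filter_pseudo_labels(teacher_probs, percentile=70.0):
    """teacher_probs: (B, C, H, W) softmax output of the teacher network."""
    entropy = -(teacher_probs * torch.log(teacher_probs + 1e-8)).sum(dim=1)
    cutoff = torch.quantile(entropy, percentile / 100.0)
    keep = entropy <= cutoff                    # reliable, low-entropy pixels
    pseudo = teacher_probs.argmax(dim=1)        # hard pseudo-labels
    return pseudo, keep                         # apply loss only where keep is True

probs = torch.softmax(torch.randn(2, 4, 64, 64), dim=1)
labels, keep = filter_pseudo_labels(probs)
print(labels.shape, f"kept {keep.float().mean().item():.0%} of pixels")
```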

9.
Biomed Eng Lett ; 14(5): 1125-1135, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220033

ABSTRACT

Dual-mode optical imaging can simultaneously provide morphological and functional information. Furthermore, it can be integrated with projection mapping to display the images directly on the region of interest. This study aimed to develop a dual-mode optical projection mapping system (DOPMS) that obtains laser speckle contrast images (LSCI) and subcutaneous vein images (SVI) and projects them onto the region of interest, minimizing the spatial misalignment between the regions captured by the camera and projected by the projector. In in vitro and in vivo studies, LSCI and SVI were obtained and projected under single-mode illumination, where either the laser or the light-emitting diode (LED) was activated, and under dual-mode illumination, where the laser and LED were activated simultaneously. In addition, a fusion image (FI) of LSCI and SVI was implemented to selectively observe blood perfusion in the vein. DOPMS successfully obtained LSCI, SVI, and FI and projected them onto the identical region of interest with minimal spatial misalignment. Single-mode illumination resulted in relatively clearer, noise-free images. Dual-mode illumination introduced speckle noise into SVI and FI but enabled real-time imaging by employing LSCI, SVI, and FI simultaneously. FI may be more effective for quasi-static evaluations before and after treatment under single-mode illumination and for real-time evaluation during treatment under dual-mode illumination owing to its faster image processing, albeit with a potential tradeoff in image quality.
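
LSCI is built on the local speckle contrast K = sigma/mu computed over a small sliding window; a standard sketch follows (the 7x7 window is a common choice in the LSCI literature, not necessarily the one used in DOPMS).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Local speckle contrast K = std/mean over a sliding window."""
    raw = raw.astype(float)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw**2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
    return std / (mean + 1e-9)        # lower K indicates more flow (more blurring)

raw_speckle = np.random.default_rng(2).poisson(50, size=(256, 256))  # toy frame
K = speckle_contrast(raw_speckle)
print(f"mean contrast = {K.mean():.3f}")
```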

10.
Biomed Eng Lett ; 14(5): 1137-1146, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220031

ABSTRACT

In clinical medical scenarios, for reasons such as patient privacy, information protection, and data migration, the source-domain data are often inaccessible when domain adaptation is needed for real scenarios, and only the model pre-trained on the source domain is available. Existing solutions to this problem tend to forget the rich task experience previously learned on the source domain after adapting: the model simply overfits the target-domain data and does not learn robust features that facilitate real task decisions. We address this problem by exploring the particular application of source-free domain adaptation to medical image segmentation and propose a two-stage additive source-free adaptation framework. We generalize domain-invariant features by constraining the core pathological structure and the semantic consistency between different perspectives, and we reduce errors in the generated segmentation by locating and filtering potentially erroneous elements through Monte-Carlo uncertainty estimation. We conduct comparison experiments with other methods on a cross-device polyp segmentation dataset and a cross-modal brain tumor segmentation dataset; the results in both the target and source domains verify that the proposed method effectively solves the domain shift problem and that the model retains its performance on the source domain after learning new knowledge of the target domain. This work provides valuable exploration of additive learning on the target and source domains in the absence of source data and offers new ideas and methods for adaptation research in medical image segmentation.
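
The Monte-Carlo uncertainty step can be sketched as follows: run several stochastic forward passes (here via dropout kept active at inference) and filter out pixels whose predictions vary strongly. The toy model and the mean-variance threshold are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout2d(0.5), nn.Conv2d(8, 2, 3, padding=1))
model.train()                                  # keep dropout stochastic at inference

x = torch.rand(1, 1, 64, 64)                   # toy input image
with torch.no_grad():
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(10)])
mean_prob = probs.mean(dim=0)                  # MC-averaged prediction
uncertainty = probs.var(dim=0).sum(dim=1)      # per-pixel predictive variance
reliable = uncertainty < uncertainty.mean()    # filter out high-uncertainty pixels
segmentation = mean_prob.argmax(dim=1) * reliable
print(segmentation.shape)
```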

11.
J Med Imaging (Bellingham) ; 11(5): 054002, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220049

ABSTRACT

Purpose: Interpreting echocardiographic exams requires substantial manual interaction as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views. Approach: We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos. Results: The proposed method achieved an accuracy of 84.9% ± 0.67 for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62. Conclusion: The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.
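
Both heads admit a compact sketch: unknown-view recognition from logit activations (here a max-logit threshold, one common realization of the idea) and quality prediction by linear regression on network embeddings. The threshold, embedding size, and data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def classify_view(logits, threshold=5.0):
    """Return the predicted view class, or -1 ('unknown view') if confidence is low."""
    return int(np.argmax(logits)) if logits.max() >= threshold else -1

print(classify_view(np.array([7.2, 1.1, 0.3])))   # confident routine view -> class 0
print(classify_view(np.array([2.0, 1.8, 1.9])))   # low max logit -> unknown (-1)

rng = np.random.default_rng(3)
embeddings = rng.normal(size=(438, 128))          # toy stand-in for network embeddings
quality = rng.uniform(1, 5, size=438)             # toy expert quality ratings
reg = LinearRegression().fit(embeddings, quality) # view-quality regression head
print(reg.predict(embeddings[:2]))
```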

12.
HardwareX ; 19: e00563, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220164

ABSTRACT

Particle Image Velocimetry (PIV) is considered the gold-standard technique for flow visualization. However, its cost (at least tens of thousands of dollars) can be prohibitive in its standard form. This article presents an alternative design, leveraging off-the-shelf and open-source options for each key component: camera, laser module, optical components, tracer particles, and analysis software. Flow visualization is a crucial technique for connecting theory to practice in teaching and researching fluid mechanics. Despite the ubiquity of this field within engineering curricula, many undergraduate institutions globally forego such equipment, given the barriers to setting it up. The availability of this low-cost alternative (∼$500) that can be built in-house offers a path forward. The system was characterized by visualizing the rotational flow generated by a magnetic stirrer in a cylindrical beaker. The velocity magnitude around the stirrer bar measured by the low-cost PIV system was compared with values calculated analytically. The percent difference was 1-2% while the flow remained two-dimensional but increased as the flow developed into a more three-dimensional flow. Repeatability varied by no more than 6% between experiments. This platform holds the potential for reliable replication across institutions broadly.
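
At its core, PIV recovers displacements by cross-correlating interrogation windows between successive frames; a minimal sketch of that step (FFT-based circular cross-correlation, with the window size and synthetic shift as assumptions) is given below.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Displacement (dy, dx) of win_b relative to win_a from the correlation peak."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2

rng = np.random.default_rng(4)
frame_a = rng.random((32, 32))                          # toy particle image window
frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))   # known 2 px, 3 px shift
print(window_displacement(frame_a, frame_b))            # -> [2 3]
```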

13.
Front Psychol ; 15: 1458259, 2024.
Article in English | MEDLINE | ID: mdl-39220391

ABSTRACT

Purpose: This paper explores the relationships among body image, self-efficacy, self-esteem, and weight-loss intention in college students, offering insights to promote healthy and confident lifestyle habits. Methods: Undergraduate students from western China were selected using stratified random sampling. Data were analyzed using SPSS 19.0 and AMOS 21.0. Results: (1) Body image was significantly positively correlated with self-efficacy and self-esteem but negatively correlated with weight-loss intention. Self-efficacy was significantly positively correlated with self-esteem and negatively correlated with weight-loss intention, and self-esteem was significantly negatively correlated with weight-loss intention. (2) Body image directly affected weight-loss intention (effect size [ES] = -0.120). Self-efficacy (ES = -0.069) and self-esteem (ES = -0.119) each played a significant mediating role between body image and weight-loss intention. (3) The chained mediating role of self-efficacy and self-esteem was also significant (ES = -0.038). Conclusion: Body image affects college students' weight-loss intention directly and indirectly through the mediating roles of self-efficacy and self-esteem, including their chained mediation. In addition, self-esteem is another key factor affecting college students' weight-loss intention.
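
As a purely illustrative sketch (the study itself used AMOS for structural equation modeling), the chained indirect effect is the product of the path coefficients along body image -> self-efficacy -> self-esteem -> weight-loss intention; all data and coefficients below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 500
body_image = rng.normal(size=n)
self_efficacy = 0.5 * body_image + rng.normal(size=n)                       # path a1
self_esteem = 0.4 * self_efficacy + 0.3 * body_image + rng.normal(size=n)   # path d21
intention = (-0.2 * self_esteem - 0.1 * self_efficacy
             - 0.1 * body_image + rng.normal(size=n))                       # path b2

a1 = LinearRegression().fit(body_image[:, None], self_efficacy).coef_[0]
d21 = LinearRegression().fit(
    np.column_stack([self_efficacy, body_image]), self_esteem).coef_[0]
b2 = LinearRegression().fit(
    np.column_stack([self_esteem, self_efficacy, body_image]), intention).coef_[0]
print(f"chained indirect effect = {a1 * d21 * b2:.3f}")   # negative, as reported
```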

14.
Heliyon ; 10(16): e35698, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39220902

ABSTRACT

Existing medical image segmentation methods may consider feature extraction and information processing only in the spatial domain, lack designed interaction between frequency and spatial information, or ignore the semantic gaps between shallow and deep features, leading to inaccurate segmentation results. In this paper, we therefore propose a novel frequency selection segmentation network (FSSN), which achieves more accurate lesion segmentation by fusing local spatial features with global frequency information, designing better feature interactions, and suppressing low-correlation frequency components to mitigate semantic gaps. First, we propose a global-local feature aggregation module (GLAM) that simultaneously captures multi-scale local features in the spatial domain and exploits global frequency information in the frequency domain, achieving complementary fusion of local detail features and global frequency information. Second, we propose a feature filter module (FFM) that mitigates semantic gaps during cross-level feature fusion and lets FSSN discriminatively determine which frequency information should be preserved for accurate lesion segmentation. Finally, to make better use of local information, especially the boundary of the lesion region, we employ deformable convolution (DC) to extract pertinent features in the local range, allowing FSSN to focus better on relevant image content. Extensive experiments on two public benchmark datasets show that, compared with representative medical image segmentation methods, FSSN obtains more accurate lesion segmentation results in terms of both objective evaluation metrics and subjective visual effects, with fewer parameters and lower computational complexity.
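
The frequency-selection idea can be illustrated with a fixed radial mask standing in for FSSN's learned filtering: transform features to the frequency domain, suppress components outside a kept band, and transform back. The mask shape and radius are assumptions.

```python
import torch

def frequency_filter(feat, keep_radius=0.25):
    """feat: (B, C, H, W). Suppress frequency components outside keep_radius."""
    f = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    h, w = feat.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, h),
                            torch.linspace(-0.5, 0.5, w), indexing="ij")
    mask = (yy**2 + xx**2).sqrt() <= keep_radius       # keep a low-frequency band
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real

x = torch.rand(1, 8, 64, 64)     # toy feature map
print(frequency_filter(x).shape)
```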

15.
Article in English | MEDLINE | ID: mdl-39222427

ABSTRACT

OBJECTIVES: The purpose of this study was to generate radiographs including dentigerous cysts by applying the latest generative adversarial network (GAN; StyleGAN3) to panoramic radiography. METHODS: A total of 459 cystic lesions were selected, and 409 images were randomly assigned as training data and 50 images as test data. StyleGAN3 training was performed for 500,000 images. Fifty generated images were objectively evaluated by comparing them with 50 real images according to four metrics: Fréchet inception distance (FID), kernel inception distance (KID), precision and recall, and inception score (IS). A subjective evaluation of the generated images was performed by three specialists who compared them with the real images in a visual Turing test. RESULTS: The results of the metrics were as follows: FID, 199.28; KID, 0.14; precision, 0.0047; recall, 0.00; and IS, 2.48. The overall accuracy in the visual Turing test was 82.3%. No significant difference was found in the human scoring of root resorption. CONCLUSIONS: The images generated by StyleGAN3 were of such high quality that specialists could not distinguish them from the real images.
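
Of the reported metrics, FID has a closed form on feature statistics; the sketch below computes it with random vectors standing in for Inception-v3 activations of real and generated radiographs.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet inception distance between two sets of feature vectors."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # drop tiny imaginary numerical noise
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(6)
real = rng.normal(0.0, 1.0, size=(50, 64))   # toy activations, 50 real images
fake = rng.normal(0.3, 1.1, size=(50, 64))   # toy activations, 50 generated images
print(f"FID = {fid(real, fake):.2f}")
```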

16.
Cancer Sci ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223070

ABSTRACT

Primary malignant bone tumors, such as osteosarcoma, significantly affect the pediatric and young adult populations, necessitating early diagnosis for effective treatment. This study developed a high-performance artificial intelligence (AI) model to detect osteosarcoma from X-ray images using highly accurate annotated data to improve diagnostic accuracy at initial consultations. Traditional models trained on unannotated data have shown limited success, with sensitivities of approximately 60%-70%. In contrast, our model used a data-centric approach with annotations from an experienced oncologist, achieving a sensitivity of 95.52%, specificity of 96.21%, and an area under the curve of 0.989. The model was trained using 468 X-ray images from 31 osteosarcoma cases and 378 normal knee images with a strategy to maximize diversity in the training and validation sets. It was evaluated using an independent dataset of 268 osteosarcoma and 554 normal knee images to ensure generalizability. By applying the U-net architecture and advanced image processing techniques such as renormalization and affine transformations, our AI model outperforms existing models, reducing missed diagnoses and enhancing patient outcomes by facilitating earlier treatment. This study highlights the importance of high-quality training data and advocates a shift towards data-centric AI development in medical imaging. These insights can be extended to other rare cancers and diseases, underscoring the potential of AI in transforming diagnostic processes in oncology. The integration of this AI model into clinical workflows could support physicians in early osteosarcoma detection, thereby improving diagnostic accuracy and patient care.
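
The reported metrics follow directly from per-image predictions; a minimal sketch (toy scores standing in for the model's outputs, with class sizes matching the evaluation set) is shown below.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(7)
y_true = np.concatenate([np.ones(268), np.zeros(554)])   # osteosarcoma vs. normal
scores = np.concatenate([rng.beta(8, 2, 268), rng.beta(2, 8, 554)])  # toy outputs
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.4f}")
print(f"specificity = {tn / (tn + fp):.4f}")
print(f"AUC = {roc_auc_score(y_true, scores):.3f}")
```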

17.
Med Phys ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225652

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) image segmentation is crucial in prostate cancer radiotherapy, enabling precise delineation of the prostate gland for accurate treatment planning and delivery. However, the poor quality of CBCT images poses challenges in clinical practice, making annotation difficult due to factors such as image noise, low contrast, and organ deformation. PURPOSE: The objective of this study is to create a segmentation model for the label-free target domain (CBCT), leveraging valuable insights derived from the label-rich source domain (CT). This goal is achieved by addressing the domain gap across diverse domains through the implementation of a cross-modality medical image segmentation framework. METHODS: Our approach introduces a multi-scale domain adaptive segmentation method, performing domain adaptation simultaneously at both the image and feature levels. The primary innovation lies in a novel multi-scale anatomical regularization approach, which (i) aligns the target-domain feature space with the source-domain feature space at multiple spatial scales simultaneously, and (ii) exchanges information across different scales to fuse knowledge from multi-scale perspectives. RESULTS: Quantitative and qualitative experiments were conducted on pelvic CBCT segmentation tasks. The training dataset comprises 40 unpaired CBCT-CT images with only the CT images annotated. The validation and testing datasets consist of 5 and 10 CT images, respectively, all with annotations. The experimental results demonstrate the superior performance of our method compared to other state-of-the-art cross-modality medical image segmentation methods. The Dice similarity coefficient (DSC) for CBCT image segmentation is 74.6 ± 9.3%, and the average symmetric surface distance (ASSD) is 3.9 ± 1.8 mm. Statistical analysis confirms the statistical significance of the improvements achieved by our method. CONCLUSIONS: Our method exhibits superiority in pelvic CBCT image segmentation compared to its counterparts.
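
The two reported metrics can be sketched directly on binary masks; in this illustration the surfaces are taken as erosion boundaries and the voxel spacing is assumed isotropic at 1 mm, which need not match the study's implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def assd(a, b):
    """Average symmetric surface distance (isotropic 1 mm spacing assumed)."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b)   # distance map to B's surface
    dist_to_a = distance_transform_edt(~surf_a)   # distance map to A's surface
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True   # toy prediction
gt = np.zeros((64, 64), dtype=bool);   gt[22:42, 21:41] = True     # toy ground truth
print(f"DSC = {dice(pred, gt):.3f}, ASSD = {assd(pred, gt):.2f} mm")
```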

18.
Skin Res Technol ; 30(9): e70050, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39246259

ABSTRACT

BACKGROUND: Artificial intelligence (AI) medical image analysis shows potential for research on premature aging and skin. The purpose of this study was to explore, using AI medical image analysis, the mechanism by which Zuogui pill enhances ovarian function and repairs skin elasticity in rats with premature aging. MATERIALS AND METHODS: A premature-aging rat model was established. Zuogui pill was then administered to the prematurely aging rats, and images were acquired under an optical microscope. The image data were then analyzed by AI medical image analysis to evaluate indicators of ovarian function. RESULTS: Optical microscope image analysis showed that Zuogui pill played an active role in repairing ovarian tissue structure and increasing the number of follicles, and it also significantly increased blood progesterone levels. CONCLUSION: Most of the ZGP-induced outcomes are significantly dose-dependent.


Subjects
Aging, Premature; Artificial Intelligence; Drugs, Chinese Herbal; Animals; Female; Rats; Drugs, Chinese Herbal/pharmacology; Drugs, Chinese Herbal/administration & dosage; Mice; Ovary/drug effects; Ovary/diagnostic imaging; Rats, Sprague-Dawley; Skin Aging/drug effects; Disease Models, Animal; Skin/drug effects; Skin/diagnostic imaging; Elasticity/drug effects; Progesterone/blood; Progesterone/pharmacology; Image Processing, Computer-Assisted/methods
19.
Neural Netw ; 180: 106696, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39255633

ABSTRACT

Despite significant advances in deep clustering research, three critical limitations remain in most existing approaches. First, they often derive the clustering result by attaching a distribution-based loss to specific network layers, neglecting the potential benefits of leveraging contrastive sample-wise relationships. Second, they frequently focus on representation learning at the full-image scale, overlooking the discriminative information latent in partial image regions. Third, although some prior studies perform the learning process at multiple levels, they mostly lack the ability to exploit the interaction between different learning levels. To overcome these limitations, this paper presents a novel deep image clustering approach via Partial Information discrimination and Cross-level Interaction (PICI). Specifically, we utilize a Transformer encoder as the backbone, coupled with two types of augmentations to formulate two parallel views. The augmented samples, integrated with masked patches, are processed through the Transformer encoder to produce class tokens. Subsequently, three partial information learning modules are jointly enforced: the partial information self-discrimination (PISD) module for masked image reconstruction, the partial information contrastive discrimination (PICD) module for simultaneous instance- and cluster-level contrastive learning, and the cross-level interaction (CLI) module to ensure consistency across learning levels. Through this unified formulation, our PICI approach, to our knowledge, for the first time bridges the gap between masked image modeling and deep contrastive clustering, offering a novel pathway for enhanced representation learning and clustering. Experimental results across six image datasets demonstrate the superiority of PICI over the state-of-the-art. In particular, our approach achieves an ACC of 0.772 (0.634) on the RSOD (UC-Merced) dataset, an improvement of 29.7% (24.8%) over the best baseline. The source code is available at https://github.com/Regan-Zhang/PICI.
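
The ACC metric quoted above is the clustering accuracy after optimally matching cluster indices to class labels; a standard sketch with the Hungarian algorithm follows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: accuracy under the best one-to-one cluster-to-class mapping."""
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                          # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)    # maximize matched counts
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])            # same partition, relabeled
print(clustering_accuracy(y_true, y_pred))       # -> 1.0
```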

20.
Comput Biol Med ; 182: 109102, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255659

ABSTRACT

Cell imaging assays utilizing fluorescence stains are essential for observing sub-cellular organelles and their responses to perturbations. Immunofluorescence (IF) staining is routine in labs, but recent innovations in generative AI are challenging the need for wet-lab IF staining, especially where the availability and cost of specific fluorescence dyes are a problem. Furthermore, the staining process takes time, introduces inter- and intra-technician variability, and hinders downstream image and data analysis as well as the reusability of image data for other projects. Recent studies have shown that synthetic IF images can be generated from brightfield (BF) images using generative AI algorithms. In this study, we therefore benchmark and compare five models from three types of IF-generation backbones (CNN, GAN, and diffusion models) using a publicly available dataset. This paper not only serves as a comparative study to determine the best-performing model but also proposes a comprehensive analysis pipeline for evaluating the efficacy of generators in IF image synthesis. We highlight the potential of deep learning-based generators for IF image synthesis and discuss open issues and future research directions. Although generative AI shows promise in simplifying cell phenotyping using only BF images, further research and validation are needed to address the key challenges of model generalizability, batch effects, feature relevance, and computational cost.
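
One plausible building block of such an evaluation pipeline, shown as a sketch only: scoring a generated IF image against its wet-lab counterpart with PSNR and SSIM from scikit-image. The random arrays stand in for a real image pair, and these two metrics are illustrative choices rather than the paper's full metric set.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(8)
real_if = rng.random((256, 256)).astype(np.float32)          # ground-truth IF image
generated_if = np.clip(real_if + rng.normal(0, 0.05, real_if.shape),
                       0, 1).astype(np.float32)              # toy generator output

psnr = peak_signal_noise_ratio(real_if, generated_if, data_range=1.0)
ssim = structural_similarity(real_if, generated_if, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```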
