ABSTRACT
Of the known risk factors for glaucoma, elevated intraocular pressure (IOP) is the primary one. The key site of IOP regulation lies in the conventional aqueous humor outflow pathway, predominantly the trabecular meshwork (TM). Studies have demonstrated that outflow is not uniform around the circumference of the eye but highly segmental, with regions of relatively high flow (HF), intermediate or medium flow (IF), and low or no flow (LF). Herein we present protocols that we use to study flow segmentation in the conventional outflow pathway, focusing mostly on human eyes; these methods are quite similar for nonhuman primates and other species. The studies are mostly conducted using ex vivo intact globes or perfused anterior segment organ culture. One potential therapy for reducing IOP in those with elevated IOP, and thereby slowing the progression of glaucomatous optic nerve damage, would be to increase the HF or IF proportions and reduce the LF proportion.
Subjects
Aqueous Humor, Intraocular Pressure, Trabecular Meshwork, Aqueous Humor/metabolism, Trabecular Meshwork/metabolism, Intraocular Pressure/physiology, Humans, Animals, Glaucoma/metabolism, Glaucoma/pathology, Organ Culture Techniques/methods
ABSTRACT
Purpose: Pupillary instability is a known risk factor for complications in cataract surgery. This study aims to develop and validate an innovative and reliable computational framework for the automated assessment of pupil morphologic changes during the various phases of cataract surgery. Design: Retrospective surgical video analysis. Subjects: Two hundred forty complete surgical video recordings, among which 190 surgeries were conducted without the use of pupil expansion devices (PEDs) and 50 were performed with the use of a PED. Methods: The proposed framework consists of 3 stages: feature extraction, deep learning (DL)-based anatomy recognition, and obstruction (OB) detection/compensation. In the first stage, surgical video frames undergo noise reduction using a tensor-based wavelet feature extraction method. In the second stage, DL-based segmentation models are trained and employed to segment the pupil, limbus, and palpebral fissure. In the third stage, obstructed visualization of the pupil is detected and compensated for using a DL-based algorithm. A dataset of 5700 intraoperative video frames across 190 cataract surgeries in the BigCat database was collected for validating algorithm performance. Main Outcome Measures: The pupil analysis framework was assessed on the basis of segmentation performance for both obstructed and unobstructed pupils. Classification performance of models utilizing the segmented pupil time series to predict surgeon use of a PED was also assessed. Results: An architecture based on the Feature Pyramid Network model with a Visual Geometry Group 16 backbone, integrated with the adaptive wavelet tensor feature extraction method, demonstrated the highest performance in anatomy segmentation, with a Dice coefficient of 96.52%. Incorporation of an OB compensation algorithm improved performance further (Dice 96.82%).
Downstream analysis of framework output enabled the development of a Support Vector Machine-based classifier that could predict surgeon usage of a PED prior to its placement with 96.67% accuracy and area under the curve of 99.44%. Conclusions: The experimental results demonstrate that the proposed framework (1) provides high accuracy in pupil analysis compared with human-annotated ground truth, (2) substantially outperforms isolated use of a DL segmentation model, and (3) can enable downstream analytics with clinically valuable predictive capacity. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
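The downstream classification step of the abstract above can be illustrated with a small sketch: summary features computed from a pre-placement pupil-diameter time series, of the kind a Support Vector Machine could consume. The feature set, function names, and values here are hypothetical illustrations, not the authors' actual ones.

```python
# Minimal sketch: summary features of a pupil-diameter time series that a
# downstream classifier (e.g., an SVM) could use to predict PED use.
# Feature set and data are invented for illustration.

def pupil_features(diameters_mm, fps=30.0):
    """Summarize a pupil diameter time series (mm) sampled at fps frames/s."""
    n = len(diameters_mm)
    mean = sum(diameters_mm) / n
    var = sum((d - mean) ** 2 for d in diameters_mm) / n
    # Fastest constriction between consecutive frames, in mm/s.
    max_constriction = max(
        (a - b) * fps for a, b in zip(diameters_mm, diameters_mm[1:])
    )
    return {"mean": mean, "std": var ** 0.5,
            "min": min(diameters_mm),
            "max_constriction_rate": max_constriction}

series = [6.1, 6.0, 5.6, 5.2, 5.1, 5.0]   # hypothetical diameters (mm)
feats = pupil_features(series, fps=2.0)
print(feats["min"])                        # → 5.0
```

In the actual framework such features would be fed, per surgery, to a trained classifier; the sketch only shows the feature-extraction side.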
ABSTRACT
Animal movement plays a key role in many ecological processes and has a direct influence on an individual's fitness at several scales of analysis (i.e., next-step, subdiel, day-by-day, seasonal). This highlights the need to dissect movement behavior at different spatio-temporal scales and develop hierarchical movement tools for generating realistic tracks to supplement existing single-temporal-scale simulators. In reality, animal movement paths are a concatenation of fundamental movement elements (FuMEs: e.g., a step or wing flap), but these are not generally extractable from a relocation time-series track (e.g., sequential GPS fixes), from which step-length (SL, aka velocity) and turning-angle (TA) time series can be extracted. For short, fixed-length segments of track, we generate their SL and TA statistics (e.g., means, standard deviations, correlations) to obtain segment-specific vectors that can be clustered into different types. We use the centroids of these clusters to obtain a set of statistical movement elements (StaMEs; e.g., directed fast movement versus random slow movement elements) that we use as a basis for analyzing and simulating movement tracks. Our novel concept is that sequences of StaMEs provide a basis for constructing and fitting step-selection kernels at the scale of fixed-length canonical activity modes (CAMs): short fixed-length sequences of interpretable activity such as dithering, ambling, directed walking, or running. Beyond this, variable-length pure or characteristic mixtures of CAMs can be interpreted as behavioral activity modes (BAMs), such as gathering resources (a sequence of dithering and walking StaMEs) or beelining (a sequence of fast directed-walk StaMEs interspersed with vigilance and navigation stops). Here we formulate a multi-modal, step-selection kernel simulation framework, and construct a 2-mode movement simulator (Numerus ANIMOVER_1) using Numerus RAMP technology.
These RAMPs run as stand alone applications: they require no coding but only the input of selected parameter values. They can also be used in R programming environments as virtual R packages. We illustrate our methods for extracting StaMEs from both ANIMOVER_1 simulated data and empirical data from two barn owls (Tyto alba) in the Harod Valley, Israel. Overall, our new bottom-up approach to path segmentation allows us to both dissect real movement tracks and generate realistic synthetic ones, thereby providing a general tool for testing hypothesis in movement ecology and simulating animal movement in diverse contexts such as evaluating an individual's response to landscape changes, release of an individual into a novel environment, or identifying when individuals are sick or unusually stressed.
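The segment-statistics-and-clustering step behind StaMEs can be sketched as follows: split a relocation track into fixed-length segments, compute per-segment step-length statistics, and cluster the resulting vectors, with cluster centroids acting as the StaMEs. Segment length, feature choice (step-length mean and standard deviation only), and k are assumptions of this sketch; the paper's actual vectors also include turning-angle statistics and correlations.

```python
# Sketch of StaME extraction: fixed-length track segments -> per-segment
# step-length statistics -> k-means centroids as movement elements.
import math

def segment_stats(track, seg_len=5):
    """track: list of (x, y) fixes. Returns (mean SL, sd SL) per segment."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    out = []
    for i in range(0, len(steps) - seg_len + 1, seg_len):
        seg = steps[i:i + seg_len]
        m = sum(seg) / seg_len
        sd = (sum((s - m) ** 2 for s in seg) / seg_len) ** 0.5
        out.append((m, sd))
    return out

def kmeans(points, k, iters=20):
    """Tiny k-means; the centroids play the role of StaMEs."""
    cents = points[::max(1, len(points) // k)][:k]  # spread deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, cents[c]))
            groups[j].append(p)
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents

# Toy track: 10 short steps (slow mode) then 10 unit steps (fast mode).
slow = [(0.1 * i, 0.0) for i in range(11)]
fast = [(1.0 + i, 0.0) for i in range(1, 11)]
track = slow + fast
stames = kmeans(segment_stats(track, 5), k=2)   # one slow, one fast element
```

On this toy track the two centroids recover the two movement modes (mean step lengths near 0.1 and 1.0), mirroring the "directed fast versus random slow" distinction in the text.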
ABSTRACT
Purpose: GM1-gangliosidosis (GM1) leads to extensive neurodegenerative changes and atrophy that preclude the use of automated MRI segmentation techniques for generating brain volumetrics. We developed a standardized segmentation protocol for brain MRIs of patients with type II GM1 and then assessed the inter- and intra-rater reliability of this methodology. The volumetric data may be used as a biomarker of disease burden and progression, and standardized methodology may support research into the natural history of the disease, which is currently lacking in the literature. Approach: Twenty-five brain MRIs from 22 type II GM1 patients were included in this study, of which 8 were of the late-infantile subtype and 14 of the juvenile subtype. The following structures were segmented by two rating teams on a slice-by-slice basis: whole brain, ventricles, cerebellum, lentiform nucleus, thalamus, corpus callosum, and caudate nucleus. The inter- and intra-rater reliability of the segmentation method was assessed with an intraclass correlation coefficient as well as Sorensen-Dice and Jaccard coefficients. Results: Based on the Sorensen-Dice and Jaccard coefficients, the inter- and intra-rater reliability of the segmentation method was significantly better for the juvenile patients compared to the late-infantile patients (p < 0.01). In addition, the agreement between the two rater teams and within each team can be considered good, with all p-values < 0.05. Conclusions: The standardized segmentation approach described here has good inter- and intra-rater reliability and may provide greater accuracy and reproducibility for neuromorphological studies in this group of patients and help to further expand our understanding of the natural history of this disease.
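The two overlap coefficients used above for rater agreement can be sketched on binary masks flattened to sets of voxel indices; the example masks are invented.

```python
# Sorensen-Dice and Jaccard overlap between two raters' binary masks,
# represented as sets of voxel indices. Example masks are invented.

def dice(a, b):
    """Sorensen-Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

rater1 = {1, 2, 3, 4}            # voxels labeled by rater team 1
rater2 = {2, 3, 4, 5}            # voxels labeled by rater team 2
print(dice(rater1, rater2))      # → 0.75
print(jaccard(rater1, rater2))   # → 0.6
```

The two scores are monotonically related (Dice = 2J / (1 + J)), which is why studies such as this one report them together as complementary views of the same overlap.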
ABSTRACT
Objective: The primary aim of this investigation was to devise an intelligent approach for interpreting and measuring the spatial orientation of semicircular canals based on cranial MRI. The ultimate objective is to employ this intelligent method to construct a precise mathematical model that accurately represents the spatial orientation of the semicircular canals. Methods: Using a dataset of 115 cranial MRI scans, this study employed the nnDetection deep learning algorithm to perform automated segmentation of the semicircular canals and the eyeballs (left and right). The center points of each semicircular canal were organized into an ordered structure using point characteristic analysis. Subsequently, a point-by-point plane fit was performed along these centerlines, and the normal vector of the semicircular canals was computed using the singular value decomposition method and calibrated to a standard spatial coordinate system whose transverse planes were the top of the common crus and the bottom of the eyeballs. Results: The nnDetection target recognition segmentation algorithm achieved Dice values of 0.9585 and 0.9663. The direction angles of the unit normal vectors for the left anterior, lateral, and posterior semicircular canal planes were [80.19°, 124.32°, 36.08°], [169.88°, 100.04°, 91.32°], and [79.33°, 130.63°, 137.4°], respectively. For the right side, the angles were [79.03°, 125.41°, 142.42°], [171.45°, 98.53°, 89.43°], and [80.12°, 132.42°, 44.11°], respectively. Conclusion: This study successfully achieved real-time automated understanding and measurement of the spatial orientation of semicircular canals, providing a solid foundation for personalized diagnosis and treatment optimization of vestibular diseases. It also establishes essential tools and a theoretical basis for future research into vestibular function and related diseases.
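The plane-fit step described above admits a compact sketch: after centering the ordered center points of a canal, the right singular vector associated with the smallest singular value spans the direction of least variance, i.e., the best-fit plane normal. The input points below are invented, and this is a sketch of the general technique, not the study's code.

```python
# Best-fit plane normal of a 3D point cloud via singular value decomposition.
import numpy as np

def plane_normal(points):
    """points: (N, 3) array. Returns a unit normal of the best-fit plane."""
    centered = points - points.mean(axis=0)
    # Rows of vt are right singular vectors, ordered by decreasing singular
    # value; the last row spans the least-variance direction (the normal).
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Noiseless example: points lying in the plane z = 0 -> normal is ±(0, 0, 1).
pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0], [0, -1.0, 0]])
n = plane_normal(pts)
angles = np.degrees(np.arccos(n))   # direction angles w.r.t. x, y, z axes
```

The direction angles reported in the Results correspond to exactly this kind of `arccos` of the unit normal's components, after calibration to the standard coordinate system.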
ABSTRACT
Background: As differentiating between lipomas and atypical lipomatous tumors (ALTs) based on imaging is challenging and requires biopsies, radiomics has been proposed to aid the diagnosis. This study aimed to externally and prospectively validate a radiomics model differentiating between lipomas and ALTs on MRI in three large, multi-center cohorts, and to extend it with automatic and minimally interactive segmentation methods to increase clinical feasibility. Methods: Three study cohorts were formed: two for external validation, containing data from medical centers in the United States (US) collected from 2008 until 2018 and the United Kingdom (UK) collected from 2011 until 2017, and one for prospective validation, consisting of data collected from 2020 until 2021 in the Netherlands (NL). Patient characteristics, MDM2 amplification status, and MRI scans were collected. An automatic segmentation method was developed to segment all tumors on T1-weighted MRI scans of the validation cohorts. Segmentations were subsequently quality scored. In case of insufficient quality, an interactive segmentation method was used. Radiomics performance was evaluated for all cohorts and compared to two radiologists. Findings: The validation cohorts included 150 (54% ALT), 208 (37% ALT), and 86 (28% ALT) patients from the US, UK, and NL, respectively. Of the 444 cases, 78% were automatically segmented. For 22%, interactive segmentation was necessary due to insufficient quality, with only 3% of all patients requiring manual adjustment. External validation resulted in an AUC of 0.74 (95% CI: 0.66, 0.82) in US data and 0.86 (0.80, 0.92) in UK data. Prospective validation resulted in an AUC of 0.89 (0.83, 0.96). The radiomics model performed similarly to the two radiologists (US: 0.79 and 0.76, UK: 0.86 and 0.86, NL: 0.82 and 0.85).
Interpretation: The radiomics model, extended with automatic and minimally interactive segmentation methods, accurately differentiated between lipomas and ALTs in two large, multi-center external cohorts and in prospective validation, performing similarly to expert radiologists and possibly limiting the need for invasive diagnostics. Funding: Hanarth fonds.
ABSTRACT
The CIDACC dataset was created to determine the cell population of the microalga Chlorella vulgaris during cultivation. Chlorella vulgaris has diverse applications, including use as a food supplement, biofuel production, and pollutant removal. High-resolution images were collected using a microscope and annotated, focusing on the creation of computer vision and machine learning models for automatic Chlorella cell detection, counting, and size and geometry estimation. The dataset comprises 628 images, organized into hierarchical folders for easy access. Detailed segmentation masks and bounding boxes were generated using external tools, enhancing the dataset's utility. The dataset's efficacy was demonstrated through preliminary experiments using deep learning architectures such as object detection and localization algorithms, as well as image segmentation algorithms, achieving high precision and accuracy. This dataset is a valuable tool for advancing computer vision applications in microalgae research and other related fields. The dataset is particularly challenging due to its dynamic nature and the complex correlations it presents across various application domains, including cell analysis in medical research. Its intricacies not only push the boundaries of current computer vision algorithms but also offer significant potential for advancements in diverse fields such as biomedical imaging, environmental monitoring, and biotechnological innovations.
ABSTRACT
Colonoscopy is widely recognized as the most effective method for the detection of colon polyps, which is crucial for early screening of colorectal cancer. Polyp identification and segmentation in colonoscopy images require specialized medical knowledge and are often labor-intensive and expensive. Deep learning provides an intelligent and efficient approach for polyp segmentation. However, the variability in polyp size and the heterogeneity of polyp boundaries and interiors pose challenges for accurate segmentation. Currently, Transformer-based methods have become a mainstream trend for polyp segmentation, but they tend to overlook local details due to the inherent characteristics of Transformers, leading to inferior results. Moreover, the computational burden brought by self-attention mechanisms hinders the practical application of these models. To address these issues, we propose a novel CNN-Transformer hybrid model for polyp segmentation (CTHP). CTHP combines the strengths of CNNs, which excel at modeling local information, and Transformers, which excel at modeling global semantics, to enhance segmentation accuracy. We decompose the self-attention computation over the entire feature map into the width and height directions, significantly improving computational efficiency. Additionally, we design a new information propagation module and introduce additional positional bias coefficients during the attention computation process, which reduces the dispersal of information introduced by deep and mixed feature fusion in the Transformer. Extensive experimental results demonstrate that our proposed model achieves state-of-the-art performance on multiple benchmark datasets for polyp segmentation. Furthermore, cross-domain generalization experiments show that our model exhibits excellent generalization performance.
Subjects
Colonic Polyps, Colonoscopy, Deep Learning, Humans, Colonic Polyps/pathology, Colonic Polyps/diagnostic imaging, Colonoscopy/methods, Colorectal Neoplasms/pathology, Colorectal Neoplasms/diagnostic imaging, Neural Networks, Computer, Image Processing, Computer-Assisted/methods, Algorithms
ABSTRACT
Urochloa grasses are widely used forages in the Neotropics and are gaining importance in other regions due to their role in meeting the increasing global demand for sustainable agricultural practices. High-throughput phenotyping (HTP) is important for accelerating Urochloa breeding programs focused on improving forage and seed yield. While RGB imaging has been used for HTP of vegetative traits, the assessment of phenological stages and seed yield using image analysis remains unexplored in this genus. This work presents a dataset of 2,400 high-resolution RGB images of 200 Urochloa hybrid genotypes, captured over seven months and covering both vegetative and reproductive stages. Images were manually labelled as vegetative or reproductive, and a subset of 255 reproductive stage images were annotated to identify 22,340 individual racemes. This dataset enables the development of machine learning and deep learning models for automated phenological stage classification and raceme identification, facilitating HTP and accelerated breeding of Urochloa spp. hybrids with high seed yield potential.
ABSTRACT
BACKGROUND: Current methods for identifying blood vessels in digital images typically involve training neural networks on pixel-wise annotated data. However, manually outlining whole vessel trees in images tends to be very costly. One approach for reducing the amount of manual annotation is to pre-train networks on artificially generated vessel images. Recent pre-training approaches focus on generating proper artificial geometries for the vessels, while the appearance of the vessels is defined using general statistics of the real samples or generative networks requiring an additional training procedure to be defined. In contrast, we propose a methodology for generating blood vessels with realistic textures extracted directly from manually annotated vessel segments from real samples. The method allows the generation of artificial images having blood vessels with similar geometry and texture to the real samples using only a handful of manually annotated vessels. METHODS: The first step of the method is the manual annotation of the borders of a small vessel segment, which takes only a few seconds. The annotation is then used for creating a reference image containing the texture of the vessel, called a texture map. A procedure is then defined to allow texture maps to be placed on top of any smooth curve using a piecewise linear transformation. Artificial images are then created by generating a set of vessel geometries using Bézier curves and assigning vessel texture maps to the curves. RESULTS: The method is validated on a fluorescence microscopy (CORTEX) and a fundus photography (DRIVE) dataset. We show that manually annotating only 0.03% of the vessels in the CORTEX dataset allows pre-training a network to reach, on average, a Dice score of 0.87 ± 0.02, which is close to the baseline score of 0.92 obtained when all vessels of the training split of the dataset are annotated. 
For the DRIVE dataset, on average, a Dice score of 0.74 ± 0.02 is obtained by annotating only 0.29% of the vessels, which is also close to the baseline Dice score of 0.81 obtained when all vessels are annotated. CONCLUSION: The proposed method can be used for disentangling the geometry and texture of blood vessels, which allows a significant improvement of network pre-training performance when compared to other pre-training methods commonly used in the literature.
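The geometry half of the method, artificial vessel centerlines generated from Bézier curves, can be sketched as below. The control points are invented, and the texture-map assignment step (placing an annotated texture along the curve via a piecewise linear transformation) is not shown.

```python
# Sketch: sample points along a cubic Bezier curve to serve as an
# artificial vessel centerline. Control points are invented.

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Return n points on the cubic Bezier curve with 4 control points."""
    pts = []
    for i in range(n):
        t = i / (n - 1)
        # Bernstein basis polynomials of degree 3.
        c0 = (1 - t) ** 3
        c1 = 3 * (1 - t) ** 2 * t
        c2 = 3 * (1 - t) * t ** 2
        c3 = t ** 3
        pts.append(tuple(c0 * a + c1 * b + c2 * c + c3 * d
                         for a, b, c, d in zip(p0, p1, p2, p3)))
    return pts

curve = cubic_bezier((0, 0), (30, 80), (70, -20), (100, 40))
print(curve[0], curve[-1])   # → (0.0, 0.0) (100.0, 40.0)
```

The curve interpolates its first and last control points and is smooth in between, which is what makes Bézier curves a convenient generator of plausible vessel geometries onto which texture maps can then be warped.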
ABSTRACT
Volumetric assessment of edema due to anasarca can help monitor the progression of diseases such as kidney, liver or heart failure. The ability to measure edema non-invasively by automatic segmentation from abdominal CT scans may be of clinical importance. The current state-of-the-art method for edema segmentation using intensity priors is susceptible to false positives or under-segmentation errors. The application of modern supervised deep learning methods for 3D edema segmentation is limited due to challenges in manual annotation of edema. In the absence of accurate 3D annotations of edema, we propose a weakly supervised learning method that uses edema segmentations produced by intensity priors as pseudo-labels, along with pseudo-labels of muscle, subcutaneous and visceral adipose tissues for context, to produce more refined segmentations with demonstrably lower segmentation errors. The proposed method employs nnU-Nets in multiple stages to produce the final edema segmentation. The results demonstrate the potential of weakly supervised learning using edema and tissue pseudo-labels in improved quantification of edema for clinical applications.
ABSTRACT
BACKGROUND: 3D printing has a wide range of applications and has brought significant change to many medical fields. However, ensuring quality assurance (QA) is essential for patient safety and requires a QA program that encompasses the entire production process. This process begins with imaging and continues with segmentation, the conversion of Digital Imaging and Communications in Medicine (DICOM) data into virtual 3D models. Since segmentation is highly influenced by manual intervention, the influence of the user's background on segmentation accuracy should be thoroughly investigated. METHODS: Seventeen computed tomography (CT) scans of the pelvis with physiological bony structures were identified, anonymized, and exported as DICOM data sets, and the pelvic bones were segmented by four observers with different backgrounds. Landmarks were measured on the DICOM images and in the segmentations. Intraclass correlation coefficients (ICCs) were calculated to assess inter-observer agreement, and the trueness of the segmentation results was analyzed by comparing the DICOM landmark measurements with the measurements of the segmentation results. The correlation between segmentation trueness and segmentation time was also analyzed. RESULTS: The lower limits of the 95% confidence intervals of the ICCs for the seven landmarks analyzed ranged from 0.511 to 0.986. The distance between the iliac crests showed the highest agreement between observers, while the distance between the ischial tuberosities showed the lowest. The distance between the upper edge of the symphysis and the promontory showed the lowest deviation between DICOM measurements and segmentation measurements (mean deviations < 1 mm), while the intertuberous distance showed the highest deviation (mean deviations 14.5-18.2 mm).
CONCLUSIONS: Investigators with diverse backgrounds in segmentation and varying experience with slice images achieved pelvic bone segmentations with landmark measurements of mostly high agreement in a setup with high realism. In contrast, high variability was observed in the segmentation of the coccyx. In general, interobserver agreement was high, but due to measurement inaccuracies, landmark-based approaches cannot conclusively show that segmentation accuracy is within a clinically tolerable range of 2 mm for the pelvis. If the segmentation is performed by a very inexperienced user, the result should be reviewed critically by the clinician in charge.
ABSTRACT
Introduction: Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing mortality and improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation, and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in the procedure room. Methods: This study presents a complete validation framework, and we compare several methodologies for each of the polyp characterization tasks. Results: Results show that the majority of the approaches are able to provide good performance for the detection and segmentation tasks, but that there is room for improvement regarding polyp classification. Discussion: While studies show promising results in assisting with polyp detection and segmentation tasks, further research should be done on the classification task to obtain results reliable enough to assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically ready assisting methods.
ABSTRACT
Accurate segmentation of medical images is critical for generating patient-specific models suitable for computational analyses, particularly in the context of transcatheter aortic valve implantation (TAVI). This study aimed to quantify the accuracy of the segmentation process from medical images of TAVI patients to understand the uncertainty in patient-specific geometries. We also quantified discrepancies between actual and CT-derived diameter measurements due to artifacts and intra-observer variability. Segmentation accuracy was assessed using both synthetic phantom models and patient-specific data. The impact of voxelization and CT scanner resolution on segmentation accuracy was evaluated, while the intersection over union (IoU) metric was used to compare the consistency of different segmentation methodologies. The voxelization process introduced a marginal error (<1%) in phantom models relative to CAD models. CT scanner resolution impacted segmented model accuracy only after a 7.5-fold increase in voxel size compared to the baseline medical image. IoU analysis revealed higher segmentation accuracy for calcification (93.4 ± 3.1%) compared to the aortic wall (85.4 ± 8.4%) and native valve leaflets (75.5 ± 6.3%). Discrepancies in transcatheter heart valve (THV) diameter measurements highlighted a ~5% error due to metallic artifacts, with variability among observers and at different THV heights. Errors due to voxel size, segmentation methodologies, and CT-related artifacts can impact the reliability of patient-specific geometries and ultimately the computational predictions used to assess clinical outcomes and enhance decision-making. This study underscores the importance of accurate segmentation and its standardization for patient-specific modeling of TAVI simulations.
ABSTRACT
A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), were created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, resulting in masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second case, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to remove the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.
Subjects
Crystallization, Deep Learning, Crystallization/methods, Crystallography, X-Ray/methods, Proteins/chemistry, Image Processing, Computer-Assisted/methods, Synchrotrons, Automation, Software
ABSTRACT
Accurate segmentation of the temporomandibular joint (TMJ) from cone beam CT (CBCT) images holds significant clinical value for diagnosing temporomandibular joint osteoarthrosis (TMJOA) and related conditions. Convolutional neural network-based medical image segmentation methods have achieved state-of-the-art performance in various segmentation tasks. However, 3D medical image segmentation requires substantial global context and rich spatial semantic information, demanding much more GPU memory and computational resources. To address these challenges, we propose a novel network, MVEL-Net (Multi-view Ensemble Learning Network), for TMJ CBCT image segmentation. By resampling images along three dimensions, we generate multiple weak learners with different spatial semantic information. A subsequent strong learning network effectively integrates the outputs from these weak learners to achieve more accurate segmentation results. We evaluated our network model using a clinical dataset comprising 88 subjects with TMJ CBCT images. The average Dice similarity coefficient (DSC) was 0.9817 ± 0.0049, the average surface distance was 0.0540 ± 0.0179 mm, and the 95% Hausdorff distance was 0.1743 ± 0.0550 mm. Our proposed MVEL-Net demonstrates excellent segmentation performance on the TMJ from CBCT images while using less GPU memory than other 3D networks. The effectiveness of this method in capturing spatial context could be leveraged for tasks like organ segmentation from volumetric scans, which may facilitate wider adoption of AI-based solutions for automated analysis of 3D medical images.
ABSTRACT
Manual segmentation is an essential tool in the researcher's technical arsenal. It is a frequent practice necessary for image analysis in many protocols, especially in neuroimaging and comparative brain anatomy. With the emergence of studies focusing on alternative animal models, manual segmentation procedures play a critical role. Nevertheless, this critical task is often assigned to students, a process that, unfortunately, tends to be time-consuming and repetitive. Well-conducted and well-described segmentation procedures can guide novice and even expert operators and enhance the internal and external validity of research, making it possible to harmonize studies and facilitate data sharing. Furthermore, recent advances in neuroimaging, such as ex vivo imaging or ultra-high-field MRI, enable new acquisition modalities and the identification of minute structures that are barely visible with typical approaches. In this context of increasingly detailed and multimodal brain studies, reflecting on methodology is relevant and necessary. Because it is crucial to implement good practices both in manual segmentation per se and in the description of segmentation procedures in research papers, we propose a general roadmap for optimizing the technique, the process, and the reporting of manual segmentation. For each of these, the relevant elements of the literature have been collected and cited. The article is accompanied by a checklist that the reader can use to verify that the critical steps are being followed.
ABSTRACT
This study modifies the U-Net architecture for pixel-based segmentation to automatically classify lesions in laryngeal endoscopic images. The advanced U-Net incorporates five-level encoders and decoders, with an autoencoder layer to derive latent vectors representing the image characteristics. To enhance performance, a Wasserstein generative adversarial network (WGAN) was implemented to address common issues, such as mode collapse and gradient explosion, found in traditional GANs. The dataset consisted of 8171 images labeled with polygons in seven colors. Evaluation metrics, including the F1 score and intersection over union, revealed that benign tumors were detected with lower accuracy compared to other lesions, while cancers achieved notably high accuracy. The model demonstrated an overall accuracy rate of 99%. This enhanced U-Net model shows strong potential in improving cancer detection, reducing diagnostic errors, and enhancing early diagnosis in medical applications.
ABSTRACT
Cyclic fluorescence microscopy enables multiple targets to be detected simultaneously. This, in turn, has deepened our understanding of tissue composition, cell-to-cell interactions, and cell signaling. Unfortunately, analysis of these datasets can be time-prohibitive due to the sheer volume of data. In this paper, we present CycloNET, a computational pipeline tailored for analyzing raw fluorescent images obtained through cyclic immunofluorescence. The automated pipeline pre-processes raw image files, quickly corrects for translation errors between imaging cycles, and leverages a pre-trained neural network to segment individual cells and generate single-cell molecular profiles. We applied CycloNET to a dataset of 22 human samples from head and neck squamous cell carcinoma patients and trained a neural network to segment immune cells. CycloNET efficiently processed a large-scale dataset (17 fields of view per cycle and 13 staining cycles per specimen) in 10 min, delivering insights at the single-cell resolution and facilitating the identification of rare immune cell clusters. We expect that this rapid pipeline will serve as a powerful tool to understand complex biological systems at the cellular level, with the potential to facilitate breakthroughs in areas such as developmental biology, disease pathology, and personalized medicine.
Subjects
Deep Learning, Image Processing, Computer-Assisted, Microscopy, Fluorescence, Humans, Microscopy, Fluorescence/methods, Image Processing, Computer-Assisted/methods, Single-Cell Analysis/methods, Neural Networks, Computer, Squamous Cell Carcinoma of Head and Neck/pathology, Head and Neck Neoplasms/pathology
ABSTRACT
Background: Identifying different groups of customers and their preferences and needs enables countries to gain a competitive advantage in the medical tourism market. We aimed to segment medical tourists from West Asian countries seeking medical services in Iran. Methods: This cross-sectional study was conducted on 596 medical tourists who sought medical services in Iran in 2021. Data were collected using a valid questionnaire. Segmentation was performed based on medical tourism attributes (medical, destination, and tourism attributes), using the Ward's and K-means cluster analysis methods. The evaluation and profiling of the segments were conducted using discriminant analysis, chi-square, and one-way ANOVA tests. Results: Our study divided the market into five segments: health seekers (3.8%), health and destination seekers (8.9%), tourism seekers (17.8%), infrastructure seekers (10.23%), and perfectionists (59.45%). In all segments, the health attributes were of high importance. The perfectionist segment registered the highest score on all three attributes (more than 5 out of 6). Conclusion: Improving health attributes and offering luxurious medical services can be the main strategy for Iran to attract the most medical tourists and achieve a good position in this marketplace. The implication of this study is policymaking for targeting the most profitable segment of this marketplace.