Results 1 - 20 of 22
1.
Sensors (Basel) ; 24(8)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38676200

ABSTRACT

In diverse research areas, such as holographic optical tweezer mechanical measurements, colloidal particle motion analysis, cell tracking, and drug delivery, the localization and analysis of particle motion are of paramount importance. Algorithms ranging from conventional numerical methods to advanced deep-learning networks have made substantial strides in particle orientation analysis. However, the need for datasets has hindered the application of deep learning in particle tracking. In this work, we describe an efficient methodology for generating synthetic datasets for this domain that remains robust and precise when applied to real-world 3D particle-tracking data. We developed a 3D real-time particle positioning network based on the CenterNet network. In our experiments, the network achieved a horizontal positioning error of 0.0478 µm and a z-axis positioning error of 0.1990 µm. It can track particles of diverse sizes near the focal plane in real time with high precision. In addition, we have made all datasets produced during this investigation publicly accessible.
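Center-point decoding of the kind CenterNet performs can be sketched as a local-maximum search over the predicted heatmap (an illustrative stand-in, not the authors' network code; `heatmap_peaks` and its threshold are assumptions):

```python
import numpy as np

def heatmap_peaks(heatmap, threshold=0.5):
    """Return (row, col, score) for local maxima above threshold.

    A minimal stand-in for CenterNet-style center decoding: a pixel is a
    peak if it is >= all 8 neighbours (a 3x3 max-pooling equivalent).
    """
    h = np.pad(heatmap, 1, constant_values=-np.inf)
    # stack the 9 shifted views of the padded map (the 3x3 neighbourhood)
    neigh = np.stack([h[i:i + heatmap.shape[0], j:j + heatmap.shape[1]]
                      for i in range(3) for j in range(3)])
    is_peak = (heatmap >= neigh.max(axis=0)) & (heatmap > threshold)
    rows, cols = np.nonzero(is_peak)
    return [(r, c, heatmap[r, c]) for r, c in zip(rows, cols)]
```

In the real network the peak coordinates would then be refined by regressed sub-pixel offsets and a z-depth head.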

2.
Int J Mol Sci ; 22(7)2021 Mar 26.
Article in English | MEDLINE | ID: mdl-33810447

ABSTRACT

Molecular spectroscopy has been widely used to identify pesticides. The main limitation of this approach is the difficulty of identifying pesticides with similar molecular structures, and when such pesticide residues occur in trace amounts and mixed states on plants, practical identification becomes very challenging. This study proposes a state-of-the-art method for the rapid identification of trace (10 mg·L-1) and multiple similar benzimidazole pesticide residues on the surface of Toona sinensis leaves, including benomyl (BNL), carbendazim (BCM), thiabendazole (TBZ), and their mixtures. The new method combines high-throughput terahertz (THz) imaging technology with a deep learning framework. To further improve model reliability beyond the THz fingerprint peaks (BNL: 0.70, 1.07, 2.20 THz; BCM: 1.16, 1.35, 2.32 THz; TBZ: 0.92, 1.24, 1.66, 1.95, 2.58 THz), we extracted the absorption spectra in the 0.2-2.2 THz range from the images as input to a deep convolutional neural network (DCNN). Compared with fuzzy Sammon clustering and four back-propagation neural network (BPNN) models (TrainCGB, TrainCGF, TrainCGP, and TrainRP), the DCNN achieved the highest prediction accuracies: 100%, 94.51%, 96.26%, 94.64%, 98.81%, 94.90%, 96.17%, and 96.99% for the control check group, BNL, BCM, TBZ, BNL + BCM, BNL + TBZ, BCM + TBZ, and BNL + BCM + TBZ, respectively. Taking advantage of THz imaging and the DCNN, image visualization of pesticide distribution and residue types on leaves was realized simultaneously. The results demonstrate that THz imaging and deep learning can be adopted for rapid sensing of trace multi-residues on leaf surfaces, which is of great significance for agriculture and food safety.
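Preparing the network input as described — cropping each absorption spectrum to the 0.2-2.2 THz window — can be sketched as follows (an illustrative helper; the function name and defaults are assumptions, not the paper's code):

```python
import numpy as np

def spectral_band(freq_thz, absorbance, lo=0.2, hi=2.2):
    """Crop an absorption spectrum to the [lo, hi] THz window
    that serves as the 1D input vector of the classifier."""
    mask = (freq_thz >= lo) & (freq_thz <= hi)
    return freq_thz[mask], absorbance[mask]
```

Each pixel of the THz image would yield one such cropped spectrum, labeled by the residue applied at that location.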


Subjects
Benzimidazoles/pharmacology; Deep Learning; Pesticide Residues/analysis; Plant Leaves; Terahertz Imaging/methods; Toona; Benzimidazoles/analysis; Carbamates/analysis; Cluster Analysis; Density Functional Theory; Food Safety; Fuzzy Logic; Neural Networks, Computer; Pesticides; Reproducibility of Results; Thiabendazole/analysis
3.
J Digit Imaging ; 32(1): 183-197, 2019 02.
Article in English | MEDLINE | ID: mdl-30187316

ABSTRACT

Ophthalmic medical images, such as optical coherence tomography (OCT) images and color fundus photographs, provide valuable information for the clinical diagnosis and treatment of ophthalmic diseases. In this paper, we introduce a software system specifically oriented to ophthalmic image processing, analysis, and visualization (OIPAV) to assist users. OIPAV is a cross-platform system built on a set of powerful and widely used toolkit libraries. Based on a plugin mechanism, the system has an extensible framework. It provides rich functionality including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization. Using OIPAV, users can easily access ophthalmic image data produced by different imaging devices, streamline workflows for processing ophthalmic images, and improve quantitative evaluations. With satisfying scalability and extensibility, the software is applicable for both ophthalmic researchers and clinicians.


Subjects
Eye Diseases/diagnostic imaging; Fluorescein Angiography/methods; Image Interpretation, Computer-Assisted/methods; Tomography, Optical Coherence/methods; Eye/diagnostic imaging; Humans
4.
Surg Endosc ; 32(6): 2958-2967, 2018 06.
Article in English | MEDLINE | ID: mdl-29602988

ABSTRACT

BACKGROUND: Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and particularly for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. METHODS: In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. RESULTS: In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In the porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR proved feasible in a male volunteer. CONCLUSIONS: Mobile, real-time, point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human settings of this study. Consequently, AR with a similar implementation is robust and accurate enough to be evaluated in clinical trials assessing accuracy and robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.
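The accuracy figures reported here (mean ± SD and 95th percentile of reprojection error) follow directly from point-wise errors; a minimal sketch (illustrative helper, not the study's code):

```python
import numpy as np

def reprojection_stats(projected, reference):
    """Mean, sample SD, and 95th percentile of the point-wise
    reprojection error (Euclidean distance, e.g. in mm)."""
    err = np.linalg.norm(projected - reference, axis=1)
    return err.mean(), err.std(ddof=1), np.percentile(err, 95)
```

Fed with the projected marker positions and their ground-truth image locations, this yields exactly the three summary statistics quoted in the abstract.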


Subjects
Imaging, Three-Dimensional; Point-of-Care Systems; Surgery, Computer-Assisted/methods; Animals; Feasibility Studies; Humans; Magnetic Resonance Imaging; Male; Models, Animal; Phantoms, Imaging; Pilot Projects; Prospective Studies; Swine; Tomography, X-Ray Computed
5.
J Med Syst ; 40(5): 122, 2016 May.
Article in English | MEDLINE | ID: mdl-27037686

ABSTRACT

Fusion of a functional image with an anatomical image provides additional diagnostic information and is widely used in diagnosis, treatment planning, and follow-up in oncology. The functional image is a low-resolution pseudo-color image representing the uptake of a radioactive tracer and carries the important metabolic information, whereas the anatomical image is a high-resolution grayscale image that gives structural details. The fused image should contain all the anatomical details without any change in the functional content. This is achieved through fusion in a de-correlated color model, and the choice of color model has a great impact on the fusion outcome. In the present work, the suitability of different color models for functional and anatomical image fusion is studied. After converting the functional image into a de-correlated color model, its achromatic component is fused with the anatomical image using the proposed nonsubsampled shearlet transform (NSST)-based image fusion algorithm to obtain a new achromatic component containing all the anatomical details. This new achromatic component and the original chromatic channels of the functional image are converted to RGB format to obtain the fused functional and anatomical image. Fusion is performed in different color models, using several cases of SPECT-MRI images. Based on visual and quantitative analysis of the fused images, the best color model for the stated purpose is determined.
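The core idea — fusing only the achromatic component in a de-correlated color model while leaving the chroma untouched — can be sketched in YCbCr, with a simple max rule standing in for the paper's NSST-based algorithm (illustrative only; BT.601 matrix, signed chroma):

```python
import numpy as np

# BT.601 RGB -> YCbCr conversion matrix (channels in [0, 1], chroma signed);
# one example of the de-correlated color models compared in the study.
RGB2YCC = np.array([[ 0.299,     0.587,     0.114   ],
                    [-0.168736, -0.331264,  0.5     ],
                    [ 0.5,      -0.418688, -0.081312]])

def fuse_functional_anatomical(functional_rgb, anatomical_gray):
    """Fuse the achromatic (Y) channel only; a pixel-wise max rule
    stands in for the NSST fusion of the real method."""
    ycc = functional_rgb @ RGB2YCC.T
    ycc[..., 0] = np.maximum(ycc[..., 0], anatomical_gray)
    return ycc  # chroma channels unchanged; invert to RGB for display
```

Because only Y is modified, the tracer-uptake color coding of the functional image survives the fusion unchanged.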


Subjects
Color; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Tomography, Emission-Computed, Single-Photon/methods; Algorithms; Humans
6.
J Digit Imaging ; 28(6): 636-45, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25804842

ABSTRACT

Finding optimal compression levels for diagnostic imaging is not an easy task. Significant compressibility variations exist between modalities, but little is known about compressibility variations within modalities. Moreover, compressibility is affected by acquisition parameters. In this study, we evaluate the compressibility of thousands of computed tomography (CT) slices acquired with different slice thicknesses, exposures, reconstruction filters, slice collimations, and pitches. We demonstrate that exposure, slice thickness, and reconstruction filters have a significant impact on image compressibility due to increased high-frequency content and a lower acquisition signal-to-noise ratio. We also show that compression ratio is not a good fidelity measure; guidelines based on compression ratio should therefore ideally be replaced with other compression measures better correlated with image fidelity. Value-of-interest (VOI) transformations also affect the perception of quality: we studied their effect and found significant masking of artifacts when the window is widened.
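The link between noise, high-frequency content, and compressibility can be illustrated with a lossless codec on synthetic slices (a toy sketch using deflate, not the study's DICOM codecs; the images and values are made up):

```python
import zlib
import numpy as np

def compression_ratio(img):
    """Lossless deflate ratio as a crude compressibility probe."""
    raw = img.astype(np.int16).tobytes()
    return len(raw) / len(zlib.compress(raw))

rng = np.random.default_rng(0)
smooth = np.full((256, 256), 40)                       # homogeneous tissue
noisy = smooth + rng.integers(-20, 21, smooth.shape)   # low-SNR acquisition
```

Here `compression_ratio(smooth)` is far larger than `compression_ratio(noisy)`: the added noise raises the high-frequency content and destroys compressibility, mirroring the effect the study measures for low-exposure, thin-slice, sharp-filter acquisitions.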


Subjects
Data Compression/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Artifacts; Humans; Phantoms, Imaging; Signal-To-Noise Ratio
7.
Article in English | MEDLINE | ID: mdl-38720815

ABSTRACT

3D echocardiography (3DE) is the standard modality for visualizing heart valves and their surrounding anatomical structures. Commercial cardiovascular ultrasound systems commonly offer a set of parameters that allow clinical users to modify, in real time, visual aspects of the information contained in the echocardiogram. To our knowledge, no work has yet demonstrated whether the methods currently used by commercial platforms are optimal. In addition, current platforms are limited in adjusting the visibility of anatomical structures, for example in reducing information that obstructs anatomical structures without removing essential clinical information. To overcome this, the present work proposes a new method for 3DE visualization based on "focus + context" (F+C), a concept that aims to present a detailed region of interest while preserving a less detailed overview of the surrounding context. The new method allows clinical users to modify parameter values within a region of interest independently of the adjustment of the contextual information. To validate this new method, a user study was conducted among clinical experts. As part of the study, the experts adjusted parameters for five echocardiograms of patients with complete atrioventricular canal defect (CAVC) using both the method conventionally used by commercial platforms and the proposed F+C-based method. The results support the relevance of the F+C-based method for visualizing 3DE of CAVC patients: users chose significantly different parameter values with the F+C-based method.

8.
Nan Fang Yi Ke Da Xue Xue Bao ; 43(6): 985-993, 2023 Jun 20.
Article in Chinese | MEDLINE | ID: mdl-37439171

ABSTRACT

OBJECTIVE: To propose a tissue-aware contrast enhancement network (T-ACEnet) for CT image enhancement and validate its accuracy in CT image organ segmentation tasks. METHODS: The original CT images were mapped to generate low dynamic grayscale images with lung and soft tissue window contrasts, and the supervised sub-network learned to recognize the optimal window width and level setting of the lung and abdominal soft tissues via the lung mask. The self-supervised sub-network then used the extreme value suppression loss function to preserve more organ edge structure information. The images generated by the T-ACEnet were fed into the segmentation network to segment multiple abdominal organs. RESULTS: The images obtained by T-ACEnet were capable of providing more window setting information in a single image, which allowed the physicians to conduct preliminary screening of the lesions. Compared with the suboptimal methods, T-ACE images achieved improvements by 0.51, 0.26, 0.10, and 14.14 in SSIM, QABF, VIFF, and PSNR metrics, respectively, with a reduced MSE by an order of magnitude. When T-ACE images were used as input for segmentation networks, the organ segmentation accuracy could be effectively improved without changing the model as compared with the original CT images. All the 5 segmentation quantitative indices were improved, with the maximum improvement of 4.16%. CONCLUSION: The T-ACEnet can perceptually improve the contrast of organ tissues and provide more comprehensive and continuous diagnostic information, and the T-ACE images generated using this method can significantly improve the performance of organ segmentation tasks.
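The window width/level mapping that the supervised sub-network learns can be illustrated with the standard linear windowing transform (an illustrative sketch, not the paper's network; the lung and soft-tissue settings shown are typical textbook values):

```python
import numpy as np

def apply_window(hu, level, width):
    """Map Hounsfield units to [0, 1] display grayscale for a given
    window level/width (e.g. lung: level=-600, width=1500;
    abdominal soft tissue: level=40, width=400)."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)
```

T-ACEnet's contribution is, in effect, to learn a tissue-aware version of this mapping so that lung and soft-tissue contrast coexist in a single output image.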


Subjects
Image Enhancement; Learning; Tomography, X-Ray Computed
9.
Front Oncol ; 12: 921607, 2022.
Article in English | MEDLINE | ID: mdl-36267969

ABSTRACT

Purpose: The aim of this study is to develop an augmented reality (AR)-assisted radiotherapy positioning system based on HoloLens 2 and to evaluate the feasibility and accuracy of this method in the clinical environment. Methods: The obtained simulated computed tomography (CT) images of an "ISO cube", a cube phantom, and an anthropomorphic phantom were reconstructed into three-dimensional models and imported into the HoloLens 2. On the basis of the Vuforia marker attached to the "ISO cube" placed at the isocentric position of the linear accelerator, the correlation between virtual and real space was established. First, the optimal conditions to minimize the deviation between virtual and real objects were explored with a cube phantom under different conditions. Then, the anthropomorphic phantom-based positioning was tested under the optimal conditions, and the positioning errors were evaluated with cone-beam CT. Results: Under normal light intensity, with registration and tracking angles of 0° and a distance of 40 cm, the deviation reached a minimum of 1.4 ± 0.3 mm. The program would not run without light. The hologram drift caused by light change, camera occlusion, and head movement was 0.9 ± 0.7 mm, 1.0 ± 0.6 mm, and 1.5 ± 0.9 mm, respectively. The anthropomorphic phantom-based positioning errors were 3.1 ± 1.9 mm, 2.4 ± 2.5 mm, and 4.6 ± 2.8 mm in the X (lateral), Y (vertical), and Z (longitudinal) axes, respectively, and the angle deviation of Rtn was 0.26 ± 0.14°. Conclusion: AR-assisted radiotherapy positioning based on HoloLens 2 is a feasible method with certain advantages, such as intuitive visual guidance, radiation-free position verification, and intelligent interaction. Hardware and software upgrades are expected to further improve accuracy and meet clinical requirements.

10.
J Med Imaging (Bellingham) ; 9(Suppl 1): 012206, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36225968

ABSTRACT

Purpose: Among the conferences comprising the Medical Imaging Symposium is the MI104 conference currently titled Image-Guided Procedures, Robotic Interventions, and Modeling, although its name has evolved through at least nine iterations over the last 30 years. Here, we discuss the important role that this forum has presented for researchers in the field during this time. Approach: The origins of the conference are traced from its roots in Image Capture and Display in the late 1980s, and some of the major themes for which the conference and its proceedings have provided a valuable forum are highlighted. Results: These major themes include image display/visualization, surgical tracking/navigation, surgical robotics, interventional imaging, image registration, and modeling. Exceptional work from the conference is highlighted by summarizing keynote lectures, the top 50 most downloaded proceedings papers over the last 30 years, the most downloaded paper each year, and the papers earning student paper and young scientist awards. Conclusions: Looking forward and considering the burgeoning technologies, algorithms, and markets related to image-guided and robot-assisted interventions, we anticipate growth and ever increasing quality of the conference as well as increased interaction with sister conferences within the symposium.

11.
Front Rehabil Sci ; 3: 806114, 2022.
Article in English | MEDLINE | ID: mdl-36189032

ABSTRACT

Currently, there is neither a standardized mode for documenting phantom sensations and phantom limb pain, nor for visualizing them as perceived by patients. We have therefore created a tool that allows for both, as well as for quantifying the patient's visible and invisible body image. A first version provides the principal functions: (1) adapting a 3D avatar for self-identification of the patient; (2) modeling the shape of the phantom limb; (3) adjusting the position of the phantom limb; (4) drawing pain and cramps directly onto the avatar; and (5) quantifying their respective intensities. Our tool (C.A.L.A.) was evaluated with 33 occupational therapists, physiotherapists, and other medical staff. Participants were presented with two cases in which the appearance and position of the phantom had to be modeled and pain and cramps had to be drawn. The usability of the software was evaluated using the System Usability Scale, and its functional range was evaluated using a self-developed questionnaire and semi-structured interviews. In addition, our tool was evaluated with 22 patients with limb amputations. For each patient, the body image as well as phantom sensations and pain were modeled to evaluate the software's functional scope. The accuracy of the created body image was evaluated using a self-developed questionnaire and semi-structured interview. Additionally, pain sensation was assessed using the SF-McGill Pain Questionnaire. The System Usability Scale reached a level of 81%, indicating high usability, although observing the participants identified several operational difficulties. While most participants considered the provided functions useful, the semi-structured interviews revealed the need for an improved pain documentation component. In conclusion, our tool allows for an accurate visualization of phantom limbs and phantom limb sensations. It can be used as both a descriptive and a quantitative documentation tool for analyzing and monitoring phantom limbs, and can thus help bridge the gap between the therapist's conception and the patient's perception. Based on the collected requirements, an improved version with extended functionality will be developed.
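The usability score reported above follows the standard System Usability Scale formula, which can be sketched as follows (illustrative; item scoring per Brooke's original 10-item scale):

```python
def sus_score(responses):
    """System Usability Scale: 10 Likert items (1-5); odd items are
    positively worded, even items negatively; total scaled to 0-100."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5
```

A mean score of 81, as reported for C.A.L.A., sits comfortably above the commonly cited "good usability" threshold of 68.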

12.
Appl Plant Sci ; 9(5)2021 May.
Article in English | MEDLINE | ID: mdl-34141498

ABSTRACT

PREMISE: Polishing entire stem and root samples is an effective method for studying their anatomy; however, polishing fresh samples to preserve woods with soft tissues or barks is challenging given that soft tissues shrink when dried. We propose sanding fresh or liquid-preserved samples under water as an alternative, given that it preserves all tissues in an intact and clear state. METHODS AND RESULTS: By manually grinding the surface of the samples under water using three ascending grits of waterproof sandpapers, an excellent polished sanded surface is obtained. The wood swarf goes into the water without clogging the cell lumina, rendering the surfaces adequate for cell visualization and description. We show results in palms, liana stems, roots, and wood blocks. CONCLUSIONS: Using this simple, inexpensive, rapid technique, it is possible to polish either fresh, dry, or liquid-preserved woody plant samples, preserving the integrity of both the soft and hard tissues and allowing for detailed observations of the stems and roots.

13.
Diagnostics (Basel) ; 11(9)2021 Aug 28.
Article in English | MEDLINE | ID: mdl-34573904

ABSTRACT

Drug use disorders caused by illicit drug use are significant contributors to the global burden of disease, and it is vital to detect people with drug use disorders (PDUD) early. However, primary care clinics and emergency departments lack simple, effective tools for screening PDUD. This study proposes a novel method to detect PDUD using facial images. Various experiments were designed to obtain a convolutional neural network (CNN) model by transfer learning based on a large-scale dataset (9870 images from PDUD and 19,567 images from the general population (GP)). Our results show that the model achieved 84.68% accuracy, 87.93% sensitivity, and 83.01% specificity on this dataset. To verify its effectiveness, the model was evaluated on external datasets based on real scenarios, where it still achieved high performance (accuracy > 83.69%, specificity > 90.10%, sensitivity > 80.00%). Our results also show differences between PDUD and GP in different facial areas: compared with GP, the facial features of PDUD were mainly concentrated in the left cheek, right cheek, and nose areas (p < 0.001), which also hints at a relationship between mechanisms of drug action and changes in facial tissues. This is the first study to apply a CNN model to screen PDUD in clinical practice and the first attempt to quantitatively analyze the facial features of PDUD. The model could be quickly integrated into existing clinical workflows and medical care to provide screening capabilities.
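The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts; a minimal sketch (the counts in the test are made up for illustration, not the study's data):

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix
    counts (positives = PDUD, negatives = general population)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the PDUD class
    specificity = tn / (tn + fp)   # recall on the GP class
    return accuracy, sensitivity, specificity
```

For a screening tool, the high external-dataset specificity (> 90%) matters most: it bounds the rate of false alarms among the general population.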

14.
Curr Med Imaging ; 16(6): 653-668, 2020.
Article in English | MEDLINE | ID: mdl-32723236

ABSTRACT

BACKGROUND: Image reconstruction is the mathematical process that converts the signals obtained from the scanning machine into an image. The reconstructed image plays a fundamental role in surgical planning and in medical research. DISCUSSION: This paper introduces the first comprehensive survey of the literature on disease-related medical image reconstruction, presenting a categorical study of the techniques and analyzing the advantages and disadvantages of each. Images obtained by various imaging modalities, including MRI, CT, CTA, stereo radiography, and light field microscopy, are covered. The techniques are also compared on the basis of reconstruction approach, imaging modality and visualization, disease, metrics for 3D reconstruction accuracy, dataset, and execution time. CONCLUSION: The survey assesses the suitable reconstruction technique for each organ, draws general conclusions, and discusses future directions.


Subjects
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Algorithms; Humans; Magnetic Resonance Imaging; Radiography; Tomography, X-Ray Computed
15.
Comput Methods Programs Biomed ; 179: 104983, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31443854

ABSTRACT

BACKGROUND AND OBJECTIVE: Digital scanners are being increasingly adopted in anatomical pathology, but there is still no standardized whole slide image (WSI) format. This translates into a need for interoperability and knowledge representation so that clinical information is shareable and computable. This work describes a robust solution, called Visilab Viewer, able to interact and work with any WSI based on the DICOM standard. METHODS: Visilab Viewer is a web platform developed and integrated alongside a proposed web architecture following the DICOM definition. A specific module prepares the information of the pyramid structure proposed in DICOM. The same structure is used by a second module that aggregates in the browser cache the tiles or frames adjacent to the current user's viewport, with the aim of achieving fast and fluid navigation over the tissue slide. This solution was tested and compared with three different publicly available web viewers on 10 WSIs. RESULTS: A quantitative assessment was performed based on the average load time per frame together with the number of fully loaded frames. Kruskal-Wallis and Dunn tests were used to compare the latency results of each web viewer and to rank them. Additionally, a qualitative evaluation was performed by 6 pathologists based on speed and quality of zooming, panning, and usability. The proposed viewer obtained the best performance in both assessments. The entire proposed architecture was tested in the 2nd worldwide DICOM Connectathon, with successful results with all participating scanner vendors. CONCLUSIONS: The online tool allows users to navigate and obtain a correct visualization of the samples, avoiding any restriction of format and localization. The two strategic modules reduce the time to display the slide and therefore offer high fluidity and usability. The web platform manages not only visualization with the developed web viewer but also the insertion, manipulation, and generation of new DICOM elements. Visilab Viewer can successfully exchange DICOM data. Connectathons are the ultimate interoperability tests and are therefore required to guarantee that solutions such as Visilab Viewer and its architecture can successfully exchange data following the DICOM standard. Accompanying demo video. (Link to Youtube video.)
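The cache-warming strategy of the second module — prefetching the tiles adjacent to the current viewport — can be sketched as follows (a hypothetical helper, not Visilab Viewer's actual code; the name and parameters are assumptions):

```python
def tiles_to_prefetch(row, col, n_rows, n_cols, margin=1):
    """Indices of the pyramid-level tiles surrounding the tile currently
    in the viewport, to be warmed into the browser cache so that
    panning in any direction hits already-loaded frames."""
    return [(r, c)
            for r in range(row - margin, row + margin + 1)
            for c in range(col - margin, col + margin + 1)
            if 0 <= r < n_rows and 0 <= c < n_cols and (r, c) != (row, col)]
```

With `margin=1` this yields at most the 8 neighbouring tiles, clipped at the slide borders; a larger margin trades bandwidth for smoother navigation.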


Subjects
Internet; Software; Telepathology/statistics & numerical data; Biopsy, Fine-Needle/statistics & numerical data; Cytological Techniques/statistics & numerical data; Humans; Image Interpretation, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/statistics & numerical data; Telepathology/methods
16.
Med Phys ; 45(6): 2583-2594, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29659023

ABSTRACT

PURPOSE: Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. METHODS: The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. RESULTS: The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. CONCLUSIONS: The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures.
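The registration objective described here — minimizing the mean-squared image values at the forward-projected model points — can be sketched with a pinhole projection model (an illustrative sketch, not the authors' implementation; the stent appears dark in fluoroscopy, so low image values at the projected points mean good alignment):

```python
import numpy as np

def forward_project(points_3d, P):
    """Project Nx3 points with a 3x4 projection matrix (pinhole model)."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))]) @ P.T
    return homo[:, :2] / homo[:, 2:3]

def registration_cost(image, points_3d, P):
    """Mean-squared image value at the forward-projected model points;
    minimising it over the model parameters pulls the point cloud
    onto the dark stent silhouette."""
    uv = np.round(forward_project(points_3d, P)).astype(int)
    return float(np.mean(image[uv[:, 1], uv[:, 0]] ** 2))
```

In the paper this cost is evaluated on both fluoroscopic views and minimized iteratively over the rigid and expansion parameters of the stent model.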


Subjects
Algorithms; Cardiac Imaging Techniques/methods; Fluoroscopy/methods; Heart Valve Prosthesis; Imaging, Three-Dimensional/methods; Aortic Valve/diagnostic imaging; Cardiac Imaging Techniques/instrumentation; Computer Simulation; Fluoroscopy/instrumentation; Humans; Imaging, Three-Dimensional/instrumentation; Models, Anatomic; Models, Theoretical; Motion; Phantoms, Imaging; X-Rays
17.
Methods Mol Biol ; 1683: 131-148, 2018.
Article in English | MEDLINE | ID: mdl-29082491

ABSTRACT

Data analysis and management in high content screening (HCS) has progressed significantly in the past 10 years. The analysis of the large volume of data generated in HCS experiments represents a significant challenge and is currently a bottleneck in many screening projects. In most screening laboratories, HCS has become a standard technology applied routinely to various applications from target identification to hit identification to lead optimization. An HCS data management and analysis infrastructure shared by several research groups can allow efficient use of existing IT resources and ensures company-wide standards for data quality and result generation. This chapter outlines typical HCS workflows and presents IT infrastructure requirements for multi-well plate-based HCS.


Subjects
High-Throughput Screening Assays; Image Processing, Computer-Assisted; Information Storage and Retrieval; Molecular Imaging; Database Management Systems; Drug Discovery/methods; Humans; Molecular Imaging/methods; Software; User-Computer Interface; Workflow
18.
Article in Chinese | WPRIM | ID: wpr-987012

ABSTRACT

OBJECTIVE: To propose a tissue-aware contrast enhancement network (T-ACEnet) for CT image enhancement and validate its accuracy in CT image organ segmentation tasks. METHODS: The original CT images were mapped to generate low dynamic grayscale images with lung and soft tissue window contrasts, and the supervised sub-network learned to recognize the optimal window width and level setting of the lung and abdominal soft tissues via the lung mask. The self-supervised sub-network then used the extreme value suppression loss function to preserve more organ edge structure information. The images generated by the T-ACEnet were fed into the segmentation network to segment multiple abdominal organs. RESULTS: The images obtained by T-ACEnet were capable of providing more window setting information in a single image, which allowed the physicians to conduct preliminary screening of the lesions. Compared with the suboptimal methods, T-ACE images achieved improvements by 0.51, 0.26, 0.10, and 14.14 in SSIM, QABF, VIFF, and PSNR metrics, respectively, with a reduced MSE by an order of magnitude. When T-ACE images were used as input for segmentation networks, the organ segmentation accuracy could be effectively improved without changing the model as compared with the original CT images. All the 5 segmentation quantitative indices were improved, with the maximum improvement of 4.16%. CONCLUSION: The T-ACEnet can perceptually improve the contrast of organ tissues and provide more comprehensive and continuous diagnostic information, and the T-ACE images generated using this method can significantly improve the performance of organ segmentation tasks.


Subjects
Learning; Image Enhancement; Tomography, X-Ray Computed
19.
Article in English | MEDLINE | ID: mdl-32733117

ABSTRACT

Biomedical research and clinical diagnosis would benefit greatly from full-volume determinations of anatomical phenotype. Comprehensive tools for morphological phenotyping are central to the emerging field of phenomics, which requires high-throughput, systematic, accurate, and reproducible data collection from organisms affected by genetic, disease, or environmental variables. Theoretically, complete anatomical phenotyping requires the assessment of every cell type in the whole organism, but this ideal is presently untenable due to the lack of an unbiased 3D imaging method that allows histopathological assessment of any cell type despite optical opacity. Histopathology, the current clinical standard for diagnostic phenotyping, involves the microscopic study of tissue sections to assess qualitative aspects of tissue architecture, disease mechanisms, and physiological state. However, quantitative features of tissue architecture, such as cellular composition and cell counts in tissue volumes, can only be approximated because of characteristics of tissue sectioning, including incomplete sampling and the constraints of 2D imaging of 5-micron-thick tissue slabs. We have used a small vertebrate organism, the zebrafish, to test the potential of microCT for systematic macroscopic and microscopic morphological phenotyping. While cell resolution is routinely achieved with methods such as light sheet fluorescence microscopy and optical tomography, these methods do not provide the pancellular perspective characteristic of histology and are constrained by the limited penetration of visible light through pigmented and opaque specimens, as is characteristic of zebrafish juveniles. Here, we provide an example of neuroanatomy that can be studied by microCT of stained soft tissue at 1.43-micron isotropic voxel resolution. We conclude that synchrotron microCT is a form of 3D imaging that may be adopted for more reproducible, large-scale morphological phenotyping of optically opaque tissues. Further development of soft-tissue microCT and of visualization and quantitative tools will enhance its utility.

20.
Trends Cell Biol ; 25(12): 749-759, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26603943

ABSTRACT

Although it is widely appreciated that cells migrate in a variety of diverse environments in vivo, we are only now beginning to use experimental workflows that yield images with sufficient spatiotemporal resolution to study the molecular processes governing cell migration in 3D environments. Since cell migration is a dynamic process, it is usually studied via microscopy, but 3D movies of 3D processes are difficult to interpret by visual inspection. In this review, we discuss the technologies required to study the diversity of 3D cell migration modes with a focus on the visualization and computational analysis tools needed to study cell migration quantitatively at a level comparable to the analyses performed today on cells crawling on flat substrates.


Subjects
Cell Movement/physiology; Imaging, Three-Dimensional/methods; Animals; Cell Surface Extensions/physiology; Humans