Results 1 - 20 of 27
1.
Sci Rep ; 14(1): 7768, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565548

ABSTRACT

Repeatability of measurements from image analytics is difficult to achieve because of the heterogeneity and complexity of cell samples, the need for exact microscope stage positioning, and variations in slide thickness. We present a method to define and use a reference focal plane that provides repeatable measurements with very high accuracy, by relying on control beads as reference material and a convolutional neural network focused on the control bead images. Previously we defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features. This paper both generalizes and improves on that previous work. First, we refine the definition of the REFP by fitting a cubic spline to describe the relationship between the distance from a bead's center and pixel intensity, and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another. Instead, we apply a convolutional regression neural network (ResNet 18), trained on cropped bead images, that generalizes to multiple microscopes. Our ResNet 18 network predicts the location of the REFP from a single inferenced image acquisition that can be taken anywhere within a wide range of focal planes and exposure times. We describe the training strategies and hyperparameter optimization of the ResNet 18 that achieve a high prediction accuracy, with the uncertainty for every image tested falling within the microscope repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of this methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only 6 beads per image.
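The cubic-spline refinement of the radial intensity profile can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the authors' code; the actual REFP definition also pools information across experiments, exposures, and fields of view.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def radial_profile_spline(radii, intensities):
    # Fit pixel intensity as a smooth cubic spline in the distance
    # from the bead's center (CubicSpline needs increasing abscissae).
    order = np.argsort(radii)
    return CubicSpline(radii[order], intensities[order])

# Synthetic bead-like profile: a bright edge ring near r = 6 px.
r = np.linspace(0.0, 10.0, 50)
intensity = 200.0 * np.exp(-((r - 6.0) ** 2) / 4.0)
spline = radial_profile_spline(r, intensity)
```

The fitted spline interpolates intensity at arbitrary radii, so bead profiles acquired at different focal positions can be compared on a common footing.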

2.
PLoS One ; 19(2): e0298446, 2024.
Article in English | MEDLINE | ID: mdl-38377138

ABSTRACT

To facilitate the characterization of unlabeled induced pluripotent stem cells (iPSCs) during culture and expansion, we developed an AI pipeline for nuclear segmentation and mitosis detection from phase contrast images of individual cells within iPSC colonies. The analysis uses a 2D convolutional neural network (U-Net) plus a 3D U-Net applied on time lapse images to detect and segment nuclei, mitotic events, and daughter nuclei to enable tracking of large numbers of individual cells over long times in culture. The analysis uses fluorescence data to train models for segmenting nuclei in phase contrast images. The use of classical image processing routines to segment fluorescent nuclei precludes the need for manual annotation. We optimize and evaluate the accuracy of automated annotation to assure the reliability of the training. The model is generalizable in that it performs well on different datasets with an average F1 score of 0.94, on cells at different densities, and on cells from different pluripotent cell lines. The method allows us to assess, in a non-invasive manner, rates of mitosis and cell division which serve as indicators of cell state and cell health. We assess these parameters in up to hundreds of thousands of cells in culture for more than 36 hours, at different locations in the colonies, and as a function of excitation light exposure.
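The reported average F1 score of 0.94 combines detection precision and recall. A minimal sketch of the metric (the counts below are hypothetical, not the paper's evaluation data):

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall for nucleus detection:
    # tp = true positives, fp = false positives, fn = false negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```

With 94 correct detections, 6 spurious ones, and 6 misses, both precision and recall are 0.94, matching the paper's reported average.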


Subjects
Induced Pluripotent Stem Cells , Reproducibility of Results , Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Cell Line
3.
J Microsc ; 283(3): 243-258, 2021 09.
Article in English | MEDLINE | ID: mdl-34115371

ABSTRACT

Trypan blue dye exclusion-based cell viability measurements are highly dependent upon image quality and consistency. In order to make measurements repeatable, one must be able to reliably capture images at a consistent focal plane, and with signal-to-noise ratio within appropriate limits to support proper execution of image analysis routines. Imaging chambers and imaging systems used for trypan blue analysis can be inconsistent or can drift over time, leading to a need to assure the acquisition of images prior to automated image analysis. Although cell-based autofocus techniques can be applied, the heterogeneity and complexity of the cell samples can make it difficult to assure the effectiveness, repeatability and accuracy of the routine for each measurement. Instead of auto-focusing on cells in our images, we add control beads to the images, and use them to repeatedly return to a reference focal plane. We use bead image features that have stable profiles across a wide range of focal values and exposure levels. We created a predictive model based on image quality features computed over reference datasets. Because the beads have little variation, we can determine the reference plane from bead image features computed over a single-shot image and can reproducibly return to that reference plane with each sample. The achieved accuracy (over 95%) is within the limits of the actuator repeatability. We demonstrate that a small number of beads (less than 3 beads per image) is needed to achieve this accuracy. We have also developed an open-source Graphical User Interface called Bead Benchmarking-Focus And Intensity Tool (BB-FAIT) to implement these methods for a semi-automated cell viability analyser.


It is critical for the manufacturing and release of living cell-based therapies to determine the viability, the ratio of living cells to the total number of cells (live and dead), in the therapy. Dead cells can be a safety concern for the patient, and dosing is often based on the number of living cells, which are the active ingredient of the drug product. Currently, the most common approach to evaluating cell viability is based on the staining of cell samples with the trypan blue marker of cell membrane integrity: a loss in cell membrane integrity with cell death allows the dye into the cell, which can be seen using brightfield microscopy. To classify cells as live/dead, the brightness of the cells is evaluated: cells with bright centres are considered live, while those with dark centres are considered dead. Unfortunately, this approach of staining, imaging and classification is very sensitive to image acquisition settings, including image focus and brightness. This paper introduces a method to establish the required image quality for image-based viability analysis, providing a tool to return to image acquisition settings that will ensure image quality even when there is variability from sample to sample. In this method, polymeric beads are added to each cell sample prior to cell viability analysis. Using image processing, we extract key features from the beads in the image, such as the sharpness of the bead edges. The image features of the cells can vary significantly from sample to sample and under different cell conditions, but image features of beads have proved to be consistent across samples. We are thus able to collect reference datasets quantifying bead features over a wide range of image acquisition settings (brightness and focus), allowing us to establish a reference focal plane for image acquisition for any cell sample based on bead features.
We show that with as few as three beads per image, the reference focal plane can be found from a single acquisition of bead image data over a wide range of focus and brightness settings, allowing users to consistently acquire images for cell viability that meet pre-defined quality requirements.
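A bead-edge sharpness feature of the kind used to locate the reference focal plane can be sketched as a mean squared gradient magnitude. This is an illustrative stand-in with synthetic images; the paper's actual bead image features differ:

```python
import numpy as np

def edge_sharpness(image):
    # Mean squared gradient magnitude: large when edges are in focus.
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

in_focus = np.zeros((32, 32))
in_focus[:, 16:] = 1.0                                    # sharp edge
defocused = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # smeared edge
```

Because a defocused edge spreads the same intensity change over many pixels, its squared-gradient score drops, giving a monotone focus signal to drive the stage back to the reference plane.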


Subjects
Image Processing, Computer-Assisted , Trypan Blue , Signal-To-Noise Ratio
4.
Opt Express ; 29(2): 1788-1804, 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33726385

ABSTRACT

A reconstruction algorithm for partially coherent x-ray computed tomography (XCT) including Fresnel diffraction is developed and applied to an optical fiber. The algorithm is applicable to a high-resolution tube-based laboratory-scale x-ray tomography instrument. The computing time is only a few times longer than the projective counterpart. The algorithm is used to reconstruct, with projections and diffraction, a tilt series acquired at the micrometer scale of a graded-index optical fiber using maximum likelihood and a Bayesian method based on the work of Bouman and Sauer. The inclusion of Fresnel diffraction removes some reconstruction artifacts and use of a Bayesian prior probability distribution removes others, resulting in a substantially more accurate reconstruction.

5.
Article in English | MEDLINE | ID: mdl-34121825

ABSTRACT

Using a unique data collection, we are able to study the detection of dense geometric objects in image data where object density, clarity, and size vary. The data is a large set of black and white images of scatterplots, taken from journals reporting thermophysical property data of metal systems, whose plot points are represented primarily by circles, triangles, and squares. We built a highly accurate single-class U-Net convolutional neural network model to identify 97 % of image objects in a defined set of test images, locating the centers of the objects to within a few pixels of the correct locations. We found an optimal way in which to mark our training data masks to achieve this level of accuracy. The optimal markings for object classification, however, required more information in the masks to identify particular types of geometries. We show a range of different patterns used to mark the training data masks, and how they help or hurt our dual goals of location and classification. Altering the annotations in the segmentation masks can increase both the accuracy of object classification and localization on the plots, more than other factors such as adding loss terms to the network calculations. However, localization of the plot points and classification of the geometric objects require different optimal training data.
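One family of mask-annotation patterns compared in such a study, marking each scatterplot object with a small filled disk at its center, can be sketched as follows (marker shape and radius are illustrative choices, not the study's exact settings):

```python
import numpy as np

def disk_marker_mask(shape, centers, radius):
    # Build a training mask that marks each plot object with a filled
    # disk centered on its (row, col) location.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=np.uint8)
    for cy, cx in centers:
        mask[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1
    return mask
```

Varying the marker pattern (disk vs. full object outline, radius, per-class labels) is exactly the kind of mask alteration the abstract reports as trading off localization against classification accuracy.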

6.
Microsc Microanal ; 25(1): 70-76, 2019 02.
Article in English | MEDLINE | ID: mdl-30869576

ABSTRACT

Using a commercial X-ray tomography instrument, we have obtained reconstructions of a graded-index optical fiber with voxels of edge length 1.05 µm at 12 tube voltages. The fiber manufacturer created a graded index in the central region by varying the germanium concentration from a peak value in the center of the core to a very small value at the core-cladding boundary. Applying a singular value decomposition across the 12 tube voltages, we show that only two singular vectors carry significant weight. Physically, this means scans beyond two tube voltages contain largely redundant information. We concentrate on an analysis of the images associated with these two singular vectors. The first singular vector is dominant, and images of the coefficients of the first singular vector at each voxel look similar to any of the single-energy reconstructions. Images of the coefficients of the second singular vector alone appear to be noise. However, by averaging the reconstructed voxels in each of several narrow bands of radii, we can obtain values of the second singular vector at each radius. In the core region, where we expect the germanium doping to go from a peak value at the fiber center to zero at the core-cladding boundary, we find that a plot of the two coefficients of the singular vectors forms a line in the two-dimensional space, consistent with the dopant decreasing linearly with radial distance from the core center. The coating, made of a polymer rather than silica, is not on this line, indicating that the two-dimensional results are sensitive not only to the density but also to the elemental composition.
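The singular value analysis can be sketched by stacking the per-voltage reconstructions as rows of a matrix and inspecting its singular values; a rank-2 structure means scans beyond two tube voltages are largely redundant. The data here are synthetic (two spectral components plus noise), not the fiber reconstructions:

```python
import numpy as np

rng = np.random.default_rng(0)
voltages, voxels = 12, 500
basis = rng.normal(size=(2, voxels))          # two spectral components
weights = rng.normal(size=(voltages, 2))      # per-voltage mixing weights
noise = 1e-6 * rng.normal(size=(voltages, voxels))
recon = weights @ basis + noise               # stack of 12 "reconstructions"

s = np.linalg.svd(recon, compute_uv=False)    # 12 singular values
```

Only the first two singular values stand far above the noise floor, mirroring the paper's finding that two tube voltages capture nearly all the information.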

8.
Article in English | MEDLINE | ID: mdl-34877164

ABSTRACT

Fundamental limits for the calculation of scattering corrections within X-ray computed tomography (CT) are found within the independent atom approximation from an analysis of the cross sections, CT geometry, and the Nyquist sampling theorem, suggesting large reductions in computational time compared to existing methods. By modifying the scatter by less than 1 %, it is possible to treat some of the elastic scattering in the forward direction as inelastic to achieve a smoother elastic scattering distribution. We present an analysis showing that the number of samples required for the smoother distribution can be greatly reduced. We show that fixed forced detection can be used with many fewer points for inelastic scattering, but that for pure elastic scattering, a standard Monte Carlo calculation is preferred. We use smoothing for both elastic and inelastic scattering because the intrinsic angular resolution is much poorer than can be achieved for projective tomography. Representative numerical examples are given.

9.
PLoS One ; 13(12): e0208820, 2018.
Article in English | MEDLINE | ID: mdl-30571779

ABSTRACT

PURPOSE: This paper lays the groundwork for linking Hounsfield unit measurements to the International System of Units (SI), ultimately enabling traceable measurements across X-ray CT (XCT) machines. We do this by characterizing a material basis that may be used in XCT reconstruction giving linear combinations of concentrations of chemical elements (in the SI units of mol/m3) which may be observed at each voxel. By implication, linear combinations not in the set are not observable. METHODS AND MATERIALS: We formulated a model for our material basis with a set of measurements of elemental powders at four tube voltages, 80 kV, 100 kV, 120 kV, and 140 kV, on a medical XCT. The samples included 30 small plastic bottles of powders containing various compounds spanning the atomic numbers up to 20, and a bottle of water and one of air. Using the chemical formulas and measured masses, we formed a matrix giving the number of Hounsfield units per (mole per cubic meter) at each tube voltage for each of 13 chemical elements. We defined a corresponding matrix in units we call molar Hounsfield unit (HU) potency, the difference in HU values that an added mole per cubic meter in a given voxel would add to the measured HU value. We built a matrix of molar potencies for each chemical element and tube voltage and performed a singular value decomposition (SVD) on these to formulate our material basis. We determined that the dimension of this basis is two. We then compared measurements in this material space with theoretical measurements, combining XCOM cross section data with the tungsten anode spectral model using interpolating cubic splines (TASMICS), a one-parameter filter, and a simple detector model, creating a matrix similar to our experimental matrix for the first 20 chemical elements. Finally, we compared the model predictions to Hounsfield unit measurements on three XCT calibration phantoms taken from the literature. 
RESULTS: We predict the experimental HU potency values derived from our scans of chemical elements with our theoretical model built from XCOM data. The singular values and singular vectors of the model and powder measurements are in substantial agreement. Application of the Bayesian Information Criterion (BIC) shows that exactly two singular values and singular vectors describe the results over four tube voltages. We give a good account of the HU values from the literature, measured for the calibration phantoms at several tube voltages for several commercial instruments, compared with our theoretical model without introducing additional parameters. CONCLUSIONS: We have developed a two-dimensional material basis that specifies the degree to which individual elements in compounds affect the HU values in XCT images of samples with elements up to atomic number Z = 20. We show that two dimensions is sufficient given the contrast and noise in our experiment. The linear combinations of concentrations of elements that can be observed using a medical XCT have been characterized, providing a material basis for use in dual-energy reconstruction. This approach provides groundwork for improved reconstruction and for the link of Hounsfield units to the SI.
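The two-dimensional material basis idea can be sketched as a least-squares inversion: with a rank-2 potency matrix P (HU per mol/m³, one row per tube voltage), the HU values of a voxel determine only the two observable basis coefficients of its composition. The numbers below are made up for illustration, not measured potencies:

```python
import numpy as np

# Hypothetical rank-2 potency matrix: HU per (mol/m^3); rows are the four
# tube voltages (80, 100, 120, 140 kV), columns the two basis materials.
P = np.array([[30.0, 12.0],
              [25.0, 11.0],
              [22.0, 10.5],
              [20.0, 10.0]])
c_true = np.array([3.0, 7.0])        # basis coefficients, mol/m^3
hu = P @ c_true                      # HU observed at the four voltages

# Recover the two observable coefficients by least squares.
c_est, *_ = np.linalg.lstsq(P, hu, rcond=None)
```

Any component of the composition orthogonal to the column space of P is invisible to the scanner, which is the sense in which only these linear combinations are observable.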


Subjects
Models, Theoretical , Phantoms, Imaging , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards , Calibration , Humans , Tomography, X-Ray Computed/instrumentation
10.
Article in English | MEDLINE | ID: mdl-34877089

ABSTRACT

The goal of this study was to compare volumetric analysis in computed tomography (CT) with the length measurement prescribed by the Response Evaluation Criteria in Solid Tumors (RECIST) for a system with known mass and unknown shape. We injected 2 mL to 4 mL of water into vials of sodium polyacrylate and into disposable diapers. Volume measurements of the sodium polyacrylate powder were able to predict both mass and proportional changes in mass within a 95 % prediction interval of width 12 % and 16 %, respectively. The corresponding figures for RECIST were 102 % and 82 %.
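The gap between volumetric and RECIST sensitivity follows from geometry: for an idealized sphere at constant density, doubling the mass doubles the volume but changes the longest diameter by only about 26 %. A sketch under that spherical assumption (not the vial/diaper data):

```python
import math

def sphere_diameter(volume):
    # Sphere-equivalent diameter from volume: V = (pi/6) * d^3.
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

v1, v2 = 2.0, 4.0                    # mL, e.g. injected water doubling
volume_change = (v2 - v1) / v1       # 100 % volume increase
diameter_change = (sphere_diameter(v2) - sphere_diameter(v1)) / sphere_diameter(v1)
```

Because diameter scales as the cube root of volume, a 1D length measurement compresses large mass changes into small diameter changes, consistent with the much wider prediction intervals reported for RECIST.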

11.
Article in English | MEDLINE | ID: mdl-30984514

ABSTRACT

We present a case study in which we use natural language processing and machine learning techniques to automatically select candidate scientific articles that may contain new experimental thermophysical property data from thousands of articles available in five different relevant journals. The National Institute of Standards and Technology (NIST) Thermodynamic Research Center (TRC) maintains a large database of available thermophysical property data extracted from articles that are manually selected for content. Over time the number of articles requiring manual inspection has grown and assistance from machine-based methods is needed. Previous work used topic modeling along with classification techniques to classify these journal articles into those with data for the TRC database and those without. These techniques have produced classifications with accuracy between 85 % and 90 %. However, the TRC does not want to lose data from the misclassified articles that contain relevant information. In this study, we start with these topic modeling and classification techniques, and then enhance the model using information relevant to the TRC's selection process. Our goal is to minimize the number of articles that require manual selection without missing articles of importance. Through a series of selection methods, we eliminate those articles for which we can determine a rejection criterion. We can reduce the number of articles that are not of interest by 70.8 % while retaining 98.7 % of the articles of interest. We have also found that topic model classification improves when the corpus of words is derived from specific sections of the articles rather than the entire articles, and we improve on our classification by using a combination of topic models from different sections of the article. Our best classification used only the Experimental and Literature Cited sections.
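The selection strategy, eliminating only those articles for which a rejection criterion can be positively established, can be caricatured in a few lines. The terms below are hypothetical illustrations, not the TRC's actual rules:

```python
def reject_article(abstract, reject_terms=("review article", "simulation only")):
    # Reject only when a rejection criterion is positively established;
    # everything else stays in the queue for manual inspection.
    text = abstract.lower()
    return any(term in text for term in reject_terms)
```

The asymmetry matters: a missed rejection only costs a curator some reading time, while a wrongly rejected article loses data, which is why the study optimizes for retaining 98.7 % of articles of interest rather than for overall accuracy.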

12.
Acad Radiol ; 23(8): 940-52, 2016 08.
Article in English | MEDLINE | ID: mdl-27215408

ABSTRACT

RATIONALE AND OBJECTIVES: Quantifying changes in lung tumor volume is important for diagnosis, therapy planning, and evaluation of response to therapy. The aim of this study was to assess the performance of multiple algorithms on a reference data set. The study was organized by the Quantitative Imaging Biomarker Alliance (QIBA). MATERIALS AND METHODS: The study was organized as a public challenge. Computed tomography scans of synthetic lung tumors in an anthropomorphic phantom were acquired by the Food and Drug Administration. Tumors varied in size, shape, and radiodensity. Participants applied their own semi-automated volume estimation algorithms that either did not allow or allowed post-segmentation correction (type 1 or 2, respectively). Statistical analysis of accuracy (percent bias) and precision (repeatability and reproducibility) was conducted across algorithms, as well as across nodule characteristics, slice thickness, and algorithm type. RESULTS: Eighty-four percent of volume measurements of QIBA-compliant tumors were within 15% of the true volume, ranging from 66% to 93% across algorithms, compared to 61% of volume measurements for all tumors (ranging from 37% to 84%). Algorithm type did not affect bias substantially; however, it was an important factor in measurement precision. Algorithm precision was notably better as tumor size increased, worse for irregularly shaped tumors, and better on average for type 1 algorithms. Over all nodules meeting the QIBA Profile, precision, as measured by the repeatability coefficient, was 9.0% compared to 18.4% overall. CONCLUSION: The results achieved in this study, using a heterogeneous set of measurement algorithms, support QIBA quantitative performance claims in terms of volume measurement repeatability for nodules meeting the QIBA Profile criteria.
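The two metrics at the heart of the analysis can be sketched as follows; the repeatability coefficient here uses the standard 1.96·√2·SD form, which may differ in detail from the study's exact computation:

```python
import math

def percent_bias(measured_mean, true_value):
    # Accuracy: signed deviation of the mean measurement, in percent.
    return 100.0 * (measured_mean - true_value) / true_value

def repeatability_coefficient(within_sd):
    # Precision: the bound that the difference between two repeated
    # measurements stays within about 95 % of the time.
    return 1.96 * math.sqrt(2.0) * within_sd
```

On this convention, a 9.0 % repeatability coefficient corresponds to a within-condition standard deviation of roughly 3.2 %.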


Subjects
Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Tomography, X-Ray Computed/methods , Algorithms , Humans , Lung/diagnostic imaging , Lung/pathology , Phantoms, Imaging , Reproducibility of Results , Tumor Burden
13.
J Magn Reson Imaging ; 44(4): 846-55, 2016 10.
Article in English | MEDLINE | ID: mdl-27008431

ABSTRACT

PURPOSE: To assess the ability of a recent, anatomically designed breast phantom incorporating T1 and diffusion elements to serve as a quality control device for quantitative comparison of apparent diffusion coefficient (ADC) measurements calculated from diffusion-weighted MRI (DWI) within and across MRI systems. MATERIALS AND METHODS: A bilateral breast phantom incorporating multiple T1 and diffusion tissue mimics and a geometric distortion array was imaged with DWI on 1.5 Tesla (T) and 3.0T scanners from two different manufacturers, using three different breast coils (three configurations total). Multiple measurements were acquired to assess the bias and variability of different diffusion weighted single-shot echo-planar imaging sequences on the scanner-coil systems. RESULTS: The repeatability of ADC measurements was mixed: the standard deviation relative to baseline across scanner-coil-sequences ranged from low variability (0.47, 95% confidence interval [CI]: 0.22-1.00) to high variability (1.69, 95% CI: 0.17-17.26), depending on material, with the lowest and highest variability from the same scanner-coil-sequence. Assessment of image distortion showed that right/left measurements of the geometric distortion array were 1 to 16% larger on the left coil side compared with the right coil side independent of scanner-coil systems, diffusion weighting, and phase-encoding direction. CONCLUSION: This breast phantom can be used to measure scanner-coil-sequence bias and variability for DWI. When establishing a multisystem study, this breast phantom may be used to minimize protocol differences (e.g., due to available sequences or shimming technique), to correct for bias that cannot be minimized, and to weigh results from each system depending on respective variability. J. Magn. Reson. Imaging 2016;44:846-855.
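ADC values of the kind compared across scanner-coil-sequence combinations come from the monoexponential DWI signal model. A sketch with illustrative signal values (not the phantom's measured data):

```python
import math

def adc(s0, s_b, b):
    # Monoexponential DWI model: S(b) = S0 * exp(-b * ADC), so
    # ADC = ln(S0 / S(b)) / b, with b in s/mm^2 and ADC in mm^2/s.
    return math.log(s0 / s_b) / b

s0 = 1000.0                                # signal at b = 0
s_b = s0 * math.exp(-1000.0 * 2.0e-3)      # water-like mimic at b = 1000
```

Because the estimate depends on a signal ratio, system-level effects such as coil sensitivity largely cancel, which is what makes cross-system ADC comparison with a common phantom feasible.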


Subjects
Artifacts , Equipment Failure Analysis/instrumentation , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Magnetic Resonance Imaging/methods , Phantoms, Imaging , Equipment Design , Equipment Failure Analysis/methods , Female , Humans , Image Interpretation, Computer-Assisted/instrumentation , Reproducibility of Results , Sensitivity and Specificity
14.
BMC Bioinformatics ; 16: 330, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26472075

ABSTRACT

BACKGROUND: The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. METHODS: We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interests (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. RESULTS: The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.
CONCLUSIONS: The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.


Subjects
Algorithms , Optical Imaging , Animals , Automation , Humans , Microscopy
15.
Acad Radiol ; 22(11): 1393-408, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26376841

ABSTRACT

RATIONALE AND OBJECTIVES: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semiautomated lung tumor volume measurement algorithms from clinical thoracic computed tomography data sets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) Computed Tomography Volumetry Profile. MATERIALS AND METHODS: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. RESULTS: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and was found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best performing partition performed markedly better on tumors with equivalent diameters greater than 40 mm. Larger tumors benefitted by human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not only in overall volume but also in detail. CONCLUSIONS: Nine of the 12 participating algorithms pass precision requirements similar to what is indicated in the QIBA Profile, with the caveat that the present study was not designed to explicitly evaluate algorithm profile conformance.
Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes greater than 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, although the partition comprising best performing algorithms did meet this requirement for a tumor size of greater than approximately 40 mm.


Subjects
Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Tomography, X-Ray Computed , Tumor Burden , Algorithms , Female , Humans , Linear Models , Lung/diagnostic imaging , Lung/pathology , Reproducibility of Results
16.
Radiology ; 277(1): 124-33, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25989480

ABSTRACT

PURPOSE: To compare image resolution from iterative reconstruction with resolution from filtered back projection for low-contrast objects on phantom computed tomographic (CT) images across vendors and exposure levels. MATERIALS AND METHODS: Randomized repeat scans of an American College of Radiology CT accreditation phantom (module 2, low contrast) were performed for multiple radiation exposures, vendors, and vendor iterative reconstruction algorithms. Eleven volunteers were presented with 900 images by using a custom-designed graphical user interface to perform a task created specifically for this reader study. Results were analyzed by using statistical graphics and analysis of variance. RESULTS: Across three vendors (blinded as A, B, and C) and across three exposure levels, the mean correct classification rate was higher for iterative reconstruction than filtered back projection (P < .01): 87.4% iterative reconstruction and 81.3% filtered back projection at 20 mGy, 70.3% iterative reconstruction and 63.9% filtered back projection at 12 mGy, and 61.0% iterative reconstruction and 56.4% filtered back projection at 7.2 mGy. There was a significant difference in mean correct classification rate between vendor B and the other two vendors. Across all exposure levels, images obtained by using vendor B's scanner outperformed those from the other vendors, with a mean correct classification rate of 74.4%, while the mean correct classification rate for vendors A and C was 68.1% and 68.3%, respectively. Across all readers, the mean correct classification rate for iterative reconstruction (73.0%) was higher compared with the mean correct classification rate for filtered back projection (67.0%). CONCLUSION: The potential exists to reduce radiation dose without compromising low-contrast detectability by using iterative reconstruction instead of filtered back projection. There is substantial variability across vendor reconstruction algorithms.


Subjects
Image Processing, Computer-Assisted , Phantoms, Imaging , Radiation Exposure , Tomography Scanners, X-Ray Computed , Tomography, X-Ray Computed
17.
Radiology ; 275(3): 725-34, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25686365

ABSTRACT

PURPOSE: To develop and validate a metric of computed tomographic (CT) image quality that incorporates the noise texture and resolution properties of an image. MATERIALS AND METHODS: Images of the American College of Radiology CT quality assurance phantom were acquired by using three commercial CT systems at seven dose levels with filtered back projection (FBP) and iterative reconstruction (IR). Image quality was characterized by the contrast-to-noise ratio (CNR) and a detectability index (d') that incorporated noise texture and spatial resolution. The measured CNR and d' were compared with a corresponding observer study by using the Spearman rank correlation coefficient to determine how well each metric reflects the ability of an observer to detect subtle lesions. Statistical significance of the correlation between each metric and observer performance was determined by using a Student t distribution; P values less than .05 indicated a significant correlation. Additionally, each metric was used to estimate the dose reduction potential of IR algorithms while maintaining image quality. RESULTS: Across all dose levels, scanner models, and reconstruction algorithms, the d' correlated strongly with observer performance in the corresponding observer study (ρ = 0.95; P < .001), whereas the CNR correlated weakly with observer performance (ρ = 0.31; P = .21). Furthermore, the d' showed that the dose-reduction capabilities differed between clinical implementations (range, 12%-35%) and were less than those predicted from the CNR (range, 50%-54%). CONCLUSION: The strong correlation between the observer performance and the d' indicates that the d' is superior to the CNR for the evaluation of CT image quality. Moreover, the results of this study indicate that the d' improves less than the CNR with the use of IR, which indicates less potential for IR dose reduction than previously thought.
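The CNR baseline that d' is compared against is straightforward to compute; d' additionally folds in noise texture (via the noise power spectrum) and spatial resolution, which is why the two metrics diverge under iterative reconstruction. A sketch with made-up ROI values:

```python
import numpy as np

def contrast_to_noise_ratio(roi, background):
    # CNR = (mean_ROI - mean_background) / SD_background.
    return (np.mean(roi) - np.mean(background)) / np.std(background)

background = np.array([98.0, 100.0, 102.0, 100.0])   # illustrative HU samples
lesion = np.array([104.0, 106.0])
cnr = contrast_to_noise_ratio(lesion, background)
```

Because CNR reduces noise to a single standard deviation, it cannot see the coarser noise texture that iterative reconstruction introduces, which is the failure mode the study documents.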


Subjects
Image Processing, Computer-Assisted , Task Performance and Analysis , Tomography, X-Ray Computed/standards , Equipment Design , Signal-To-Noise Ratio , Tomography, X-Ray Computed/instrumentation
18.
BMC Bioinformatics ; 15: 431, 2014 Dec 30.
Article in English | MEDLINE | ID: mdl-25547324

ABSTRACT

BACKGROUND: Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identifying, and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. RESULTS: We present a new automated segmentation method called FogBank that accurately separates cells that are confluent and touching each other. The technique has been successfully applied to phase contrast, bright field, fluorescence microscopy, and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance on the reference datasets; FogBank outperformed all related algorithms. The accuracy was also visually verified on datasets covering 14 cell lines across 3 imaging modalities, yielding 876 segmentation evaluation images.
CONCLUSIONS: FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open-source and includes a Graphical User Interface for user friendly execution.
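FogBank's first feature, histogram binning of pixel intensities, can be sketched as follows. This is an illustrative reimplementation of just that quantization step, not the released FogBank code; the function name and default bin count are assumptions:

```python
import numpy as np

def quantize_intensities(img, n_bins=10):
    """Quantize pixel intensities into n_bins coarse levels.
    Coarse binning flattens small intensity fluctuations (noise)
    that would otherwise create spurious watershed minima and
    over-segmentation."""
    edges = np.linspace(img.min(), img.max(), n_bins + 1)
    # Map each pixel to the bin it falls in (0 .. n_bins - 1).
    return np.digitize(img, edges[1:-1])
```

The watershed transform is then run on the quantized image; the second feature, the geodesic distance mask, constrains region growth so that cut lines follow cell shape rather than straight edges.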


Subjects
Algorithms , Cells/cytology , Computational Biology/methods , Image Interpretation, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Microscopy, Phase-Contrast/methods , Animals , Breast/cytology , Female , Humans , Mice , NIH 3T3 Cells , Saccharomyces cerevisiae/cytology
19.
Med Phys ; 38(5): 2552-7, 2011 May.
Article in English | MEDLINE | ID: mdl-21776790

ABSTRACT

PURPOSE: The authors investigate the extent to which Response Evaluation Criteria in Solid Tumors (RECIST) can predict tumor volumes in ideal geometric settings and using clinical data. METHODS: The authors consider a hierarchy of models including uniaxial ellipsoids, general ellipsoids, and composites of ellipsoids, using both analytical and numerical techniques to show how well RECIST can predict tumor volumes in each case. The models have certain features that are compared to clinical data. RESULTS: The principal conclusion is that a change in the reported RECIST value needs to be a factor of at least 1.2 to achieve 95% confidence that one ellipsoid is larger than another, assuming the ratio of maximum to minimum diameters is no more than 2, an assumption that is reasonable for some classes of tumors. There is a significant probability that RECIST will select a tumor other than the largest due to orientation effects of nonspherical tumors: in previously reported malignoma data, RECIST would have selected a tumor other than the largest in 9% of the cases. Also, the widely used spherical model connecting RECIST values for a single tumor to volumes overestimates these volumes. CONCLUSIONS: RECIST imposes a limit on the ability to determine tumor volumes that is greater than the limit imposed by modern medical computed tomography machines. It is also likely that the RECIST limit is above the natural biological variability of stable lesions. The authors recommend the study of such natural variability as a fruitful avenue for further study.
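The spherical model the authors criticize converts a single RECIST diameter d to a volume via V = (π/6)d³. A small sketch (the function names are mine, not the paper's) shows how that model compares with the true volume of an ellipsoid:

```python
import math

def ellipsoid_volume(a, b, c):
    # True volume from the three semi-axes: V = (4/3) * pi * a * b * c.
    return (4.0 / 3.0) * math.pi * a * b * c

def recist_sphere_volume(d):
    # Spherical model from the single longest diameter d: V = (pi/6) * d^3.
    return (math.pi / 6.0) * d ** 3
```

For a sphere (a = b = c) the two agree exactly, but for an ellipsoid with a 2:1 ratio of longest to shortest axis the spherical model overestimates the volume by a factor of 4, consistent with the paper's point that RECIST values map poorly onto volumes for nonspherical tumors.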


Subjects
Algorithms , Imaging, Three-Dimensional/methods , Neoplasms/diagnostic imaging , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
20.
Cytometry A ; 79(7): 545-59, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21674772

ABSTRACT

The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability.
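A bivariate index of this kind scores a segmentation on two axes at once: roughly, how much of the reference cell was missed versus how much extra area was included. The sketch below is an illustrative pair of such measures under stated assumptions (the paper's exact definition may differ, and the function name is mine):

```python
import numpy as np

def bivariate_similarity(ref_mask, test_mask):
    """Return (recovered, correct) for binary masks:
    recovered = fraction of reference pixels found by the algorithm
                (low values indicate underestimation);
    correct   = fraction of algorithm pixels inside the reference
                (low values indicate overestimation)."""
    ref = np.asarray(ref_mask, dtype=bool)
    test = np.asarray(test_mask, dtype=bool)
    overlap = np.logical_and(ref, test).sum()
    return overlap / ref.sum(), overlap / test.sum()
```

Plotting segmentations in this (recovered, correct) plane separates algorithms that shrink cells from those that bleed into the background, a distinction a single scalar overlap score cannot make.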


Subjects
Algorithms , Cells/cytology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Animals , Mice , Rats