1.
BJU Int ; 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38961742

ABSTRACT

OBJECTIVES: To evaluate a cancer-detecting artificial intelligence (AI) algorithm on serial biopsies in patients with prostate cancer on active surveillance (AS). PATIENTS AND METHODS: A total of 180 patients in the Prostate Cancer Research International Active Surveillance (PRIAS) cohort were prospectively monitored using pre-defined criteria. Diagnostic and re-biopsy slides from 2011 to 2020 (n = 4744) were scanned and analysed by an in-house AI-based cancer detection algorithm. The algorithm was evaluated for sensitivity, specificity, and accuracy in predicting the need for active treatment. The prognostic properties of cancer size, prostate-specific antigen (PSA) level and PSA density at diagnosis were also evaluated. RESULTS: The sensitivity and specificity of the AI algorithm for correct detection of cancer areas were 0.96 and 0.73, respectively, with the original pathology report diagnosis as the reference method. The area of cancer estimated by the pathologists correlated highly with the AI-detected cancer size (r = 0.83). Using the AI algorithm, 63% of the slides would not need to be read by a pathologist, as they were classed as benign, at the risk of missing the 0.55% of slides containing cancer. Biopsy cancer content and PSA density at diagnosis were prognostic of whether the patient stayed on AS or was discontinued for active treatment. CONCLUSION: The AI-based biopsy cancer detection algorithm could be used to reduce pathologists' workload in an AS cohort. The detected cancer amount correlated well with the cancer length measured by the pathologist, and the algorithm performed well in finding even small areas of cancer. To our knowledge, this is the first report of an AI-based digital pathology algorithm used to detect cancer in a cohort of patients on AS.
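The triage arithmetic behind "63% of slides need no pathologist review at the cost of missing 0.55% of slides" can be reproduced from a slide-level confusion matrix. A minimal sketch; the counts below are hypothetical, chosen only to match the reported rates (the abstract gives no raw counts):

```python
# Confusion-matrix arithmetic for a slide-level cancer screen.
# tp/fn/tn/fp counts are hypothetical, not from the study.

def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, share of slides triaged away as benign,
    and share of all slides that are missed cancers."""
    total = tp + fn + tn + fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    triaged_benign = (tn + fn) / total   # slides a pathologist would skip
    missed = fn / total                  # cancer slides among the skipped
    return sensitivity, specificity, triaged_benign, missed

sens, spec, triaged, missed = screening_metrics(tp=624, fn=26, tn=2963, fp=1096)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  "
      f"triaged as benign={triaged:.0%}  missed cancer slides={missed:.2%}")
```

With these illustrative counts the derived rates come out at the values the abstract reports, which shows how the four headline numbers are linked.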

2.
Alzheimers Res Ther ; 16(1): 61, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38504336

ABSTRACT

BACKGROUND: Predicting future Alzheimer's disease (AD)-related cognitive decline among individuals with subjective cognitive decline (SCD) or mild cognitive impairment (MCI) is an important task for healthcare. Structural brain imaging as measured by magnetic resonance imaging (MRI) could potentially contribute when making such predictions. It is unclear if the predictive performance of MRI can be improved using entire brain images in deep learning (DL) models compared to using pre-defined brain regions. METHODS: A cohort of 332 individuals with SCD/MCI was included from the Swedish BioFINDER-1 study. The goal was to predict longitudinal SCD/MCI-to-AD dementia progression and change in Mini-Mental State Examination (MMSE) over four years. Four models were evaluated using different predictors: (1) clinical data only, including demographics, cognitive tests and APOE ε4 status, (2) clinical data plus hippocampal volume, (3) clinical data plus all regional MRI gray matter volumes (N = 68) extracted using FreeSurfer software, (4) a DL model trained using multi-task learning with MRI images, Jacobian determinant images and baseline cognition as input. A double cross-validation scheme was used, with five test folds and, within each, ten validation folds. External evaluation was performed on part of the ADNI dataset, including 108 patients. The Mann-Whitney U-test was used to determine statistically significant differences in performance, with p-values less than 0.05 considered significant. RESULTS: In the BioFINDER cohort, 109 patients (33%) progressed to AD dementia. The performance of the clinical data model was area under the curve (AUC) = 0.85 for prediction of progression to AD dementia and R2 = 0.14 for four-year cognitive decline. The performance was improved for both outcomes when adding hippocampal volume (AUC = 0.86, R2 = 0.16). Adding FreeSurfer brain regions improved prediction of four-year cognitive decline but not progression to AD (AUC = 0.83, R2 = 0.17), while the DL model worsened the performance for both outcomes (AUC = 0.84, R2 = 0.08). A sensitivity analysis showed that the Jacobian determinant image was more informative than the MRI image, but that performance was maximized when both were included. In the external evaluation cohort from ADNI, 23 patients (21%) progressed to AD dementia. The results for predicted progression to AD dementia were similar to those for the BioFINDER test data, while the performance for cognitive decline deteriorated. CONCLUSIONS: The DL model did not significantly improve the prediction of clinical disease progression in AD, compared to regression models with a single pre-defined brain region.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Humans , Alzheimer Disease/complications , Alzheimer Disease/diagnostic imaging , Biomarkers , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain/pathology , Cognitive Dysfunction/diagnosis , Cognition , Atrophy/pathology , Disease Progression
3.
Res Sq ; 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37986841

ABSTRACT

Background: Predicting future Alzheimer's disease (AD)-related cognitive decline among individuals with subjective cognitive decline (SCD) or mild cognitive impairment (MCI) is an important task for healthcare. Structural brain imaging as measured by magnetic resonance imaging (MRI) could potentially contribute when making such predictions. It is unclear if the predictive performance of MRI can be improved using entire brain images in deep learning (DL) models compared to using pre-defined brain regions. Methods: A cohort of 332 individuals with SCD/MCI was included from the Swedish BioFINDER-1 study. The goal was to predict longitudinal SCD/MCI-to-AD dementia progression and change in Mini-Mental State Examination (MMSE) over four years. Four models were evaluated using different predictors: 1) clinical data only, including demographics, cognitive tests and APOE ε4 status, 2) clinical data plus hippocampal volume, 3) clinical data plus all regional MRI gray matter volumes (N=68) extracted using FreeSurfer software, 4) a DL model trained using multi-task learning with MRI images, Jacobian determinant images and baseline cognition as input. Models were developed on 80% of subjects (N=267) and tested on the remaining 20% (N=65). The Mann-Whitney U-test was used to determine statistically significant differences in performance, with p-values less than 0.05 considered significant. Results: In the test set, 21 patients (32.3%) progressed to AD dementia. The performance of the clinical data model for prediction of progression to AD dementia was area under the curve (AUC)=0.87 and four-year cognitive decline was R2=0.17. The performance was significantly improved for both outcomes when adding hippocampal volume (AUC=0.91, R2=0.26, p-values <0.05) or FreeSurfer brain regions (AUC=0.90, R2=0.27, p-values <0.05). Conversely, the DL model did not show any significant difference from the clinical data model (AUC=0.86, R2=0.13).
A sensitivity analysis showed that the Jacobian determinant image was more informative than the MRI image, but that performance was maximized when both were included. Conclusions: The DL model did not significantly improve the prediction of clinical disease progression in AD, compared to regression models with a single pre-defined brain region.

4.
J Nucl Cardiol ; 30(1): 116-126, 2023 02.
Article in English | MEDLINE | ID: mdl-35610536

ABSTRACT

PURPOSE: To evaluate the prediction of quantitative coronary angiography (QCA) values from myocardial perfusion imaging (MPI) by means of deep learning. METHODS: 546 patients (67% men) undergoing stress 99mTc-tetrofosmin MPI with a CZT camera in the upright and supine positions were included (1092 MPIs). Patients were divided into two groups: an ICA group of 271 patients who underwent invasive coronary angiography (ICA) within 6 months of MPI, and a control group of 275 patients with low pre-test probability of coronary artery disease (CAD) and a normal MPI. QCA analyses were performed using radiologic software and verified by an expert reader. The left ventricular myocardium was segmented using clinical nuclear cardiology software and verified by an expert reader. A deep learning model was trained using a double cross-validation scheme such that all data could also be used as test data. RESULTS: The area under the receiver-operating characteristic curve for the deep learning prediction of QCA-defined stenosis (> 50% narrowing of the artery) for the external test cohort was, per patient, 85% [95% confidence interval (CI) 84%-87%], and per vessel: LAD 74% (CI 72%-76%), RCA 85% (CI 83%-86%), LCx 81% (CI 78%-84%), average 80% (CI 77%-83%). CONCLUSION: Deep learning can predict the presence of coronary artery stenosis at different QCA percentages from MPIs.
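The per-patient and per-vessel numbers above are areas under ROC curves. A minimal AUC computation, equivalent to the normalized Mann-Whitney U statistic, using made-up scores rather than the study's model outputs:

```python
# AUC = probability that a randomly chosen positive case is scored
# above a randomly chosen negative case (ties count one half).

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]   # hypothetical stenosis scores
print(f"AUC = {roc_auc(labels, scores):.3f}")
```

This pairwise-comparison view makes it clear why AUC is insensitive to any monotone rescaling of the model's output scores.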


Subject(s)
Coronary Artery Disease , Coronary Stenosis , Deep Learning , Myocardial Perfusion Imaging , Male , Humans , Female , Coronary Angiography/methods , Tomography, Emission-Computed, Single-Photon/methods , Myocardial Perfusion Imaging/methods , Perfusion , Cadmium , Tellurium
5.
Eur Urol Focus ; 7(5): 995-1001, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33303404

ABSTRACT

BACKGROUND: Gleason grading is the standard diagnostic method for prostate cancer and is essential for determining prognosis and treatment. The dearth of expert pathologists, the inter- and intraobserver variability, and the labour intensity of Gleason grading all necessitate the development of a user-friendly tool for robust standardisation. OBJECTIVE: To develop an artificial intelligence (AI) algorithm, based on machine learning and convolutional neural networks, as a tool for improved standardisation in Gleason grading of prostate cancer biopsies. DESIGN, SETTING, AND PARTICIPANTS: A total of 698 prostate biopsy sections from 174 patients were used for training. The training sections were annotated by two senior consultant pathologists. The final algorithm was tested on 37 biopsy sections from 21 patients, with digitised slide images from two different scanners. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Correlation, sensitivity, and specificity parameters were calculated. RESULTS AND LIMITATIONS: The algorithm shows high accuracy in detecting cancer areas (sensitivity: 100%, specificity: 68%). Compared with the pathologists, the algorithm also performed well in detecting cancer areas (intraclass correlation coefficient [ICC]: 0.99) and assigning the Gleason patterns correctly: Gleason patterns 3 and 4 (ICC: 0.96 and 0.94, respectively), and to a lesser extent, Gleason pattern 5 (ICC: 0.82). Similar results were obtained using two different scanners. CONCLUSIONS: Our AI-based algorithm can reliably detect prostate cancer and quantify the Gleason patterns in core needle biopsies, with accuracy similar to that of pathologists. The results are reproducible on images from different scanners with a proven low level of intraobserver variability. We believe that this AI tool could serve as an efficient and interactive tool for pathologists.
PATIENT SUMMARY: We developed a sensitive artificial intelligence tool for prostate biopsies, which detects and grades cancer with similar accuracy to pathologists. This tool holds promise to improve the diagnosis of prostate cancer.


Subject(s)
Prostate , Prostatic Neoplasms , Artificial Intelligence , Automation , Biopsy , Humans , Image Interpretation, Computer-Assisted , Male , Neoplasm Grading , Prostate/pathology , Prostatic Neoplasms/pathology
6.
Oral Maxillofac Surg ; 20(4): 385-390, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27638643

ABSTRACT

PURPOSE: The aim of this prospective study was to investigate the two-year outcomes following immediate loading of mono-cortically engaged implants. MATERIALS AND METHODS: Thirty healthy patients with edentulous mandibles, with an average age of 67.3 years and presenting with a sufficient bony ridge at the mandibular symphysis, were included in the study. Four Astra Tech TiOblast® implants were installed between the mental foramina using the mono-cortical anchorage technique. The primary stability of the implants was assessed by resonance frequency analysis (RFA). After uni-abutments were placed, a temporary bridge was constructed and fixed the same day. The definitive bridges were installed 6 weeks after implant surgery. Five of the 120 placed implants were lost in four patients during the first 6 weeks, and these patients were excluded from the follow-up. The changes in marginal bone level (n = 20) were evaluated in Brazilian and Swedish groups at baseline, 6 weeks, 6 months, 12 months and 24 months. The RFA (n = 30) was evaluated at baseline, 6 weeks, 6 months, 12 months and 24 months postoperatively. RESULTS: Compared with baseline measurements, the postoperative values for marginal bone level (6 weeks, 6 months, 12 months and 24 months) were significantly reduced (p < 0.05), while no differences were observed in the RFA analysis (12 months and 24 months). CONCLUSIONS: Immediate loading of mono-cortically engaged implants in the edentulous mandible is safe and predictable, and implant stability remains excellent after 2 years of follow-up.


Subject(s)
Bite Force , Dental Implantation, Endosseous/methods , Mandible/surgery , Mouth, Edentulous/surgery , Postoperative Complications/physiopathology , Weight-Bearing/physiology , Aged , Dental Prosthesis, Implant-Supported , Denture, Partial , Female , Follow-Up Studies , Humans , Male , Mandible/physiopathology , Mouth, Edentulous/physiopathology , Prospective Studies
7.
J Comput Neurosci ; 41(1): 45-63, 2016 08.
Article in English | MEDLINE | ID: mdl-27121476

ABSTRACT

This work concerns efficient and reliable numerical simulations of the dynamic behaviour of a moving-boundary model for tubulin-driven axonal growth. The model is nonlinear and consists of a coupled set of a partial differential equation (PDE) and two ordinary differential equations. The PDE is defined on a computational domain with a moving boundary, which is part of the solution. Numerical simulations based on standard explicit time-stepping methods are too time consuming due to the small time steps required for numerical stability. On the other hand, standard implicit schemes are too complex due to the nonlinear equations that need to be solved in each step. Instead, we propose to use the Peaceman-Rachford splitting scheme combined with temporal and spatial scalings of the model. Simulations based on this scheme have proved efficient, accurate, and reliable, which makes it possible to evaluate the model, e.g. its dependence on biological and physical model parameters. These evaluations show, among other things, that the initial axon growth is very fast, that active transport rather than diffusion is the dominant contributor to the growth velocity, and that the polymerization rate in the growth cone does not affect the final axon length.
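For orientation, the Peaceman-Rachford scheme advances the solution by two symmetric implicit half-steps. In its standard textbook form for a semi-discretized system u' = (A + B)u (the paper's A and B would be the split parts of its discretized, scaled model, which the abstract does not specify):

```latex
\left(I - \tfrac{\Delta t}{2}A\right) u^{\,n+1/2} = \left(I + \tfrac{\Delta t}{2}B\right) u^{\,n},
\qquad
\left(I - \tfrac{\Delta t}{2}B\right) u^{\,n+1} = \left(I + \tfrac{\Delta t}{2}A\right) u^{\,n+1/2}.
```

Each half-step is implicit in only one of the two operators, so the scheme retains the stability advantages of an implicit method while each individual solve stays far simpler than a fully implicit step in the complete nonlinear system.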


Subject(s)
Axons/physiology , Computer Simulation , Models, Neurological , Neurons/physiology , Algorithms , Animals , Humans
8.
J Proteome Res ; 11(5): 2955-67, 2012 May 04.
Article in English | MEDLINE | ID: mdl-22471554

ABSTRACT

Functional analysis of quantitative expression data is becoming common practice within the proteomics and transcriptomics fields; however, a gold standard for this type of analysis has not yet emerged. To grasp the systemic changes in biological systems, efficient and robust methods are needed for data analysis following expression regulation experiments. We discuss several conceptual and practical challenges potentially hindering the emergence of such methods and present a novel method, called FEvER, that utilizes two enrichment models in parallel. We also present analysis of three disparate differential expression data sets using our method and compare our results to those of other established methods. With many useful features, such as a pathway hierarchy overview, we believe the FEvER method and its software implementation will provide a useful tool for peers in the field of proteomics. Furthermore, we show that the method is also applicable to other types of expression data.
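Enrichment models of the kind referred to above are commonly over-representation tests. As a point of reference, a minimal one-sided hypergeometric enrichment test can be written with the standard library alone (a generic sketch; FEvER's two parallel models may differ, and the gene counts below are made up):

```python
# One-sided hypergeometric (Fisher-style) over-representation test.
from math import comb

def enrichment_pvalue(pop, pop_hits, sample, sample_hits):
    """P(at least `sample_hits` pathway genes when drawing `sample` genes
    without replacement from `pop` genes, `pop_hits` of them in the pathway)."""
    tail = sum(
        comb(pop_hits, k) * comb(pop - pop_hits, sample - k)
        for k in range(sample_hits, min(sample, pop_hits) + 1)
    )
    return tail / comb(pop, sample)

# Hypothetical: 12 of 300 regulated genes fall in a 40-gene pathway,
# against a 20000-gene background (expected by chance: 300*40/20000 = 0.6).
p = enrichment_pvalue(pop=20000, pop_hits=40, sample=300, sample_hits=12)
print(f"enrichment p-value: {p:.2e}")
```

In practice such raw p-values are computed per pathway and then corrected for multiple testing across the whole pathway collection.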


Subject(s)
Biosynthetic Pathways , Computational Biology/methods , Proteomics/methods , Software , Cell Line, Tumor , Databases, Protein , Dinitrochlorobenzene/pharmacology , Fungal Proteins/chemistry , Gene Expression Profiling , Humans , Mitosis , Models, Biological , Neoplasm Proteins/chemistry , Neoplasms/chemistry , Saccharomyces cerevisiae/chemistry , Saccharomyces cerevisiae/enzymology , Transcriptome
9.
Genome Biol ; 9(1): R13, 2008 Jan 21.
Article in English | MEDLINE | ID: mdl-18208590

ABSTRACT

Genomic regions with altered gene expression are a characteristic feature of cancer cells. We present a novel method for identifying such regions in gene expression maps. This method is based on total variation minimization, a classical signal restoration technique. In systematic evaluations, we show that our method combines top-notch detection performance with an ability to delineate relevant regions without excessive over-segmentation, making it a significant advance over existing methods. Software (Rendersome) is provided.
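Total variation minimization, the signal-restoration technique the method builds on, can be illustrated in one dimension with a short generic solver (a dual projected-gradient sketch in the spirit of Chambolle's method, not the Rendersome implementation; the input profile is made up):

```python
# 1-D total variation denoising: minimize
#   0.5 * sum((x[i] - y[i])**2) + lam * sum(|x[i+1] - x[i]|)
# via projected gradient on the dual problem, step size 1/4.

def tv_denoise(y, lam, iters=5000):
    n = len(y)
    z = [0.0] * (n - 1)          # dual variables, kept in [-lam, lam]

    def primal(z):
        # x[j] = y[j] - (D^T z)[j] = y[j] + z[j] - z[j-1]  (out-of-range z = 0)
        return [y[j]
                + (z[j] if j < n - 1 else 0.0)
                - (z[j - 1] if j > 0 else 0.0)
                for j in range(n)]

    for _ in range(iters):
        x = primal(z)
        z = [max(-lam, min(lam, z[i] + 0.25 * (x[i + 1] - x[i])))
             for i in range(n - 1)]
    return primal(z)

# A noisy two-level "expression map": TV minimization flattens each region
# while keeping the breakpoint between the regions sharp.
y = [0.0, 0.3, -0.2, 0.1, 0.0, 5.2, 4.8, 5.1, 4.9, 5.0]
print([round(v, 2) for v in tv_denoise(y, lam=0.5)])
```

This piecewise-constant behaviour is exactly what makes TV minimization suited to delineating regions without over-segmenting them: small within-region fluctuations are absorbed, while large level shifts survive as breakpoints.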


Subject(s)
Computational Biology/methods , Gene Expression Regulation, Neoplastic/genetics , Chromosome Mapping/methods , Humans , Methods , Neoplasms/genetics , Software
10.
IEEE Trans Pattern Anal Mach Intell ; 29(1): 181-4, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17108394

ABSTRACT

Many visual cues for surface reconstruction from known views are sparse in nature, e.g., specularities, surface silhouettes, and salient features in an otherwise textureless region. Often, these cues are the only information available to an observer. To allow these constraints to be used either in conjunction with dense constraints such as pixel-wise similarity, or alone, we formulate such constraints in a variational framework. We propose a sparse variational constraint in the level set framework, enforcing a surface to pass through a specific point, and a sparse variational constraint on the surface normal along the observed viewing direction, as is the nature of, e.g., specularities. These constraints are capable of reconstructing surfaces from extremely sparse data. The approach has been applied and validated on the shape from specularities problem.


Subject(s)
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Information Storage and Retrieval/methods , Signal Processing, Computer-Assisted
11.
Biotechnol Bioeng ; 94(5): 961-79, 2006 Aug 05.
Article in English | MEDLINE | ID: mdl-16615160

ABSTRACT

An innovative type of biofilm model is derived by combining an individual description of microbial particles with a continuum representation of the biofilm matrix. This hybrid model retains the advantages of each approach, while providing a more realistic description of the temporal development of biofilm structure in two or three spatial dimensions. The general model derivation takes into account any possible number of soluble components. These are substrates and metabolic products, which diffuse and react in the biofilm and within individual microbial cells. The cells grow, divide, and produce extracellular polymeric substances (EPS) in a multispecies model setting. The EPS matrix is described by a continuum representation as an incompressible viscous fluid, which can expand and retract due to generation and consumption processes. The cells move due to a pushing mechanism between cells in colonies and by an advective mechanism supported by the EPS dynamics. Detachment of both cells and EPS follows a continuum approach, whereas cells attach in discrete events. Two case studies are presented for model illustration. Biofilm consolidation is explained by shrinking due to EPS and cell degradation processes. This mechanism describes the formation of a denser layer of cells in the biofilm depth and the occurrence of an irregularly shaped biofilm surface under nutrient-limiting conditions. Micro-colony formation is investigated by growth of autotrophic microbial colonies in an EPS matrix produced by heterotrophic cells. Size and shape of colonies of ammonia-oxidizing bacteria and nitrite-oxidizing bacteria (NOB) are comparatively studied in a standard biofilm and in biofilms aerated from the membrane side.


Subject(s)
Biofilms/growth & development , Extracellular Matrix Proteins/metabolism , Extracellular Matrix/physiology , Models, Biological , Cell Aggregation/physiology , Cell Proliferation , Computer Simulation
12.
Cytometry A ; 66(1): 24-31, 2005 Jul.
Article in English | MEDLINE | ID: mdl-15915504

ABSTRACT

BACKGROUND: Morphologic examination of bone marrow and peripheral blood samples continues to be the cornerstone in diagnostic hematology. In recent years, interest in automatic leukocyte classification using image analysis has increased rapidly. Such systems collect a series of images in which each cell must be segmented accurately to be classified correctly. Although segmentation algorithms have been developed for sparse cells in peripheral blood, the problem of segmenting the complex cell clusters characterizing bone marrow images is harder and has not been addressed previously. METHODS: We present a novel algorithm for segmenting clusters of any number of densely packed cells. The algorithm first oversegments the image into cell subparts. These parts are then assembled into complete cells by solving a combinatorial optimization problem in an efficient way. RESULTS: Our experimental results show that the algorithm succeeds in correctly segmenting densely clustered leukocytes in bone marrow images. CONCLUSIONS: The presented algorithm enables automated image-based analysis of bone marrow samples for the first time and may also be adapted for other digital cytometric applications where separation of complex cell clusters is required.
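The assembly step described above — regroup oversegmented subparts into whole cells by optimizing a score — can be sketched as a search over set partitions. This toy version scores each group by how close its summed area is to a nominal cell size and brute-forces all partitions; the paper's point is precisely that the combinatorial problem is solved efficiently rather than by brute force, and the areas and scoring here are made up:

```python
# Assemble oversegmented subparts into cells by scoring set partitions.

def partitions(items):
    """Yield every set partition of `items` (Bell-number many: toy sizes only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        yield [[first]] + part                          # `first` as its own group
        for i in range(len(part)):                      # or merged into a group
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def cell_score(group, target_area=100):
    """Toy plausibility score: penalize groups whose total area is not cell-sized."""
    return -abs(sum(group) - target_area)

def assemble(subpart_areas):
    return max(partitions(subpart_areas),
               key=lambda p: sum(cell_score(g) for g in p))

best = assemble([60, 40, 55, 45, 70, 30])   # areas of six oversegmented parts
print(sorted(sorted(g) for g in best))      # grouped into cell-sized pairs
```

A real system would replace both pieces: the score with a learned or shape-based cell-plausibility measure, and the exhaustive search with an efficient combinatorial optimizer.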


Subject(s)
Algorithms , Bone Marrow Cells/cytology , Leukocytes/cytology , Animals , Bone Marrow Cells/ultrastructure , Humans , Image Processing, Computer-Assisted , Leukocytes/ultrastructure , Software