Results 1 - 20 of 127
1.
Br J Ophthalmol ; 108(2): 223-231, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-36627175

ABSTRACT

BACKGROUND/AIMS: To use artificial intelligence (AI) to: (1) exploit biomechanical knowledge of the optic nerve head (ONH) from a relatively large population; (2) assess ONH robustness (ie, sensitivity of the ONH to changes in intraocular pressure (IOP)) from a single optical coherence tomography (OCT) volume scan of the ONH without the need for biomechanical testing and (3) identify which critical three-dimensional (3D) structural features dictate ONH robustness. METHODS: 316 subjects had their ONHs imaged with OCT before and after acute IOP elevation through ophthalmo-dynamometry. IOP-induced lamina cribrosa (LC) deformations were then mapped in 3D and used to classify ONHs. ONHs with an average effective LC strain greater than 4% were classified as fragile, while those with a strain below 4% were classified as robust. Learning from these data, we compared three AI algorithms to predict ONH robustness strictly from a baseline (undeformed) OCT volume: (1) a random forest classifier; (2) an autoencoder and (3) a dynamic graph convolutional neural network (DGCNN). The latter algorithm also allowed us to identify which critical 3D structural features make a given ONH robust. RESULTS: All three methods were able to predict ONH robustness from a single OCT volume scan alone, without the need to perform biomechanical testing. The DGCNN (area under the curve (AUC): 0.76±0.08) outperformed the autoencoder (AUC: 0.72±0.09) and the random forest classifier (AUC: 0.69±0.05). Interestingly, to assess ONH robustness, the DGCNN mainly used information from the scleral canal and the LC insertion sites. CONCLUSIONS: We propose an AI-driven approach that can assess the robustness of a given ONH solely from a single OCT volume scan, without the need to perform biomechanical testing. Longitudinal studies should establish whether ONH robustness could help us identify fast visual field loss progressors. PRECIS: Using geometric deep learning, we can assess optic nerve head robustness (ie, sensitivity to a change in IOP) from a standard OCT scan, which might help to identify fast visual field loss progressors.
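
As a rough illustration of the labelling rule and the simplest of the three compared algorithms, the sketch below labels ONHs as fragile or robust from their mapped effective LC strain and fits a random-forest baseline on placeholder structural features; the feature set, data, and dimensions are assumptions for illustration, not the authors' pipeline.

```python
# Hypothetical sketch: label ONHs by LC strain and train a random-forest baseline.
# Features and data loading are placeholders, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder structural features extracted from a baseline OCT volume
# (e.g. scleral canal size, LC depth, rim thickness) -- assumed, not measured.
X = rng.normal(size=(316, 8))              # 316 subjects x 8 features
lc_strain = rng.uniform(0.0, 0.08, 316)    # mapped effective LC strain

# Labelling rule from the abstract: >4% effective LC strain -> fragile (1),
# otherwise robust (0).
y = (lc_strain > 0.04).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"baseline AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```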


Subject(s)
Optic Disk, Humans, Optic Disk/diagnostic imaging, Artificial Intelligence, Intraocular Pressure, Tonometry, Ocular, Visual Field Tests, Optical Coherence Tomography
2.
Light Sci Appl ; 13(1): 4, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38161203

ABSTRACT

Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages: pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
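
For readers unfamiliar with the conventional PR methods the review starts from, here is a minimal sketch of one classical baseline, the Gerchberg-Saxton algorithm, assuming the amplitudes in the object and Fourier planes are both measured; array sizes and iteration count are illustrative.

```python
# Minimal Gerchberg-Saxton sketch: a conventional phase-recovery baseline,
# assuming known object-plane and Fourier-plane amplitudes.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200):
    """Recover a phase that maps |source_amp| to |target_amp| under an FFT."""
    phase = np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
    field = source_amp * phase
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # enforce measured amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))   # enforce source amplitude
    return np.angle(field)
```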

3.
Sci Rep ; 13(1): 19960, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37968437

ABSTRACT

Glaucoma is a slowly progressing optic neuropathy that may eventually lead to blindness. To help patients receive customized treatment, predicting how quickly the disease will progress is important. Structural assessment using optical coherence tomography (OCT) can be used to visualize glaucomatous optic nerve and retinal damage, while functional visual field (VF) tests can be used to measure the extent of vision loss. However, VF testing is patient-dependent and highly inconsistent, making it difficult to track glaucoma progression. In this work, we developed a multimodal deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM) network for glaucoma progression prediction. We used OCT images, VF values, and demographic and clinical data of 86 glaucoma patients with five visits over 12 months. The proposed method was used to predict VF changes 12 months after the first visit by combining past multimodal inputs with synthesized future images generated using a generative adversarial network (GAN). The patients were classified into two classes based on their VF mean deviation (MD) decline: slow progressors (< 3 dB) and fast progressors (> 3 dB). We showed that our novel generative-model-based approach achieved the best AUC of 0.83 when predicting progression 6 months in advance. Further, the use of synthetic future images enabled the model to accurately predict vision loss even earlier (9 months in advance) with an AUC of 0.81, compared with using only structural (AUC = 0.68) or only functional measures (AUC = 0.72). This study provides valuable insights into the potential of using synthetic follow-up OCT images for early detection of glaucoma progression.
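
A hedged skeleton of the kind of CNN + LSTM fusion described (not the authors' implementation): a small CNN encodes each visit's OCT image, the per-visit features are concatenated with VF/clinical values, and an LSTM over the five visits produces a slow-vs-fast-progressor logit. All layer sizes, input shapes, and the `ProgressionNet` name are assumptions.

```python
# Hedged skeleton, not the authors' code: CNN per-visit encoder + LSTM over visits.
import torch
import torch.nn as nn

class ProgressionNet(nn.Module):
    def __init__(self, n_clinical=10, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # toy OCT image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32 + n_clinical, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                # logit: fast progressor

    def forward(self, oct_seq, clin_seq):
        # oct_seq: (B, T, 1, H, W); clin_seq: (B, T, n_clinical) incl. VF values
        b, t = oct_seq.shape[:2]
        feats = self.cnn(oct_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, clin_seq], dim=-1))
        return self.head(out[:, -1])                    # use the last visit's state

model = ProgressionNet()
logit = model(torch.randn(2, 5, 1, 64, 64), torch.randn(2, 5, 10))
```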


Subject(s)
Deep Learning, Glaucoma, Humans, Visual Fields, Intraocular Pressure, Disease Progression, Glaucoma/diagnostic imaging, Visual Field Tests/methods, Blindness, Vision Disorders, Optical Coherence Tomography/methods
4.
JAMA Ophthalmol ; 141(9): 882-889, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37589980

ABSTRACT

Importance: The 3-dimensional (3-D) structural phenotype of glaucoma as a function of severity was thoroughly described and analyzed, enhancing understanding of its intricate pathology beyond current clinical knowledge. Objective: To describe the 3-D structural differences in both connective and neural tissues of the optic nerve head (ONH) between different glaucoma stages using traditional and artificial intelligence-driven approaches. Design, Setting, and Participants: This cross-sectional, clinic-based study recruited 541 Chinese individuals receiving standard clinical care at Singapore National Eye Centre, Singapore, and 112 White participants of a prospective observational study at Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania. The study was conducted from May 2022 to January 2023. All participants had their ONH imaged using spectral-domain optical coherence tomography and had their visual field assessed by standard automated perimetry. Main Outcomes and Measures: (1) Clinician-defined 3-D structural parameters of the ONH and (2) 3-D structural landmarks identified by geometric deep learning that differentiated ONHs among 4 groups: no glaucoma, mild glaucoma (mean deviation [MD], ≥-6.00 dB), moderate glaucoma (MD, -6.01 to -12.00 dB), and advanced glaucoma (MD, <-12.00 dB). Results: Study participants included 213 individuals without glaucoma (mean age, 63.4 years; 95% CI, 62.5-64.3 years; 126 females [59.2%]; 213 Chinese [100%] and 0 White individuals), 204 with mild glaucoma (mean age, 66.9 years; 95% CI, 66.0-67.8 years; 91 females [44.6%]; 178 Chinese [87.3%] and 26 White [12.7%] individuals), 118 with moderate glaucoma (mean age, 68.1 years; 95% CI, 66.8-69.4 years; 49 females [41.5%]; 97 Chinese [82.2%] and 21 White [17.8%] individuals), and 118 with advanced glaucoma (mean age, 68.5 years; 95% CI, 67.1-69.9 years; 43 females [36.4%]; 53 Chinese [44.9%] and 65 White [55.1%] individuals). The majority of ONH structural differences occurred in the early glaucoma stage, followed by a plateau effect in the later stages. Using a deep neural network, 3-D ONH structural differences were found to be present in both neural and connective tissues. Specifically, a mean of 57.4% (95% CI, 54.9%-59.9%, for no to mild glaucoma), 38.7% (95% CI, 36.9%-40.5%, for mild to moderate glaucoma), and 53.1% (95% CI, 50.8%-55.4%, for moderate to advanced glaucoma) of ONH landmarks that showed major structural differences were located in neural tissues, with the remainder located in connective tissues. Conclusions and Relevance: This study uncovered complex 3-D structural differences of the ONH in both neural and connective tissues as a function of glaucoma severity. Future longitudinal studies should seek to establish a connection between specific 3-D ONH structural changes and fast visual field deterioration, and aim to improve the early detection of patients with rapid visual field loss in routine clinical care.
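
The MD-based staging rule stated in the abstract can be written directly as a small helper; the function name is ours and the thresholds are taken verbatim from the text.

```python
# Simple sketch of the glaucoma staging rule stated in the abstract,
# based on visual-field mean deviation (MD) in dB.
def glaucoma_stage(md_db: float, has_glaucoma: bool) -> str:
    if not has_glaucoma:
        return "no glaucoma"
    if md_db >= -6.00:
        return "mild"       # MD >= -6.00 dB
    if md_db >= -12.00:
        return "moderate"   # -6.01 to -12.00 dB
    return "advanced"       # MD < -12.00 dB

assert glaucoma_stage(-4.5, True) == "mild"
assert glaucoma_stage(-13.2, True) == "advanced"
```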


Subject(s)
Glaucoma, Optic Disk, Female, Humans, Middle Aged, Aged, Optical Coherence Tomography, Artificial Intelligence, Cross-Sectional Studies, Prospective Studies, Glaucoma/diagnosis, Disease Progression, Phenotype
5.
Opt Express ; 31(10): 15355-15371, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37157639

ABSTRACT

X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
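
A minimal sketch of the training strategy as described, assuming pairs of (noisy conventional reconstruction, regularized reconstruction from an iterative algorithm) are available; the network, optimizer settings, and data are placeholders.

```python
# Hedged sketch: learn a mapping from quick noisy reconstructions to
# regularized reconstructions from a conventional iterative algorithm.
# Network, data, and shapes are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(                     # toy reconstruction-refinement network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Stand-in for a DataLoader of (noisy reconstruction, regularized target) pairs.
loader = [(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))]

for noisy_recon, regularized_target in loader:
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(noisy_recon), regularized_target)
    loss.backward()
    opt.step()
```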

6.
Light Sci Appl ; 12(1): 131, 2023 May 30.
Article in English | MEDLINE | ID: mdl-37248235

ABSTRACT

Noninvasive X-ray imaging of nanoscale three-dimensional objects, such as integrated circuits (ICs), generally requires two types of scanning: ptychographic, which is translational and returns estimates of the complex electromagnetic field through the IC; combined with a tomographic scan, which collects these complex field projections from multiple angles. Here, we present Attentional Ptycho-Tomography (APT), an approach to drastically reduce the amount of angular scanning, and thus the total acquisition time. APT is machine learning-based, utilizing axial self-Attention for Ptycho-Tomographic reconstruction. APT is trained to obtain accurate reconstructions of the ICs, despite the incompleteness of the measurements. The training process includes regularizing priors in the form of typical patterns found in IC interiors, and the physics of X-ray propagation through the IC. We show that APT with 12× fewer angles achieves fidelity comparable to the gold standard Simultaneous Algebraic Reconstruction Technique (SART) with the original set of angles. When using the same set of reduced angles, APT also outperforms Filtered Back Projection (FBP), the Simultaneous Iterative Reconstruction Technique (SIRT) and SART. The time needed to compute the reconstruction is also reduced, because the trained neural network requires only a single forward pass, unlike the iterative alternatives. Our experiments show that, without loss in quality, for a 4.48 × 93.2 × 3.92 µm³ IC (≃6 × 10⁸ voxels), APT reduces the total data acquisition and computation time from 67.96 h to 38 min. We expect our physics-assisted and attention-utilizing machine learning framework to be applicable to other branches of nanoscale imaging, including materials science and biological imaging.
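
A hedged sketch of what "axial self-attention" can look like in code: attention applied only along the axial (z) dimension of a 3D feature volume, treating each lateral position as an independent sequence. The module below uses a stock multi-head attention layer and is not the APT architecture.

```python
# Hedged sketch of axial self-attention along the z axis of a 3D feature volume.
import torch
import torch.nn as nn

class AxialSelfAttention(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, vol):
        # vol: (B, C, Z, X, Y) -> treat every (x, y) location as a length-Z sequence
        b, c, z, x, y = vol.shape
        seq = vol.permute(0, 3, 4, 2, 1).reshape(b * x * y, z, c)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(b, x, y, z, c).permute(0, 4, 3, 1, 2)

attn = AxialSelfAttention(channels=16)
features = attn(torch.randn(1, 16, 32, 8, 8))
```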

7.
Nat Commun ; 14(1): 1159, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36859392

ABSTRACT

Extracting quantitative information about highly scattering surfaces from an imaging system is challenging because the phase of the scattered light undergoes multiple folds upon propagation, resulting in complex speckle patterns. One specific application is the drying of wet powders in the pharmaceutical industry, where quantifying the particle size distribution (PSD) is of particular interest. A non-invasive and real-time monitoring probe in the drying process is required, but there is no suitable candidate for this purpose. In this report, we develop a theoretical relationship from the PSD to the speckle image and describe a physics-enhanced autocorrelation-based estimator (PEACE) machine learning algorithm for speckle analysis to measure the PSD of a powder surface. This method solves both the forward and inverse problems together and enjoys increased interpretability, since the machine learning approximator is regularized by the physical law.
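
The autocorrelation at the heart of the estimator can be illustrated with a few lines of NumPy (via the Wiener-Khinchin theorem); the learned mapping from autocorrelation shape to particle size distribution, which is the paper's contribution, is not reproduced here.

```python
# Minimal sketch of the autocorrelation step: normalized intensity
# autocorrelation of a speckle image computed with FFTs (Wiener-Khinchin).
import numpy as np

def speckle_autocorrelation(img):
    img = img - img.mean()
    power = np.abs(np.fft.fft2(img)) ** 2        # power spectrum
    acorr = np.fft.fftshift(np.fft.ifft2(power).real)
    return acorr / acorr.max()
```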

8.
Am J Ophthalmol ; 250: 38-48, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36646242

ABSTRACT

PURPOSE: To compare the performance of 2 relatively recent geometric deep learning techniques in diagnosing glaucoma from a single optical coherence tomographic (OCT) scan of the optic nerve head (ONH); and to identify the 3-dimensional (3D) structural features of the ONH that are critical for the diagnosis of glaucoma. DESIGN: Comparison and evaluation of deep learning diagnostic algorithms. METHODS: In this study, we included a total of 2247 nonglaucoma and 2259 glaucoma scans from 1725 participants. All participants had their ONHs imaged in 3D with Spectralis OCT. All OCT scans were automatically segmented using deep learning to identify major neural and connective tissues. Each ONH was then represented as a 3D point cloud. We used PointNet and a dynamic graph convolutional neural network (DGCNN) to diagnose glaucoma from such 3D ONH point clouds and to identify the critical 3D structural features of the ONH for glaucoma diagnosis. RESULTS: Both the DGCNN (area under the curve [AUC]: 0.97±0.01) and PointNet (AUC: 0.95±0.02) were able to accurately detect glaucoma from 3D ONH point clouds. The critical points (ie, critical structural features of the ONH) formed an hourglass pattern, with most of them located within the neuroretinal rim in the inferior and superior quadrants of the ONH. CONCLUSIONS: The diagnostic accuracy of both geometric deep learning approaches was excellent. Moreover, we were able to identify the critical 3D structural features of the ONH for glaucoma diagnosis, which greatly improved the transparency and interpretability of our method. Consequently, our approach may have strong potential to be used in clinical applications for the diagnosis and prognosis of a wide range of ophthalmic disorders.
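
A minimal PointNet-style classifier, for orientation only: a shared per-point MLP followed by an order-invariant max-pool and a linear head. Layer widths, the two-class output, and the `TinyPointNet` name are illustrative assumptions, not the study's exact network.

```python
# Hedged PointNet-style sketch (not the authors' implementation).
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(               # shared per-point MLP
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_classes)

    def forward(self, pts):                     # pts: (B, 3, N) ONH point cloud
        feats = self.mlp(pts)                   # (B, 256, N)
        global_feat = feats.max(dim=2).values   # order-invariant pooling
        return self.head(global_feat)

logits = TinyPointNet()(torch.randn(4, 3, 1024))
```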


Subject(s)
Deep Learning, Glaucoma, Optic Disk, Humans, Optic Disk/diagnostic imaging, Glaucoma/diagnosis, Neural Networks, Computer, Optical Coherence Tomography/methods
9.
Phys Rev E ; 106(4-2): 045301, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36397470

ABSTRACT

Under conditions of strong scattering, a dilemma often arises regarding the best numerical method to use. The main competitors are the Born series, the beam propagation method, and direct solution of the Lippmann-Schwinger equation. However, analytical relationships between the three methods have not yet, to our knowledge, been explicitly stated. Here, we bridge this gap in the literature. In addition to providing insight into which aspects of optical scattering each method captures best numerically, our approach allows us to derive the approximate error bounds to be expected under various scattering conditions.
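
As a concrete reference point for one of the three competitors, here is a hedged split-step beam propagation sketch in one transverse dimension: each slice applies a thin phase screen for the index perturbation, followed by a homogeneous-medium diffraction step; grid spacing, wavelength, and background index are illustrative.

```python
# Hedged sketch of a scalar split-step beam propagation method (BPM).
import numpy as np

def bpm_2d(dn, wavelength=0.5e-6, dx=0.1e-6, dz=0.1e-6, n0=1.33):
    """Propagate a unit plane wave through dn (shape: nz x nx); return the exit field."""
    nz, nx = dn.shape
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz = np.sqrt((n0 * k0) ** 2 - kx ** 2 + 0j)          # homogeneous-medium propagator
    field = np.ones(nx, dtype=complex)
    for slice_dn in dn:
        field = field * np.exp(1j * k0 * slice_dn * dz)                 # thin phase screen
        field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))   # diffraction step
    return field
```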

10.
Opt Express ; 30(13): 23238-23259, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-36225009

ABSTRACT

X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms, which require a prior distribution that is known a priori, deep reconstruction networks can learn a prior distribution by sampling from the training data. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods that incorporate less physics during training, PGAN reduces the number of photons required, under limited projection angles, to achieve a given error rate. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
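
A hedged sketch of the two-ingredient generator objective implied here: an adversarial term from the learned prior plus a physics data-fidelity term tying the reconstruction to the measured projections through a forward operator. `generator`, `discriminator`, `A`, and the weighting are placeholders, not the paper's actual networks or geometry.

```python
# Hedged sketch: learned (adversarial) prior + physics data-fidelity term.
import torch
import torch.nn.functional as F

def pgan_generator_loss(generator, discriminator, A, ml_estimate, measurement,
                        physics_weight=1.0):
    recon = generator(ml_estimate)           # refine a maximum-likelihood estimate
    d_logit = discriminator(recon)
    adversarial = F.binary_cross_entropy_with_logits(
        d_logit, torch.ones_like(d_logit))
    physics = F.mse_loss(A(recon), measurement)   # consistency with measured projections
    return adversarial + physics_weight * physics
```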

11.
Opt Express ; 30(2): 2247-2264, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35209369

ABSTRACT

Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep learning based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets in photon starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
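
A minimal sketch of the "approximant" idea under stated assumptions: back-propagate the square root of the measured far-field intensity to the object plane with a generic angular-spectrum operator, and feed that to the network instead of the raw measurement. Wavelength, pixel pitch, and distance below are illustrative, and this generic operator is not the paper's exact propagation model.

```python
# Hedged sketch: apply a physical propagation operator (angular spectrum) to the
# measurement before the neural network, so the network need not learn diffraction.
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt((1 / wavelength) ** 2 - FX ** 2 - FY ** 2 + 0j)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def approximant(intensity, z, wavelength=0.6e-6, dx=1e-6):
    # Back-propagate the measured amplitude to the object plane (negative z).
    return angular_spectrum(np.sqrt(intensity), -z, wavelength, dx)
```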

12.
Opt Express ; 29(22): 35078-35118, 2021 Oct 25.
Article in English | MEDLINE | ID: mdl-34808951

ABSTRACT

This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents the vision of its author to describe the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.


Subject(s)
Holography/methods, Imaging, Three-Dimensional/methods, Algorithms, Animals, High-Throughput Screening Assays, Humans, Lab-On-A-Chip Devices, Microfluidic Analytical Techniques, Tomography, Virtual Reality
13.
Sci Adv ; 7(38): eabh1200, 2021 Sep 17.
Article in English | MEDLINE | ID: mdl-34533994

ABSTRACT

A limitation of projection microstereolithography among additive manufacturing methods is that it typically uses a single-aperture imaging configuration, which restricts its ability to produce microstructures in large volumes owing to the trade-off between image resolution and image field area. Here, we propose an integral lithography based on integral image reconstruction coupled with a planar lens array. The individual microlenses maintain a high numerical aperture and are used to create digital light patterns that can expand the printable area by the number of microlenses (10³ to 10⁴), thereby allowing for the scalable stereolithographic fabrication of 3D features that surpass the resolution-to-area scaling limit. We extend the capability of integral lithography for programmable printing of deterministic nonperiodic structures through the rotational overlapping or stacking of multiple exposures with controlled angular offsets. This printing platform provides new possibilities for producing periodic and aperiodic microarchitectures spanning four orders of magnitude from micrometers to centimeters.

15.
Biomed Opt Express ; 12(3): 1683-1706, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33796381

ABSTRACT

Monitoring of adherent cells in culture is routinely performed in biological and clinical laboratories, and it is crucial for large-scale manufacturing of cells needed in cell-based clinical trials and therapies. However, the lack of reliable and easily implementable label-free techniques makes this task laborious and prone to human subjectivity. We present a deep-learning-based processing pipeline that locates and characterizes mesenchymal stem cell nuclei from a few bright-field images captured at various levels of defocus under collimated illumination. Our approach builds upon phase-from-defocus methods in the optics literature and is easily applicable without the need for special microscopy hardware, for example, phase contrast objectives, or explicit phase reconstruction methods that rely on potentially bias-inducing priors. Experiments show that this label-free method can produce accurate cell counts as well as nuclei shape statistics without the need for invasive staining or ultraviolet radiation. We also provide detailed information on how the deep-learning pipeline was designed, built and validated, making it straightforward to adapt our methodology to different types of cells. Finally, we discuss the limitations of our technique and potential future avenues for exploration.
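
A hedged sketch of the downstream counting step only: threshold a predicted nucleus-probability map and count connected components with SciPy. The deep-learning segmentation that produces `prob_map`, and the threshold and area values, are placeholders.

```python
# Hedged sketch of the counting step: nucleus-probability map -> count + areas.
import numpy as np
from scipy import ndimage

def count_nuclei(prob_map, threshold=0.5, min_area_px=20):
    mask = prob_map > threshold
    labels, n = ndimage.label(mask)                           # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = areas >= min_area_px                               # drop tiny spurious blobs
    return int(keep.sum()), areas[keep]
```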

16.
Light Sci Appl ; 10(1): 74, 2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33828073

ABSTRACT

Limited-angle tomography of an interior volume is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al. Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in the angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit to regularize the reconstructions. We devised a Recurrent Neural Network (RNN) architecture with a novel Separable-Convolution Gated Recurrent Unit (SC-GRU) as the fundamental building block. Through a comprehensive comparison on several quantitative metrics, we show that the dynamic method is suitable for generic interior-volume reconstruction under a limited-angle scheme, and that it accurately reconstructs volume interiors under two conditions: weak scattering, when the Radon transform approximation is applicable and the forward operator is well defined; and strong scattering, which is nonlinear with respect to the 3D refractive index distribution and includes uncertainty in the forward operator.
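
To make the "angle index as discrete time" analogy concrete, the sketch below steps a plain convolutional GRU cell over one feature map per illumination angle. It is deliberately simplified: the paper's SC-GRU uses separable convolutions and a full reconstruction head, neither of which is reproduced here.

```python
# Hedged sketch: a plain convolutional GRU cell stepped over illumination angles.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update, reset
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

cell = ConvGRUCell(1, 16)
h = torch.zeros(1, 16, 64, 64)
for angle_view in torch.randn(8, 1, 1, 64, 64):   # one feature map per illumination angle
    h = cell(angle_view, h)
```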

17.
Opt Express ; 29(4): 5316-5326, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33726070

ABSTRACT

Scattering generally worsens the condition of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018), doi:10.1364/OPTICA.5.000803] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018), doi:10.1364/OPTICA.5.001181] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for the case of imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during training and to reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics and use multiple speckle measurements from different angles to form a sequence of images for training. Recurrent neural networks (RNNs) embedded with these dynamics retain the useful information and discard the redundancies, thus recovering quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media by exploiting the speckle correlation among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve other forms of spatiotemporal dynamics.

18.
Opt Lett ; 46(1): 130-133, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33362033

ABSTRACT

In mask-based lensless imaging, iterative reconstruction methods based on the geometric optics model produce artifacts and are computationally expensive. We present a prototype of a lensless camera that uses a deep neural network (DNN) to realize rapid reconstruction for Fresnel zone aperture (FZA) imaging. A deep back-projection network (DBPN) is connected behind a U-Net, providing an error feedback mechanism that realizes the self-correction of features to recover image detail. A diffraction model generates the training data under conditions of broadband incoherent imaging. In the reconstructed results, blur caused by diffraction is shown to have been ameliorated, while the computation is 2 orders of magnitude faster than traditional iterative image reconstruction algorithms. This strategy could drastically reduce the design and assembly costs of cameras, paving the way for the integration of portable sensors and systems.
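
For orientation, a Fresnel zone aperture mask of the kind used here can be generated in a few lines; the Gabor-zone-plate profile below, its size, pitch, and innermost-zone radius r1 are illustrative assumptions rather than the prototype's actual mask parameters.

```python
# Hedged sketch: generating a Fresnel zone aperture (FZA) transmittance pattern.
import numpy as np

def fza_mask(n=512, pitch=1e-5, r1=2e-4, binary=True):
    coords = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    t = 0.5 * (1 + np.cos(np.pi * (X ** 2 + Y ** 2) / r1 ** 2))  # Gabor zone plate profile
    return (t > 0.5).astype(float) if binary else t
```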

19.
Health Data Sci ; 2021: 9798302, 2021.
Article in English | MEDLINE | ID: mdl-36405358

ABSTRACT

In the wake of the rapid surge in COVID-19-infected cases seen in Southern and West-Central USA in the period of June-July 2020, there is an urgent need to develop robust, data-driven models to quantify the effect which early reopening had on the increase in the infected case count. In particular, it is imperative to address the question: How many infected cases could have been prevented, had the worst-affected states not reopened early? To address this question, we have developed a novel COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. The model decomposes the contribution of quarantine strength to the infection time series, allowing us to quantify the role of quarantine control and the associated reopening policies in the US states which showed a major surge in infections. We show that the upsurge in the infected cases seen in these states is strongly correlated with a drop in the quarantine/lockdown strength diagnosed by our model. Further, our results demonstrate that in the event of a stricter lockdown without early reopening, the number of active infected cases recorded on 14 July could have been reduced by more than 40% in all states considered, with the actual number of infections reduced being more than 100,000 for the states of Florida and Texas. As we continue our fight against COVID-19, our proposed model can be used as a valuable asset to simulate the effect of several reopening strategies on the evolution of the infected count, for any region under consideration.
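
A hedged sketch of the augmented-SIR backbone: the classical SIR equations plus a time-varying quarantine strength Q(t) that moves infected individuals into a quarantined compartment. In the paper Q(t) is parameterized by a neural network trained on case data; the logistic placeholder below merely illustrates the structure.

```python
# Hedged sketch of an SIR model augmented with a time-varying quarantine term Q(t).
import numpy as np
from scipy.integrate import solve_ivp

def quarantine_strength(t):              # placeholder for the learned Q(t)
    return 0.05 + 0.1 / (1 + np.exp(-(t - 30)))

def sir_q(t, y, beta=0.3, gamma=0.1, N=1e6):
    S, I, R, T = y                       # T: cumulative quarantined
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I - quarantine_strength(t) * I
    dR = gamma * I
    dT = quarantine_strength(t) * I
    return [dS, dI, dR, dT]

sol = solve_ivp(sir_q, (0, 120), [1e6 - 100, 100, 0, 0], dense_output=True)
```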

20.
Patterns (N Y) ; 1(9): 100145, 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33225319

ABSTRACT

We have developed a globally applicable diagnostic COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. Our model does not rely upon data from previous epidemics like SARS/MERS, and all parameters are optimized via machine learning algorithms applied to publicly available COVID-19 data. The model decomposes the contributions to the infection time series to analyze and compare the role of quarantine control policies used in highly affected regions of Europe, North America, South America, and Asia in controlling the spread of the virus. For all continents considered, our results show a generally strong correlation between the strengthening of quarantine controls as learnt by the model and the actions taken by the regions' respective governments. In addition, we have hosted our quarantine diagnosis results for the 70 most affected countries worldwide on a public platform.
