Results 1 - 20 of 123
1.
Article in English | MEDLINE | ID: mdl-38873338

ABSTRACT

Chest X-rays (CXRs) play a pivotal role in cost-effective clinical assessment of various heart- and lung-related conditions. The urgency of COVID-19 diagnosis prompted their use in identifying conditions such as lung opacity, pneumonia, and acute respiratory distress syndrome in pediatric patients. We propose an AI-driven solution for binary COVID-19 versus non-COVID-19 classification in pediatric CXRs. We present a Federated Self-Supervised Learning (FSSL) framework to enhance Vision Transformer (ViT) performance for COVID-19 detection in pediatric CXRs. ViT's strength in vision-related binary classification tasks, combined with self-supervised pre-training on adult CXR data, forms the basis of the FSSL approach. We implement our strategy on the Rhino Health Federated Computing Platform (FCP), which ensures privacy and scalability for distributed data. The chest X-ray analysis using federated SSL (CAFES) model uses the FSSL-pre-trained ViT weights and demonstrated gains in accurately detecting COVID-19 compared with a fully supervised model. Our FSSL-pre-trained ViT showed an area under the precision-recall curve (AUPR) of 0.952, which is 0.231 points higher than the fully supervised model for COVID-19 diagnosis using pediatric data. Our contributions include leveraging vision transformers for effective COVID-19 diagnosis from pediatric CXRs, employing distributed federated learning-based self-supervised pre-training on adult data, and improving pediatric COVID-19 diagnosis performance. This privacy-conscious approach aligns with HIPAA guidelines, paving the way for broader medical imaging applications.
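The abstract's headline metric is the area under the precision-recall curve (AUPR). As a minimal, hypothetical sketch (the labels and scores below are toy values, not the study's data), AUPR can be computed with scikit-learn's average-precision estimator:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy example: ground-truth labels and classifier scores (illustrative only).
# This toy classifier ranks every positive above every negative, so AUPR is 1.0.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.10, 0.40, 0.85, 0.90, 0.70, 0.20, 0.95, 0.30])

aupr = average_precision_score(y_true, y_score)  # area under the PR curve
print(f"AUPR = {aupr:.3f}")  # → AUPR = 1.000
```

`average_precision_score` uses a step-wise summation of the PR curve, which avoids the optimism that trapezoidal interpolation can introduce on precision-recall data.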

2.
Neuroradiology ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38871879

ABSTRACT

PURPOSE: The diagnosis of chronic increased intracranial pressure (IIP) is often based on subjective evaluation or clinical metrics with low predictive value. We aimed to quantify cranial bone changes associated with pediatric IIP using CT images and to identify patients at risk. METHODS: We retrospectively quantified local cranial bone thickness and mineral density from the CT images of children with chronic IIP and compared them statistically to those of normative children without IIP, adjusting for age, sex, and image resolution. Subsequently, we developed a classifier to identify IIP based on these measurements. Finally, we applied our methods to explore signs of IIP in patients with non-syndromic sagittal craniosynostosis (NSSC). RESULTS: We quantified a significant decrease in bone density in 48 patients with IIP compared to 1,018 normative subjects (P < .001), but no differences in bone thickness (P = .56 and P = .89 for age groups 0-2 and 2-10 years, respectively). Our classifier demonstrated 83.33% (95% CI: 69.24%, 92.03%) sensitivity and 87.13% (95% CI: 84.88%, 89.10%) specificity in identifying patients with IIP. Compared to normative subjects, 242 patients with NSSC presented significantly lower cranial bone density (P < .001), but no differences were found compared to patients with IIP (P = .57). Of the patients with NSSC, 36.78% (95% CI: 30.76%, 43.22%) presented signs of IIP. CONCLUSION: Cranial bone changes associated with pediatric IIP can be quantified from CT images to support earlier diagnosis of IIP and to study the presence of IIP secondary to cranial pathology such as non-syndromic sagittal craniosynostosis.
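The reported sensitivity of 83.33% corresponds to a proportion such as 40 of 48 IIP patients, with a binomial confidence interval around it. As a hedged sketch (the abstract does not state which interval method was used; a Wilson score interval is shown here and gives slightly different bounds than the reported ones, which likely come from an exact method):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

sens = 40 / 48                 # 83.33% sensitivity among 48 IIP patients
lo, hi = wilson_ci(40, 48)
print(f"sensitivity {sens:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
```

The Wilson interval is preferred over the naive normal approximation for small samples because it never produces bounds outside [0, 1].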

3.
Eur J Radiol ; 174: 111397, 2024 May.
Article in English | MEDLINE | ID: mdl-38452733

ABSTRACT

PURPOSE: To investigate quantitative changes in MRI signal intensity (SI) and lesion volume that indicate treatment response and correlate these changes with clinical outcomes after percutaneous sclerotherapy (PS) of extremity venous malformations (VMs). METHODS: VMs were segmented manually on pre- and post-treatment T2-weighted MRI using 3D Slicer to assess changes in lesion volume and SI. Clinical outcomes were scored on a 7-point Likert scale according to patient perception of symptom improvement; treatment response (success or failure) was determined accordingly. RESULTS: Eighty-one patients with VMs underwent 125 PS sessions. Treatment success occurred in 77 patients (95 %). Mean (±SD) changes were -7.9 ± 24 cm3 in lesion volume and -123 ± 162 in SI (both, P <.001). Mean reduction in lesion volume was greater in the success group (-9.4 ± 24 cm3) than in the failure group (21 ± 20 cm3) (P =.006). Overall, lesion volume correlated with treatment response (ρ = -0.3, P =.004). On subgroup analysis, volume change correlated with clinical outcomes in children (ρ = -0.3, P =.03), in sodium tetradecyl sulfate-treated lesions (ρ = -0.5, P =.02), and in foot lesions (ρ = -0.6, P =.04). SI change correlated with clinical outcomes in VMs treated in 1 PS session (ρ = -0.3, P =.01) and in bleomycin-treated lesions (ρ = -0.4, P =.04). CONCLUSIONS: Change in lesion volume is a reliable indicator of treatment response. Lesion volume and SI correlate with clinical outcomes in specific subgroups.
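The subgroup correlations (ρ values) above are Spearman rank correlations between an imaging change and the 7-point Likert outcome. A minimal sketch with invented toy data (not the study's measurements):

```python
from scipy.stats import spearmanr

# Toy data: larger volume reduction (more negative change) pairs with a
# better Likert outcome, giving a perfectly monotone inverse relation.
volume_change_cm3 = [-30.0, -12.5, -8.0, -2.0, 15.0]
likert_outcome = [7, 6, 5, 4, 2]

rho, p_value = spearmanr(volume_change_cm3, likert_outcome)
print(f"rho = {rho:.2f}")  # → rho = -1.00
```

Spearman's ρ is used rather than Pearson's r because the Likert scale is ordinal, so only the rank ordering of outcomes is meaningful.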


Subject(s)
Sclerotherapy, Vascular Malformations, Child, Humans, Sclerosing Solutions/therapeutic use, Retrospective Studies, Vascular Malformations/diagnostic imaging, Vascular Malformations/therapy, Veins, Treatment Outcome
4.
J Am Heart Assoc ; 13(2): e031257, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38226515

ABSTRACT

BACKGROUND: Identification of children with latent rheumatic heart disease (RHD) by echocardiography, before onset of symptoms, provides an opportunity to initiate secondary prophylaxis and prevent disease progression. Few artificial intelligence studies have been published assessing the potential of machine learning to detect and analyze mitral regurgitation or to detect the presence of RHD on standard portable echocardiograms. METHODS AND RESULTS: We used 511 echocardiograms in children, focusing on color Doppler images of the mitral valve. Echocardiograms were independently reviewed by an expert adjudication panel. Among 511 cases, 229 were normal, and 282 had RHD. Our automated method included harmonization of echocardiograms to localize the left atrium during systole using convolutional neural networks and RHD detection using mitral regurgitation jet analysis and deep learning models with an attention mechanism. The method identified the correct view with an average accuracy of 0.99 and the correct systolic frame with an average accuracy of 0.94 (apical) and 0.93 (parasternal long axis). It localized the left atrium with an average Dice coefficient of 0.88 (apical) and 0.90 (parasternal long axis). Maximum mitral regurgitation jet measurements were similar to expert manual measurements (P=0.83), and a 9-feature mitral regurgitation analysis showed an area under the receiver operating characteristic curve of 0.93, precision of 0.83, recall of 0.92, and F1 score of 0.87. Our deep learning model showed an area under the receiver operating characteristic curve of 0.84, precision of 0.78, recall of 0.98, and F1 score of 0.87. CONCLUSIONS: Artificial intelligence has the potential to detect RHD as accurately as expert cardiologists and to improve with more data. These innovative approaches hold promise for scaling echocardiography screening for RHD.
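Left-atrium localization above is scored with the Dice similarity coefficient. A self-contained sketch of the metric on binary masks (toy arrays, not echocardiogram data):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 2x4 masks: prediction and ground truth overlap in 2 of 4 pixels each.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0]])
print(dice_coefficient(pred, truth))  # → 0.5 (2 overlapping pixels, 4 + 4 total)
```

Dice weights the overlap twice relative to the mask sizes, so it ranges from 0 (disjoint) to 1 (identical masks).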


Subject(s)
Mitral Valve Insufficiency, Rheumatic Heart Disease, Child, Humans, Mitral Valve Insufficiency/diagnostic imaging, Rheumatic Heart Disease/diagnostic imaging, Artificial Intelligence, Sensitivity and Specificity, Echocardiography/methods
5.
medRxiv ; 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37961086

ABSTRACT

Background: Diffuse midline gliomas (DMG) are aggressive pediatric brain tumors that are diagnosed and monitored through MRI. We developed an automatic pipeline to segment subregions of DMG and select radiomic features that predict patient overall survival (OS). Methods: We acquired diagnostic and post-radiation therapy (RT) multisequence MRI (T1, T1ce, T2, T2 FLAIR) and manual segmentations from two centers of 53 (internal cohort) and 16 (external cohort) DMG patients. We pretrained a deep learning model on a public adult brain tumor dataset, and finetuned it to automatically segment tumor core (TC) and whole tumor (WT) volumes. PyRadiomics and sequential feature selection were used for feature extraction and selection based on the segmented volumes. Two machine learning models were trained on our internal cohort to predict patient 1-year survival from diagnosis. One model used only diagnostic tumor features and the other used both diagnostic and post-RT features. Results: For segmentation, Dice score (mean [median]±SD) was 0.91 (0.94)±0.12 and 0.74 (0.83)±0.32 for TC, and 0.88 (0.91)±0.07 and 0.86 (0.89)±0.06 for WT for internal and external cohorts, respectively. For OS prediction, accuracy was 77% and 81% at time of diagnosis, and 85% and 78% post-RT for internal and external cohorts, respectively. Homogeneous WT intensity in baseline T2 FLAIR and larger post-RT TC/WT volume ratio indicate shorter OS. Conclusions: Machine learning analysis of MRI radiomics has potential to accurately and non-invasively predict which pediatric patients with DMG will survive less than one year from the time of diagnosis to provide patient stratification and guide therapy.
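The abstract pairs PyRadiomics feature extraction with sequential feature selection before survival modeling. The selection step can be sketched with scikit-learn on synthetic features (the feature matrix and survival labels below are simulated, not the study's radiomics):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                  # 60 patients × 10 synthetic "radiomic" features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy 1-year survival label driven by features 0 and 3

# Forward sequential selection: greedily add the feature that most improves
# cross-validated accuracy of the wrapped classifier.
sfs = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=2, direction="forward", cv=5,
)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
print("selected feature indices:", selected)
```

Wrapper-style selection like this is slower than univariate filtering but accounts for feature interactions through the downstream classifier.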

6.
Article in English | MEDLINE | ID: mdl-38082727

ABSTRACT

Accurate classification of upper limb movements using electroencephalogram (EEG) signals has gained significant importance in recent years due to the prevalence of brain-computer interfaces. The upper limbs are crucial because their different skeletal segments combine to produce the range of motions that help us in routine daily tasks. Decoding EEG-based upper limb movements can be of great help to people with spinal cord injury (SCI) or other neuromuscular diseases such as amyotrophic lateral sclerosis (ALS), primary lateral sclerosis, and periodic paralysis. These conditions can manifest in a loss of sensory and motor function, which can make a person reliant on others for care in day-to-day activities. We can detect and classify upper limb movement activities, whether executed or imagined, using an EEG-based brain-computer interface (BCI). Toward this goal, we focus on decoding movement execution (ME) of the upper limb in this study. For this purpose, we utilize a publicly available EEG dataset that contains recordings from fifteen subjects acquired using a 61-channel EEG device. We propose a method to classify four ME classes for different subjects using spectrograms of the EEG data through pre-trained deep learning (DL) models. Our proposed method of using EEG spectrograms for ME classification has shown significant results: the highest average classification accuracy (for four ME classes) obtained is 87.36%, with one subject achieving the best classification accuracy of 97.03%. Clinical relevance: This research shows that movement execution of the upper limbs is classified with significant accuracy by employing spectrograms of the EEG signals and a pre-trained deep learning model fine-tuned for the downstream task.
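The pipeline above converts EEG channels to spectrograms before feeding a pre-trained model. A minimal sketch of that spectrogram step with SciPy on a synthetic single channel (the 250 Hz sampling rate and 10 Hz rhythm are assumptions for illustration, not properties of the dataset used in the paper):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                                   # assumed sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)                # 4 s of synthetic single-channel "EEG"
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # 10 Hz rhythm + noise

# Time-frequency image: 128-sample windows with 50% overlap.
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=64)
dominant = freqs[Sxx.mean(axis=1).argmax()]  # should land near the 10 Hz component
print(f"dominant frequency ≈ {dominant:.1f} Hz")
```

Each channel's `Sxx` image can then be treated like a grayscale picture, which is what makes image-pretrained networks applicable to EEG.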


Subject(s)
Brain-Computer Interfaces, Humans, Upper Extremity, Electroencephalography/methods, Movement, Motion (Physics)
7.
Article in English | MEDLINE | ID: mdl-38083430

ABSTRACT

Children with optic pathway gliomas (OPGs), a low-grade brain tumor associated with neurofibromatosis type 1 (NF1-OPG), are at risk for permanent vision loss. While OPG size has been associated with vision loss, it is unclear how changes in the size, shape, and imaging features of OPGs are associated with the likelihood of vision loss. This paper presents a fully automatic framework for accurate prediction of visual acuity loss using multi-sequence magnetic resonance images (MRIs). Our proposed framework includes a transformer-based segmentation network using transfer learning, statistical analysis of radiomic features, and a machine learning method for predicting vision loss. Our segmentation network was evaluated on multi-sequence MRIs acquired from 75 pediatric subjects with NF1-OPG and obtained an average Dice similarity coefficient of 0.791. The ability to predict vision loss was evaluated on a subset of 25 subjects with ground truth using cross-validation and achieved an average accuracy of 0.8. Multiple MRI features appear to be good indicators of vision loss, potentially permitting early treatment decisions. Clinical relevance: Accurately determining which children with NF1-OPGs are at risk, and hence require preventive treatment before vision loss, remains challenging; toward this end, we present a fully automatic deep learning-based framework for vision outcome prediction, potentially permitting early treatment decisions.


Subject(s)
Neurofibromatosis 1, Optic Nerve Glioma, Humans, Child, Optic Nerve Glioma/complications, Optic Nerve Glioma/diagnostic imaging, Optic Nerve Glioma/pathology, Neurofibromatosis 1/complications, Neurofibromatosis 1/diagnostic imaging, Neurofibromatosis 1/pathology, Magnetic Resonance Imaging/methods, Vision Disorders, Visual Acuity
8.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. Advancements in clinical decision support in pediatric neuro-oncology utilizing the wealth of radiology imaging data collected through standard care, however, have significantly lagged other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, as well as tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, our aim is to accelerate discovery and translational AI models with real-world data, to ultimately empower precision medicine for children.

9.
Sci Rep ; 13(1): 20557, 2023 11 23.
Article in English | MEDLINE | ID: mdl-37996454

ABSTRACT

We present the first data-driven model that explains cranial sutural growth in the pediatric population. We segmented the cranial bones of the neurocranium from the cross-sectional CT images of 2068 normative subjects (age 0-10 years), and we used a 2D manifold-based cranial representation to establish local anatomical correspondences between subjects guided by the location of the cranial sutures. We designed a diffeomorphic spatiotemporal model of cranial bone development as a function of local sutural growth rates, and we inferred its parameters statistically from our cross-sectional dataset. We used the constructed model to predict growth for 51 independent normative patients who had longitudinal images. Moreover, we used our model to simulate the phenotypes of single-suture craniosynostosis, which we compared to the observations from 212 patients. We also evaluated the accuracy of predicting personalized cranial growth for 10 patients with craniosynostosis who had pre-surgical longitudinal images. Unlike existing statistical and simulation methods, our model was inferred from real image observations, explains cranial bone expansion and displacement as a consequence of sutural growth, and can simulate craniosynostosis. This pediatric cranial suture growth model constitutes a necessary tool to study abnormal development in the presence of cranial suture pathology.


Subject(s)
Cranial Sutures, Craniosynostoses, Humans, Child, Infant, Newborn, Infant, Child, Preschool, Craniosynostoses/pathology, Skull/pathology, Palliative Care
10.
Health Informatics J ; 29(4): 14604582231207744, 2023.
Article in English | MEDLINE | ID: mdl-37864543

ABSTRACT

Cross-institution collaborations are constrained by data-sharing challenges. These challenges hamper innovation, particularly in artificial intelligence, where models require diverse data to ensure strong performance. Federated learning (FL) addresses these data-sharing challenges. In typical collaborations, data are sent to a central repository where models are trained. With FL, models are sent to participating sites, trained locally, and the model weights are aggregated to create a master model with improved performance. At the 2021 Radiological Society of North America (RSNA) conference, a panel was conducted titled "Accelerating AI: How Federated Learning Can Protect Privacy, Facilitate Collaboration and Improve Outcomes." Two groups shared insights: researchers from the EXAM study (EMR CXR AI Model) and members of the National Cancer Institute's Early Detection Research Network (EDRN) pancreatic cancer working group. EXAM brought together 20 institutions to create a model to predict the oxygen requirements of patients seen in the emergency department with COVID-19 symptoms. The EDRN collaboration focuses on improving outcomes for pancreatic cancer patients through earlier detection. This paper describes major insights from the panel, including direct quotes. The panelists described the impetus for FL, its long-term potential, the challenges faced, and the immediate path forward.
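The weight-aggregation step described above is typically federated averaging (FedAvg): each site's parameters are averaged, weighted by its sample count. A minimal, framework-agnostic sketch (the layer shapes and site sizes are invented for illustration):

```python
import numpy as np

def fed_avg(site_params, site_sizes):
    """Sample-size-weighted average of per-site parameter lists (FedAvg-style)."""
    total = sum(site_sizes)
    n_layers = len(site_params[0])
    return [
        sum(params[k] * (n / total) for params, n in zip(site_params, site_sizes))
        for k in range(n_layers)
    ]

# Two sites, one toy weight vector each; site B has 3x the data of site A,
# so its parameters contribute 3x the weight to the merged model.
site_a = [np.array([1.0, 2.0])]
site_b = [np.array([3.0, 4.0])]
merged = fed_avg([site_a, site_b], site_sizes=[100, 300])
print(merged[0])  # → [2.5 3.5]
```

Only parameters cross institutional boundaries here; the raw patient data never leaves its site, which is the privacy property the panel emphasizes.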


Subject(s)
Artificial Intelligence, Pancreatic Neoplasms, Humans, Privacy, Learning
12.
ArXiv ; 2023 May 30.
Article in English | MEDLINE | ID: mdl-37396608

ABSTRACT

Gliomas are the most common type of primary brain tumor. Although gliomas are relatively rare, they are among the deadliest types of cancer, with typical survival of less than 2 years after diagnosis. Gliomas are challenging to diagnose, hard to treat, and inherently resistant to conventional therapy. Years of extensive research to improve the diagnosis and treatment of gliomas have decreased mortality rates across the Global North, while chances of survival among individuals in low- and middle-income countries (LMICs) remain unchanged and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term survival with glioma is associated with the identification of appropriate pathological features on brain MRI and confirmation by histopathology. Since 2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated state-of-the-art machine learning methods to detect, characterize, and classify gliomas. However, it is unclear whether these state-of-the-art methods can be widely implemented in SSA, given the extensive use of lower-quality MRI technology, which produces poor image contrast and resolution, and, more importantly, the propensity for late presentation of disease at advanced stages as well as the unique characteristics of gliomas in SSA (i.e., suspected higher rates of gliomatosis cerebri). Thus, the BraTS-Africa Challenge provides a unique opportunity to include brain MRI glioma cases from SSA in global efforts through the BraTS Challenge to develop and evaluate computer-aided diagnostic (CAD) methods for the detection and characterization of glioma in resource-limited settings, where CAD tools have the greatest potential to transform healthcare.

13.
Comput Methods Programs Biomed ; 240: 107689, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37393741

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate and repeatable detection of craniofacial landmarks is crucial for automated quantitative evaluation of head development anomalies. Since traditional imaging modalities are discouraged in pediatric patients, 3D photogrammetry has emerged as a popular and safe imaging alternative to evaluate craniofacial anomalies. However, traditional image analysis methods are not designed to operate on unstructured image data representations such as 3D photogrammetry. METHODS: We present a fully automated pipeline to identify craniofacial landmarks in real time, and we use it to assess the head shape of patients with craniosynostosis using 3D photogrammetry. To detect craniofacial landmarks, we propose a novel geometric convolutional neural network based on Chebyshev polynomials to exploit the point connectivity information in 3D photogrammetry and quantify multi-resolution spatial features. We propose a landmark-specific trainable scheme that aggregates the multi-resolution geometric and texture features quantified at every vertex of a 3D photogram. Then, we embed a new probabilistic distance regressor module that leverages the integrated features at every point to predict landmark locations without assuming correspondences with specific vertices in the original 3D photogram. Finally, we use the detected landmarks to segment the calvaria from the 3D photograms of children with craniosynostosis, and we derive a new statistical index of head shape anomaly to quantify head shape improvements after surgical treatment. RESULTS: We achieved an average error of 2.74 ± 2.70 mm identifying Bookstein Type I craniofacial landmarks, which is a significant improvement compared to other state-of-the-art methods. Our experiments also demonstrated a high robustness to spatial resolution variability in the 3D photograms. Finally, our head shape anomaly index quantified a significant reduction of head shape anomalies as a consequence of surgical treatment. CONCLUSION: Our fully automated framework provides real-time craniofacial landmark detection from 3D photogrammetry with state-of-the-art accuracy. In addition, our new head shape anomaly index can quantify significant head phenotype changes and can be used to quantitatively evaluate surgical treatment in patients with craniosynostosis.
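The Chebyshev-polynomial filters named above rest on the recurrence T_k(x) = 2xT_{k-1}(x) − T_{k-2}(x), applied to a graph Laplacian rescaled so its spectrum lies in [-1, 1]. A minimal sketch on a toy 3-node path graph (not the paper's photogram meshes or network architecture):

```python
import numpy as np

def chebyshev_basis(L_scaled, x, K):
    """Stack T_0(L)x .. T_{K-1}(L)x via the Chebyshev recurrence,
    for a Laplacian rescaled so its eigenvalues lie in [-1, 1]."""
    T = [x, L_scaled @ x]
    for _ in range(2, K):
        T.append(2 * (L_scaled @ T[-1]) - T[-2])  # T_k = 2 L T_{k-1} - T_{k-2}
    return np.stack(T[:K])

# Toy 3-node path graph: combinatorial Laplacian, rescaled to [-1, 1].
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
lam_max = np.linalg.eigvalsh(L).max()      # = 3 for this graph
L_scaled = (2.0 / lam_max) * L - np.eye(3)

x = np.array([1.0, 0.0, 0.0])              # impulse signal on node 0
feats = chebyshev_basis(L_scaled, x, K=3)  # K filter orders → K feature maps
```

Each order k mixes information from nodes up to k hops away, which is how these filters capture multi-resolution spatial features without an explicit mesh hierarchy.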


Subject(s)
Craniosynostoses, Imaging, Three-Dimensional, Humans, Imaging, Three-Dimensional/methods, Skull, Craniosynostoses/diagnostic imaging, Craniosynostoses/surgery, Photogrammetry/methods, Treatment Outcome
14.
IEEE Trans Med Imaging ; 42(10): 3117-3126, 2023 10.
Article in English | MEDLINE | ID: mdl-37216247

ABSTRACT

Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have been recently adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they may be hard to train and provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios such as multiple cranial bone labeling in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses it to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.


Subject(s)
Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Humans, Child, Infant, Newborn, Infant, Child, Preschool, Image Processing, Computer-Assisted/methods, Reproducibility of Results, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Algorithms
15.
Med Image Anal ; 82: 102605, 2022 11.
Article in English | MEDLINE | ID: mdl-36156419

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.


Subject(s)
COVID-19, Pandemics, Humans, COVID-19/diagnostic imaging, Artificial Intelligence, Tomography, X-Ray Computed/methods, Lung/diagnostic imaging
16.
Plast Reconstr Surg Glob Open ; 10(8): e4457, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35983543

ABSTRACT

Available normative references of cranial bone development and suture fusion are incomplete or based on simplified assumptions due to the lack of large datasets. We present a fully data-driven normative model that represents the age- and sex-specific variability of bone shape, thickness, and density between birth and 10 years of age at every location of the calvaria. Methods: The model was built using a cross-sectional, multi-institutional pediatric computed tomography image dataset with 2068 subjects without cranial pathology (age 0-10 years). We combined principal component analysis and temporal regression to build a statistical model of cranial bone development at every location of the calvaria. We studied the influence of sex on cranial bone growth, and our bone density model allowed us, for the first time, to quantify suture fusion as a continuous temporal process. We evaluated the predictive accuracy of our model using an independent longitudinal image dataset of 51 subjects. Results: Our model achieved temporal predictive errors of 2.98 ± 0.69 mm, 0.27 ± 0.29 mm, and 76.72 ± 91.50 HU in cranial bone shape, thickness, and mineral density changes, respectively. Significant sex differences were found in intracranial volume and bone surface areas (P < 0.01). No significant differences were found in cephalic index, bone thickness, mineral density, or suture fusion. Conclusions: We presented the first pediatric age- and sex-specific statistical reference for local cranial bone shape, thickness, and mineral density changes. We showed its predictive accuracy using an independent longitudinal dataset, studied developmental differences associated with sex, and quantified suture fusion as a continuous process.
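The normative model above combines principal component analysis with temporal regression. A hedged, synthetic sketch of that two-stage idea (the "bone shape" features and saturating growth law below are simulated and far simpler than the paper's calvarial model):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
age = rng.uniform(0, 10, 200)        # ages in years (synthetic cohort)
growth = np.log1p(age)               # toy saturating growth law (an assumption)
shape = np.outer(growth, [1.0, 0.5, 0.2]) + rng.normal(scale=0.05, size=(200, 3))

# Stage 1: PCA extracts the dominant mode of shape variation.
pca = PCA(n_components=1).fit(shape)
score = pca.transform(shape)[:, 0]

# Stage 2: temporal regression models how that mode evolves with age.
X = PolynomialFeatures(degree=2).fit_transform(age[:, None])
reg = LinearRegression().fit(X, score)
r2 = reg.score(X, score)
print(f"R^2 of age model: {r2:.2f}")
```

Predicting a new subject's expected shape then amounts to evaluating the age regression and mapping the score back through the PCA basis.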

17.
Comput Methods Programs Biomed ; 221: 106893, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35660764

ABSTRACT

BACKGROUND AND OBJECTIVE: The fetal face is an essential source of information in the assessment of congenital malformations and neurological anomalies. Disturbance in early stages of development can lead to a wide range of effects, from subtle changes in facial and neurological features to the characteristic facial shapes observed in craniofacial syndromes. Three-dimensional ultrasound (3D US) can provide more detailed information about the facial morphology of the fetus than conventional 2D US, but its use for prenatal diagnosis is challenging due to imaging noise, fetal movements, limited field of view, low soft-tissue contrast, and occlusions. METHODS: In this paper, we propose the use of a novel statistical morphable model of newborn faces, the BabyFM, for fetal face reconstruction from 3D US images. We test the feasibility of using newborn statistics to accurately reconstruct fetal faces by fitting the regularized morphable model to the noisy 3D US images. RESULTS: The results indicate that the reconstructions are quite accurate in the central face and less reliable in the lateral regions (mean point-to-surface error of 2.35 mm vs 4.86 mm). The algorithm is able to reconstruct the whole facial morphology of babies from US scans while handling adverse conditions (e.g., missing parts, noisy data). CONCLUSIONS: The proposed algorithm has the potential to aid in-utero diagnosis of conditions that involve facial dysmorphology.


Subject(s)
Face, Ultrasonography, Prenatal, Face/diagnostic imaging, Female, Fetus/diagnostic imaging, Humans, Imaging, Three-Dimensional/methods, Infant, Newborn, Pregnancy, Ultrasonography, Ultrasonography, Prenatal/methods
18.
Plast Reconstr Surg Glob Open ; 10(6): e4383, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35720200

ABSTRACT

Background: The mendosal suture joins the interparietal and inferior portions of the occipital bone. Persistent patency of this suture can result in bathrocephaly, an abnormal occipital projection. This study aims to determine the normal timing of mendosal suture fusion and the cranial shape of patients with persistent suture patency. Methods: A retrospective review of head CT scans in patients aged 0-18 months who presented to the emergency department between 2010 and 2020 was completed. Presence and patency of the mendosal suture were assessed. Cranial shape analysis was conducted in the cases that presented with 100% suture patency and in age-matched controls. An exponential regression model was used to forecast the timing of suture fusion. Results: In total, 378 patients met the inclusion criteria. Median age at imaging was 6.8 months (IQR 2.9, 11.6). Initiation of mendosal suture fusion was observed as early as 4 days of age and was complete in all instances except one by age 18 months. Most patients had either complete or partial suture fusion (66.7% versus 30.7%, respectively), and 2.6% of patients had 100% suture patency. Cranial shape analysis demonstrated increased occipital projection in patients with 100% suture patency compared with their controls. The exponential regression model suggested that mendosal suture closure begins prenatally and typically progresses to full closure at the age of 6 months. Conclusions: The prevalence of a patent mendosal suture was 2.6% overall. Mendosal suture fusion initiates in-utero and completes ex-utero within the first 18 months of life. Delayed closure results in greater occipital projection.
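The forecasting step above uses an exponential regression of fusion versus age. A hedged sketch with simulated observations (the rate constant, data points, and functional form are invented for illustration; the abstract does not give the study's actual model parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

def fusion_fraction(age_months, k):
    """Exponential approach to complete suture closure."""
    return 1.0 - np.exp(-k * age_months)

# Simulated fusion fractions with an assumed true rate of 0.7/month plus noise.
ages = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 9.0, 12.0])
rng = np.random.default_rng(2)
observed = fusion_fraction(ages, 0.7) + 0.02 * rng.normal(size=ages.size)

# Fit the rate constant, then invert the model for the ~99% closure age.
(k_hat,), _ = curve_fit(fusion_fraction, ages, observed, p0=[0.5])
t99 = np.log(100) / k_hat
print(f"k = {k_hat:.2f}/month, ~99% closure at {t99:.1f} months")
```

Inverting the fitted curve, rather than reading the last observed data point, is what lets this kind of model place the onset of closure before birth (negative ages) even when only postnatal scans are available.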
