Results 1 - 20 of 765
1.
BMC Oral Health ; 24(1): 1091, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39277722

ABSTRACT

BACKGROUND: Accurate assessment of basal bone width is essential for distinguishing individuals with normal occlusion from patients with maxillary transverse deficiency who may require maxillary expansion. Herein, we evaluated the effectiveness of a deep learning (DL) model in measuring landmarks of basal bone width and assessed the consistency of automated measurements compared to manual measurements. METHODS: Based on the U-Net algorithm, a coarse-to-fine DL model was developed and trained using 80 cone-beam computed tomography (CBCT) images. The model's prediction capabilities were validated on 10 CBCT scans and tested on an additional 34. To evaluate the performance of the DL model, its measurements were compared with those taken manually by one junior orthodontist using the concordance correlation coefficient (CCC). RESULTS: The DL model took only approximately 1.5 s to perform the measurement task on CBCT images. This framework showed a mean radial error of 1.22 ± 1.93 mm and achieved successful detection rates of 71.34%, 81.37%, 86.77%, and 91.18% in the 2.0-, 2.5-, 3.0-, and 4.0-mm ranges, respectively. The CCCs (95% confidence interval) of the maxillary basal bone width and mandibular basal bone width distance between the DL model and manual measurement for the 34 cases were 0.96 (0.94-0.97) and 0.98 (0.97-0.99), respectively. CONCLUSION: The novel DL framework developed in this study improved the diagnostic accuracy of the individual assessment of maxillary width. These results emphasize the potential applicability of this framework as a computer-aided diagnostic tool in orthodontic practice.
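A minimal NumPy sketch of Lin's concordance correlation coefficient (CCC), the agreement statistic reported above (and in several later records); the function name and the paired values are illustrative assumptions, not data from the study.

```python
import numpy as np

def concordance_correlation_coefficient(x, y):
    """Lin's CCC between paired measurements (e.g., automatic vs. manual)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()                      # population variances
    covariance = np.mean((x - mean_x) * (y - mean_y))
    return 2 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

# Hypothetical paired basal-bone-width measurements in millimetres
manual    = [62.1, 58.4, 64.0, 60.2, 59.8]
automatic = [61.8, 58.9, 63.5, 60.6, 59.5]
print(round(concordance_correlation_coefficient(manual, automatic), 3))
```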


Subjects
Anatomic Landmarks , Cone-Beam Computed Tomography , Maxilla , Humans , Cone-Beam Computed Tomography/methods , Retrospective Studies , Anatomic Landmarks/diagnostic imaging , Maxilla/diagnostic imaging , Female , Male , Deep Learning , Adolescent , Algorithms , Adult , Young Adult
2.
BMC Oral Health ; 24(1): 1064, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39261793

ABSTRACT

OBJECTIVE: This study aimed to develop a deep learning model to predict skeletal malocclusions with an acceptable level of accuracy using airway and cephalometric landmark values obtained from analyzing different CBCT images. BACKGROUND: In orthodontics, numerous studies have reported the correlation between orthodontic treatment and changes in the anatomy as well as the functioning of the airway. Typically, the values obtained from various measurements of cephalometric landmarks are used to determine skeletal class based on the orthodontist's interpretation and experience, which may not always be accurate. METHODS: Samples of skeletal anatomical data were retrospectively obtained and recorded in Digital Imaging and Communications in Medicine (DICOM) file format. The DICOM files were used to reconstruct 3D models using 3DSlicer (slicer.org) by thresholding airway regions to build up 3D polygon models of airway regions for each sample. The 3D models were measured for different landmarks that included measurements across the nasopharynx, the oropharynx, and the hypopharynx. Male and female subjects were combined as one data set to develop supervised learning models. These measurements were utilized to build 7 artificial intelligence-based supervised learning models. RESULTS: The most accurate supervised learning model was Random Forest, with an accuracy of 0.74; all other models were less accurate. The recall scores for Class I, II, and III malocclusions were 0.71, 0.69, and 0.77, respectively, reflecting the proportion of actual positive cases predicted correctly and indicating high model sensitivity. CONCLUSION: In this study, the Random Forest model was the most accurate model for predicting skeletal malocclusion based on various airway and cephalometric landmarks.
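A minimal scikit-learn sketch of the kind of Random Forest classification with per-class recall described above; the simulated features, labels, and hyperparameters are assumptions for illustration, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
# Hypothetical airway/cephalometric measurements: rows = patients, columns = features
X = rng.normal(size=(300, 12))
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)
y = np.digitize(score, np.quantile(score, [1 / 3, 2 / 3]))   # 0/1/2 stand in for Class I/II/III

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("per-class recall:", recall_score(y_test, pred, average=None).round(2))
```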


Subjects
Anatomic Landmarks , Cephalometry , Cone-Beam Computed Tomography , Malocclusion , Humans , Cephalometry/methods , Male , Anatomic Landmarks/diagnostic imaging , Female , Cone-Beam Computed Tomography/methods , Retrospective Studies , Malocclusion/classification , Malocclusion/diagnostic imaging , Malocclusion/pathology , Imaging, Three-Dimensional/methods , Oropharynx/diagnostic imaging , Oropharynx/pathology , Oropharynx/anatomy & histology , Deep Learning , Adolescent , Nasopharynx/diagnostic imaging , Nasopharynx/pathology , Nasopharynx/anatomy & histology , Hypopharynx/diagnostic imaging , Hypopharynx/pathology
3.
IEEE Trans Biomed Eng ; 71(11): 3252-3262, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39146163

ABSTRACT

OBJECTIVE: This paper proposes an original, non-invasive methodology to directly and automatically identify the spine line from the external position of vertebral apophyses, which are key anatomical landmarks. METHODS: Apophyses are detected directly on discrete high-density geometric models of human backs acquired by a 3D scanner. The methodology is inspired by the posturologist's approach, which detects the spine line through the identification, by manual palpation, of the spinal apophyses. For this purpose, an appropriate shape index is used to identify vertebral positions. The shape index estimates the local differential geometric properties of the back surface. This index is very discriminating in locating both pronounced and blurred apophyses. To validate the method, the research involved the analysis of 21 healthy human backs acquired in both standing and asymmetric postures. For each of them, a skilled operator detected the spinal apophyses by tactile investigation and located them through cutaneous marking. Markers were used as the reference for the spinal apophyses' positions. RESULTS: A comparison of the proposed approach with state-of-the-art methods has been conducted. This study demonstrates the high accuracy of the proposed methodology and its capability to recognize even blurred apophyses. CONCLUSION: The method automatically identifies the spine line and accurately locates apophyses along both the vertical and coronal directions. SIGNIFICANCE: The proposed inexpensive and easy-to-use approach is a significant advance over other non-invasive methods. Its ability to detect the apophyses' locations potentially offers new capabilities in detecting, diagnosing, and monitoring spinal disorders.
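The shape index mentioned above is conventionally computed from the two principal curvatures of the surface (Koenderink's formulation); a minimal sketch under that assumption, with hypothetical curvature values, noting that sign conventions vary in the literature and the paper's exact variant is not stated here.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink-style shape index in [-1, 1] from principal curvatures, with k1 >= k2."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    # arctan2 handles the flat case (k1 == k2 == 0) by returning 0, strictly undefined.
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# Hypothetical principal curvatures at three surface points: trough, saddle, dome-like
print(shape_index([0.0, 0.5, 1.0], [-1.0, -0.5, 0.5]))
```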


Subjects
Imaging, Three-Dimensional , Spine , Humans , Imaging, Three-Dimensional/methods , Spine/diagnostic imaging , Spine/anatomy & histology , Spine/physiology , Back/diagnostic imaging , Back/physiology , Back/anatomy & histology , Male , Adult , Female , Anatomic Landmarks/diagnostic imaging , Algorithms , Posture/physiology , Models, Anatomic , Young Adult
4.
Med Phys ; 51(10): 7191-7205, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39140650

ABSTRACT

BACKGROUND: Fluoroscopy-guided interventions (FGIs) pose a risk of prolonged radiation exposure; personalized patient dosimetry is necessary to improve patient safety during these procedures. However, current FGI systems do not capture the precise exposure regions of the patient, making it challenging to perform patient-procedure-specific dosimetry. Thus, there is a pressing need to develop approaches to extract and use this information to enable personalized radiation dosimetry for interventional procedures. PURPOSE: To propose a deep learning (DL) approach for the automatic localization of 3D anatomical landmarks on randomly collimated and magnified 2D head fluoroscopy images. MATERIALS AND METHODS: The model was developed with datasets comprising 800 000 pseudo-2D synthetic images (a mixture of vessel-enhanced and non-enhanced), each with 55 annotated anatomical landmarks (two of which are eye-lens landmarks), generated from 135 retrospectively collected head computed tomography (CT) volumes. Before training, dynamic random cropping was performed to mimic the varied field-size collimation in FGI procedures. Gaussian-distributed additive noise was applied to each individual image to enhance the robustness of the DL model in handling image degradation that may occur during clinical image acquisition. The model was trained with 629 370 synthetic images for approximately 275 000 iterations and evaluated against a synthetic image test set and a clinical fluoroscopy test set. RESULTS: The model shows good performance in estimating both in-image and out-of-image landmark positions and can feasibly instantiate the skull shape. The model successfully detected 96.4% of 2D and 92.5% of 3D landmarks within a 10 mm error on synthetic test images. It demonstrated a mean radial error of 3.6 ± 2.3 mm and successfully detected 96.8% of 2D landmarks within a 10 mm error on clinical fluoroscopy images. CONCLUSION: Our deep-learning model successfully localizes anatomical landmarks and estimates the gross shape of skull structures from collimated 2D projection views. This method may help identify the exposure region required for patient-specific organ dosimetry in FGI procedures.
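A short sketch of the two metrics reported above (and in several other records): mean radial error (MRE) and success detection rate (SDR). The coordinates and thresholds are hypothetical values chosen for illustration.

```python
import numpy as np

def mre_and_sdr(pred, gt, thresholds=(2.0, 4.0, 10.0)):
    """Mean radial error (mm) and success detection rates at the given thresholds."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    radial_errors = np.linalg.norm(pred - gt, axis=-1)        # Euclidean error per landmark
    sdr = {t: float(np.mean(radial_errors <= t)) for t in thresholds}
    return radial_errors.mean(), radial_errors.std(), sdr

# Hypothetical predicted vs. ground-truth 3D landmark coordinates (mm)
gt   = np.array([[10.0, 20.0, 5.0], [32.0, 18.5, 7.2], [15.5, 40.0, 3.0]])
pred = gt + np.array([[0.8, -0.5, 0.3], [1.5, 1.0, -0.7], [2.5, 2.0, 1.0]])
mean_err, std_err, sdr = mre_and_sdr(pred, gt)
print(f"MRE = {mean_err:.2f} +/- {std_err:.2f} mm, SDR = {sdr}")
```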


Subjects
Anatomic Landmarks , Deep Learning , Head , Image Processing, Computer-Assisted , Fluoroscopy , Humans , Head/diagnostic imaging , Anatomic Landmarks/diagnostic imaging , Image Processing, Computer-Assisted/methods , Automation
5.
Angle Orthod ; 94(6): 595-601, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39180503

ABSTRACT

OBJECTIVES: To develop and evaluate an automated method for combining a digital photograph with a lateral cephalogram. MATERIALS AND METHODS: A total of 985 digital photographs were collected and their soft tissue landmarks were manually detected. Then 2500 lateral cephalograms were collected, and the corresponding soft tissue landmarks were manually detected. Using the images and landmark identification information, two different artificial intelligence (AI) models (one for detecting soft tissue landmarks on photographs and the other for identifying soft tissue landmarks on cephalograms) were developed using different deep-learning algorithms. The digital photographs were rotated, scaled, and shifted to minimize the sum of squared distances between the soft tissue landmarks identified by the two AI models. As a validation process, eight soft tissue landmarks were selected on digital photographs and lateral cephalometric radiographs from 100 additionally collected validation subjects. Paired t-tests were used to compare the accuracy of measurements obtained with the automated and manual image integration methods. RESULTS: The validation results showed statistically significant differences between the automated and manual methods at the upper lip and soft tissue B point; otherwise, no statistically significant differences were found. CONCLUSIONS: Automated photograph-cephalogram image integration using AI models appeared to be as reliable as manual superimposition procedures.
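The alignment step described above (rotating, scaling, and shifting one landmark set to minimize the sum of squared distances to the other) is a least-squares 2D similarity fit; a minimal sketch using the closed-form SVD solution, with hypothetical landmark coordinates. The study's own optimization procedure is not specified, so this is an assumed generic implementation.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Scale s, rotation R (2x2), translation t minimizing sum ||s*R@src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical photo landmarks (src) aligned to cephalogram landmarks (dst)
src = np.array([[10.0, 12.0], [25.0, 14.0], [18.0, 30.0], [12.0, 28.0]])
dst = 1.1 * src @ np.array([[0.98, -0.2], [0.2, 0.98]]).T + np.array([5.0, -3.0])
s, R, t = fit_similarity_2d(src, dst)
print("scale:", round(s, 3), "translation:", t.round(2))
```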


Subjects
Anatomic Landmarks , Artificial Intelligence , Cephalometry , Image Processing, Computer-Assisted , Photography , Humans , Cephalometry/methods , Photography/methods , Image Processing, Computer-Assisted/methods , Anatomic Landmarks/diagnostic imaging , Algorithms , Female , Face/diagnostic imaging , Face/anatomy & histology , Male , Deep Learning , Adolescent , Reproducibility of Results
6.
Pediatr Radiol ; 54(11): 1850-1861, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39179930

ABSTRACT

BACKGROUND: Micrognathia can be diagnosed in utero with ultrasound by measuring the jaw index and/or inferior facial angle, though it can be challenging due to fetal positioning. The jaw index can be measured with magnetic resonance imaging (MRI) using the masseter muscle, but indistinct margins can lead to inaccuracy; the easily visualized posterior teeth buds may be a better landmark. OBJECTIVE: We aimed to evaluate inter-reader variability, agreement with ultrasound, and association with postnatal outcomes using MRI to measure the inferior facial angle, jaw index by masseter muscle, and jaw index by posterior teeth buds. MATERIALS AND METHODS: A single-institution retrospective review was performed of singleton pregnancies with prenatally diagnosed micrognathia by ultrasound or MRI from September 2013-June 2022. Ultrasound measurements were obtained by a maternal-fetal medicine specialist and MRI measurements by two radiologists to evaluate inter-reader variability. Intraclass correlation coefficients (ICC) and Bland-Altman analysis were used to assess agreement between imaging methods and logistic regressions and ROC curves to assess associations with postnatal outcomes. RESULTS: Forty-three fetuses (median gestational age 26 weeks (IQR 22-31); 47% male (20/43)) were included. Ultrasound measurements could not be obtained for jaw index in 15/43 (35%) fetuses and inferior facial angle in 11/43 (26%); MRI measurements were obtained by at least one reader in all cases. Jaw index by teeth buds demonstrated lowest inter-reader variability (ICC = 0.82, P < 0.001) and highest agreement with ultrasound (bias -0.23, 95% CI -2.8-2.2). All MRI measurements, but not ultrasound, predicted need for mandibular distraction (inferior facial angle P = 0.02, jaw index by masseter muscle P = 0.04, jaw index by teeth buds P = 0.01). CONCLUSION: Fetal MRI measurements, particularly jaw index measured by posterior teeth buds, demonstrate low inter-reader variability and high agreement with ultrasound, and may predict need for mandibular distraction postnatally.
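A minimal sketch of the Bland-Altman statistics used above to assess agreement between MRI and ultrasound (bias and 95% limits of agreement); the paired values are hypothetical, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical jaw-index values measured on MRI (a) and ultrasound (b)
mri        = [21.0, 23.5, 19.8, 25.2, 22.1, 24.0]
ultrasound = [21.5, 23.0, 20.5, 24.8, 23.0, 23.5]
bias, (lo, hi) = bland_altman(mri, ultrasound)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({lo:.2f}, {hi:.2f})")
```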


Subjects
Magnetic Resonance Imaging , Micrognathism , Ultrasonography, Prenatal , Humans , Female , Magnetic Resonance Imaging/methods , Ultrasonography, Prenatal/methods , Retrospective Studies , Pregnancy , Male , Micrognathism/diagnostic imaging , Anatomic Landmarks/diagnostic imaging , Jaw/diagnostic imaging , Prenatal Diagnosis/methods
7.
Anat Histol Embryol ; 53(4): e13086, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38965883

ABSTRACT

Medical imaging techniques such as digital radiography and ultrasonography are non-invasive and provide precise results for examining internal organs and structures within fish. Their effectiveness can be further enhanced by using body parts such as scales as markers for the organs beneath them. This study used scale counts as landmarks in digital radiography and ultrasonography to non-invasively evaluate the muscles and bones and to image the internal and reproductive organs of common carp (Cyprinus carpio). Digital radiography was performed in the dorsoventral and lateral views of the fish, whereas ultrasonography was performed in longitudinal and transverse views at sequential scale numbers using brightness and colour Doppler modes. Digital radiography of the common carp revealed the whole-body morphology, including the bony parts, which appeared radiopaque, from the head, pectoral fins, dorsal fins, pelvic fins, anal fins, and vertebrae to the tail. Internal organs were also observed, with the swim bladder and heart appearing radiolucent, while the intestines, liver, testes, and ovaries appeared radiopaque. Ultrasonography in brightness mode displayed the digestive organs, reproductive organs, and muscle thickness. Additionally, colour Doppler mode demonstrated blood flow within the heart's ventricle.


Subjects
Carps , Animals , Carps/anatomy & histology , Female , Male , Ultrasonography/veterinary , Ultrasonography/methods , Radiographic Image Enhancement/methods , Animal Scales/anatomy & histology , Animal Scales/diagnostic imaging , Ultrasonography, Doppler, Color/veterinary , Ultrasonography, Doppler, Color/methods , Anatomic Landmarks/diagnostic imaging , Anatomic Landmarks/anatomy & histology , Liver/diagnostic imaging , Liver/anatomy & histology , Bone and Bones/diagnostic imaging , Bone and Bones/anatomy & histology
8.
Eur J Orthod ; 46(4)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38895901

ABSTRACT

OBJECTIVES: This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS: An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with a search timeline extending up to January 2024. SELECTION CRITERIA: Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS: The selection of studies, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both the mean radial error and the standard error. RESULTS: Following the removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis, based on the success detection rate of landmark placement on 2D images, revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, a meta-analysis could not be conducted due to significant heterogeneity amongst the study designs. However, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved to be time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29). CONCLUSION: The performance of AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency. However, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION: PROSPERO: CRD42022328800.
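The review pools a mean error with a 95% confidence interval. As a generic illustration only (the exact meta-analytic model used by the review is not stated here, and a random-effects model may have been applied instead), a fixed-effect inverse-variance pooling sketch with hypothetical study-level values:

```python
import numpy as np

def inverse_variance_pool(means, standard_errors):
    """Fixed-effect inverse-variance pooled mean with a 95% confidence interval."""
    means = np.asarray(means, float)
    se = np.asarray(standard_errors, float)
    w = 1.0 / se ** 2                       # weight each study by the inverse of its variance
    pooled = np.sum(w * means) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study mean radial errors (mm) and their standard errors
study_means = [1.2, 1.6, 1.4, 1.1]
study_ses   = [0.15, 0.20, 0.10, 0.25]
pooled, ci = inverse_variance_pool(study_means, study_ses)
print(f"pooled error = {pooled:.2f} mm, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```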


Subjects
Anatomic Landmarks , Artificial Intelligence , Cephalometry , Imaging, Three-Dimensional , Cephalometry/methods , Humans , Anatomic Landmarks/diagnostic imaging , Imaging, Three-Dimensional/methods , Cone-Beam Computed Tomography/methods
9.
Eur J Radiol ; 177: 111588, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38944907

ABSTRACT

OBJECTIVES: To develop and validate an open-source deep learning model for automatically quantifying scapular and glenoid morphology using CT images of normal subjects and patients with glenohumeral osteoarthritis. MATERIALS AND METHODS: First, we used deep learning to segment the scapula from CT images and then to identify the location of 13 landmarks on the scapula, 9 of them to establish a coordinate system unaffected by osteoarthritis-related changes, and the remaining 4 landmarks on the glenoid cavity to determine the glenoid size and orientation in this scapular coordinate system. The glenoid version, glenoid inclination, critical shoulder angle, glenopolar angle, glenoid height, and glenoid width were subsequently measured in this coordinate system. A 5-fold cross-validation was performed to evaluate the performance of this approach on 60 normal/non-osteoarthritic and 56 pathological/osteoarthritic scapulae. RESULTS: The Dice similarity coefficient between manual and automatic scapular segmentations exceeded 0.97 in both normal and pathological cases. The average error in automatic scapular and glenoid landmark positioning ranged between 1 and 2.5 mm and was comparable between the automatic method and human raters. The automatic method provided acceptable estimates of glenoid version (R2 = 0.95), glenoid inclination (R2 = 0.93), critical shoulder angle (R2 = 0.95), glenopolar angle (R2 = 0.90), glenoid height (R2 = 0.88) and width (R2 = 0.94). However, a significant difference was found for glenoid inclination between manual and automatic measurements (p < 0.001). CONCLUSIONS: This open-source deep learning model enables the automatic quantification of scapular and glenoid morphology from CT scans of patients with glenohumeral osteoarthritis, with sufficient accuracy for clinical use.
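A minimal sketch of the Dice similarity coefficient used above to compare manual and automatic segmentations; the masks are hypothetical toy examples, not the study's data.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Hypothetical 2D slices of a manual and an automatic scapula segmentation
manual    = np.zeros((64, 64), dtype=bool); manual[20:40, 20:40] = True
automatic = np.zeros((64, 64), dtype=bool); automatic[22:42, 21:41] = True
print(round(dice_coefficient(manual, automatic), 3))
```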


Subjects
Deep Learning , Osteoarthritis , Scapula , Shoulder Joint , Tomography, X-Ray Computed , Humans , Scapula/diagnostic imaging , Tomography, X-Ray Computed/methods , Osteoarthritis/diagnostic imaging , Male , Female , Shoulder Joint/diagnostic imaging , Middle Aged , Aged , Glenoid Cavity/diagnostic imaging , Adult , Reproducibility of Results , Anatomic Landmarks/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
10.
PLoS One ; 19(6): e0305947, 2024.
Article in English | MEDLINE | ID: mdl-38917161

ABSTRACT

Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking from CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device, without considering patient ethnicity. In addition, previous works have considered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible, both of which impair integration into the clinical workflow. Here our aim is to analyze the clinical applicability of a light-weight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks with multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. The localization performance of our approach resulted in a mean distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. This performance was clinically significant, i.e., ≤ 2 mm, for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully, i.e., with ≤ 2 mm or ≤ 2° error, in 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference (at the 0.05 level), as measured by the Mann-Whitney U test with Benjamini-Hochberg correction. Moreover, our method is computationally light, providing predictions in a mean of 0.77 s with single-machine GPU computing and 2.27 s with CPU computing. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
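A minimal sketch of the cohort comparison described above (per-landmark Mann-Whitney U tests with Benjamini-Hochberg correction), using simulated error values rather than the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Hypothetical per-landmark localization errors (mm) for two cohorts, 5 landmarks each
cohort_a = [rng.normal(2.0, 0.6, 150) for _ in range(5)]
cohort_b = [rng.normal(2.0, 0.6, 150) for _ in range(5)]

p_values = [mannwhitneyu(a, b, alternative="two-sided").pvalue
            for a, b in zip(cohort_a, cohort_b)]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print("adjusted p-values:", np.round(p_adjusted, 3))
print("significant after Benjamini-Hochberg correction:", reject)
```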


Subjects
Anatomic Landmarks , Cephalometry , Cone-Beam Computed Tomography , Deep Learning , Imaging, Three-Dimensional , Humans , Cephalometry/methods , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Male , Female , Anatomic Landmarks/diagnostic imaging , Finland , Adult , Thailand , Young Adult , Adolescent
11.
Sci Rep ; 14(1): 12381, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811771

ABSTRACT

Automatic dense 3D surface registration is a powerful technique for comprehensive 3D shape analysis that has found successful application in human craniofacial morphology research, particularly within the mandibular and cranial vault regions. However, a notable gap exists when exploring the frontal aspect of the human skull, largely due to the intricate and unique nature of its cranial anatomy. To better examine this region, this study introduces a simplified single-surface craniofacial bone mask comprising 6707 quasi-landmarks, which can aid in the classification and quantification of variation over human facial bone surfaces. Automatic craniofacial bone phenotyping was conducted on a dataset of 31 skull scans obtained through cone-beam computed tomography (CBCT) imaging. The MeshMonk framework facilitated the non-rigid alignment of the constructed craniofacial bone mask with each individual target mesh. To gauge the accuracy and reliability of this automated process, 20 anatomical facial landmarks were manually placed three times by three independent observers on the same set of images. Intra- and inter-observer error assessments were performed using root mean square (RMS) distances, revealing consistently low scores. Subsequently, the corresponding automatic landmarks were computed and compared with the manually placed landmarks. The average Euclidean distance between these two landmark sets was 1.5 mm, while centroid sizes exhibited noteworthy similarity. Intraclass correlation coefficients (ICC) demonstrated a high level of concordance (> 0.988), with automatic landmarking showing significantly lower errors and variation. These results underscore the utility of this newly developed single-surface craniofacial bone mask, in conjunction with the MeshMonk framework, as a highly accurate and reliable method for automated phenotyping of the facial region of human skulls from CBCT and CT imagery. This craniofacial template bone mask expansion of the MeshMonk toolbox not only enhances our capacity to study craniofacial bone variation but also holds significant potential for shedding light on the genetic, developmental, and evolutionary underpinnings of the overall human craniofacial structure.


Subjects
Cone-Beam Computed Tomography , Imaging, Three-Dimensional , Skull , Humans , Skull/anatomy & histology , Skull/diagnostic imaging , Imaging, Three-Dimensional/methods , Cone-Beam Computed Tomography/methods , Facial Bones/diagnostic imaging , Facial Bones/anatomy & histology , Anatomic Landmarks/diagnostic imaging , Male , Female , Reproducibility of Results
12.
J Dent ; 146: 105056, 2024 07.
Article in English | MEDLINE | ID: mdl-38729291

ABSTRACT

OBJECTIVES: The transition from manual to automatic cephalometric landmark identification has not yet reached a consensus for clinical application in orthodontic diagnosis. The present umbrella review aimed to assess artificial intelligence (AI) performance in automatic 2D and 3D cephalometric landmark identification. DATA: A combination of free-text words and MeSH keywords pooled by Boolean operators: Automa* AND cephalo* AND ("artificial intelligence" OR "machine learning" OR "deep learning" OR "learning"). SOURCES: A search strategy without a timeframe setting was conducted on PubMed, Scopus, Web of Science, Cochrane Library and LILACS. STUDY SELECTION: The study protocol followed the PRISMA guidelines and the PICO question was formulated according to the aim of the article. The database search led to the selection of 15 articles that were assessed for eligibility in full text. Finally, 11 systematic reviews met the inclusion criteria and were analyzed according to the Risk of Bias in Systematic Reviews (ROBIS) tool. CONCLUSIONS: AI was not able to identify the various cephalometric landmarks with the same accuracy. Since most of the included studies' conclusions were based on an inappropriate 2 mm cut-off difference between the automatic AI landmark location and that allocated by human operators, future research should focus on refining the most powerful architectures to improve the clinical relevance of AI-driven automatic cephalometric analysis. CLINICAL SIGNIFICANCE: Despite progressively improved performance, AI has exceeded the recommended magnitude of error for most cephalometric landmarks. Moreover, AI automatic landmarking on 3D CBCT appeared to be less accurate compared to that on 2D X-rays. To date, AI-driven cephalometric landmarking still requires the final supervision of an experienced orthodontist.


Subjects
Anatomic Landmarks , Artificial Intelligence , Cephalometry , Humans , Cephalometry/methods , Anatomic Landmarks/diagnostic imaging , Systematic Reviews as Topic , Imaging, Three-Dimensional/methods , Machine Learning
13.
J Am Acad Orthop Surg ; 32(16): e826-e831, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38773850

ABSTRACT

INTRODUCTION: The perfect knee lateral radiograph visualizes anatomic landmarks on the distal femur for clinical and scientific purposes. However, radiographic imaging is a two-dimensional (2D) representation of a three-dimensional (3D) physis. The aim of this study was to characterize the perceived radiographic projection of the femoral physis using perfect lateral digitally reconstructed radiographs (DRRs) and to evaluate discrepancies from this projection to the physis at the lateral and medial cortices. METHODS: Pediatric patients from a cohort of CT scans were analyzed. Inclusion criteria were an open physis; exclusion criteria were any implant or pathology affecting the physis. CT scans were imported into 3D imaging software and transformed into lateral DRRs and 3D renderings of the femur. The physis was divided into four equal segments, with fiducial markers placed at the "anterior," "midpoint," and "posterior" points. Lines extended from these points in the lateral and medial direction. The vertical distance from these lines, representing the radiographic projection of the physis, was measured relative to the physis at the lateral and medial cortex of the femur on coronal CT slices. RESULTS: Thirty-one patients were included. On the perfect lateral radiograph DRR, the physis on the medial cortex was located proximal to the visualized physis by 6.64 ± 1.74 mm, 11.95 ± 1.67 mm, and 14.30 ± 1.75 mm at the anterior (25%), midpoint (50%), and posterior (75%) locations, respectively. On the lateral side, the physis on the lateral cortex was proximal to the visualized physis by 2.19 ± 1.13 mm, 3.71 ± 1.19 mm, and 6.74 ± 1.25 mm at the anterior, midpoint, and posterior locations, respectively. DISCUSSION: In this cohort of pediatric patients, the location of the cortical physis was, in all areas measured, proximal to the projection of the visualized physis as seen on the perfect knee lateral DRR. The distance from radiographic physis to cortical physis was greater at the medial cortex compared with the lateral cortex. STUDY DESIGN: Descriptive laboratory study. LEVEL OF EVIDENCE: III, observational radiographic anatomic study.


Subjects
Femur , Imaging, Three-Dimensional , Tomography, X-Ray Computed , Humans , Child , Femur/diagnostic imaging , Female , Male , Tomography, X-Ray Computed/methods , Adolescent , Anatomic Landmarks/diagnostic imaging , Growth Plate/diagnostic imaging , Growth Plate/anatomy & histology
14.
IEEE J Biomed Health Inform ; 28(8): 4797-4809, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38630567

ABSTRACT

B-mode ultrasound-based computer-aided diagnosis (CAD) has demonstrated its effectiveness for the diagnosis of Developmental Dysplasia of the Hip (DDH) in infants, as it can implement Graf's method by detecting landmarks in hip ultrasound images. However, it is still necessary to exploit more of the valuable information around these landmarks to enhance feature representation and improve detection performance. To this end, a novel Involution Transformer based U-Net (IT-UNet) network is proposed for hip landmark detection. The IT-UNet integrates the efficient involution operation into the Transformer to develop an Involution Transformer module (ITM), which consists of an involution attention block and a squeeze-and-excitation involution block. The ITM can capture both the spatially related information and long-range dependencies from hip ultrasound images to effectively improve feature representation. Moreover, an Involution Downsampling block (IDB), which combines involution and convolution for downsampling, is developed to alleviate the issue of feature loss in the encoder modules. The experimental results on two DDH ultrasound datasets indicate that the proposed IT-UNet achieves the best landmark detection performance, indicating its potential for clinical application.


Assuntos
Interpretação de Imagem Assistida por Computador , Ultrassonografia , Humanos , Ultrassonografia/métodos , Lactente , Interpretação de Imagem Assistida por Computador/métodos , Algoritmos , Displasia do Desenvolvimento do Quadril/diagnóstico por imagem , Pontos de Referência Anatômicos/diagnóstico por imagem , Redes Neurais de Computação , Recém-Nascido
15.
Ultrasound Med Biol ; 50(6): 797-804, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38485534

ABSTRACT

OBJECTIVE: Evaluation of left ventricular (LV) function in critical care patients is useful for guidance of therapy and early detection of LV dysfunction, but the tools currently available are too time-consuming. To resolve this issue, we previously proposed a method for the continuous and automatic quantification of global LV function in critical care patients based on the detection and tracking of anatomical landmarks on transesophageal heart ultrasound. In the present study, our aim was to improve the performance of mitral annulus detection in transesophageal echocardiography (TEE). METHODS: We investigated several state-of-the-art networks for both the detection and tracking of the mitral annulus in TEE. We integrated the networks into a pipeline for automatic assessment of LV function through estimation of the mitral annular plane systolic excursion (MAPSE), called autoMAPSE. TEE recordings from a total of 245 patients were collected from St. Olav's University Hospital and used to train and test the respective networks. We evaluated the agreement between autoMAPSE estimates and manual references annotated by expert echocardiographers in 30 Echolab patients and 50 critical care patients. Furthermore, we proposed a prototype of autoMAPSE for clinical integration and tested it in critical care patients in the intensive care unit. RESULTS: Compared with manual references, we achieved a mean difference of 0.8 (95% limits of agreement: -2.9 to 4.7) mm in Echolab patients, with a feasibility of 85.7%. In critical care patients, we reached a mean difference of 0.6 (95% limits of agreement: -2.3 to 3.5) mm and a feasibility of 88.1%. The clinical prototype of autoMAPSE achieved real-time performance. CONCLUSION: Automatic quantification of LV function had high feasibility in clinical settings. The agreement with manual references was comparable to inter-observer variability of clinical experts.


Subjects
Anatomic Landmarks , Echocardiography, Transesophageal , Ventricular Function, Left , Humans , Echocardiography, Transesophageal/methods , Ventricular Function, Left/physiology , Anatomic Landmarks/diagnostic imaging , Female , Male , Aged , Middle Aged , Heart Ventricles/diagnostic imaging , Heart Ventricles/physiopathology , Mitral Valve/diagnostic imaging , Mitral Valve/physiopathology , Image Interpretation, Computer-Assisted/methods
16.
Dentomaxillofac Radiol ; 53(5): 289-295, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38547394

ABSTRACT

OBJECTIVES: To investigate the imaging and anatomic features of the anterior lobe (AL) of the superficial parotid gland (SPG). METHODS: Computed tomographic sialography examinations were undertaken for 142 parotid glands in 77 patients. Whole computed tomography (CT) data were analyzed using multi-planar reformation and maximum intensity projection to generate sialographic CT images. The tributary ducts of the SPG were analyzed to classify the parotid morphology. Three-dimensional analyses were used to investigate the AL and its relationship with adjacent anatomic landmarks. RESULTS: Four major types (I-IV) and two minor types (V-VI) of the AL and the superficial parotid gland were observed. Type I AL (83/142) was contiguous with and not separated from the retromandibular parotid gland. Type II AL (16/142) was detached from the retromandibular parotid gland with 1-4 tributary ducts. Type III AL (12/142) showed a small isolated lobe above the Stensen duct around the anterior edge of the masseter. Type IV (28/142) showed the absence of the AL. Type V (3/142) showed the absence of the retromandibular parotid gland. Type VI (3/142) showed the presence of an ectopic salivary gland beneath the Stensen duct, anterior to the retromandibular parotid gland. CONCLUSIONS: The AL gives rise to the morphological variations of the superficial parotid gland; it also gives rise to the accessory parotid gland when it is detached from the retromandibular parotid gland.


Subjects
Imaging, Three-Dimensional , Parotid Gland , Sialography , Tomography, X-Ray Computed , Humans , Parotid Gland/diagnostic imaging , Parotid Gland/anatomy & histology , Sialography/methods , Adult , Female , Male , Tomography, X-Ray Computed/methods , Middle Aged , Aged , Imaging, Three-Dimensional/methods , Adolescent , Aged, 80 and over , Anatomic Landmarks/diagnostic imaging , Salivary Ducts/diagnostic imaging , Salivary Ducts/anatomy & histology , Contrast Media
17.
Sci Data ; 11(1): 321, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38548727

ABSTRACT

Flexible bronchoscopy has revolutionized respiratory disease diagnosis. It offers direct visualization and detection of airway abnormalities, including lung cancer lesions. Accurate identification of airway lesions during flexible bronchoscopy plays an important role in lung cancer diagnosis. The application of artificial intelligence (AI) aims to support physicians in recognizing anatomical landmarks and lung cancer lesions within bronchoscopic imagery. This work describes the development of BM-BronchoLC, a rich bronchoscopy dataset encompassing 106 lung cancer and 102 non-lung-cancer patients. The dataset incorporates detailed localization and categorical annotations for both anatomical landmarks and lesions, meticulously conducted by senior doctors at Bach Mai Hospital, Vietnam. To assess the dataset's quality, we evaluate two prevalent AI backbone models, namely UNet++ and ESFPNet, on image segmentation and classification tasks with single-task and multi-task learning paradigms. We present BM-BronchoLC as a reference dataset for developing AI models to assist diagnostic accuracy for anatomical landmarks and lung cancer lesions in bronchoscopy data.


Subjects
Bronchoscopy , Lung Neoplasms , Humans , Artificial Intelligence , Lung Neoplasms/diagnostic imaging , Thorax/diagnostic imaging , Anatomic Landmarks/diagnostic imaging
18.
IEEE Trans Med Imaging ; 43(7): 2679-2692, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38421850

ABSTRACT

In medical image analysis, anatomical landmarks usually contain strong prior knowledge of their structural information. In this paper, we propose to promote medical landmark localization by modeling the underlying landmark distribution via normalizing flows. Specifically, we introduce the flow-based landmark distribution prior as a learnable objective function into a regression-based landmark localization framework. Moreover, we employ an integral operation to make the mapping from heatmaps to coordinates differentiable, further enhancing heatmap-based localization with the learned distribution prior. Our proposed Normalizing Flow-based Distribution Prior (NFDP) employs a straightforward backbone and a non-problem-tailored architecture (i.e., ResNet18), which delivers high-fidelity outputs across three X-ray-based landmark localization datasets. Remarkably, the proposed NFDP can do this with minimal additional computational burden, as the normalizing-flow module is detached from the framework at inference time. Compared to existing techniques, our proposed NFDP provides a superior balance between prediction accuracy and inference speed, making it a highly efficient and effective approach. The source code of this paper is available at https://github.com/jacksonhzx95/NFDP.
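The integral operation mentioned above is commonly implemented as a soft-argmax (an expectation over a softmax of the heatmap); a generic NumPy sketch under that assumption, not the paper's exact implementation, with a hypothetical heatmap and an assumed temperature parameter beta.

```python
import numpy as np

def soft_argmax_2d(heatmap, beta=25.0):
    """Differentiable heatmap-to-coordinate mapping: expectation over a softmax of the heatmap."""
    h, w = heatmap.shape
    weights = np.exp(beta * (heatmap - heatmap.max()))        # softmax; larger beta -> sharper
    weights /= weights.sum()
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    return float((weights * grid_x).sum()), float((weights * grid_y).sum())   # (x, y) in pixels

# Hypothetical heatmap with a Gaussian blob centred near (x=30.2, y=12.4)
ys, xs = np.mgrid[0:64, 0:64]
heatmap = np.exp(-((xs - 30.2) ** 2 + (ys - 12.4) ** 2) / (2 * 3.0 ** 2))
print(soft_argmax_2d(heatmap))   # approximately (30.2, 12.4): sub-pixel, unlike a hard argmax
```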


Subjects
Algorithms , Anatomic Landmarks , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Anatomic Landmarks/diagnostic imaging , Tomography, X-Ray Computed/methods
19.
Orthod Craniofac Res ; 27(4): 535-543, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38321788

ABSTRACT

OBJECTIVE: To investigate the accuracy of artificial intelligence-assisted growth prediction using a convolutional neural network (CNN) algorithm and longitudinal lateral cephalograms (Lat-cephs). MATERIALS AND METHODS: A total of 198 Japanese preadolescent children, who had skeletal Class I malocclusion and whose Lat-cephs were available at age 8 years (T0) and 10 years (T1), were allocated into the training, validation, and test phases (n = 161, n = 17, n = 20). Orthodontists and the CNN model identified 28 hard-tissue landmarks (HTL) and 19 soft-tissue landmarks (STL). The mean prediction error (PE) values were graded as 'excellent,' 'very good,' 'good,' 'acceptable,' and 'unsatisfactory' (criteria: 0.5 mm, 1.0 mm, 1.5 mm, and 2.0 mm, respectively). The degree of accurate prediction percentage (APP) was graded as 'very high,' 'high,' 'medium,' and 'low' (criteria: 90%, 70%, and 50%, respectively) according to the percentage of subjects whose error fell within 1.5 mm. RESULTS: All HTLs showed acceptable-to-excellent mean PE values, while the STLs Pog', Gn', and Me' showed unsatisfactory values, and the rest showed good-to-acceptable values. Regarding the degree of APP, the HTLs Ba, ramus posterior, Pm, Pog, B-point, Me, and mandibular first molar root apex exhibited low APPs. The STLs labrale superius, lower embrasure, lower lip, point of lower profile, B', Pog', Gn', and Me' also exhibited low APPs. The remainder of the HTLs and STLs showed medium-to-very high APPs. CONCLUSION: Despite the possibility of using the CNN model to predict growth, further studies are needed to improve the prediction accuracy in the HTLs and STLs of the chin area.


Subjects
Anatomic Landmarks , Artificial Intelligence , Cephalometry , Malocclusion, Angle Class I , Neural Networks, Computer , Humans , Cephalometry/methods , Child , Female , Male , Anatomic Landmarks/diagnostic imaging , Malocclusion, Angle Class I/diagnostic imaging , Algorithms , Maxillofacial Development , Forecasting , Mandible/diagnostic imaging , Mandible/growth & development
20.
Int Orthod ; 22(2): 100845, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38350255

ABSTRACT

BACKGROUND: Facial soft tissue analysis is becoming increasingly emphasized in orthodontic diagnosis and treatment planning. While traditional cephalometry primarily focuses on hard tissues, recent non-invasive imaging techniques offer the potential to comprehensively evaluate three-dimensional (3D) facial soft tissues. The aim of the study was to establish the geometric 3D and cephalometric divergence between Cone Beam Computed Tomography (CBCT)-derived images and scanned soft tissues, which is crucial for enhancing orthodontic diagnosis, minimizing patient exposure to ionizing radiation, and providing facial cephalometric parameters. MATERIAL AND METHODS: A cross-sectional study was conducted from January 2020 to May 2023. CBCT and 3D facial scans were obtained simultaneously using a specialized imaging system. Reproducible landmark points were selected for both cephalometric and soft tissue analysis. Angular and linear measurements were recorded, and correlations between CT and facial scans were statistically assessed. RESULTS: Comparisons between 10 CBCT-derived and 10 facial scan-based soft tissue representations resulted in a mean root mean square (RMS) distance of 1.8 mm. Angular measurements, such as ANB, right gonial angle, and left gonial angle, exhibited a 0.9° difference from their respective soft tissue variables. In contrast, linear measurements of total anterior facial height showed a lower correlation coefficient, equal to 0.51. The correlation between soft tissues and underlying hard tissues was more pronounced for the gonial angles. CONCLUSION: Facial soft tissue analysis using either 3D facial scans or CBCT-derived soft tissue images offers similar results for orthodontic diagnosis and treatment planning. These findings support the use of non-invasive diagnostic tools in orthodontics, although further investigations are needed to comprehensively understand the complexity of hard and soft tissue relationships.


Subjects
Cephalometry , Cone-Beam Computed Tomography , Face , Imaging, Three-Dimensional , Humans , Cone-Beam Computed Tomography/methods , Cross-Sectional Studies , Cephalometry/methods , Face/diagnostic imaging , Face/anatomy & histology , Imaging, Three-Dimensional/methods , Adult , Male , Female , Young Adult , Anatomic Landmarks/diagnostic imaging