Results 1 - 20 of 33
1.
J Appl Clin Med Phys ; 25(5): e14360, 2024 May.
Article in English | MEDLINE | ID: mdl-38648734

ABSTRACT

PURPOSE: Breast density is a significant risk factor for breast cancer and can impact the sensitivity of screening mammography. Area-based breast density measurements may not provide an accurate representation of the tissue distribution; therefore, volumetric breast density (VBD) measurements are preferred. Dual-energy mammography enables volumetric measurements without additional assumptions about breast shape. In this work we evaluated the performance of a dual-energy decomposition technique for determining VBD by applying it to virtual anthropomorphic phantoms. METHODS: The dual-energy decomposition formalism was used to quantify VBD on simulated dual-energy images of anthropomorphic virtual phantoms with known tissue distributions. We simulated 150 phantoms with volumes ranging from 50 to 709 mL and VBD ranging from 15% to 60%. Using these results, we validated a correction for the presence of skin and assessed the method's intrinsic bias and variability. As a proof of concept, the method was applied to 14 sets of clinical dual-energy images, and the resulting breast densities were compared to magnetic resonance imaging (MRI) measurements. RESULTS: Virtual phantom VBD measurements exhibited a strong correlation (Pearson's $r > 0.95$) with nominal values. The proposed skin correction eliminated the variability due to breast size and reduced the bias in VBD to a constant value of -2%. Disagreement between clinical VBD measurements using MRI and dual-energy mammography was under 10%, and the difference in the distributions was statistically non-significant. VBD measurements in the two modalities had a moderate correlation (Spearman's $\rho = 0.68$). CONCLUSIONS: Our results in virtual phantoms indicate that the material decomposition method can produce accurate VBD measurements if the presence of a third material (skin) is considered. The results from our proof of concept showed agreement between MRI and dual-energy mammography VBD.
Assessment of VBD using dual-energy images could provide complementary information in dual-energy mammography and tomosynthesis examinations.
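The core of the dual-energy decomposition step can be sketched as a 2x2 linear solve for the glandular and adipose thicknesses; the attenuation coefficients below are illustrative placeholders, not the calibrated values used in the study:

```python
import numpy as np

# Placeholder linear attenuation coefficients (cm^-1) for
# [glandular, adipose] tissue at the low- and high-energy spectra.
MU = np.array([
    [0.80, 0.45],   # low-energy spectrum
    [0.30, 0.20],   # high-energy spectrum
])

def decompose(log_att_low, log_att_high):
    """Solve MU @ [t_glandular, t_adipose] = log-attenuations for the
    two tissue thicknesses (cm)."""
    t_g, t_a = np.linalg.solve(MU, [log_att_low, log_att_high])
    return t_g, t_a

def volumetric_breast_density(t_g, t_a):
    """VBD as the glandular fraction of the total tissue thickness."""
    return t_g / (t_g + t_a)
```

A skin correction, as the abstract notes, would treat skin as a third material rather than folding it into these two components.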


Subjects
Breast Density, Breast Neoplasms, Mammography, Phantoms, Imaging, Dual-Photon Emission Radiographic Imaging, Humans, Mammography/methods, Female, Breast Neoplasms/diagnostic imaging, Dual-Photon Emission Radiographic Imaging/methods, Breast/diagnostic imaging, Image Processing, Computer-Assisted/methods, Algorithms, Magnetic Resonance Imaging/methods
2.
Sensors (Basel) ; 21(24)2021 Dec 09.
Article in English | MEDLINE | ID: mdl-34960313

ABSTRACT

COVID-19 is a communicable disease and a leading cause of death worldwide. The disease, caused by SARS-CoV-2, spreads rapidly and quickly affects the human respiratory system. It is therefore necessary to diagnose the disease at an early stage for proper treatment, recovery, and control of its spread, and an automatic diagnosis system is essential for COVID-19 detection. Methods based on artificial intelligence techniques are effective for diagnosing COVID-19 from chest X-ray images, but existing diagnosis methods lack accuracy. To address this problem, we propose an efficient and accurate diagnosis model for COVID-19. In the proposed method, a two-dimensional Convolutional Neural Network (2DCNN) is designed for COVID-19 recognition from chest X-ray images. Through transfer learning (TL), pre-trained ResNet-50 weights are transferred to the 2DCNN model to enhance its training, and the model is fine-tuned on chest X-ray image data for the final multi-class COVID-19 diagnosis. In addition, a data augmentation transformation (rotation) is used to increase the data set size for effective training of the resulting R2DCNNMC model. The experimental results demonstrate that the proposed R2DCNNMC model obtained high accuracy: 98.12% classification accuracy on the CRD data set and 99.45% on the CXI data set, compared to baseline methods. The approach performs well and could be used for COVID-19 diagnosis in e-healthcare systems.
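The rotation augmentation described above can be sketched in a few lines; right-angle rotations via numpy are one simple variant (the exact angles used by the authors are not specified here):

```python
import numpy as np

def augment_with_rotations(images):
    """Expand a list of 2-D image arrays with 90/180/270-degree
    rotations, quadrupling the effective training set size."""
    augmented = []
    for img in images:
        for k in range(4):  # k = 0 keeps the original orientation
            augmented.append(np.rot90(img, k))
    return augmented
```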


Subjects
COVID-19, Deep Learning, Telemedicine, Artificial Intelligence, COVID-19 Testing, Delivery of Health Care, Humans, SARS-CoV-2, X-Rays
3.
Sci Eng Ethics ; 26(3): 1229-1247, 2020 06.
Article in English | MEDLINE | ID: mdl-31541413

ABSTRACT

Use of patients' clinical photographs requires specific attention to confidentiality and privacy. Although there are policies and procedures for publishing clinical images, there is little systematic evidence about what patients and health professionals actually think about consent for publishing them. We investigated the opinions of three stakeholder groups (patients, students and doctors) at 3 academic healthcare institutions and 37 private practices in Croatia (791 participants in total: 292 patients, 281 medical and dental students and 281 doctors of medicine or dental medicine). The questionnaire contained patient photographs with different levels of anonymization. All three respondent groups considered that more stringent forms of permission were needed for identifiable photographs than for those with higher levels of anonymization. When the entire face was presented in a photograph, only 33% of patients considered that written permission was required, compared with 88% of the students and 89% of the doctors. Opinions about publishing patient photographs differed among the three respondent samples: almost half of the patients thought no permission was necessary, compared with one-third of students and doctors. These results show poor awareness among Croatian patients of the importance of written informed consent, as well as unsatisfactory knowledge among health professionals of policies on the publication of patient data in general. In conclusion, awareness of all stakeholders needs to be raised to achieve better protection of patient privacy rights in research and publication.


Subjects
Periodicals as Topic, Confidentiality, Croatia, Cross-Sectional Studies, Humans, Informed Consent, Students
4.
Int J Comput Dent ; 23(3): 211-218, 2020.
Article in English | MEDLINE | ID: mdl-32789308

ABSTRACT

AIM: To assess the accuracy of the DigiBrain4, Inc (DB4) Dental Classifier and DB4 Smart Search Engine in recognizing, categorizing, and classifying dental visual assets, as compared with Google Search Engine, one of the largest publicly available search engines and the largest data repository. MATERIALS AND METHODS: Dental visual assets were collected and labeled according to type, category, class, and modifiers. These assets comprised radiographs and clinical images of patients' teeth and occlusion from different angles of view. A modified SqueezeNet architecture was implemented using the TensorFlow r1.10 framework, and the model was trained on two NVIDIA Volta graphics processing units (GPUs). A program was built to search Google Images using Chrome driver (Google web driver) and submit the returned images to the DB4 Dental Classifier and DB4 Smart Search Engine. The categorical accuracy of the DB4 Dental Classifier and DB4 Smart Search Engine in recognizing, categorizing, and classifying dental visual assets was then compared with that of Google Search Engine. RESULTS: The categorical accuracy achieved using the DB4 Smart Search Engine for searching dental visual assets was 0.93, whereas that achieved using Google Search Engine was 0.32. CONCLUSION: The DB4 Dental Classifier and DB4 Smart Search Engine application and add-on proved accurate in recognizing, categorizing, and classifying dental visual assets. The search engine was able to label images and reject non-relevant results.


Subjects
Neural Networks, Computer, Search Engine, Humans
5.
Hum Brain Mapp ; 38(6): 3052-3068, 2017 06.
Article in English | MEDLINE | ID: mdl-28371107

ABSTRACT

Diffusion imaging is critical for detecting acute brain injury. However, normal apparent diffusion coefficient (ADC) maps change rapidly in early childhood, making abnormality detection difficult. In this article, we explored a clinical PACS and electronic healthcare records (EHR) to create age-specific ADC atlases for clinical radiology reference. Using the EHR and three rounds of multi-expert review, we found ADC maps from 201 children 0-6 years of age, scanned between 2006 and 2013, who had brain MRIs with no reported abnormalities and normal clinical evaluations 2+ years later. These images were grouped into 10 age bins, densely sampling the first year of life (5 bins: neonates and 4 quarters) and representing the 1-6 year age range (one bin per year). Unbiased group-wise registration was used to construct ADC atlases for the 10 age bins. We used the atlases to quantify (a) cross-sectional normative ADC variations; (b) spatiotemporally heterogeneous ADC changes; and (c) spatiotemporally heterogeneous volumetric changes. The quantified age-specific whole-brain and region-wise ADC values were compared to those from age-matched individual subjects in our study and in multiple existing independent studies. The significance of this study is that we have shown that clinically acquired images can be used to construct normative age-specific atlases. These first-of-their-kind age-specific normative ADC atlases quantitatively characterize changes in myelination-related water diffusion in the first 6 years of life. The quantified voxel-wise spatiotemporal ADC variations provide standard references to assist radiologists toward more objective interpretation of abnormalities in clinical images. Our atlases are available at https://www.nitrc.org/projects/mgh_adcatlases.
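The 10-bin age grouping described above can be expressed as a simple lookup; the exact cut-offs (for example, a neonate meaning under 1 month) are assumptions for illustration and may differ from the study's definitions:

```python
def adc_age_bin(age_in_months):
    """Map an age in months to one of the 10 atlas bins: neonates,
    four quarters of the first year, then one bin per year of the
    1-6 year age range."""
    if age_in_months < 0:
        raise ValueError("age must be non-negative")
    if age_in_months < 1:
        return "neonate"
    if age_in_months < 12:
        return f"quarter-{age_in_months // 3 + 1}"
    if age_in_months < 72:
        years = age_in_months // 12
        return f"year-{years}-to-{years + 1}"
    raise ValueError("atlas covers ages 0-6 years")
```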


Subjects
Brain Injuries/pathology, Brain Mapping, Brain/diagnostic imaging, Brain/growth & development, Diffusion Magnetic Resonance Imaging, Adult, Brain Injuries/diagnostic imaging, Child, Child, Preschool, Cohort Studies, Cross-Sectional Studies, Electronic Health Records/statistics & numerical data, Humans, Image Processing, Computer-Assisted, Infant, Infant, Newborn, Young Adult
6.
Sensors (Basel) ; 17(3)2017 Mar 09.
Article in English | MEDLINE | ID: mdl-28282957

ABSTRACT

The development of low-profile gamma-ray detectors has encouraged the production of small field of view (SFOV) hand-held imaging devices for use at the patient bedside and in operating theatres. Early development of these SFOV cameras was focussed on a single modality: gamma-ray imaging. Recently, a hybrid system combining gamma and optical imaging has been developed. This combination of optical and gamma cameras enables high-spatial-resolution multi-modal imaging, giving a superimposed scintigraphic and optical image. Hybrid imaging offers new possibilities for assisting clinicians and surgeons in localising the site of uptake in procedures such as sentinel node detection. The hybrid camera concept can be extended to a multimodal detector design offering stereoscopic images, depth estimation of gamma-emitting sources, and simultaneous gamma and fluorescence imaging. Recent improvements to the hybrid camera have been used to produce dual-modality images both in laboratory simulations and in the clinic. Hybrid imaging of a patient who underwent thyroid scintigraphy is reported. In addition, we present data showing that the hybrid camera concept can be extended to estimate the position and depth of a radionuclide distribution within an object, and we report the first combined gamma and near-infrared (NIR) fluorescence images.


Subjects
Gamma Cameras, Gamma Rays, Optical Imaging, Radionuclide Imaging
9.
J Dent Educ ; 88(5): 606-613, 2024 May.
Article in English | MEDLINE | ID: mdl-38445708

ABSTRACT

BACKGROUND: Tele-consultations are increasingly used for screening and diagnosis, yet only a few studies have assessed dental students' visual attention to clinical images. AIM: To (i) determine dental students' gaze behavior, visual fixations, and diagnostic competence while viewing clinical images, and (ii) explore potential opportunities to strengthen teaching-learning approaches. DESIGN: A Tobii Pro Nano device captured eye-tracking data for 65 dental undergraduate students in this cross-sectional study. The predetermined areas of interest (AOI) for all five clinical photographs were uploaded to the Tobii software. All participants used a think-aloud protocol with no restriction on viewing time. RESULTS: A total of 325 clinical pictures were analyzed, with an average viewing time of 189.25 ± 76.90 s. For the three frontal photographs, most participants started at the center of the image, spent a significant share of their viewing time on prominent findings, did not follow a systematic pattern, and showed poor diagnostic competence. For the remaining two pictures, most participants followed a "Z" viewing pattern (oscillating movement from left to right). CONCLUSIONS: Subjects frequently fixated on the prominent AOI but failed to make the correct diagnosis, and their viewing patterns revealed no sequential viewing. Emphasizing knowledge of common dental abnormalities and encouraging full coverage of clinical pictures could therefore improve dental students' diagnostic competence and viewing patterns.
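A minimal version of the AOI dwell-time analysis behind these findings (with hypothetical data structures, not the Tobii software's API) might look like:

```python
def aoi_dwell_shares(fixations, aois):
    """Share of total fixation time spent inside each predefined area
    of interest (AOI). `fixations` is a list of (x, y, duration_ms)
    tuples; `aois` maps a name to a bounding box (x0, y0, x1, y1)."""
    total = sum(d for _, _, d in fixations)
    shares = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = sum(d for x, y, d in fixations
                     if x0 <= x <= x1 and y0 <= y <= y1)
        shares[name] = inside / total if total else 0.0
    return shares
```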


Subjects
Eye-Tracking Technology, Students, Dental, Humans, Students, Dental/psychology, Cross-Sectional Studies, Female, Education, Dental/methods, Male, Clinical Competence, Young Adult
10.
Med Sci Educ ; 34(3): 671-678, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38887412

ABSTRACT

Anatomical images are commonly used in teaching to help students understand the spatial orientation of anatomical structures. Previous research has shown that images effectively visualize relationships between anatomical structures that are difficult to comprehend through verbal or written explanations alone. However, there is a lack of guidelines that specifically address the various ways of utilizing anatomical images and delivering them in line with multimedia and cognitive-load principles. This article aims to provide a concise overview of the proper use and delivery of anatomical images and of how these images can facilitate student interaction.

11.
JMIR Form Res ; 8: e59914, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293049

ABSTRACT

BACKGROUND: Labeling color fundus photos (CFP) is an important step in the development of artificial intelligence screening algorithms for the detection of diabetic retinopathy (DR). Most studies use the International Classification of Diabetic Retinopathy (ICDR) to assign labels to CFP, plus the presence or absence of macular edema (ME). Images can be grouped as referable or nonreferable according to these classifications. There is little guidance in the literature about how to collect and use metadata as part of the CFP labeling process. OBJECTIVE: This study aimed to improve the quality of the Multimodal Database of Retinal Images in Africa (MoDRIA) by determining whether the availability of metadata during the image labeling process influences the accuracy, sensitivity, and specificity of image labels. MoDRIA was developed as one of the inaugural research projects of the Mbarara University Data Science Research Hub, part of the Data Science for Health Discovery and Innovation in Africa (DS-I Africa) initiative. METHODS: This is a crossover assessment with 2 groups and 2 phases. Each group had 10 randomly assigned labelers who provided an ICDR score and the presence or absence of ME for each of the 50 CFP in a test image set, with and without metadata including blood pressure, visual acuity, glucose, and medical history. Specificity and sensitivity for referable retinopathy, based on ICDR scores and ME, were calculated using a 2-sided t test. Sensitivity and specificity for ICDR scores and ME with and without metadata were compared for each participant using the Wilcoxon signed rank test. Statistical significance was set at P<.05. RESULTS: The sensitivity for identifying referable DR with metadata was 92.8% (95% CI 87.6-98.0) compared with 93.3% (95% CI 87.6-98.9) without metadata, and the specificity was 84.9% (95% CI 75.1-94.6) with metadata compared with 88.2% (95% CI 79.5-96.8) without metadata.
The sensitivity for identifying the presence of ME was 64.3% (95% CI 57.6-71.0) with metadata, compared with 63.1% (95% CI 53.4-73.0) without metadata, and the specificity was 86.5% (95% CI 81.4-91.5) with metadata compared with 87.7% (95% CI 83.9-91.5) without metadata. The sensitivity and specificity of the ICDR score and the presence or absence of ME were calculated for each labeler with and without metadata; no findings were statistically significant. CONCLUSIONS: The sensitivity and specificity scores for the detection of referable DR were slightly better without metadata, but the difference was not statistically significant, so we cannot draw definitive conclusions about the impact of metadata on the sensitivity and specificity of image labels in our study. Given the importance of metadata in clinical situations, we believe that metadata may benefit labeling quality. A more rigorous study of the sensitivity and specificity of CFP labels with and without metadata is recommended.
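The headline metrics above can be reproduced from raw binary labels with a few lines of arithmetic; a sketch (labels invented for illustration):

```python
def sensitivity_specificity(truth, predicted):
    """Sensitivity and specificity for binary labels
    (1 = referable DR, 0 = non-referable)."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```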


Subjects
Diabetic Retinopathy, Metadata, Humans, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/diagnosis, Uganda, Female, Male, Cross-Over Studies, Databases, Factual, Middle Aged, Fundus Oculi, Adult, Sensitivity and Specificity, Retina/diagnostic imaging, Retina/pathology
13.
J Healthc Inform Res ; 7(1): 59-83, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36910915

ABSTRACT

The recent advances in artificial intelligence have led to the rapid development of computer-aided skin cancer diagnosis applications that perform on par with dermatologists. However, the black-box nature of such applications makes it difficult for physicians to trust the predicted decisions, preventing the proliferation of such applications in the clinical workflow. In this work, we aim to address this challenge by developing an interpretable skin cancer diagnosis approach using clinical images. Accordingly, a skin cancer diagnosis model consolidated with two interpretability methods is developed. The first interpretability method integrates skin cancer diagnosis domain knowledge, characterized by a skin lesion taxonomy, into model development, whereas the other method focuses on visualizing the decision-making process by highlighting the dominant regions of interest in skin lesion images. The proposed model is trained and validated on clinical images, since these are easily obtainable by non-specialist healthcare providers. The results demonstrate the effectiveness of incorporating the lesion taxonomy in improving model classification accuracy: our model can predict the skin lesion origin as melanocytic or non-melanocytic with an accuracy of 87%, predict lesion malignancy with 77% accuracy, and provide a disease diagnosis with an accuracy of 71%. In addition, the implemented interpretability methods assist in understanding the model's decision-making process and in detecting misdiagnoses. This work is a step toward achieving interpretability in skin cancer diagnosis using clinical images. The developed approach can assist general practitioners in making an early diagnosis, thus reducing the redundant referrals that expert dermatologists receive for further investigation.
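One simple way taxonomy knowledge can constrain a classifier's output (a sketch with an invented two-level taxonomy, not the paper's actual method or label set):

```python
# Hypothetical lesion taxonomy: origin -> malignancy -> diagnoses.
TAXONOMY = {
    "melanocytic": {
        "malignant": ["melanoma"],
        "benign": ["melanocytic nevus"],
    },
    "non-melanocytic": {
        "malignant": ["basal cell carcinoma", "squamous cell carcinoma"],
        "benign": ["seborrheic keratosis"],
    },
}

def constrain_diagnosis(origin, malignancy, disease_scores):
    """Keep only diagnoses consistent with the predicted origin and
    malignancy, then pick the highest-scoring remaining one."""
    allowed = set(TAXONOMY[origin][malignancy])
    filtered = {d: s for d, s in disease_scores.items() if d in allowed}
    return max(filtered, key=filtered.get)
```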

14.
Front Med (Lausanne) ; 10: 1114362, 2023.
Article in English | MEDLINE | ID: mdl-37358994

ABSTRACT

Introduction: Malignant skin lesions pose a great threat to patients' health. Because of the limitations of existing diagnostic techniques, such as poor accuracy and invasive operation, and because malignant skin lesions are highly similar to other skin lesions, diagnostic efficiency is low and misdiagnosis rates are high. Automatic medical image classification using computer algorithms can effectively improve clinical diagnostic efficiency. However, existing clinical datasets are sparse, and clinical images have complex backgrounds with noise interference such as lighting changes, shadows, and hair occlusion. In addition, existing classification models lack the ability to focus on lesion regions in complex backgrounds. Methods: In this paper we propose a DBN (double branch network), a two-branch model in which both the original network branch and the fused network branch use a backbone of the same structure. The feature maps of each layer of the original network branch are processed by our proposed CFEBlock (Common Feature Extraction Block), which extracts the features common to the feature maps of adjacent layers; these features are then combined with the feature maps of the corresponding layers of the fusion network branch by a FusionBlock, and the final prediction is obtained by weighting the prediction results of both branches. In addition, we constructed a new dataset, CSLI (Clinical Skin Lesion Images), by combining the publicly available PAD-UFES-20 dataset with our own collected data. The CSLI dataset contains 3361 clinical dermatology images covering six disease categories: actinic keratosis (730), cutaneous basal cell carcinoma (1136), malignant melanoma (170), cutaneous melanocytic nevus (391), squamous cell carcinoma (298) and seborrheic keratosis (636).
Results: We divided the CSLI dataset into training, validation and test sets, and report accuracy, precision, sensitivity, specificity, F1 score, balanced accuracy and AUC, together with training visualisations, ROC curves and per-disease confusion matrices, ultimately showing that the network performs well overall on the test data. Discussion: The DBN contains two identical feature extraction branches, a structure that allows shallow feature maps to be used together with deeper feature maps, with information transferred between them in both directions, providing greater flexibility and accuracy and enhancing the network's ability to focus on lesion regions. The dual-branch structure also offers more possibilities for model modification and feature transfer, and has great potential for further development.
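The final weighting of the two branch outputs can be sketched as a convex combination of class-probability vectors; the weight below is an illustrative assumption, not the value used by the authors:

```python
import numpy as np

def fuse_branch_predictions(p_original, p_fusion, w=0.5):
    """Weighted combination of the two DBN branch class-probability
    vectors, renormalized to sum to 1."""
    p = (w * np.asarray(p_original, dtype=float)
         + (1 - w) * np.asarray(p_fusion, dtype=float))
    return p / p.sum()
```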

15.
Stud Health Technol Inform ; 309: 53-57, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37869805

ABSTRACT

Numerous classification systems have been developed over the years; they not only assist dermatologists but also enable individuals, especially those living in areas with poor access to medical care, to obtain a diagnosis. In this paper, a machine learning (ML) model that performs binary classification is trained and tested in order to evaluate its effectiveness in giving the right diagnosis and to point out the limitations of the method, which include, but are not limited to, the quality of smartphone images and the lack of FAIR image datasets for model training. The results indicate that many measures must be taken and improvements made before such a system can become a reliable tool in real-life circumstances.


Subjects
Dermatology, Melanoma, Skin Neoplasms, Humans, Skin Neoplasms/diagnosis, Dermatology/methods, Melanoma/diagnosis, Machine Learning, Early Diagnosis, Dermoscopy/methods, Syndrome
16.
Cureus ; 15(8): e44018, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37753028

ABSTRACT

INTRODUCTION: Artificial intelligence in oncology has gained considerable interest in recent years. Early detection of oral squamous cell carcinoma (OSCC) is crucial for early management, a better prognosis, and overall survival. Machine learning (ML) has also been used in oral cancer studies to explore discrimination between clinically normal mucosa and oral cancer. MATERIALS AND METHODS: A dataset comprising 360 clinical intra-oral images of OSCC, oral potentially malignant disorders (OPMDs) and clinically healthy oral mucosa was used. Clinicians trained the machine learning model with the clinical images (n=300). Roboflow software (Roboflow Inc, USA) was used to classify and annotate images, with multi-class annotation and object detection models trained by two expert oral pathologists. The test dataset (n=60) of new clinical images was then evaluated by two clinicians and by Roboflow. The results were tabulated, and kappa statistics were computed using SPSS v23.0 (IBM Corp., Armonk, NY). RESULTS: The observed outcomes revealed a mean average precision (mAP) of 25.4%, a precision of 29.8% and a recall of 32.9%. A kappa value of 0.7 indicates moderate agreement between the clinicians and the machine learning model. On the test dataset, the specificity and sensitivity of the Roboflow machine learning model were 75% and 88.9%, respectively. CONCLUSION: Machine learning showed promising results in the early detection of suspected lesions in clinical intra-oral images and can aid general dentists and patients in detecting lesions such as OPMDs and OSCC that require biopsy and immediate treatment.
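The kappa statistic reported above measures inter-rater agreement beyond chance; a pure-Python re-implementation (not the SPSS procedure used in the study) is short:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)
```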

20.
Indian J Dermatol ; 67(5): 547-551, 2022.
Article in English | MEDLINE | ID: mdl-36865837

ABSTRACT

Clinical images are of utmost importance for the majority of dermatological research and publications. The rich collection of clinical images in medical journals may help in developing machine learning programs in the future or facilitate image-based meta-analysis. However, a scale bar must be present in an image for a lesion to be measured from it. We audited recent issues of three widely circulated Indian dermatology journals and found that among 345 clinical images, only 2.61% had a scale bar with units. With this background, in this article we provide three methods for capturing and processing clinical images with a scale. This article should help dermatologists consider incorporating a scale bar in their images for the progress of science.
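Once a scale bar of known physical length appears in an image, converting a lesion's pixel extent to millimetres is a single ratio (the function name and numbers here are illustrative):

```python
def lesion_size_mm(lesion_extent_px, scalebar_length_px, scalebar_length_mm):
    """Convert a lesion's extent in pixels to millimetres using a
    scale bar of known length captured in the same image plane."""
    pixels_per_mm = scalebar_length_px / scalebar_length_mm
    return lesion_extent_px / pixels_per_mm
```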
