Results 1 - 7 of 7
1.
Comput Biol Med; 168: 107723, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38000242

ABSTRACT

Reliable and accurate brain tumor segmentation is a challenging task even with appropriately acquired brain images. Tumor grading and segmentation using Magnetic Resonance Imaging (MRI) are necessary steps for correct diagnosis and treatment planning. Different MRI sequences (T1, Flair, T1ce, T2, etc.) identify different parts of the tumor. Because each imaging modality highlights tissue differently, each input modality contributes distinct information and detail. By using multiple MRI modalities, a diagnosis system can therefore capture more unique details, leading to better segmentation results, especially at fuzzy borders. In this study, an optimized Convolutional Neural Network (CNN) is proposed to achieve an automatic and robust brain tumor segmentation framework using four MRI sequence images. All weight and bias values of the CNN model are adjusted using an Improved Chimp Optimization Algorithm (IChOA). In the first step, all four input images are normalized to find potential areas of the existing tumor. Next, the IChOA selects the best features, scored with a Support Vector Machine (SVM) classifier. Finally, the selected features are fed to the optimized CNN model to classify each object for brain tumor segmentation. Thus, the proposed IChOA is used both for feature selection and for optimizing the hyperparameters of the CNN model. Experiments conducted on the BRATS 2018 dataset demonstrate superior performance (Precision of 97.41%, Recall of 95.78%, and Dice Score of 97.04%) compared to existing frameworks.
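
A minimal sketch of the pipeline's feature-selection stage, assuming toy data: since the Improved Chimp Optimization Algorithm (IChOA) is not a packaged library, a plain random search over binary feature masks stands in for it here, scored with an SVM exactly as a wrapper method would be. All array shapes, feature counts, and labels below are illustrative assumptions, not the authors' data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Step 1 (per the abstract): min-max normalize each input MRI modality.
def normalize(modality):
    lo, hi = modality.min(), modality.max()
    return (modality - lo) / (hi - lo + 1e-8)

# Toy stand-in data: 200 candidate regions x 32 intensity/texture features.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)  # 1 = tumor, 0 = background (hypothetical)

# Step 2: wrapper feature selection -- score binary masks with an SVM,
# as IChOA's fitness function does, and keep the best-scoring mask.
def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(30):  # IChOA would guide this search; random search stands in
    mask = rng.random(32) < 0.5
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(f"selected {best_mask.sum()} features, CV accuracy {best_score:.3f}")
# Step 3 (not shown): feed the selected features to the IChOA-tuned CNN.
```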


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Brain Neoplasms/diagnostic imaging; Algorithms; Brain; Magnetic Resonance Imaging/methods
2.
Multimed Tools Appl; 82(24): 37855-37876, 2023.
Article in English | MEDLINE | ID: mdl-37799146

ABSTRACT

Lifelogging is the process of passively capturing personal daily events via wearable devices, ultimately creating a visual diary that encodes every aspect of one's life for future sharing or recollection. In this paper, we present LifeSeeker, a lifelog image retrieval system that has participated in the Lifelog Search Challenge (LSC) for three years, since 2019. Our objective is to help users find specific life moments using a combination of textual descriptions, spatial relationships, location information, and image similarity. In addition to the LSC challenge results, a further experiment was conducted to evaluate the retrieval power of our system with both expert and novice users. This experiment showed how effectively non-experts interact with the system.
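
A minimal sketch of the kind of late-fusion ranking such a retrieval system performs: each lifelog frame gets a text-match score and a visual-similarity score, and a weighted sum ranks the results. The embeddings, scores, and fusion weights below are illustrative assumptions, not LifeSeeker's actual components.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 1000

# Hypothetical unit-norm image embeddings for the lifelog frames.
img_emb = rng.normal(size=(n_frames, 128))
img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)

# Hypothetical embedding of the textual query's visual concept.
query_emb = rng.normal(size=128)
query_emb /= np.linalg.norm(query_emb)

visual_score = img_emb @ query_emb      # cosine similarity
text_score = rng.random(n_frames)       # stand-in for e.g. BM25 over metadata

combined = 0.6 * text_score + 0.4 * visual_score  # assumed fusion weights
top10 = np.argsort(combined)[::-1][:10]
print("top-ranked frames:", top10)
```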

3.
J Biomed Semantics; 14(1): 12, 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37653549

ABSTRACT

BACKGROUND: This paper proposes Cyrus, a new transparency evaluation framework for Open Knowledge Extraction (OKE) systems. Cyrus is based on state-of-the-art transparency models and linked data quality assessment dimensions, and brings together a comprehensive view of transparency dimensions for OKE systems. The Cyrus framework is used to evaluate the transparency of three linked datasets built from the same corpus by three state-of-the-art OKE systems. The evaluation is performed automatically using a combination of three state-of-the-art FAIRness (Findability, Accessibility, Interoperability, Reusability) assessment tools and a linked data quality evaluation framework called Luzzu. This evaluation covers the six Cyrus data transparency dimensions for which existing assessment tools could be identified. OKE systems extract structured knowledge from unstructured or semi-structured text in the form of linked data, and are fundamental components of advanced knowledge services. However, due to the lack of a transparency framework for OKE, most OKE systems are not transparent: their processes and outcomes are neither understandable nor interpretable. A comprehensive framework sheds light on different aspects of transparency, allows comparison between the transparency of different systems by supporting the development of transparency scores, and gives insight into a system's transparency weaknesses and ways to improve them. Automatic transparency evaluation helps with scalability and facilitates transparency assessment. The transparency problem has been identified as critical by the European Union Trustworthy Artificial Intelligence (AI) guidelines. In this paper, Cyrus provides the first comprehensive view of transparency dimensions for OKE systems by merging the perspectives of the FAccT (Fairness, Accountability, and Transparency), FAIR, and linked data quality research communities. RESULTS: In Cyrus, data transparency comprises ten dimensions grouped into two categories. In this paper, six of these dimensions, i.e., provenance, interpretability, understandability, licensing, availability, and interlinking, have been evaluated automatically for three state-of-the-art OKE systems, using state-of-the-art metrics and tools. Covid-on-the-Web is identified as having the highest mean transparency. CONCLUSIONS: This is the first study of the transparency of OKE systems to provide a comprehensive set of transparency dimensions spanning ethics, trustworthy AI, and data quality approaches to transparency. It also demonstrates, for the first time, how to perform automated transparency evaluation by combining existing FAIRness and linked data quality assessment tools. We show that state-of-the-art OKE systems vary in the transparency of the linked data they generate, and that these differences can be automatically quantified, leading to potential applications in trustworthy AI, compliance, data protection, data governance, and future OKE system design and testing.
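
A minimal sketch of the aggregation step, assuming placeholder scores: each dataset gets one normalized score per evaluated Cyrus dimension, and the mean yields the ranking the abstract reports. The numbers below are invented for illustration, not the paper's measurements.

```python
# Six Cyrus dimensions evaluated automatically in the paper.
dimensions = ["provenance", "interpretability", "understandability",
              "licensing", "availability", "interlinking"]

# Hypothetical normalized scores (0..1) per OKE-produced dataset.
scores = {
    "Covid-on-the-Web": [0.90, 0.80, 0.85, 1.00, 0.90, 0.70],
    "other_system_A": [0.60, 0.70, 0.65, 0.80, 0.70, 0.50],
}

for system, vals in scores.items():
    mean = sum(vals) / len(vals)
    print(f"{system}: mean transparency = {mean:.2f}")
```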


Subject(s)
Artificial Intelligence; COVID-19; Humans; Semantic Web
4.
Comput Biol Med; 152: 106443, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36563539

ABSTRACT

The Global Cancer Statistics 2020 reported breast cancer (BC) as the most commonly diagnosed cancer type. Early detection therefore reduces the risk of death from the disease. Breast imaging is among the most frequently used techniques for locating cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) systems assist experts in detecting abnormalities in medical images. In recent decades, CAD has applied deep learning (DL) and machine learning approaches to perform complex computer vision tasks in medicine and to improve decision-making for doctors and radiologists. The most widely used image processing technique in CAD systems is segmentation, which extracts the region of interest (ROI). This research provides a detailed description of the main categories of segmentation procedures, which fall into three classes: supervised, unsupervised, and DL. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons. This will help researchers better understand these techniques and assist them in choosing the appropriate method for a given use case.
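
A minimal sketch of the unsupervised category the review describes, assuming a synthetic image: Otsu thresholding picks an intensity cutoff automatically and extracts a candidate ROI without any labeled training data.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)

# Synthetic stand-in for a breast image, with one brighter "lesion".
image = rng.normal(0.3, 0.05, size=(128, 128))
image[40:80, 50:90] += 0.4

t = threshold_otsu(image)   # data-driven intensity threshold
roi_mask = image > t        # binary region of interest
print(f"threshold={t:.3f}, ROI pixels={roi_mask.sum()}")
```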


Subject(s)
Breast Neoplasms; Deep Learning; Mammary Neoplasms, Animal; Humans; Animals; Female; Mammography/methods; Machine Learning; Breast Neoplasms/pathology; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
5.
Comput Biol Med; 152: 106405, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36512875

ABSTRACT

BACKGROUND: Brain cancer is a destructive and life-threatening disease that imposes immense negative effects on patients' lives. Detecting brain tumors at an early stage therefore improves the impact of treatment and increases patients' survival rates. However, detecting brain tumors in their initial stages is a demanding task and an unmet need. METHODS: The present study provides a comprehensive review of recent Artificial Intelligence (AI) methods for diagnosing brain tumors using MRI images. These AI techniques can be divided into supervised, unsupervised, and Deep Learning (DL) methods. RESULTS: Diagnosing and segmenting brain tumors usually begins with Magnetic Resonance Imaging (MRI), since MRI is a noninvasive imaging technique. Another challenge is that technology is growing faster than the number of medical staff able to employ it, which has increased the risk of diagnostic misinterpretation. Developing robust automated brain tumor detection techniques has therefore been widely studied in recent years. CONCLUSION: The current review analyzes the performance of modern methods in this area. Moreover, various image segmentation methods and recent research efforts are summarized. Finally, the paper discusses open questions and suggests directions for future research.
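
A minimal sketch of one unsupervised family the review covers, intensity-based k-means segmentation, applied to a synthetic slice standing in for real MRI data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic "MRI slice" with a hyperintense region (illustrative only).
mri_slice = rng.normal(0.2, 0.05, size=(64, 64))
mri_slice[20:40, 25:45] += 0.5

pixels = mri_slice.reshape(-1, 1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmentation = labels.reshape(mri_slice.shape)
print("cluster sizes:", np.bincount(labels))
```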


Subject(s)
Artificial Intelligence; Brain Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain/pathology
6.
Int J Mater Form; 15(3): 30, 2022.
Article in English | MEDLINE | ID: mdl-35509322

ABSTRACT

Metal additive manufacturing (AM), which uses a layer-by-layer approach to fabricate parts, has many potential advantages over conventional techniques, including the ability to produce complex geometries, rapid production of new designs, personalised production, lower cost, and less material waste. While these advantages make AM an attractive option for industry, determining the process parameters that yield specific properties, such as the level of porosity and tensile strength, can be a long and costly endeavour. This review examines the state of the art in the control of part properties in AM, including the effect of microstructure on part properties. The simulation of microstructure formation via numerical modelling and machine learning is examined; this can provide process quality control and has the potential to aid rapid process optimisation via closed-loop control. In-situ monitoring of the AM process is also discussed as a route to first-time-right production, along with hybrid approaches that combine AM fabrication with post-processing steps such as shock peening, heat treatment, and rolling. The paper closes with an outlook on potential avenues for further research in metal AM.
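
A minimal sketch of the data-driven process-optimisation idea, assuming synthetic data: a regression surrogate maps process parameters (here laser power and scan speed) to a measured property such as porosity, so candidate parameter sets can be screened cheaply. The data-generating formula is an invented toy, not a validated process model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Synthetic build records: power (W), scan speed (mm/s), resulting porosity.
power = rng.uniform(150, 400, 200)
speed = rng.uniform(500, 2000, 200)
energy = power / speed  # crude energy-density proxy (toy assumption)
porosity = np.clip(0.05 - 0.1 * energy + rng.normal(0, 0.005, 200), 0, None)

X = np.column_stack([power, speed])
surrogate = RandomForestRegressor(random_state=0).fit(X, porosity)

candidate = np.array([[280.0, 900.0]])  # proposed parameter set
print(f"predicted porosity: {surrogate.predict(candidate)[0]:.4f}")
```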

7.
AI Soc; 37(3): 823-835, 2022.
Article in English | MEDLINE | ID: mdl-35039719

ABSTRACT

Co-authored by a Computer Scientist and a Digital Humanist, this article examines the challenges faced by cultural heritage institutions in the digital age, which have led to the closure of the vast majority of born-digital archival collections. It focuses particularly on cultural organizations such as libraries, museums and archives, used by historians, literary scholars and other Humanities scholars. Most born-digital records held by cultural organizations are inaccessible due to privacy, copyright, commercial and technical issues. Even when born-digital data are publicly available (as in the case of web archives), users often need to physically travel to repositories such as the British Library or the Bibliothèque Nationale de France to consult web pages. Given enough sample data from which to learn and train their models, AI, and more specifically machine learning algorithms, offer the opportunity to improve and ease access to digital archives by learning to perform complex human tasks. These range from providing intelligent support for searching the archives to automating tedious and time-consuming tasks. In this article, we focus on sensitivity review as a practical solution for unlocking digital archives that would allow archival institutions to make non-sensitive information available. This promise to make archives more accessible does not come without warnings about potential pitfalls and risks: inherent errors, "black box" approaches that make the algorithm inscrutable, and risks related to biased, fake, or partial information. Our central argument is that AI can deliver on its promise to make digital archival collections more accessible, but it also creates new challenges, particularly in terms of ethics. In the conclusion, we insist on the importance of fairness, accountability and transparency in the process of making digital archives more accessible.
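
A minimal sketch of machine-assisted sensitivity review as the article frames it, assuming invented snippets: a text classifier scores how likely a record is to be sensitive so that archivists can prioritise human review and release the rest. The training texts and labels are placeholders, not real archival data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets (1 = sensitive, 0 = non-sensitive).
train_texts = [
    "medical history of the named patient",
    "minutes of the public planning meeting",
    "home address and bank details of a donor",
    "press release announcing the exhibition",
]
train_labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Rank new records by predicted sensitivity for human review.
docs = ["letter naming a living person's diagnosis",
        "catalogue entry for a 19th-century map"]
for doc, p in zip(docs, clf.predict_proba(docs)[:, 1]):
    print(f"{p:.2f}  {doc}")
```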
