Results 1 - 13 of 13
1.
R Soc Open Sci ; 11(8): 231994, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39113766

ABSTRACT

Global artificial intelligence (AI) governance must prioritize equity, embrace a decolonial mindset, and provide the Global South countries the authority to spearhead solution creation. Decolonization is crucial for dismantling Western-centric cognitive frameworks and mitigating biases. Integrating a decolonial approach to AI governance involves recognizing persistent colonial repercussions, leading to biases in AI solutions and disparities in AI access based on gender, race, geography, income and societal factors. This paradigm shift necessitates deliberate efforts to deconstruct imperial structures governing knowledge production, perpetuating global unequal resource access and biases. This research evaluates Sub-Saharan African progress in AI governance decolonization, focusing on indicators like AI governance institutions, national strategies, sovereignty prioritization, data protection regulations, and adherence to local data usage requirements. Results show limited progress, with only Rwanda notably responsive to decolonization among the ten countries evaluated; 80% are 'decolonization-aware', and one is 'decolonization-blind'. The paper provides a detailed analysis of each nation, offering recommendations for fostering decolonization, including stakeholder involvement, addressing inequalities, promoting ethical AI, supporting local innovation, building regional partnerships, capacity building, public awareness, and inclusive governance. This paper contributes to elucidating the challenges and opportunities associated with decolonization in SSA countries, thereby enriching the ongoing discourse on global AI governance.

2.
Cancers (Basel) ; 16(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38611117

ABSTRACT

Endoscopic pathological findings of the gastrointestinal tract are crucial for the early diagnosis of colorectal cancer (CRC). Previous deep learning works, aimed at improving CRC detection performance and reducing subjective analysis errors, are limited to polyp segmentation. Pathological findings were not considered, and only convolutional neural networks (CNNs), which are not able to handle global image feature information, were utilized. This work introduces a novel vision transformer (ViT)-based approach for early CRC detection. The core components of the proposed approach are ViTCol, a boosted vision transformer for classifying endoscopic pathological findings, and PUTS, a vision transformer-based model for polyp segmentation. Results demonstrate the superiority of this vision transformer-based CRC detection method over existing CNN and vision transformer models. ViTCol exhibited an outstanding performance in classifying pathological findings, with an area under the receiver operating characteristic curve (AUC) value of 0.9999 ± 0.001 on the Kvasir dataset. PUTS provided outstanding results in segmenting polyp images, with a mean intersection over union (mIoU) of 0.8673 and 0.9092 on the Kvasir-SEG and CVC-Clinic datasets, respectively. This work underscores the value of spatial transformers in localizing input images, which can seamlessly integrate into the main vision transformer network, enhancing the automated identification of critical image features for early CRC detection.
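For reference, the mIoU segmentation metric reported above averages per-class intersection-over-union between a predicted and a ground-truth mask. A minimal NumPy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def miou(pred, target, num_classes=2):
    """Mean intersection-over-union between two integer label masks."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:                     # class absent from both masks: skip
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

For binary polyp masks, `num_classes=2` averages the foreground and background IoU, which matches the usual Kvasir-SEG evaluation convention.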

3.
Am J Pathol ; 194(3): 402-414, 2024 03.
Article in English | MEDLINE | ID: mdl-38096984

ABSTRACT

Accurate staging of human epidermal growth factor receptor 2 (HER2) expression is vital for evaluating breast cancer treatment efficacy. However, it typically involves costly and complex immunohistochemical staining, along with hematoxylin and eosin staining. This work presents customized vision transformers for staging HER2 expression in breast cancer using only hematoxylin and eosin-stained images. The proposed algorithm comprised three modules: a localization module for weakly localizing critical image features using spatial transformers, an attention module for global learning via vision transformers, and a loss module to determine proximity to a HER2 expression level based on input images by calculating ordinal loss. Results, reported with 95% CIs, reveal the proposed approach's success in HER2 expression staging: area under the receiver operating characteristic curve, 0.9202 ± 0.01; precision, 0.922 ± 0.01; sensitivity, 0.876 ± 0.01; and specificity, 0.959 ± 0.02 over fivefold cross-validation. Comparatively, this approach significantly outperformed conventional vision transformer models and state-of-the-art convolutional neural network models (P < 0.001). Furthermore, it surpassed existing methods when evaluated on an independent test data set. This work holds great importance, aiding HER2 expression staging in breast cancer treatment while circumventing the costly and time-consuming immunohistochemical staining procedure, thereby addressing diagnostic disparities in low-resource settings and low-income countries.


Subject(s)
Breast Neoplasms, Receptor, ErbB-2, Humans, Female, Breast Neoplasms/metabolism, Hematoxylin, Eosine Yellowish-(YS), Staining and Labeling
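The abstract's loss module computes an ordinal loss over HER2 expression levels but does not spell out its form. One common formulation encodes ordinal level k as cumulative binary targets ("is the level greater than threshold j?") and sums binary cross-entropies over the thresholds; the sketch below uses that encoding as an assumption, not as the paper's actual loss:

```python
import numpy as np

def ordinal_loss(logits, level, num_levels=4):
    """Cumulative-link ordinal loss: level k maps to binary targets
    [k > 0, k > 1, ..., k > num_levels - 2], one sigmoid per threshold."""
    targets = (np.arange(num_levels - 1) < level).astype(float)
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    eps = 1e-12  # numerical guard for log
    return float(-np.sum(targets * np.log(probs + eps)
                         + (1.0 - targets) * np.log(1.0 - probs + eps)))
```

Unlike plain cross-entropy, this penalizes predictions more the further they sit from the true level, which is what makes it suitable for ordered grades such as HER2 0/1+/2+/3+.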
4.
Am J Pathol ; 193(12): 2080-2098, 2023 12.
Article in English | MEDLINE | ID: mdl-37673327

ABSTRACT

Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation may lead to high costs and be ineffective because diagnosis depends on the facility and the pathologists' insights and experiences. A convolutional neural network acts as a computer-based observer to improve clinicians' capacity in grading breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offered a patchless automated grading system for magnification-dependent and magnification-independent classifications. With an overall accuracy (mean ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.


Subject(s)
Breast Neoplasms, Neural Networks, Computer, Humans, Female, Breast Neoplasms/pathology, Breast/pathology, Diagnosis, Computer-Assisted, Biopsy
5.
Diagnostics (Basel) ; 13(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36672988

ABSTRACT

Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to solve this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed as a technique to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, thus outperforming the CNN-based transfer-learning models and vision transformer models trained from scratch. The technique can, hence, be applied in a clinical setting, to improve the early diagnosis of breast cancer.
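The AUC values reported throughout these abstracts can be computed directly from labels and classifier scores via the rank (Mann-Whitney U) formulation. A minimal sketch, independent of any of the papers' code:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U (rank) formulation."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # probability that a random positive outscores a random negative (ties = 1/2)
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return float(wins / (len(pos) * len(neg)))
```

An AUC of 1 ± 0, as reported above, corresponds to every malignant score exceeding every benign score across all cross-validation folds.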

6.
Diagnostics (Basel) ; 12(11)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36359497

ABSTRACT

Convolutional neural networks (CNNs) have enhanced ultrasound image-based early breast cancer detection. Vision transformers (ViTs) have recently surpassed CNNs as the most effective method for natural image analysis. ViTs have proven their capability of incorporating more global information than CNNs at lower layers, and their skip connections are more powerful than those of CNNs, which endows ViTs with superior performance. However, the effectiveness of ViTs in breast ultrasound imaging has not yet been investigated. Here, we present BUViTNet, breast ultrasound detection via ViTs, where ViT-based multistage transfer learning is performed using ImageNet and cancer cell image datasets prior to transfer learning for classifying breast ultrasound images. We utilized two publicly available ultrasound breast image datasets, Mendeley and breast ultrasound images (BUSI), to train and evaluate our algorithm. The proposed method achieved the highest area under the receiver operating characteristic curve (AUC) of 1 ± 0, Matthews correlation coefficient (MCC) of 1 ± 0, and kappa score of 1 ± 0 on the Mendeley dataset. Furthermore, BUViTNet achieved the highest AUC of 0.968 ± 0.02, MCC of 0.961 ± 0.01, and kappa score of 0.959 ± 0.02 on the BUSI dataset. BUViTNet outperformed ViT trained from scratch, ViT-based conventional transfer learning, and CNN-based transfer learning in classifying breast ultrasound images (p < 0.01 in all cases). Our findings indicate that improved transformers are effective in analyzing breast images and can provide an improved diagnosis if used in clinical settings. Future work will consider the use of a wide range of datasets and parameters for optimized performance.
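For reference, the Matthews correlation coefficient reported above reduces to a single confusion-matrix formula for binary labels; a minimal sketch:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels in {0, 1}."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float((tp * tn - fp * fn) / denom) if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), which makes it more informative than accuracy on imbalanced benign/malignant datasets.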

7.
Micromachines (Basel) ; 13(9)2022 Sep 11.
Article in English | MEDLINE | ID: mdl-36144131

ABSTRACT

Breast cancer is the most common type of cancer, and it is treated with surgical intervention, radiotherapy, chemotherapy, or a combination of these regimens. Despite chemotherapy's ample use, it has limitations such as poor bioavailability, adverse side effects, high-dose requirements, low therapeutic indices, development of multiple drug resistance, and non-specific targeting. Drug delivery vehicles or carriers, of which nanocarriers are prominent, have been introduced to overcome chemotherapy limitations. Nanocarriers have been preferentially used in breast cancer chemotherapy because of their role in protecting therapeutic agents from degradation, enabling efficient drug concentration in target cells or tissues, overcoming drug resistance, and their relatively small size. However, nanocarriers are affected by physiological barriers, bioavailability of transported drugs, and other factors. To resolve these issues, the use of external stimuli has been introduced, such as ultrasound, infrared light, thermal stimulation, microwaves, and X-rays. Recently, ultrasound-responsive nanocarriers have become popular because they are cost-effective, non-invasive, specific, tissue-penetrating, and deliver high drug concentrations to their target. In this paper, we review recent developments in ultrasound-guided nanocarriers for breast cancer chemotherapy, discuss the relevant challenges, and provide insights into future directions.

8.
HardwareX ; 11: e00276, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35509911

ABSTRACT

Around 800 women die each day worldwide from complications of pregnancy and childbirth. Vital signs such as blood pressure, pulse rate, and temperature are fundamental parameters for ensuring the health and safety of women and newborns during pregnancy, labor, and childbirth. Approximately 10% of women experience hypertension (greater than 140/90) during pregnancy. High blood pressure during pregnancy can place extra stress on the heart and kidneys and can increase the risk of heart disease. Therefore, early recognition of abnormal, pregnancy-induced vital signs can allow timely identification of clinical deterioration. Currently used technologies are expensive and complex in design, with implementation challenges in low-resource settings where maternal morbidity and mortality are higher. Considering this need, a hardware device has been designed and developed here: a low-cost, portable device for monitoring pregnant women's vital signs (cuff-less blood pressure, heart rate, and body temperature). The developed device would be of remarkable benefit for monitoring maternal vital signs, especially in low-resource settings, where vital-signs monitoring devices are scarce.

9.
Diagnostics (Basel) ; 12(4)2022 Mar 30.
Article in English | MEDLINE | ID: mdl-35453909

ABSTRACT

The ultrasonic technique is an indispensable imaging modality for the diagnosis of breast cancer in young women due to its ability to efficiently capture tissue properties and decrease the negative recognition rate, thereby avoiding non-essential biopsies. Despite these advantages, ultrasound images are affected by speckle noise, which generates fine false structures that decrease image contrast and obscure the actual boundaries of tissues in the ultrasound image. Moreover, speckle noise negatively impacts the subsequent stages of the image-processing pipeline, such as edge detection, segmentation, feature extraction, and classification. Previous studies have formulated various speckle-reduction methods for ultrasound images; however, these methods are unable to retain finer edge details and require long processing times. In this study, we propose a breast ultrasound de-speckling method based on rotational-invariant block-matching non-local means (RIBM-NLM) filtering. The effectiveness of our method has been demonstrated by comparing our results with three established de-speckling techniques, the switching bilateral filter (SBF), the non-local means filter (NLMF), and the optimized non-local means filter (ONLMF), on 250 images from a public dataset and 6 images from a private dataset. Evaluation metrics, including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE), were utilized to measure performance. With the proposed method, we recorded an average SSIM of 0.8915, PSNR of 65.97, MSE of 0.014, RMSE of 0.119, and computational time of 82 seconds at a noise variance of 20 dB on the public dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF. Similarly, the proposed method achieved an average SSIM of 0.83, PSNR of 66.26, MSE of 0.015, RMSE of 0.124, and computational time of 83 seconds at a noise variance of 20 dB on the private dataset, all with p-values of less than 0.001 compared against NLMF, ONLMF, and SBF.
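For intuition about the baseline the RIBM-NLM method builds on, a minimal pixel-wise non-local means filter is sketched below. Note this is the plain NLM baseline (comparable to NLMF above), not the rotational-invariant block-matching variant the paper proposes:

```python
import numpy as np

def nlm_despeckle(img, patch=1, search=3, h=0.1):
    """Minimal non-local means filter: each pixel becomes a weighted
    average of pixels in its search window, with weights determined
    by the similarity of the surrounding patches.

    patch  : patch radius used to compare neighbourhoods
    search : search-window radius around each pixel
    h      : filtering strength (larger -> smoother output)
    """
    img = np.asarray(img, dtype=float)
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ic, jc = i + pad, j + pad
            ref = padded[ic - patch:ic + patch + 1, jc - patch:jc + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = padded[ic + di - patch:ic + di + patch + 1,
                                  jc + dj - patch:jc + dj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ic + di, jc + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```

This brute-force version is O(n²) in window size per pixel; the block-matching and rotational-invariance refinements in RIBM-NLM address exactly the edge-preservation and processing-time weaknesses this naive form exhibits.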

10.
Cancers (Basel) ; 14(5)2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35267587

ABSTRACT

Despite great achievements in classifying mammographic breast-mass images via deep-learning (DL), obtaining large amounts of training data and ensuring generalizations across different datasets with robust and well-optimized algorithms remain a challenge. ImageNet-based transfer learning (TL) and patch classifiers have been utilized to address these challenges. However, researchers have been unable to achieve the desired performance for DL to be used as a standalone tool. In this study, we propose a novel multi-stage TL from ImageNet and cancer cell line image pre-trained models to classify mammographic breast masses as either benign or malignant. We trained our model on three public datasets: Digital Database for Screening Mammography (DDSM), INbreast, and Mammographic Image Analysis Society (MIAS). In addition, a mixed dataset of the images from these three datasets was used to train the model. We obtained an average five-fold cross validation AUC of 1, 0.9994, 0.9993, and 0.9998 for DDSM, INbreast, MIAS, and mixed datasets, respectively. Moreover, the observed performance improvement using our method against the patch-based method was statistically significant, with a p-value of 0.0029. Furthermore, our patchless approach performed better than patch- and whole image-based methods, improving test accuracy by 8% (91.41% vs. 99.34%), tested on the INbreast dataset. The proposed method is of significant importance in solving the need for a large training dataset as well as reducing the computational burden in training and implementing the mammography-based deep-learning models for early diagnosis of breast cancer.
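The five-fold cross-validation protocol behind the AUC figures above amounts to shuffled index splitting; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def kfold_splits(n, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for shuffled k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Each sample appears in exactly one test fold, so metrics averaged over the k folds (as in the AUCs reported above) use every image for evaluation exactly once.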

11.
Diagnostics (Basel) ; 12(1)2022 Jan 06.
Article in English | MEDLINE | ID: mdl-35054303

ABSTRACT

Breast cancer diagnosis is one of the many areas that has taken advantage of artificial intelligence to achieve better performance, despite the fact that the availability of a large medical image dataset remains a challenge. Transfer learning (TL) is a phenomenon that enables deep learning algorithms to overcome the issue of shortage of training data in constructing an efficient model by transferring knowledge from a given source task to a target task. However, in most cases, ImageNet (natural images) pre-trained models that do not include medical images are utilized for transfer learning to medical images. Considering the utilization of microscopic cancer cell line images that can be acquired in large amounts, we argue that learning from both natural and medical datasets improves performance in ultrasound breast cancer image classification. The proposed multistage transfer learning (MSTL) algorithm was implemented using three pre-trained models: EfficientNetB2, InceptionV3, and ResNet50, with three optimizers: Adam, Adagrad, and stochastic gradient descent (SGD). Datasets of 20,400 cancer cell images, 200 ultrasound images from Mendeley, and 400 ultrasound images from the MT-Small-Dataset were used. ResNet50-Adagrad-based MSTL achieved a test accuracy of 99 ± 0.612% on the Mendeley dataset and 98.7 ± 1.1% on the MT-Small-Dataset, averaging over 5-fold cross validation. A p-value of 0.01191 was achieved when comparing MSTL against ImageNet-based TL for the Mendeley dataset. The result is a significant improvement in the performance of artificial intelligence methods for ultrasound breast cancer classification compared to state-of-the-art methods and could remarkably improve the early diagnosis of breast cancer in young women.
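The multistage transfer-learning idea, train a classifier head on an intermediate task and then warm-start it on the target task over a frozen feature extractor, can be sketched in miniature. Everything here is a toy stand-in: the "backbone" is a fixed random projection rather than a real pretrained network such as ResNet50, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

def features(x, W):
    """Frozen 'pretrained' backbone stand-in: fixed projection + ReLU."""
    return np.maximum(x @ W, 0.0)

def train_head(feats, y, w=None, lr=0.1, steps=300):
    """Logistic-regression head; pass `w` to warm-start from a prior stage."""
    if w is None:
        w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w = w - lr * feats.T @ (p - y) / len(y)
    return w

W = rng.normal(size=(2, 8))   # frozen backbone weights (never updated)

# Stage 1: intermediate task (stand-in for the cancer cell-line images)
xa = rng.normal(size=(200, 2))
ya = (xa[:, 0] > 0).astype(float)
w1 = train_head(features(xa, W), ya)

# Stage 2: target task (stand-in for ultrasound images), warm-started from w1
xb = rng.normal(size=(200, 2))
yb = (xb[:, 0] > 0).astype(float)
w2 = train_head(features(xb, W), yb, w=w1.copy(), steps=100)

acc = float(np.mean(((features(xb, W) @ w2) > 0) == yb))
```

The design point is that stage 2 starts from stage-1 weights instead of zeros, so fewer target-domain samples and fewer steps are needed, which is the motivation MSTL gives for inserting the cancer cell-line stage between ImageNet and ultrasound.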

12.
Clin Lymphoma Myeloma Leuk ; 21(11): e903-e914, 2021 11.
Article in English | MEDLINE | ID: mdl-34493478

ABSTRACT

BACKGROUND: Conventional identification of blood disorders based on visual inspection of blood smears through a microscope is time-consuming, error-prone, and limited by the hematologist's physical acuity. Therefore, an automated optical image-processing system is required to support clinical decision-making. MATERIALS AND METHODS: Blood smear slides (n = 250) were prepared from clinical samples, imaged, and analyzed in the Hematology Department of Jimma Medical Center. Samples were collected, analyzed, and preserved from outpatients and inpatients. The system was able to categorize four common types of leukemia, acute and chronic myeloid leukemia and acute and chronic lymphoblastic leukemia, through a robust image segmentation protocol followed by classification using a support vector machine. RESULTS: The system was able to classify leukemia types with an accuracy, sensitivity, and specificity of 97.69%, 97.86%, and 100%, respectively, for the test datasets, and 97.5%, 98.55%, and 100%, respectively, for the validation datasets. In addition, the system showed an accuracy of 94.75% for the WBC counts that include both lymphocytes and monocytes. The computer-assisted diagnosis system took less than one minute to process and assign the leukemia types, compared with an average of 30 minutes for unassisted manual approaches. Moreover, the automated system complements healthcare workers' efforts by improving diagnostic accuracy rates from ∼70% to over 97%. CONCLUSION: Importantly, our module is designed to assist healthcare facilities in rural areas of sub-Saharan Africa, which are equipped with fewer experienced medical experts, especially in screening patients for blood-associated diseases, including leukemia.


Subject(s)
Leukemia/blood, Leukemia/classification, Machine Learning/standards, Adult, Female, Humans, Image Processing, Computer-Assisted/methods, Male, Middle Aged
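The final step of the segmentation-then-classification pipeline above is a support vector machine. A minimal linear SVM trained by subgradient descent on the hinge loss, applied to toy two-cluster "cell feature" data (illustrative only; the paper's feature set and kernel are not specified here):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via subgradient descent on the L2-regularized hinge
    loss; labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                 # samples violating the margin
        gw = lam * w - X[mask].T @ y[mask] / n
        gb = -y[mask].sum() / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Toy stand-in for extracted cell features: two well-separated clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

A multi-class leukemia classifier, as in the paper, would typically combine several such binary machines in a one-vs-rest or one-vs-one scheme.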
13.
Cancers (Basel) ; 13(4)2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33578891

ABSTRACT

Transfer learning is a machine learning approach that reuses a learning method developed for one task as the starting point for a model on a target task. The goal of transfer learning is to improve the performance of target learners by transferring the knowledge contained in other (but related) source domains. As a result, the need for large amounts of target-domain data is lowered when constructing target learners. Due to this property, transfer learning techniques are frequently used in ultrasound breast cancer image analyses. In this review, we focus on transfer learning methods applied to ultrasound breast image classification and detection from the perspective of transfer learning approaches, pre-processing, pre-training models, and convolutional neural network (CNN) models. Finally, a comparison of different works is carried out, and challenges, as well as outlooks, are discussed.
