Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38819973

ABSTRACT

Determining lymphoma subtypes is a crucial step toward better-targeted patient treatment and potentially increased survival chances. In this context, the existing gold-standard diagnosis method, which relies on gene expression technology, is highly expensive and time-consuming, making it less accessible. Although alternative diagnosis methods based on IHC (immunohistochemistry) technologies exist (recommended by the WHO), they still suffer from similar limitations and are less accurate. Whole Slide Image (WSI) analysis using deep learning models has shown promising potential for cancer diagnosis and could offer cost-effective, faster alternatives to existing methods. In this work, we propose a vision transformer-based framework for distinguishing DLBCL (Diffuse Large B-Cell Lymphoma) cancer subtypes from high-resolution WSIs. To this end, we introduce a multi-modal architecture to train a classifier model from various WSI modalities. We then leverage this model through a knowledge distillation process to efficiently guide the learning of a mono-modal classifier. Our experimental study, conducted on a lymphoma dataset of 157 patients, shows the promising performance of our mono-modal classification model, which outperforms six recent state-of-the-art methods. In addition, the power-law curve estimated on our experimental data suggests that with more training data from a reasonable number of additional patients, our model could achieve diagnosis accuracy competitive with IHC technologies. Furthermore, the efficiency of our framework is confirmed through an additional experimental study on an external breast cancer dataset (the BCI dataset).
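
The abstract gives no implementation details; the following is a minimal sketch of the knowledge-distillation step it describes, assuming a PyTorch setting in which a multi-modal teacher guides a mono-modal student through a temperature-scaled soft-target loss. The function name, temperature, and loss weighting are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): distilling a multi-modal WSI teacher
# into a mono-modal student with a temperature-scaled KL-divergence loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Weighted sum of soft (teacher) and hard (label) targets -- assumed hyperparameters."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage idea: the teacher sees all WSI modalities, the student sees only one.
# teacher_logits = teacher(multi_modal_batch).detach()
# loss = distillation_loss(student(mono_modal_batch), teacher_logits, labels)
```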

2.
Curr Comput Aided Drug Des; 18(2): 81-94, 2022.
Article in English | MEDLINE | ID: mdl-35139795

ABSTRACT

BACKGROUND: The manual segmentation of cellular structures on Z-stack microscopy images is time-consuming and often inaccurate, highlighting the need to develop auto-segmentation tools to facilitate this process. OBJECTIVE: This study aimed to compare the performance of three machine learning architectures, random forest (RF), AdaBoost, and multi-layer perceptron (MLP), for the auto-segmentation of nuclei in proliferating cervical cancer cells on Z-stack cellular microscopy proliferation images provided by HCS Pharma. The impact of post-processing techniques, such as the StarDist plugin and majority voting, was also evaluated. METHODS: The RF, AdaBoost, and MLP algorithms were used to auto-segment the nuclei of cervical cancer cells on microscopy images at different Z-stack positions. Post-processing techniques were then applied to each algorithm. The performance of all algorithms was compared against expert-generated ground truth by calculating the detection accuracy rate, the Dice coefficient, and the Jaccard index. RESULTS: RF achieved the best accuracy, followed by AdaBoost and then MLP. All algorithms achieved good pixel classifications except in regions where the nuclei overlapped. Majority voting and the StarDist plugin improved the accuracy of the segmentation but did not resolve the nuclei overlap issue. The Z-stack analysis gave segmentation results similar to those obtained on the Z-stack layer used for training; however, worse performance was noted for segmentations performed on Z-stack positions that were not used to train the algorithms. CONCLUSION: All machine learning architectures provided a good segmentation of nuclei in cervical cancer cells but did not resolve the problems of overlapping nuclei and Z-stack segmentation. Further research should therefore evaluate combined segmentation techniques and deep learning architectures to resolve these issues.
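
For reference, the Dice coefficient, Jaccard index, and pixel-wise majority voting mentioned above can be computed as in the following sketch (a NumPy illustration, not the study's code; the binary 0/1 mask encoding is an assumption).

```python
# Minimal sketch (not the study's code): Dice and Jaccard scores between a
# predicted binary nucleus mask and a ground-truth mask, plus pixel-wise
# majority voting across several classifier outputs (e.g. RF, AdaBoost, MLP).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / (union + 1e-8)

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Pixel-wise majority over binary masks produced by the different classifiers."""
    stack = np.stack([m.astype(int) for m in masks])
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)
```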


Subject(s)
Image Processing, Computer-Assisted; Uterine Cervical Neoplasms; Algorithms; Cellular Structures; Female; Humans; Image Processing, Computer-Assisted/methods; Machine Learning
3.
J Mol Graph Model; 111: 108103, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34959149

ABSTRACT

Proteins are essential to nearly all cellular mechanisms and are the effectors of cellular activities. As such, they often interact through their surface with other proteins or with other cellular ligands such as ions or organic molecules. Evolution generates a wealth of different proteins with unique abilities, but also proteins with related functions and hence similar 3D surface properties (shape, physico-chemical properties, …). Protein surfaces are therefore of primary importance for protein activity. In the present work, we assess the ability of different methods to detect such similarities based on the geometry of protein surfaces (described as 3D meshes), using either their shape only, or their shape together with the electrostatic potential (a biologically relevant property of protein surfaces). Five groups participated in this contest using the shape-only dataset, and one group extended its pre-existing method to handle the electrostatic potential. Our comparative study reveals both the ability of the methods to detect related proteins and their difficulty in distinguishing between highly related proteins. Our study also allows us to analyze the putative influence of electrostatic information in addition to that of protein shape alone. Finally, the discussion places the results of the extended method in perspective with those obtained in the previous contests. The source code of each presented method has been made available online.
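
None of the contest methods is detailed in the abstract; as a purely illustrative baseline for shape-only surface comparison, the sketch below computes a generic D2 shape-distribution descriptor from mesh vertices and a histogram-intersection similarity. The function names and parameters are assumptions, not any participant's method.

```python
# Illustrative baseline (not a contest method): a D2 shape-distribution
# descriptor for comparing protein surface meshes by shape only.
# Vertices are assumed to be an (N, 3) array extracted from the 3D mesh.
import numpy as np

def d2_descriptor(vertices: np.ndarray, n_pairs: int = 20000,
                  n_bins: int = 64, rng_seed: int = 0) -> np.ndarray:
    """Normalized histogram of distances between random vertex pairs."""
    rng = np.random.default_rng(rng_seed)
    i = rng.integers(0, len(vertices), size=n_pairs)
    j = rng.integers(0, len(vertices), size=n_pairs)
    d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def surface_similarity(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Histogram-intersection similarity between two surface descriptors."""
    return float(np.minimum(desc_a, desc_b).sum())
```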


Subject(s)
Proteins; Ligands; Models, Molecular; Protein Domains; Static Electricity
4.
J Healthc Inform Res; 6(4): 442-460, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36688121

ABSTRACT

A novel approach to data augmentation based on irregular superpixel decomposition is proposed. This approach, called SuperpixelGridMasks, extends the original image datasets required by the training stages of machine learning analysis architectures in order to increase their performance. Three variants, named SuperpixelGridCut, SuperpixelGridMean, and SuperpixelGridMix, are presented. These grid-based methods produce a new style of image transformations by dropping and fusing information. Extensive experiments using various image classification models, as well as precision health and related real-world datasets, show that baseline performances can be significantly outperformed using our methods. The comparative study also shows that our methods can surpass the performance of other data augmentations. The SuperpixelGridCut, SuperpixelGridMean, and SuperpixelGridMix code is publicly available at https://github.com/hammoudiproject/SuperpixelGridMasks.
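
The official implementation is available at the repository linked above; the sketch below is only a simplified approximation of the superpixel-based dropping/fusing idea, using scikit-image's SLIC segmentation. The parameter values and fill strategies are assumptions, not the published SuperpixelGridCut/SuperpixelGridMean code.

```python
# Simplified approximation (not the official SuperpixelGridMasks code, which is
# at https://github.com/hammoudiproject/SuperpixelGridMasks): drop or mean-fill
# randomly chosen superpixels of an RGB image.
import numpy as np
from skimage.segmentation import slic

def superpixel_dropout(image: np.ndarray, n_segments: int = 150,
                       drop_ratio: float = 0.3, fill: str = "zero",
                       rng_seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(rng_seed)
    # Decompose the image into irregular superpixels.
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    out = image.copy()
    # Pick a random subset of superpixels to transform.
    chosen = rng.choice(np.unique(labels),
                        size=int(drop_ratio * labels.max()), replace=False)
    for lab in chosen:
        mask = labels == lab
        # Drop (zero out) or fuse (replace by the region's mean color).
        out[mask] = 0 if fill == "zero" else image[mask].mean(axis=0)
    return out
```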

5.
J Med Syst; 45(7): 75, 2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34101042

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to the flu. COVID-19 first appeared in China and very quickly spread to the rest of the world, causing the 2019-20 coronavirus pandemic. In many cases, this disease causes pneumonia. Since pulmonary infections can be observed in radiography images, this paper investigates deep learning methods for automatically analyzing query chest X-ray images, with the hope of bringing precision tools to health professionals for screening COVID-19 and diagnosing confirmed patients. In this context, training datasets, deep learning architectures, and analysis strategies have been experimented with, using publicly available sets of chest X-ray images. Tailored deep learning models are proposed to detect pneumonia infection cases, notably viral cases. It is assumed that viral pneumonia cases detected in a COVID-19 epidemic context have a high probability of being COVID-19 infections. Moreover, easy-to-apply health indicators are proposed for estimating infection status and predicting patient status from the detected pneumonia cases. Experimental results show that deep learning models can be trained on publicly available sets of chest X-ray images to screen for viral pneumonia. Chest X-ray test images of COVID-19-infected patients are successfully diagnosed by the detection models retained for their performance. The efficiency of the proposed health indicators is highlighted through simulated scenarios of patients presenting infections and health problems, combining real and synthetic health data.
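
The paper's exact architectures are not specified in the abstract; the following is a minimal transfer-learning sketch of the kind of pneumonia classifier it describes, assuming a torchvision ResNet-18 backbone and three illustrative classes. It is not the authors' model.

```python
# Minimal sketch (not the paper's exact models): a transfer-learning classifier
# for chest X-ray images with three assumed classes
# (normal, bacterial pneumonia, viral pneumonia).
import torch.nn as nn
from torchvision import models

def build_xray_classifier(num_classes: int = 3) -> nn.Module:
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Replace the final fully connected layer for the pneumonia classes.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

# Training would use a standard cross-entropy loss over a labelled chest X-ray
# dataset; viral-pneumonia detections are then interpreted as presumed COVID-19
# cases in an epidemic context, as described in the abstract.
```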


Subject(s)
COVID-19/diagnostic imaging; Deep Learning; Pneumonia, Viral/diagnostic imaging; Radiography, Thoracic; Algorithms; Humans; Neural Networks, Computer; X-Rays
6.
Smart Health (Amst); 19: 100144, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33521223

ABSTRACT

Wearing face masks appears to be a solution for limiting the spread of COVID-19. In this context, efficient recognition systems are expected for checking that people's faces are masked in regulated areas. Hence, a large dataset of masked faces is necessary for training deep learning models to detect people wearing masks and those not wearing them. Currently, no large dataset of masked face images is available that permits checking whether faces are correctly masked or not. Indeed, many people do not wear their masks correctly, due to bad practices, bad behaviors, or the vulnerability of individuals (e.g., children, elderly people). For these reasons, several mask-wearing campaigns intend to sensitize people to this problem and to good practices. In this sense, this work proposes an image editing approach and three types of masked face detection datasets, namely the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD), and their combination for global masked face detection (MaskedFace-Net). Realistic masked face datasets are proposed with a twofold objective: i) detecting people whose faces are masked or not masked, and ii) detecting faces whose masks are correctly or incorrectly worn (e.g., at airport portals or in crowds). To the best of our knowledge, no other large dataset of masked faces provides such a granularity of classification for mask-wearing analysis. Moreover, this work presents the applied mask-to-face deformable model, which permits the generation of other masked face images, notably with specific masks. Our datasets of masked faces (137,016 images) are available at https://github.com/cabani/MaskedFace-Net. The Flickr-Faces-HQ (FFHQ) dataset of face images, publicly made available online by NVIDIA Corporation, has been used for generating MaskedFace-Net.
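
As an illustration only (not the authors' pipeline), the sketch below loads a MaskedFace-Net-style dataset for training a mask-wearing classifier; the directory layout (class subfolders such as CMFD and IMFD under a data/ root) is an assumption.

```python
# Minimal sketch (not the authors' pipeline): loading a MaskedFace-Net-style
# dataset, assuming images have been downloaded into class subfolders such as
# data/CMFD and data/IMFD (this layout is an assumption).
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps each subfolder (e.g. CMFD, IMFD) to a class index,
# so correctly vs. incorrectly masked faces become separate classes.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```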
