ABSTRACT
Anophelinae mosquitoes are exposed to a variety of microbes, including the Plasmodium parasites that cause malaria. When infected, mosquitoes mount versatile immune responses, including the production of antimicrobial peptides. Cecropins are one of the most widely distributed families of antimicrobial peptides in insects, and all previously studied Anopheles members play roles in adult mosquito immunity. We have identified and characterized a novel member of the Anopheles gambiae cecropin family, cecropin D (CecD), that is uniquely expressed and immune-responsive at late larval stages, where its broad-spectrum antibacterial activity promotes microbial clearance during the larval-pupal developmental transition. Interestingly, Cecropin D also exhibited highly potent activity against Plasmodium falciparum sporozoites, the malaria parasite stage that is transmitted from mosquitoes and infects humans, and thereby holds promise as a malaria transmission-blocking agent. Finally, we have defined unequivocal cecropin-specific molecular signatures to systematically organize the diversity of the cecropin family in malaria vectors.
ABSTRACT
Capturing high-resolution imagery of the Earth's surface often calls for a telescope of considerable size, even from low Earth orbit (LEO). A large aperture, in turn, often requires a large and expensive platform: achieving a resolution of 1 m at visible wavelengths from LEO typically requires an aperture diameter of at least 30 cm. Additionally, ensuring a high revisit frequency often prompts the use of multiple satellites. In light of these challenges, a small, segmented, deployable CubeSat telescope was recently proposed, creating the additional need to phase the telescope's mirrors. Phasing methods on compact platforms are constrained by the limited volume and power available, excluding solutions that rely on dedicated hardware or demand substantial computational resources. Neural networks (NNs) are known for their computationally efficient inference and reduced onboard requirements. Therefore, we developed an NN-based method to measure the co-phasing errors inherent to a deployable telescope. The proposed technique demonstrates its ability to detect phasing errors at the targeted performance level [typically a wavefront error (WFE) below 15 nm RMS for a visible imager operating at the diffraction limit] using a point source. The robustness of the NN method is verified in the presence of high-order aberrations and noise, and the results are compared against existing state-of-the-art techniques. The developed NN model demonstrates the feasibility of the approach and provides a realistic pathway towards achieving diffraction-limited images.
ABSTRACT
Deep learning has rapidly increased in popularity, leading to the development of perception solutions for autonomous driving. The latter field leverages techniques developed for computer vision in other domains to accomplish perception tasks such as object detection. However, the black-box nature of deep neural models and the complexity of the autonomous driving context motivate the study of explainability in the models that perform these perception tasks. Accordingly, this work explores explainable AI techniques for the object detection task in the context of autonomous driving. An extensive and detailed comparison is carried out between gradient-based and perturbation-based methods (e.g., D-RISE). Moreover, several experimental setups are used, with different backbone architectures and different datasets, to observe the influence of these aspects on the explanations. All the techniques explored are saliency methods, making their interpretation and evaluation primarily visual; nevertheless, numerical assessment methods are also used. Overall, D-RISE and guided backpropagation produce more localized explanations, but D-RISE highlights more meaningful regions, providing more human-understandable explanations. To the best of our knowledge, this is the first approach to obtaining explanations that focus on the regression of the bounding-box coordinates.
ABSTRACT
Training machine learning models for artificial intelligence (AI) applications in pathology often requires extensive annotation by human experts, but there is little guidance on the subject. In this work, we aimed to describe our experience and provide a simple, useful, and practical guide addressing annotation strategies for AI development in computational pathology. Annotation methodology will vary significantly depending on the specific study's objectives, but common difficulties will be present across different settings. We summarize key aspects and issue guiding principles regarding team interaction, ground-truth quality assessment, different annotation types, and available software and hardware options and address common difficulties while annotating. This guide was specifically designed for pathology annotation, intending to help pathologists, other researchers, and AI developers with this process.
Subject(s)
Artificial Intelligence , Pathologists , Humans , Software , Machine Learning
ABSTRACT
Semantic segmentation consists of classifying each pixel according to a set of classes. Conventional models spend as much effort classifying easy-to-segment pixels as they do classifying hard-to-segment pixels, which is inefficient, especially when deploying under computational constraints. In this work, we propose a framework wherein the model first produces a rough segmentation of the image, and then only the patches of the image estimated as hard to segment are refined. The framework is evaluated on four datasets (autonomous driving and biomedical) across four state-of-the-art architectures. Our method accelerates inference by a factor of four, with additional gains in training time, at the cost of some output quality.
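The coarse-to-fine idea can be sketched in a few lines: score each patch of the rough output by its mean per-pixel confidence and send only the least confident ones to the refinement model. This is an illustrative simplification, not the paper's implementation; the patch representation, the scoring rule, and the `budget` parameter are all assumptions.

```python
# Illustrative sketch of hard-patch selection for coarse-to-fine
# segmentation: patches with the lowest mean max-class probability
# are flagged for refinement by the (more expensive) second stage.

def patch_confidence(probs):
    """Mean of the per-pixel max class probability for one patch."""
    flat = [max(pixel) for row in probs for pixel in row]
    return sum(flat) / len(flat)

def select_hard_patches(patch_probs, budget):
    """Return indices of the `budget` least confident patches."""
    ranked = sorted(range(len(patch_probs)),
                    key=lambda i: patch_confidence(patch_probs[i]))
    return ranked[:budget]

# Toy example: two 2x2 patches with 2-class softmax outputs per pixel.
confident = [[(0.95, 0.05), (0.90, 0.10)], [(0.97, 0.03), (0.92, 0.08)]]
uncertain = [[(0.55, 0.45), (0.60, 0.40)], [(0.52, 0.48), (0.58, 0.42)]]
hard = select_hard_patches([confident, uncertain], budget=1)
print(hard)  # → [1]: the uncertain patch is sent for refinement
```

In a real deployment the confidence map would come from the coarse model's softmax output, and the refinement stage would only run on the selected crops, which is where the inference speed-up comes from.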
ABSTRACT
BACKGROUND: Breast symmetry is an essential component of breast cosmesis. The Harvard Cosmesis scale is the most widely adopted method of breast symmetry assessment; however, this scale lacks reproducibility and reliability, limiting its application in clinical practice. The VECTRA® XT 3D (VECTRA®) is a novel breast surface imaging system that, when combined with breast contour measuring software (Mirror®), aims to produce a more accurate and reproducible measurement of breast contour to aid operative planning in breast surgery. OBJECTIVES: This study aims to compare the reliability and reproducibility of subjective (Harvard Cosmesis scale) and objective (VECTRA®) symmetry assessment on the same cohort of patients. METHODS: Patients at a tertiary institution had 2D and 3D photographs taken of their breasts. Seven assessors scored the 2D photographs using the Harvard Cosmesis scale. Two independent assessors used Mirror® software to objectively calculate breast symmetry by analysing 3D images of the breasts. RESULTS: Intra-observer agreement ranged from none to moderate (kappa −0.005 to 0.7) amongst the assessors using the Harvard Cosmesis scale. Inter-observer agreement was weak (kappa 0.078-0.454) amongst Harvard scores compared to VECTRA® measurements. Kappa values ranged from 0.537 to 0.674 for intra-observer agreement (p < 0.001) with Root Mean Square (RMS) scores. RMS had a moderate correlation with the Harvard Cosmesis scale (rs = 0.613). Furthermore, absolute volume difference between breasts correlated poorly with RMS (R2 = 0.133). CONCLUSION: VECTRA® and Mirror® software have potential in clinical practice as an objective measure of breast symmetry, but in their current form they do not constitute an ideal test. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
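For readers unfamiliar with the agreement statistic used throughout these results, Cohen's kappa for two raters corrects observed agreement for the agreement expected by chance. The computation can be done directly; the rating data below are hypothetical, not the study's.

```python
# Cohen's kappa for two raters over categorical scores:
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical four-point cosmesis scores from two raters.
a = ["excellent", "good", "fair", "good", "poor", "good"]
b = ["excellent", "good", "good", "good", "poor", "fair"]
print(round(cohens_kappa(a, b), 3))  # → 0.5 (moderate agreement)
```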
Subject(s)
Breast , Mammaplasty , Humans , Reproducibility of Results , Breast/surgery , Mastectomy/methods , Imaging, Three-Dimensional/methods , Technology , Mammaplasty/methods , Esthetics , Retrospective Studies , Treatment Outcome
ABSTRACT
INTRODUCTION: Breast volume estimation is considered crucial for breast cancer surgery planning, yet no single, easy, and reproducible method to estimate breast volume is available. This study aims to evaluate, in patients proposed for mastectomy, the accuracy of breast volume calculation from a low-cost 3D surface scan (Microsoft Kinect) compared to breast MRI and the water displacement technique. MATERIAL AND METHODS: Patients with a Tis/T1-T3 breast cancer proposed for mastectomy between July 2015 and March 2017 were assessed for inclusion in the study. Breast volume calculations were performed using the 3D surface scan, breast MRI, and the water displacement technique. Agreement between the volumes obtained with these methods was assessed with the Spearman and Pearson correlation coefficients. RESULTS: Eighteen patients with invasive breast cancer were included in the study and underwent mastectomy. The level of agreement of the 3D breast volume with the surgical specimen and breast MRI volumes was evaluated. For the mastectomy specimen volume, averages (standard deviations) of 0.823 (0.027) and 0.875 (0.026) were obtained for the Pearson and Spearman correlations, respectively. With respect to the MRI annotation, we obtained 0.828 (0.038) and 0.715 (0.018). DISCUSSION: Although the values obtained by the two methodologies still differ, the strong linear correlation suggests that 3D breast volume measurement using a low-cost surface scan device is feasible and can approximate both the MRI breast volume and the mastectomy specimen volume with sufficient accuracy. CONCLUSION: 3D breast volume measurement using a low-cost depth-sensor surface scan device is feasible and can parallel MRI breast and mastectomy specimen volumes with sufficient accuracy. The differences between methods require further development before clinical applicability is reached. A possible approach could be the fusion of breast MRI and the 3D surface scan to harmonize anatomic limits and improve volume delimitation.
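The agreement analysis rests on the Pearson correlation between paired volume measurements, which can be computed directly from the paired values; the volumes below are hypothetical, for illustration only.

```python
# Pearson correlation between paired volume measurements
# (covariance of the pairs divided by the product of standard deviations).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired volumes (mL): 3D surface scan vs. breast MRI.
scan = [410.0, 515.0, 620.0, 700.0, 560.0]
mri = [430.0, 540.0, 600.0, 735.0, 585.0]
r = pearson(scan, mri)
assert r > 0.9  # strong linear agreement in this toy example
```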
Subject(s)
Breast Neoplasms , Breast/diagnostic imaging , Breast/surgery , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Female , Humans , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Mastectomy/methods
ABSTRACT
BACKGROUND: The twelve leads of the standard electrocardiogram (ECG) configuration are optimal for the medical diagnosis of diverse cardiac conditions. However, this configuration requires ten electrodes on the patient's limbs and chest, which is uncomfortable and cumbersome. Interlead conversion methods can reconstruct missing leads and enable more comfortable acquisitions, including in wearable devices, while still allowing for adequate diagnoses. Current methodologies for interlead ECG conversion either require multiple reference (input) leads and/or require the input signals to be temporally aligned according to ECG landmarks. METHODS: Unlike the methods in the literature, this paper studies the possibility of converting ECG signals into all twelve standard configuration leads using signal segments from only one reference lead, without temporal alignment (blindly-segmented). The proposed methodology is based on a deep learning encoder-decoder U-Net architecture, which is compared with adaptations based on convolutional autoencoders and label refinement networks. Moreover, the method is explored for conversion with one single shared encoder or with multiple individual encoders for each lead. RESULTS: Despite the more challenging settings, the proposed methodology attained state-of-the-art level performance in multiple target leads, and both lead I and lead II seem especially suitable for converting certain sets of leads. In cross-database tests, the methodology offered promising results despite acquisition setup differences. Furthermore, the results show that the presence of medical conditions does not have a considerable effect on the method's performance. CONCLUSIONS: This study shows the feasibility of converting ECG signals using single-lead blindly-segmented inputs.
Although the results are promising, further efforts should be devoted to improving the methodologies, especially their robustness to diverse acquisition setups, so that they can be applied to cardiac health monitoring in wearable devices and less obtrusive clinical scenarios.
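The "blindly-segmented" input preparation described above amounts to cutting fixed-length windows from a single reference lead at a fixed stride, with no R-peak or other landmark alignment. A minimal sketch follows, in which the window length and stride are illustrative choices rather than the paper's settings.

```python
# Blind segmentation of a single-lead ECG trace: fixed-length windows
# at a fixed stride, with no fiducial-point (e.g. R-peak) alignment.

def blind_segments(signal, length=500, stride=250):
    """Cut overlapping fixed-length windows from a 1-D signal."""
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, stride)]

lead_ii = [0.0] * 2000  # stand-in for a 4 s lead-II trace at 500 Hz
windows = blind_segments(lead_ii)
print(len(windows), len(windows[0]))  # → 7 500
```

Each window would then be fed to the encoder-decoder model, which maps it to the corresponding segment of a target lead.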
Subject(s)
Heart Diseases , Wearable Electronic Devices , Humans , Electrocardiography , Electrodes , Databases, Factual
ABSTRACT
Cysteine-rich trypsin inhibitor-like domain (TIL)-harboring proteins are broadly distributed in nature but remain understudied in vector mosquitoes. Here we have explored the biology of a TIL domain-containing protein of the arbovirus vector Aedes aegypti, cysteine-rich venom protein 379 (CRVP379). CRVP379 was previously shown to be essential for dengue virus infection in Ae. aegypti mosquitoes. Gene expression analysis showed CRVP379 to be highly expressed in pupal stages, male testes, and female ovaries. CRVP379 expression is also increased in the ovaries at 48 h post-blood feeding. We used CRISPR-Cas9 genome editing to generate two mutant lines of CRVP379 with mutations inside or outside the TIL domain. Female mosquitoes from both mutant lines showed severe defects in their reproductive capability; mutant females also showed differences in their follicular cell morphology. However, the CRVP379 line with a mutation outside the TIL domain did not affect male reproductive performance, suggesting that some CRVP379 residues may have sexually dimorphic functions. In contrast to previous reports, we did not observe a noticeable difference in dengue virus infection between the wild-type and any of the mutant lines. The importance of CRVP379 in Ae. aegypti reproductive biology makes it an interesting candidate for the development of Ae. aegypti population control methods.
Subject(s)
Aedes , Dengue , Virus Diseases , Animals , Cysteine/metabolism , Female , Male , Mosquito Vectors/genetics , Reproduction/genetics , Trypsin/metabolism , Trypsin Inhibitors/metabolism
ABSTRACT
In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, the overfitting problem ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a recent solution to mitigate overfitting that has witnessed significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised, merit-based semantic selection of neurons in the latent space. However, these studies consider neither the quality and quantity of task-relevant information nor the diversity of the latent kernels. To address the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons with the highest correlation with the target output, while enforcing sparsity in its selection procedure. First, to promote activation diversity, we devise an approach to select the most diverse set of neurons by making use of determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout that adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
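As a rough illustration of the diversity-promoting selection, a simple greedy stand-in (not actual DPP sampling, which draws subsets with probability proportional to a kernel determinant) iteratively keeps the neuron whose activation vector is farthest from those already retained:

```python
# Greedy max-min diversity selection over neuron activation vectors —
# an illustrative proxy for DPP sampling, which it approximates only
# loosely: each step keeps the candidate farthest (Euclidean distance)
# from the set already retained.
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def greedy_diverse(acts, k):
    chosen = [0]  # seed with the first neuron
    while len(chosen) < k:
        rest = [i for i in range(len(acts)) if i not in chosen]
        nxt = max(rest,
                  key=lambda i: min(dist(acts[i], acts[j]) for j in chosen))
        chosen.append(nxt)
    return chosen

# Four neurons described by (toy) 2-D activation statistics.
acts = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.5, 0.5)]
print(greedy_diverse(acts, 2))  # → [0, 2]: the most dissimilar pair
```

The paper's method additionally weighs each neuron by a mutual-information merit term, which this sketch omits.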
ABSTRACT
Inertial Measurement Units (IMUs) have become a popular solution for tracking human motion. The main problem in deriving the position of different body segments over time from IMU data is the accumulation of errors in the inertial data. Solving this problem is necessary to improve the use of IMUs for position tracking. In this work, we present several Machine Learning (ML) methods to improve the position tracking of various body segments when performing different movements. Firstly, classifiers were used to identify the periods in which the IMUs were stationary (zero-velocity detection). Random Forest, Support Vector Machine (SVM), and neural network models based on Long Short-Term Memory (LSTM) layers were capable of identifying those periods, independently of the motion and body segment, with substantially higher performance than traditional fixed-threshold zero-velocity detectors. Afterwards, these techniques were combined with ML regression models based on LSTMs capable of estimating the displacement of the sensors during periods of movement. These models did not show significant improvements over the more straightforward double integration of the linear acceleration data with drift removal for translational motion estimation. Finally, we present an LSTM-based model that simultaneously combines zero-velocity detection with the estimation of the sensors' translational motion. This model achieved a lower average position tracking error than the combination of the previously referred methodologies.
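For reference, a minimal version of the fixed-threshold zero-velocity detector that the learned models are compared against: a sample is flagged as stationary when the acceleration magnitude stays close to gravity. The threshold value here is illustrative, not one tuned in the study.

```python
# Fixed-threshold zero-velocity detector on accelerometer data:
# a sample is "stationary" when |acceleration magnitude - g| is small.
import math

G = 9.81  # gravitational acceleration, m/s^2

def zero_velocity(acc_xyz, threshold=0.3):
    flags = []
    for ax, ay, az in acc_xyz:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        flags.append(abs(mag - G) < threshold)
    return flags

# Toy samples: at rest, moving, at rest again.
samples = [(0.0, 0.0, 9.8), (1.2, 0.4, 11.0), (0.1, 0.0, 9.9)]
print(zero_velocity(samples))  # → [True, False, True]
```

Detectors like this fail when the sensor moves at constant velocity or experiences accelerations that momentarily cancel gravity, which is one motivation for the learned classifiers described above.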
Subject(s)
Machine Learning , Movement , Neural Networks, Computer , Acceleration , Humans , Support Vector Machine
ABSTRACT
Breast cancer treatments can have a negative impact on breast aesthetics, particularly when surgery is required to remove the tumour. For many years mastectomy was the only surgical option, but more recently breast conserving surgery (BCS) has been promoted as a viable alternative that treats the cancer while preserving most of the breast. However, a significant number of patients who undergo BCS remain dissatisfied with the result of the treatment, which leads to self-image issues and emotional distress. Surgeons recognize the value of a tool that predicts the breast shape after BCS to facilitate surgeon/patient communication and allow more educated decisions; however, no such tool suited for clinical usage is available. Such a tool could serve as a way of visualizing the aesthetic consequences of the treatment. In this research, we propose a methodology for predicting the deformation after BCS using machine learning techniques. However, no appropriate dataset containing breast data before and after surgery exists on which to train a learning model. Therefore, an in-house semi-synthetic dataset is proposed to fulfill the requirements of this research. Using the proposed dataset, several learning methodologies were investigated, and promising outcomes were obtained.
Subject(s)
Mastectomy, Segmental , Breast , Breast Neoplasms , Humans , Mastectomy
ABSTRACT
Electrocardiogram signals acquired through a steering wheel could be the key to seamless, highly comfortable, and continuous human recognition in driving settings. This paper focuses on the enhancement of the characteristically low quality of such signals through the combination of Savitzky-Golay and moving average filters, followed by outlier detection and removal based on normalised cross-correlation and clustering, which together render ensemble heartbeats of significantly higher quality. Discrete Cosine Transform (DCT) and Haar transform features were extracted and fed to decision methods based on Support Vector Machines (SVM), k-Nearest Neighbours (kNN), Multilayer Perceptrons (MLP), and Gaussian Mixture Models - Universal Background Models (GMM-UBM) classifiers, for both identification and authentication tasks. Additional techniques of user-tuned authentication and past score weighting were also studied. The method's performance was comparable to some of the best recent state-of-the-art methods (94.9% identification rate (IDR) and 2.66% authentication equal error rate (EER)), although performance degraded with scarce training data (70.9% IDR and 11.8% EER). It was concluded that the method is suitable for biometric recognition with driving electrocardiogram signals and could, with future developments, be used in a continuous system in seamless and highly noisy settings.
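Two of the signal-enhancement steps can be sketched simply: moving-average smoothing, and a normalised cross-correlation score against a template heartbeat used to reject outlier beats. The window size and the 0.8 acceptance threshold below are assumptions for illustration, not the paper's tuned values.

```python
# Sketch of two enhancement steps: moving-average smoothing and
# outlier rejection via normalised cross-correlation (NCC) against
# a template heartbeat.
import math

def moving_average(x, w=3):
    """Centred moving average with edge shrinkage."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def ncc(a, b):
    """Zero-lag normalised cross-correlation of two equal-length beats."""
    na = math.sqrt(sum(v * v for v in a))
    nb = math.sqrt(sum(v * v for v in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

template = [0.0, 1.0, 0.0, -0.3, 0.0]   # toy ensemble template
beat = [0.1, 0.9, 0.1, -0.2, 0.0]       # toy candidate heartbeat
keep = ncc(template, beat) > 0.8        # accept beats matching the template
```

Beats rejected at this stage would not contribute to the ensemble heartbeat from which DCT and Haar features are extracted.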
Subject(s)
Biometry , Biometric Identification , Electrocardiography , Heart Rate , Humans , Support Vector Machine
ABSTRACT
Microscopy examination has been the pillar of malaria diagnosis and remains the recommended procedure when its quality can be maintained. However, the need for trained personnel and adequate equipment limits its availability and accessibility in malaria-endemic areas. Rapid, accurate, and accessible diagnostic tools are increasingly required as malaria control programs extend parasite-based diagnosis and prevalence decreases. This paper presents an image processing and analysis methodology using supervised classification to assess the presence of malaria parasites and determine the species and life cycle stage in Giemsa-stained thin blood smears. The main differentiating factor is the exclusive use of microscopic images acquired with low-cost and accessible tools such as smartphones, using a dataset of 566 images manually annotated by an experienced parasitologist. Eight different species-stage combinations were considered in this work, with automatic detection performance ranging from 73.9% to 96.2% in terms of sensitivity and from 92.6% to 99.3% in terms of specificity. These promising results attest to the potential of this approach as a valid alternative to conventional microscopy examination, with comparable detection performance and acceptable computational times.
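The reported sensitivity and specificity are the standard per-class detection metrics, computed from true/false positive and negative counts; the counts below are toy numbers, not the paper's data.

```python
# Sensitivity (true positive rate) and specificity (true negative rate)
# from detection counts, as reported per species-stage class.

def sensitivity(tp, fn):
    """Fraction of actual positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives correctly rejected."""
    return tn / (tn + fp)

print(round(sensitivity(tp=48, fn=2), 3))    # → 0.96
print(round(specificity(tn=470, fp=30), 3))  # → 0.94
```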
Subject(s)
Life Cycle Stages , Malaria/parasitology , Plasmodium/classification , Plasmodium/growth & development , Humans , Image Processing, Computer-Assisted , Malaria/diagnosis , Microscopy , Sensitivity and Specificity
ABSTRACT
Fingerprint liveness detection methods have been developed to overcome the vulnerability of fingerprint biometric systems to spoofing attacks. Traditional approaches have been quite optimistic about the behavior of the intruder, assuming the use of a previously known material. This assumption has led to the use of supervised techniques to estimate the performance of the methods, using both live and spoof samples to train the predictive models and evaluating each type of fake sample individually. Additionally, the background was often included in the sample representation, completely distorting the decision process. Therefore, we propose that an automatic segmentation step should be performed to isolate the fingerprint from the background and truly decide on the liveness of the fingerprint, not on the characteristics of the background. We also argue that one cannot aim to model the fake samples completely, since the material used by the intruder is unknown beforehand. We approach the design by modeling the distribution of the live samples and predicting as fake those samples that are very unlikely under that model. Our experiments compare the performance of the supervised approaches with semi-supervised ones that rely solely on the live samples. The results obtained differ from those of the more standard approaches, which reinforces our conviction that the results in the literature misleadingly estimate the true vulnerability of the biometric system.
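The semi-supervised idea of modeling only the live samples can be illustrated with a one-dimensional Gaussian model and a simple likelihood cut-off: anything far from the live distribution is flagged as fake. The feature values and the 3-sigma rule are illustrative simplifications of whatever density model is actually used.

```python
# One-class sketch: fit a Gaussian to features of LIVE samples only,
# then flag test samples that are unlikely under that model as fake.
import math

def fit_gaussian(values):
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, math.sqrt(var)

def is_fake(x, mu, sigma, k=3.0):
    """Flag samples more than k standard deviations from the live mean."""
    return abs(x - mu) > k * sigma

live_scores = [0.52, 0.48, 0.50, 0.47, 0.53]  # hypothetical live features
mu, sigma = fit_gaussian(live_scores)
print(is_fake(0.51, mu, sigma))  # → False: consistent with live model
print(is_fake(0.90, mu, sigma))  # → True: unlikely under live model
```

No spoof samples appear anywhere in training, which is the point: the detector needs no assumption about the material an intruder might use.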
Subject(s)
Biometric Identification , Dermatoglyphics/classification , Fingers/physiology , Security Measures , Signal Processing, Computer-Assisted , Biometric Identification/methods , Biometric Identification/standards , Fingers/anatomy & histology , Humans , Models, Biological
ABSTRACT
Humans perform and rely on face recognition routinely and effortlessly throughout their daily lives. Multiple works in recent years have sought to replicate this process in a robust and automatic way. However, it is known that the performance of face recognition algorithms is severely compromised in non-ideal image acquisition scenarios. In an attempt to deal with conditions such as occlusion and heterogeneous illumination, we propose a new approach motivated by the global precedent hypothesis of the human brain's cognitive mechanisms of perception. An automatic modeling of SIFT keypoint descriptors using a Gaussian mixture model (GMM)-based universal background model method is proposed. A decision is then made in an innovative hierarchical sense, with holistic information gaining precedence over a more detailed local analysis. The algorithm was tested on the ORL, AR, and Extended Yale B face databases and presented state-of-the-art performance for a variety of experimental setups.
ABSTRACT
"Taking less treating better" has been one of the major improvements of breast cancer surgery in the last four decades. The application of this principle translates into equivalent survival of breast cancer conserving treatment (BCT) when compared to mastectomy, with a better cosmetic outcome. While it is relatively easy to evaluate the oncological results of BCT, the cosmetic outcome is more difficult to measure due to the lack of an effective and consensual procedure. The assessment of cosmetic outcome has been mainly subjective, undertaken by a panel of expert observers or/and by patient self-assessment. Unfortunately, the reproducibility of these methods is low. Objective methods have higher values of reproducibility but still lack the inclusion of several features considered by specialists in BCT to be fundamental for cosmetic outcome. The recent addition of volume information obtained with 3D images seems promising. Until now, unfortunately, no method is considered to be the standard of care. This paper revises the history of cosmetic evaluation and guides us into the future aiming at a method that can easily be used and accepted by all, caregivers and caretakers, allowing not only the comparison of results but the improvement of performance.
Subject(s)
Breast Neoplasms/surgery , Cosmetics , Mastectomy, Segmental , Female , Humans , Patient Satisfaction
ABSTRACT
Case-based explanations are an intuitive method to gain insight into the decision-making process of deep learning models in clinical contexts. However, medical images cannot be shared as explanations due to privacy concerns. To address this problem, we propose a novel method for disentangling identity and medical characteristics of images and apply it to anonymize medical images. The disentanglement mechanism replaces some feature vectors in an image while ensuring that the remaining features are preserved, obtaining independent feature vectors that encode the images' identity and medical characteristics. We also propose a model to manufacture synthetic privacy-preserving identities to replace the original image's identity and achieve anonymization. The models are applied to medical and biometric datasets, demonstrating their capacity to generate realistic-looking anonymized images that preserve their original medical content. Additionally, the experiments show the network's inherent capacity to generate counterfactual images through the replacement of medical features.
Subject(s)
Data Anonymization , Humans , Deep Learning
ABSTRACT
Purpose: 2-[18F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for the classification of pulmonary nodules from 2-[18F]FDG PET images. Methods: One hundred thirteen participants were retrospectively selected, with one nodule per participant. The 2-[18F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed random data splitting into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets to train and evaluate a set of candidate models and to select the final model. Models of three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like, and Inception-v2-like models), on both the original and augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used. Results: The final model (Stacked 3D CNN model) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) on the test set. The model had a sensitivity of 80.00%, a specificity of 69.23%, and an accuracy of 73.91% on the test set, for an optimised decision threshold that assigns a higher cost to false negatives. Conclusion: A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[18F]FDG PET images. Supplementary Information: The online version contains supplementary material available at 10.1007/s13139-023-00821-6.
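The cost-sensitive threshold selection mentioned in the results can be sketched as follows: among candidate thresholds, pick the one minimizing a total cost in which false negatives are weighted more heavily than false positives. The costs, scores, and candidate thresholds here are hypothetical, not the study's data.

```python
# Choosing a decision threshold that penalises false negatives more
# than false positives (toy malignancy scores and labels).

def total_cost(threshold, scores, labels, fn_cost=5.0, fp_cost=1.0):
    cost = 0.0
    for s, y in zip(scores, labels):
        pred = s >= threshold   # predict "malignant" above the threshold
        if y and not pred:
            cost += fn_cost     # missed malignancy: expensive
        elif pred and not y:
            cost += fp_cost     # false alarm: cheaper
    return cost

scores = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]          # model outputs
labels = [False, False, True, False, True, True]  # reference standard
best = min([0.2, 0.4, 0.5, 0.7],
           key=lambda t: total_cost(t, scores, labels))
print(best)  # → 0.4: a lower threshold is favoured to avoid false negatives
```

With symmetric costs the optimum would shift upward; the asymmetry encodes the clinical preference for catching malignancies at the price of more false alarms.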
ABSTRACT
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while facilitating the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. However, due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot yet detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting.
Although pathologists rely on context at multiple levels, while paying attention to specific Regions of Interest (RoIs), when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task: to fully exploit these mechanisms in ANNs, the scientific understanding of these methods must first be deepened.