1.
IEEE Trans Nanobioscience ; 23(2): 355-367, 2024 Apr.
Article En | MEDLINE | ID: mdl-38349839

Advancements in biotechnology and molecular communication have enabled the use of nanomachines in Wireless Body Area Networks (WBANs) for applications such as drug delivery, cancer detection, and emergency rescue services. To study these networks effectively, it is essential to develop an ideal propagation model that includes the channel response between each pair of in-range nanomachines and accounts for the interference received at each receiver node. In this paper, we employ an advection-diffusion equation to obtain a deterministic channel matrix through a vascular WBAN. Additionally, closed forms of the inter-symbol interference (ISI) and co-channel interference (CCI) are derived for both full duplex (FDX) and half duplex (HDX) transmission modes. Applying these deterministic formulations, we then present stochastic equivalents of the ideal channel model and interference to provide an innovative communication model that simultaneously incorporates CCI, ISI, and background noise. Finally, we evaluate the results with numerous experiments, using signal-to-interference-plus-noise ratio (SINR) and capacity as metrics.
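The two evaluation metrics named in the abstract can be sketched directly; this is an illustrative computation only, with made-up power values, not the paper's channel model (the actual ISI/CCI powers would come from the advection-diffusion channel matrix).

```python
import numpy as np

def sinr(p_signal, p_isi, p_cci, p_noise):
    """Signal-to-interference-plus-noise ratio (linear scale)."""
    return p_signal / (p_isi + p_cci + p_noise)

def capacity(bandwidth_hz, sinr_linear):
    """Shannon capacity upper bound in bits/s."""
    return bandwidth_hz * np.log2(1.0 + sinr_linear)

# Hypothetical received powers (watts) for one receiver node
gamma = sinr(p_signal=1e-9, p_isi=2e-10, p_cci=1e-10, p_noise=5e-11)
print(capacity(bandwidth_hz=1e3, sinr_linear=gamma))
```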


Biotechnology , Communication , Diffusion , Drug Delivery Systems , Computer Communication Networks , Wireless Technology
2.
J Pathol Inform ; 15: 100357, 2024 Dec.
Article En | MEDLINE | ID: mdl-38420608

Computational Pathology (CPath) is an interdisciplinary science that advances the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific works being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms in clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.

3.
PLoS One ; 18(3): e0282121, 2023.
Article En | MEDLINE | ID: mdl-36862633

The main objective of this study is to develop a robust deep learning-based framework to distinguish COVID-19, Community-Acquired Pneumonia (CAP), and Normal cases based on volumetric chest CT scans, which are acquired in different imaging centers using different scanners and technical settings. We demonstrated that while our proposed model is trained on a relatively small dataset acquired from only one imaging center using a specific scanning protocol, it performs well on heterogeneous test sets obtained by multiple scanners using different technical parameters. We also showed that the model can be updated via an unsupervised approach to cope with the data shift between the train and test sets and enhance the robustness of the model upon receiving a new external dataset from a different center. More specifically, we extracted the subset of the test images for which the model generated a confident prediction and used the extracted subset along with the training set to retrain and update the benchmark model (the model trained on the initial train set). Finally, we adopted an ensemble architecture to aggregate the predictions from multiple versions of the model. For initial training and development purposes, an in-house dataset of 171 COVID-19, 60 CAP, and 76 Normal cases was used, which contained volumetric CT scans acquired from one imaging center using a single scanning protocol and standard radiation dose. To evaluate the model, we collected four different test sets retrospectively to investigate the effects of the shifts in the data characteristics on the model's performance. Among the test cases, there were CT scans with similar characteristics as the train set as well as noisy low-dose and ultra-low-dose CT scans. In addition, some test CT scans were obtained from patients with a history of cardiovascular diseases or surgeries. This dataset is referred to as the "SPGC-COVID" dataset. 
The entire test dataset used in this study contains 51 COVID-19, 28 CAP, and 51 Normal cases. Experimental results indicate that our proposed framework performs well on all test sets, achieving a total accuracy of 96.15% (95% CI: [91.25-98.74]), COVID-19 sensitivity of 96.08% (95% CI: [86.54-99.5]), CAP sensitivity of 92.86% (95% CI: [76.50-99.19]), and Normal sensitivity of 98.04% (95% CI: [89.55-99.95]), where the confidence intervals are obtained at a significance level of 0.05. The obtained AUC values (one class vs. others) are 0.993 (95% CI: [0.977-1]), 0.989 (95% CI: [0.962-1]), and 0.990 (95% CI: [0.971-1]) for the COVID-19, CAP, and Normal classes, respectively. The experimental results also demonstrate the capability of the proposed unsupervised enhancement approach to improve the performance and robustness of the model when evaluated on varied external test sets.
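Confidence intervals like those quoted can be obtained with an exact (Clopper-Pearson) binomial interval; the abstract does not state which interval method was used, so this sketch is only one plausible way to reproduce bounds of this kind, shown for the COVID-19 sensitivity (49 of 51 cases detected gives the 96.08% point estimate).

```python
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha=0.05):
    """Exact binomial confidence interval via the beta distribution."""
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

lo, hi = clopper_pearson(49, 51)          # 49/51 ~ 96.08% sensitivity
print(f"95% CI: [{lo:.4f}, {hi:.4f}]")
```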


COVID-19 , Humans , COVID-19/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed , Cone-Beam Computed Tomography , Benchmarking
4.
Entropy (Basel) ; 24(5)2022 May 10.
Article En | MEDLINE | ID: mdl-35626556

This article proposes Bayesian surprise as the main methodology that drives a cognitive radar to estimate a target's future state (i.e., velocity and distance) from noisy measurements and execute a decision to minimize the estimation error over time. The research aims to demonstrate whether the cognitive radar, as an autonomous system, can modify its internal model (i.e., waveform parameters) to gain consecutively informative measurements based on the Bayesian surprise. Assuming that the radar measurements are constructed from linear Gaussian state-space models, the paper applies Kalman filtering to perform state estimation for a simple vehicle-following scenario. According to the filter's estimate, the sensor measures the contribution of prospective waveforms, which are available from the sensor profile library, to state estimation and selects the one that maximizes the expectation of the Bayesian surprise. Numerous experiments examine the estimation performance of the proposed cognitive radar for single-target tracking in practical highway and urban driving environments. The robustness of the proposed method is compared to the state of the art for various error measures. Results indicate that the Bayesian surprise outperforms its competitors with respect to the mean square relative error when one-step and multiple-step planning are considered.
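The linear-Gaussian state estimation step described above can be sketched with a minimal Kalman filter tracking distance and velocity from noisy range measurements; the model matrices and noise levels below are illustrative assumptions, not the paper's.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # only distance is observed
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.array([[0.0], [0.0]])            # estimate: [distance, velocity]
P = np.eye(2)                           # estimate covariance

rng = np.random.default_rng(0)
true_d, true_v = 10.0, 2.0
for _ in range(100):
    true_d += true_v * dt
    z = np.array([[true_d + rng.normal(0.0, 0.7)]])  # noisy range
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())   # should approach the true [distance, velocity]
```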

5.
Sci Rep ; 12(1): 4827, 2022 03 22.
Article En | MEDLINE | ID: mdl-35318368

Reverse transcription-polymerase chain reaction (RT-PCR) is currently the gold standard in COVID-19 diagnosis. It can, however, take days to provide the diagnosis, and its false-negative rate is relatively high. Imaging, in particular chest computed tomography (CT), can assist with diagnosis and assessment of this disease. Nevertheless, standard-dose CT scans impose a significant radiation burden on patients, especially those in need of multiple scans. In this study, we consider low-dose and ultra-low-dose (LDCT and ULDCT) scan protocols that reduce the radiation exposure close to that of a single X-ray, while maintaining an acceptable resolution for diagnostic purposes. Since thoracic radiology expertise may not be widely available during the pandemic, we develop an Artificial Intelligence (AI)-based framework using a collected dataset of LDCT/ULDCT scans to test the hypothesis that the AI model can provide human-level performance. The AI model uses a two-stage capsule network architecture and can rapidly classify COVID-19, community acquired pneumonia (CAP), and normal cases from LDCT/ULDCT scans. Based on cross-validation, the AI model achieves a COVID-19 sensitivity of [Formula: see text], CAP sensitivity of [Formula: see text], normal cases sensitivity (specificity) of [Formula: see text], and accuracy of [Formula: see text]. By incorporating clinical data (demographics and symptoms), the performance further improves to a COVID-19 sensitivity of [Formula: see text], CAP sensitivity of [Formula: see text], normal cases sensitivity (specificity) of [Formula: see text], and accuracy of [Formula: see text]. The proposed AI model achieves human-level diagnosis based on LDCT/ULDCT scans with reduced radiation exposure. We believe the proposed AI model has the potential to assist radiologists in accurately and promptly diagnosing COVID-19 infection and to help control the transmission chain during the pandemic.


Artificial Intelligence , COVID-19 , COVID-19/diagnostic imaging , COVID-19 Testing , Humans , Radionuclide Imaging , Tomography, X-Ray Computed
6.
Sensors (Basel) ; 22(4)2022 Feb 11.
Article En | MEDLINE | ID: mdl-35214293

The development of distributed Multi-Agent Reinforcement Learning (MARL) algorithms has attracted a surge of interest lately. Generally speaking, conventional Model-Based (MB) or Model-Free (MF) RL algorithms are not directly applicable to MARL problems due to their utilization of a fixed reward model for learning the underlying value function. While Deep Neural Network (DNN)-based solutions perform well, they are still prone to overfitting, high sensitivity to parameter selection, and sample inefficiency. In this paper, an adaptive Kalman Filter (KF)-based framework is introduced as an efficient alternative that addresses the aforementioned problems by capitalizing on unique characteristics of the KF, such as uncertainty modeling and online second-order learning. More specifically, the paper proposes the Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its Successor Representation-based variant, referred to as MAK-SR. The proposed MAK-TD/SR frameworks consider the continuous nature of the action space associated with high-dimensional multi-agent environments and exploit Kalman Temporal Difference (KTD) to address parameter uncertainty. The proposed MAK-TD/SR frameworks are evaluated via several experiments implemented through the OpenAI Gym MARL benchmarks, using different numbers of agents in cooperative, competitive, and mixed (cooperative-competitive) scenarios. The experimental results illustrate the superior performance of the proposed MAK-TD/SR frameworks compared to their state-of-the-art counterparts.

7.
Sci Rep ; 12(1): 3212, 2022 02 25.
Article En | MEDLINE | ID: mdl-35217712

Novel Coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest Computed Tomography (CT) images can help determine the disease stage, allocate limited healthcare resources efficiently, and make informed treatment decisions. During the pandemic era, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists has become expensive and prone to error, creating an urgent need for practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients, annotated by an expert radiologist. Second, a Deep Neural Network (DNN)-based framework is proposed, referred to as the [Formula: see text], that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. The performance of the proposed [Formula: see text] framework is evaluated through several experiments based on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training and test sets and improve model generalization. The enhanced results show a Dice score of 0.8069 and specificity and sensitivity of 0.9969 and 0.8354, respectively. Furthermore, the results indicate that the [Formula: see text] model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capabilities of the [Formula: see text] model to CT images obtained from a different scanner.


COVID-19/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Radiography, Thoracic , Tomography, X-Ray Computed , Datasets as Topic , Female , Humans , Male , Middle Aged
8.
Philos Trans A Math Phys Eng Sci ; 379(2207): 20200362, 2021 Oct 04.
Article En | MEDLINE | ID: mdl-34398647

Symbiotic autonomous systems (SAS) are advanced intelligent and cognitive systems that exhibit autonomous collective intelligence enabled by coherent symbiosis of human-machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general-AI technologies that either function without human intervention or synergize humans and intelligent machines in coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviours. This paper explores the cognitive and mathematical foundations of SAS. The challenges to seamless human-machine interactions in a hybrid environment are addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, cognitive computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via autonomous knowledge learning systems that symbiotically work between humans and cognitive robots. This article is part of the theme issue 'Towards symbiotic autonomous systems'.

9.
Front Artif Intell ; 4: 598932, 2021.
Article En | MEDLINE | ID: mdl-34113843

The newly discovered Coronavirus Disease 2019 (COVID-19) has been spreading globally and causing hundreds of thousands of deaths around the world since its first emergence in late 2019. The rapid outbreak of this disease has overwhelmed health care infrastructures and raised the need to allocate medical equipment and resources more efficiently. Early diagnosis of this disease enables the rapid separation of COVID-19 and non-COVID cases, helping health care authorities optimize resource allocation plans and prevent the spread of the disease. In this regard, a growing number of studies are investigating the capability of deep learning for early diagnosis of COVID-19. Computed tomography (CT) scans have shown distinctive features and higher sensitivity compared to other diagnostic tests, in particular the current gold standard, i.e., the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Current deep learning-based algorithms are mainly developed based on Convolutional Neural Networks (CNNs) to identify COVID-19 pneumonia cases. CNNs, however, require extensive data augmentation and large datasets to identify detailed spatial relations between image instances. Furthermore, existing algorithms utilizing CT scans either extend slice-level predictions to patient-level ones using a simple thresholding mechanism, or rely on a sophisticated infection segmentation to identify the disease. In this paper, we propose a two-stage fully automated CT-based framework for identification of COVID-19 positive cases, referred to as "COVID-FACT". COVID-FACT utilizes Capsule Networks as its main building blocks and is, therefore, capable of capturing spatial information. In particular, to make the proposed COVID-FACT independent of sophisticated segmentations of the area of infection, slices demonstrating infection are detected in the first stage, and the second stage is responsible for classifying patients into COVID and non-COVID cases.
COVID-FACT detects slices with infection and identifies positive COVID-19 cases using an in-house CT scan dataset containing COVID-19, community acquired pneumonia, and normal cases. Based on our experiments, COVID-FACT achieves an accuracy of 90.82%, a sensitivity of 94.55%, a specificity of 86.04%, and an Area Under the Curve (AUC) of 0.98, while depending on far less supervision and annotation in comparison to its counterparts.
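The three reported rates follow directly from binary confusion counts; the counts below are hypothetical, chosen only so the rates come out close to the figures quoted above.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (TPR), and specificity (TNR) from counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a COVID / non-COVID decision
acc, sen, spe = diagnostic_metrics(tp=52, fn=3, tn=37, fp=6)
print(f"acc={acc:.4f} sen={sen:.4f} spe={spe:.4f}")
```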

10.
Curr Res Food Sci ; 4: 233-249, 2021.
Article En | MEDLINE | ID: mdl-33937871

Food quality and safety are important issues for society as a whole, since they underpin human health, social development, and stability. Ensuring food quality and safety is a complex process, and all stages of food processing must be considered, from cultivation, harvesting, and storage to preparation and consumption. However, these processes are often labour-intensive. Nowadays, the development of machine vision can greatly assist researchers and industries in improving the efficiency of food processing. As a result, machine vision has been widely used in all aspects of food processing. At the same time, image processing is an important component of machine vision. Image processing can take advantage of machine learning and deep learning models to effectively identify the type and quality of food. Subsequently, follow-up designs in the machine vision system can address tasks such as food grading, detecting the locations of defective spots or foreign objects, and removing impurities. In this paper, we provide an overview of the traditional machine learning and deep learning methods, as well as the machine vision techniques, that can be applied to the field of food processing. We present the current approaches and challenges, and the future trends.

11.
Sci Data ; 8(1): 121, 2021 04 29.
Article En | MEDLINE | ID: mdl-33927208

Novel Coronavirus (COVID-19) has drastically overwhelmed more than 200 countries, affecting millions and claiming almost 2 million lives, since its emergence in late 2019. This highly contagious disease can easily spread and, if not controlled in a timely fashion, can rapidly incapacitate healthcare systems. The current standard diagnosis method, the Reverse Transcription Polymerase Chain Reaction (RT-PCR), is time consuming and subject to low sensitivity. Chest Radiography (CXR), the first imaging modality to be used, is readily available and gives immediate results. However, it has notoriously lower sensitivity than Computed Tomography (CT), which can be used efficiently to complement other diagnostic methods. This paper introduces a new COVID-19 CT scan dataset, referred to as COVID-CT-MD, consisting not only of COVID-19 cases, but also of healthy participants and participants infected by Community Acquired Pneumonia (CAP). The COVID-CT-MD dataset, which is accompanied by lobe-level, slice-level, and patient-level labels, has the potential to facilitate COVID-19 research; in particular, COVID-CT-MD can assist in the development of advanced Machine Learning (ML) and Deep Neural Network (DNN) based solutions.


COVID-19/diagnostic imaging , Deep Learning , Tomography, X-Ray Computed , Adult , Aged , Female , Humans , Machine Learning , Male , Middle Aged , Neural Networks, Computer
12.
IEEE Syst J ; 15(4): 5367-5378, 2021 Dec.
Article En | MEDLINE | ID: mdl-35582390

While contact tracing is of paramount importance in preventing the spread of infectious diseases, manual contact tracing is inefficient and time consuming, as those in close contact with infected individuals are informed hours, if not days, later. This article proposes a smart contact tracing (SCT) system utilizing the smartphone's Bluetooth Low Energy signals and machine learning classifiers to automatically detect possible contacts with infectious individuals. SCT's contribution is two-fold: a) classification of the user's contact as high/low-risk using precise proximity sensing, and b) user anonymity using a privacy-preserving communication protocol. To protect the user's privacy, both broadcast and observed signatures are stored locally in the user's smartphone, and the stored signatures are disseminated through a secure database only when a user is confirmed by public health authorities to be infected. Using received signal strength, each smartphone estimates its distance from other users' phones and issues real-time alerts when social distancing rules are violated. Extensive experimentation utilizing real-life smartphone positions and a comparative evaluation of five machine learning classifiers indicate that a decision tree classifier outperforms other state-of-the-art classification methods, with an accuracy of about 90% when two users carry their smartphones in a similar manner. Finally, to facilitate research in this area while contributing to its timely development, the dataset of six experiments with about 123 000 data points is made publicly available.
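A common way to turn received signal strength into a distance estimate, as the proximity-sensing step above requires, is the log-distance path-loss model; the reference power and path-loss exponent below are assumed values for illustration, not SCT's calibrated parameters.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI; tx_power_dbm is the RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def too_close(rssi_dbm, threshold_m=2.0):
    """True when the estimated distance violates a 2 m social-distance rule."""
    return rssi_to_distance(rssi_dbm) < threshold_m

print(rssi_to_distance(-59.0))   # 1.0 m at the reference power
print(too_close(-59.0))          # True: 1 m is closer than 2 m
```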

13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1075-1079, 2020 07.
Article En | MEDLINE | ID: mdl-33018172

Brain tumors are among the deadliest cancers, and their effective treatment is partially dependent on accurate diagnosis of the tumor type. Convolutional neural networks (CNNs), which have been the state of the art in brain tumor classification, fail to identify spatial relations in the image. Capsule networks, proposed to overcome this drawback, are sensitive to miscellaneous backgrounds and cannot manage to focus on the main target. To address this shortcoming, we have recently proposed a capsule network-based architecture capable of taking both brain images and rough tumor boundary boxes as inputs, in order to have access to the surrounding tissue as well as the main target. Similar to other architectures, however, this network requires an extensive search within the space of all possible configurations to find the optimal architecture. To eliminate this need, in this study we propose a boosted capsule network, referred to as BoostCaps, which takes advantage of the ability of boosting methods to handle weak learners by gradually boosting the models. BoostCaps, to the best of our knowledge, is the first capsule network model that incorporates an internal boosting mechanism. Our results show that the proposed BoostCaps framework outperforms its single capsule network counterpart.


Brain Neoplasms , Brain , Dietary Supplements , Humans , Neural Networks, Computer
14.
PLoS One ; 15(10): e0240530, 2020.
Article En | MEDLINE | ID: mdl-33052964

Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge heavily depends on the similarity of the original and target tasks, significant differences in image content and statistics between pathology images and natural images raise several questions: how much knowledge is transferable? Is the transferred information contributed equally by the pre-trained layers? If not, is there a sweet spot in transfer learning that balances the transferred model's complexity and performance? To answer these questions, this paper proposes a framework to quantify the knowledge gained by a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports some interesting observations. In particular, compared to the performance baseline obtained by a random-weight model, although the transferability of off-the-shelf representations from deep layers depends heavily on the specific pathology image set, the general representations generated by early layers do convey transferred knowledge in various image classification applications. The trade-off between transferable performance and transferred model complexity observed in this study encourages further investigation of specific metrics and tools to quantify the effectiveness of transfer learning in the future.


Deep Learning , Image Processing, Computer-Assisted/methods , Pathology/methods , Breast Neoplasms/diagnostic imaging , Computer Simulation , Female , Humans
15.
Pattern Recognit Lett ; 138: 638-643, 2020 Oct.
Article En | MEDLINE | ID: mdl-32958971

Novel Coronavirus disease (COVID-19) abruptly and undoubtedly changed the world as we know it at the end of the 2nd decade of the 21st century. COVID-19 is extremely contagious and quickly spreading globally, making its early diagnosis of paramount importance. Early diagnosis of COVID-19 enables health care professionals and government authorities to break the chain of transmission and flatten the epidemic curve. The common type of COVID-19 diagnosis test, however, requires specific equipment and has relatively low sensitivity. Computed tomography (CT) scans and X-ray images, on the other hand, reveal specific manifestations associated with this disease. Overlap with other lung infections makes human-centered diagnosis of COVID-19 challenging. Consequently, there has been an urgent surge of interest in developing Deep Neural Network (DNN)-based diagnosis solutions, mainly based on Convolutional Neural Networks (CNNs), to facilitate identification of positive COVID-19 cases. CNNs, however, are prone to losing spatial information between image instances and require large datasets. This paper presents an alternative modeling framework based on Capsule Networks, referred to as COVID-CAPS, which is capable of handling small datasets, a capability of significant importance given the sudden and rapid emergence of COVID-19. Our results based on a dataset of X-ray images show that COVID-CAPS has an advantage over previous CNN-based models. COVID-CAPS achieved an accuracy of 95.7%, sensitivity of 90%, specificity of 95.8%, and Area Under the Curve (AUC) of 0.97, while having far fewer trainable parameters than its counterparts. To further improve the diagnosis capabilities of COVID-CAPS, pre-training and transfer learning are utilized based on a new dataset constructed from an external dataset of X-ray images. This is in contrast to existing works on COVID-19 detection, where pre-training is performed based on natural images. Pre-training with a dataset of similar nature further improved accuracy to 98.3% and specificity to 98.6%.
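Capsule networks are commonly trained with the margin loss from the original CapsNet formulation, which penalizes the lengths of the class capsules; whether COVID-CAPS uses exactly these hyper-parameters is an assumption, so this is a generic sketch rather than the paper's training objective.

```python
import numpy as np

def margin_loss(v_lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """v_lengths: capsule output lengths per class; targets: one-hot labels."""
    pos = targets * np.maximum(0.0, m_pos - v_lengths) ** 2
    neg = lam * (1 - targets) * np.maximum(0.0, v_lengths - m_neg) ** 2
    return float(np.sum(pos + neg))

# Two classes (COVID-19 positive / negative); a confident correct
# prediction incurs zero loss, a confident wrong one a large loss.
print(margin_loss(np.array([0.95, 0.05]), np.array([1.0, 0.0])))
print(margin_loss(np.array([0.20, 0.80]), np.array([1.0, 0.0])))
```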

16.
Sci Rep ; 10(1): 12366, 2020 07 23.
Article En | MEDLINE | ID: mdl-32703973

Hand-crafted radiomics has been used to develop models that predict time-to-event clinical outcomes in patients with lung cancer. Hand-crafted features, however, are pre-defined and extracted without taking the desired target into account. Furthermore, accurate segmentation of the tumor is required for the development of a reliable predictive model, which may be a subjective and time-consuming task. To address these drawbacks, we propose a deep learning-based radiomics model for time-to-event outcome prediction, referred to as DRTOP, that takes raw images as inputs and calculates the image-based risk of death or recurrence for each patient. Our experiments on an in-house dataset of 132 lung cancer patients show that the obtained image-based risks are significant predictors of the time-to-event outcomes. Computed Tomography (CT)-based features are predictors of overall survival (OS), with a hazard ratio (HR) of 1.35; distant control (DC), with an HR of 1.06; and local control (LC), with an HR of 2.66. The Positron Emission Tomography (PET)-based features are predictors of OS and recurrence-free survival (RFS), with hazard ratios of 1.67 and 1.18, respectively. The concordance indices of [Formula: see text], [Formula: see text], and [Formula: see text] for predicting the OS, DC, and RFS show that the deep learning-based radiomics model is as accurate as or better than hand-crafted radiomics in predicting the predefined clinical outcomes, the latter having concordance indices of [Formula: see text], [Formula: see text], and [Formula: see text] for the OS, DC, and RFS, respectively. Deep learning-based radiomics has the potential to offer complementary predictive information in the personalized management of lung cancer patients.
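The concordance indices compared above (the bracketed values were lost in extraction) can be computed with Harrell's C-index: a pair of patients is concordant if the one assigned a higher predicted risk experiences the event earlier. This is a minimal O(n²) sketch on made-up data, not the paper's evaluation code.

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored time-to-event data."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable if i fails first (and i's event is observed)
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0      # higher risk failed first
                elif risks[i] == risks[j]:
                    concordant += 0.5      # tied risks count half
    return concordant / comparable

# Toy data: risk perfectly anti-ordered with survival time -> C-index 1.0
print(concordance_index([2, 5, 9], [1, 1, 1], [0.9, 0.5, 0.1]))
```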


Databases, Factual , Deep Learning , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/mortality , Positron-Emission Tomography , Tomography, X-Ray Computed , Aged , Aged, 80 and over , Disease-Free Survival , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Survival Rate
17.
Sci Rep ; 10(1): 7948, 2020 05 14.
Article En | MEDLINE | ID: mdl-32409715

Despite advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity in lung nodule malignancy prediction is also challenging. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new machine learning architectures proposed to overcome the shortcomings of CNNs. Capitalizing on the success of CapsNets in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) multi-scale inputs, capturing the nodule's local features as well as the characteristics of the surrounding tissues; and (iii) a CapsNet-based design, capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81%, respectively.


Computational Biology , Lung Neoplasms/diagnosis , Neural Networks, Computer , Humans
18.
IEEE Trans Image Process ; 29: 250-264, 2020.
Article En | MEDLINE | ID: mdl-31380758

In this paper, we propose a novel design for image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common problem in many imaging applications that suffer from optical imperfections. Numerous deconvolution methods blindly estimate blurring in either inclusive or exclusive forms, but they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition, before images are stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of finite impulse response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the point spread function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
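A loose sketch of the one-shot idea above: a deblurring kernel built as a linear combination of even-derivative FIR filters, here reduced to the identity minus a scaled discrete Laplacian (the classic unsharp-masking special case). The coefficient is an illustrative assumption; the paper derives its coefficients from the estimated PSF statistics.

```python
import numpy as np
from scipy.signal import convolve2d

delta = np.zeros((3, 3))
delta[1, 1] = 1.0                                  # identity (0th derivative)
laplacian = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])           # discrete 2nd derivative
alpha = 0.8                                        # assumed boost coefficient
kernel = delta - alpha * laplacian                 # high-frequency boosting FIR

blurry = np.outer(np.hanning(32), np.hanning(32))  # stand-in "blurry image"
restored = convolve2d(blurry, kernel, mode="same", boundary="symm")
print(kernel.sum())  # 1.0: DC gain of 1, so overall brightness is preserved
```

Because the kernel is applied in a single convolution pass, the correction cost is fixed and independent of any iterative deconvolution, which is the point of the one-shot design.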

19.
Cancers (Basel) ; 11(10)2019 10 22.
Article En | MEDLINE | ID: mdl-31652628

Survival and quality of life of breast cancer patients could be improved by more aggressive chemotherapy for those at high metastasis risk and less intense treatments for low-risk patients. Such personalized treatment cannot currently be achieved due to the insufficient reliability of metastasis risk prognosis. The purpose of this study was, therefore, to identify novel histopathological prognostic markers of metastasis risk through exhaustive computational image analysis of 80 size and shape subsets of epithelial clusters in breast tumors. The group of 102 patients had a median follow-up of 12.3 years, without lymph node spread or systemic treatments. Epithelial cells were stained with the AE1/AE3 pan-cytokeratin antibody cocktail. The size and shape subsets of the stained epithelial cell clusters were defined in each image by use of circularity and size filters and analyzed for prognostic performance. The epithelial areas with the best prognostic performance were uniformly small and round, and could be recognized as individual epithelial cells scattered in the tumor stroma. Their count achieved an area under the receiver operating characteristic curve (AUC) of 0.82, followed by their total area (AUC = 0.77), average size (AUC = 0.63), and circularity (AUC = 0.62). In conclusion, by using computational image analysis as a hypothesis-free discovery tool, this study reveals a histomorphological marker with high prognostic value that is simple, and therefore easy to quantify by visual microscopy.
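The circularity filter mentioned above is the standard shape descriptor 4πA/P², which equals 1 for a perfect circle and decreases for elongated clusters. The threshold values in this sketch are hypothetical, chosen only to show how a "small and round" subset could be selected.

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a perfect circle, < 1 otherwise."""
    return 4 * math.pi * area / perimeter ** 2

def is_small_round_cluster(area, perimeter, max_area=200.0, min_circ=0.8):
    """Hypothetical filter for the small, round epithelial clusters."""
    return area <= max_area and circularity(area, perimeter) >= min_circ

r = 5.0  # a circle of radius 5: area pi*r^2, perimeter 2*pi*r
print(circularity(math.pi * r**2, 2 * math.pi * r))  # 1.0
```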

...