ABSTRACT
Mutant strains of SARS-CoV-2 caused a global surge of infections, including outbreaks in many Chinese cities. In 2020, Zheng et al. proposed a hybrid AI model that accurately predicted the epidemic in Wuhan. As the core of that hybrid model, the ISI method makes two important assumptions to avoid over-fitting; however, these assumptions do not hold for the new mutant strains. In this paper, a more general method, the multi-weight susceptible-infected (MSI) model, is proposed to predict COVID-19 in the Chinese mainland. First, a Gaussian pre-processing method is proposed to smooth data fluctuations while preserving both the cumulative infection count and the trend of the daily infection count. Then, the model is improved in two ways: the grouped multi-parameter strategy is replaced with a multi-weight strategy, and the restriction on the weight distribution of viral infectivity is removed. Experiments on outbreaks in many parts of China from the end of 2021 to May 2022 show that, in China, an individual infected with the Delta or Omicron strain of SARS-CoV-2 can infect others within 3-4 days of becoming infected. In particular, the proposed method effectively predicts the trend of the epidemics in Xi'an, Tianjin, Henan, and Shanghai from December 2021 to May 2022.
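The multi-weight idea can be illustrated with a minimal susceptible-infected update in which each day since infection carries its own infectivity weight. This is a schematic sketch, not the authors' exact MSI formulation; the weight values and infection counts below are invented for illustration.

```python
import numpy as np

def msi_step(susceptible, new_infections, weights, population):
    """One day of a multi-weight SI update (illustrative sketch).

    weights[k] is the (unconstrained) infectivity weight of individuals
    infected k+1 days ago; the MSI model learns such weights without
    imposing a fixed distributional form.
    """
    # Effective infection pressure: weighted sum over recent daily infections,
    # most recent day first.
    pressure = float(np.dot(weights, new_infections[-len(weights):][::-1]))
    # New infections scale with pressure and the susceptible fraction.
    today = pressure * susceptible / population
    return min(today, susceptible)

# Toy run: infectivity concentrated on days 3-4 after infection, consistent
# with the abstract's finding for Delta/Omicron in China.
weights = np.array([0.0, 0.1, 0.9, 0.8, 0.2])   # days 1..5 since infection
history = [10.0, 12.0, 15.0, 18.0, 22.0]        # daily new infections
val = msi_step(susceptible=1e6, new_infections=history,
               weights=weights, population=1e6 + sum(history))
print(val)  # just under 26.9
```

Because each day's weight is fitted independently, a trained model directly reveals where infectivity concentrates (here, days 3-4), which is the kind of conclusion the abstract reports.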
ABSTRACT
Age estimation can aid forensic medicine, diagnosis, and treatment planning in orthodontics and pediatrics. Existing dental age estimation methods rely heavily on specialized knowledge, are highly subjective, and consume considerable time and effort, problems that machine learning techniques are well suited to solve. Feature extraction, the key factor affecting the performance of machine learning models, is usually done in one of two ways: with human intervention, or autonomously without it. However, previous studies have rarely applied both approaches to the same image analysis task. Here, we present two types of convolutional neural networks (CNNs) for dental age estimation: an automated dental stage evaluation model (ADSE model) based on manually defined features, and an automated end-to-end dental age estimation model (ADAE model) that autonomously extracts latent features for dental age estimation. Although the mean absolute error (MAE) of the ADSE model for stage classification is 0.17 stages, its accuracy in dental age estimation is unsatisfactory: its MAE (1.63 years) is only 0.04 years lower than that of the manual dental age estimation method (MDAE). In contrast, the MAE of the ADAE model is 0.83 years, half that of the MDAE. These results show that fully automated feature extraction in a deep learning model, without human intervention, performs better in dental age estimation, markedly increasing both accuracy and objectivity. This suggests that machine learning without human intervention may perform better in medical imaging applications.
Subjects
Machine Learning; Neural Networks, Computer; Child; Humans; Image Processing, Computer-Assisted; Infant; Radiography
ABSTRACT
Automatic detection of thin-cap fibroatheroma (TCFA) is essential to prevent acute coronary syndrome. Hence, this paper proposes a method that detects TCFAs by directly classifying each A-line using multi-view intravascular optical coherence tomography (IVOCT) images. To reduce false positives, a multi-input-output network was developed to perform image-level and A-line-based classification simultaneously, with a contrastive consistency term designed to keep the two tasks consistent. In addition, to capture spatial and global information and recover the complete extent of TCFAs, an architecture and a regional connectivity constraint term are proposed for classifying each A-line of IVOCT images. Experimental results on the 2017 China Computer Vision Conference IVOCT dataset show that the proposed method achieved state-of-the-art performance, with a total score of 88.7±0.88%, an overlap rate of 88.64±0.26%, a precision of 84.34±0.86%, and a recall of 93.67±2.29%.
Subjects
Plaque, Atherosclerotic; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Plaque, Atherosclerotic/diagnostic imaging; Coronary Vessels
ABSTRACT
In this paper, the problem of spatial signature estimation using a uniform linear array (ULA) with unknown sensor gain and phase errors is considered. As is well known, the directions-of-arrival (DOAs) can only be determined up to an unknown rotational angle in this array model. However, the phase ambiguity has no impact on the identification of the spatial signature. Two auto-calibration methods are presented for spatial signature estimation. In our methods, the rotational DOAs and model error parameters are first estimated, and the spatial signature is then calculated. The first method extracts two subarrays from the ULA to construct an estimator; array elements may be used several times within one subarray. The second fully exploits the multiple invariances in the interior of the sensor array, formulating a multidimensional nonlinear problem that is solved with a Gauss-Newton iterative algorithm. The first method provides excellent initial estimates for the second. The effectiveness of the proposed algorithms is demonstrated by several simulation results.
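The Gauss-Newton solver used by the second method can be sketched generically: it minimizes a sum of squared residuals by repeatedly linearizing them. The exponential-fitting example below is a toy stand-in for the paper's multidimensional calibration problem, not the paper's estimator.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50):
    """Generic Gauss-Newton iteration: x <- x - (J^T J)^{-1} J^T r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)          # residual vector r(x)
        J = jacobian(x)          # Jacobian dr/dx, shape (m, n)
        step = np.linalg.solve(J.T @ J, J.T @ r)  # normal-equation step
        x = x - step
    return x

# Toy problem: fit y = exp(a * t) to noiseless data with true a = 0.5.
t = np.linspace(0.0, 2.0, 20)
y = np.exp(0.5 * t)
res = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(res, jac, x0=[0.1])
print(a_hat)  # converges to ~0.5
```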
ABSTRACT
BACKGROUND: Atherosclerotic cardiovascular disease is the leading cause of death worldwide. Early detection of carotid atherosclerosis can prevent the progression of cardiovascular disease. Many (semi-)automatic methods have been designed for carotid vessel wall segmentation and carotid atherosclerosis diagnosis (i.e., lumen segmentation, outer wall segmentation, and carotid atherosclerosis diagnosis) on black-blood magnetic resonance imaging (BB-MRI). However, most of these methods ignore the intrinsic correlation among the different tasks on BB-MRI, leading to limited performance. PURPOSE: Thus, we model the intrinsic correlation among the lumen segmentation, outer wall segmentation, and carotid atherosclerosis diagnosis tasks on BB-MRI using multi-task learning and propose a gated multi-task network (GMT-Net) to perform the three related tasks in a single neural network. METHODS: The GMT-Net is composed of three modules, the sharing module, the segmentation module, and the diagnosis module, which interact with each other to achieve better learning performance. In addition, two new adaptive layers, the gated exchange layer and the gated fusion layer, are presented to exchange and merge branch features. RESULTS: The proposed method is applied to the CAREII dataset (1057 scans) for lumen segmentation, outer wall segmentation, and carotid atherosclerosis diagnosis. It achieves promising segmentation performance (0.9677 Dice for the lumen and 0.9669 Dice for the outer wall) and better diagnostic accuracy for carotid atherosclerosis (0.9516 AUC and 0.9024 accuracy) on the "CAREII test" dataset (106 scans). The results show statistically significant improvements in accuracy and efficiency.
CONCLUSIONS: Even without the reviewer intervention required by previous works, the proposed method automatically segments the lumen and outer wall together and diagnoses carotid atherosclerosis with high performance. The proposed method can be used in clinical trials to relieve radiologists of tedious reading tasks, such as screening reviews that separate normal carotid arteries from atherosclerotic arteries and outlining vessel wall contours.
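As a rough illustration of what a gated fusion layer does, the sketch below blends two branch feature maps with a learned sigmoid gate. The gating parameterization here is an assumption for illustration only; GMT-Net's gated exchange and fusion layers are defined inside the network, and these weights are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_a, feat_b, w_gate, b_gate):
    """Hypothetical gated fusion of two branch feature maps.

    A gate g in (0, 1), computed from both branches, decides per channel
    how much of each branch to keep: fused = g * feat_a + (1 - g) * feat_b.
    """
    g = sigmoid(np.concatenate([feat_a, feat_b], axis=-1) @ w_gate + b_gate)
    return g * feat_a + (1.0 - g) * feat_b

rng = np.random.default_rng(0)
fa, fb = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))  # two branches
w, b = rng.normal(size=(16, 8)) * 0.1, np.zeros(8)         # stand-in weights
fused = gated_fusion(fa, fb, w, b)
print(fused.shape)  # (4, 8)
```

Because the gate is bounded in (0, 1), every fused value stays between the two branch values, so fusion never amplifies either branch.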
Subjects
Cardiovascular Diseases; Carotid Artery Diseases; Humans; Cardiovascular Diseases/pathology; Carotid Arteries/diagnostic imaging; Carotid Arteries/pathology; Carotid Artery Diseases/diagnostic imaging; Carotid Artery Diseases/pathology; Magnetic Resonance Angiography/methods; Magnetic Resonance Imaging/methods
ABSTRACT
BACKGROUND: Liver lesions mainly occur inside the liver parenchyma; they are difficult to locate and have complicated relationships with essential vessels. Thus, preoperative planning is crucial for the resection of liver lesions. Accurate segmentation of the hepatic veins and portal veins (PVs) on computed tomography (CT) images is of great importance for preoperative planning. However, manually labeling vessel masks is laborious and time-consuming, and the labeling results of different clinicians are prone to inconsistencies. Hence, developing an automatic segmentation algorithm for hepatic veins and PVs on CT images has attracted the attention of researchers. Unfortunately, existing deep learning based automatic segmentation methods are prone to misclassifying peripheral vessels. PURPOSE: This study aims to provide a fully automatic and robust semantic segmentation algorithm for hepatic veins and PVs, guiding subsequent preoperative planning. In addition, to address the lack of a public dataset for hepatic and portal vein segmentation, we revise the annotations of the Medical Segmentation Decathlon (MSD) hepatic vessel segmentation dataset and add masks for the hepatic veins (HVs) and PVs. METHODS: We propose a structure with a dual-stream encoder combining convolution and Transformer blocks, named the Dual-stream Hepatic Portal Vein segmentation Network, to extract local features and long-distance spatial information, thereby capturing the anatomical structure of the hepatic and portal veins and avoiding misclassification of adjacent peripheral vessels. In addition, a multi-scale feature fusion block based on dilated convolution is proposed to extract multi-scale local features over enlarged receptive fields, and a multi-level fusing attention module is introduced for efficient context information extraction. Paired t-tests are conducted to evaluate the significance of the Dice differences between the proposed method and the comparison methods.
RESULTS: Two datasets are constructed from the original MSD dataset. For each dataset, 50 cases are randomly selected for model evaluation using 5-fold cross-validation. The results show that our method outperforms state-of-the-art convolutional neural network based and Transformer-based methods. Specifically, on the first dataset, our model reaches 0.815, 0.830, and 0.807 in overall Dice, precision, and sensitivity. The Dice scores for the hepatic and portal veins are 0.835 and 0.796, which also exceed the numeric results of the comparison methods. Almost all p-values of the paired t-tests between the proposed approach and the comparison approaches are smaller than 0.05. On the second dataset, the proposed algorithm achieves 0.749, 0.762, 0.726, 0.835, and 0.796 for overall Dice, precision, sensitivity, Dice for HV, and Dice for PV, of which the first four exceed the comparison methods. CONCLUSIONS: The proposed method effectively solves the problem of misclassifying interlaced peripheral veins in the HV and PV segmentation task and outperforms the comparison methods on the relabeled dataset.
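The Dice coefficient reported throughout these results is a standard overlap metric between a predicted mask and a ground-truth mask; for reference, a minimal implementation:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(dice(pred, gt))  # 2*2 / (3+3) = 0.666...
```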
Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Portal Vein; Tomography, X-Ray Computed; Portal Vein/diagnostic imaging; Image Processing, Computer-Assisted/methods; Humans; Hepatic Veins/diagnostic imaging; Deep Learning; Liver/diagnostic imaging; Liver/blood supply
ABSTRACT
Vessel centerline extraction is essential for carotid stenosis assessment and atherosclerotic plaque identification in clinical diagnosis. It also provides region-of-interest identification and boundary initialization for computer-assisted diagnosis tools. In magnetic resonance imaging (MRI) cross-sectional images, the lumen shape and vascular topology make accurate centerline extraction challenging. To this end, we propose a space-refine framework that exploits the positional continuity of the carotid artery from frame to frame to extract the carotid artery centerline. The proposed framework consists of a detector and a refinement module. Specifically, the detector roughly extracts the carotid lumen region from the original image. Then, the refinement module uses the detector's cascade of regressors to realign the sequence of lumen bounding boxes for each subject, improving lumen localization and further enhancing centerline extraction accuracy. Validated on a large carotid artery dataset, the proposed framework achieves state-of-the-art performance compared to conventional vessel centerline extraction methods and standard convolutional neural network approaches. Clinical relevance: The proposed framework can serve as an important aid for physicians to quantitatively analyze the carotid artery in clinical practice, and as a new paradigm for extracting carotid vessel centerlines in computer-assisted tools.
Subjects
Carotid Arteries; Plaque, Atherosclerotic; Humans; Carotid Arteries/diagnostic imaging; Neural Networks, Computer; Carotid Artery, Common; Plaque, Atherosclerotic/diagnostic imaging; Magnetic Resonance Imaging/methods
ABSTRACT
Automatic detection of thin-cap fibroatheroma (TCFA) on intravascular optical coherence tomography images is essential for the prevention of acute coronary syndrome. However, existing methods need to mark the exact location of TCFAs on each frame as supervision, which is extremely time-consuming and expensive. Hence, a new weakly supervised framework is proposed to detect TCFAs using only image-level tags as supervision. The framework comprises cut, feature extraction, relation, and detection modules. First, based on prior knowledge, a cut module was designed to generate a small number of specific region proposals. Then, to learn global information, a relation module was designed to learn the spatial adjacency and order relationships at the feature level, and an attention-based strategy was introduced in the detection module to effectively aggregate the classification results of region proposals as the image-level predicted score. The results demonstrate that the proposed method surpassed the state-of-the-art weakly supervised detection methods.
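The attention-based aggregation step described above, turning per-proposal classification scores into one image-level score, can be sketched as a softmax-weighted pooling. This is a generic sketch of the strategy, not the paper's network head; all values below are invented.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(proposal_scores, attention_logits):
    """Attention-weighted pooling of region-proposal scores into a single
    image-level predicted score."""
    weights = softmax(np.asarray(attention_logits, dtype=float))
    return float(np.dot(weights, proposal_scores))

# Three proposals: one strongly TCFA-like proposal dominates the image score.
scores = np.array([0.9, 0.1, 0.2])    # per-proposal classification scores
logits = np.array([3.0, -1.0, -1.0])  # learned attention logits (stand-ins)
print(attention_aggregate(scores, logits))  # close to 0.87
```

Unlike max-pooling, this aggregation is differentiable and lets several proposals contribute, which is why attention pooling is common in weakly supervised detection.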
Subjects
Plaque, Atherosclerotic; Humans; Plaque, Atherosclerotic/diagnostic imaging; Tomography, Optical Coherence/methods; Supervised Machine Learning
ABSTRACT
Lymphomas are a group of malignant tumors that develop from lymphocytes and may occur in many organs. Accurately distinguishing lymphoma from solid tumors is therefore of great clinical significance. Because graph structures can capture the topology of the cellular micro-environment, graph convolutional networks (GCNs) have been widely used in pathological image processing. Nevertheless, the softmax classification layer of graph convolutional models cannot drive the learned representations to be compact enough to distinguish certain types of lymphomas from solid tumors with strong morphological analogies on H&E-stained images. To alleviate this problem, a prototype learning based model is proposed, the graph convolutional prototype network (GCPNet). The method follows a patch-to-slide architecture: patch-level classification is performed first, and image-level results are obtained by fusing the patch-level predictions. The classification model combines a graph convolutional feature extractor with a prototype-based classification layer to build more robust feature representations. For model training, a dynamic prototype loss is proposed to give the model different optimization priorities at different stages of training. In addition, a prototype reassignment operation is designed to prevent the model from getting stuck in local minima during optimization. Experiments on a dataset of 183 whole-slide images (WSIs) of gastric mucosa biopsies show that the proposed method outperforms existing methods. Clinical relevance: This work proposes a new deep learning framework tailored to lymphoma recognition in pathological images of gastric mucosal biopsies, differentiating lymphoma, adenocarcinoma, and inflammation.
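The core of a prototype-based classification layer is nearest-prototype assignment: each feature vector is classified by its distance to one learned prototype per class, which encourages compact class clusters. The sketch below shows the inference rule only; the prototypes here are hand-picked stand-ins, not GCPNet's learned ones.

```python
import numpy as np

def prototype_predict(features, prototypes):
    """Assign each feature vector to the class of its nearest prototype
    under squared Euclidean distance."""
    # dists[i, c] = ||features[i] - prototypes[c]||^2
    diff = features[:, None, :] - prototypes[None, :, :]
    dists = np.sum(diff ** 2, axis=-1)
    return np.argmin(dists, axis=1)

protos = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])  # one per class
feats  = np.array([[0.2, -0.1], [4.8, 5.3], [0.4, 4.6]])
print(prototype_predict(feats, protos))  # [0 1 2]
```

Training then pulls features toward their class prototype (the role of the dynamic prototype loss in the abstract), instead of only pushing softmax logits apart.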
Subjects
Lymphoma; Stomach; Humans; Biopsy; Gastric Mucosa; Gastroscopy; Lymphoma/diagnosis; Tumor Microenvironment
ABSTRACT
Automated analysis of the vessel structure in intravascular optical coherence tomography (IVOCT) images is critical to assess the health status of vessels and monitor coronary artery disease progression. However, deep learning based methods usually require well-annotated large datasets, which are difficult to obtain in medical image analysis. Hence, an automatic layer segmentation method based on meta-learning is proposed, which can simultaneously extract the surfaces of the lumen, intima, media, and adventitia using a handful of annotated samples. Specifically, we leverage a bi-level gradient strategy to train a meta-learner that captures the meta-knowledge shared among different anatomical layers and quickly adapts to unknown anatomical layers. Then, a Claw-type network and a contrast consistency loss are designed to better learn the meta-knowledge according to the annotation characteristics of the lumen and anatomical layers. Experimental results on two cardiovascular IVOCT datasets show that the proposed method achieves state-of-the-art performance.
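The bi-level gradient strategy can be illustrated on a toy problem: an inner loop adapts shared parameters to each task with a few gradient steps, and an outer loop updates the shared parameters using the post-adaptation loss. This is a minimal first-order sketch of the general idea (in the spirit of MAML), not the paper's meta-learner; the linear tasks below are invented.

```python
import numpy as np

def inner_adapt(w, x, y, lr=0.1):
    """One inner-loop gradient step on a task's mean squared error
    for a linear model y ~ x @ w."""
    grad = 2 * x.T @ (x @ w - y) / len(y)
    return w - lr * grad

def maml_toy(tasks, meta_lr=0.05, steps=200):
    """First-order bi-level update: the meta-parameters move along the
    post-adaptation gradient averaged over tasks."""
    w = np.zeros(1)
    for _ in range(steps):
        meta_grad = np.zeros_like(w)
        for x, y in tasks:
            w_task = inner_adapt(w, x, y)                      # inner loop
            meta_grad += 2 * x.T @ (x @ w_task - y) / len(y)   # outer grad (first-order)
        w -= meta_lr * meta_grad / len(tasks)
    return w

# Two linear tasks with slopes 1 and 3; the meta-solution sits between them.
x = np.array([[1.0], [2.0], [3.0]])
tasks = [(x, (a * x).ravel()) for a in (1.0, 3.0)]
w = maml_toy(tasks)
print(w)  # close to [2.0]
```

The meta-parameters converge to a point from which one gradient step reaches either task well, the same property the abstract exploits to adapt quickly to an unseen anatomical layer.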
Subjects
Coronary Artery Disease; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Lung
ABSTRACT
Localization is a fundamental and crucial module for autonomous vehicles. Most existing localization methodologies, such as signal-dependent methods (RTK-GPS and Bluetooth), simultaneous localization and mapping (SLAM), and map-based methods, have been applied to outdoor autonomous driving and indoor robot positioning. However, they suffer from severe limitations, such as GPS signal blockage, exploding computational cost in large-scale scenarios, intolerable time delays, and registration divergence in SLAM/map-based methods. In this article, a self-localization framework that relies on neither GPS nor any other wireless signal is proposed. We demonstrate that the proposed homogeneous normal distribution transform algorithm and two-way information interaction mechanism achieve centimeter-level localization accuracy, meeting the instantaneity and robustness requirements of autonomous vehicle localization. In addition, benefiting from hardware/software co-design, the proposed localization approach is lightweight enough to run on an embedded computing system, unlike other LiDAR localization methods that rely on high-performance CPUs/GPUs. Experiments on a public dataset (the Baidu Apollo SouthBay dataset) and in the real world verify the effectiveness and advantages of our approach over similar algorithms.
Subjects
Algorithms; Autonomous Vehicles
ABSTRACT
Head movement during long scan sessions degrades reconstruction quality in positron emission tomography (PET) and introduces artifacts, limiting clinical diagnosis and treatment. Recent deep learning based motion correction work used raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust for test subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference and moving images, explicitly focusing the model on the most relevant information for motion correction: the head region. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved motion correction performance by 58% and 26% in translation and rotation, respectively, in multi-subject testing on HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without depending on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
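The cross-attention mechanism itself is standard scaled dot-product attention in which queries come from one image and keys/values from the other, so the output gathers reference-image information at corresponding locations. Below is a generic sketch without the learned projection matrices; the token shapes are arbitrary stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_feats, value_feats):
    """Scaled dot-product cross-attention: queries from the moving image
    attend over keys/values from the reference image."""
    d = query_feats.shape[-1]
    attn = softmax(query_feats @ key_feats.T / np.sqrt(d), axis=-1)
    return attn @ value_feats

rng = np.random.default_rng(1)
q = rng.normal(size=(5, 16))   # moving-image tokens
k = rng.normal(size=(7, 16))   # reference-image tokens
v = rng.normal(size=(7, 16))
out = cross_attention(q, k, v)
print(out.shape)  # (5, 16)
```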
ABSTRACT
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. Overall, a three-level fusion strategy based on the visual attention mechanism and the driver's visual consciousness is provided for MMW radar and monocular vision fusion to obtain better comprehensive performance. Then, an easily operated experimental method for radar-vision point alignment, requiring neither radar reflection intensity nor special tools, is put forward. Furthermore, a region-searching approach for potential target detection is derived to reduce image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, with edge detection used to help determine obstacle boundaries. The proposed fusion approach is verified through real experiments on on-road vehicle/pedestrian detection. The experimental results show that the proposed method is simple and feasible.
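Shadow-based detection relies on the fact that the shadow under a vehicle is darker than the surrounding road, so a threshold derived from local image statistics can flag it. The sketch below uses a mean-minus-k-sigma rule as a simplified stand-in; the actual statistic and factor in the paper's adaptive thresholding algorithm may differ.

```python
import numpy as np

def adaptive_shadow_threshold(gray_roi, k=0.5):
    """Flag pixels darker than (mean - k * std) of the region of interest
    as shadow candidates (illustrative rule, not the paper's exact one)."""
    mu, sigma = gray_roi.mean(), gray_roi.std()
    thresh = mu - k * sigma
    return gray_roi < thresh  # True where a shadow candidate is

road = np.full((4, 6), 180.0)   # bright road surface
road[2:, 1:4] = 40.0            # dark patch: a vehicle's underside shadow
mask = adaptive_shadow_threshold(road)
print(mask.sum())  # 6 shadow pixels detected
```

Because the threshold is recomputed per region, the rule adapts to lighting changes across the image, which a single global threshold cannot do.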
Subjects
Radar; Vision, Monocular; Algorithms
ABSTRACT
Pseudo-label-based unsupervised domain adaptation (UDA) has gained increasing interest in medical image analysis, aiming to solve the performance degradation of deep neural networks on unseen data. Although it has achieved great success, it still faces two significant challenges: improving the precision of pseudo labels and mitigating the effects of noisy pseudo labels. To address these problems, we propose a novel UDA framework based on label distribution learning, in which the problem is formulated as noisy-label correction: each fixed categorical value (a pseudo label on target data) is converted to a distribution, and both the network parameters and the label distributions are iteratively updated to correct noisy pseudo labels, which are then used to re-train the model. We extensively evaluated our framework on vulnerable plaque detection between two IVOCT datasets. Experimental results show that our UDA framework is effective in improving detection performance on unlabeled target images.
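The conversion and correction steps can be sketched in a few lines: a hard pseudo label is softened into a distribution, then repeatedly pulled toward the model's current prediction. This is a schematic version of the iterative update described above, not the paper's optimization; the smoothing and step values are illustrative.

```python
import numpy as np

def soften(pseudo_label, num_classes, smoothing=0.2):
    """Turn a hard pseudo label into a label distribution (label smoothing)."""
    dist = np.full(num_classes, smoothing / (num_classes - 1))
    dist[pseudo_label] = 1.0 - smoothing
    return dist

def refine_label(dist, model_probs, step=0.5):
    """One correction iteration: move the label distribution toward the
    model's prediction, then renormalize."""
    new = (1 - step) * dist + step * model_probs
    return new / new.sum()

d = soften(pseudo_label=0, num_classes=3)   # noisy hard label: class 0
p = np.array([0.1, 0.8, 0.1])               # model now favors class 1
for _ in range(5):
    d = refine_label(d, p)
print(np.argmax(d))  # the corrected label is 1
```

Because the label is a distribution rather than a fixed one-hot value, the correction is gradual, which limits the damage a single wrong pseudo label can do during re-training.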
Subjects
Image Processing, Computer-Assisted; Plaque, Atherosclerotic; Humans; Neural Networks, Computer; Plaque, Amyloid
ABSTRACT
Clinically, fundus fluorescein angiography (FA) is a more common means of detecting diabetic retinopathy (DR), since DR appears with much higher contrast in FA than in color fundus (CF) images. However, acquiring FA carries a risk of death due to allergy to the fluorescent dye. Thus, in this paper, we explore a novel unpaired CycleGAN-based model for synthesizing FA from CF, in which strict structure-similarity constraints are employed to guarantee an accurate mapping from one domain to the other. First, a triple multi-scale network architecture with multi-scale inputs, multi-scale discriminators, and multi-scale cycle consistency losses is proposed to enhance the similarity between the two retinal modalities across scales. Second, a self-attention mechanism is introduced to improve the adaptive domain-mapping ability of the model. Third, to further tighten the constraints at the feature level, a quality loss is employed between each generation and reconstruction step. Qualitative examples, as well as quantitative evaluations, are provided to support the robustness and accuracy of our proposed method.
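A multi-scale cycle consistency loss compares the original and cycle-reconstructed images at several resolutions instead of only the full one. The sketch below shows the general shape of such a loss with average-pool downsampling and an L1 penalty; per-scale weights and the exact pyramid used in the paper are omitted.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool downsampling to build an image pyramid."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def multiscale_cycle_loss(original, reconstructed, scales=(1, 2, 4)):
    """L1 cycle-consistency penalty summed over several resolutions."""
    return sum(np.abs(downsample(original, s) - downsample(reconstructed, s)).mean()
               for s in scales)

rng = np.random.default_rng(2)
cf = rng.random((8, 8))                              # toy CF image
perfect = multiscale_cycle_loss(cf, cf.copy())       # exact reconstruction
noisy = multiscale_cycle_loss(cf, cf + 0.1)          # biased reconstruction
print(perfect, noisy)  # 0.0 and about 0.3
```

The coarse scales penalize global structure drift while the fine scale penalizes pixel-level error, which is why comparing at multiple resolutions tightens the structure constraint.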
Subjects
Diabetic Retinopathy; Retina; Attention; Diabetic Retinopathy/diagnosis; Fluorescein Angiography; Fundus Oculi; Humans; Retina/diagnostic imaging
ABSTRACT
The coronavirus disease 2019 (COVID-19) outbreak that began in late December 2019 is gradually being brought under control in China but is still spreading rapidly in many other countries and regions worldwide. Prediction research on the development and spread of the epidemic is urgently needed. In this article, a hybrid artificial-intelligence (AI) model is proposed for COVID-19 prediction. First, because traditional epidemic models treat all individuals with coronavirus as having the same infection rate, an improved susceptible-infected (ISI) model is proposed to estimate the variation of infection rates for analyzing transmission laws and development trends. Second, considering the effects of prevention and control measures and the increase in public prevention awareness, a natural language processing (NLP) module and a long short-term memory (LSTM) network are embedded into the ISI model to build the hybrid AI model for COVID-19 prediction. Experimental results on epidemic data from several typical provinces and cities in China show that individuals with coronavirus have a higher infection rate within the third to eighth day after infection, which is more consistent with the actual transmission laws of the epidemic. Moreover, compared with traditional epidemic models, the proposed hybrid AI model significantly reduces prediction errors, achieving mean absolute percentage errors (MAPEs) of 0.52%, 0.38%, 0.05%, and 0.86% for the next six days in Wuhan, Beijing, Shanghai, and nationwide, respectively.
Subjects
Artificial Intelligence; Betacoronavirus; Coronavirus Infections/epidemiology; Models, Statistical; Pneumonia, Viral/epidemiology; COVID-19; China/epidemiology; Humans; Natural Language Processing; Pandemics; SARS-CoV-2
ABSTRACT
PURPOSE: Early detection of carotid atherosclerosis on vessel wall (VW) magnetic resonance imaging (VW-MRI) can prevent the progression of cardiovascular disease. However, manual inspection of VW-MRI images is cumbersome and has low reproducibility. Therefore, in this paper, using convolutional neural networks (CNNs), we develop a deep morphology aided diagnosis (DeepMAD) network for automated segmentation of the carotid artery vessel wall and automated diagnosis of carotid atherosclerosis on black-blood (BB) VW-MRI (i.e., T1-weighted MRI) in a slice-by-slice manner. METHODS: The proposed DeepMAD network consists of a segmentation subnetwork and a diagnosis subnetwork that perform the segmentation and diagnosis tasks on BB-VW-MRI images, with the manually labeled lumen area, outer wall area, and lesion types based on the modified American Heart Association (AHA) criteria used as ground truth. Specifically, a deep U-shaped CNN with a weighted fusion layer is designed as the segmentation subnetwork, in which the lumen and outer wall areas are simultaneously segmented under the supervision of a triple Dice loss to provide a vessel wall map as morphological information. Then, an image stream from the BB-VW-MRI image and a morphology stream from the obtained vessel wall map are extracted by two deep CNNs and combined in the diagnosis subnetwork to produce the atherosclerosis diagnosis. In addition, a triple input set is formed from three carotid regions of interest (ROIs) in three consecutive slices of the MRI sequence and fed to the DeepMAD network, where the first and last slices serve as adjacent slices providing 2.5D spatial information along the carotid artery centerline for the intermediate slice, which is the target slice for segmentation and diagnosis in this study.
RESULTS: Compared to other existing methods, the DeepMAD network achieves promising segmentation performance (0.9594 Dice for the lumen and 0.9657 Dice for the outer wall) and better diagnostic accuracy for carotid atherosclerosis (0.9503 AUC and 0.8916 accuracy) on a test dataset (including unseen subjects) from the same source as the training dataset. In addition, the trained DeepMAD model can be successfully transferred to another test dataset for segmentation and diagnosis with remarkable performance (0.9475 Dice for the lumen, 0.9542 Dice for the outer wall, and 0.9227 AUC and 0.8679 accuracy for diagnosis). CONCLUSIONS: Even without the reviewer intervention required by previous works, the proposed DeepMAD network automatically segments the lumen and outer wall together and diagnoses carotid atherosclerosis with high performance. The DeepMAD network can be used in clinical trials to relieve radiologists of tedious reading tasks, such as screening reviews that separate normal carotid arteries from atherosclerotic arteries and outlining vessel wall contours.
Subjects
Carotid Arteries/diagnostic imaging; Carotid Arteries/pathology; Carotid Artery Diseases/diagnostic imaging; Carotid Artery Diseases/pathology; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Humans
ABSTRACT
Fine-tuning the nanoscale morphology of the active layers in polymer solar cells (PSCs) through various techniques plays a vital role in improving photovoltaic performance. However, for emerging nonfullerene (NF) PSCs, morphology optimization of the active-layer films empirically follows methods originally developed for fullerene-based blends and lacks systematic study. In this work, two solid additives with different volatilities, SA-4 and SA-7, are applied to investigate their influence on the morphology and photovoltaic performance of NF-PSCs. Although both solid additives effectively promote molecular packing of the NF acceptors, owing to the higher volatility of SA-4, devices processed with SA-4 exhibit a power conversion efficiency of 13.5%, higher than that of the control devices, whereas devices processed with SA-7 perform poorly. A series of detailed morphological analyses shows that the volatilization of SA-4 after thermal annealing benefits the self-assembly packing of the acceptors, while the residue left by the incomplete volatilization of SA-7 has a negative effect on the film morphology. These results demonstrate the feasibility of applying volatilizable solid additives, provide deeper insight into the working mechanism, and establish guidelines for the further design of solid additive materials.
ABSTRACT
A self-driven closed-loop parallel testing system implements more challenging tests to accelerate evaluation and development of autonomous vehicles.
ABSTRACT
Previous studies have shown that vulnerable plaque is a major factor leading to the onset of acute coronary syndrome (ACS). Recognizing vulnerable plaques early is essential for cardiologists treating the illness. However, this task often comes with the challenges of insufficient annotated data and subtle differences between lesion regions and normal regions. In this paper, we apply a visual attention model with a deep neural network to improve the performance of vulnerable plaque recognition. Our method rests on two key ideas: 1) using a top-down attention model to extract salient regions (blood vessels) according to the doctor's prior knowledge, and 2) employing a multi-task neural network to complete the recognition task. The first branch, a typical classification task, distinguishes whether an image contains vulnerable plaques; the other branch uses column-wise segmentation to locate them. We verified the effectiveness of our proposed method on the dataset provided by the 2017 CCCV-IVOCT Challenge, where it obtains good performance.