Results 1 - 20 of 41
1.
Chembiochem ; 23(19): e202200399, 2022 10 06.
Article in English | MEDLINE | ID: mdl-35920326

ABSTRACT

Pathophysiological functions of proteins critically depend on both their chemical composition, including post-translational modifications, and their three-dimensional structure, commonly referred to as the structure-activity relationship. Current analytical methods, like capillary electrophoresis or mass spectrometry, suffer from limitations, such as poor detection of unexpected modifications at low abundance and insensitivity to conformational changes. Building on previous enzyme-based analytical methods, we here introduce a fluorescence-based enzyme cascade (fEC), which can detect diverse chemical and conformational variations in protein samples and assemble them into digital databases. Together with complementary analytical methods, an automated fEC analysis established unique modification-function relationships, which can be expanded to a proteome-wide scale, i.e., a functionally annotated modificatome. The fEC offers diverse applications, including hypersensitive biomarker detection in complex samples.


Subjects
Protein Processing, Post-Translational; Proteome; Databases, Factual; Databases, Protein; Mass Spectrometry/methods; Proteome/analysis
2.
Sensors (Basel) ; 22(7)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35408311

ABSTRACT

Compression is a way of encoding digital data so that it occupies less storage and requires less network bandwidth for transmission, which is an imperative need for iris recognition systems given the large amounts of data involved. Deep neural networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state of the art in image compression, yet the ability of these schemes to preserve unique biometric traits has been questioned when they are used within the corresponding recognition systems. For the first time, we thoroughly investigate the compression effectiveness of DSSLIC, a deep-learning-based image compression model particularly well suited for iris data compression, along with an additional deep-learning-based lossy image compression technique. In particular, we relate full-reference image quality, measured in terms of the Multi-Scale Structural Similarity Index (MS-SSIM) and Local Feature Based Visual Security (LFBVS), as well as no-reference image quality, measured in terms of the Blind Reference-less Image Spatial Quality Evaluator (BRISQUE), to the recognition scores obtained by a set of concrete recognition systems. We further compare the performance of the DSSLIC model against several state-of-the-art (non-learning-based) lossy image compression techniques, including the ISO standard JPEG2000, JPEG, the H.265-derived BPG, HEVC, VVC, and AV1, to identify the compression algorithm best suited for this purpose. The experimental results show superior compression and promising recognition performance of the model over all other techniques on different iris databases.
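As a rough illustration of the full-reference quality measurement mentioned above, the sketch below computes a simplified, single-scale, global SSIM value in pure Python. The real MS-SSIM operates on local sliding windows across multiple scales; the pixel values here are invented for illustration.

```python
# Minimal single-scale SSIM sketch (global statistics, no sliding window).
# Real MS-SSIM uses local windows and multiple scales; this is only a
# simplified illustration of the quality index cited in the abstract.

def ssim_global(img_a, img_b, data_range=255.0):
    """Global SSIM value for two equal-sized grayscale images
    given as flat lists of pixel intensities."""
    n = len(img_a)
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((p - mu_a) ** 2 for p in img_a) / n
    var_b = sum((p - mu_b) ** 2 for p in img_b) / n
    cov = sum((a - mu_a) * (b - mu_b) for a, b in zip(img_a, img_b)) / n
    c1 = (0.01 * data_range) ** 2   # stabilising constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# Identical images score 1.0; compression artefacts push the score down.
original = [10, 50, 90, 130, 170, 210]       # invented pixel values
compressed = [12, 48, 92, 128, 172, 208]     # mild distortion of the above
print(ssim_global(original, original))       # 1.0
print(ssim_global(original, compressed))     # slightly below 1.0
```

The study then correlates such quality scores with the recognition scores of the biometric systems to judge whether a codec preserves the iris texture.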


Subjects
Data Compression; Algorithms; Data Compression/methods; Databases, Factual; Image Processing, Computer-Assisted; Iris; Neural Networks, Computer
3.
Sensors (Basel) ; 21(7)2021 Mar 24.
Article in English | MEDLINE | ID: mdl-33805005

ABSTRACT

Recent developments enable biometric recognition systems to be available as mobile solutions or even to be integrated into modern smartphone devices. Thus, smartphone devices can be used as mobile fingerprint image acquisition devices, and it has become feasible to process fingerprints on these devices, which helps police authorities carry out identity verification. In this paper, we provide a comprehensive and in-depth engineering study of the different stages of the fingerprint recognition toolchain. The insights gained throughout this study serve as guidance for future work towards developing a contactless mobile fingerprint solution based on the iPhone 11, working without any additional hardware. The targeted solution will be capable of acquiring four fingers at once (all except the thumb) in a contactless manner, automatically segmenting the fingertips, pre-processing them (including a specific enhancement), and thus enabling fingerprint comparison against contact-based datasets. For fingertip detection and segmentation, various traditional handcrafted feature-based approaches as well as deep-learning-based ones are investigated. Furthermore, a run-time analysis and first results on the biometric recognition performance are included.

4.
Sensors (Basel) ; 19(22)2019 Nov 17.
Article in English | MEDLINE | ID: mdl-31744197

ABSTRACT

Vascular pattern based biometric recognition is gaining more and more attention, with a trend towards contactless acquisition. An important requirement for conducting research in vascular pattern recognition is the availability of suitable datasets. Such datasets can be established using a suitable biometric capturing device. A sophisticated capturing device design is important for good image quality and, consequently, for a decent recognition rate. We propose a novel contactless capturing device design, including technical details of its individual parts. Our capturing device is suitable for finger and hand vein image acquisition and is able to acquire palmar finger vein images using light transmission as well as palmar hand vein images using reflected light. An experimental evaluation using several well-established vein recognition schemes on a dataset acquired with the proposed capturing device confirms its good image quality and competitive recognition performance. This challenging dataset, one of the first publicly available contactless finger and hand vein datasets, is published as well.


Subjects
Biometry; Fingers/diagnostic imaging; Hand/diagnostic imaging; Veins/diagnostic imaging; Algorithms; Fingers/blood supply; Hand/blood supply; Humans; Image Processing, Computer-Assisted; Pattern Recognition, Automated; Veins/physiology
5.
Pattern Recognit ; 48(8): 2633-2644, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26240440

ABSTRACT

Local Binary Patterns (LBPs) have been used in a wide range of texture classification scenarios and have proven to provide a highly discriminative feature representation. A major limitation of LBP is its sensitivity to affine transformations. In this work, we present a scale- and rotation-invariant computation of LBP. Rotation invariance is achieved by explicit alignment of features at the extraction level, using a robust estimate of global orientation. Scale-adapted features are computed with reference to the estimated scale of an image, based on the distribution of scale-normalized Laplacian responses in a scale-space representation. Intrinsic-scale adaptation is performed to compute features independent of the intrinsic texture scale, leading to significantly increased discriminative power for a large number of texture classes. In a final step, the rotation- and scale-invariant features are combined in a multi-resolution representation, which significantly improves classification accuracy in texture classification scenarios involving scaling and rotation.
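To make the baseline descriptor concrete, here is a minimal sketch of the plain (non-invariant) 8-neighbour LBP operator that such work builds on, plus the classic circular-bit-shift trick for rotation invariance. Note that the paper above achieves rotation invariance differently, via global orientation alignment; the toy image is invented.

```python
# Basic 8-neighbour LBP: each neighbour contributes one bit, set to 1
# if its intensity is >= the centre pixel's intensity.

def lbp_code(image, r, c):
    """8-bit LBP code for pixel (r, c) of a 2-D intensity grid."""
    centre = image[r][c]
    # clockwise neighbour offsets, starting top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_ri(code, bits=8):
    """Classic rotation-invariant LBP: the minimum over all circular
    bit rotations of the code (Ojala et al.'s riLBP variant)."""
    mask = (1 << bits) - 1
    return min(((code >> k) | (code << (bits - k))) & mask for k in range(bits))

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))      # top-row neighbours >= 5 -> bits 0,1,2 -> 7
print(lbp_ri(0b10000011))       # same 3-bit run, rotated -> canonical 7
```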

6.
Holz Roh Werkst ; 81(3): 669-683, 2023.
Article in English | MEDLINE | ID: mdl-37070119

ABSTRACT

The proof of origin of wood logs is becoming more and more important. In the context of Industry 4.0, and to combat illegal logging, there is an increased interest in tracking each individual log. Previous publications on wood log tracing used image data from logs, but their experimental setups cannot simulate a practical application in which logs are tracked between different stages of the wood processing chain, e.g., from the forest to the sawmill. In this work, we employ image data from the same 100 logs acquired at different stages of the wood processing chain (two datasets at the forest, one at a laboratory, and two at the sawmill, including one acquired with a CT scanner). Cross-dataset wood tracking experiments are conducted using (a) the two forest datasets, (b) one forest dataset and the RGB sawmill dataset, and (c) different RGB datasets and the CT sawmill dataset. In our experiments we employ two CNN-based methods, two shape descriptors, and two methods from the biometric fields of iris and fingerprint recognition. We show that wood log tracing between different stages of the wood processing chain is feasible, even if the images at different stages are obtained from different image domains (RGB-CT). However, it only works if the log cross sections from different stages of the wood processing chain either offer good visibility of the annual ring pattern or share the same woodcut pattern.

7.
Signal Process Image Commun ; 27(2-2): 192-207, 2012 Feb.
Article in English | MEDLINE | ID: mdl-26869746

ABSTRACT

Universal Multimedia Access (UMA) calls for solutions where content is created once and subsequently adapted to given requirements. With regard to UMA and scalability, which is often required due to the wide variety of end clients, the best suited codecs are wavelet-based ones (like the MC-EZBC) due to their inherently high number of scaling options. However, most transport technologies for delivering video to end clients are targeted toward the H.264/AVC standard or, if scalability is required, H.264/SVC. In this paper we introduce a mapping of the MC-EZBC bitstream to existing H.264/SVC-based streaming and scaling protocols. This enables the use of highly scalable wavelet-based codecs on the one hand and the utilization of already existing network technologies without incurring high implementation costs on the other. Furthermore, we evaluate different scaling options in order to choose the best option for given requirements. Additionally, we evaluate different encryption options based on transport and bitstream encryption for use cases where digital rights management is required.

8.
J Imaging ; 8(5)2022 May 23.
Article in English | MEDLINE | ID: mdl-35621912

ABSTRACT

Finger vein recognition has evolved into a major biometric trait in recent years. Despite various improvements in recognition accuracy and usability, finger vein recognition is still far from perfect, as it suffers from low-contrast images and other imaging artefacts. Three-dimensional or multi-perspective finger vein recognition technology provides a way to tackle some of the current problems, especially finger misplacement and rotations. In this work we present a novel multi-perspective finger vein capturing device that is based on mirrors, in contrast to most existing devices, which are usually based on multiple cameras. This new device uses only a single camera, a single illumination module, and several mirrors to capture the finger at different rotational angles. To motivate the need for this new device, we first summarise the state of the art in multi-perspective finger vein recognition and identify the potential problems and shortcomings of the current devices.

9.
Comput Med Imaging Graph ; 86: 101798, 2020 12.
Article in English | MEDLINE | ID: mdl-33075676

ABSTRACT

In this work we present a technique to deal with one of the biggest problems for the application of convolutional neural networks (CNNs) in the area of computer-assisted endoscopic image diagnosis: the insufficient amount of training data. Based on patches from endoscopic images of colonic polyps with given label information, our proposed technique acquires additional (labeled) training data by tracking the area shown in the patches through the corresponding endoscopic videos and by extracting additional image patches from frames showing these areas. Similar to widely used augmentation strategies, additional training data is thus produced by adding images with different orientations, scales, and points of view than the original images. However, contrary to augmentation techniques, we do not artificially produce image data but use real image data from videos under different image recording conditions (different viewpoints and image qualities). By means of our proposed method, and by filtering out all extracted images with insufficient image quality, we are able to increase the amount of labeled image data by a factor of 39. We show that our proposed method clearly and consistently improves the performance of CNNs.


Subjects
Colonic Polyps; Neural Networks, Computer; Colonic Polyps/diagnostic imaging; Diagnosis, Computer-Assisted; Humans; Image Processing, Computer-Assisted
10.
Comput Biol Med ; 117: 103592, 2020 02.
Article in English | MEDLINE | ID: mdl-32072961

ABSTRACT

OBJECTIVE: Differential diagnosis of mild cognitive impairment (MCI) and temporal lobe epilepsy (TLE) is a debated issue, specifically because these conditions may coincide in the elderly population. We evaluate automated differential diagnosis based on characteristics derived from structural brain MRI of different brain regions. METHODS: In 22 healthy controls, 19 patients with MCI, and 17 patients with TLE we used scale invariant feature transform (SIFT), local binary patterns (LBP), and wavelet-based features and investigated their predictive performance for MCI and TLE. RESULTS: The classification based on SIFT features resulted in an accuracy of 81% for MCI vs. TLE and reasonable generalizability. Local binary patterns yielded satisfactory diagnostic performance with up to 94.74% sensitivity and 88.24% specificity in the right thalamus for the distinction of MCI vs. TLE, but with limited generalizability. Wavelet features yielded similar results to LBP, with 94.74% sensitivity and 82.35% specificity, but generalized better. SIGNIFICANCE: Features beyond volume analysis are a valid approach when applied to specific regions of the brain. The most significant information could be extracted from the thalamus, frontal gyri, and temporal regions, among others. These results suggest that analysis of changes of the central nervous system should not be limited to the most typical regions of interest, such as the hippocampus and parahippocampal areas. Region-independent approaches can add considerable information for diagnosis. We emphasize the need to characterize generalizability in future studies, as our results demonstrate that not doing so can lead to overestimation of classification results. LIMITATIONS: The data used within this study allow for separation of MCI and TLE subjects using a simple age threshold. While we present a strong indication that the presented method is age-invariant and therefore agnostic to this situation, new data would be needed for a rigorous empirical assessment of these findings.
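For concreteness, sensitivity/specificity pairs like those reported above come from binary confusion counts. A hedged sketch with invented labels, taking TLE as the positive class:

```python
# Sensitivity and specificity from binary predictions.
# The labels and predictions below are made up for illustration; they are
# not the study's data.

def sens_spec(y_true, y_pred, positive="TLE"):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

y_true = ["TLE"] * 4 + ["MCI"] * 4
y_pred = ["TLE", "TLE", "TLE", "MCI", "MCI", "MCI", "TLE", "MCI"]
print(sens_spec(y_true, y_pred))   # (0.75, 0.75)
```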


Subjects
Cognitive Dysfunction; Epilepsy, Temporal Lobe; Aged; Cognitive Dysfunction/diagnostic imaging; Epilepsy, Temporal Lobe/diagnostic imaging; Hippocampus; Humans; Magnetic Resonance Imaging; Neuroimaging
11.
Comput Intell Neurosci ; 2020: 8915961, 2020.
Article in English | MEDLINE | ID: mdl-32549888

ABSTRACT

Cognitive decline is a severe concern of patients with mild cognitive impairment. Also, in patients with temporal lobe epilepsy, memory problems are frequently encountered and potentially progressive. On the background of a unifying hypothesis for cognitive decline, we merged knowledge from dementia and epilepsy research in order to identify biomarkers with a high predictive value for cognitive decline across and beyond these groups that can be fed into intelligent systems. We prospectively assessed patients with temporal lobe epilepsy (N = 9), mild cognitive impairment (N = 19), and subjective cognitive complaints (N = 4), as well as healthy controls (N = 18). All had structural cerebral MRI, EEG at rest and during declarative verbal memory performance, and a neuropsychological assessment, which was repeated after 18 months. Cognitive decline was defined as significant change on neuropsychological subscales. We extracted volumetric and shape features from MRI and brain network measures from EEG and fed these features, alongside baseline neuropsychological testing, into a machine learning framework with feature subset selection and 5-fold cross-validation. Out of 50 patients, 27 had a decline over time in executive functions, 23 in visual-verbal memory, 23 in divided attention, and 7 patients had an increase in depression scores. The best sensitivity/specificity for decline was 72%/82% for executive functions, based on a feature combination of MRI volumetry and EEG partial coherence during recall of memories; 95%/74% for visual-verbal memory, by combination of MRI wavelet features and neuropsychology; 84%/76% for divided attention, by combination of MRI wavelet features and neuropsychology; and 81%/90% for increase of depression, by combination of the EEG partial directed coherence factor at rest and neuropsychology.
Combining information from EEG, MRI, and neuropsychology in order to predict neuropsychological changes in a heterogeneous population could create a more general model of cognitive performance decline.
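The 5-fold cross-validation used in the framework above can be sketched as a simple index generator; no ML library is assumed, and the fold assignment (round-robin) is one common choice among several.

```python
# Minimal k-fold cross-validation index generator: every sample appears
# in exactly one test fold, and in the training set of all other folds.

def k_fold_indices(n_samples, k=5):
    """Yield (train_indices, test_indices) pairs for k folds,
    assigning samples to folds round-robin."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield sorted(train), sorted(test)

splits = list(k_fold_indices(10, k=5))
print(len(splits))      # 5
print(splits[0][1])     # [0, 5]
```

In the study, a classifier would be trained on each train split and evaluated on the matching test split, with sensitivity/specificity averaged over the five folds.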


Subjects
Cognition/physiology; Cognitive Dysfunction/psychology; Epilepsy, Temporal Lobe/psychology; Memory Disorders/psychology; Attention/physiology; Electroencephalography/methods; Humans; Magnetic Resonance Imaging/methods; Memory/physiology; Mental Recall/physiology; Neuropsychological Tests
12.
Cells ; 8(10)2019 10 17.
Article in English | MEDLINE | ID: mdl-31627327

ABSTRACT

The lymphocyte function-associated antigen 1 (LFA-1) is a member of the beta2-integrin family and plays a pivotal role in T cell activation and leukocyte trafficking under inflammatory conditions. Blocking LFA-1 has reduced or aggravated inflammation depending on the inflammation model. To investigate the effect of LFA-1 in myocarditis, mice with experimental autoimmune myocarditis (EAM) were treated with a function-blocking anti-LFA-1 antibody from day 1 of disease until day 21, the peak of inflammation. Cardiac inflammation was evaluated by measuring infiltration of leukocytes into the inflamed cardiac tissue using histology and flow cytometry and was assessed by analysis of the heart weight/body weight ratio. LFA-1 antibody treatment severely enhanced leukocyte infiltration, in particular infiltration of CD11b+ monocytes, F4/80+ macrophages, CD4+ T cells, Ly6G+ neutrophils, and CD133+ progenitor cells, at the peak of inflammation, which was accompanied by an increased heart weight/body weight ratio. Thus, blocking LFA-1 starting at the time of immunization severely aggravated acute cardiac inflammation in the EAM model.


Subjects
Anti-Bacterial Agents/pharmacology; Autoimmune Diseases/immunology; Autoimmune Diseases/pathology; Lymphocyte Function-Associated Antigen-1/metabolism; Autoimmune Diseases of the Nervous System, Experimental/immunology; Autoimmune Diseases of the Nervous System, Experimental/pathology; AC133 Antigen/metabolism; Animals; Body Weight/drug effects; CD11b Antigen/metabolism; CD4-Positive T-Lymphocytes/drug effects; CD4-Positive T-Lymphocytes/metabolism; Flow Cytometry; Inflammation/immunology; Inflammation/pathology; Leukemic Infiltration/immunology; Leukemic Infiltration/pathology; Macrophages/drug effects; Macrophages/metabolism; Male; Mice; Mice, Inbred BALB C; Monocytes/drug effects; Monocytes/metabolism; Neutrophils/drug effects; Neutrophils/metabolism; Organ Size/drug effects; Stem Cells/drug effects; Stem Cells/metabolism
13.
World J Gastroenterol ; 25(10): 1197-1209, 2019 Mar 14.
Article in English | MEDLINE | ID: mdl-30886503

ABSTRACT

BACKGROUND: It was shown in previous studies that high definition endoscopy, high magnification endoscopy and image enhancement technologies, such as chromoendoscopy and digital chromoendoscopy [narrow-band imaging (NBI), i-Scan] facilitate the detection and classification of colonic polyps during endoscopic sessions. However, there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps. In this work, we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging. AIM: To assess which endoscopic imaging modalities are best suited for the computer-assisted staging of colonic polyps. METHODS: In our experiments, we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions. For this purpose, we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database. The image databases were obtained using different imaging modalities. Two databases were obtained by high-definition endoscopy in combination with i-Scan technology (one with chromoendoscopy and one without chromoendoscopy). Three databases were obtained by high-magnification endoscopy (two databases using narrow band imaging and one using chromoendoscopy). The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis. RESULTS: Generally, it is feature-dependent which imaging modalities achieve high results and which do not. For the high-definition image databases, we achieved overall classification rates of up to 79.2% with chromoendoscopy and 88.9% without chromoendoscopy. In the case of the database obtained by high-magnification chromoendoscopy, the classification rates were up to 81.4%. 
For the combination of high-magnification endoscopy with NBI, results of up to 97.4% for one database and up to 84% for the other were achieved. Non-neoplastic lesions were generally classified more accurately than neoplastic lesions. It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and can partly have a stronger effect on the staging results than the imaging modality used. CONCLUSION: Chromoendoscopy has a negative impact on the results of the methods. NBI is better suited than chromoendoscopy. High-definition and high-magnification endoscopy are equally suited.


Subjects
Colonic Polyps/diagnostic imaging; Colonoscopy/methods; Colorectal Neoplasms/prevention & control; Diagnosis, Computer-Assisted/methods; Precancerous Conditions/diagnostic imaging; Colonic Polyps/pathology; Coloring Agents/administration & dosage; Humans; Image Enhancement/methods; Narrow Band Imaging/methods; Precancerous Conditions/pathology; Video Recording/methods
14.
J Exp Med ; 216(2): 350-368, 2019 02 04.
Article in English | MEDLINE | ID: mdl-30647120

ABSTRACT

Heart failure due to dilated cardiomyopathy is frequently caused by myocarditis. However, the pathogenesis of myocarditis remains incompletely understood. Here, we report the presence of neutrophil extracellular traps (NETs) in cardiac tissue of patients and mice with myocarditis. Inhibition of NET formation in experimental autoimmune myocarditis (EAM) of mice substantially reduces inflammation in the acute phase of the disease. Targeting the cytokine midkine (MK), which mediates NET formation in vitro, not only attenuates NET formation in vivo and the infiltration of polymorphonuclear neutrophils (PMNs) but also reduces fibrosis and preserves systolic function during EAM. Low-density lipoprotein receptor-related protein 1 (LRP1) acts as the functionally relevant receptor for MK-induced PMN recruitment as well as NET formation. In summary, NETosis substantially contributes to the pathogenesis of myocarditis and drives cardiac inflammation, probably via MK, which promotes PMN trafficking and NETosis. Thus, MK as well as NETs may represent novel therapeutic targets for the treatment of cardiac inflammation.


Subjects
Autoimmune Diseases/immunology; Cell Movement/immunology; Extracellular Traps/immunology; Midkine/immunology; Myocarditis/immunology; Myocardium/immunology; Neutrophils/immunology; Animals; Autoimmune Diseases/genetics; Autoimmune Diseases/pathology; Cell Movement/genetics; Extracellular Traps/genetics; Humans; Low Density Lipoprotein Receptor-Related Protein-1/genetics; Low Density Lipoprotein Receptor-Related Protein-1/immunology; Mice; Mice, Transgenic; Midkine/genetics; Myocarditis/genetics; Myocarditis/pathology; Myocardium/pathology; Neutrophils/pathology; Receptors, LDL/genetics; Receptors, LDL/immunology; Tumor Suppressor Proteins/genetics; Tumor Suppressor Proteins/immunology
15.
J Med Imaging (Bellingham) ; 5(3): 034504, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30840751

ABSTRACT

We propose an approach for the automated diagnosis of celiac disease (CD) and colonic polyps (CP) based on applying Fisher encoding to the activations of convolutional layers. In our experiments, three different convolutional neural network (CNN) architectures (AlexNet, VGG-f, and VGG-16) are applied to three endoscopic image databases (one CD database and two CP databases). For each network architecture, we perform experiments using a version of the net that is pretrained on the ImageNet database, as well as a version of the net that is trained on a specific endoscopic image database. The Fisher representations of convolutional layer activations are classified using support vector machines. Additionally, experiments are performed by concatenating the Fisher representations of several layers to combine the information of these layers. We show that our proposed CNN-Fisher approach clearly outperforms other CNN- and non-CNN-based approaches and that our approach requires no training on the target dataset, which results in substantial time savings compared with other CNN-based approaches.

16.
Front Neurol ; 9: 955, 2018.
Article in English | MEDLINE | ID: mdl-30510537

ABSTRACT

Brain computer interfaces (BCIs) are thought to revolutionize rehabilitation after spinal cord injury (SCI), e.g., by controlling neuroprostheses, exoskeletons, functional electrical stimulation, or a combination of these components. However, most BCI research was performed in healthy volunteers, and because of neuroplasticity it is unknown whether these results can be translated to patients with spinal cord injury. We sought to examine whether high-density EEG (HD-EEG) could improve the performance of motor-imagery classification in patients with SCI. We recorded HD-EEG with 256 channels in 22 healthy controls and 7 patients with 14 recordings (4 patients had more than one recording) in an event-related design. Participants were instructed acoustically to either imagine, execute, or observe foot and hand movements, or to rest. We calculated the Fast Fourier Transform (FFT) and the full frequency directed transfer function (ffDTF) for each condition and classified conditions pairwise with support vector machines using only 2 channels over the sensorimotor area, the full 10-20 montage, a high-density montage of the sensorimotor cortex, and the full HD montage. Classification accuracies were comparable between patients and controls, with an advantage for controls in classifications that involved the foot movement condition. Full montages led to better results for both groups (p < 0.001), and classification accuracies were higher for FFT than for ffDTF (p < 0.001), for which the feature vector might be too long. However, the full 10-20 montage was comparable to the high-density configurations. Motor-imagery driven control of neuroprostheses or BCI systems may perform as well in patients as in healthy volunteers with an adequate technical configuration. We suggest the use of a whole-head montage and analysis of a broad frequency range.
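Spectral features of the kind described above can be sketched as band power per channel from a plain DFT. The sampling rate, band edges, and the synthetic 10 Hz signal below are illustrative assumptions, not the study's actual setup.

```python
# Band power of a real signal via a naive DFT (O(n^2), fine for a sketch).
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (sampled at fs Hz) in the band [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):          # skip DC, positive frequencies only
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128                                          # sampling rate (assumed)
alpha = [math.sin(2 * math.pi * 10 * t / fs)      # one second of a pure
         for t in range(fs)]                      # 10 Hz oscillation
# Energy concentrates in the 8-12 Hz band, not in 20-30 Hz.
print(band_power(alpha, fs, 8, 12) > band_power(alpha, fs, 20, 30))  # True
```

In a real pipeline, one such power value per channel and band forms the feature vector handed to the support vector machine.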

17.
Comput Biol Med ; 102: 251-259, 2018 11 01.
Article in English | MEDLINE | ID: mdl-29773226

ABSTRACT

BACKGROUND: In medical image data sets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be trained properly, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can be effectively applied to improve predictive performance. Further, we investigate whether the features extracted from the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. METHOD: We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The specifically designed setup for these experiments enables a fair comparison between the four training strategies. RESULTS: Combining high-definition with high-magnification training data and chromoscopic with non-chromoscopic training data partly improved the results. The use of domain adaptation has only a small effect on the results compared to just using non-adapted training data. CONCLUSION: Merging datasets from different imaging modalities turned out to be partially beneficial for combining high-definition endoscopic data with high-magnification endoscopic data and for combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features to combine images of these two modalities for classifier training.


Subjects
Colonic Polyps/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Pattern Recognition, Automated; Algorithms; Colonoscopy; Colorectal Neoplasms/diagnostic imaging; Diagnostic Imaging/methods; Endoscopy; Humans; Image Enhancement/methods
18.
Front Hum Neurosci ; 11: 441, 2017.
Article in English | MEDLINE | ID: mdl-28912704

ABSTRACT

Measures of interaction (connectivity) of the EEG are at the forefront of current neuroscientific research. Unfortunately, test-retest reliability can be very low, depending on the measure and its estimation, the EEG-frequency of interest, the length of the signal, and the population under investigation. In addition, artifacts can hamper the continuity of the EEG signal, and in some clinical situations it is impractical to exclude artifacts. We aimed to examine factors that moderate test-retest reliability of measures of interaction. The study involved 40 patients with a range of neurological diseases and memory impairments (age median: 60; range 21-76; 40% female; 22 mild cognitive impairment, 5 subjective cognitive complaints, 13 temporal lobe epilepsy), and 20 healthy controls (age median: 61.5; range 23-74; 70% female). We calculated 14 measures of interaction based on the multivariate autoregressive model from two EEG-recordings separated by 2 weeks. We characterized test-retest reliability by correlating the measures between the two EEG-recordings for variations of data length, data discontinuity, artifact exclusion, model order, and frequency over all combinations of channels and all frequencies, individually for each subject, yielding a correlation coefficient for each participant. Excluding artifacts had strong effects on reliability of some measures, such as classical, real valued coherence (~0.1 before, ~0.9 after artifact exclusion). Full frequency directed transfer function was highly reliable and robust against artifacts. Variation of data length decreased reliability in relation to poor adjustment of model order and signal length. Variation of discontinuity had no effect, but reliabilities were different between model orders, frequency ranges, and patient groups depending on the measure. Pathology did not interact with variation of signal length or discontinuity. 
Our results emphasize the importance of documenting reliability, which may vary considerably between measures of interaction. We recommend careful selection of measures of interaction in accordance with the properties of the data. When only short data segments are available and when the signal length varies strongly across subjects after exclusion of artifacts, reliability becomes an issue. Finally, measures which show high reliability irrespective of the presence of artifacts could be extremely useful in clinical situations when exclusion of artifacts is impractical.

19.
Front Aging Neurosci ; 9: 290, 2017.
Article in English | MEDLINE | ID: mdl-28936173

ABSTRACT

Single photon emission computed tomography (SPECT) and electroencephalography (EEG) have become established tools in the routine diagnostics of dementia. We aimed to increase diagnostic power by combining quantitative markers from SPECT and EEG for the differential diagnosis of disorders with amnestic symptoms. We hypothesized that the combination of SPECT with measures of interaction (connectivity) in the EEG yields higher diagnostic accuracy than the single modalities. We examined 39 patients with Alzheimer's dementia (AD), 69 patients with depressive cognitive impairment (DCI), 71 patients with amnestic mild cognitive impairment (aMCI), and 41 patients with amnestic subjective cognitive complaints (aSCC). We calculated 14 measures of interaction from a standard clinical EEG-recording and derived graph-theoretic network measures. From regional brain perfusion measured by 99mTc-hexamethylpropyleneamine oxime (HMPAO) SPECT in 46 regions, we calculated relative cerebral perfusion in these patients. Patient groups were classified pairwise with a linear support vector machine. Classification was conducted separately for each biomarker, and then again for each EEG biomarker combined with SPECT. Combination of SPECT with EEG biomarkers outperformed single use of SPECT or EEG when classifying aSCC vs. AD (90%), aMCI vs. AD (70%), and AD vs. DCI (100%), while a selection of EEG measures performed best when classifying aSCC vs. aMCI (82%) and aMCI vs. DCI (90%). Only the contrast between aSCC and DCI did not result in above-chance classification accuracy (60%). In general, accuracies were higher when measures of interaction (i.e., connectivity measures) were applied directly than when graph-theoretical measures were derived. We suggest that quantitative analysis of EEG and machine-learning techniques can support differentiating AD, aMCI, aSCC, and DCI, especially when combined with imaging methods such as SPECT. 
Quantitative analysis of EEG connectivity could become an integral part for early differential diagnosis of cognitive impairment.
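The pairwise classification scheme described here, a linear support vector machine applied to concatenated EEG and SPECT features, can be sketched as below. The data are simulated; only the feature counts (14 EEG interaction measures, 46 perfusion regions) mirror the abstract, and the use of scikit-learn with 5-fold cross-validation is an assumption about tooling, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_group = 40

def simulate_group(shift):
    """Simulate per-subject feature vectors for one diagnostic group:
    14 EEG interaction measures plus 46 regional perfusion values."""
    eeg = rng.standard_normal((n_per_group, 14)) + shift
    spect = rng.standard_normal((n_per_group, 46)) + shift
    return np.hstack([eeg, spect])      # combined EEG + SPECT features

# Two simulated groups with a mean offset standing in for pathology
X = np.vstack([simulate_group(0.0), simulate_group(0.8)])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Pairwise classification with a linear SVM, accuracy by cross-validation
clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"combined EEG+SPECT accuracy: {acc:.2f}")
```

In the study, one such binary classifier was trained per pair of diagnostic groups (aSCC vs. AD, aMCI vs. AD, and so on), each time on a single biomarker and then on the EEG biomarker concatenated with SPECT.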

20.
Front Hum Neurosci ; 11: 350, 2017.
Article in English | MEDLINE | ID: mdl-28725190

ABSTRACT

Alterations of interaction (connectivity) of the EEG reflect pathological processes in patients with neurologic disorders. Nevertheless, it is questionable whether these patterns are reliable over time in different measures of interaction and whether the reliability of these measures is the same across different patient populations. To address this topic, we examined 22 patients with mild cognitive impairment, five patients with subjective cognitive complaints, six patients with right-lateralized temporal lobe epilepsy, seven patients with left-lateralized temporal lobe epilepsy, and 20 healthy controls. We calculated 14 measures of interaction from two EEG-recordings separated by 2 weeks. To characterize test-retest reliability, we correlated these measures for each group and compared the correlations between measures and between groups. We found that both the measures of interaction and the groups differed from each other in terms of reliability. The strongest correlation coefficients were found for spectrum, coherence, and full frequency directed transfer function (average rho > 0.9). In the delta (2-4 Hz) range, reliability was lower for mild cognitive impairment compared to healthy controls and left-lateralized temporal lobe epilepsy. In the beta (13-30 Hz), gamma (31-80 Hz), and high gamma (81-125 Hz) frequency ranges we found decreased reliability in subjective cognitive complaints compared to mild cognitive impairment. In the gamma and high gamma range we found increased reliability in left-lateralized temporal lobe epilepsy patients compared to healthy controls. Our results emphasize the importance of documenting the reliability of measures of interaction, which may vary considerably between measures, but also between patient populations. We suggest that studies claiming clinical usefulness of measures of interaction should provide information on the reliability of the results. 
In addition, differences between patient groups in the reliability of interactions in the EEG indicate the potential of reliability to serve as a new biomarker for pathological memory decline as well as for epilepsy. While information flow in the brain is generally variable, high reliability, and thus low variability, may reflect abnormal firing patterns.
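The band-specific comparisons reported above (delta through high gamma) require restricting the session-to-session correlation to the frequency bins of one band before comparing groups. A minimal sketch with simulated data follows; the shapes, 1-Hz frequency grid, and function name are illustrative assumptions.

```python
import numpy as np

def band_reliability(m1, m2, freqs, lo, hi):
    """Session-to-session correlation of a connectivity measure,
    restricted to frequency bins within [lo, hi] Hz.

    m1, m2: arrays of shape (channels, channels, frequencies);
    freqs: 1-D array of bin frequencies in Hz.
    """
    sel = (freqs >= lo) & (freqs <= hi)
    n_ch = m1.shape[0]
    mask = ~np.eye(n_ch, dtype=bool)        # off-diagonal channel pairs only
    v1 = m1[mask][:, sel].ravel()
    v2 = m2[mask][:, sel].ravel()
    return np.corrcoef(v1, v2)[0, 1]

# Toy data: 19 channels, 1-125 Hz in 1-Hz bins, session 2 = session 1 + noise
rng = np.random.default_rng(2)
freqs = np.arange(1, 126)
s1 = rng.random((19, 19, 125))
s2 = s1 + 0.05 * rng.standard_normal(s1.shape)
print(f"delta (2-4 Hz):  {band_reliability(s1, s2, freqs, 2, 4):.2f}")
print(f"gamma (31-80 Hz): {band_reliability(s1, s2, freqs, 31, 80):.2f}")
```

Computing such band-limited coefficients per subject and then comparing their distributions between groups (e.g. mild cognitive impairment vs. healthy controls) corresponds to the group contrasts reported in this abstract.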
