Results 1 - 19 of 19
1.
Biomed Mater Eng ; 35(3): 249-264, 2024.
Article in English | MEDLINE | ID: mdl-38189746

ABSTRACT

BACKGROUND: Stem cells have significantly aided the scientific revolution in the treatment of many illnesses. This paper presents optimal control of a mathematical model of chemotherapy and stem cell therapy for cancer treatment. OBJECTIVE: To develop effective hybrid techniques that combine optimal control theory (OCT) with evolutionary and multi-objective swarm algorithms. The developed techniques aim to reduce the number of cancerous cells while utilizing the minimum necessary chemotherapy medication and minimizing toxicity to protect patients' health. METHODS: Two hybrid techniques are proposed. Both combine OCT with evolutionary and multi-objective swarm algorithms, namely MOEA/D, MOPSO, SPEA II, and PESA II. This study evaluates the performance of the two hybrid techniques in terms of reducing cancer cells and drug concentrations, as well as computational time. RESULTS: In both techniques, MOEA/D emerges as the most effective algorithm due to its superior capability in minimizing tumour size and cancer drug concentration. CONCLUSION: This study highlights the importance of integrating OCT and evolutionary algorithms as a robust approach for optimizing cancer chemotherapy treatment.


Subjects
Algorithms , Antineoplastic Agents , Neoplasms , Humans , Neoplasms/therapy , Neoplasms/drug therapy , Antineoplastic Agents/therapeutic use , Computer Simulation , Combined Modality Therapy , Stem Cell Transplantation/methods , Models, Biological , Artificial Intelligence
2.
PeerJ Comput Sci ; 9: e1325, 2023.
Article in English | MEDLINE | ID: mdl-37346512

ABSTRACT

Oil palm is a key agricultural resource in Malaysia. However, palm diseases, most prominently basal stem rot (BSR), cause at least RM 255 million of annual economic loss. Basal stem rot is caused by the fungus Ganoderma boninense. An infected tree shows few symptoms during the early stage of infection, yet it can suffer an 80% lifetime yield loss, and the tree may die within 2 years. Early detection of basal stem rot is crucial so that disease control efforts can be undertaken. Laboratory BSR detection methods are effective, but they raise accuracy, biosafety, and cost concerns. This review covers scientific articles related to oil palm tree disease, basal stem rot, Ganoderma boninense, remote sensors, and deep learning that have been listed in the Web of Science since 2012. About 110 scientific articles were found that are related to these index terms, of which 60 research articles were related to the objective of this research and were thus included in this review. The review found that the potential use of deep learning methods has rarely been explored. Some research showed unsatisfactory results due to dataset limitations. However, based on studies of other plant diseases, deep learning combined with data augmentation techniques shows great potential, achieving remarkable detection accuracy. Therefore, the feasibility of analyzing oil palm remote sensor data using deep learning models together with data augmentation techniques should be studied. On a commercial scale, deep learning used together with remote sensors and unmanned aerial vehicle technologies has shown great potential in the detection of basal stem rot disease.
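The data augmentation highlighted above can be illustrated with a handful of standard geometric transforms that multiply a small labelled dataset (a generic sketch, not code from any reviewed study):

```python
import numpy as np

def augment(image):
    """Yield simple geometric variants of an image: identity, flips,
    and rotations (illustrative augmentations only)."""
    yield image
    yield np.fliplr(image)       # horizontal flip
    yield np.flipud(image)       # vertical flip
    yield np.rot90(image)        # rotate 90 degrees
    yield np.rot90(image, 2)     # rotate 180 degrees

# One labelled sample becomes five training samples
variants = list(augment(np.arange(6).reshape(2, 3)))
```

In practice, label-preserving augmentations like these are applied on the fly during training rather than materialized up front.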

3.
Wirel Pers Commun ; 129(3): 2213-2237, 2023.
Article in English | MEDLINE | ID: mdl-36987507

ABSTRACT

Social media platforms such as Twitter and Facebook have become popular channels for people to record and express their feelings, opinions, and feedback over the last decades. With proper extraction techniques such as sentiment analysis, this information is useful in many areas, including product marketing, behavior analysis, and pandemic management. Sentiment analysis is a technique to analyze people's thoughts, feelings, and emotions, and to categorize them as positive, negative, or neutral. There are many ways for someone to express their feelings and emotions. These sentiments are sometimes accompanied by sarcasm, especially when conveying intense emotion. Sarcasm is defined as a positive sentence with an underlying negative intention. To date, most approaches have treated sentiment classification and sarcasm detection as two distinct, standalone text categorization problems. In recent years, research using deep learning algorithms has significantly improved the performance of these standalone classifiers. A major issue faced by these approaches is that they cannot correctly classify sarcastic sentences as negative. With this in mind, we claim that knowing how to spot sarcasm helps sentiment classification, and vice versa. Our work has shown that these two tasks are correlated. This paper proposes a multi-task learning-based framework utilizing a deep neural network to model this correlation and improve the overall performance of sentiment analysis. The proposed method outperforms the existing methods by a margin of 3%, with an F1-score of 94%.
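For reference, the F1-score reported above combines precision and recall as their harmonic mean; a minimal sketch (the confusion counts are made up for illustration):

```python
def f1_score(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct
    precision = tp / (tp + fp)
    # Recall: fraction of actual positives that are recovered
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Example: 94 true positives, 6 false positives, 6 false negatives
f1 = f1_score(94, 6, 6)  # 0.94
```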

4.
Front Comput Neurosci ; 17: 1038636, 2023.
Article in English | MEDLINE | ID: mdl-36814932

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive impairment in elderly people. The irreversible and devastating cognitive decline places large burdens on patients and society. So far, there is no effective treatment that can cure AD, but the progression of early-stage AD can be slowed, so early and accurate detection is critical for treatment. In recent years, deep-learning-based approaches have achieved great success in Alzheimer's disease diagnosis. The main objective of this paper is to review popular machine learning methods used for the classification and prediction of AD using magnetic resonance imaging (MRI). The methods reviewed include the support vector machine (SVM), random forest (RF), convolutional neural network (CNN), autoencoder, deep learning, and the transformer. This paper also reviews pervasively used feature extractors and the different input forms of convolutional neural networks. Finally, the review discusses challenges such as class imbalance and data leakage, along with trade-offs and suggestions concerning pre-processing techniques, deep learning, conventional machine learning methods, new techniques, and input type selection.

5.
Comput Intell Neurosci ; 2023: 4208231, 2023.
Article in English | MEDLINE | ID: mdl-36756163

ABSTRACT

Cardiac diseases are among the key causes of death around the globe, and the number of heart patients increased considerably during the pandemic. It is therefore crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, and its boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also covers the network architectures, software, and hardware used for training, along with publicly available cardiac image datasets and details of self-prepared datasets. A summary of the evaluation metrics and the results reported by different researchers is also presented. Finally, all this information is summarized to help readers understand the motivation and methodology of various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.


Subjects
Deep Learning , Heart Diseases , Humans , Heart Ventricles/diagnostic imaging , Heart , Neural Networks, Computer , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods
6.
Life (Basel) ; 13(1)2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36676073

ABSTRACT

The segmentation of the left ventricle (LV) is one of the fundamental procedures that must be performed to obtain quantitative measures of the heart, such as its volume, area, and ejection fraction. In clinical practice, the delineation of the LV is still often conducted semi-automatically, leaving it open to operator subjectivity. Automatic LV segmentation from echocardiography images is a challenging task due to poorly defined boundaries and operator dependency. Recent research has demonstrated that deep learning can perform the segmentation automatically; however, the well-known state-of-the-art segmentation models still fall short in terms of accuracy and speed. This study aims to develop a single-stage lightweight segmentation model that precisely and rapidly segments the LV from 2D echocardiography images. A backbone network is used to acquire both low-level and high-level features, and two parallel blocks, known as the spatial feature unit and the channel feature unit, are employed to enhance and refine these features. The refined features are merged by an integrated unit to segment the LV. The performance of the model and the time taken to segment the LV are compared to established segmentation models: DeepLab, FCN, and Mask RCNN. The model achieved the highest values of the dice similarity index (0.9446), intersection over union (0.8445), and accuracy (0.9742). The evaluation metrics and processing time demonstrate that the proposed model not only provides superior quantitative results but also trains and segments the LV in less time, indicating improved performance over competing segmentation models.
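The dice similarity index and intersection over union reported above can be computed directly from binary masks, and the two are related by dice = 2·IoU/(1+IoU); a generic sketch, not the authors' implementation:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice similarity coefficient and intersection-over-union
    for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy 2x3 masks: two pixels overlap, four pixels in the union
pred = np.array([[1, 1, 0], [1, 0, 0]])
target = np.array([[1, 1, 0], [0, 1, 0]])
dice, iou = dice_and_iou(pred, target)  # dice = 2/3, iou = 1/2
```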

7.
Front Public Health ; 10: 981019, 2022.
Article in English | MEDLINE | ID: mdl-36091529

ABSTRACT

One of the primary factors contributing to death across all age groups is cardiovascular disease. In the analysis of heart function, analyzing the left ventricle (LV) from 2D echocardiographic images is a common medical procedure for heart patients. Consistent and accurate segmentation of the LV has a significant impact on understanding the normal anatomy of the heart, as well as on the ability to distinguish aberrant or diseased heart structure. LV segmentation is therefore an important and critical task in medical practice, and automated LV segmentation is a pressing need. Deep learning models have been utilized in research for automatic LV segmentation. In this work, three cutting-edge convolutional neural network architectures (SegNet, Fully Convolutional Network, and Mask R-CNN) are designed and implemented to segment the LV. In addition, an echocardiography image dataset is generated, and the amount of training data is gradually increased to measure segmentation performance using evaluation metrics. Pixel accuracy, precision, recall, specificity, the Jaccard index, and the dice similarity coefficient are applied to evaluate the three models. The Mask R-CNN model outperformed the other two models on these evaluation metrics and is therefore used in this study to examine the effect of training data. For 4,000 images, the network achieved a 92.21% DSC value, 85.55% Jaccard index, 98.76% mean accuracy, 96.81% recall, 93.15% precision, and 96.58% specificity. Performance stabilizes when the model is trained using more than 4,000 training images.


Subjects
Deep Learning , Heart Ventricles , Heart Ventricles/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
8.
Comput Intell Neurosci ; 2022: 2801663, 2022.
Article in English | MEDLINE | ID: mdl-35634043

ABSTRACT

Intraoperative neuromonitoring (IONM) has been used to help monitor the integrity of the nervous system during spine surgery. Transcranial motor-evoked potential (TcMEP) monitoring has lately been used in lower lumbar surgery to prevent nerve root injuries and to predict positive functional outcomes for patients. A number of studies have shown that improvement in the TcMEP signal is significantly associated with positive functional outcomes. In this paper, we explore the possibility of using a machine learning approach on TcMEP signals to predict positive functional outcomes. With 55 patients who underwent various types of lumbar surgery, the data were divided into 70:30 and 80:20 ratios for training and testing of the machine learning models. The highest sensitivity and specificity were achieved by Fine KNN at the 80:20 ratio, with 87.5% and 33.33%, respectively. We also tested the existing improvement criteria presented in the literature: the 50% TcMEP improvement criterion achieved 83.33% sensitivity and 75% specificity. However, the rigidity of this threshold method proved unreliable in this study, as the sensitivity and specificity dropped when different datasets were used. The proposed machine learning method has more room to advance with a larger dataset and a wider choice of signal features.
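The sensitivity/specificity figures and the fixed 50% improvement criterion above can be sketched as follows (illustrative only; the function names and toy data are not from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Confusion-count sensitivity and specificity for binary outcomes
    (1 = positive functional outcome, 0 = negative)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def improvement_criterion(baseline_amp, final_amp, threshold=0.5):
    """Fixed-threshold rule from the literature: predict a positive
    outcome when TcMEP amplitude improves by more than `threshold`."""
    return (final_amp - baseline_amp) / baseline_amp > threshold
```

A machine learning classifier replaces the single hard threshold with a decision boundary learned from multiple signal features, which is why it can generalize better across datasets.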


Subjects
Evoked Potentials, Motor , Neurosurgical Procedures , Evoked Potentials, Motor/physiology , Humans , Machine Learning , Neurosurgical Procedures/methods , Sensitivity and Specificity
9.
Comput Intell Neurosci ; 2022: 9167707, 2022.
Article in English | MEDLINE | ID: mdl-35498184

ABSTRACT

In late December 2019, a novel coronavirus was discovered in Wuhan, China. In March 2020, the WHO announced that this epidemic had become a global pandemic. The novel coronavirus may be mild for most people; however, some people may experience severe illness that results in hospitalization or even death. COVID-19 classification remains challenging due to its ambiguity and similarity to other known respiratory diseases, such as SARS, MERS, and other viral pneumonias. The typical symptoms of COVID-19 are fever, cough, chills, shortness of breath, loss of smell and taste, headache, sore throat, chest pain, confusion, and diarrhoea. This research paper applies transfer learning with a deterministic algorithm in all binary classification models and evaluates the performance of various CNN architectures. A dataset of 746 CT images of COVID-19 and non-COVID-19 cases was divided into training, validation, and testing sets. Various augmentation techniques were applied to increase the amount of data, except for the testing images. Pretrained CNNs were then fine-tuned on the images for binary classification. ResNeXt101 and ResNet152 have the best F1 scores of 0.978 and 0.938, whereas GoogleNet has an F1 score of 0.762. ResNeXt101 and ResNet152 have accuracies of 97.81% and 93.80%. ResNeXt101, DenseNet201, and ResNet152 have 95.71%, 93.81%, and 90% sensitivity, whereas ResNeXt101, ResNet101, and ResNet152 have 100%, 99.58%, and 98.33% specificity, respectively.


Subjects
COVID-19 , COVID-19/diagnostic imaging , Humans , Neural Networks, Computer , Pandemics , SARS-CoV-2 , Tomography, X-Ray Computed
10.
Sensors (Basel) ; 22(7)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35408308

ABSTRACT

The Internet of Things (IoT) technology has revolutionized the healthcare industry by enabling a new paradigm for healthcare delivery, known as the Internet of Medical Things (IoMT). IoMT devices are typically connected via a wide range of wireless communication technologies, such as Bluetooth, radio-frequency identification (RFID), ZigBee, Wi-Fi, and cellular networks. The ZigBee protocol is considered an ideal protocol for IoMT communication due to its low cost, low power usage, easy implementation, and appropriate level of security. However, maintaining ZigBee's high reliability is a major challenge due to multi-path fading and interference from coexisting wireless networks. This has increased the demand for more efficient channel coding schemes that can achieve more reliable transmission of vital patient data for ZigBee-based IoMT communications. To meet this demand, a novel coding scheme called inter-multilevel super-orthogonal space-time coding (IM-SOSTC) can be implemented by combining multilevel coding with set partitioning of super-orthogonal space-time block codes based on the coding gain distance (CGD) criterion. The proposed IM-SOSTC introduces inter-level dependency between adjacent multilevel coded blocks to facilitate high spectral efficiency, which was previously compromised by the high coding gain of the multilevel outer code. In this paper, the performance of IM-SOSTC is compared to other related schemes via computer simulation over a quasi-static Rayleigh fading channel. The simulation results show that IM-SOSTC outperforms the other related coding schemes and provides an optimal trade-off between coding gain and spectral efficiency whilst guaranteeing full diversity and low complexity.
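IM-SOSTC builds on orthogonal space-time block codes; the simplest member of that family, the 2x2 Alamouti code, illustrates the encode-and-combine idea (a baseline sketch, not the proposed scheme):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 Alamouti block: rows are time slots, columns are antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining that recovers (|h1|^2 + |h2|^2) * s_i
    from the two received slots, given channel gains h1, h2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Noiseless flat-fading example with arbitrary channel gains
h1, h2 = 1 + 0.5j, 0.3 - 0.2j
s1, s2 = 1 + 1j, -1 + 0j
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
```

The combining step cancels the cross terms exactly because the code matrix is orthogonal, giving full transmit diversity with only linear processing; super-orthogonal STBCs extend this construction with set partitioning to add coding gain.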


Subjects
Internet of Things , Communication , Computer Simulation , Humans , Reproducibility of Results , Wireless Technology
11.
Sensors (Basel) ; 22(2)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35062601

ABSTRACT

Image noise is a random variation of pixel values. A good estimate of image noise parameters is crucial in image noise modeling, image denoising, and image quality assessment. To the best of our knowledge, there is no single estimator that can predict all noise parameters for multiple noise types. The first contribution of our research is a noise data feature extractor that can effectively extract noise information from an image pair. The second contribution goes beyond existing noise parameter estimation algorithms, which can only predict one type of noise. Our proposed method, DE-G, can accurately estimate additive noise, multiplicative noise, and impulsive noise from single-source images. We also show the capability of the proposed method in estimating multiple corruptions.
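The three noise families the estimator targets (additive, multiplicative, and impulsive) can be simulated with standard models, as in this generic sketch (not the DE-G code):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(64, 64))  # synthetic clean image

# Additive noise: pixel plus a Gaussian perturbation
additive = img + rng.normal(0.0, 0.05, img.shape)

# Multiplicative (speckle-like) noise: pixel scaled by (1 + noise)
multiplicative = img * (1 + rng.normal(0.0, 0.1, img.shape))

# Impulsive (salt-and-pepper) noise: random pixels forced to 0 or 1
impulsive = img.copy()
mask = rng.uniform(size=img.shape) < 0.05   # 5% corruption probability
impulsive[mask] = rng.choice([0.0, 1.0], size=mask.sum())
```

Estimating the governing parameter of each model (the Gaussian standard deviations and the corruption probability) from a clean/noisy image pair is the task the paper's feature extractor addresses.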


Subjects
Algorithms , Signal-To-Noise Ratio
12.
Behav Neurol ; 2021: 2684855, 2021.
Article in English | MEDLINE | ID: mdl-34777631

ABSTRACT

Spine surgery imposes risk to the spine's surrounding anatomical and physiological structures, especially the spinal cord and the nerve roots. Intraoperative neuromonitoring (IONM) is a technology developed to monitor the integrity of the spinal cord and the nerve roots during surgery. Transcranial motor evoked potential (TcMEP) monitoring, one of the IONM modalities, is adopted to monitor the integrity of the motor pathway of the spinal cord and the motor nerve roots. Recent research suggests that IONM can serve as a prognostic tool for the patient's functional outcome. This paper summarizes the research in which IONM has been adopted as a prognostic tool and highlights the problems associated with using signal parameters as improvement criteria in previous studies. Lastly, we review the challenges TcMEP faces as a prognostic tool, focusing on the factors that can interfere with the generation of a stable TcMEP response. The final section discusses recommendations for IONM technology to achieve an objective prognostic tool.


Subjects
Evoked Potentials, Motor , Intraoperative Neurophysiological Monitoring , Humans , Neurosurgical Procedures , Spinal Cord , Spine/surgery
13.
Arab J Sci Eng ; : 1-18, 2021 Aug 16.
Article in English | MEDLINE | ID: mdl-34422543

ABSTRACT

Hospital readmission shortly after discharge threatens the quality of patient care and leads to increased medical care costs. In the United States, hospitals with high readmission rates are subject to federal financial penalties. This concern creates incentives for healthcare facilities to reduce their readmission rates by identifying patients who are at high risk of readmission. Conventional practice involves the use of rule-based assessment scores and traditional statistical methods, such as logistic regression, to develop risk prediction models. Recent advancements in machine learning, driven by improved computing power and sophisticated algorithms, have the potential to produce highly accurate predictions; however, the value of such models could be overrated. Meanwhile, flexible models that leverage simple algorithms offer great transparency in terms of feature interpretation, which is beneficial in clinical settings. This work presents an overview of current trends in readmission risk prediction models, describes the techniques adopted by researchers in recent years, and investigates whether complex models outperform simple ones in readmission risk stratification.
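A transparent baseline of the kind contrasted with complex models above is logistic regression, whose coefficients map directly to feature effects; a minimal risk-score sketch with entirely made-up coefficients and features:

```python
import math

def readmission_risk(features, weights, intercept):
    """Logistic-regression risk: probability = sigmoid(w . x + b).
    Each weight is directly interpretable as a per-unit log-odds change."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical features: age, number of prior admissions, length of
# stay in days (coefficients are illustrative, not from any model)
risk = readmission_risk([65, 2, 7], [0.02, 0.4, 0.05], -3.0)
```

The interpretability argument is that a clinician can read off, for example, that each prior admission multiplies the odds of readmission by exp(0.4), which opaque ensembles and deep networks cannot offer directly.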

14.
EURASIP J Adv Signal Process ; 2021(1): 50, 2021.
Article in English | MEDLINE | ID: mdl-34335736

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a rapidly spreading viral infection that has affected millions all over the world. With its rapid spread and increasing case numbers, it is becoming overwhelming for healthcare workers to rapidly diagnose the condition and contain its spread. Hence it has become necessary to automate the diagnostic procedure, which will improve work efficiency and keep healthcare workers safe from exposure to the virus. Medical image analysis is one of the rising research areas that can tackle this issue with high accuracy. This paper conducts a comparative study of recent deep learning models (VGG16, VGG19, DenseNet121, Inception-ResNet-V2, InceptionV3, ResNet50, and Xception) for the detection and classification of coronavirus pneumonia among pneumonia cases. The study uses 7165 chest X-ray images from COVID-19 (1536) and pneumonia (5629) patients. Confusion matrices and performance metrics were used to analyze each model. The results show that DenseNet121 performed best (99.48% accuracy) compared with the other models in this study.

15.
Sensors (Basel) ; 21(14)2021 Jul 15.
Article in English | MEDLINE | ID: mdl-34300577

ABSTRACT

Distracted driving is a prime factor in motor vehicle accidents. Current studies on distraction detection focus on improving detection performance through various techniques, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs); however, research on detecting distracted drivers through pose estimation is scarce. This work introduces an ensemble of ResNets, named the Optimally-weighted Image-Pose Approach (OWIPA), to classify distraction from original and pose estimation images. The pose estimation images are generated from HRNet and ResNet. We use ResNet101 to classify the original images and ResNet50 to classify the pose estimation images. An optimum weight is determined through a grid search, and the predictions from both models are weighted by this parameter. The experimental results show that our proposed approach achieves 94.28% accuracy on the AUC Distracted Driver Dataset.
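The optimum-weight step described above can be sketched as a grid search over a convex combination of two models' class probabilities (toy data and function names are illustrative, not OWIPA's code):

```python
import numpy as np

def grid_search_weight(probs_a, probs_b, labels, step=0.05):
    """Pick the weight w that maximizes accuracy of w*A + (1-w)*B."""
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + step, step):
        blended = w * probs_a + (1 - w) * probs_b
        acc = float(np.mean(blended.argmax(axis=1) == labels))
        if acc > best_acc:   # keep the first weight reaching the max
            best_w, best_acc = float(w), acc
    return best_w, best_acc

# Toy softmax outputs from two hypothetical classifiers (2 classes)
probs_a = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
probs_b = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
labels = np.array([0, 1, 1])
best_w, best_acc = grid_search_weight(probs_a, probs_b, labels)
```

In practice the weight would be tuned on a held-out validation split rather than the test set, so the blend does not overfit the reported accuracy.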


Subjects
Distracted Driving , Neural Networks, Computer , Accidents, Traffic
16.
Comput Math Methods Med ; 2021: 5528144, 2021.
Article in English | MEDLINE | ID: mdl-34194535

ABSTRACT

Pneumonia is an infamous, life-threatening bacterial or viral lung infection. The latest viral infection endangering the lives of many people worldwide is the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes COVID-19. This paper is aimed at detecting and differentiating viral pneumonia and COVID-19 using digital X-ray images. Current practice involves tedious conventional processes that rely solely on the radiologist's or medical consultant's technical expertise, which is limited, time-consuming, inefficient, and outdated, and the process is prone to human error and misdiagnosis. Developments in deep learning and technology allow medical scientists and researchers to explore various neural networks and algorithms to develop applications, tools, and instruments that can further support medical radiologists. This paper presents an overview of deep learning techniques applied to chest radiography for COVID-19 and pneumonia cases.


Subjects
COVID-19 Testing/methods , COVID-19/diagnostic imaging , Deep Learning , SARS-CoV-2 , Algorithms , COVID-19/diagnosis , COVID-19 Testing/statistics & numerical data , Computational Biology , Diagnosis, Differential , Humans , Mathematical Concepts , Neural Networks, Computer , Pneumonia, Viral/diagnosis , Pneumonia, Viral/diagnostic imaging , Radiography, Thoracic/statistics & numerical data , Tomography, X-Ray Computed/statistics & numerical data
17.
Curr Med Imaging ; 16(6): 739-751, 2020.
Article in English | MEDLINE | ID: mdl-32723246

ABSTRACT

BACKGROUND: Ultrasound (US) imaging can be a convenient and reliable substitute for magnetic resonance imaging in the investigation or screening of articular cartilage injury. However, US images suffer from two main impediments: a low contrast ratio and the presence of speckle noise. AIMS: A variation of anisotropic diffusion is proposed that can reduce speckle noise without compromising the image quality of edges and other important details. METHODS: This technique adopts four gradient thresholds instead of one. A new diffusivity function that preserves the edges of the resultant image is also proposed. To terminate the iterative procedure automatically, the Mean Absolute Error was implemented as the stopping criterion. RESULTS: Numerical simulation results unanimously indicate that the proposed method outperforms conventional speckle reduction techniques. Nevertheless, this preliminary study was conducted on a small number of asymptomatic subjects. CONCLUSION: Future work must investigate the feasibility of this method in a large cohort, as well as its clinical validity, by testing subjects with symptomatic cartilage injury.
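The proposed method builds on anisotropic diffusion; the classic single-threshold Perona-Malik scheme it extends can be sketched as follows (baseline only, assuming the standard exponential diffusivity; the paper's four-threshold variant and new diffusivity are not reproduced here):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Baseline Perona-Malik diffusion with a single gradient
    threshold kappa and explicit time step lam (stable for lam <= 0.25)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Diffusivity decays where gradients are large, preserving edges
        g = lambda d: np.exp(-((d / kappa) ** 2))
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Smooth a noisy constant patch: variance should drop, edges would survive
rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.05, size=(32, 32))
smoothed = anisotropic_diffusion(noisy)
```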


Subjects
Cartilage, Articular/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Knee Joint/diagnostic imaging , Ultrasonography/methods , Anisotropy , Humans , Osteoarthritis, Knee/diagnostic imaging , Signal-To-Noise Ratio
18.
J Environ Manage ; 236: 245-253, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30735943

ABSTRACT

Microwave-steam activation (MSA), an innovative pyrolysis approach combining microwave heating and steam activation, was investigated for its potential to produce high-grade activated carbon (AC) from waste palm shell (WPS) for methylene blue removal. MSA was performed via pyrolytic carbonization of WPS to produce biochar as the first step, followed by steam activation of the biochar using microwave heating to form AC. Optimum yield and methylene blue adsorption efficiency were obtained using response surface methodology involving several key process parameters. The resulting AC was characterized for its porous characteristics, surface morphology, proximate analysis, and elemental composition. MSA provided a high activation temperature, above 500 °C, with a short process time of 15 min and a rapid heating rate (≤150 °C/min). The optimization results showed that one gram of AC produced from steam activation under 10 min of microwave heating at 550 °C can remove up to 38.5 mg of methylene blue. The AC showed high and uniform surface porosity, a high fixed carbon content (73 wt%), and micropore and BET surface areas of 763.1 and 570.8 m²/g, respectively, suggesting the great potential of MSA as a promising approach to produce a high-grade adsorbent for dye removal.


Subjects
Charcoal , Steam , Adsorption , Microwaves , Pyrolysis
19.
Microsc Res Tech ; 76(6): 648-52, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23553907

ABSTRACT

This article presents a pixellated solid-state photon detector designed specifically to improve certain aspects of the existing Everhart-Thornley detector. The photon detector was constructed and fabricated in an Austriamicrosystems 0.35 µm complementary metal-oxide-semiconductor process technology. The integrated circuit consists of an array of high-responsivity photodiodes coupled to corresponding low-noise transimpedance amplifiers, a selector-combiner circuit, and a variable-gain postamplifier. Simulated and experimental results show that the photon detector can achieve a maximum transimpedance gain of 170 dBΩ and a minimum bandwidth of 3.6 MHz. It is able to detect signals with optical power as low as 10 nW and produces a minimum signal-to-noise ratio (SNR) of 24 dB regardless of gain configuration. The detector has been proven able to effectively select and combine signals from different pixels. Its key advantages are smaller dimensions, higher cost effectiveness, lower voltage and power requirements, and better integration. The photon detector supports pixel-selection configurability, which may improve overall SNR and potentially generate images for different analyses. This work contributes to future research on the system-level integration of a pixellated solid-state detector for secondary electron detection in the scanning electron microscope.
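For scale, the 170 dBΩ maximum transimpedance gain converts back to ohms via the usual 20·log10 definition (a small sketch, assuming the standard dBΩ convention):

```python
def dbohm_to_ohms(gain_db):
    """Convert a transimpedance gain in dB-ohms to ohms:
    G_dB = 20 * log10(R / 1 ohm), so R = 10 ** (G_dB / 20)."""
    return 10 ** (gain_db / 20)

# The reported 170 dB-ohm maximum is roughly 316 megaohms (V/A)
max_gain_ohms = dbohm_to_ohms(170)
```

At that gain, the 10 nW minimum detectable optical power implies output swings large enough to ride above amplifier noise, which is consistent with the 24 dB minimum SNR quoted above.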
