Results 1 - 20 of 94
1.
Front Mol Neurosci ; 17: 1431549, 2024.
Article in English | MEDLINE | ID: mdl-39296283

ABSTRACT

Alpha-synuclein (aSyn) aggregates in the central nervous system are the main pathological hallmark of Parkinson's disease (PD). Aggregates of aSyn have also been detected in many peripheral tissues, including the skin, thus providing a novel and accessible target tissue for the detection of PD pathology. Still, a well-established, validated quantitative biomarker for early diagnosis of PD that also allows tracking of disease progression remains lacking. The main goal of this research was to characterize aSyn aggregates in skin biopsies as a comparative and quantitative measure of PD pathology. Using direct stochastic optical reconstruction microscopy (dSTORM) and computational tools, we imaged total and phosphorylated aSyn at the single-molecule level in sweat glands and nerve bundles of skin biopsies from healthy controls (HCs) and PD patients. We developed a user-friendly analysis platform that offers researchers a comprehensive toolkit, applying a series of cluster-analysis algorithms (i.e., DBSCAN and FOCAL) to dSTORM images. Using this platform, we found a significant decrease in the ratio of the number of neuronal-marker molecules to phosphorylated-aSyn molecules, suggesting the existence of damaged nerve cells in fibers highly enriched with phosphorylated aSyn. Furthermore, our analysis found a higher number of aSyn aggregates in PD subjects than in HC subjects, with differences in aggregate size, density, and number of molecules per aggregate. On average, aSyn aggregate radii ranged between 40 and 200 nm, with an average density of 0.001-0.1 molecules/nm². Our dSTORM analysis thus highlights the potential of the platform to identify quantitative characteristics of aSyn distribution in skin biopsies not previously described for PD patients, while offering valuable insight into PD pathology by elucidating patient aSyn aggregation status.
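A minimal sketch of the cluster-analysis step described here: DBSCAN over synthetic 2-D localization coordinates, reporting per-aggregate radius and molecular density in the units quoted above. The coordinates and the eps/min_samples values are illustrative assumptions, not the platform's actual settings.

# Sketch: DBSCAN-based aggregate detection on dSTORM-like localizations.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Simulated localizations (nm): sparse background plus one dense aggregate.
background = rng.uniform(0, 2000, size=(500, 2))
aggregate = rng.normal(1000, 60, size=(200, 2))
points = np.vstack([background, aggregate])

labels = DBSCAN(eps=50, min_samples=10).fit_predict(points)

for k in set(labels) - {-1}:
    members = points[labels == k]
    centroid = members.mean(axis=0)
    radius = np.linalg.norm(members - centroid, axis=1).max()
    density = len(members) / (np.pi * radius**2)  # molecules/nm^2
    print(f"cluster {k}: n={len(members)}, radius={radius:.0f} nm, "
          f"density={density:.4f} molecules/nm^2")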

2.
Stud Health Technol Inform ; 316: 214-215, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176711

ABSTRACT

Automatic extraction of body text from clinical PDF documents is necessary to enhance downstream NLP tasks but remains a challenge. This study presents an unsupervised algorithm designed to extract body text by leveraging a large volume of data. Using DBSCAN clustering over aggregated pages, our method extracts and organizes text blocks using their content and coordinates. Evaluation results demonstrate precision scores ranging from 0.82 to 0.98, recall scores from 0.62 to 0.94, and F1-scores from 0.71 to 0.96 across various medical specialty sources. Future work includes dynamic parameter adjustment for improved accuracy and the use of larger datasets.
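As a rough illustration of coordinate-based block clustering, the sketch below separates body-text blocks from header/footer blocks with DBSCAN over synthetic block coordinates; a real pipeline would take the coordinates from a PDF parser, and all parameter values here are assumptions.

# Sketch: isolating body text by clustering text-block positions
# aggregated across pages.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# (x, y) of text-block anchors over many pages: headers near y=790,
# footers near y=30, body text spread through the middle of the page.
headers = np.column_stack([rng.normal(300, 5, 50), rng.normal(790, 3, 50)])
footers = np.column_stack([rng.normal(300, 5, 50), rng.normal(30, 3, 50)])
body = np.column_stack([rng.normal(300, 40, 400), rng.uniform(100, 700, 400)])
blocks = np.vstack([headers, footers, body])

labels = DBSCAN(eps=25, min_samples=10).fit_predict(blocks)
# The cluster covering the widest y-range is taken as body text.
spans = {k: np.ptp(blocks[labels == k][:, 1]) for k in set(labels) - {-1}}
body_label = max(spans, key=spans.get)
print(f"body-text cluster: {body_label}, y-span {spans[body_label]:.0f} pt")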


Subjects
Natural Language Processing, Algorithms, Data Mining/methods, Humans, Electronic Health Records, Unsupervised Machine Learning
3.
Sensors (Basel) ; 24(13)2024 Jun 23.
Article in English | MEDLINE | ID: mdl-39000854

ABSTRACT

In the shipbuilding industry, welding automation using welding robots often relies on arc-sensing techniques due to spatial limitations. However, the reliability of the feedback current value, the core sensing data, is reduced when welding target workpieces have significant curvature or gaps between curved workpieces, owing to the control of the short-circuit transition; this leads to seam-tracking failure and subsequent damage to the workpieces. To address these problems, this study proposes a new algorithm, MBSC (median-based spatial clustering), based on the DBSCAN (density-based spatial clustering of applications with noise) algorithm. By performing clustering based on the median value of the data in each weaving area and considering the characteristics of the feedback current data, the proposed technique utilizes detected outliers to enhance seam-tracking accuracy and responsiveness in unstructured and challenging welding environments. The effectiveness of the proposed technique was verified through actual welding experiments in a yard environment.
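The median-based idea can be pictured as follows: within each weaving window, samples far from the window median are flagged as outliers. This is a simplified reading of MBSC, a sketch only; the window size and threshold are illustrative assumptions.

# Sketch: per-window median/MAD outlier flagging on a feedback-current trace.
import numpy as np

def median_outliers(current, window=50, k=3.0):
    flags = np.zeros(len(current), dtype=bool)
    for start in range(0, len(current), window):
        seg = current[start:start + window]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med)) + 1e-9  # robust spread estimate
        flags[start:start + window] = np.abs(seg - med) > k * mad
    return flags

rng = np.random.default_rng(2)
current = rng.normal(200, 5, 500)          # nominal feedback current (A)
current[240:245] += 60                     # short-circuit-like transient
print(np.flatnonzero(median_outliers(current)))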

4.
Sensors (Basel) ; 24(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38793871

ABSTRACT

The sky may seem big enough that two flying vehicles would never collide, but mid-air collisions still occur occasionally and are a significant concern. Pilots learn manual tactics to avoid collisions, such as see-and-avoid, but these rules have limitations. Automated solutions have reduced collisions, but these technologies are not mandatory in all countries or airspaces, and they are expensive. These problems have prompted researchers to continue the search for low-cost solutions. One attractive solution is to use computer vision to detect obstacles in the air, owing to its reduced cost and weight. A well-trained deep learning solution is appealing because object detection is fast in most cases, but it relies entirely on the training data set. The algorithm chosen for this study is optical flow: optical flow vectors can help separate the motion caused by the camera from the motion caused by incoming objects without relying on training data. This paper describes the development of an optical flow-based airborne obstacle detection algorithm to avoid mid-air collisions. The approach uses visual information from a monocular camera and detects obstacles using morphological filters, optical flow, focus of expansion, and a data clustering algorithm. The proposal was evaluated using realistic vision data obtained with a self-developed simulator, which provides different environments, trajectories, and altitudes of flying objects. In the experiments, the optical flow-based algorithm detected all incoming obstacles along their trajectories, with an F-score greater than 75% and a good balance between precision and recall.
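A minimal sketch of the detection idea on synthetic frames: dense optical flow with OpenCV, then DBSCAN over pixels whose flow magnitude deviates strongly from the scene median, as a crude stand-in for the paper's focus-of-expansion analysis. Frames, thresholds, and clustering parameters are assumptions.

# Sketch: flow-magnitude outliers clustered into obstacle candidates.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic frames: a bright blob drifting right against static texture.
prev = np.zeros((240, 320), np.uint8)
curr = np.zeros((240, 320), np.uint8)
cv2.randu(prev, 0, 50)
curr[:] = prev
cv2.circle(prev, (100, 120), 15, 255, -1)
cv2.circle(curr, (110, 120), 15, 255, -1)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag = np.linalg.norm(flow, axis=2)
# Pixels moving much faster than the scene median hint at an incoming object.
ys, xs = np.where(mag > max(3.0 * np.median(mag), 1.0))
if len(xs):
    labels = DBSCAN(eps=10, min_samples=20).fit_predict(np.column_stack([xs, ys]))
    print(f"obstacle candidates: {len(set(labels) - {-1})}")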

5.
Methods Mol Biol ; 2800: 167-187, 2024.
Article in English | MEDLINE | ID: mdl-38709484

ABSTRACT

Analyzing the dynamics of mitochondrial content in developing T cells is crucial for understanding their metabolic state during development. However, monitoring mitochondrial content in real time requires balancing cell viability against image resolution. In this chapter, we present experimental protocols for measuring mitochondrial content in developing T cells using three modalities: bulk analysis via flow cytometry, volumetric imaging via laser scanning confocal microscopy, and dynamic live-cell monitoring via spinning disc confocal microscopy. We then provide an image segmentation and centroid-tracking-based analysis pipeline for automated quantification of large numbers of microscopy images. Together, these protocols offer comprehensive approaches for investigating mitochondrial dynamics in developing T cells, enabling a deeper understanding of their metabolic processes.
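A minimal sketch of the segmentation-and-centroid-tracking step on synthetic frames: threshold, label connected components, and link each object to its nearest centroid in the next frame. The images, threshold, and linking rule are illustrative assumptions, not the chapter's exact pipeline.

# Sketch: threshold-based segmentation and nearest-centroid linking.
import numpy as np
from skimage.measure import label, regionprops
from scipy.spatial.distance import cdist

def centroids(frame, thresh):
    mask = frame > thresh
    return np.array([p.centroid for p in regionprops(label(mask))])

rng = np.random.default_rng(3)
frame0 = rng.random((128, 128)); frame0[30:36, 40:46] = 2.0
frame1 = rng.random((128, 128)); frame1[32:38, 42:48] = 2.0

c0, c1 = centroids(frame0, 1.5), centroids(frame1, 1.5)
# Link each object in frame0 to its nearest centroid in frame1.
links = cdist(c0, c1).argmin(axis=1)
for i, j in enumerate(links):
    print(f"object {i}: moved {np.linalg.norm(c0[i] - c1[j]):.1f} px")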


Subjects
Flow Cytometry, Confocal Microscopy, Mitochondria, Single-Cell Analysis, T-Lymphocytes, Flow Cytometry/methods, Mitochondria/metabolism, Single-Cell Analysis/methods, T-Lymphocytes/metabolism, T-Lymphocytes/cytology, Confocal Microscopy/methods, Animals, Computer-Assisted Image Processing/methods, Humans, Mice, Mitochondrial Dynamics
6.
Entropy (Basel) ; 26(4)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38667875

ABSTRACT

In underground industries, practitioners frequently employ argots to communicate discreetly and evade surveillance by investigative agencies. We propose an innovative approach using word vectors and large language models to decipher and understand the myriad argots in these industries, providing crucial technical support for law enforcement to detect and combat illicit activities. Specifically, positional differences in semantic space distinguish argots, and the corpora of pre-trained language models are crucial for interpreting them. Expanding on these concepts, the article assesses the semantic coherence of word vectors in semantic space using the concept of information entropy. We also devised a labeled argot dataset, MNGG, and developed an argot recognition framework named CSRMECT, along with an argot interpretation framework called LLMResolve. These frameworks leverage the MECT model, a large language model, prompt engineering, and the DBSCAN clustering algorithm. Experimental results demonstrate that the CSRMECT framework outperforms the current best model by 10% in F1 score for argot recognition on the MNGG dataset, while the LLMResolve framework achieves 4% higher interpretation accuracy than the current best model. The related experiments also indicate a potential correlation between vector information entropy and model performance.
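One way to picture the entropy idea: score a word vector's semantic coherence as the Shannon entropy of its normalized similarity distribution over a vocabulary, with lower entropy suggesting a sharper semantic neighborhood. The vectors below are random stand-ins for pre-trained embeddings, and the scoring function is an assumption, not the paper's exact formulation.

# Sketch: entropy of a word vector's softmax-normalized cosine similarities.
import numpy as np

def similarity_entropy(vec, vocab):
    sims = vocab @ vec / (np.linalg.norm(vocab, axis=1) * np.linalg.norm(vec))
    p = np.exp(sims) / np.exp(sims).sum()          # softmax to a distribution
    return -(p * np.log2(p)).sum()                  # entropy in bits

rng = np.random.default_rng(4)
vocab = rng.normal(size=(5000, 100))                # stand-in embedding table
word = rng.normal(size=100)
print(f"entropy: {similarity_entropy(word, vocab):.2f} bits")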

7.
PeerJ Comput Sci ; 10: e1921, 2024.
Article in English | MEDLINE | ID: mdl-38660211

ABSTRACT

The density-based clustering method is considered a robust unsupervised clustering technique due to its ability to identify outliers, form clusters of irregular shapes, and automatically determine the number of clusters. These unique properties helped its pioneering algorithm, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), become applicable to datasets where varying numbers of clusters of different shapes and sizes can be detected with little interference from the user. However, the original algorithm exhibits limitations, especially its sensitivity to the user-input parameters minPts and ɛ. Additionally, the algorithm assigns inconsistent cluster labels to data objects found in overlapping density regions of separate clusters, lowering its accuracy. To alleviate these problems and increase clustering accuracy, we propose two methods that use statistics from a given dataset's k-nearest-neighbor density distribution to determine the optimal ɛ values. Our approach removes this burden from users and automatically detects the clusters of a given dataset. Furthermore, a method to identify the accurate border objects of separate clusters is proposed and implemented to resolve the unpredictability of the original algorithm. Finally, our experiments show that our efficient re-implementation of the original algorithm, which automatically clusters datasets and improves the clustering quality of adjoining cluster members, provides increased clustering accuracy and faster running times compared to earlier approaches.
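The classic k-distance heuristic gives the flavor of deriving ɛ from the k-nearest-neighbor density distribution: sort each point's distance to its k-th neighbor and take the elbow of the curve. The maximum-second-difference elbow rule below is a crude simplification of the paper's statistical approach.

# Sketch: eps from the sorted k-nearest-neighbor distance curve.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

X, _ = make_blobs(n_samples=600, centers=3, random_state=0)
min_pts = 5

dists, _ = NearestNeighbors(n_neighbors=min_pts).fit(X).kneighbors(X)
kdist = np.sort(dists[:, -1])                 # distance to the k-th neighbor
elbow = np.argmax(np.diff(kdist, 2)) + 1      # crude maximum-curvature point
eps = kdist[elbow]

labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
print(f"eps={eps:.3f}, clusters={len(set(labels) - {-1})}")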

8.
Water Sci Technol ; 89(7): 1757-1770, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38619901

ABSTRACT

The water reuse facilities of industrial parks face the challenge of managing a growing variety of wastewater sources as their inlet water. Typically, the classification of these inlet waters is designed by engineers with extensive expertise. This paper presents an innovative application of unsupervised learning methods to classify inlet water in Chinese water reuse stations, aiming to reduce reliance on engineering experience. The concept of 'water quality distance' was incorporated into three unsupervised clustering algorithms (K-means, DBSCAN, and AGNES), which were validated through six case studies. Three of the six cases were employed to illustrate the feasibility of the unsupervised clustering algorithms; the results indicated that the clustering algorithms exhibited greater stability and quality than both manual clustering and ChatGPT-based clustering. The remaining three cases were utilized to demonstrate the reliability of the three clustering algorithms; the findings revealed that the AGNES algorithm showed the greatest potential for application. The average purities across the six cases for K-means, DBSCAN, and AGNES were 0.947, 0.852, and 0.955, respectively.
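The purity metric reported above can be computed as the fraction of samples assigned to the majority class of their cluster; a minimal sketch with dummy labels:

# Sketch: cluster purity against ground-truth classes.
import numpy as np

def purity(y_true, y_pred):
    total = 0
    for k in np.unique(y_pred):
        members = y_true[y_pred == k]
        total += np.bincount(members).max()    # majority-class count
    return total / len(y_true)

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])
print(f"purity = {purity(y_true, y_pred):.3f}")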


Subjects
Bays, Unsupervised Machine Learning, Reproducibility of Results, Algorithms, Cluster Analysis
9.
Sensors (Basel) ; 24(3)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339461

ABSTRACT

In this study, we present a novel machine learning framework for web server anomaly detection that uniquely combines the Isolation Forest algorithm with expert evaluation, focusing on individual user activities within NGINX server logs. Our approach addresses the limitations of traditional methods by effectively isolating and analyzing subtle anomalies in vast datasets. Initially, the Isolation Forest algorithm was applied to extensive NGINX server logs, successfully identifying outlier user behaviors that conventional methods often overlook. We then employed DBSCAN for detailed clustering of these anomalies, categorizing them based on user request times and types. A key innovation of our methodology is the incorporation of post-clustering expert analysis. Cybersecurity professionals evaluated the identified clusters, adding a crucial layer of qualitative assessment. This enabled the accurate distinction between benign and potentially harmful activities, leading to targeted responses such as access restrictions or web server configuration adjustments. Our approach demonstrates a significant advancement in network security, offering a more refined understanding of user behavior. By integrating algorithmic precision with expert insights, we provide a comprehensive and nuanced strategy for enhancing cybersecurity measures. This study not only advances anomaly detection techniques but also emphasizes the critical need for a multifaceted approach in protecting web server infrastructures.
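A minimal sketch of the two-stage pipeline on synthetic log features: Isolation Forest flags anomalous records, then DBSCAN groups them by request hour and type for expert review. The feature construction and all parameter values are illustrative assumptions, not the study's configuration.

# Sketch: Isolation Forest anomaly flagging, then DBSCAN grouping.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# Columns: request hour (0-23), encoded request type, response size (KB).
normal = np.column_stack([rng.normal(14, 3, 950), rng.integers(0, 4, 950),
                          rng.normal(20, 5, 950)])
odd = np.column_stack([rng.normal(3, 1, 50), rng.integers(4, 6, 50),
                       rng.normal(300, 40, 50)])
X = np.vstack([normal, odd])

flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
anomalies = X[flags == -1]
clusters = DBSCAN(eps=2.0, min_samples=5).fit_predict(anomalies[:, :2])
print(f"{len(anomalies)} anomalies in {len(set(clusters) - {-1})} clusters")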

10.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339655

ABSTRACT

During heavy traffic flow featuring a substantial number of vehicles, the data reflecting the strain response of asphalt pavement under vehicle load exhibit notable fluctuations with abnormal values, attributable to the complex operating environment. Thus, there is a need for a real-time anomalous-data diagnosis system that can effectively extract dynamic strain features, such as peak values and peak separation, from a large amount of data. This paper presents a dynamic response signal analysis method that utilizes the DBSCAN clustering algorithm and the findpeaks function, designed to analyze data collected by sensors installed within the pavement. The first step involves denoising the data using low-pass filters and other techniques. Subsequently, the DBSCAN algorithm, improved using the K-Dist method, is used to diagnose abnormal data after denoising. The refined findpeaks function then carries out adaptive feature extraction on the denoised, anomaly-free data. The enhanced DBSCAN algorithm is tested via simulation and demonstrates its effectiveness in detecting abnormal data in the road dynamic response signal. The findpeaks function enables relatively accurate identification of peak values, leading to the identification of strain-signal peaks of complex multi-axle lorries. This study is valuable for efficient data processing and effective information utilization in pavement monitoring.
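A minimal sketch of the diagnose-then-extract idea: flag abnormal samples via DBSCAN on the residual against a median-filtered signal, repair them, and pick peaks with scipy's find_peaks. All parameter values are illustrative assumptions rather than the paper's tuned settings.

# Sketch: outlier repair plus adaptive peak extraction on a strain signal.
import numpy as np
from scipy.signal import medfilt, find_peaks
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2000)
strain = (np.exp(-((t[:, None] - [2.0, 2.6, 7.0]) ** 2) / 0.02).sum(1) * 100
          + rng.normal(0, 2, t.size))
strain[500] += 400                                  # spurious spike

smooth = medfilt(strain, 21)
residual = strain - smooth
labels = DBSCAN(eps=5, min_samples=10).fit_predict(residual.reshape(-1, 1))
clean = strain.copy()
clean[labels == -1] = smooth[labels == -1]          # repair flagged outliers

peaks, _ = find_peaks(clean, height=30, distance=50)
print(f"axle peaks at t = {np.round(t[peaks], 2)}")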

11.
Comput Methods Programs Biomed ; 246: 108042, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38310712

ABSTRACT

Improving the quality of breast ultrasound images is of great significance for clinical diagnosis, as it can greatly boost the diagnostic accuracy of ultrasonography. However, due to the principles of ultrasound imaging and the acquisition equipment, collected ultrasound images naturally contain a large amount of speckle noise, which degrades image quality and affects clinical diagnosis. To overcome this problem, we propose an improved denoising algorithm combining multi-filter DFrFT (Discrete Fractional Fourier Transform) and an adaptive fast BM3D (Block Matching and 3D collaborative filtering) method. First, we provide the multi-filter DFrFT method for preprocessing the original breast ultrasound image, removing some speckle noise early in the fractional transform domain. Based on the fractional frequency spectrum characteristics of breast ultrasound images, three types of filters are designed for the low, medium, and high frequency domains. By integrating the filtered images, enhanced images are obtained that not only remove some background speckle noise but also preserve the details of breast lesions. Second, to further enhance image quality on the basis of multi-filter DFrFT, we propose an adaptive fast BM3D method that introduces DBSCAN-based superpixel segmentation into the block-matching process, using superpixel segmentation labels as a reference for how similar the target block is to retrieval blocks. This reduces the number of blocks to be retrieved and makes the matched blocks more similar in their features. Finally, local noise-parameter estimation is adopted in the hard-threshold filtering step of the traditional BM3D algorithm to achieve locally adaptive filtering and further improve the denoising effect. Examples on synthetic data and real breast ultrasound data show that this combined method improves speckle suppression while effectively preserving structural fidelity, without increasing time cost.
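The superpixel idea can be pictured as a coarse DBSCAN labeling over (x, y, intensity) features, with block matching then restricted to blocks sharing a label. This is a simplified stand-in for the paper's DBSCAN-based superpixel segmentation; the image, feature scaling, and parameters are assumptions.

# Sketch: crude superpixel labeling to guide block matching.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(14)
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # bright lesion-like patch
img += rng.normal(0, 0.1, img.shape)                # speckle-like noise

ys, xs = np.mgrid[0:64, 0:64]
feats = np.column_stack([xs.ravel() / 8, ys.ravel() / 8, img.ravel() * 5])
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(feats).reshape(64, 64)

# Block matching would now only compare 8x8 blocks whose dominant label
# agrees with the target block's label, shrinking the search set.
print(f"superpixel regions: {len(set(labels.ravel()) - {-1})}")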


Subjects
Algorithms, Mammary Ultrasonography, Female, Humans, Ultrasonography/methods
12.
Sensors (Basel) ; 23(21)2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37960452

ABSTRACT

Laser altimetry data from the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) contain substantial noise, necessitating a signal-photon extraction method. In this study, we propose a density clustering method that combines slope and elevation information from optical stereo images and adaptively adjusts the neighborhood search direction along track. The local classification density threshold was calculated adaptively according to the uneven spatial distribution of noise and signal density, and reliable surface signal points were extracted. The performance of the algorithm was validated on strong- and weak-beam laser altimetry data using optical stereo images with different resolutions and positioning accuracies, and the results were compared qualitatively and quantitatively with those of the ATL08 algorithm. The signal extraction quality was better than that of the ATL08 algorithm in steep-slope and low signal-to-noise ratio (SNR) regions. The proposed method better balances recall and precision, with an F1-score higher than that of the ATL08 algorithm, and can accurately extract continuous, reliable surface signals for both strong and weak beams across different terrains and land cover types.
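A simplified stand-in for density-based photon extraction: in the along-track/elevation plane, photons whose k-th nearest neighbor is close are kept as surface signal. The coordinate scaling, neighbor count, and threshold are illustrative assumptions; the paper's method additionally adapts to slope and local density.

# Sketch: k-NN density filtering of a synthetic photon cloud.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
x = rng.uniform(0, 1000, 5000)                    # along-track distance (m)
noise_z = rng.uniform(0, 200, 5000)               # scattered noise photons
surface = np.sort(rng.uniform(0, 1000, 800))
signal_z = 50 + 0.05 * surface + rng.normal(0, 0.5, 800)  # sloped surface
X = np.column_stack([np.r_[x, surface] / 10, np.r_[noise_z, signal_z]])

# Photons whose 10th neighbor is close sit in dense, signal-like regions.
d, _ = NearestNeighbors(n_neighbors=10).fit(X).kneighbors(X)
signal_mask = d[:, -1] < 2.0
print(f"kept {signal_mask.sum()} of {len(X)} photons as surface signal")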

13.
Sensors (Basel) ; 23(20)2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37896672

ABSTRACT

Currently, e-noses are used for measuring odorous compounds at wastewater treatment plants. These devices mimic the mammalian olfactory sense, comprising an array of multiple non-specific gas sensors. The array creates a unique set of signals called a "gas fingerprint", which enables differentiation between analyzed samples of gas mixtures, provided that appropriate advanced analyses of the multidimensional data are conducted. Failures of the wastewater treatment process are directly connected to the odor nuisance of bioreactors and are reflected in the levels of pollution indicators. Thus, it can be assumed that, using appropriately selected methods for analyzing data from a gas sensor array, it is possible to distinguish and classify the operating states of bioreactors (i.e., phases of normal operation) as well as the occurrence of malfunctions. This work focuses on developing a complete protocol for analyzing and interpreting multidimensional data from a gas sensor array measuring the properties of the air headspace in a bioreactor. The methods include dimensionality reduction and visualization in two-dimensional space using principal component analysis (PCA), data clustering using the unsupervised Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, and, in the last stage, extra trees as a supervised machine learning method to achieve the best possible accuracy and precision in data classification.
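A minimal sketch of the three-stage protocol on synthetic e-nose data: PCA for 2-D visualization, DBSCAN for unsupervised discovery of operating states, and extra trees for supervised classification. The data shapes and all parameters are assumptions.

# Sketch: PCA -> DBSCAN -> extra trees on gas-fingerprint vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
# 300 samples x 8 sensors; two operating states plus a malfunction state.
states = rng.integers(0, 3, 300)
X = rng.normal(0, 0.3, (300, 8)) + states[:, None] * 1.5

X2 = PCA(n_components=2).fit_transform(X)          # visualization space
clusters = DBSCAN(eps=0.8, min_samples=10).fit_predict(X2)
acc = cross_val_score(ExtraTreesClassifier(random_state=0), X, states, cv=5)
print(f"found {len(set(clusters) - {-1})} clusters; accuracy {acc.mean():.2f}")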


Subjects
Sewage, Wastewater, Electronic Nose, Algorithms, Bioreactors
14.
Med Biol Eng Comput ; 61(11): 3035-3048, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37608081

ABSTRACT

Extracting "high-ranking" or "prime protein targets" (PPTs) as potent MRSA drug candidates from a given set of ligands is a key challenge in efficient molecular docking. This study combines protein-versus-ligand molecular docking (MD) data extracted from 10 independent MD evaluations - ADFR, DOCK, Gemdock, Ledock, Plants, Psovina, Quickvina2, smina, vina, and vinaxb - to identify top MRSA drug candidates. Twenty-nine active protein targets (APTs) from the enhanced DUD-E repository ( http://DUD-E.decoys.org ) are matched against 1040 ligands using "forward modeling" machine learning for initial "data mining and modeling" (DDM) to extract PPTs and the corresponding high-affinity ligands (HALs). K-means clustering (KMC) is then performed on 400 ligands matched against 29 PTs, with each cluster accommodating HALs and the corresponding PPTs. The performance of KMC is validated against randomly chosen head, tail, and middle active ligands (ALs). KMC outcomes have also been validated against two other clustering methods, the Gaussian mixture model (GMM) and density-based spatial clustering of applications with noise (DBSCAN). While GMM shows results similar to KMC, DBSCAN failed to yield more than one cluster or to handle the noise (outliers), affirming the choice of KMC or GMM. Databases obtained from ADFR to mine PPTs are then ranked according to the number of corresponding HAL-PPT combinations (HPCs) inside the derived clusters, an approach called "reverse modeling" (RM). From the set of 29 PTs studied, RM predicts with high fidelity 5 PPTs (17%) that bind with 76 of the 400 ligands (19%), leading to a prediction of next-generation MRSA drug candidates: PPT2 (average HPC 41.1%) is the top choice, followed by PPT14 (average HPC 25.46%) and PPT15 (average HPC 23.12%). This algorithm can be implemented generically, irrespective of pathogenic form, and is particularly effective for sparse data.
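The KMC step can be pictured as clustering a ligand-by-target affinity matrix and then ranking targets by how often they score best within each cluster, echoing the HPC-based ranking. The affinity matrix below is random; real input would come from the docking runs, and the cluster count is an assumption.

# Sketch: k-means over docking affinities, then target ranking.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
affinity = rng.normal(-7, 1.5, (400, 29))          # 400 ligands x 29 targets
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(affinity)

rank = np.zeros(29, dtype=int)
for k in range(8):
    best = affinity[labels == k].argmin(axis=1)    # lower energy = stronger
    rank += np.bincount(best, minlength=29)
print("top prime-target candidates:", np.argsort(rank)[::-1][:5])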


Subjects
Drug Design, Proteins, Molecular Docking Simulation, Algorithms, Machine Learning
15.
Cell Rep Methods ; 3(6): 100485, 2023 06 26.
Article in English | MEDLINE | ID: mdl-37426753

ABSTRACT

While combination therapy completely suppresses HIV-1 replication in blood, functional virus persists in CD4+ T cell subsets in non-peripheral compartments that are not easily accessible. To bridge this gap, we investigated the tissue-homing properties of cells that transiently appear in the circulating blood. Through cell separation and in vitro stimulation, the HIV-1 "Gag and Envelope reactivation co-detection assay" (GERDA) enables sensitive detection of Gag+/Env+ protein-expressing cells down to about one cell per million using flow cytometry. By associating GERDA with proviral DNA and polyA-RNA transcripts, we corroborate the presence and functionality of HIV-1 in critical body compartments using t-distributed stochastic neighbor embedding (tSNE) and density-based spatial clustering of applications with noise (DBSCAN), finding low viral activity in circulating cells early after diagnosis. We demonstrate that transcriptional HIV-1 reactivation can occur at any time, potentially giving rise to intact, infectious particles. With single-cell-level resolution, GERDA attributes virus production to lymph-node-homing cells, with central memory T cells (TCMs) as main players, critical for HIV-1 reservoir eradication.
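A minimal sketch of the tSNE-plus-DBSCAN step on synthetic cytometry-like features, with a rare positive population spiked into a large negative background; the dimensions, population sizes, and parameters are illustrative assumptions.

# Sketch: tSNE embedding followed by DBSCAN clustering.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(10)
# A rare Gag+/Env+-like population spiked into a negative background.
background = rng.normal(0, 1, (2000, 10))
rare = rng.normal(4, 0.3, (40, 10))
X = np.vstack([background, rare])

embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
labels = DBSCAN(eps=3, min_samples=15).fit_predict(embedding)
print(f"clusters found: {len(set(labels) - {-1})}")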


Subjects
HIV Infections, HIV Seropositivity, HIV-1, Humans, HIV-1/genetics, CD4-Positive T-Lymphocytes, T-Lymphocyte Subsets
16.
Sensors (Basel) ; 23(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37420934

ABSTRACT

Point cloud registration plays a crucial role in 3D mapping and localization. Urban scene point clouds pose significant challenges for registration due to their large data volume, similar scenarios, and dynamic objects. Estimating location from instances (buildings, traffic lights, etc.) in urban scenes is a more human-like approach. In this paper, we propose PCRMLP (point cloud registration MLP), a novel model for urban scene point cloud registration that achieves registration performance comparable to prior learning-based methods. Compared to previous works that focus on extracting features and estimating correspondence, PCRMLP estimates the transformation implicitly from concrete instances. The key innovation lies in the instance-level urban scene representation, which leverages semantic segmentation and density-based spatial clustering of applications with noise (DBSCAN) to generate instance descriptors, enabling robust feature extraction, dynamic-object filtering, and logical transformation estimation. A lightweight network consisting of multilayer perceptrons (MLPs) is then employed to obtain the transformation in an encoder-decoder manner. Experimental validation on the KITTI dataset demonstrates that PCRMLP achieves satisfactory coarse transformation estimates from instance descriptors within a remarkable time of 0.0028 s. With the incorporation of an ICP refinement module, our proposed method outperforms prior learning-based approaches, yielding a rotation error of 2.01° and a translation error of 1.58 m. These results highlight PCRMLP's potential for coarse registration of urban scene point clouds, paving the way for its application in instance-level semantic mapping and localization.
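A minimal sketch of instance-descriptor generation: cluster each semantic class of a labeled point cloud with DBSCAN and keep per-instance centroids. The point cloud and the descriptor form are assumptions, not PCRMLP's exact representation.

# Sketch: DBSCAN per semantic class to build instance centroids.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(11)
# Points with (x, y, z) and a semantic class id (e.g., 0=building, 1=pole).
pts = np.vstack([rng.normal([0, 0, 5], 1, (300, 3)),
                 rng.normal([20, 10, 5], 1, (300, 3)),
                 rng.normal([10, -5, 2], 0.3, (100, 3))])
cls = np.r_[np.zeros(600, int), np.ones(100, int)]

descriptors = []
for c in np.unique(cls):
    sub = pts[cls == c]
    labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(sub)
    for k in set(labels) - {-1}:
        centroid = sub[labels == k].mean(axis=0)
        descriptors.append((c, *centroid))          # (class, x, y, z)
print(f"{len(descriptors)} instance descriptors")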


Subjects
Neural Networks (Computer), Cloud Computing, Machine Learning
17.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514863

ABSTRACT

This article proposes a novel tool that allows real-time monitoring of the balance of a press during the stamping process. This is performed by means of a virtual sensor that, using the tonnage information in real time, calculates the centre of gravity of a virtual load that moves the slide up and down. The present development follows the philosophy shown in our previous work on industrialized predictive systems, that is, using the information already available in the system to develop IIoT tools; we call this philosophy I3oT (industrializable industrial Internet of Things). The tonnage data are obtained through a new criterion, called Criterion-360, which stores data from a sensor each time the encoder indicates that the main axis has rotated by one degree. Since the main axis completes one full revolution per press cycle, this criterion captures the phases of the process and easily shows where the measured data fall within the cycle. The new system detects anomalies due to imbalance or discontinuity in the stamping process using the DBSCAN algorithm, allowing unexpected stops and serious breakdowns to be avoided. Tests were conducted to verify that the system actually detects minimal imbalances in the stamping process, and it was subsequently connected to normal production for one year. We conclude by explaining the anomalies detected, along with our conclusions and future work.
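The Criterion-360 idea can be pictured as one tonnage value per degree of main-axis rotation, giving a 360-sample curve per cycle; DBSCAN over these curves then flags imbalanced cycles as outliers. The signal shapes and parameters below are illustrative assumptions.

# Sketch: per-cycle 360-sample tonnage curves, anomalous cycles as outliers.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(13)
angle = np.arange(360)
cycles = np.array([100 * np.sin(np.radians(angle)).clip(0)
                   + rng.normal(0, 1, 360) for _ in range(200)])
cycles[42] *= 1.3                                  # one imbalanced cycle

labels = DBSCAN(eps=30, min_samples=5).fit_predict(cycles)
print("anomalous cycles:", np.flatnonzero(labels == -1))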

18.
Environ Sci Technol ; 57(27): 10030-10038, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37378593

ABSTRACT

Low-cost air quality monitors are increasingly being deployed in various indoor environments. However, data of high temporal resolution from those sensors are often summarized into a single mean value, with information about pollutant dynamics discarded. Further, low-cost sensors often suffer from limitations such as a lack of absolute accuracy and drift over time. There is a growing interest in utilizing data science and machine learning techniques to overcome those limitations and take full advantage of low-cost sensors. In this study, we developed an unsupervised machine learning model for automatically recognizing decay periods from concentration time series data and estimating pollutant loss rates. The model uses k-means and DBSCAN clustering to extract decays and then mass balance equations to estimate loss rates. Applications on data collected from various environments suggest that the CO2 loss rate was consistently lower than the PM2.5 loss rate in the same environment, while both varied spatially and temporally. Further, detailed protocols were established to select optimal model hyperparameters and filter out results with high uncertainty. Overall, this model provides a novel solution to monitoring pollutant removal rates with potentially wide applications such as evaluating filtration and ventilation and characterizing indoor emission sources.
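The loss-rate step rests on a first-order mass balance: after a source stops, C(t) = C_bg + (C_0 - C_bg)·e^(-kt), so a log-linear fit of C - C_bg against time recovers k. A minimal sketch with an assumed synthetic decay:

# Sketch: first-order loss rate from a detected concentration decay.
import numpy as np

t = np.arange(0, 120, 5)                       # minutes
k_true = 0.02                                  # per minute
c_bg = 420.0                                   # background CO2 (ppm)
conc = c_bg + 800 * np.exp(-k_true * t)        # decay toward background

# First-order mass balance: ln(C - C_bg) = ln(C0 - C_bg) - k t
slope, _ = np.polyfit(t, np.log(conc - c_bg), 1)
print(f"estimated loss rate: {-slope:.3f} per minute (true {k_true})")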


Subjects
Air Pollutants, Indoor Air Pollution, Environmental Pollutants, Air Pollutants/analysis, Particulate Matter/analysis, Environmental Monitoring/methods, Cluster Analysis, Indoor Air Pollution/analysis
19.
Sensors (Basel) ; 23(11)2023 May 31.
Article in English | MEDLINE | ID: mdl-37299951

ABSTRACT

Online monitoring of laser welding depth is increasingly important, given the growing demand for precise welding depth in power battery manufacturing for new energy vehicles. Indirect methods of welding depth measurement based on optical radiation, visual images, and acoustic signals in the process zone have low accuracy in continuous monitoring. Optical coherence tomography (OCT) provides a direct welding depth measurement during laser welding and shows high achievable accuracy in continuous monitoring. A statistical evaluation approach can accurately extract the welding depth from OCT data but suffers from complex noise removal. In this paper, an efficient method coupling DBSCAN (Density-Based Spatial Clustering of Applications with Noise) with a percentile filter for laser welding depth determination is proposed. The noise in the OCT data is viewed as outliers and detected by DBSCAN; after eliminating the noise, the percentile filter is used to extract the welding depth. Comparing the welding depth determined by this approach with the actual weld depth of the longitudinal cross-section yielded an average error of less than 5%. The method thus provides efficient and precise laser welding depth measurement.
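A minimal sketch of the coupled approach on synthetic OCT depth readings: DBSCAN flags spurious returns as outliers, and a high percentile of the remaining samples gives the depth. The percentile choice and eps are illustrative assumptions.

# Sketch: DBSCAN outlier removal, then a percentile as the keyhole depth.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(12)
depth = rng.normal(1.50, 0.03, 2000)           # true keyhole depth (mm)
depth[rng.choice(2000, 200, replace=False)] = rng.uniform(0, 1.0, 200)  # noise

labels = DBSCAN(eps=0.02, min_samples=20).fit_predict(depth.reshape(-1, 1))
core = depth[labels != -1]
print(f"welding depth ~ {np.percentile(core, 95):.3f} mm")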


Subjects
Optical Coherence Tomography, Welding, Optical Coherence Tomography/methods, Lasers
20.
Entropy (Basel) ; 25(5)2023 May 11.
Article in English | MEDLINE | ID: mdl-37238536

ABSTRACT

The density-based spatial clustering of applications with noise (DBSCAN) algorithm can cluster arbitrarily structured datasets. However, its clustering result is exceptionally sensitive to the neighborhood radius (Eps) and to noise points, making it hard to obtain the best result quickly and accurately. To solve these problems, we propose an adaptive DBSCAN method based on the chameleon swarm algorithm (CSA-DBSCAN). First, we take the clustering evaluation index of the DBSCAN algorithm as the objective function and use the chameleon swarm algorithm (CSA) to iteratively optimize it, obtaining the best Eps value and clustering result. Then, we introduce deviation theory on the spatial distances of data points into the nearest-neighbor search mechanism to assign the identified noise points, which solves the algorithm's over-identification of noise points. Finally, we construct color-image superpixel information to improve the CSA-DBSCAN algorithm's performance on image segmentation. Simulation results on synthetic datasets, real-world datasets, and color images show that the CSA-DBSCAN algorithm quickly finds accurate clustering results and segments color images effectively, demonstrating its clustering effectiveness and practicality.
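A simple grid search maximizing the silhouette score conveys the idea of optimizing a clustering evaluation index over Eps; the paper's CSA metaheuristic plays this role in a more sophisticated way. The dataset, evaluation index, and search range below are assumptions.

# Sketch: tuning Eps by maximizing a clustering evaluation index.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)

best_eps, best_score = None, -1.0
for eps in np.linspace(0.05, 0.5, 40):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    if len(set(labels) - {-1}) < 2:
        continue                       # silhouette needs >= 2 real clusters
    score = silhouette_score(X, labels)
    if score > best_score:
        best_eps, best_score = eps, score
print(f"best eps={best_eps:.3f}, silhouette={best_score:.3f}")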
