Results 1 - 20 of 258
1.
Entropy (Basel) ; 26(7)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39056938

ABSTRACT

Non-Euclidean data, such as social networks and citation relationships between documents, contain both node and structural information. The Graph Convolutional Network (GCN) can automatically learn node features and the association information between nodes. The core idea of the GCN is to aggregate node information by using edge information, thereby generating new node features. Two factors govern the update of node features: the number of neighboring nodes of the central node, and the contribution of each neighboring node to the central node. Because previous GCN methods did not consider both factors simultaneously, we design an adaptive attention mechanism (AAM). To further enhance the representational capability of the model, we utilize Multi-Head Graph Convolution (MHGC). Finally, we adopt the cross-entropy (CE) loss function to measure the difference between the predicted node categories and the ground truth (GT); combined with backpropagation, this ultimately achieves accurate node classification. Based on the AAM, MHGC, and CE, we propose the novel Graph Adaptive Attention Network (GAAN). Experiments show that its classification accuracy achieves outstanding performance on the Cora, Citeseer, and Pubmed datasets.
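
The attention-weighted neighbor aggregation described above can be sketched numerically; this is a minimal illustration of masked-softmax aggregation over neighbors (function and variable names are hypothetical, not the paper's AAM implementation):

```python
import numpy as np

# Toy sketch of attention-weighted neighbor aggregation: each neighbor's
# contribution to the central node is scaled by an attention weight, and
# non-neighbors are masked out before the softmax.
def aggregate(features, adj, attn):
    # features: (n, d) node features; adj: (n, n) 0/1 adjacency with
    # self-loops; attn: (n, n) raw attention scores (hypothetical values).
    scores = np.where(adj > 0, attn, -np.inf)            # mask non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)  # softmax per node
    return weights @ features                            # weighted aggregation

features = np.eye(3)                  # 3 nodes with one-hot features
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
attn = np.zeros((3, 3))               # uniform attention for illustration
out = aggregate(features, adj, attn)
```

Each row of `weights` is a softmax over the central node's neighbors, so the number of neighbors and their individual scores both shape the aggregated feature.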

2.
Stud Hist Philos Sci ; 106: 165-176, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38986224

ABSTRACT

Faced with the mathematical possibility of non-Euclidean geometries, 19th-century geometers were tasked with the problem of determining which among the possible geometries corresponds to that of our space. In this context, the contribution of the Belgian philosopher-mathematician Joseph Delboeuf has been unduly neglected. The aim of this essay is to situate Delboeuf's ideas within the context of the philosophies of geometry of his contemporaries, such as Helmholtz, Russell, and Poincaré. We elucidate his central thesis, according to which Euclidean geometry is given special status on the basis of the relativity of magnitudes; we uncover its hidden history and show that it is defensible within the context of the philosophies of geometry of the epoch. Through this discussion, we also develop various ideas that have some relevance to present-day methods in gravitational physics and cosmology.

3.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066027

ABSTRACT

Strip steel plays a crucial role in modern industrial production, where enhancing the accuracy and real-time capability of surface defect classification is essential. However, acquiring and annotating defect samples for training deep learning models is challenging, and the task is further complicated by the redundant information present in these samples. These issues hinder the classification of strip steel surface defects. To address these challenges, this paper introduces ODNet (Orthogonal Decomposition Network), a highly real-time network designed for few-shot strip steel surface defect classification. ODNet uses ResNet as its backbone and incorporates orthogonal decomposition to reduce feature redundancy. Furthermore, it integrates skip connections to preserve essential correlation information in the samples and prevent its excessive elimination. The model optimizes parameter efficiency by employing the Euclidean distance as the classifier. The orthogonal decomposition not only helps reduce redundant image information but also satisfies the Euclidean distance classifier's requirement for orthogonal input. Extensive experiments on the FSC-20 benchmark demonstrate that ODNet achieves superior real-time performance, accuracy, and generalization compared with alternative methods, effectively addressing the challenges of few-shot strip steel surface defect classification.
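
Using the Euclidean distance as the classifier, as the abstract describes, typically means nearest-prototype matching in few-shot settings; the following is a minimal sketch under that assumption (names and embeddings are illustrative, not ODNet's):

```python
import numpy as np

# Nearest-prototype classification with Euclidean distance: each class is
# represented by the mean of its support embeddings, and a query embedding
# is assigned to the class with the nearest prototype.
def nearest_prototype(query, support, labels):
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - query, axis=1)   # Euclidean distances
    return classes[np.argmin(dists)]

support = np.array([[0.0, 0.0], [0.2, 0.1],      # two samples of class 0
                    [3.0, 3.0], [2.8, 3.1]])     # two samples of class 1
labels = np.array([0, 0, 1, 1])
pred = nearest_prototype(np.array([0.1, 0.0]), support, labels)
```

This classifier has no trainable parameters of its own, which is the parameter-efficiency argument made in the abstract.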

4.
Biol Sport ; 41(3): 15-28, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38952897

ABSTRACT

To improve soccer performance, coaches should be able to replicate the match's physical efforts during training sessions. For this goal, small-sided games (SSGs) are widely used. The main purpose of the current study was to develop similarity and overload scores that quantify the degree of similarity between SSG and match demands and the extent to which an SSG was able to replicate match intensity. GPS units were employed to collect external load data, which were grouped into three vectors (kinematic, metabolic, and mechanical). The Euclidean distance between training and match vectors was calculated and subsequently converted into a similarity score. The average of the pairwise differences between vectors was used to develop the overload scores. Three similarity scores (Simkin, Simmet, Simmec) and three overload scores (OVERkin, OVERmet, OVERmec) were defined for the kinematic, metabolic, and mechanical vectors. Simmet and OVERmet were excluded from further analysis because they showed a very large correlation (r > 0.7, p < 0.01) with Simkin and OVERkin. The scores were subsequently analysed considering team level (First team vs. U19 team) and SSG characteristics across the various playing roles. The independent-sample t-test showed (p < 0.01) that the First team presented greater Simkin (d = 0.91), OVERkin (d = 0.47), and OVERmec (d = 0.35) scores. Moreover, a generalized linear mixed model (GLMM) was employed to evaluate differences according to SSG characteristics. The results suggest that a specific SSG format could lead to different similarity and overload scores according to the playing position. This process could simplify data interpretation and categorize SSGs based on their scores.
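
A minimal sketch of the distance-to-similarity idea follows; the 1/(1 + d) mapping and the overload definition here are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

# Convert the Euclidean distance between a training (SSG) load vector and
# a match load vector into a bounded similarity score (illustrative mapping).
def similarity(train_vec, match_vec):
    d = np.linalg.norm(np.asarray(train_vec, float) - np.asarray(match_vec, float))
    return 1.0 / (1.0 + d)       # 1 when identical, tends to 0 as d grows

# Average pairwise difference: positive means the SSG exceeded match load.
def overload(train_vec, match_vec):
    return float(np.mean(np.asarray(train_vec, float) - np.asarray(match_vec, float)))

sim = similarity([100, 20, 5], [100, 20, 5])   # identical load vectors
ovr = overload([110, 22, 6], [100, 20, 5])     # SSG above match demands
```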

5.
ACS Nano ; 18(26): 17007-17017, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38952324

ABSTRACT

Neuromorphic computing promises an energy-efficient alternative to traditional digital processors in handling data-heavy tasks, primarily driven by the development of both volatile (neuronal) and nonvolatile (synaptic) resistive switches or memristors. However, despite their energy efficiency, memristor-based technologies presently lack functional tunability, thus limiting their competitiveness with arbitrarily programmable (general purpose) digital computers. This work introduces a two-terminal bilayer memristor, which can be tuned among neuronal, synaptic, and hybrid behaviors. The varying behaviors are accessed via facile control over the filament formed within the memristor, enabled by the interplay between the two active ionic species (oxygen vacancies and metal cations). This solution is unlike single-species ion migration employed in most other memristors, which makes their behavior difficult to control. By reconfiguring a single crossbar array of hybrid memristors, two different applications that usually require distinct types of devices are demonstrated: reprogrammable heterogeneous reservoir computing and arbitrary non-Euclidean graph networks. Thus, this work outlines a potential path toward functionally reconfigurable postdigital computers.

6.
Front Neurosci ; 18: 1360709, 2024.
Article in English | MEDLINE | ID: mdl-39071181

ABSTRACT

Introduction: Event-related potentials (ERPs), such as the P300, are widely utilized for non-invasive monitoring of brain activity in brain-computer interfaces (BCIs) via electroencephalogram (EEG). However, the non-stationary nature of EEG signals and the different data distributions among subjects create significant challenges for implementing real-time P300-based BCIs, requiring time-consuming calibration and a large number of training samples. Methods: To address these challenges, this study proposes a transfer learning-based approach that uses a convolutional neural network for high-level feature extraction, followed by Euclidean space data alignment to ensure similar distributions of the extracted features. Furthermore, a source selection technique based on the Euclidean distance metric was applied to measure the distance between each source feature sample and a reference point from the target domain. The samples with the lowest distance were then chosen to increase the similarity between the source and target datasets. Finally, the transferred features are applied to a discriminative restricted Boltzmann machine classifier for P300 detection. Results: The proposed method was evaluated on the state-of-the-art BCI Competition III dataset II and a rapid serial visual presentation dataset. The results demonstrate that the proposed technique achieves an average accuracy of 97%, both online and offline, after 15 repetitions, which is comparable to state-of-the-art methods. Notably, the proposed approach requires less than half of the training samples needed by previous studies. Discussion: This technique therefore offers an efficient solution for developing ERP-based BCIs with robust performance despite a reduced number of training samples.
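
The Euclidean-distance source selection step can be sketched as follows (a minimal illustration; the choice of reference point and of k are assumptions):

```python
import numpy as np

# Source selection by Euclidean distance: keep the k source feature samples
# closest to a reference point computed from the target domain.
def select_sources(source_feats, target_ref, k):
    dists = np.linalg.norm(source_feats - target_ref, axis=1)
    return np.argsort(dists)[:k]     # indices of the k nearest samples

source_feats = np.array([[0.0, 0.0], [5.0, 5.0], [0.5, 0.5], [9.0, 1.0]])
target_ref = np.array([0.0, 0.0])    # e.g., the mean of target features
chosen = select_sources(source_feats, target_ref, k=2)
```

The selected subset is then used for training, increasing the similarity between the source and target distributions.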

7.
Math Geosci ; 56(3): 437-464, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38846625

ABSTRACT

This paper describes a geostatistical approach to model and visualize the space-time distribution of groundwater contaminants. It is illustrated using data from one of the world's largest plumes of trichloroethylene (TCE) contamination, extending over 23 km², which has polluted drinking water wells in northern Michigan. A total of 613 TCE concentrations were recorded at 36 wells between May 2003 and October 2018. To account for the non-stationarity of the spatial covariance, the data were first projected into a new space using multidimensional scaling. During this spatial deformation, the domain is stretched in regions of relatively lower spatial correlation (i.e., higher spatial dispersion), while being contracted in regions of higher spatial correlation. The range of temporal autocorrelation is 43 months, while the spatial range is 11 km. The sample semivariogram was fitted using three different types of non-separable space-time models, and their prediction performance was compared using cross-validation. The sum-metric and product-sum semivariogram models performed equally well, with a mean absolute error of prediction corresponding to 23% of the mean TCE concentration. The observations were then interpolated every 6 months to the nodes of a 150 m spacing grid covering the study area, and the results were visualized using a three-dimensional space-time cube. This display highlights how TCE concentrations increased over time in the northern part of the study area, as the plume is flowing toward the so-called Chain of Lakes.
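
The projection step (multidimensional scaling from a distance matrix) can be illustrated with classical MDS; this toy example shows the underlying mechanics, not the paper's deformed-space computation:

```python
import numpy as np

# Classical multidimensional scaling: recover coordinates from a distance
# matrix via double-centering and eigendecomposition of the Gram matrix.
def classical_mds(D, dim=2):
    D2 = np.asarray(D, dtype=float) ** 2
    n = len(D2)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Three points on a line with spacings 1, 1 (so the extremes are 2 apart).
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D)
recon = np.linalg.norm(X[0] - X[2])          # should recover the distance 2
```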

8.
Ann Surg Open ; 5(2): e406, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38911657

ABSTRACT

Objective: The aim of this systematic review and meta-analysis is to identify current robotic assistance systems for percutaneous liver ablations, compare approaches, and determine how to achieve standardization of procedural concepts for optimized ablation outcomes. Background: Image-guided surgical approaches are increasingly common. Assistance by navigation and robotic systems makes it possible to optimize procedural accuracy, with the aim of consistently obtaining adequate ablation volumes. Methods: Several databases (PubMed/MEDLINE, ProQuest, Science Direct, Research Rabbit, and IEEE Xplore) were systematically searched for preclinical and clinical studies of robotic percutaneous liver ablation, and relevant original manuscripts were included according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The endpoints were the type of device, insertion technique (freehand or robotic), and the planning, execution, and confirmation of the procedure. A meta-analysis was performed, including comparative studies of freehand and robotic techniques in terms of radiation dose, accuracy, and Euclidean error. Results: The inclusion criteria were met by 33/755 studies. There were 24 robotic devices reported for percutaneous liver surgery. The most used were the MAXIO robot (8/33; 24.2%), Zerobot, and AcuBot (each 2/33, 6.1%). The most common tracking system was optical (25/33, 75.8%). In the meta-analysis, the robotic approach was superior to the freehand technique in terms of individual radiation (0.5582, 95% confidence interval [CI] = 0.0167-1.0996, dose-length product range 79-2216 mGy·cm), accuracy (0.6260, 95% CI = 0.1423-1.1097), and Euclidean error (0.8189, 95% CI = -0.1020 to 1.7399). Conclusions: Robotic assistance in percutaneous ablation for liver tumors achieves superior results and reduces errors compared with manual applicator insertion.
Standardization of concepts and reporting is necessary and suggested to facilitate the comparison of the different parameters used to measure liver ablation results. The increasing use of image-guided surgery has encouraged robotic assistance for percutaneous liver ablations. This systematic review analyzed 33 studies and identified 24 robotic devices, with optical tracking prevailing. The meta-analysis favored robotic assistance, showing increased accuracy and reduced errors compared with the freehand technique, emphasizing the need for conceptual standardization.

9.
Talanta ; 278: 126426, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38908135

ABSTRACT

BACKGROUND: Ankylosing spondylitis (AS), osteoarthritis (OA), and Sjögren's syndrome (SS) are three prevalent autoimmune diseases. If left untreated, they can lead to severe joint damage and greatly limited mobility. Once the disease worsens, patients may face the risk of long-term disability and, in severe cases, even life-threatening consequences. RESULTS: In this study, Raman spectral data of AS, OA, and SS were analyzed to assist disease diagnosis. For the first time, the Euclidean distance (ED) upscaling technique was used for the conversion of one-dimensional (1D) disease spectral data into two-dimensional (2D) spectral images. A dual-attention mechanism network was then constructed to analyze these 2D spectral maps for disease diagnosis. The results demonstrate that the dual-attention mechanism network achieves a diagnostic accuracy of 100% when analyzing 2D ED spectrograms. Furthermore, a comparison with S-transforms (ST), short-time Fourier transforms (STFT), recurrence plots (RP), Markov transition fields (MTF), and Gramian angular fields (GAF) highlights the significant advantage of the proposed method, as it greatly shortens the conversion time while supporting disease-assisted diagnosis. Mutual information (MI) was utilized for the first time to validate the generated 2D Raman spectrograms, including the ED, ST, STFT, RP, MTF, and GAF spectrograms, allowing evaluation of the similarity between the original 1D spectral data and the generated 2D spectrograms. SIGNIFICANCE: The results indicate that utilizing ED to transform 1D spectral data into 2D images, coupled with the application of a convolutional neural network (CNN) to analyze the 2D ED Raman spectrograms, holds great promise as a tool for assisting disease diagnosis. The research demonstrated that the 2D spectrogram created with ED closely resembles the original 1D spectral data, indicating that ED effectively captures key features and important information from the original data and provides a strong description of it.
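
One plausible reading of the ED upscaling step, sketched below, maps a 1D spectrum of n points to an n x n image of pairwise Euclidean distances; the paper's exact mapping is not given in the abstract, so this is an assumption:

```python
import numpy as np

# Turn a 1D spectrum of n intensities into an n x n image whose (i, j)
# entry is the Euclidean distance |x_i - x_j| between intensity values.
def spectrum_to_ed_image(spectrum):
    x = np.asarray(spectrum, dtype=float)
    return np.abs(x[:, None] - x[None, :])   # symmetric 2D distance map

img = spectrum_to_ed_image([1.0, 3.0, 6.0])
```

The resulting map is symmetric with a zero diagonal, giving a 2D texture that a CNN can analyze in place of the raw 1D trace.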

10.
Phys Med Biol ; 69(14)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38838678

ABSTRACT

Objective. Left ventricular hypertrophy (LVH) is the thickening of the left ventricle wall of the heart. The objective of this study is to develop a novel approach for the accurate assessment of LVH severity, addressing the limitations of traditional manual grading systems. Approach. We propose the Multi-purpose Siamese Weighted Euclidean Distance Model (MSWED), which utilizes convolutional Siamese neural networks and zero-shot/few-shot learning techniques. Unlike traditional methods, our model introduces a cutoff distance-based approach for zero-shot learning, enhancing accuracy. We also incorporate a weighted Euclidean distance targeting informative regions within echocardiograms. Main results. We collected comprehensive datasets labeled by experienced echocardiographers, including normal hearts and various levels of LVH severity. Our model outperforms existing techniques, demonstrating significant precision enhancement, with improvements of up to 13% for zero-shot and few-shot learning approaches. Significance. Accurate assessment of LVH severity is crucial for clinical prognosis and treatment decisions. Our proposed MSWED model offers a more reliable and efficient solution than traditional grading systems, reducing subjectivity and errors while providing enhanced precision in severity classification.
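
The weighted Euclidean distance at the heart of MSWED can be sketched as follows; the weights here are hypothetical stand-ins for the informative echocardiogram regions the model targets:

```python
import numpy as np

# Weighted Euclidean distance: dimensions deemed more informative receive
# larger weights, so differences there count more toward the distance.
def weighted_euclidean(a, b, w):
    a, b, w = map(np.asarray, (a, b, w))
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# With weight 4 on the first dimension, a unit difference there counts
# as a distance of 2 (sqrt(4 * 1^2)).
d = weighted_euclidean([1.0, 2.0], [0.0, 2.0], w=[4.0, 1.0])
```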


Subject(s)
Hypertrophy, Left Ventricular , Neural Networks, Computer , Humans , Hypertrophy, Left Ventricular/diagnostic imaging , Hypertrophy, Left Ventricular/physiopathology , Echocardiography , Image Processing, Computer-Assisted/methods
11.
J Neural Eng ; 21(3)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38776898

ABSTRACT

Objective: Electroencephalography signals are frequently used for various brain-computer interface (BCI) tasks. While deep learning (DL) techniques have shown promising results, they are hindered by substantial data requirements. By leveraging data from multiple subjects, transfer learning enables more effective training of DL models. A technique that is gaining popularity is Euclidean alignment (EA), due to its ease of use, low computational complexity, and compatibility with DL models. However, few studies evaluate its impact on the training performance of shared and individual DL models. In this work, we systematically evaluate the effect of EA combined with DL for decoding BCI signals. Approach: We used EA as a pre-processing step to train shared DL models with data from multiple subjects and evaluated their transferability to new subjects. Main results: Our experimental results show that EA improves decoding in the target subject by 4.33% and decreases convergence time by more than 70%. We also trained individual models for each subject to use in a majority-voting ensemble classifier. In this scenario, using EA improved the accuracy of the 3-model ensemble by 3.71%. However, when compared with the shared model with EA, the ensemble accuracy was 3.62% lower. Significance: EA succeeds in improving transfer learning performance with DL models and could be used as a standard pre-processing technique.
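
Euclidean alignment itself is simple to sketch: each subject's trials are whitened by the inverse square root of their mean spatial covariance, so that after alignment the mean covariance is the identity for every subject. A minimal version (array shapes assumed):

```python
import numpy as np

# Euclidean alignment (EA) for EEG trials of shape
# (n_trials, n_channels, n_samples).
def euclidean_alignment(trials):
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    r = covs.mean(axis=0)                          # reference covariance
    vals, vecs = np.linalg.eigh(r)
    r_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.array([r_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(0)
trials = rng.standard_normal((8, 4, 100))          # synthetic EEG trials
aligned = euclidean_alignment(trials)
mean_cov = np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0)
```

After alignment, `mean_cov` is the identity matrix, which places all subjects' data in a comparable reference frame before pooling for transfer learning.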


Subject(s)
Brain-Computer Interfaces , Deep Learning , Electroencephalography , Electroencephalography/methods , Humans , Male , Adult , Female , Algorithms
12.
J Oral Maxillofac Pathol ; 28(1): 111-118, 2024.
Article in English | MEDLINE | ID: mdl-38800435

ABSTRACT

Aims: The study aims to identify sexually dimorphic features in arch patterns, based on tooth arrangement in the maxillary and mandibular arches, using Euclidean Distance Matrix Analysis (EDMA). Settings and Design: A total of 96 Nepalese subjects, aged 18 to 25, were assessed using casts and photographs. Materials and Methods: Thirteen landmarks representing the most facial portions of the proximal contact areas on the maxillary and mandibular casts were digitised. The 78 possible Euclidean distances between the 13 landmarks were calculated using the Analysis ToolPak of Microsoft Excel®. The male-to-female ratios of the corresponding distances were computed and compared to evaluate gender variation in arch form among the Nepalese population. Statistical Analysis Used: The Microsoft Excel Analysis ToolPak and SPSS 20.0 (IBM, Chicago) were used to perform EDMA and an independent t-test to compare the significant differences between the two genders. Results: The maxillary arch's largest ratio (1.008179001) was found near the location of the right and left lateral incisors, indicating that the anterior region may have experienced the greatest change. The smallest ratio was found in the posterior molar region, suggesting less variation. At the intercanine region, female arches were wider than male ones; at the interpremolar and intermolar sections, they were similar in width. Female maxillary arches were also found to be bigger antero-posteriorly than those of males. The highest ratio in the mandibular arch (1.014336113) was found at the intermolar area, suggesting that males had a larger mandibular posterior arch morphology. At the intercanine area, the breadth of the arch form was greater in males, and it was nearly the same between genders at the interpremolar and intermolar regions. Female mandibular arch forms were also found to be longer antero-posteriorly than those of males.
Conclusions: The male and female arches in the Nepalese population were inferred to differ in size and shape. With reference to the landmarks demonstrating such a shift, EDMA objectively established the presence of square arch forms in Nepalese males and tapering arch forms in Nepalese females.
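
The EDMA computation described above reduces to pairwise Euclidean distances and between-group ratios; with 13 landmarks there are 13·12/2 = 78 unique distances, as in the study. A toy sketch with made-up 2D landmarks:

```python
import numpy as np

# All unique pairwise Euclidean distances among a set of landmarks.
def pairwise_distances(landmarks):
    p = np.asarray(landmarks, dtype=float)
    diff = p[:, None, :] - p[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(p), k=1)     # unique pairs only
    return d[iu]

# Hypothetical mean configurations: the "female" set is an exact
# half-scale version of the "male" set, so every ratio is 2.
male = [[0.0, 0.0], [4.0, 0.0], [4.0, 3.0]]
female = [[0.0, 0.0], [2.0, 0.0], [2.0, 1.5]]
ratios = pairwise_distances(male) / pairwise_distances(female)
```

In the study, ratios above or below 1 at particular landmark pairs localize where the male and female arch forms differ.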

13.
Sci Total Environ ; 941: 173488, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38810748

ABSTRACT

Wildfires strongly alter hydrological processes and surface- and groundwater quality in forested environments. The paired-watershed method, which compares a burnt (altered) watershed with an unburnt (control) watershed, is commonly adopted in studies addressing the hydrological effects of wildfires. This approach requires a calibration period to assess the pre-perturbation differences and relationships between the control and the altered watershed. Unfortunately, in many studies the calibration phase is lacking, due to the unpredictability of wildfires and the large number of processes that should be investigated. So far, no information is available on the possible bias induced by the lack of a calibration period in the paired-watershed method when assessing the hydrological impacts of wildfires. Through a literature review, the consequences of the lack of calibration on the assessment of wildfire-induced hydrological changes were evaluated, along with the most widely used watershed-pairing strategies. The literature analysis showed that if calibration is lacking, misestimation of wildfire impacts is likely, particularly when addressing low-severity or long-term wildfire effects. The Euclidean distance based on physical descriptors (geology, morphology, vegetation) was proposed as a metric of watershed similarity and tested in mountain watersheds in Central Italy. The Euclidean distance proved to be an effective metric for selecting the most similar watershed pairs. This work raises awareness of the biases introduced by the lack of calibration in paired-watershed studies and proposes a rigorous and objective methodology for future studies on the hydrological effects of wildfires.
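
The proposed pairing metric can be sketched as a standardized Euclidean distance over physical descriptors; the descriptor values below are invented for illustration:

```python
import numpy as np

# Pick the unburnt (control) watershed whose standardized physical
# descriptors are closest, in Euclidean distance, to the burnt watershed.
def most_similar(burnt, candidates):
    x = np.vstack([burnt] + list(candidates)).astype(float)
    z = (x - x.mean(axis=0)) / x.std(axis=0)     # standardize each descriptor
    dists = np.linalg.norm(z[1:] - z[0], axis=1)
    return int(np.argmin(dists))

burnt = [0.6, 12.0, 0.8]                          # e.g., geology, slope, cover
candidates = [[0.9, 30.0, 0.2],                   # dissimilar watershed
              [0.55, 13.0, 0.75]]                 # similar watershed
best = most_similar(burnt, candidates)            # index of the closest pair
```

Standardizing first keeps descriptors with large numeric ranges (slope, here) from dominating the distance.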

14.
Mikrochim Acta ; 191(6): 327, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740592

ABSTRACT

In the ratiometric fluorescent (RF) strategy, the selection of fluorophores and their respective ratios helps to create visual quantitative detection of target analytes. This study presents a framework for optimizing ratiometric probes, employing both two-component and three-component RF designs. For this purpose, in a two-component ratiometric nanoprobe designed for detecting methyl parathion (MP), an organophosphate pesticide, yellow-emissive thioglycolic acid-capped CdTe quantum dots (Y-QDs) (analyte-responsive) and blue-emissive carbon dots (CDs) (internal reference) were utilized. Mathematical polynomial equations modeled the emission profiles of the CDs and Y-QDs in the absence of MP, as well as the emission colors of the Y-QDs in the presence of MP, separately. In other two-/three-component examples, the detection of dopamine hydrochloride (DA) was investigated using an RF design based on blue-emissive carbon dots (B-CDs) (internal reference) and N-acetyl-L-cysteine-functionalized CdTe quantum dots with red/green emission colors (R-QDs/G-QDs) (analyte-responsive). The colors of the binary/ternary mixtures in the absence and presence of MP/DA were predicted using the fitted equations and additive color theory. Finally, the Euclidean distance method in the normalized CIE XYZ color space calculated the distance between predicted colors, with the maximum distance defining the real-optimal concentration of fluorophores. This strategy offers a more efficient and precise method for determining optimal probe concentrations than a trial-and-error approach, and its effectiveness was confirmed through experimental validation.
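
The final optimization step, picking the fluorophore ratio whose predicted colors are farthest apart in a normalized color space, can be sketched as follows (the color coordinates are invented for illustration):

```python
import numpy as np

# Among candidate fluorophore ratios, pick the one whose predicted colors
# with and without the analyte are farthest apart (Euclidean distance in a
# normalized color space).
def best_ratio(pred_pairs):
    # pred_pairs: list of (color_without_analyte, color_with_analyte)
    dists = [np.linalg.norm(np.subtract(a, b)) for a, b in pred_pairs]
    return int(np.argmax(dists))

pairs = [([0.30, 0.30, 0.40], [0.32, 0.31, 0.37]),   # barely distinguishable
         ([0.10, 0.60, 0.30], [0.50, 0.20, 0.30])]   # large visible shift
best = best_ratio(pairs)
```

Maximizing this distance favors probe concentrations whose color change is easiest to read out visually.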

15.
Stat Med ; 43(16): 3051-3061, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38803077

ABSTRACT

The matrix profile serves as a fundamental tool for providing insights into similar patterns within time series. Existing matrix profile algorithms have been developed primarily for the normalized Euclidean distance, which may not be a proper distance measure in many settings. The methodological work of this paper was motivated by statistical analysis of beat-to-beat interval (BBI) data collected from smartwatches to monitor e-cigarette users' heart rate change patterns, for which the original Euclidean distance (L2-norm) is a more suitable choice. Yet incorporating the Euclidean distance into existing matrix profile algorithms turns out to be computationally challenging, especially when the time series is long with extended query sequences. We propose a novel methodology to efficiently compute the matrix profile for long time series data based on the Euclidean distance. This methodology involves four key steps: (1) projection of the time series onto an eigenspace; (2) enhanced singular value decomposition (SVD) computation; (3) an early-abandon strategy; and (4) lower bounds determined from the first left singular vector. Simulation studies based on BBI data from the motivating example demonstrated remarkable reductions in computational time, ranging from one-fourth to one-twentieth of the time required by the conventional method. Unlike the conventional method, whose performance deteriorates sharply as the time series length or the query sequence length increases, the proposed method performs consistently well across a wide range of time series and query sequence lengths.
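
For reference, a naive un-normalized Euclidean matrix profile looks like the following; the paper's contribution is computing this efficiently via the four SVD-based steps, which this brute-force sketch does not reproduce (the exclusion-zone width is an assumption):

```python
import numpy as np

# Brute-force matrix profile with the plain (un-normalized) Euclidean
# distance: for each subsequence, the distance to its nearest non-trivial
# match elsewhere in the series.
def matrix_profile(ts, m):
    ts = np.asarray(ts, dtype=float)
    n = len(ts) - m + 1
    subs = np.stack([ts[i:i + m] for i in range(n)])
    profile = np.full(n, np.inf)
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - m // 2):i + m // 2 + 1] = np.inf  # exclude trivial matches
        profile[i] = d.min()
    return profile

# The pattern [0, 1, 2] repeats at indices 0, 3, and 7, so those
# subsequences have exact (zero-distance) matches.
ts = [0, 1, 2, 0, 1, 2, 9, 0, 1, 2]
mp = matrix_profile(ts, m=3)
```

This runs in O(n² m) time, which is exactly what makes long series with extended query sequences challenging and motivates the paper's speedups.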


Subject(s)
Algorithms , Heart Rate , Humans , Heart Rate/physiology , Electronic Nicotine Delivery Systems , Models, Statistical , Data Interpretation, Statistical
16.
Head Face Med ; 20(1): 34, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762519

ABSTRACT

BACKGROUND: We aimed to establish a novel method for automatically constructing the three-dimensional (3D) median sagittal plane (MSP) for patients with mandibular deviation, which can increase the efficiency of aesthetic evaluation of treatment progress. We developed a Euclidean weighted Procrustes analysis (EWPA) algorithm for extracting the 3D facial MSP based on Euclidean distance matrix analysis, automatically assigning weights to facial anatomical landmarks. METHODS: Forty patients with mandibular deviation were recruited. The Procrustes analysis (PA) algorithm based on original mirror alignment and the EWPA algorithm developed in this study were used to construct the MSP of each patient's facial model as experimental groups 1 and 2, respectively. The expert-defined regional iterative closest point algorithm was used to construct the MSP as the reference group. The angle errors of the two experimental groups were compared with those of the reference group to evaluate their clinical suitability. RESULTS: The angle errors of the MSPs constructed by the two EWPA algorithms and the PA algorithm for the 40 patients were 1.39 ± 0.85°, 1.39 ± 0.78°, and 1.91 ± 0.80°, respectively. The two EWPA algorithms performed best in patients with moderate facial asymmetry, and in patients with severe facial asymmetry the angle error remained below 2°, a significant improvement over the PA algorithm. CONCLUSIONS: In clinical application, the EWPA algorithm based on 3D facial morphological analysis for constructing a 3D facial MSP for patients with mandibular-deviation facial asymmetry showed a significant improvement over the conventional PA algorithm and achieved the effect of an expert-level dental diagnostic strategy.
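
The evaluation metric, the angle error between a constructed MSP and the reference MSP, can be sketched as the angle between the two plane normals (a standard formulation, assumed here rather than taken from the paper):

```python
import numpy as np

# Angle between two planes, given their normal vectors, in degrees.
# Planes are unsigned, so the absolute dot product is used.
def plane_angle_deg(n1, n2):
    n1, n2 = (np.asarray(v) / np.linalg.norm(v) for v in (n1, n2))
    cos = np.clip(abs(float(n1 @ n2)), 0.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

# A reference plane with normal along x, and a constructed plane whose
# normal is tilted slightly toward y: the error is about 1.15 degrees.
err = plane_angle_deg([1.0, 0.0, 0.0], [1.0, 0.02, 0.0])
```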


Subject(s)
Algorithms , Facial Asymmetry , Imaging, Three-Dimensional , Humans , Facial Asymmetry/diagnostic imaging , Male , Female , Imaging, Three-Dimensional/methods , Anatomic Landmarks , Mandible/diagnostic imaging , Adolescent , Adult , Young Adult , Cephalometry/methods , Face/diagnostic imaging
17.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733051

ABSTRACT

This paper proposes an improved initial alignment method for a strap-down inertial navigation system/global navigation satellite system (SINS/GNSS) integrated navigation system with large misalignment angles. Its methodology is based on the three-dimensional special Euclidean group and extended Kalman filter (SE2(3)/EKF) and aims to overcome the challenges of achieving fast alignment under large misalignment angles using traditional methods. To accurately characterize the state errors of attitude, velocity, and position, these elements are constructed as elements of a Lie group. The nonlinear error on the Lie group can then be well quantified. Additionally, a group vector mixed error model is developed, taking into account the zero bias errors of gyroscopes and accelerometers. Using this new error definition, a GNSS-assisted SINS dynamic initial alignment algorithm is derived, which is based on the invariance of velocity and position measurements. Simulation experiments demonstrate that the alignment method based on SE2(3)/EKF can achieve a higher accuracy in various scenarios with large misalignment angles, while the attitude error can be rapidly reduced to a lower level.

18.
Photodiagnosis Photodyn Ther ; 46: 104081, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38588873

ABSTRACT

SIGNIFICANCE: Vascular-targeted photodynamic therapy (V-PDT) is a clinically approved therapeutic approach for treating vascular-related diseases such as port-wine stains (PWS). For accurate treatment, different lesions require varying light irradiance because of the irregularity of vascular size and shape and the degree of disease, which commonly change during different stages of V-PDT. This makes quantitative analysis of the treatment efficiency urgently needed. APPROACH: Lesion images of patients with PWS taken pre- and post-V-PDT were used to construct a quantitative method for evaluating the differences among lesions. Image analysis techniques were applied to evaluate the V-PDT efficiency for PWS by determining Euclidean distances and two-dimensional correlation coefficients. RESULTS: According to the image analysis, V-PDT with good treatment efficiency resulted in a larger Euclidean distance and a smaller correlation coefficient compared with cases of lower V-PDT efficiency. CONCLUSIONS: A new method to quantify Euclidean distances and correlation coefficients has been proposed, which is promising for the quantitative analysis of V-PDT efficiency for PWS.
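
The two image metrics can be sketched on toy "lesion images" (2x2 arrays here; real inputs would be clinical photographs):

```python
import numpy as np

# Euclidean distance and 2D correlation coefficient between a pre- and a
# post-treatment image: effective V-PDT should give a large distance and
# a small correlation.
def image_metrics(pre, post):
    a = np.asarray(pre, dtype=float).ravel()
    b = np.asarray(post, dtype=float).ravel()
    dist = float(np.linalg.norm(a - b))
    corr = float(np.corrcoef(a, b)[0, 1])
    return dist, corr

pre = [[1.0, 2.0], [3.0, 4.0]]
dist_same, corr_same = image_metrics(pre, pre)     # unchanged lesion
post = [[4.0, 3.0], [2.0, 1.0]]                    # strongly changed lesion
dist_diff, corr_diff = image_metrics(pre, post)
```

An unchanged lesion gives distance 0 and correlation 1, while a strongly changed lesion gives a large distance and a low (here negative) correlation, matching the reported trend.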


Subject(s)
Photochemotherapy , Photosensitizing Agents , Port-Wine Stain , Port-Wine Stain/drug therapy , Photochemotherapy/methods , Humans , Photosensitizing Agents/therapeutic use , Female , Male , Adult , Aminolevulinic Acid/therapeutic use , Child , Adolescent
19.
Acta Crystallogr A Found Adv ; 80(Pt 3): 282-292, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38683646

ABSTRACT

Following the work of Day & Hawthorne [Acta Cryst. (2022), A78, 212-233] and Day et al. [Acta Cryst. (2024), A80, 258-281], the program GraphT-T has been developed to embed graphical representations of observed and hypothetical chains of (SiO4)4- tetrahedra into 2D and 3D Euclidean space. During embedding, the distances between linked vertices (T-T distances) in the resultant unit-distance graph are restrained to the average observed distance between linked Si tetrahedra (3.06±0.15 Å), and the separations between unlinked vertices (T...T separations) are restrained to be equal to or greater than the minimum distance between unlinked Si tetrahedra (3.713 Å) in silicate minerals. The notional interactions between vertices are described by a 3D spring-force algorithm in which the attractive forces between linked vertices behave according to Hooke's law and the repulsive forces between unlinked vertices behave according to Coulomb's law. Embedding parameters (i.e. the spring coefficient, k, and Coulomb's constant, K) are iteratively refined during embedding to determine whether a given graph can be embedded to produce a unit-distance graph with T-T distances and T...T separations compatible with those observed in crystal structures. A resultant unit-distance graph is denoted compatible, and may form crystal structures, if and only if all T-T distances agree with the average observed distance between linked Si tetrahedra (3.06±0.15 Å) and the minimum T...T separation is equal to or greater than the minimum distance between unlinked Si tetrahedra (3.713 Å) in silicate minerals. If a unit-distance graph does not satisfy these conditions, it is considered incompatible and the corresponding chain of tetrahedra is unlikely to form crystal structures. Using GraphT-T, Day et al. [Acta Cryst. (2024), A80, 258-281] have shown that several topological properties of chain graphs influence the flexibility (and rigidity) of the corresponding chains of Si tetrahedra and may explain why particular compatible chain arrangements (and the minerals in which they occur) are more common than others and/or why incompatible chain arrangements do not occur in crystals despite being topologically possible.
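As a rough illustration of the spring-force scheme described in the abstract (not the authors' GraphT-T code), the sketch below embeds a small chain graph with Hooke-law attraction between linked vertices and Coulomb-law repulsion between unlinked vertices. The function name `embed` and the fixed parameter values `k`, `K`, the step count and the force cap are all assumptions for illustration; GraphT-T itself refines k and K iteratively.

```python
import math
import random

T_T = 3.06       # target distance between linked vertices (Å)
T_T_MIN = 3.713  # minimum separation between unlinked vertices (Å)

def embed(n, edges, k=0.1, K=0.2, steps=2000, dim=3, seed=0):
    """Toy spring-force embedding of a graph with n vertices.

    Linked vertices attract via Hooke's law toward the target T-T
    distance; unlinked vertices repel via a (capped) Coulomb term.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-3.0, 3.0) for _ in range(dim)] for _ in range(n)]
    linked = {frozenset(e) for e in edges}
    for _ in range(steps):
        forces = [[0.0] * dim for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = [pos[j][a] - pos[i][a] for a in range(dim)]
                r = math.sqrt(sum(c * c for c in d)) or 1e-9
                unit = [c / r for c in d]
                if frozenset((i, j)) in linked:
                    f = k * (r - T_T)             # Hooke: pull toward T_T
                else:
                    f = -K / max(r, 1.0) ** 2     # Coulomb: push apart (capped)
                for a in range(dim):
                    forces[i][a] += f * unit[a]
                    forces[j][a] -= f * unit[a]
        for i in range(n):
            for a in range(dim):
                pos[i][a] += forces[i][a]
    return pos
```

For a three-tetrahedron chain (edges 0-1 and 1-2), endpoint repulsion straightens the chain, so the linked distances settle near 3.06 Å while the unlinked 0...2 separation comfortably exceeds 3.713 Å.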

20.
Acta Crystallogr A Found Adv ; 80(Pt 3): 258-281, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38683645

ABSTRACT

In Part I of this series, all topologically possible 1-periodic infinite graphs (chain graphs) representing chains of tetrahedra with up to 6-8 vertices (tetrahedra) per repeat unit were generated. This paper examines possible restraints on embedding these chain graphs into Euclidean space such that they are compatible with the metrics of chains of tetrahedra in observed crystal structures. Chain-silicate minerals with T = Si4+ (plus P5+, V5+, As5+, Al3+, Fe3+, B3+, Be2+, Zn2+ and Mg2+) have a grand nearest-neighbour ⟨T-T⟩ distance of 3.06±0.15 Å and a minimum T...T separation of 3.71 Å between non-nearest-neighbour tetrahedra, and in order for embedded chain graphs (called unit-distance graphs) to be possible atomic arrangements in crystals, they must conform to these metrics, a process termed equalization. It is shown that equalization of all acyclic chain graphs is possible in 2D and 3D, and that equalization of most cyclic chain graphs is possible in 3D but not necessarily in 2D. All unique ways in which non-isomorphic vertices may be moved are designated modes of geometric modification. If a mode (m) is applied to an equalized unit-distance graph such that a new geometrically distinct unit-distance graph is produced without changing the lengths of any edges, the mode is designated as valid (mv); if a new geometrically distinct unit-distance graph cannot be produced, the mode is invalid (mi). The parameters mv and mi are used to define ranges of rigidity of the unit-distance graphs, and are related to the edge-to-vertex ratio, e/n, of the parent chain graph. The program GraphT-T was developed to embed any chain graph into Euclidean space subject to the metric restraints on T-T and T...T. Embedding a selection of chain graphs with differing e/n ratios shows that the principal reason why many topologically possible chains cannot occur in crystal structures is violation of the requirement that T...T > 3.71 Å. Such a restraint becomes increasingly restrictive as e/n increases and indicates why chains with stoichiometry TO<2.5 do not occur in crystal structures.
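The metric restraints above reduce to a simple pairwise test on an embedded chain graph: every linked T-T distance must fall within 3.06±0.15 Å and every unlinked T...T separation must be at least 3.713 Å. A minimal sketch of such a check (the function name `is_compatible` and the input format are assumptions, not part of GraphT-T):

```python
import math

T_T_MEAN, T_T_TOL = 3.06, 0.15  # observed linked Si-Si distance (Å)
T_T_MIN_SEP = 3.713             # minimum unlinked Si...Si separation (Å)

def is_compatible(coords, edges):
    """Check an embedded chain graph against the silicate metric restraints.

    coords: list of (x, y, z) vertex positions in Å.
    edges:  list of (i, j) index pairs for linked vertices.
    """
    linked = {frozenset(e) for e in edges}
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            if frozenset((i, j)) in linked:
                if abs(r - T_T_MEAN) > T_T_TOL:
                    return False  # linked T-T distance outside 3.06±0.15 Å
            elif r < T_T_MIN_SEP:
                return False      # unlinked T...T separation below 3.713 Å
    return True
```

A straight three-vertex chain at 3.06 Å spacing passes, whereas a sharply bent one fails because the unlinked end vertices come closer than 3.713 Å, matching the paper's observation that the T...T restraint is what excludes most topologically possible chains.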
