Results 1 - 20 of 690
1.
MethodsX ; 13: 102935, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39295629

ABSTRACT

Aerial drone imaging is an efficient tool for mapping and monitoring coastal habitats at high spatial and temporal resolution. Drone imaging allows time- and cost-efficient mapping over larger areas than traditional mapping and monitoring techniques, while providing more detailed information than airplane or satellite imagery, enabling, for example, differentiation of various types of coastal vegetation. Here, we present a systematic method for shallow-water habitat classification based on drone imagery. The method includes:
• Collection of drone images and creation of orthomosaics.
• Gathering ground-truth data in the field to guide image annotation and to validate the final map product.
• Annotation of drone images into (potentially hierarchical) habitat classes and training of machine learning algorithms for habitat classification.
As a case study, we present a field campaign that employed these methods to map a coastal site dominated by seagrass, seaweed and kelp, in addition to sediments and rock. Such detailed yet efficient mapping and classification can aid in understanding and sustainably managing ecologically valuable marine ecosystems.

2.
PeerJ Comput Sci ; 10: e2260, 2024.
Article in English | MEDLINE | ID: mdl-39314711

ABSTRACT

Point clouds are highly regarded in the field of 3D object detection for their superior geometric properties and versatility. However, object occlusion and defects in scanning equipment frequently result in sparse and missing data within point clouds, adversely affecting the final prediction. Recognizing the synergistic potential between the rich semantic information present in images and the geometric data in point clouds for scene representation, we introduce a two-stage fusion framework (TSFF) for 3D object detection. To address the issue of corrupted geometric information in point clouds caused by object occlusion, we augment point features with image features, thereby enhancing the reference factor of the point cloud during the voting bias phase. Furthermore, we implement a constrained fusion module to selectively sample voting points using a 2D bounding box, integrating valuable image features while reducing the impact of background points in sparse scenes. Our methodology was evaluated on the SUN RGB-D dataset, where it achieved a 3.6 mean average precision (mAP) improvement under the mAP@0.25 evaluation criterion over the baseline. Compared with other leading 3D object detection methods, our method performed particularly well on several object categories.
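Aside on the evaluation protocol mentioned above: mAP@0.25 counts a detection as a true positive when its 3D intersection-over-union (IoU) with a ground-truth box reaches 0.25. A minimal sketch of 3D IoU for axis-aligned boxes (illustrative only; not the paper's evaluation code):

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Overlap extent along each axis, clipped at zero when boxes are disjoint
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

gt  = (0, 0, 0, 2, 2, 2)   # ground-truth box, volume 8
det = (1, 0, 0, 3, 2, 2)   # detection shifted by 1 along x
iou = iou_3d(gt, det)      # intersection 4, union 12 -> 1/3, above the 0.25 threshold
```

Real benchmarks additionally handle rotated boxes and rank detections by confidence before averaging precision; this sketch only shows the overlap test.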

3.
Sci Rep ; 14(1): 21938, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304703

ABSTRACT

We present an open access dataset for development, evaluation, and comparison of algorithms for individual tree detection in dense mixed forests. The dataset consists of a detailed field inventory and overlapping UAV LiDAR and RGB orthophoto, which make it possible to develop algorithms that fuse multimodal data to improve detection results. Along with the dataset, we describe and implement a basic local maxima filtering baseline and an algorithm for automatically matching detection results to the ground truth trees for detection algorithm evaluation.
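The local-maxima-filtering baseline described above can be sketched as follows (an illustrative reimplementation, not the authors' released code): a pixel of a canopy height model is a treetop candidate if it equals the maximum of its neighbourhood and exceeds a height threshold.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(chm, window=3, min_height=2.0):
    """Return (row, col) treetop candidates from a canopy height model."""
    # A pixel is a candidate if it is the maximum of its window and taller
    # than min_height (filters out ground and shrub pixels).
    is_max = chm == maximum_filter(chm, size=window)
    return np.argwhere(is_max & (chm > min_height))

chm = np.zeros((7, 7))
chm[2, 2] = 10.0   # one tall tree
chm[5, 5] = 1.0    # below the height threshold -> ignored
tops = local_maxima(chm)
```

The window size and height threshold are the tuning knobs such a baseline exposes; the dataset's matching algorithm would then pair these candidates with field-inventory trees.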

4.
Front Plant Sci ; 15: 1445490, 2024.
Article in English | MEDLINE | ID: mdl-39309178

ABSTRACT

Introduction: Monitoring the leaf area index (LAI), which is directly related to the growth status of rice, helps to optimize and meet the crop's fertilizer requirements for achieving high quality, high yield, and environmental sustainability. Unmanned aerial vehicle (UAV) remote sensing has great potential for precision monitoring in agriculture because it is efficient, nondestructive, and rapid. However, the spectral information currently in wide use is susceptible to factors such as soil background and canopy structure, leading to low accuracy in estimating rice LAI. Methods: In this paper, RGB and multispectral images of the critical growth period were acquired through rice field experiments. From these remote sensing images, spectral indices and texture information of the rice canopy were extracted. Furthermore, texture information of the various images at multiple scales was acquired through resampling and used to assess LAI estimation capacity. Results and discussion: The results showed that spectral indices (SI) based on RGB and multispectral imagery saturate in the middle and late growth stages of rice, leading to low accuracy in estimating LAI. Moreover, multiscale texture analysis revealed that the texture of multispectral images derived from the 680 nm band is less affected by resolution, whereas the texture of RGB images is resolution dependent. Fusing spectral and texture features with random forest and multiple stepwise regression algorithms revealed that the highest accuracy in estimating LAI is achieved with SI and texture features (0.48 m) from multispectral imagery. This approach yielded excellent predictions for both high and low LAI values. With the gradual improvement of satellite image resolution, the results of this study are expected to enable accurate monitoring of rice LAI on a large scale.

5.
Heliyon ; 10(18): e37356, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39309856

ABSTRACT

Monocular Simultaneous Localization and Mapping (SLAM), Visual Odometry (VO), and Structure from Motion (SFM) are techniques that have emerged recently to address the problem of reconstructing objects or environments using monocular cameras. Monocular pure visual techniques have become attractive solutions for 3D reconstruction tasks due to their affordability, light weight, easy deployment, good outdoor performance, and availability in most handheld devices without requiring additional input devices. In this work, we provide a comprehensive overview of the SLAM, VO, and SFM solutions to the 3D reconstruction problem that use a monocular RGB camera as the only source of information, to gather basic knowledge of this ill-posed problem and classify the existing techniques following a taxonomy. To achieve this goal, we extended the existing taxonomy to cover all the current classifications in the literature, comprising classic, machine learning, direct, indirect, dense, and sparse methods. We performed a detailed overview of 42 methods, considering 18 classic and 24 machine learning methods according to the ten categories defined in our extended taxonomy, comprehensively systematizing their algorithms and providing their basic formulations. Relevant information about each algorithm was summarized in nine criteria for classic methods and eleven criteria for machine learning methods to provide the reader with decision components to implement, select, or design a 3D reconstruction system. Finally, an analysis of the temporal evolution of each category was performed, which determined that the classical-sparse-indirect and classical-dense-indirect categories have been the most accepted solutions to the monocular 3D reconstruction problem over the last 18 years.

6.
Mikrochim Acta ; 191(10): 619, 2024 09 25.
Article in English | MEDLINE | ID: mdl-39320528

ABSTRACT

A wax-patterned paper analytical device (µPAD) has been developed for point-of-care colourimetric testing of serum glutamic oxaloacetic transaminase (SGOT). The detection method is based on the transamination reaction of aspartate with α-ketoglutarate, which forms oxaloacetate; this reacts with the reagent Fast Blue BB salt to give a cavern-pink colour. The intensity of the cavern-pink colour grows as the SGOT concentration increases. UV-visible spectroscopy was used to optimize the reaction conditions, and the optimized reagents were dropped onto the wax-patterned paper. The coloured PADs, after the addition of SGOT, were photographed, and a colour band was generated to correlate the SGOT concentration visually. The images were used to calculate intensity values with ImageJ software, which in turn were used to calculate the SGOT concentration. The PADs were also tested with serum samples and SGOT-spiked serum samples. The PAD could detect SGOT concentrations ranging from 5 to 200 U/L. The analysis yielded highly accurate results, with less than 6% relative error compared to the clinical sample. This colourimetric test demonstrated exceptional selectivity in the presence of other biomolecules in blood serum, with a detection limit of 2.77 U/L and a limit of quantification of 9.25 U/L. Additionally, a plasma separation membrane was integrated with the PAD to test SGOT directly from finger-prick blood samples.
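The intensity-to-concentration step in such colourimetric assays is a simple linear calibration. A generic sketch (the calibration values below are invented for illustration; the paper's actual intensities come from ImageJ measurements of the photographed PADs):

```python
import numpy as np

# Hypothetical calibration: mean colour intensity for standards of
# known SGOT activity (U/L). Values are illustrative only.
conc = np.array([5, 25, 50, 100, 150, 200], dtype=float)
intensity = np.array([12.0, 30.0, 52.5, 98.0, 151.0, 198.5])

slope, intercept = np.polyfit(conc, intensity, 1)   # least-squares line
unknown = (120.0 - intercept) / slope               # invert the line for a sample
```

Inverting the fitted line converts a measured intensity (here 120.0) back into an activity, which is how a colour band or LOD/LOQ figures are derived from replicate standards.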


Subjects
Aspartate Aminotransferases; Colorimetry; Point-of-Care Testing; Humans; Aspartate Aminotransferases/blood; Colorimetry/methods; Paper; Limit of Detection; Ketoglutaric Acids/blood; Ketoglutaric Acids/chemistry; Aspartic Acid/blood; Aspartic Acid/chemistry
7.
Ann Pharm Fr ; 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39222709

ABSTRACT

OBJECTIVE: To develop and validate a rapid, accurate, economical, effective, and green RP-HPLC method for the determination of Zolmitriptan in tablet dosage form. MATERIAL AND METHOD: The RP-HPLC method was developed using a Luna C18 (4.6 × 250 mm, 5 µm) column with sodium phosphate buffer (pH 4.7):methanol [75:25, v/v] as the mobile phase at a flow rate of 1.0 mL/min. Detection was carried out at 227 nm. Further, the eco-friendliness, productivity, and performance of the optimized analytical method were assessed with green and white analytical tools. RESULTS: The retention time of Zolmitriptan was found to be 3.25 min with acceptable chromatographic parameters. The optimized RP-HPLC method was more eco-friendly, efficient, high-throughput, and practicable than the reported methods, as confirmed by the AES, AGREE, GAPI, and RGB tools. Further, the proposed analytical method showed all validation parameters within the acceptance limits of the ICH Q2(R1) guidelines. Linear regression analysis indicated a good linear response in the 10 to 120 µg/mL concentration range with an R2 of 0.99998. The percentage content and percentage assay of Zolmitriptan in the Zomig 5 mg tablet were found to be 103.36 ± 0.356% and 97.86 ± 0.693%, respectively. CONCLUSION: The developed and validated method has several advantages over the reported HPLC methods and is useful in the systematic analysis of Zolmitriptan in its dosage form.
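The reported linearity (R2 of 0.99998 over 10-120 µg/mL) corresponds to an ordinary least-squares fit of detector response against concentration. A generic sketch with idealized, invented peak areas (not the paper's data):

```python
import numpy as np

conc = np.array([10, 20, 40, 60, 80, 100, 120], dtype=float)  # µg/mL standards
area = 1500.0 * conc + 200.0                                  # idealized detector response

coef = np.polyfit(conc, area, 1)       # slope and intercept of the calibration line
pred = np.polyval(coef, conc)
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - np.mean(area)) ** 2)
r2 = 1.0 - ss_res / ss_tot             # coefficient of determination
```

With real chromatographic data the residuals are nonzero, and the same R2 computation quantifies how close the method comes to the ICH linearity requirement.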

8.
Technol Health Care ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39240596

ABSTRACT

BACKGROUND: In radiography procedures, radiographers' suboptimal positioning and exposure parameter settings may necessitate image retakes, subjecting patients to unnecessary ionizing radiation exposure. Reducing retakes is crucial to minimize patient X-ray exposure and conserve medical resources. OBJECTIVE: We propose a Digital Radiography (DR) Pre-imaging All-round Assistant (PIAA) that leverages Artificial Intelligence (AI) technology to enhance traditional DR. METHODS: PIAA consists of an RGB-Depth (RGB-D) multi-camera array, an embedded computing platform, and multiple software components. It features an Adaptive RGB-D Image Acquisition (ARDIA) module that automatically selects the appropriate RGB camera based on the distance between the cameras and the patient. It includes a 2.5D Selective Skeletal Keypoints Estimation (2.5D-SSKE) module that fuses depth information with 2D keypoints to estimate the pose of target body parts. Third, it uses a Domain Expertise (DE) embedded Full-body Exposure Parameter Estimation (DFEPE) module that combines 2.5D-SSKE and DE to accurately estimate parameters for full-body DR views. RESULTS: PIAA optimizes the DR workflow, significantly enhancing operational efficiency: the average time required for positioning patients and preparing exposure parameters was reduced from 73 seconds to 8 seconds. CONCLUSIONS: PIAA shows significant promise for extension to full-body examinations.

9.
Artigo em Inglês | MEDLINE | ID: mdl-39271573

RESUMO

PURPOSE: RGB-D cameras in the operating room (OR) provide synchronized views of complex surgical scenes. Assimilating this multi-view data into a unified representation allows for downstream tasks such as object detection and tracking, pose estimation, and action recognition. Neural radiance fields (NeRFs) can provide continuous representations of complex scenes with a limited memory footprint. However, existing NeRF methods perform poorly in real-world OR settings, where a small set of cameras captures the room from entirely different vantage points. In this work, we propose NeRF-OR, a method for 3D reconstruction of dynamic surgical scenes in the OR. METHODS: Where other methods for sparse-view datasets use either time-of-flight sensor depth or dense depth estimated from color images, NeRF-OR uses a combination of both. The depth estimations mitigate the missing values that occur in sensor depth images due to reflective materials and object boundaries. We propose supervising with surface normals calculated from the estimated depths, because these are largely scale invariant. RESULTS: We fit NeRF-OR to static surgical scenes in the 4D-OR dataset and show that its representations are geometrically accurate, where the state of the art collapses to sub-optimal solutions. Compared to earlier work, NeRF-OR grasps fine scene details while training 30× faster. Additionally, NeRF-OR can capture whole-surgery videos while synthesizing views at intermediate time values with an average PSNR of 24.86 dB. Finally, we find that our approach has merit in sparse-view settings beyond the OR, by benchmarking on the NVS-RGBD dataset, which contains as few as three training views. NeRF-OR synthesizes images with a PSNR of 26.72 dB, a 1.7% improvement over the state of the art. CONCLUSION: Our results show that NeRF-OR allows for novel view synthesis with videos captured by a small number of cameras with entirely different vantage points, which is the typical camera setting in the OR.
Code is available via: github.com/Beerend/NeRF-OR .

10.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275384

ABSTRACT

Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling, thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation from single RGB images with unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image obtained from the internet is challenging because such images often lack crucial metadata. Existing methods such as FocalPose and FocalPose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera's focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily, and we predict all the other pose parameters and the focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. Iterative update rules constrained to the first stage and tailored loss functions, including the Huber loss in the second stage, enhance the accuracy of both 6DoF pose and focal length estimation. Experimental results on benchmark datasets show significant improvements in median rotation and translation errors, as well as better projection accuracy, compared to existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, incorporating the Huber loss significantly reduced translation and focal length errors, by 20.27% and 6.65%, respectively, compared to the FocalPose++ method.
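The tz-focal-length ambiguity this two-stage strategy targets falls directly out of the pinhole model: for a (near-)planar object, scaling the focal length and tz by the same factor leaves the projected image unchanged. A numerical illustration under that simplifying assumption (not the authors' code):

```python
import numpy as np

def project(points, f, tz):
    """Pinhole projection u = f*x/z, v = f*y/z after translating by tz along z."""
    z = points[:, 2] + tz
    return f * points[:, :2] / z[:, None]

# A planar object (z = 0): doubling BOTH f and tz produces the identical image,
# so tz and f cannot be separated from one image without extra cues.
pts = np.array([[0.1, 0.2, 0.0], [-0.3, 0.1, 0.0], [0.05, -0.25, 0.0]])
img_a = project(pts, f=500.0, tz=2.0)
img_b = project(pts, f=1000.0, tz=4.0)
ambiguous = np.allclose(img_a, img_b)
```

For objects with nonzero depth extent the equality is only approximate, which is why fixing tz first and recovering its true value (with the focal length rescaled accordingly) in a second stage can break the degeneracy.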

11.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275569

ABSTRACT

The digitization of pathology departments in hospitals around the world is now a reality. Current commercial solutions for digitizing histopathological samples consist of a robotic microscope with an RGB camera attached. This technology is very limited in the information it captures, as it works with only three spectral bands of the visible electromagnetic spectrum. We therefore present an automated system that combines RGB and hyperspectral technology. Throughout this work, the hardware of the system and its components are described, along with the developed software and a working methodology to ensure the correct capture of histopathological samples. The software comprises the microscope controller, which features autofocus, whole-slide scanning with a stitching algorithm, and hyperspectral scanning. As a reference, the time to capture and process a complete sample with 20 regions of high biological interest using the proposed method is estimated at a maximum of 79 min, at least three times faster than a manual operator. Both hardware and software can be easily adapted to other systems that might benefit from the advantages of hyperspectral technology.


Subjects
Algorithms; Image Processing, Computer-Assisted; Microscopy; Software; Microscopy/methods; Microscopy/instrumentation; Image Processing, Computer-Assisted/methods; Humans; Databases, Factual; Hyperspectral Imaging/methods; Hyperspectral Imaging/instrumentation
12.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275705

ABSTRACT

Crop height and biomass are two important phenotyping traits for screening forage population types at local and regional scales. This study compares the performance of multispectral and RGB sensors onboard drones for quantitative retrieval of forage crop height and biomass at very high resolution. We acquired unmanned aerial vehicle (UAV) multispectral images (MSIs) at 1.67 cm spatial resolution and visible (RGB) data at 0.31 cm resolution, and measured forage height and above-ground biomass over alfalfa (Medicago sativa L.) breeding trials in the Canadian Prairies. (1) For height estimation, the digital surface model (DSM) and digital terrain model (DTM) were extracted from MSI and RGB data, respectively. As the resolution of the DTM is five times lower than that of the DSM, we applied an aggregation algorithm to the DSM so that DSM and DTM share the same spatial resolution. The difference between DSM and DTM was computed as the canopy height model (CHM), at 8.35 cm and 1.55 cm resolution for MSI and RGB data, respectively. (2) For biomass estimation, the normalized difference vegetation index (NDVI) from MSI data and the excess green (ExG) index from RGB data were analyzed and regressed against ground measurements, leading to empirical models. The results indicate better performance of MSI for above-ground biomass (AGB) retrieval at 1.67 cm resolution and better performance of RGB data for canopy height retrieval at 1.55 cm. Although the retrieved height was well correlated with the ground measurements, a significant underestimation was observed; we therefore developed a bias correction function to match the retrievals with the ground measurements. This study provides insight into the optimal selection of sensor for specific targeted vegetation growth traits in a forage crop.
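The height-retrieval step described above (aggregate the finer raster to the coarser grid, then subtract terrain from surface) can be sketched as follows. The arrays and the aggregation factor of five mirror the description; the data are synthetic:

```python
import numpy as np

def aggregate(grid, factor):
    """Block-average a raster so its resolution matches a coarser grid."""
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

dsm = np.full((10, 10), 1.8)   # canopy surface height (m), fine resolution
dtm = np.full((2, 2), 0.3)     # bare-ground height (m), five times coarser
chm = aggregate(dsm, 5) - dtm  # canopy height model = DSM - DTM
```

On real rasters the two grids must also be co-registered before differencing; block averaging only handles the resolution mismatch.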


Subjects
Biomass; Algorithms; Unmanned Aerial Devices; Medicago sativa/growth & development; Crops, Agricultural/growth & development
13.
Toxins (Basel) ; 16(8)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39195764

ABSTRACT

Fusarium head blight (FHB) is a plant disease caused by various species of the Fusarium fungus. One of the major concerns associated with Fusarium spp. is their ability to produce mycotoxins. Mycotoxin contamination in small grain cereals is a risk to human and animal health and leads to major economic losses. A reliable site-specific precise Fusarium spp. infection early warning model is, therefore, needed to ensure food and feed safety by the early detection of contamination hotspots, enabling effective and efficient fungicide applications, and providing FHB prevention management advice. Such precision farming techniques contribute to environmentally friendly production and sustainable agriculture. This study developed a predictive model, Sága, for on-site FHB detection in wheat using imaging spectroscopy and deep learning. Data were collected from an experimental field in 2021 including (1) an experimental field inoculated with Fusarium spp. (52.5 m × 3 m) and (2) a control field (52.5 m × 3 m) not inoculated with Fusarium spp. and sprayed with fungicides. Imaging spectroscopy data (hyperspectral images) were collected from both the experimental and control fields with the ground truth of Fusarium-infected ear and healthy ear, respectively. Deep learning approaches (pretrained YOLOv5 and DeepMAC on Global Wheat Head Detection (GWHD) dataset) were used to segment wheat ears and XGBoost was used to analyze the hyperspectral information related to the wheat ears and make predictions of Fusarium-infected wheat ear and healthy wheat ear. The results showed that deep learning methods can automatically detect and segment the ears of wheat by applying pretrained models. The predictive model can accurately detect infected areas in a wheat field, achieving mean accuracy and F1 scores exceeding 89%. The proposed model, Sága, could facilitate the early detection of Fusarium spp. to increase the fungicide use efficiency and limit mycotoxin contamination.


Subjects
Deep Learning; Edible Grain; Fusarium; Plant Diseases; Triticum; Triticum/microbiology; Fusarium/isolation & purification; Edible Grain/microbiology; Edible Grain/chemistry; Plant Diseases/microbiology; Food Contamination/analysis; Mycotoxins/analysis; Fungicides, Industrial/analysis
14.
J Sci Food Agric ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39149861

ABSTRACT

BACKGROUND: Leaf area index (LAI) is an important indicator for assessing plant growth and development and is closely related to photosynthesis. Rapid, accurate estimation of crop LAI plays an important role in guiding farmland production. In this study, UAV-RGB technology was used to estimate the LAI of 65 winter wheat varieties at different fertility periods; the varieties included farm varieties, main cultivars, new lines, core germplasm, and foreign varieties. Color indices (CIs) and texture features were extracted from RGB images to determine their quantitative link to LAI. RESULTS: Among the extracted image features, LAI exhibited a significant positive correlation with CIs (r = 0.801) and a significant negative correlation with texture features (r = -0.783). Furthermore, the visible atmospheric resistance index, the green-red vegetation index, and the modified green-red vegetation index among the CIs, and the mean among the texture features, demonstrated strong correlations with LAI (r > 0.8). With reference to the model input variables, the backpropagation neural network (BPNN) model of LAI based on both CIs and texture features (R2 = 0.730, RMSE = 0.691, RPD = 1.927) outperformed models constructed from individual variables. CONCLUSION: This study offers a theoretical basis and technical reference for precise monitoring of winter wheat LAI with consumer-grade UAVs. The BPNN model incorporating CIs and texture features proved superior for estimating LAI and offers a reliable method for monitoring the growth of winter wheat. © 2024 Society of Chemical Industry.
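The colour indices named above have standard published definitions in terms of the R, G, and B channels. A sketch using the commonly cited formulas (not code from this paper; the small epsilon guards against division by zero):

```python
import numpy as np

def color_indices(img):
    """Compute VARI, GRVI and MGRVI from an RGB image array (..., 3)."""
    r, g, b = (img[..., i].astype(float) for i in range(3))
    vari  = (g - r) / (g + r - b + 1e-9)          # visible atmospherically resistant index
    grvi  = (g - r) / (g + r + 1e-9)              # green-red vegetation index
    mgrvi = (g**2 - r**2) / (g**2 + r**2 + 1e-9)  # modified green-red vegetation index
    return vari, grvi, mgrvi

pixel = np.array([[[0.2, 0.6, 0.1]]])   # one vegetated pixel (R, G, B)
vari, grvi, mgrvi = color_indices(pixel)
```

Greener pixels push all three indices toward positive values, which is what makes them usable as LAI predictors alongside texture features.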

15.
Front Neurosci ; 18: 1453419, 2024.
Article in English | MEDLINE | ID: mdl-39176387

ABSTRACT

Integrating RGB and Event (RGBE) multi-domain information obtained by high-dynamic-range, high-temporal-resolution event cameras is considered an effective scheme for robust object tracking. However, existing RGBE tracking methods have overlooked the unique spatio-temporal features of the different domains, leading to tracking failure and inefficiency, especially for objects against complex backgrounds. To address this problem, we propose a novel tracker based on adaptive-time feature extraction hybrid networks, namely the Siamese Event Frame Tracker (SiamEFT), which focuses on the effective representation and utilization of the diverse spatio-temporal features of RGBE. We first design an adaptive-time attention module that aggregates event data into frames based on adaptive-time weights to enhance information representation. Subsequently, the SiamEF module and a cross-network fusion module combining artificial and spiking neural networks are designed to effectively extract and fuse the spatio-temporal features of RGBE. Extensive experiments on two RGBE datasets (VisEvent and COESOT) show that SiamEFT achieves success rates of 0.456 and 0.574, outperforming state-of-the-art competing methods and exhibiting a 2.3-fold enhancement in efficiency. These results validate the superior accuracy and efficiency of SiamEFT in diverse and challenging scenes.

16.
Spectrochim Acta A Mol Biomol Spectrosc ; 323: 124874, 2024 Dec 15.
Article in English | MEDLINE | ID: mdl-39096673

ABSTRACT

Peptide-fluorophore conjugates (PFCs) have been widely utilized for metal-ion recognition owing to their distinctive characteristics. Selective detection and quantification of aluminum is essential to minimize health and environmental risks. Herein, we report the synthesis and characterization of a new chemoprobe with aggregation-induced emission (AIE) characteristics, prepared by chemically conjugating the rhodamine B fluorophore with a tripeptide. The probe revealed a β-sheet secondary conformation in both the solid and solution states, as confirmed by FT-IR, PXRD, and CD experiments. AIE studies of the probe in water-MeCN mixtures revealed the formation of spherical nanoaggregates with an average size of 353 ± 7 nm, as confirmed by SEM, TEM, and DLS. The probe exhibited a large Stokes shift (175 nm) and displayed selective colorimetric and fluorometric responses towards Al3+ ions with an extremely low detection limit (51 nM) and a fast response time (≤15 s). Comparative NMR studies confirmed the cleavage of the spirolactam ring upon aluminum binding. The probe's practicality was enhanced through integration into test strips and thin films, allowing solid-phase detection of Al3+ ions. Furthermore, an RGB-Arduino-enabled optosensing device has been developed for instant, quantifiable analysis of aluminum concentrations in real time.

17.
IEEE J Transl Eng Health Med ; 12: 580-588, 2024.
Article in English | MEDLINE | ID: mdl-39155921

ABSTRACT

OBJECTIVE: Low-cost, portable RGB-D cameras with integrated motion tracking functionality enable easy-to-use 3D motion analysis without requiring expensive facilities and specialized personnel. However, the accuracy of existing systems is insufficient for most clinical applications, particularly when applied to children. In previous work, we developed an RGB-D camera-based motion tracking method and showed that it accurately captures body joint positions of children and young adults in 3D. In this study, the validity and accuracy of clinically relevant motion parameters that were computed from kinematics of our motion tracking method are evaluated in children and young adults. METHODS: Twenty-three typically developing children and healthy young adults (5-29 years, 110-189 cm) performed five movement tasks while being recorded simultaneously with a marker-based Vicon system and an Azure Kinect RGB-D camera. Motion parameters were computed from the extracted kinematics of both methods: time series measurements, i.e., measurements over time, peak measurements, i.e., measurements at a single time instant, and movement smoothness. The agreement of these parameter values was evaluated using Pearson's correlation coefficients r for time series data, and mean absolute error (MAE) and Bland-Altman plots with limits of agreement for peak measurements and smoothness. RESULTS: Time series measurements showed strong to excellent correlations (r-values between 0.8 and 1.0), MAE for angles ranged from 1.5 to 5 degrees and for smoothness parameters (SPARC) from 0.02-0.09, while MAE for distance-related parameters ranged from 9 to 15 mm. CONCLUSION: Extracted motion parameters are valid and accurate for various movement tasks in children and young adults, demonstrating the suitability of our tracking method for clinical motion analysis. 
CLINICAL IMPACT: The low-cost portable hardware in combination with our tracking method enables motion analysis outside of specialized facilities while providing measurements that are close to those of the clinical gold-standard.
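The agreement statistics used above, MAE and Bland-Altman limits of agreement, are simple to compute from paired measurements. A generic sketch with synthetic joint-angle data (not the study's measurements):

```python
import numpy as np

vicon  = np.array([30.0, 45.0, 60.0, 75.0, 90.0])   # reference joint angles (deg)
kinect = np.array([31.5, 44.0, 62.0, 74.0, 91.5])   # RGB-D estimates (deg)

mae  = np.mean(np.abs(kinect - vicon))   # mean absolute error
diff = kinect - vicon
bias = diff.mean()                       # Bland-Altman bias (mean difference)
loa  = (bias - 1.96 * diff.std(ddof=1),  # 95% limits of agreement
        bias + 1.96 * diff.std(ddof=1))
```

The Bland-Altman plot itself is the differences against the pairwise means, with horizontal lines at the bias and the two limits of agreement.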


Subjects
Imaging, Three-Dimensional; Movement; Humans; Child; Adolescent; Young Adult; Adult; Male; Female; Movement/physiology; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Biomechanical Phenomena; Child, Preschool; Reproducibility of Results; Video Recording/instrumentation; Video Recording/methods; Photography/instrumentation; Photography/methods
18.
Chemistry ; : e202402708, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136930

ABSTRACT

In this study, a novel multi-stimulus-responsive RGB fluorescent organic molecule, RTPE-NH2, was designed and synthesized by combining the aggregation-induced emission tetraphenylethylene (TPE) luminophore with the acid-responsive fluorescent molecular switch rhodamine B. RTPE-NH2 exhibits aggregation-induced emission behavior, as well as UV-irradiation-stimulus and acid-stimulus responsive fluorescence properties. It can emit orange-red (R), green (G), and blue (B) light in both solution and PMMA film under 365 nm excitation. A dark through-bond energy transfer (DTBET) mechanism was proposed and supported by control experiments and TD-DFT calculations. The synthesis and application of RTPE-NH2 could accelerate the development of organic smart materials with high sensitivity and excellent optical properties.

19.
Sensors (Basel) ; 24(15)2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39124084

ABSTRACT

The sturgeon is an important commercial aquaculture species in China. Measuring sturgeon mass plays an important role in aquaculture management and serves as a key phenotype, offering crucial information for enhancing growth traits through genetic improvement. Until now, sturgeon mass has usually been measured by manual sampling, which is work-intensive and time-consuming for farmers and invasive and stressful for the fish. Therefore, a noninvasive volume-reconstruction model for estimating the mass of swimming sturgeon based on an RGB-D sensor is proposed in this paper. The volume of an individual sturgeon is reconstructed by integrating the thickness of its upper surface, where the difference in depth between the surface and the bottom is used as the thickness measurement. To verify feasibility, three experimental groups were conducted, achieving prediction accuracies of 0.897, 0.861, and 0.883, indicating that the method yields reliable, accurate sturgeon mass estimates. The strategy requires no special hardware or intensive computation, and it enables noncontact, high-throughput, and highly sensitive mass evaluation of sturgeon while holding potential for evaluating the mass of other cultured fishes.
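The volume-reconstruction idea, integrating per-pixel thickness taken as the depth difference between the tank bottom and the fish's upper surface, can be sketched as follows (illustrative, not the authors' implementation; the mask and numbers are synthetic):

```python
import numpy as np

def volume_from_depth(depth, bottom_depth, pixel_area, mask):
    """Integrate per-pixel thickness (bottom minus surface depth) over the fish mask."""
    thickness = np.clip(bottom_depth - depth, 0, None)   # negative values are noise
    return np.sum(thickness[mask]) * pixel_area

depth = np.full((4, 4), 1.0)   # depth from camera to tank bottom (m)
depth[1:3, 1:3] = 0.9          # fish surface sits 10 cm above the bottom
mask = depth < 1.0             # crude segmentation of the fish pixels
vol = volume_from_depth(depth, 1.0, pixel_area=1e-4, mask=mask)  # m^3
```

Mass would then follow from a regression of measured mass on reconstructed volume, which is where the reported prediction accuracies come from.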


Subjects
Aquaculture; Fishes; Swimming; Animals; Fishes/physiology; Swimming/physiology; Aquaculture/methods
20.
Diagnostics (Basel) ; 14(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39125501

ABSTRACT

The implementation of tumor grading tasks with image processing and machine learning techniques has progressed immensely over the past several years. Multispectral imaging enabled us to capture the sample as a set of image bands corresponding to different wavelengths in the visible and infrared spectrums. The higher dimensional image data can be well exploited to deliver a range of discriminative features to support the tumor grading application. This paper compares the classification accuracy of RGB and multispectral images, using a case study on colorectal tumor grading with the QU-Al Ahli Dataset (dataset I). Rotation-invariant local phase quantization (LPQ) features with an SVM classifier resulted in 80% accuracy for the RGB images compared to 86% accuracy with the multispectral images in dataset I. However, the higher dimensionality elevates the processing time. We propose a band-selection strategy using mutual information between image bands. This process eliminates redundant bands and increases classification accuracy. The results show that our band-selection method provides better results than normal RGB and multispectral methods. The band-selection algorithm was also tested on another colorectal tumor dataset, the Texas University Dataset (dataset II), to further validate the results. The proposed method demonstrates an accuracy of more than 94% with 10 bands, compared to using the whole set of 16 multispectral bands. Our research emphasizes the advantages of multispectral imaging over the RGB imaging approach and proposes a band-selection method to address the higher computational demands of multispectral imaging.
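The band-selection idea, dropping bands that carry mostly redundant information as measured by mutual information between bands, can be sketched with a histogram-based estimator (an illustrative estimator, not the paper's exact algorithm):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information between two image bands (nats)."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0   # sum only over cells with nonzero joint probability
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
band_a = rng.random((64, 64))
band_b = band_a + 0.01 * rng.random((64, 64))   # nearly a copy of band_a
band_c = rng.random((64, 64))                    # statistically independent band
redundant = mutual_information(band_a, band_b)   # high MI -> keep only one of the pair
informative = mutual_information(band_a, band_c) # low MI -> both bands add information
```

Ranking band pairs by such an MI score and pruning the highest-MI partners is one way to shrink 16 multispectral bands to the ~10 that preserved accuracy in the study.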
