1.
Npj Imaging ; 2(1): 15, 2024.
Article in English | MEDLINE | ID: mdl-38962496

ABSTRACT

Batch effects (BEs) refer to systematic technical differences in data collection, unrelated to biological variation, whose noise has been shown to negatively impact machine learning (ML) model generalizability. Here we release CohortFinder (http://cohortfinder.com), an open-source tool aimed at mitigating BEs via data-driven cohort partitioning. We demonstrate that CohortFinder improves ML model performance in downstream digital pathology and medical image processing tasks. CohortFinder is freely available for download at cohortfinder.com.
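A minimal sketch of the data-driven cohort partitioning idea (not CohortFinder's actual implementation): cluster slides by per-slide features and stratify the train/test split by cluster so that presumed batches are represented in both partitions. The synthetic feature matrix below stands in for real quality or embedding metrics.

```python
# Sketch of data-driven cohort partitioning in the spirit of CohortFinder
# (not its actual implementation): cluster slides by quality/embedding features,
# then stratify the train/test split by cluster so presumed batches are balanced.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))   # hypothetical per-slide quality/embedding features
slide_ids = np.arange(200)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Stratifying by cluster keeps each presumed batch represented in both partitions,
# so a model is not trained on some batches and evaluated only on different ones.
train_ids, test_ids = train_test_split(
    slide_ids, test_size=0.3, stratify=clusters, random_state=0
)
print(f"train: {len(train_ids)} slides, test: {len(test_ids)} slides")
```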

2.
J Biophotonics ; : e202400105, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955359

ABSTRACT

Nail fold capillaroscopy is an important means of monitoring human health. Panoramic nail fold images improve the efficiency and accuracy of examinations. However, the acquisition of panoramic nail fold images is seldom studied, and image stitching of such images suffers from a shortage of matching feature points. This paper therefore presents a method for panoramic nail fold image stitching based on vascular contour enhancement, which first addresses the shortage of matching feature points by pre-processing the images with contrast-limited adaptive histogram equalization (CLAHE), bilateral filtering (BF), and sharpening algorithms. The panoramic images of the nail fold blood vessels are then stitched using the speeded-up robust features (SURF), Fast Library for Approximate Nearest Neighbors (FLANN), and random sample consensus (RANSAC) algorithms. The experimental results show that the panoramic image stitched by the proposed algorithm has a field-of-view width of 7.43 mm, which improves the efficiency and accuracy of diagnosis.
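A hedged sketch of the preprocessing and feature-matching pipeline described above: SIFT stands in for SURF (which requires a non-free opencv-contrib build), the file names are hypothetical, and the parameter values are illustrative rather than those used in the paper.

```python
# Contour enhancement (CLAHE + bilateral filter + unsharp masking), feature
# matching (SIFT as a stand-in for SURF, FLANN), and RANSAC homography estimation.
import cv2
import numpy as np

img1 = cv2.imread("nailfold_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("nailfold_right.png", cv2.IMREAD_GRAYSCALE)

def enhance(img):
    # Boost vessel contours before feature detection
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    img = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)   # unsharp masking

e1, e2 = enhance(img1), enhance(img2)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(e1, None)
kp2, des2 = sift.detectAndCompute(e2, None)

flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects outlier matches
print("homography:\n", H)
```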

4.
J Dent ; 148: 105216, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38950768

ABSTRACT

OBJECTIVE: To digitally evaluate the three-dimensional (3D) remodelling of FGG used to treat RT2 gingival recessions and lack of keratinized tissue on mandibular incisor teeth. METHODS: Data from 45 patients included in a previous multicentric RCT were analyzed. Silicone impressions were taken before (baseline) and 3, 6 and 12 months after standardized FGG placement. Casts were scanned and images were superimposed, using digital software, to obtain measurements of estimated soft tissue thickness (eTT; 1, 3, and 5 mm apical to the baseline gingival margin). In addition, soft tissue volume (STV) and creeping attachment (CA) were assessed. RESULTS: All patients exhibited postoperative eTT and STV increases at all time points. The greatest mean thickness gain was observed at eTT3 (1.0 ± 0.4 mm) at 12 months. At 12 months, STV was 52.3 ± 21.1 mm3, without relevant changes compared to the 3- and 6-month follow-ups. CA, observed as early as six months postoperatively, was evident in ~85% of teeth at 12 months. CONCLUSIONS: Application of FGG was an effective phenotype modification therapy, as shown by the significantly increased tissue thickness postoperatively. Although the FGG technique does not aim at root coverage, digital 3D assessment documented the early and frequent postoperative occurrence of CA, which helped improve recession treatment outcomes. CLINICAL SIGNIFICANCE: The use of 3D assessment methodology allows precise identification of the tissue gain obtained with FGG, which, regardless of technique, results in predictable phenotype modification and frequent occurrence of creeping attachment.

5.
Microbiol Spectr ; : e0003224, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980028

ABSTRACT

Time-lapse microscopy offers a powerful approach for analyzing cellular activity. In particular, this technique is valuable for assessing the behavior of bacterial populations, which can exhibit growth and intercellular interactions in a monolayer. Such time-lapse imaging typically generates large quantities of data, limiting the options for manual investigation. Several image-processing software packages have been developed to facilitate analysis. It can thus be a challenge to identify the software package best suited to a particular research goal. Here, we compare four software packages that support the analysis of 2D time-lapse images of cellular populations: CellProfiler, SuperSegger-Omnipose, DeLTA, and FAST. We compare their performance against benchmarked results on time-lapse observations of Escherichia coli populations. Performance varies across the packages, with each of the four outperforming the others in at least one aspect of the analysis. Not surprisingly, the packages that have been in development for longer showed the strongest performance. We found that deep learning-based approaches to object segmentation outperformed traditional approaches, but the opposite was true for frame-to-frame object tracking. We offer these comparisons, together with insight into usability, computational efficiency, and feature availability, as a guide to researchers seeking image-processing solutions. IMPORTANCE: Time-lapse microscopy provides a detailed window into the world of bacterial behavior. However, the vast amount of data produced by these techniques is difficult to analyze manually. We have analyzed four software tools designed to process such data and compared their performance, using populations of commonly studied bacterial species as our test subjects. Our findings offer a roadmap to scientists, helping them choose the right tool for their research. This comparison bridges a gap between microbiology and computational analysis, streamlining research efforts.

6.
Environ Sci Technol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38952258

ABSTRACT

There is a notable lack of continuous monitoring of air pollutants in the Global South, especially for measuring chemical composition, due to the high cost of regulatory monitors. Using our previously developed low-cost method to quantify black carbon (BC) in fine particulate matter (PM2.5) by analyzing reflected red light from ambient particle deposits on glass fiber filters, we estimated hourly ambient BC concentrations with filter tapes from beta attenuation monitors (BAMs). BC measurements obtained through this method were validated against a reference aethalometer between August 2 and 23, 2023, in Addis Ababa, Ethiopia, demonstrating very strong agreement (R2 = 0.95 and slope = 0.97). We present hourly BC for three cities in sub-Saharan Africa (SSA) and one in North America: Abidjan (Côte d'Ivoire), Accra (Ghana), Addis Ababa (Ethiopia), and Pittsburgh (USA). The average BC concentrations for the measurement period at the Abidjan, Accra, Addis Ababa Central summer, Addis Ababa Central winter, Addis Ababa Jacros winter, and Pittsburgh sites were 3.85 µg/m3, 5.33 µg/m3, 5.63 µg/m3, 3.89 µg/m3, 9.14 µg/m3, and 0.52 µg/m3, respectively. BC made up 14-20% of PM2.5 mass in the SSA cities compared to only 5.6% in Pittsburgh. The hourly BC data at all sites (SSA and North America) show a pronounced diurnal pattern with prominent peaks during the morning and evening rush hours on workdays. A comparison between our measurements and the Goddard Earth Observing System Composition Forecast (GEOS-CF) estimates shows that the model performs well in predicting PM2.5 for most sites but struggles to predict BC at an hourly resolution. Adding more ground measurements could help evaluate and improve the performance of chemical transport models. Our method can potentially use existing BAM networks, such as the BAMs at U.S. Embassies around the globe, to measure hourly BC concentrations. The PM2.5 composition data thus acquired can be crucial for identifying emission sources and can support effective policymaking in SSA.

7.
ACS Appl Bio Mater ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967050

ABSTRACT

Titanium-based implants have long been studied and used for applications in bone tissue engineering, thanks to their outstanding mechanical properties and appropriate biocompatibility. However, many implants struggle with osseointegration and attachment and can be vulnerable to the development of infections. In this work, we have developed a composite coating via electrophoretic deposition that is both bioactive and antibacterial. Mesoporous bioactive glass particles loaded with gentamicin were electrophoretically deposited onto a titanium substrate. To verify that the quantity of particles in the coatings is sufficiently high and uniform in each deposition process, an easy-to-use image processing algorithm was designed to minimize operator dependence and ensure reproducible results. The addition of the loaded mesoporous particles did not affect the good adhesion of the coating to the substrate, although roughness was clearly enhanced. After 7 days of immersion, the composite coatings were almost completely dissolved and released, but phosphate-related compounds started to nucleate at the surface. With a simple, low-cost technique such as electrophoretic deposition and optimized stirring and suspension times, we were able to synthesize a hemocompatible coating that significantly improves antibacterial activity compared to the bare substrate against both Gram-positive and Gram-negative bacteria.

8.
Materials (Basel) ; 17(12)2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38930173

ABSTRACT

The article presents the results of the characterization of the geometric structure of the surface of unalloyed structural steel and alloyed (martensitic) steel subjected to chemical processing. Prior to phosphating, the samples were heat-treated. Both the surfaces and the cross-sections of the samples were investigated. Detailed studies were performed using scanning electron microscopy (SEM), XRD, metallographic microscopy, chemical composition analysis, and fractal analysis. The characterization of the surface geometry involved parameters such as circularity, roundness, solidity, Feret's diameter, watershed diameter, fractal dimensions, and corner frequencies, which were calculated by numerical processing of the SEM images.
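As an illustration of the ImageJ-style particle descriptors listed above (circularity, solidity, Feret's diameter), the following sketch computes them with scikit-image on a thresholded SEM image; the file name and area threshold are assumptions, and the fractal and corner-frequency analyses are omitted.

```python
# Shape descriptors from a segmented SEM image: circularity = 4*pi*A/P^2,
# solidity and maximum Feret diameter come directly from regionprops.
import numpy as np
from skimage import io, filters, measure

sem = io.imread("sem_surface.tif", as_gray=True)   # hypothetical SEM micrograph
binary = sem > filters.threshold_otsu(sem)         # segment surface features
labels = measure.label(binary)

for region in measure.regionprops(labels):
    if region.area < 50:                           # skip tiny noise regions
        continue
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2)
    print(f"label={region.label}  circularity={circularity:.3f}  "
          f"solidity={region.solidity:.3f}  feret={region.feret_diameter_max:.1f}px")
```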

9.
Materials (Basel) ; 17(12)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38930412

ABSTRACT

The development of urbanization and the resulting expansion of residential and transport infrastructures pose new challenges related to ensuring comfort for city dwellers. The emission of transport vibrations and household noise reduces the quality of life in the city. To counteract this unfavorable phenomenon, vibration isolation is widely used to reduce the propagation of vibrations and noise. A proper selection of vibration isolation is necessary to ensure comfort. This selection can be made based on a deep understanding of the material parameters of the vibration isolation used, chiefly its dynamic stiffness and damping. This article compares the method for testing dynamic stiffness and damping using a single-degree-of-freedom (SDOF) system with a method based on image processing, which involves tracking the movement of a steel ball free-falling onto a sample of the tested material. Rubber granules, rubber granules with rubber fibers, and rebound polyurethanes were selected for testing. Strong correlations were found between relative indentation and dynamic stiffness (in the range 10-60 MN/m3) and between relative rebound and damping (for 6-12%). Additionally, a very strong relationship was determined between density and the fraction of critical damping factor/dynamic stiffness. The relative indentation and relative rebound measurement methods can therefore be used as alternative methods to measure dynamic stiffness and the critical damping factor, respectively.

10.
Sensors (Basel) ; 24(12)2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38931524

ABSTRACT

Building occupancy information is significant for a variety of reasons, from allocation of resources in smart buildings to responding during emergency situations. As most people spend more than 90% of their time indoors, a comfortable indoor environment is crucial. To ensure comfort, traditional HVAC systems condition rooms assuming maximum occupancy, accounting for more than 50% of buildings' energy budgets in the US. Occupancy level is a key factor in ensuring energy efficiency, as occupancy-controlled HVAC systems can reduce energy waste by conditioning rooms based on actual usage. Numerous studies have focused on developing occupancy estimation models leveraging existing sensors, with camera-based methods gaining popularity due to their high precision and widespread availability. However, the main concern with using cameras for occupancy estimation is the potential violation of occupants' privacy. Unlike previous video-/image-based occupancy estimation methods, we addressed the issue of occupants' privacy in this work by proposing and investigating both motion-based and motion-independent occupancy counting methods on intentionally blurred video frames. Our proposed approach included the development of a motion-based technique that inherently preserves privacy, as well as motion-independent techniques such as detection-based and density-estimation-based methods. To improve the accuracy of the motion-independent approaches, we utilized deblurring methods: an iterative statistical technique and a deep-learning-based method. Furthermore, we conducted an analysis of the privacy implications of our motion-independent occupancy counting system by comparing the original, blurred, and deblurred frames using different image quality assessment metrics. This analysis provided insights into the trade-off between occupancy estimation accuracy and the preservation of occupants' visual privacy. The combination of iterative statistical deblurring and density estimation achieved a 16.29% counting error, outperforming our other proposed approaches while preserving occupants' visual privacy to a certain extent. Our multifaceted approach aims to contribute to the field of occupancy estimation by proposing a solution that seeks to balance the trade-off between accuracy and privacy. While further research is needed to fully address this complex issue, our work provides insights and a step towards a more privacy-aware occupancy estimation system.
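A hedged sketch of the privacy-analysis step described above: comparing original, blurred, and deblurred frames with standard image quality metrics (PSNR/SSIM). The blur level and point-spread function are illustrative assumptions, and Richardson-Lucy deconvolution stands in for the paper's iterative statistical deblurring method.

```python
# Higher PSNR/SSIM against the original frame means more visual detail
# (and thus less privacy) is recoverable from the processed frame.
import cv2
import numpy as np
from skimage import restoration, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = cv2.imread("frame_original.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
blurred = cv2.GaussianBlur(original, (31, 31), sigmaX=10)           # intentional privacy blur

# Richardson-Lucy deconvolution as a stand-in iterative statistical deblurrer;
# the flat 5x5 point-spread function is a crude assumption.
psf = np.ones((5, 5)) / 25
deblurred = restoration.richardson_lucy(img_as_float(blurred), psf, num_iter=30)
deblurred = (deblurred * 255).clip(0, 255).astype("uint8")

for name, frame in (("blurred", blurred), ("deblurred", deblurred)):
    print(name,
          "PSNR:", round(peak_signal_noise_ratio(original, frame), 2),
          "SSIM:", round(structural_similarity(original, frame), 3))
```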

11.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931537

ABSTRACT

It is common, especially in young people, to adopt an inadequate head posture or position when performing near-vision tasks in front of a digital screen; a correct head posture is essential to avoid visual, muscular, or joint problems. Most current systems for monitoring head inclination require an external device attached to the subject's head. The aim of this study is to validate a procedure that, through a detection algorithm and eye tracking, can monitor the correct position of the head in real time when subjects are in front of a digital device. The system only needs a digital device with a CCD receiver and downloadable software, through which we can detect the inclination of the head, indicating whether a bad posture is adopted due to a visual problem or simply to inadequate visual-postural habits, and alerting us to the postural anomaly so that it can be corrected. The system was evaluated in subjects with disparate interpupillary distances and at different working distances in front of the digital device; at each distance, different tilt angles were evaluated. The system performed favorably in different lighting environments, correctly detecting the subjects' pupils. The results showed that, for most of the variables, particularly good absolute and relative reliability values were found when measuring head tilt, albeit with lower accuracy than most existing systems. The evaluation results have been positive, making it a considerably inexpensive and easily affordable system for all users. It is the first application capable of measuring the head tilt of the subject at their working or reading distance in real time by tracking their eyes.
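A minimal sketch of the underlying geometry: once the two pupil centres have been detected, head (roll) tilt can be estimated from the angle of the inter-pupillary line. The pupil coordinates and alert threshold below are hypothetical placeholders, not values from the study.

```python
# Estimate head roll from the angle of the line joining the two pupil centres.
import math

def head_tilt_deg(left_pupil, right_pupil):
    """Roll angle of the inter-pupillary line relative to the horizontal, in degrees."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

tilt = head_tilt_deg(left_pupil=(312, 245), right_pupil=(398, 231))
if abs(tilt) > 5:                       # illustrative threshold for flagging poor posture
    print(f"Head tilted {tilt:.1f} deg - posture alert")
else:
    print(f"Head tilt {tilt:.1f} deg - within range")
```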


Subject(s)
Algorithms , Head , Posture , Humans , Posture/physiology , Head/physiology , Artificial Intelligence , Software , Male , Female , Adult
12.
Ultrastruct Pathol ; 48(4): 310-316, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38828684

ABSTRACT

OBJECTIVE: Thyroid carcinoma ranks as the 9th most prevalent global cancer, accounting for 586,202 cases and 43,636 deaths in 2020. Computerized image analysis, utilizing artificial intelligence algorithms, emerges as a potential tool for tumor evaluation. AIM: This study aims to assess and compare chromatin textural characteristics and nuclear dimensions in follicular neoplasms through gray-level co-occurrence matrix (GLCM), fractal, and morphometric analysis. METHOD: A retrospective cross-sectional study involving 115 thyroid malignancies, specifically 49 papillary thyroid carcinomas with follicular morphology, was conducted from July 2021 to July 2023. Ethical approval was obtained, and histopathological examination, along with image analysis, was performed using ImageJ software. RESULTS: A statistically significant difference was observed in contrast (2.426 (1.774-3.412) vs 2.664 (1.963-3.610), p = .002), correlation (1.202 (1.071-1.298) vs 0.892 (0.833-0.946), p = .01), and ASM (0.071 (0.090-0.131) vs 0.044 (0.019-0.102), p = .036) between NIFTP and IFVPTC. However, morphometric parameters did not yield statistically significant differences among histological variants. CONCLUSION: Computerized image analysis, though promising in subtype discrimination, requires further refinement and integration with traditional diagnostic parameters. The study suggests potential applications in scenarios where conventional histopathological assessment faces limitations due to limited tissue availability. Despite limitations such as a small sample size and a retrospective design, the findings contribute to understanding thyroid carcinoma characteristics and underscore the need for comprehensive evaluations integrating various diagnostic modalities.
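A hedged sketch of the GLCM texture features named above (contrast, correlation, ASM), computed with scikit-image on a nucleus patch; the file name, offsets, and angles are assumptions rather than the study's exact ImageJ settings.

```python
# Gray-level co-occurrence matrix (GLCM) texture features for a nucleus patch.
import numpy as np
from skimage import io, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

nucleus = img_as_ubyte(io.imread("nucleus_patch.png", as_gray=True))   # hypothetical patch
glcm = graycomatrix(nucleus, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "correlation", "ASM"):
    # Average over the two offset directions
    print(prop, graycoprops(glcm, prop).mean())
```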


Subject(s)
Adenocarcinoma, Follicular , Chromatin , Fractals , Thyroid Cancer, Papillary , Thyroid Neoplasms , Humans , Thyroid Neoplasms/pathology , Retrospective Studies , Cross-Sectional Studies , Adenocarcinoma, Follicular/pathology , Thyroid Cancer, Papillary/pathology , Diagnosis, Differential , Cell Nucleus/pathology , Female
13.
Front Cell Infect Microbiol ; 14: 1397316, 2024.
Article in English | MEDLINE | ID: mdl-38912211

ABSTRACT

While the world struggles to recover from the devastation wrought by the global spread of COVID-19, monkeypox virus has emerged as a new global pandemic threat. In this paper, a high-precision and lightweight classification network, MpoxNet, based on ConvNext is proposed to meet the need for fast and safe monkeypox classification. In this method, a two-branch depthwise-separable convolution residual squeeze-and-excitation module is designed. This design aims to extract more feature information with two branches and greatly reduces the number of parameters in the model by using depthwise-separable convolutions. In addition, our method introduces a convolutional attention module to enhance the extraction of key features within the receptive field. The experimental results show that MpoxNet achieves remarkable results in monkeypox disease classification: the accuracy is 95.28%, the precision is 96.40%, the recall is 93.00%, and the F1-score is 95.80%. This is significantly better than current mainstream classification models. It is worth noting that the FLOPs and the number of parameters of MpoxNet are only 30.68% and 31.87% of those of ConvNext-Tiny, indicating that the model has a small computational burden and low model complexity while maintaining efficient performance.
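As an illustration of the two building blocks described above, the following PyTorch sketch shows a depthwise-separable convolution and a squeeze-and-excitation gate; it is a conceptual example, not the MpoxNet architecture, and all layer sizes are assumptions.

```python
# Depthwise-separable convolution + squeeze-and-excitation (SE) gate, conceptually.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=kernel_size // 2,
                                   groups=in_ch, bias=False)      # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # 1x1 channel mixing
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                                 # excite: per-channel reweighting

x = torch.randn(1, 64, 56, 56)
y = SEBlock(64)(DepthwiseSeparableConv(64, 64)(x))
print(y.shape)   # torch.Size([1, 64, 56, 56])
```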


Subject(s)
Mpox (monkeypox) , Neural Networks, Computer , Mpox (monkeypox)/virology , Humans , COVID-19 , Algorithms , SARS-CoV-2/classification , Monkeypox virus/classification , Deep Learning
14.
Biomed Phys Eng Express ; 10(4)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38848695

ABSTRACT

Recent advancements in computational intelligence, deep learning, and computer-aided detection have had a significant impact on the field of medical imaging. The task of image segmentation, which involves accurately interpreting and identifying the content of an image, has garnered much attention. The main objective of this task is to separate objects from the background, thereby simplifying and enhancing the significance of the image. However, existing methods for image segmentation have limitations when applied to certain types of images. This survey paper aims to highlight the importance of image segmentation techniques by providing a thorough examination of their advantages and disadvantages. The accurate detection of cancer regions in medical images is crucial for ensuring effective treatment. In this study, we also present an extensive analysis of computer-aided diagnosis (CAD) systems for cancer identification, with a focus on recent research advancements. The paper critically assesses various techniques for cancer detection and compares their effectiveness. Convolutional neural networks (CNNs) have attracted particular interest due to their ability to segment and classify medical images in large datasets, thanks to their capacity for self-learning and decision-making.


Subject(s)
Algorithms , Artificial Intelligence , Diagnostic Imaging , Image Processing, Computer-Assisted , Neoplasms , Neural Networks, Computer , Humans , Neoplasms/diagnostic imaging , Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Diagnosis, Computer-Assisted/methods , Deep Learning
15.
Cells ; 13(12)2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38920634

ABSTRACT

BACKGROUND: Identifying cells engaged in fundamental cellular processes, such as proliferation or living/death statuses, is pivotal across numerous research fields. However, prevailing methods relying on molecular biomarkers are constrained by high costs, limited specificity, protracted sample preparation, and reliance on fluorescence imaging. METHODS: Based on cellular morphology in phase contrast images, we developed a deep-learning model named Detector of Mitosis, Apoptosis, Interphase, Necrosis, and Senescence (D-MAINS). RESULTS: D-MAINS utilizes machine learning and image processing techniques, enabling swift and label-free categorization of cell death, division, and senescence at a single-cell resolution. Impressively, D-MAINS achieved an accuracy of 96.4 ± 0.5% and was validated with established molecular biomarkers. D-MAINS underwent rigorous testing under varied conditions not initially present in the training dataset. It demonstrated proficiency across diverse scenarios, encompassing additional cell lines, drug treatments, and distinct microscopes with different objective lenses and magnifications, affirming the robustness and adaptability of D-MAINS across multiple experimental setups. CONCLUSIONS: D-MAINS is an example showcasing the feasibility of a low-cost, rapid, and label-free methodology for distinguishing various cellular states. Its versatility makes it a promising tool applicable across a broad spectrum of biomedical research contexts, particularly in cell death and oncology studies.


Subject(s)
Apoptosis , Cellular Senescence , Deep Learning , Interphase , Mitosis , Necrosis , Humans , Cell Line, Tumor , Neoplasms/pathology , Neoplasms/metabolism , Image Processing, Computer-Assisted/methods
16.
J Imaging ; 10(6)2024 May 22.
Article in English | MEDLINE | ID: mdl-38921603

ABSTRACT

Addressing the pressing issue of food waste is vital for environmental sustainability and resource conservation. While computer vision has been widely used in food waste reduction research, existing food image datasets are typically aggregated into broad categories (e.g., fruits, meat, dairy, etc.) rather than the fine-grained singular food items required for this research. The aim of this study is to develop a model capable of identifying individual food items to be integrated into a mobile application that allows users to photograph their food items, identify them, and offer suggestions for recipes. This research bridges the gap in available datasets and contributes to a more fine-grained approach to utilising existing technology for food waste reduction, emphasising both environmental and research significance. This study evaluates various (n = 7) convolutional neural network architectures for multi-class food image classification, emphasising the nuanced impact of parameter tuning to identify the most effective configurations. The experiments were conducted with a custom dataset comprising 41,949 food images categorised into 20 food item classes. Performance evaluation was based on accuracy and loss. The DenseNet architecture emerged as the top performer of the seven examined, establishing a baseline performance (training accuracy = 0.74, training loss = 1.25, validation accuracy = 0.68, and validation loss = 2.89) on a predetermined set of parameters, including the RMSProp optimiser, ReLU activation function, a 0.5 dropout rate, and a 160×160 image size. Subsequent parameter tuning involved a comprehensive exploration, considering six optimisers, four image sizes, two dropout rates, and five activation functions. The results show the superior generalisation capabilities of the optimised DenseNet, showcasing performance improvements over the established baseline across key metrics. Specifically, the optimised model demonstrated a training accuracy of 0.99, a training loss of 0.01, a validation accuracy of 0.79, and a validation loss of 0.92, highlighting its improved performance compared to the baseline configuration. The optimal DenseNet has been integrated into a mobile application called FridgeSnap, designed to recognise food items and suggest possible recipes to users, thus contributing to the broader mission of minimising food waste.
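A hedged sketch of the reported baseline configuration (DenseNet backbone, RMSProp optimiser, ReLU activation, 0.5 dropout, 160×160 inputs, 20 classes) in Keras; the classifier head, pretrained weights, and loss are assumptions, and dataset loading is omitted.

```python
# Baseline-style DenseNet classifier for 20 food item classes at 160x160 resolution.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                          input_shape=(160, 160, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),   # ReLU activation in the head
    tf.keras.layers.Dropout(0.5),                    # 0.5 dropout rate
    tf.keras.layers.Dense(20, activation="softmax"), # 20 food item classes
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```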

17.
J Imaging ; 10(6)2024 May 28.
Article in English | MEDLINE | ID: mdl-38921608

ABSTRACT

Hyperspectral images include information from a wide range of spectral bands deemed valuable for computer vision applications in various domains such as agriculture, surveillance, and reconnaissance. Anomaly detection in hyperspectral images has proven to be a crucial component of change and abnormality identification, enabling improved decision-making across various applications. These abnormalities/anomalies can be detected using background estimation techniques that do not require the prior knowledge of outliers. However, each hyperspectral anomaly detection (HS-AD) algorithm models the background differently. These different assumptions may fail to consider all the background constraints in various scenarios. We have developed a new approach called Greedy Ensemble Anomaly Detection (GE-AD) to address this shortcoming. It includes a greedy search algorithm to systematically determine the suitable base models from HS-AD algorithms and hyperspectral unmixing for the first stage of a stacking ensemble and employs a supervised classifier in the second stage of a stacking ensemble. It helps researchers with limited knowledge of the suitability of the HS-AD algorithms for the application scenarios to select the best methods automatically. Our evaluation shows that the proposed method achieves a higher average F1-macro score with statistical significance compared to the other individual methods used in the ensemble. This is validated on multiple datasets, including the Airport-Beach-Urban (ABU) dataset, the San Diego dataset, the Salinas dataset, the Hydice Urban dataset, and the Arizona dataset. The evaluation using the airport scenes from the ABU dataset shows that GE-AD achieves a 14.97% higher average F1-macro score than our previous method (HUE-AD), at least 17.19% higher than the individual methods used in the ensemble, and at least 28.53% higher than the other state-of-the-art ensemble anomaly detection algorithms. As using the combination of greedy algorithm and stacking ensemble to automatically select suitable base models and associated weights have not been widely explored in hyperspectral anomaly detection, we believe that our work will expand the knowledge in this research area and contribute to the wider application of this approach.
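A hedged sketch of the two-stage idea described above: greedy forward selection of base detectors whose scores most improve a stacked supervised classifier, evaluated by F1-macro. The base detectors below are synthetic placeholder scorers, not the HS-AD algorithms or unmixing outputs used in the paper.

```python
# Greedy forward selection of base anomaly-score "models" feeding a stacked classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 2000
labels = (rng.random(n_pixels) < 0.05).astype(int)             # anomaly ground truth
# Placeholder per-pixel anomaly scores from four hypothetical base detectors.
base_scores = {f"detector_{i}": labels * rng.random(n_pixels) + rng.normal(0, 0.3, n_pixels)
               for i in range(4)}

def stacked_f1(selected):
    # Second-stage supervised classifier trained on the selected detectors' scores.
    X = np.column_stack([base_scores[name] for name in selected])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                              random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")

selected, best, remaining = [], 0.0, set(base_scores)
while remaining:                                               # greedy forward selection
    name, score = max(((n, stacked_f1(selected + [n])) for n in remaining),
                      key=lambda t: t[1])
    if score <= best:
        break
    selected.append(name); remaining.remove(name); best = score
print("selected base models:", selected, "F1-macro:", round(best, 3))
```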

18.
J Imaging ; 10(6)2024 May 28.
Article in English | MEDLINE | ID: mdl-38921607

ABSTRACT

Meat characterized by a high marbling value is typically anticipated to display enhanced sensory attributes. This study aimed to predict the marbling scores of rib-eye steaks sourced from the Longissimus dorsi muscle of different cattle types, namely Boran, Senga, and Sheko, by employing digital image processing and machine-learning algorithms. Marbling was analyzed using digital image processing coupled with an extreme gradient boosting (XGBoost) machine learning algorithm. Meat texture was assessed using a universal texture analyzer. Sensory characteristics of beef were evaluated through quantitative descriptive analysis with a trained panel of twenty. Using selected image features from digital image processing, the marbling score was predicted with R2 (prediction) = 0.83. Boran cattle had the highest fat content in sirloin and chuck cuts (12.68% and 12.40%, respectively), followed by Senga (11.59% and 11.56%) and Sheko (11.40% and 11.17%). Tenderness scores for sirloin and chuck cuts differed among the three breeds: Boran (7.06 ± 2.75 and 3.81 ± 2.24 Nmm, respectively), Senga (5.54 ± 1.90 and 5.25 ± 2.47 Nmm), and Sheko (5.43 ± 2.76 and 6.33 ± 2.28 Nmm). Sheko and Senga had similar sensory attributes. Marbling scores were higher in Boran (4.28 ± 1.43 and 3.68 ± 1.21) and Senga (2.88 ± 0.69 and 2.83 ± 0.98) than in Sheko (2.73 ± 1.28 and 2.90 ± 1.52). The study achieved a remarkable milestone in developing a digital tool for predicting marbling scores of Ethiopian beef breeds. Furthermore, the relationship between quality attributes and beef marbling score has been verified. After further validation, the output of this research can be utilized by the meat industry and quality control authorities.
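A hedged sketch of the modelling step: predicting a marbling score from image-derived features with extreme gradient boosting and reporting prediction R2. The feature matrix and target below are synthetic stand-ins for the study's selected image features.

```python
# Marbling-score regression from image features with extreme gradient boosting.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((300, 10))                                  # hypothetical image features per steak
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.2, 300)    # synthetic marbling score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("prediction R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```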

19.
J Imaging ; 10(6)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38921614

ABSTRACT

Recent advancements in computer vision, especially deep learning models, have shown considerable promise in tasks related to plant image object detection. However, the efficiency of these deep learning models heavily relies on input image quality, with low-resolution images significantly hindering model performance. Therefore, reconstructing high-quality images through specific techniques will help extract features from plant images, thus improving model performance. In this study, we explored the value of super-resolution technology for improving object detection model performance on plant images. Firstly, we built a comprehensive dataset comprising 1030 high-resolution plant images, named the PlantSR dataset. Subsequently, we developed a super-resolution model using the PlantSR dataset and benchmarked it against several state-of-the-art models designed for general image super-resolution tasks. Our proposed model demonstrated superior performance on the PlantSR dataset, indicating its efficacy in enhancing the super-resolution of plant images. Furthermore, we explored the effect of super-resolution on two specific object detection tasks: apple counting and soybean seed counting. By incorporating super-resolution as a pre-processing step, we observed a significant reduction in mean absolute error. Specifically, with the YOLOv7 model employed for apple counting, the mean absolute error decreased from 13.085 to 5.71. Similarly, with the P2PNet-Soy model utilized for soybean seed counting, the mean absolute error decreased from 19.159 to 15.085. These findings underscore the substantial potential of super-resolution technology in improving the performance of object detection models for accurately detecting and counting specific plants from images. The source codes and associated datasets related to this study are available on GitHub.
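A hedged sketch of super-resolution used as a pre-processing step before counting: OpenCV's dnn_superres module with a pre-trained EDSR model stands in for the authors' PlantSR model, the contour-based counter is a crude placeholder for the trained YOLOv7/P2PNet-Soy detectors, and all file paths are assumptions.

```python
# Super-resolve a low-resolution field image, count objects, and report MAE.
import cv2
import numpy as np

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")          # pre-trained SR weights (path is an assumption)
sr.setModel("edsr", 4)              # 4x upscaling

def count_objects(image):
    # Placeholder counter (simple contour counting); the paper uses trained
    # detectors such as YOLOv7 / P2PNet-Soy instead.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return len(contours)

low_res = cv2.imread("apple_field_lowres.jpg")       # hypothetical input image
high_res = sr.upsample(low_res)                      # super-resolve before detection

counts = np.array([count_objects(high_res)])         # predicted counts per image
ground_truth = np.array([42])                        # annotated counts (illustrative)
print("MAE:", np.abs(counts - ground_truth).mean())  # mean absolute error, as reported above
```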

20.
J Imaging ; 10(6)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38921619

ABSTRACT

This article presents a computer vision-based approach to switching electric locomotive power supplies as the vehicle approaches a railway neutral section. Neutral sections are defined as a phase break whose objective is to separate two single-phase traction supplies on an overhead railway supply line. This separation prevents flashovers due to high voltages caused by the locomotives shorting both electrical phases. The typical system for switching traction supplies automatically employs electro-mechanical relays and induction magnets. In this paper, an image classification approach is proposed to replace the conventional electro-mechanical system with two unique visual markers that represent the 'Open' and 'Close' signals to initiate the transition. When the computer vision model detects either marker, the vacuum circuit breakers inside the electric locomotive are triggered to their respective positions depending on the identified image. A Histogram of Oriented Gradients technique was implemented for feature extraction during the training phase, and a Linear Support Vector Machine algorithm was trained for the target image classification. For the task of image segmentation, the Circular Hough Transform shape detection algorithm was employed to locate the markers in the captured images and provided Cartesian plane coordinates for segmenting the object of interest. A signal marker classification accuracy of 94% at 75 objects per second was achieved using the Linear Support Vector Machine during the experimental testing phase.
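A hedged sketch of the described pipeline: the Circular Hough Transform locates the circular marker, HOG features are extracted from the cropped region, and a linear SVM classifies it as an 'Open' or 'Close' signal. The training data and parameter values below are illustrative assumptions, not those used in the paper.

```python
# Hough-circle localisation -> HOG features -> linear SVM signal classification.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(patch):
    patch = cv2.resize(patch, (64, 64))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical training set: grayscale marker crops and labels (0 = Open, 1 = Close).
rng = np.random.default_rng(0)
train_patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
train_labels = [0] * 10 + [1] * 10
clf = LinearSVC().fit([extract_hog(p) for p in train_patches], train_labels)

frame = cv2.imread("trackside_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
circles = cv2.HoughCircles(cv2.medianBlur(frame, 5), cv2.HOUGH_GRADIENT,
                           dp=1.2, minDist=100, param1=100, param2=40,
                           minRadius=20, maxRadius=120)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)                 # strongest circle = marker
    roi = frame[max(y - r, 0):y + r, max(x - r, 0):x + r]
    label = clf.predict([extract_hog(roi)])[0]
    print("Close signal" if label else "Open signal")             # trigger breaker accordingly
```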
