Results 1 - 20 of 5,747
1.
J Biomed Opt; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics should be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNR and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary by up to ~35 dB (SNR), ~8.65 a.u. (contrast), and ~0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.


Subjects
Benchmarking; Molecular Imaging; Optical Imaging; Phantoms, Imaging; Signal-To-Noise Ratio; Molecular Imaging/methods; Molecular Imaging/standards; Optical Imaging/methods; Optical Imaging/standards; Image Processing, Computer-Assisted/methods
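Entry 1 turns on the fact that SNR and contrast have no agreed-upon formulas. As a minimal, hedged illustration of why definitions diverge, the NumPy sketch below computes two common SNR variants and two common contrast variants from a signal ROI and a background ROI; these are literature conventions, not necessarily the seven SNR and four contrast definitions evaluated in the study.

```python
import numpy as np

def snr_db(signal_roi, background_roi):
    # One common convention: mean signal over background noise (std), in dB.
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

def snr_db_subtracted(signal_roi, background_roi):
    # A variant that background-subtracts the signal before dividing by the noise.
    return 20 * np.log10((signal_roi.mean() - background_roi.mean()) / background_roi.std())

def weber_contrast(signal_roi, background_roi):
    # Weber contrast: (S - B) / B.
    return (signal_roi.mean() - background_roi.mean()) / background_roi.mean()

def michelson_contrast(signal_roi, background_roi):
    # Michelson contrast: (S - B) / (S + B).
    s, b = signal_roi.mean(), background_roi.mean()
    return (s - b) / (s + b)

# Synthetic ROIs: the same data yield noticeably different metric values.
rng = np.random.default_rng(0)
signal = rng.normal(1000, 50, size=(50, 50))
background = rng.normal(100, 20, size=(50, 50))
print(snr_db(signal, background), snr_db_subtracted(signal, background))
print(weber_contrast(signal, background), michelson_contrast(signal, background))
```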
2.
Methods Mol Biol; 2852: 159-170, 2025.
Article in English | MEDLINE | ID: mdl-39235743

ABSTRACT

The functional properties of biofilms are intimately related to their spatial architecture. Structural data are therefore of prime importance to dissect the complex social and survival strategies of biofilms and ultimately to improve their control. Confocal laser scanning microscopy (CLSM) is the most widespread microscopic tool to decipher biofilm structure, enabling noninvasive three-dimensional investigation of their dynamics down to the single-cell scale. The emergence of fully automated high content screening (HCS) systems, associated with large-scale image analysis, has radically amplified the flow of available biofilm structural data. In this contribution, we present an HCS-CLSM protocol used to analyze biofilm four-dimensional structural dynamics at high throughput. Meta-analysis of the quantitative variables extracted from HCS-CLSM will contribute to a better biological understanding of biofilm traits.


Subjects
Biofilms; Microscopy, Confocal; Biofilms/growth & development; Microscopy, Confocal/methods; Food Microbiology/methods; Imaging, Three-Dimensional/methods; Foodborne Diseases/microbiology; High-Throughput Screening Assays/methods; Image Processing, Computer-Assisted/methods
3.
J Appl Crystallogr; 57(Pt 5): 1557-1565, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39387086

ABSTRACT

Here, a morphologically based approach is used for the in situ characterization of 3D growth rates of facetted crystals from the solution phase. Crystal images of single crystals of the β-form of L-glutamic acid are captured in situ during their growth at a relative supersaturation of 1.05 using transmission optical microscopy. The crystal growth rates estimated for both the {101} capping and {021} prismatic faces through image processing are consistent with those determined using reflection light mode [Jiang, Ma, Hazlehurst, Ilett, Jackson, Hogg & Roberts (2024). Cryst. Growth Des. 24, 3277-3288]. The growth rate of the {010} face is, for the first time, estimated from the shadow widths of the {021} prismatic faces and found to be typically about half that of the {021} prismatic faces. Analysis of the 3D shape during growth reveals that the initial needle-like crystal morphology develops during the growth process to become more tabular, associated with the Zingg factor evolving from 2.9 to 1.7 (>1). The change in relative solution supersaturation during the growth process is estimated from calculations of the crystal volume, offering an alternative approach to determine this dynamically from visual observations.

4.
Plant Methods; 20(1): 154, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39350215

ABSTRACT

BACKGROUND: The manual study of root dynamics using images requires huge investments of time and resources and is prone to previously poorly quantified annotator bias. Artificial intelligence (AI) image-processing tools have been successful in overcoming limitations of manual annotation in homogeneous soils, but their efficiency and accuracy are yet to be widely tested on less homogeneous, non-agricultural soil profiles, e.g., those of forests, from which data on root dynamics are key to understanding the carbon cycle. Here, we quantify variance in root length measured by human annotators with varying experience levels. We evaluate the application of a convolutional neural network (CNN) model, trained using software accessible to researchers without a machine learning background, on a heterogeneous minirhizotron image dataset taken in a multispecies, mature, deciduous temperate forest. RESULTS: Less experienced annotators consistently identified more root length than experienced annotators. Root length annotation also varied between experienced annotators. The CNN root length results were neither precise nor accurate, taking ~10% of the time but significantly overestimating root length compared to expert manual annotation (p = 0.01). The CNN net root length change results were closer to manual annotation (p = 0.08), but substantial variation remained. CONCLUSIONS: Manual root length annotation is contingent on the individual annotator. The only accessible CNN model cannot yet produce root data of sufficient accuracy and precision for ecological applications when applied to a complex, heterogeneous forest image dataset. Continued evaluation and development of accessible CNNs for natural ecosystems is required.

5.
J Pathol Clin Res; 10(6): e70004, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39358807

ABSTRACT

EGFR mutations are a major prognostic factor in lung adenocarcinoma. However, current detection methods require sufficient samples and are costly. Deep learning is promising for mutation prediction in histopathological image analysis but has limitations in that it does not sufficiently reflect tumor heterogeneity and lacks interpretability. In this study, we developed a deep learning model to predict the presence of EGFR mutations by analyzing histopathological patterns in whole slide images (WSIs). We also introduced the EGFR mutation prevalence (EMP) score, which quantifies EGFR prevalence in WSIs based on patch-level predictions, and evaluated its interpretability and utility. Our model estimates the probability of EGFR prevalence in each patch by partitioning the WSI based on multiple-instance learning and predicts the presence of EGFR mutations at the slide level. We utilized a patch-masking scheduler training strategy to enable the model to learn various histopathological patterns of EGFR. This study included 868 WSI samples from lung adenocarcinoma patients collected from three medical institutions: Hallym University Medical Center, Inha University Hospital, and Chungnam National University Hospital. For the test dataset, 197 WSIs were collected from Ajou University Medical Center to evaluate the presence of EGFR mutations. Our model demonstrated prediction performance with an area under the receiver operating characteristic curve of 0.7680 (0.7607-0.7720) and an area under the precision-recall curve of 0.8391 (0.8326-0.8430). The EMP score showed Spearman correlation coefficients of 0.4705 (p = 0.0087) for p.L858R and 0.5918 (p = 0.0037) for exon 19 deletions in 64 samples subjected to next-generation sequencing analysis. Additionally, high EMP scores were associated with papillary and acinar patterns (p = 0.0038 and p = 0.0255, respectively), whereas low EMP scores were associated with solid patterns (p = 0.0001). These results validate the reliability of our model and suggest that it can provide crucial information for rapid screening and treatment plans.


Subjects
Adenocarcinoma of Lung; Deep Learning; ErbB Receptors; Lung Neoplasms; Mutation; Humans; ErbB Receptors/genetics; Adenocarcinoma of Lung/genetics; Adenocarcinoma of Lung/pathology; Lung Neoplasms/genetics; Lung Neoplasms/pathology; DNA Mutational Analysis; Female; Image Interpretation, Computer-Assisted
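The exact formula of the EMP score is not given in the abstract above. Purely to illustrate one plausible patch-to-slide aggregation in a multiple-instance-learning setting, the sketch below scores a slide as the fraction of patches whose predicted mutation probability exceeds a threshold; the file name, function name, and both thresholds are assumptions, not the authors' definition.

```python
import numpy as np

# Patch-level model outputs in [0, 1] for one whole slide image (hypothetical file).
patch_probs = np.load("slide_patch_probs.npy")

def prevalence_score(probs, patch_threshold=0.5):
    # Fraction of patches called mutation-positive by the patch-level model.
    return float((probs > patch_threshold).mean())

score = prevalence_score(patch_probs)
print("prevalence-style score:", round(score, 3), "slide-level call:", score > 0.2)
```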
6.
J Eat Disord; 12(1): 152, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354605

ABSTRACT

BACKGROUND: Previous studies of emotion recognition abilities of people with eating disorders used accuracy to identify performance deficits for these individuals. The current study examined eating disorder symptom severity as a function of emotion categorization abilities, using a visual cognition paradigm that offers insights into how emotional faces may be categorized, as opposed to just how well these faces are categorized. METHODS: Undergraduate students (N = 87, 50 women, 34 men, 3 non-binary) completed the Bubbles task and a standard emotion categorization task, as well as a set of questionnaires assessing their eating disorder symptomology and comorbid disorders. We examined the relationship between visual information use (assessed via Bubbles) and eating disorder symptomology (EDDS) while controlling for anxiety (STAI), depression (BDI-II), alexithymia (TAS), and emotion regulation difficulties (DERS-sf). RESULTS: Overall visual information use (i.e. how well participants used facial features important for accurate emotion categorization) was not significantly related to eating disorder symptoms, despite producing interpretable patterns for each emotion category. Emotion categorization accuracy was also not related to eating disorder symptoms. CONCLUSIONS: Results from this study must be interpreted with caution, given the non-clinical sample. Future research may benefit from comparing visual information use in patients with an eating disorder and healthy controls, as well as employing designs focused on specific emotion categories, such as anger.


Men and women with severe eating disorder symptoms may find it harder to identify and describe emotions than people with less severe eating disorder symptoms. However, previous work makes it difficult to determine why emotion recognition deficits exist, and what underlying abilities or strategies actually differ because of a deficit. In addition to a typical emotion recognition task (emotion categorization), this study used the Bubbles task, which allowed us to determine which parts of an image are important for emotion recognition, and whether participants used these parts during the task. In 87 undergraduate students (47 female; 49 with clinically significant eating disorder symptoms), there was no significant relationship between task performance and eating disorder symptom severity, before and after controlling for the relationship with other comorbid disorders. Our results imply that emotion recognition deficits are unlikely to be an important mechanism underlying eating disorder pathology in participants with a range of eating disorder symptoms.

7.
Development; 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39373366

ABSTRACT

For investigations into fate specification and morphogenesis in time-lapse images of preimplantation embryos, automated 3D instance segmentation and tracking of nuclei are invaluable. Low signal-to-noise ratio, high voxel anisotropy, high nuclear density, and variable nuclear shapes can limit the performance of segmentation methods, while tracking is complicated by cell divisions, low frame rates, and sample movements. Supervised machine learning approaches can radically improve segmentation accuracy and enable easier tracking, but they often require large amounts of annotated 3D data. Here we first report a novel mouse line expressing the near-infrared nuclear reporter H2B-miRFP720. We then generate a dataset (termed BlastoSPIM) of 3D images of H2B-miRFP720-expressing embryos with ground truth for nuclear instances. Using BlastoSPIM, we benchmark seven convolutional neural networks and identify Stardist-3D as the most accurate instance segmentation method. With our BlastoSPIM-trained Stardist-3D models, we construct a complete pipeline for nuclear instance segmentation and lineage tracking from the 8-cell stage to the end of preimplantation development (>100 nuclei). Finally, we demonstrate BlastoSPIM's usefulness as pre-training data for related problems, both for a different imaging modality and for different model systems.
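For orientation, here is a minimal sketch of the kind of StarDist-3D inference step the pipeline above is built on, assuming a BlastoSPIM-trained model has been exported to a local directory (the model name, paths, and normalization percentiles are assumptions; lineage tracking, the other half of the pipeline, is omitted).

```python
import tifffile
from csbdeep.utils import normalize
from stardist.models import StarDist3D

# Load one 3D time point (Z, Y, X) and a locally saved trained model (hypothetical paths).
img = tifffile.imread("embryo_t042.tif")
model = StarDist3D(None, name="blastospim", basedir="models")

# Percentile-normalize intensities, then predict labeled nuclear instances.
labels, details = model.predict_instances(normalize(img, 1, 99.8))
print("nuclei detected:", labels.max())
tifffile.imwrite("embryo_t042_labels.tif", labels)
```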

8.
New Phytol; 2024 Oct 06.
Article in English | MEDLINE | ID: mdl-39370539

ABSTRACT

Roots are important in agricultural and natural systems for determining plant productivity and soil carbon inputs. Sometimes the roots in a sample are too numerous to fit into a single scanned image, so the sample is divided among several scans, and there is no standard method to aggregate the data. Here, we describe and validate two methods for standardizing measurements across multiple scans: image concatenation and statistical aggregation. We developed a Python script that identifies which images belong to the same sample and returns a single, larger concatenated image. These concatenated images and the original images were processed with RhizoVision Explorer, a free and open-source software. An R script was developed, which identifies rows of data belonging to the same sample and applies correct statistical methods to return a single data row for each sample. These two methods were compared using example images from switchgrass, poplar, and various tree and ericaceous shrub species from a northern peatland and the Arctic. Most root measurements were nearly identical between the two methods except median diameter, which cannot be accurately computed by statistical aggregation. We believe the availability of these methods will be useful to the root biology community.
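Entry 8 provides an R script for the statistical-aggregation route; purely to illustrate the aggregation logic compactly, here is a pandas sketch with hypothetical column names loosely modeled on RhizoVision Explorer output. Consistent with the abstract, a per-sample median diameter cannot be recovered from per-scan summaries, so only the total length and a length-weighted mean diameter are computed.

```python
import pandas as pd

df = pd.read_csv("rhizovision_output.csv")  # one row per scanned image (hypothetical export)
# Scans named like "sampleA_scan1", "sampleA_scan2" collapse to one sample id.
df["sample_id"] = df["file_name"].str.replace(r"_scan\d+$", "", regex=True)

def aggregate(group):
    total_length = group["total_root_length_mm"].sum()
    # Length-weighted mean diameter; a true median cannot be rebuilt from summaries.
    mean_diameter = (group["average_diameter_mm"] * group["total_root_length_mm"]).sum() / total_length
    return pd.Series({"total_root_length_mm": total_length,
                      "average_diameter_mm": mean_diameter})

per_sample = df.groupby("sample_id").apply(aggregate).reset_index()
per_sample.to_csv("rhizovision_per_sample.csv", index=False)
```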

9.
J Pharm Sci; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39389538

ABSTRACT

Subvisible particle count is a biotherapeutics stability indicator widely used by pharmaceutical industries. A variety of stresses that biotherapeutics are exposed to during development can impact particle morphology. By classifying particle morphological differences, stresses that have been applied to monoclonal antibodies (mAbs) can be identified. This study aims to evaluate common biotherapeutic drug storage and shipment conditions that are known to impact protein aggregation. Two studies were conducted to capture particle images using micro-flow imaging and to classify particles using a convolutional neural network. The first study evaluated particles produced in response to agitation, heat, and freeze-thaw stresses in one mAb prepared in five different formulations. The second study evaluated particles from two common drug containers, a high-density polyethylene bottle and a glass vial, in six mAbs exposed solely to agitation stress. An extension of this study also evaluated the impact of sequential stress exposure, compared to exposure to one stress alone, on particle morphology. Overall, the convolutional neural network was able to classify particles belonging to a particular formulation or container. These studies indicate that storage and shipping stresses can impact particle morphology in ways that depend on formulation composition and mAb.

10.
Indian J Otolaryngol Head Neck Surg; 76(5): 4036-4042, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39376269

ABSTRACT

Background: Laryngeal cancer accounts for a third of all head and neck malignancies, necessitating timely detection for effective treatment and enhanced patient outcomes. Machine learning shows promise in medical diagnostics, but the impact of model complexity on diagnostic efficacy in laryngeal cancer detection remains unclear. Methods: In this study, we examine the relationship between model sophistication and diagnostic efficacy by evaluating three approaches: logistic regression, a small neural network with 4 layers of neurons, and a more complex convolutional neural network with 50 layers (ResNet-50), and assess their efficacy for laryngeal cancer detection in computed tomography images. Results: Logistic regression achieved 82.5% accuracy. The 4-layer NN reached 87.2% accuracy, while ResNet-50, a deep learning architecture, achieved the highest accuracy at 92.6%. Its deep learning capabilities excelled in discerning fine-grained CT image features. Conclusion: Our study highlights the trade-offs involved in selecting a laryngeal cancer detection model. Logistic regression is interpretable but may struggle with complex patterns. The 4-layer NN balances complexity and accuracy. ResNet-50 excels in image classification but demands more computational resources. This research advances understanding of how machine learning model complexity affects the learning of laryngeal tumor features in contrast CT images for disease prediction.
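For readers unfamiliar with the largest of the three models compared above, the sketch below adapts a torchvision ResNet-50 for a two-class CT classification head; the pretrained weights, class count, and input shape are assumptions, and the training loop is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a replaced two-class head (benign vs. malignant).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

# CT slices replicated to three channels to match the expected input.
dummy_batch = torch.randn(4, 3, 224, 224)
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 2])
```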

11.
Front Cell Dev Biol; 12: 1420161, 2024.
Article in English | MEDLINE | ID: mdl-39376633

ABSTRACT

A common problem in confocal microscopy is the decrease in intensity of excitation light and emission signal from fluorophores as they travel through 3D specimens, resulting in decreased signal detected as a function of depth. Here, we report a visualization program compatible with widely used fluorophores in cell biology to facilitate image interpretation of differential protein disposition in 3D specimens. Glioblastoma cell clusters were fluorescently labeled for mitochondrial complex I (COXI), P2X7 receptor (P2X7R), β-Actin, Ki-67, and DAPI. Each cell cluster was imaged using a laser scanning confocal microscope. We observed up to ~70% loss in fluorescence signal across the depth of Z-stacks. This progressive underrepresentation of fluorescence intensity as the focal plane deepens hinders an accurate representation of signal location within a 3D structure. To address these challenges, we developed ProDiVis: a program that adjusts apparent fluorescent signals by normalizing one fluorescent signal to a reference signal at each focal plane. ProDiVis serves as a free and accessible, unbiased visualization tool to use in conjunction with fluorescence microscopy images and imaging software.
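A minimal sketch of the per-plane normalization idea described above (not the ProDiVis implementation): each Z-slice of a signal channel is divided by the mean intensity of a reference channel in the same slice, so depth-dependent attenuation common to both channels is factored out. File and channel names are illustrative.

```python
import numpy as np
import tifffile

signal = tifffile.imread("p2x7r_stack.tif").astype(float)     # (Z, Y, X)
reference = tifffile.imread("dapi_stack.tif").astype(float)   # (Z, Y, X)

# One scaling factor per focal plane, taken from the reference channel.
plane_ref_means = reference.mean(axis=(1, 2))
eps = np.finfo(float).eps
normalized = signal / (plane_ref_means[:, None, None] + eps)

tifffile.imwrite("p2x7r_normalized.tif", normalized.astype(np.float32))
```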

12.
Small; e2405065, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39380435

ABSTRACT

DNA visualization has advanced across multiple microscopy platforms, albeit with limited progress in the identification of novel staining agents for electron microscopy (EM), notwithstanding its ability to furnish a broad magnification range and high-resolution detail for observing DNA molecules. Herein, a non-toxic, universal, and simple method is proposed that uses gold nanoparticle-tagged peptides to stain all types of naturally occurring DNA molecules, enabling their visualization under EM. This method enhances current DNA visualization capabilities, allowing for sequence-specific, genomic-scale, and multi-conformational visualization. Importantly, an artificial intelligence (AI)-enabled pipeline is presented for identifying DNA molecules imaged under EM, classifying them based on their size, shape, or conformation, and finally extracting their significant dimensional features, which, to the best of the authors' knowledge, has not been reported before. Owing to its image segmentation capability, this pipeline substantially improved the accuracy of obtaining crucial information such as the number and mean length of DNA molecules in a given EM image for linear DNA (salmon sperm DNA) and the circumferential length and diameter for circular DNA (M13 phage DNA). Furthermore, it remained robust to several variations in the raw EM images arising from handling during the DNA staining stage.

13.
Eur Heart J Imaging Methods Pract; 2(3): qyae094, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39385845

ABSTRACT

Aims: Automated algorithms are regularly used to analyse cardiac magnetic resonance (CMR) images. Validating the reliability of data output from this method is crucial for enabling widespread adoption. We outline a visual quality control (VQC) process for image analysis using automated batch processing. We assess the performance of automated analysis and the reliability of replacing visual checks with a statistical outlier (SO) removal approach in UK Biobank CMR scans. Methods and results: We included 1987 CMR scans from the UK Biobank COVID-19 imaging study. We used batch processing software (Circle Cardiovascular Imaging Inc.-CVI42) to automatically extract chamber volumetric data, strain, native T1, and aortic flow data. The automated analysis outputs (~62 000 videos and 2000 images) were visually checked by six experienced clinicians using a standardized approach and a custom-built R Shiny app. Inter-observer variability was assessed. Data from scans passing VQC were compared with an SO removal QC method in a subset of healthy individuals (n = 1069). Automated segmentation was highly rated, with over 95% of scans passing VQC. Overall inter-observer agreement was very good (Gwet's AC2 0.91; 95% confidence interval 0.84, 0.94). No difference in overall data derived from VQC or SO removal in healthy individuals was observed. Conclusion: Automated image analysis using CVI42 prototypes for UK Biobank CMR scans demonstrated high quality. Larger UK Biobank data sets analysed using these automated algorithms do not require in-depth VQC. SO removal is sufficient as a QC measure, with operator discretion for visual checks based on population or research objectives.
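The study above concludes that statistical outlier (SO) removal can stand in for in-depth visual QC. As a hedged sketch of one simple SO rule (the 3-SD threshold and column names are assumptions, not the study's protocol), the snippet flags scans whose derived metrics fall far from the cohort mean so they can be routed to a visual check.

```python
import pandas as pd

df = pd.read_csv("cmr_automated_outputs.csv")
metrics = ["lvedv_ml", "lvesv_ml", "gls_percent", "native_t1_ms"]  # hypothetical columns

# Z-score each metric across the cohort and flag any scan beyond 3 standard deviations.
z = (df[metrics] - df[metrics].mean()) / df[metrics].std()
df["so_flag"] = (z.abs() > 3).any(axis=1)

df[~df["so_flag"]].to_csv("cmr_passed_qc.csv", index=False)
print("flagged for visual review:", int(df["so_flag"].sum()))
```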

14.
J Biophotonics; e202400143, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39384323

ABSTRACT

Efficient visualization of the vascular system is of key importance in biomedical research into tumor angiogenesis, cerebrovascular alterations, and other angiopathies. Optoacoustic (OA) angiography offers a promising solution combining molecular optical contrast with high resolution and deep penetration of ultrasound. However, its hybrid nature implies complex data collection and processing workflows, with significant variability in methodologies across developers and users. To streamline interoperability, we introduce SKYQUANT 3D, a Python-based set of instructions for the Thermo Fisher Scientific Amira/Avizo 3D Visualization & Analysis Software. Our workflow simplifies the batch processing of volumetric optoacoustic angiography images, extracting meaningful quantitative information while also providing statistical analysis and graphical representation of the results. Quantification performance of SKYQUANT 3D is demonstrated using functional preclinical and clinical in vivo 3D OA angiographic tests involving ambient temperature variations and repositioning of the imaged limb.

15.
J Appl Clin Med Phys; e14548, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39382940

ABSTRACT

PURPOSE: To develop and validate an automated software analysis method for mammography image quality assessment of the American College of Radiology (ACR) digital mammography (DM) phantom images. METHODS: Twenty-seven DICOM images were acquired using Fuji mammography systems. All images were evaluated by three expert medical physicists using the Royal Australian and New Zealand College of Radiologists (RANZCR) mammography quality control guideline. To enhance the robustness and sensitivity assessment of our algorithm, an additional set of 12 images from a Hologic mammography system was included to test various phantom positional adjustments. The software automatically chose multiple regions of interest (ROIs) for analysis. A template matching method was primarily used for image analysis, followed by an additional method that locates and scores each target object (speck groups, fibers, and masses). RESULTS: The software performance shows good to excellent agreement with the average scoring of observers (intraclass correlation coefficients [ICC] of 0.75, 0.79, and 0.82 for speck groups, fibers, and masses, respectively). No significant differences were found in the scoring of target objects between human observers and the software. Both methods achieved scores meeting the pass criteria for speck groups and masses. Expert observers allocated lower scores than the software to fiber objects with diameters less than 0.61 mm. The software was able to accurately score objects when the phantom position was changed by up to 25 mm laterally, rotated by up to 5 degrees, or overhanging the chest wall edge by up to 15 mm. CONCLUSIONS: Automated software analysis is a feasible method that may help improve the consistency and reproducibility of mammography image quality assessment with reduced reliance on human interaction and processing time.
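A minimal OpenCV sketch of the template-matching step named above; the file names, score threshold, and pass/fail rule are illustrative assumptions rather than the validated software's logic.

```python
import cv2

# Phantom image and a template of one target object (hypothetical files).
image = cv2.imread("acr_dm_phantom.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("fiber_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation map; the peak marks the best-matching location.
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

# Simple visibility rule: count the object as detected if the match exceeds a tuned threshold.
print("best match at", max_loc, "score", round(max_score, 3), "visible:", max_score > 0.5)
```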

16.
J Cell Sci; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39219476

ABSTRACT

The enteric nervous system (ENS) consists of an extensive network of neurons and glial cells embedded within the wall of the gastrointestinal (GI) tract. Alterations in neuronal distribution and function are strongly associated with GI dysfunction. Current methods for assessing neuronal distribution suffer from undersampling, partly due to challenges associated with imaging and analyzing large tissue areas, and operator bias due to manual analysis. We present the Gut Analysis Toolbox (GAT), an image analysis tool designed for characterization of enteric neurons and their neurochemical coding using 2D images of GI wholemount preparations. It is developed in Fiji, has a user-friendly interface, and offers rapid and accurate segmentation via custom deep learning (DL)-based cell segmentation models developed using StarDist, and a ganglion segmentation model in deepImageJ. We use proximal neighbor-based spatial analysis to reveal differences in cellular distribution across gut regions using a public dataset. In summary, GAT provides an easy-to-use toolbox to streamline routine image analysis tasks in ENS research. GAT enhances throughput, allowing rapid, unbiased analysis of larger tissue areas, multiple neuronal markers, and numerous samples.

17.
Front Physiol; 15: 1408832, 2024.
Article in English | MEDLINE | ID: mdl-39219839

ABSTRACT

Introduction: Lung image segmentation plays an important role in computer-aided pulmonary disease diagnosis and treatment. Methods: This paper explores lung CT image segmentation with generative adversarial networks. We employ a variety of generative adversarial networks and use their image-translation capability to perform segmentation: the generative adversarial network translates the original lung image into the segmented image. Results: The generative adversarial network-based segmentation method is tested on a real lung image data set. Experimental results show that the proposed method outperforms the state-of-the-art method. Discussion: The generative adversarial network-based approach is effective for lung image segmentation.

18.
bioRxiv; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39229075

ABSTRACT

Zebrafish larvae are used to model the pathogenesis of multiple bacteria. This transparent model offers the unique advantage of allowing quantification of fluorescent bacterial burdens (fluorescent pixel counts: FPC) in vivo by facile microscopical methods, replacing enumeration of bacteria using time-intensive plating of lysates on bacteriological media. Accurate FPC measurements require laborious manual image processing to mark the outside borders of the animals so as to delineate the bacteria inside the animals from those in the surrounding culture medium. Here, we have developed an automated ImageJ/Fiji-based macro that accurately detects the outside borders of Mycobacterium marinum-infected larvae.
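The authors' tool is an ImageJ/Fiji macro; the scikit-image sketch below illustrates the same idea in Python rather than reproducing that macro: segment the larva's outline from a brightfield channel, then count fluorescent pixels (FPC) only inside the resulting mask. File names and the fluorescence threshold are assumptions.

```python
import numpy as np
import tifffile
from scipy.ndimage import binary_fill_holes
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

brightfield = tifffile.imread("larva_brightfield.tif")
fluorescence = tifffile.imread("larva_fluorescence.tif")

# The larva appears darker than the surrounding medium in brightfield.
mask = brightfield < threshold_otsu(brightfield)
mask = binary_fill_holes(remove_small_objects(mask, min_size=5000))

# Fluorescent pixel count restricted to the animal's borders (threshold is an assumption).
fpc = int(np.count_nonzero((fluorescence > 200) & mask))
print("fluorescent pixel count inside larva:", fpc)
```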

19.
Comput Biol Med; 182: 109100, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39244959

ABSTRACT

Automated computer-aided diagnosis (CAD) is becoming more significant in the field of medicine due to advancements in computer hardware performance and the progress of artificial intelligence. The knowledge graph is a structure for visually representing knowledge facts. In the last decade, a large body of work based on knowledge graphs has effectively improved the organization and interpretability of large-scale complex knowledge. Introducing knowledge graph inference into CAD is a research direction with significant potential. In this review, we first briefly review the basic principles and application methods of knowledge graphs. Then, we systematically organize and analyze the research and application of knowledge graphs in medical imaging-assisted diagnosis. We also summarize the shortcomings of the current research, such as medical data barriers and deficiencies, low utilization of multimodal information, and weak interpretability. Finally, we propose future research directions with the potential to address the shortcomings of current approaches.

20.
J Sci Food Agric; 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39247997

ABSTRACT

BACKGROUND: Determining the freshness of chilled pork is of paramount importance to consumers worldwide. Established freshness indicators such as total viable count, total volatile basic nitrogen and pH are destructive and time-consuming. Color change in chilled pork is also associated with freshness. However, traditional detection methods using handheld colorimeters are expensive, inconvenient and prone to limitations in accuracy. Substantial progress has been made in methods for pork preservation and freshness evaluation. However, traditional methods often necessitate expensive equipment or specialized expertise, restricting their accessibility to general consumers and small-scale traders. Therefore, developing a user-friendly, rapid and economical method is of particular importance. RESULTS: This study conducted image analysis of photographs captured by smartphone cameras of chilled pork stored at 4 °C for 7 days. The analysis tracked color changes, which were then used to develop predictive models for freshness indicators. Compared to handheld colorimeters, smartphone image analysis demonstrated superior stability and accuracy in color data acquisition. Machine learning regression models, particularly the random forest and decision tree models, achieved prediction accuracies of more than 80% and 90%, respectively. CONCLUSION: Our study provides a feasible and practical non-destructive approach to determining the freshness of chilled pork. © 2024 Society of Chemical Industry.
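A hedged sketch of the regression step described above: fit a random forest to mean color features extracted from smartphone photos to predict a freshness indicator such as total volatile basic nitrogen. The feature set, column names, and split are assumptions, not the study's exact protocol.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Mean color values per photo plus the measured freshness indicator (hypothetical columns).
df = pd.read_csv("pork_color_features.csv")
X, y = df[["r_mean", "g_mean", "b_mean"]], df["tvbn_mg_100g"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```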
