Results 1 - 20 of 44
1.
J Urol ; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP). MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard. RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence‒enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%). CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
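The 92.8% concordance reported above is, at heart, the fraction of time segments where the algorithm's step label matches the human label. A minimal sketch of that computation (the step names and per-minute labels here are hypothetical, not from the study):

```python
def concordance(pred_steps, ref_steps):
    """Percent agreement between predicted and reference step labels."""
    assert len(pred_steps) == len(ref_steps)
    matches = sum(p == r for p, r in zip(pred_steps, ref_steps))
    return 100.0 * matches / len(ref_steps)

# Hypothetical per-minute step labels for one case
ref  = ["dissection", "dissection", "anastomosis", "anastomosis", "inspection"]
pred = ["dissection", "dissection", "anastomosis", "inspection", "inspection"]
print(concordance(pred, ref))  # 80.0
```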


Subjects
Prostatectomy; Robotic Surgical Procedures; Humans; Male; Artificial Intelligence; Educational Status; Prostate/surgery; Prostatectomy/methods; Robotic Surgical Procedures/methods; Video Recording
2.
Respir Res ; 24(1): 241, 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37798709

ABSTRACT

BACKGROUND: Computed tomography (CT) imaging and artificial intelligence (AI)-based analyses have aided in the diagnosis and prediction of the severity of COVID-19. However, the potential of AI-based CT quantification of pneumonia in assessing patients with COVID-19 has not yet been fully explored. This study aimed to investigate the potential of AI-based CT quantification of COVID-19 pneumonia to predict critical outcomes and the clinical characteristics of patients with residual lung lesions. METHODS: This retrospective cohort study included 1,200 hospitalized patients with COVID-19 from four hospitals. The incidence of critical outcomes (requiring high-flow oxygen support or invasive mechanical ventilation [IMV], or death) and of complications during hospitalization (bacterial infection, renal failure, heart failure, thromboembolism, and liver dysfunction) was compared between pneumonia groups with high versus low percentages of lung lesions, based on AI-based CT quantification. Additionally, 198 patients underwent CT scans 3 months after admission to analyze prognostic factors for residual lung lesions. RESULTS: The pneumonia group with a high percentage of lung lesions (N = 400) had a higher incidence of critical outcomes and complications during hospitalization than the low-percentage group (N = 800). Multivariable analysis demonstrated that AI-based CT quantification of pneumonia was independently associated with critical outcomes (adjusted odds ratio [aOR] 10.5, 95% confidence interval [CI] 5.59-19.7), as well as with oxygen requirement (aOR 6.35, 95% CI 4.60-8.76), IMV requirement (aOR 7.73, 95% CI 2.52-23.7), and mortality rate (aOR 6.46, 95% CI 1.87-22.3).
Among patients with follow-up CT scans (N = 198), multivariable analysis revealed that a high percentage of lung lesions on admission (aOR 4.74, 95% CI 2.36-9.52), older age (aOR 2.53, 95% CI 1.16-5.51), female sex (aOR 2.41, 95% CI 1.13-5.11), and a medical history of hypertension (aOR 2.22, 95% CI 1.09-4.50) independently predicted persistent residual lung lesions. CONCLUSIONS: AI-based CT quantification of pneumonia provides valuable information beyond qualitative evaluation by physicians, enabling the prediction of critical outcomes and residual lung lesions in patients with COVID-19.
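The adjusted odds ratios above come from logistic-regression coefficients via aOR = exp(β), with the Wald 95% CI given by exp(β ± 1.96·SE). A sketch (the coefficient and standard error below are back-solved from the reported aOR 10.5, 95% CI 5.59-19.7, purely for illustration):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Adjusted odds ratio and Wald 95% CI from a logistic-regression
    coefficient beta with standard error se."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# beta and se back-solved from the reported aOR 10.5 (95% CI 5.59-19.7)
or_, lo, hi = odds_ratio_ci(2.35, 0.32)
print(round(or_, 1), round(lo, 2), round(hi, 2))  # 10.5 5.6 19.63
```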


Subjects
COVID-19; Pneumonia; Humans; Female; COVID-19/diagnostic imaging; COVID-19/pathology; Artificial Intelligence; Retrospective Studies; Japan/epidemiology; SARS-CoV-2; Lung/pathology; Pneumonia/pathology; Tomography, X-Ray Computed/methods; Oxygen
3.
J Sci Food Agric ; 103(6): 3093-3101, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36418909

ABSTRACT

BACKGROUND: Intelligent monitoring of fixation quality is a prerequisite for automated green tea processing. To meet the requirements of intelligent monitoring of fixation quality in large-scale production, fast and non-destructive detection methods are urgently needed. Here, smartphone-coupled micro near-infrared spectroscopy and a self-built computer vision system were used for rapid detection of fixation quality in green tea processing lines. RESULTS: Spectral and image information from green tea samples with different fixation degrees was collected at-line by the two intelligent monitoring sensors. Competitive adaptive reweighted sampling and correlation analysis were employed to select feature variables from the spectral and color information, respectively, as the target data for modeling. The least squares support vector machine (LS-SVM) models developed from spectral information and from image information both achieved the best discrimination of sample fixation degree, each with a prediction-set accuracy of 100%. Compared with the spectral information, the image information-based support vector regression model performed better in moisture prediction, with a correlation coefficient of prediction of 0.9884 and a residual predictive deviation of 6.46. CONCLUSION: The present study provides a rapid and low-cost means of monitoring fixation quality, as well as theoretical support and technical guidance for automation of the green tea fixation process. © 2022 Society of Chemical Industry.
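The abstract does not publish its modeling code; as a rough sketch of the classification setup, scikit-learn's SVC can stand in for LS-SVM (a variant that replaces the hinge loss with a squared loss and is not available in scikit-learn), here on synthetic "spectra":

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic "spectra": 3 fixation degrees, each a shifted baseline plus noise
X = np.vstack([rng.normal(loc=deg, scale=0.1, size=(30, 50))
               for deg in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 30)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])  # even rows: training
acc = clf.score(X[1::2], y[1::2])            # odd rows: testing
print(acc)
```

With well-separated synthetic classes the discrimination is trivially perfect, mirroring the 100% prediction-set accuracy the authors report on their real data.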


Assuntos
Espectroscopia de Luz Próxima ao Infravermelho , Chá , Chá/química , Espectroscopia de Luz Próxima ao Infravermelho/métodos , Análise dos Mínimos Quadrados , Máquina de Vetores de Suporte
4.
Anim Genet ; 52(5): 633-644, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34291482

ABSTRACT

Intramuscular fat (IMF) content is a critical indicator of pork quality that directly affects consumers' willingness to purchase. However, measuring IMF content is both laborious and costly, hindering study of its genetic determinants and its improvement. In the present study, we constructed an accurate and fast image acquisition and analysis system to extract and calculate a digital IMF content, the proportion of fat areas in the image (PFAI), for the longissimus muscle of 1709 animals from multiple pig populations. PFAI was highly significantly correlated with marbling scores (MS; r = 0.95, r² = 0.90) and with IMF contents chemically determined for 80 samples (r = 0.79, r² = 0.63; more accurate than direct comparison between IMF contents and MS). The processing time for one image is only 2.31 s. Genome-wide association analysis of PFAI for all 1709 animals identified 14 suggestively significant SNPs and 1 genome-wide significant SNP. For MS, we identified nine suggestively significant SNPs, seven of which were also identified for PFAI. Furthermore, the significance (-log P) values of the seven shared SNPs were higher for PFAI than for MS. Novel candidate genes of biological importance for IMF content were also discovered. Our imaging system developed for prediction of digital IMF content is closer to IMF measured by Soxhlet extraction and slightly more accurate than MS. It enables fast and high-throughput IMF phenotyping, which can be used to improve pork quality.
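PFAI is a pixel-fraction measure: the share of the muscle image classified as fat. A minimal sketch with a hypothetical brightness threshold (the paper's actual segmentation pipeline is not specified in the abstract):

```python
import numpy as np

def pfai(gray_image, fat_threshold=200):
    """Proportion of fat area in the image: fraction of pixels at or above
    a brightness threshold (fat appears bright in muscle cross-sections)."""
    fat_mask = gray_image >= fat_threshold
    return fat_mask.mean()

# Hypothetical 4x4 grayscale patch: 3 bright "fat" pixels out of 16
img = np.full((4, 4), 120, dtype=np.uint8)
img[0, 0] = img[1, 2] = img[3, 3] = 230
print(pfai(img))  # 0.1875
```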


Assuntos
Tecido Adiposo/fisiologia , Músculo Esquelético/fisiologia , Carne de Porco , Sus scrofa/genética , Animais , Feminino , Estudos de Associação Genética/veterinária , Masculino , Fenótipo
5.
Sensors (Basel) ; 21(4)2021 Feb 21.
Article in English | MEDLINE | ID: mdl-33670030

ABSTRACT

Convolutional neural network (CNN)-based computer vision systems have been increasingly applied in animal farming to improve animal management, but current knowledge, practices, limitations, and solutions for these applications remain to be expanded and explored. The objective of this study is to systematically review applications of CNN-based computer vision systems in animal farming in terms of the five deep learning computer vision tasks: image classification, object detection, semantic/instance segmentation, pose estimation, and tracking. Cattle, sheep/goats, pigs, and poultry were the major farm animal species of concern. In this research, preparations for system development, including camera settings, inclusion of variations in data recordings, choices of graphics processing units, image preprocessing, and data labeling, were summarized. CNN architectures were reviewed based on the computer vision tasks in animal farming. Strategies for algorithm development included distribution of development data, data augmentation, hyperparameter tuning, and selection of evaluation metrics. Approaches to judging model performance, and performance differences among architectures, were discussed. Besides practices for optimizing CNN-based computer vision systems, system applications were also organized by year, country, animal species, and purpose. Finally, recommendations for future research were provided to develop and improve CNN-based computer vision systems for improved welfare, environment, engineering, genetics, and management of farm animals.


Assuntos
Criação de Animais Domésticos/instrumentação , Processamento de Imagem Assistida por Computador , Redes Neurais de Computação , Algoritmos , Animais , Animais Domésticos , Bovinos , Cabras , Aves Domésticas , Ovinos , Suínos
6.
J Food Sci Technol ; 56(4): 2305-2311, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30996464

ABSTRACT

Maturity of a citrus fruit is generally expressed by a numerical value called the citrus colour index (CCI). The success of methods employed in estimating maturity depends on the cultivar and the climatic conditions of the growing regions. In this work, an image processing-based method using the CIELAB color model was developed to estimate the CCI of Kinnow mandarin fruits. A polynomial-transformation-based camera characterization method was employed to reduce the number of transformations required for the RGB to L*a*b* colour space conversion, which resulted in a colour difference of 2.191 under the CIELAB ΔE*2000 colour difference formula. To analyse the performance of this method, linear regression and partial least squares (PLS) models were built on a dataset of 271 Kinnow fruit images, with a spectrophotometer used for validation of the computed CCI values. The proposed method achieved a high adjusted R² value of 0.9660 using PLS regression, which confirms the feasibility of an image processing-based system for estimating the maturity of Kinnow fruits. Additionally, a correlation analysis between colour coordinates and physicochemical properties was conducted to analyse the relation between the fruit's external peel colour and its internal characteristics.
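The RGB → L*a*b* conversion underlying this method can be sketched with the standard sRGB/D65 formulas; the paper's polynomial camera characterization would replace or correct the device-dependent first step (this is the textbook conversion, not the authors' calibrated one):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet (0-255) to CIELAB under a D65 white point."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Undo sRGB gamma to get linear RGB
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65), then normalize by the white point
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ c / np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

print(srgb_to_lab([255, 255, 255]))  # approximately (100.0, 0.0, 0.0)
```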

7.
Telemed J E Health ; 23(12): 976-982, 2017 12.
Article in English | MEDLINE | ID: mdl-28537789

ABSTRACT

OBJECTIVE: This work sought to evaluate the precision and repeatability of a telepathology prototype based on open software and hardware. MATERIALS AND METHODS: A prototype was designed for application in telepathology and telemicroscopy. Prototype accuracy and precision were evaluated by calculating the mean absolute error and the intraclass and repeatability correlation coefficients for a series of 190 displacements at 10, 25, 50, 75, and 100 µm. RESULTS AND CONCLUSIONS: This work developed a low-cost prototype that is accessible, easily reproducible, implementable, and scalable, based on technology created under open-software and open-hardware principles. A pathologist reviewed the obtained images and found them to be of diagnostic quality. Its excellent repeatability, coupled with its good accuracy, allows for its application in telemicroscopy and in static, dynamic, and whole-slide imaging pathology systems.
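The accuracy metric used here, mean absolute error over repeated commanded displacements, is straightforward to compute (the displacement readings below are hypothetical):

```python
def mean_absolute_error(measured, commanded):
    """MAE between measured and commanded stage displacements (micrometres)."""
    return sum(abs(m - c) for m, c in zip(measured, commanded)) / len(measured)

# Hypothetical measurements for five commanded 10-um steps
commanded = [10, 10, 10, 10, 10]
measured  = [10.2, 9.7, 10.1, 9.9, 10.3]
print(mean_absolute_error(measured, commanded))  # approximately 0.2
```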


Assuntos
Telepatologia/instrumentação , Telepatologia/normas , Humanos , Microscopia , Impressão Tridimensional , Consulta Remota , Reprodutibilidade dos Testes , Design de Software
8.
J Sci Food Agric ; 96(14): 4785-4796, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27322542

ABSTRACT

BACKGROUND: Pretreatment is a crucial stage of the drying process. The best pretreatment for hot-air drying of kiwifruit was investigated using a computer vision system (CVS) for online monitoring of drying attributes, including drying time, colour changes, and shrinkage, as decision criteria, together with a clustering method. Slices were dried at 70 °C with hot water blanching (HWB), steam blanching (SB), infrared blanching (IR), and ascorbic acid 1% w/w (AA) as pretreatments, each with three durations of 5, 10 and 15 min. RESULTS: The results showed that the cells in HWB-pretreated samples stretched without any cell wall rupture, while the highest damage was observed in the microstructure of AA-pretreated kiwifruit. Increasing the duration of AA and HWB significantly lengthened the drying time, while SB showed the opposite effect. The drying rate had a profound effect on the progression of shrinkage. The total colour change of pretreated samples was higher than that of samples with no pretreatment, except for AA and HWB. AA could effectively prevent colour change during the initial stage of drying. Among all pretreatments, SB and IR produced the highest colour changes. CONCLUSION: HWB with a duration of 5 min is the optimum pretreatment method for kiwifruit drying. © 2016 Society of Chemical Industry.
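"Total colour change" in drying studies is conventionally the CIE76 colour difference ΔE* between CIELAB readings before and after drying; a sketch with hypothetical readings (the abstract does not state its exact formula):

```python
def delta_e(lab1, lab2):
    """CIE76 total colour change ΔE* between two CIELAB readings."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2)) ** 0.5

# Hypothetical kiwifruit slice colour before and after drying
fresh = (55.0, -18.0, 30.0)   # L*, a*, b*
dried = (49.0, -10.0, 36.0)
print(round(delta_e(fresh, dried), 2))  # 11.66
```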


Assuntos
Actinidia/química , Manipulação de Alimentos/métodos , Frutas/química , Processamento de Imagem Assistida por Computador/métodos , Temperatura Alta , Fatores de Tempo , Água
9.
J Pharm Bioallied Sci ; 16(Suppl 1): S466-S468, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38595489

ABSTRACT

Aim: The research project focuses on the creation and assessment of an innovative computer vision system designed to identify dental irregularities in individuals undergoing orthodontic treatment. Materials and Methods: To establish the computer vision system, a comprehensive dataset of dental images was collected, encompassing various orthodontic cases. The system's algorithm was trained to recognize patterns indicative of common dental anomalies, such as malocclusions, spacing issues, and misalignments. Rigorous testing and refinement of the algorithm were conducted to enhance its accuracy and reliability. Results: The system was validated using the dental records and images of 40 patients. The computer vision system's performance was evaluated against assessments made by experienced orthodontists. The results demonstrated a commendable level of agreement between the system's automated detections and the orthodontists' evaluations, suggesting its potential as a valuable diagnostic tool. Conclusion: The development and validation of this novel computer vision system exhibit promising outcomes in its ability to automatically detect dental anomalies in orthodontic patients.

10.
Foods ; 13(16)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39200443

ABSTRACT

Color characteristics are a crucial indicator of green tea quality, particularly in needle-shaped green tea, and are predominantly evaluated through subjective sensory analysis. An objective, precise, and efficient assessment methodology is therefore needed. In this study, 885 images from 157 samples, obtained through computer vision technology, were used to predict sensory evaluation results based on the color features of the images. Three machine learning methods, Random Forest (RF), Support Vector Machine (SVM), and Decision Tree-based AdaBoost (DT-AdaBoost), were applied to construct the color quality evaluation model. Notably, the DT-AdaBoost model shows significant potential for application in evaluating tea quality, with a correct discrimination rate (CDR) of 98.50% and a relative percent deviation (RPD) of 14.827 on the 266 samples used to verify the accuracy of the model. This result indicates that the integration of computer vision with machine learning models presents an effective approach for assessing the color quality of needle-shaped green tea.

11.
Article in English | MEDLINE | ID: mdl-39401253

ABSTRACT

OBJECTIVES: Human monitoring of personal protective equipment (PPE) adherence among healthcare providers has several limitations, including the need for additional personnel during staff shortages and decreased vigilance during prolonged tasks. To address these challenges, we developed an automated computer vision system for monitoring PPE adherence in healthcare settings. We assessed the system performance against human observers detecting nonadherence in a video surveillance experiment. MATERIALS AND METHODS: The automated system was trained to detect 15 classes of eyewear, masks, gloves, and gowns using an object detector and tracker. To assess how the system performs compared to human observers in detecting nonadherence, we designed a video surveillance experiment under 2 conditions: variations in video durations (20, 40, and 60 seconds) and the number of individuals in the videos (3 versus 6). Twelve nurses participated as human observers. Performance was assessed based on the number of detections of nonadherence. RESULTS: Human observers detected fewer instances of nonadherence than the system (parameter estimate -0.3, 95% CI -0.4 to -0.2, P < .001). Human observers detected more nonadherence during longer video durations (parameter estimate 0.7, 95% CI 0.4-1.0, P < .001). The system achieved a sensitivity of 0.86, specificity of 1, and Matthews correlation coefficient of 0.82 for detecting PPE nonadherence. DISCUSSION: The automated system simultaneously tracks multiple objects and individuals. Its performance is also independent of observation duration, an improvement over human monitoring. CONCLUSION: The automated system presents a potential solution for scalable monitoring of hospital-wide infection control practices and improving PPE usage in healthcare settings.
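The reported sensitivity, specificity, and Matthews correlation coefficient all derive from confusion-matrix counts. A sketch with hypothetical counts chosen to match the reported sensitivity and specificity (the study's true counts are not given in the abstract, so the resulting MCC differs from the reported 0.82):

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts consistent with sensitivity 0.86 and specificity 1
tp, fn, fp, tn = 86, 14, 0, 100
sensitivity = tp / (tp + fn)   # 0.86
specificity = tn / (tn + fp)   # 1.0
print(sensitivity, specificity, round(mcc(tp, fp, fn, tn), 2))
```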

12.
Article in English | MEDLINE | ID: mdl-38978825

ABSTRACT

Background: The American Optometric Association defines computer vision syndrome (CVS), also known as digital eye strain, as "a group of eye- and vision-related problems that result from prolonged computer, tablet, e-reader and cell phone use". We aimed to create a well-structured, valid, and reliable questionnaire to determine the prevalence of CVS, and to analyze the visual, ocular surface, and extraocular sequelae of CVS using a novel and smart self-assessment questionnaire. Methods: This multicenter, observational, cross-sectional, descriptive, survey-based, online study included 6853 complete online responses of medical students from 15 universities. All participants responded to the updated, online, fourth version of the CVS questionnaire (CVS-F4), which has high validity and reliability. CVS was diagnosed according to five basic diagnostic criteria (5DC) derived from the CVS-F4. Respondents who fulfilled the 5DC were considered CVS cases. The 5DC were then converted into a novel five-question self-assessment questionnaire designated as the CVS-Smart. Results: Of 10 000 invited medical students, 8006 responded to the CVS-F4 survey (80% response rate), while 6853 of the 8006 respondents provided complete online responses (85.6% completion rate). The overall CVS prevalence was 58.78% (n = 4028) among the study respondents; CVS prevalence was higher among women (65.87%) than among men (48.06%). Within the CVS group, the most common visual, ocular surface, and extraocular complaints were eye strain, dry eye, and neck/shoulder/back pain in 74.50% (n = 3001), 58.27% (n = 2347), and 80.52% (n = 3244) of CVS cases, respectively. Notably, 75.92% (3058/4028) of CVS cases were involved in the Mandated Computer System Use Program. 
Multivariate logistic regression analysis revealed that the two most statistically significant diagnostic criteria of the 5DC were ≥2 symptoms/attacks per month over the last 12 months (odds ratio [OR] = 204177.2; P <0.0001) and symptoms/attacks associated with screen use (OR = 16047.34; P <0.0001). The CVS-Smart demonstrated a Cronbach's alpha reliability coefficient of 0.860, Guttman split-half coefficient of 0.805, with perfect content and construct validity. A CVS-Smart score of 7-10 points indicated the presence of CVS. Conclusions: The visual, ocular surface, and extraocular diagnostic criteria for CVS constituted the basic components of CVS-Smart. CVS-Smart is a novel, valid, reliable, subjective instrument for determining CVS diagnosis and prevalence and may provide a tool for rapid periodic assessment and prognostication. Individuals with positive CVS-Smart results should consider modifying their lifestyles and screen styles and seeking the help of ophthalmologists and/or optometrists. Higher institutional authorities should consider revising the Mandated Computer System Use Program to avoid the long-term consequences of CVS among university students. Further research must compare CVS-Smart with other available metrics for CVS, such as the CVS questionnaire, to determine its test-retest reliability and to justify its widespread use.
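The Cronbach's alpha reported for CVS-Smart measures internal consistency across the five items; it can be computed directly from a respondents-by-items score matrix (the responses below are hypothetical, not survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses to a 5-item questionnaire (rows = respondents)
scores = np.array([[2, 2, 1, 2, 2],
                   [1, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0],
                   [2, 2, 2, 2, 2],
                   [1, 1, 0, 1, 1]])
print(round(cronbach_alpha(scores), 3))  # 0.967
```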

13.
Article in English | MEDLINE | ID: mdl-38978827

ABSTRACT

Background: Diabetic retinopathy (DR), a sight-threatening ocular complication of diabetes mellitus, is one of the main causes of blindness in the working-age population. Dyslipidemia is a potential risk factor for the development or worsening of DR, with conflicting evidence in epidemiological studies. Fenofibrate, an antihyperlipidemic agent, has lipid-modifying and pleiotropic (non-lipid) effects that may lessen the incidence of microvascular events. Methods: Relevant studies were identified through a PubMed/MEDLINE search spanning the last 20 years, using the broad term "diabetic retinopathy" and specific terms "fenofibrate" and "dyslipidemia". References cited in these studies were further examined to compile this mini-review. These pivotal investigations underwent meticulous scrutiny and synthesis, focusing on methodological approaches and clinical outcomes. Furthermore, we provided the main findings of the seminal studies in a table to enhance comprehension and comparison. Results: Growing evidence indicates that fenofibrate treatment slows DR advancement owing to its possible protective effects on the blood-retinal barrier. The protective attributes of fenofibrate against DR progression and development can be broadly classified into two categories: lipid-modifying effects and non-lipid-related (pleiotropic) effects. The lipid-modifying effect is mediated through peroxisome proliferator-activated receptor-α activation, while the pleiotropic effects involve the reduction in serum levels of C-reactive protein, fibrinogen, and pro-inflammatory markers, and improvement in flow-mediated dilatation. In patients with DR, the lipid-modifying effects of fenofibrate primarily involve a reduction in lipoprotein-associated phospholipase A2 levels and the upregulation of apolipoprotein A1 levels. These changes contribute to the anti-inflammatory and anti-angiogenic effects of fenofibrate. 
Fenofibrate elicits a diverse array of pleiotropic effects, including anti-apoptotic, antioxidant, anti-inflammatory, and anti-angiogenic properties, along with the indirect consequences of these effects. Two randomized controlled trials-the Fenofibrate Intervention and Event Lowering in Diabetes and Action to Control Cardiovascular Risk in Diabetes studies-noted that fenofibrate treatment protected against DR progression, independent of serum lipid levels. Conclusions: Fenofibrate, an oral antihyperlipidemic agent that is effective in decreasing DR progression, may reduce the number of patients who develop vision-threatening complications and require invasive treatment. Despite its proven protection against DR progression, fenofibrate treatment has not yet gained wide clinical acceptance in DR management. Ongoing and future clinical trials may clarify the role of fenofibrate treatment in DR management.

14.
Article in English | MEDLINE | ID: mdl-38978826

ABSTRACT

Background: Vascular endothelial growth factor (VEGF) is the primary substance involved in retinal barrier breach. VEGF overexpression may cause diabetic macular edema (DME). Laser photocoagulation of the macula is the standard treatment for DME; however, recently, intravitreal anti-VEGF injections have surpassed laser treatment. Our aim was to evaluate the efficacy of intravitreal injections of aflibercept or ranibizumab for managing treatment-naive DME. Methods: This single-center, retrospective, interventional, comparative study included eyes with visual impairment due to treatment-naive DME that underwent intravitreal injection of either aflibercept 2 mg/0.05 mL or ranibizumab 0.5 mg/0.05 mL at Al-Azhar University Hospitals, Egypt, between March 2023 and January 2024. Demographic data and full ophthalmological examination results at baseline and 1, 3, and 6 months post-injection were collected, including the best-corrected distance visual acuity (BCDVA) in logarithm of the minimum angle of resolution (logMAR) notation, slit-lamp biomicroscopy, dilated fundoscopy, and central subfield thickness (CST) measured using spectral-domain optical coherence tomography. Results: Overall, 96 eyes of 96 patients with a median (interquartile range [IQR]) age of 57 (10) (range: 20-74) years and a male-to-female ratio of 1:2.7 were allocated to one of two groups with comparable age, sex, diabetes mellitus duration, and presence of other comorbidities (all P >0.05). There was no statistically significant difference in baseline diabetic retinopathy status or DME type between groups (both P >0.05). In both groups, the median (IQR) BCDVA significantly improved from 0.7 (0.8) logMAR at baseline to 0.4 (0.1) logMAR at 6 months post-injection (both P = 0.001), with no statistically significant difference between groups at all follow-up visits (all P >0.05).
The median (IQR) CST significantly decreased in the aflibercept group from 347 (166) µm at baseline to 180 (233) µm at 6 months post-injection, and it decreased in the ranibizumab group from 360 (180) µm at baseline to 190 (224) µm at 6 months post-injection (both P = 0.001), with no statistically significant differences between groups at all follow-up visits (all P >0.05). No serious adverse effects were documented in either group. Conclusions: Ranibizumab and aflibercept were equally effective in achieving the desired anatomical and functional results in patients with treatment-naïve DME in short-term follow-up without significant differences in injection counts between both drugs. Larger prospective, randomized, double-blinded trials with longer follow-up periods are needed to confirm our preliminary results.

15.
Front Radiol ; 3: 1251825, 2023.
Article in English | MEDLINE | ID: mdl-38089643

ABSTRACT

Unlocking the vast potential of deep learning-based computer vision classification systems necessitates large data sets for model training. Natural Language Processing (NLP), by automating dataset labelling, represents a potential avenue to achieve this. However, many aspects of NLP for dataset labelling remain unvalidated. Expert radiologists manually labelled over 5,000 MRI head reports in order to develop a deep learning-based neuroradiology NLP report classifier. Our results demonstrate that binary labels (normal vs. abnormal) showed high rates of accuracy, even when only two MRI sequences (T2-weighted and those based on diffusion-weighted imaging) were employed, as opposed to all sequences in an examination. Meanwhile, the accuracy of more specific labelling for multiple disease categories was variable and dependent on the category. Finally, resultant model performance was shown to depend on the expertise of the original labeller, with worse performance seen with non-expert versus expert labellers.

16.
Meat Sci ; 200: 109159, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36934522

ABSTRACT

Water holding capacity (WHC) plays an important role in obtaining high-quality pork. This attribute is usually estimated by pressing the meat and measuring the amount of water expelled by the sample and absorbed by a filter paper. In this work, we used the deep learning architecture U-Net to estimate WHC from filter-paper images of pork samples obtained using the press method. We evaluated the ability of the U-Net to segment the different regions of the WHC images and, since the images are much larger than the traditional input size of the U-Net, we also evaluated its performance when the input size changes. Results show that the U-Net can segment the external and internal areas of the WHC images with great precision, even though the difference in the appearance of these areas is subtle.
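Segmentation quality of this kind is commonly scored with the Dice coefficient, a standard overlap metric (not one the abstract names); a minimal sketch on toy masks:

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice coefficient between two binary segmentation masks."""
    pred, true = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    inter = np.logical_and(pred, true).sum()
    total = pred.sum() + true.sum()
    return 2 * inter / total if total else 1.0

true = np.zeros((4, 4), bool); true[1:3, 1:3] = True   # 4-pixel region
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True   # 6-pixel prediction
print(dice(pred, true))  # 0.8
```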


Assuntos
Aprendizado Profundo , Carne de Porco , Carne Vermelha , Animais , Suínos , Água , Carne/análise
17.
Heliyon ; 9(7): e17976, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37519729

ABSTRACT

The quality of beef products relies on the presence of a cherry-red color, as any deviation toward brownish tones indicates a loss in quality. Existing studies typically analyze individual color channels separately, establishing acceptable ranges. In contrast, our proposed approach involves conducting a multivariate analysis of beef color changes using white-box machine learning techniques. Our proposal encompasses three phases. (1) We employed a computer vision system (CVS) to capture the color of beef pieces, implementing a color-correction pre-processing step within a specially designed cabin. (2) We examined the differences among three color spaces (RGB, HSV, and CIELAB). (3) We evaluated the performance of three white-box classifiers (decision tree, logistic regression, and multivariate normal distributions) for predicting color in both fresh and non-fresh beef. These models demonstrated high accuracy and enabled a comprehensive understanding of the prediction process. Our results affirm that conducting a multivariate analysis yields superior beef color prediction outcomes compared with the conventional practice of analyzing each channel independently.
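Of the three white-box classifiers, the multivariate-normal one is the simplest to sketch: fit one Gaussian per class in colour space and assign a sample to the class with the higher likelihood (the CIELAB cluster centres below are invented for illustration, not the study's values):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
# Synthetic CIELAB colours: fresh (cherry red) vs non-fresh (brownish)
fresh     = rng.normal([45, 50, 30], 3.0, size=(100, 3))
non_fresh = rng.normal([35, 25, 25], 3.0, size=(100, 3))

# Fit one Gaussian per class: a fully interpretable, white-box model
dists = [multivariate_normal(c.mean(axis=0), np.cov(c.T))
         for c in (fresh, non_fresh)]

def classify(lab):
    """0 = fresh, 1 = non-fresh: pick the class with higher likelihood."""
    return int(np.argmax([d.pdf(lab) for d in dists]))

print(classify([46, 52, 31]), classify([34, 24, 26]))  # 0 1
```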

18.
Data Brief ; 43: 108422, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35811654

ABSTRACT

Cauliflower, a winter-season vegetable that originated in the Mediterranean region and arrived in Europe at the end of the 15th century, takes the lead in production among all vegetables. It is high in fiber, can help keep us hydrated, and contains compounds with medicinal properties, such as glucosinolates, which may help prevent cancer. If proper care is not given to the plants, several significant diseases can affect them, reducing production, quantity, and quality. Monitoring plant disease by hand is extremely difficult because it demands a great deal of effort and time. Early detection of diseases allows the agriculture sector to grow cauliflower more efficiently. In this scenario, an insightful and scientific dataset can be invaluable for researchers looking to analyze and observe different diseases in cauliflower development patterns. So, in this work, we present a well-organized and technically valuable dataset, "VegNet", for effectively recognizing conditions in cauliflower plants and fruits. Healthy cauliflower heads and leaves, and those affected by black rot, downy mildew, and bacterial spot rot, are included in our suggested dataset. The images were taken manually from December 20th to January 15th, when the flowers were fully blown and most of the diseases were clearly observable. It is a well-organized dataset for developing and validating machine learning-based automated cauliflower disease detection algorithms. The dataset is hosted by the National Institute of Textile Engineering and Research (NITER), Department of Computer Science and Engineering, and is available at the following link: https://data.mendeley.com/datasets/t5sssfgn2v/3.

19.
Meat Sci ; 192: 108904, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35841854

ABSTRACT

Computer vision systems (CVS) are applied to macro- and microscopic digital photographs captured using digital cameras, ultrasound scanners, computed tomography, and wide-angle imaging cameras. Diverse image acquisition devices make it technically feasible to obtain information about both the external features and internal structures of targeted objects. Attributes measured with CVS can be used to evaluate meat quality. CVS are also used in research on assessing the composition of animal carcasses, which may help determine the impact of cross-breeding or rearing systems on meat quality. Results obtained by the CVS technique also contribute to assessing the impact of technological treatments on the quality of raw and cooked meat. CVS have many positive attributes, including objectivity, non-invasiveness, speed, and low cost of analysis, and the systems are under constant development and improvement. The present review covers computer vision system techniques, stages of measurement, and the possibilities of using these to assess carcass and meat quality.


Subjects
Image Processing, Computer-Assisted; Meat; Animals; Artificial Intelligence; Cooking; Image Processing, Computer-Assisted/methods; Meat/analysis
20.
Food Res Int ; 143: 110230, 2021 05.
Article in English | MEDLINE | ID: mdl-33992344

ABSTRACT

Color is a main factor in the perception of food product quality. Food surfaces are often not homogeneous at micro-, meso-, and macroscopic scales. This matrix can include a variety of colors that are subject to change during food processing. These different colors can be analyzed to provide more information than the average color alone. The objective of this study was to compare color analysis techniques on their ability to differentiate samples, quantify heterogeneity, and offer flexibility. The techniques included sensory testing, a Hunterlab colorimeter, a commercial CVS (IRIS-Alphasoft), and a custom-made CVS (Canon-CVS), applied to nine different vacuum-fried fruits. Sensory testing was a straightforward method and able to describe color heterogeneity; however, the subjectivity of the panelists is a limitation. The Hunterlab colorimeter was easy and accurate for measuring homogeneous samples with high differentiation, but provides no color distribution information. IRIS-Alphasoft was quick and easy for color distribution analysis; however, its closed system is a limitation. The Canon-CVS protocol was able to assess color heterogeneity, discriminate samples, and remain flexible. As a take-home message, objective color distribution analysis has the potential to overcome the limitations of traditional color analysis by providing more detailed color distribution information, which is important with respect to overall product quality.


Subjects
Food Handling; Fruit; Color; Computers; Vacuum