Results 1 - 20 of 59
1.
Front Vet Sci ; 11: 1371939, 2024.
Article in English | MEDLINE | ID: mdl-39132431

ABSTRACT

The motility pattern of the reticulo-rumen is a key factor affecting feed intake, rumen digesta residence time, and rumen fermentation. However, it is difficult to study reticulo-ruminal motility using general methods owing to the complexity of the reticulo-ruminal structure. Thus, we aimed to develop a technique to demonstrate the reticulo-ruminal motility pattern in static goats. Six Xiangdong black goats (half bucks and half does, body weight 29.5 ± 1.0 kg) were used as model specimens. Reticulo-ruminal motility videos were obtained using medical barium meal imaging technology. Videos were then analyzed using image annotation and the centroid method. The results showed that reticulo-ruminal motility was divided into primary (stages I, II, III, and IV) and secondary contractions, and the movements of ruminal digesta depended on reticulo-ruminal motility. Our results indicated common motility between the ruminal dorsal sac and ruminal dorsal blind sac. We observed that stages I (3.92 vs. 3.21 s; P < 0.01), II (4.81 vs. 4.23 s; P < 0.01), and III (5.65 vs. 5.15 s; P < 0.05), the contraction interval (53.79 vs. 50.95 s), and the secondary contraction time (10.5 vs. 10 s) were longer in the bucks than in the does, whereas stage IV was shorter (7.83 vs. 14.67 s; P < 0.01). The feasibility of using barium meal imaging technology for assessing reticulo-ruminal and digesta motility was verified in our study. We determined the duration of each stage of reticulo-ruminal motility and collected data on the duration and interval of each stage of ruminal motility in goats. This research provides new insights for the study of gastrointestinal motility and lays a solid foundation for the study of artificial rumen.
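
The abstract states that the barium-meal videos were analyzed "using image annotation and the centroid method" without further detail. Purely as an illustration, the sketch below (plain NumPy, hypothetical function names, binary annotation masks assumed as input) shows how a per-frame centroid of an annotated rumen region could be tracked to describe digesta movement; it is not the authors' actual pipeline.

```python
import numpy as np

def mask_centroid(mask: np.ndarray) -> tuple[float, float]:
    """(row, col) centroid of a binary annotation mask for one video frame."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("empty annotation mask")
    return float(rows.mean()), float(cols.mean())

def centroid_track(frame_masks: list[np.ndarray]) -> np.ndarray:
    """Centroid per frame; frame-to-frame displacement approximates
    the movement of the annotated digesta region."""
    return np.array([mask_centroid(m) for m in frame_masks])
```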

2.
Sensors (Basel) ; 24(15)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39123824

ABSTRACT

In this work, we investigate the impact of annotation quality and domain expertise on the performance of Convolutional Neural Networks (CNNs) for semantic segmentation of wear on titanium nitride (TiN) and titanium carbonitride (TiCN) coated end mills. Using an innovative measurement system and a customized CNN architecture, we found that domain expertise significantly affects model performance. Annotator 1 achieved maximum mIoU scores of 0.8153 for abnormal wear and 0.7120 for normal wear on TiN datasets, whereas Annotator 3, who had the lowest domain expertise, achieved significantly lower scores. Sensitivity to annotation inconsistencies and model hyperparameters was also examined, revealing that models for TiCN datasets showed a higher coefficient of variation (CV) of 16.32% compared to 8.6% for TiN because of the subtler wear characteristics, highlighting the need for optimized annotation policies and high-quality images to improve wear segmentation.
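
For readers unfamiliar with the two metrics quoted above, a minimal NumPy sketch of mean IoU over integer label maps and of the coefficient of variation across repeated runs is given below. It is illustrative only and assumes nothing about the authors' network or data beyond 2D label maps.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """IoU for one class, given integer label maps of identical shape."""
    p, t = pred == cls, target == cls
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else float("nan")

def mean_iou(pred: np.ndarray, target: np.ndarray, classes) -> float:
    """Mean IoU over the given classes, ignoring classes absent from both maps."""
    return float(np.nanmean([iou(pred, target, c) for c in classes]))

def coefficient_of_variation(scores) -> float:
    """CV in percent (std / mean * 100), as used to compare run-to-run variability."""
    scores = np.asarray(scores, dtype=float)
    return float(scores.std(ddof=1) / scores.mean() * 100.0)
```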

3.
PeerJ ; 12: e17557, 2024.
Article in English | MEDLINE | ID: mdl-38952993

ABSTRACT

Imagery has become one of the main data sources for investigating seascape spatial patterns. This is particularly true in deep-sea environments, which are only accessible with underwater vehicles. On the one hand, using collaborative web-based tools and machine learning algorithms, biological and geological features can now be massively annotated on 2D images with the support of experts. On the other hand, geomorphometrics such as slope or rugosity derived from 3D models built with structure from motion (SfM) methodology can then be used to answer spatial distribution questions. However, precise georeferencing of 2D annotations on 3D models has proven challenging for deep-sea images, due to a large mismatch between navigation obtained from underwater vehicles and the reprojected navigation computed in the process of building 3D models. In addition, although 3D models can be directly annotated, the process becomes challenging due to the low resolution of textures and the large size of the models. In this article, we propose a streamlined, open-access processing pipeline to reproject 2D image annotations onto 3D models using ray tracing. Using four underwater image datasets, we assessed the accuracy of annotation reprojection on 3D models and achieved successful georeferencing to centimetric accuracy. The combination of photogrammetric 3D models and accurate 2D annotations would allow the construction of a 3D representation of the landscape and could provide new insights into understanding species microdistribution and biotic interactions.
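
The core geometric step described here, casting a ray from an annotated 2D pixel into the 3D model, can be written compactly. The following is a minimal sketch (pinhole camera, NumPy, Möller-Trumbore ray-triangle intersection); the published pipeline is more involved and handles navigation mismatch, mesh acceleration structures, and georeferencing, none of which are shown.

```python
import numpy as np

def pixel_to_ray(u: float, v: float, K: np.ndarray, cam_pose: np.ndarray):
    """Back-project pixel (u, v) to a world-space ray under a pinhole model.
    K: 3x3 intrinsics; cam_pose: 4x4 camera-to-world transform."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d_world = cam_pose[:3, :3] @ d_cam
    origin = cam_pose[:3, 3]
    return origin, d_world / np.linalg.norm(d_world)

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection; returns the 3D hit point or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return origin + t * direction if t > eps else None
```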


Subject(s)
Imaging, Three-Dimensional , Imaging, Three-Dimensional/methods , Algorithms , Machine Learning , Image Processing, Computer-Assisted/methods , Oceans and Seas
4.
Data Brief ; 54: 110462, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38711743

ABSTRACT

The "Tea Leaf Age Quality" dataset represents a pioneering agricultural and machine-learning resource to enhance tea leaf classification, detection, and quality prediction based on leaf age. This comprehensive collection includes 2208 raw images from the historic Malnicherra Tea Garden in Sylhet and two other gardens from Sreemangal and Moulvibajar in Bangladesh. The dataset is systematically categorized into four distinct classes (T1: 1-2 days, T2: 3-4 days, T3: 5-7 days, and T4: 7+ days) according to age-based quality criteria. This dataset helps to determine how tea quality changes with age. The most recently harvested leaves (T1) exhibited superior quality, whereas the older leaves (T4) were suboptimal for brewing purposes. It includes raw, unannotated images that capture the natural diversity of tea leaves, precisely annotated versions for targeted analysis, and augmented data to facilitate advanced research. The compilation process involved extensive on-ground data collection and expert consultations to ensure the authenticity and applicability of the dataset. The "Tea Leaf Age Quality" dataset is a crucial tool for advancing deep learning models in tea leaf classification and quality assessment, ultimately contributing to the technological evolution of the agricultural sector by providing detailed age-stratified tea leaf categorization.

5.
Artif Intell Med ; 149: 102814, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462277

ABSTRACT

Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, or neural NLP, and those describing systems that combine or compare two or more of these approaches. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
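
As a deliberately simplistic illustration of the symbolic-NLP family of systems surveyed here, the sketch below extracts binary image labels from a report with hand-written regular expressions and a crude negation check. The label names and patterns are hypothetical examples, not drawn from any of the reviewed systems.

```python
import re

RULES = {  # hypothetical label -> trigger pattern
    "pneumothorax": r"\bpneumothorax\b",
    "pleural_effusion": r"\bpleural effusions?\b",
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.I)

def extract_labels(report: str) -> dict[str, bool]:
    """Crude symbolic extraction: a label is positive if its trigger appears
    in a sentence that is not preceded by a negation cue."""
    labels: dict[str, bool] = {}
    for sentence in re.split(r"[.\n]", report.lower()):
        for label, pattern in RULES.items():
            m = re.search(pattern, sentence)
            if m:
                negated = bool(NEGATION.search(sentence[:m.start()]))
                labels[label] = labels.get(label, False) or not negated
    return labels

# Example: extract_labels("No pneumothorax. Small left pleural effusion.")
# -> {"pneumothorax": False, "pleural_effusion": True}
```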


Subject(s)
Natural Language Processing , Radiology , Machine Learning
6.
Med Image Anal ; 94: 103141, 2024 May.
Article in English | MEDLINE | ID: mdl-38489896

ABSTRACT

In the context of automatic medical image segmentation based on statistical learning, raters' variability of ground truth segmentations in training datasets is a widely recognized issue. Indeed, the reference information is provided by experts, but bias due to their knowledge may affect the quality of the ground truth data, thus hindering the creation of robust and reliable datasets employed in segmentation, classification or detection tasks. In such a framework, automatic medical image segmentation would significantly benefit from utilizing some form of presegmentation during the training data preparation process, which could lower the impact of experts' knowledge and reduce time-consuming labeling efforts. The present manuscript proposes a superpixels-driven procedure for annotating medical images. Three different superpixeling methods with two different numbers of superpixels were evaluated on three different medical segmentation tasks and compared with manual annotations. Within the superpixels-based annotation procedure, medical experts interactively select superpixels of interest and apply manual corrections when necessary; the accuracy of the annotations, the time needed to prepare them, and the number of manual corrections are then assessed. In this study, it is shown that the proposed procedure reduces inter- and intra-rater variability, leading to more reliable annotation datasets which, in turn, may be beneficial for the development of more robust classification or segmentation models. In addition, the proposed approach reduces the time needed to prepare the annotations.
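
A minimal sketch of the idea, using SLIC from scikit-image as one possible superpixeling method, is shown below: the image is presegmented and the expert's annotation is assembled from selected superpixels rather than drawn voxel by voxel. The function name and the way selections are passed in are assumptions for illustration; the paper evaluates three superpixel methods within an interactive workflow not reproduced here.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_annotation(image: np.ndarray, selected_labels: set[int],
                          n_segments: int = 400) -> np.ndarray:
    """Pre-segment a grayscale image into superpixels and build an annotation
    mask from the superpixels an expert selected (given here as label ids)."""
    sp = slic(image, n_segments=n_segments, compactness=10.0,
              channel_axis=None, start_label=1)
    mask = np.isin(sp, list(selected_labels))
    return mask  # manual pixel-level corrections would follow in the real workflow
```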


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Reproducibility of Results , Magnetic Resonance Imaging/methods , Bias , Image Processing, Computer-Assisted/methods
7.
Int J Comput Assist Radiol Surg ; 19(1): 87-96, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37233894

ABSTRACT

PURPOSE: The training of deep medical image segmentation networks usually requires a large amount of human-annotated data. To alleviate the burden of human labor, many semi- or non-supervised methods have been developed. However, due to the complexity of clinical scenarios, insufficient training labels still cause inaccurate segmentation in some difficult local areas such as heterogeneous tumors and fuzzy boundaries. METHODS: We propose an annotation-efficient training approach, which only requires scribble guidance in the difficult areas. A segmentation network is initially trained with a small amount of fully annotated data and then used to produce pseudo labels for more training data. Human supervisors draw scribbles in the areas of incorrect pseudo labels (i.e., difficult areas), and the scribbles are converted into pseudo label maps using a probability-modulated geodesic transform. To reduce the influence of potential errors in the pseudo labels, a confidence map of the pseudo labels is generated by jointly considering the pixel-to-scribble geodesic distance and the network output probability. The pseudo labels and confidence maps are iteratively optimized as the network is updated, and the network training is in turn promoted by the pseudo labels and the confidence maps. RESULTS: Cross-validation based on two data sets (brain tumor MRI and liver tumor CT) showed that our method significantly reduces the annotation time while maintaining the segmentation accuracy of difficult areas (e.g., tumors). Using 90 scribble-annotated training images (annotation time: ~9 h), our method achieved the same performance as using 45 fully annotated images (annotation time: >100 h) at a fraction of the annotation cost. CONCLUSION: Compared to conventional full annotation approaches, the proposed method significantly reduces the annotation effort by focusing human supervision on the most difficult regions. It provides an annotation-efficient way of training medical image segmentation networks in complex clinical scenarios.
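
To make the confidence-map idea concrete, here is a simplified sketch combining distance-from-scribble with the network probability. Note that it substitutes a Euclidean distance transform for the probability-modulated geodesic transform used in the paper, and the exponential weighting and its alpha parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def scribble_confidence(scribble_mask: np.ndarray, prob: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Confidence falls off with distance from the scribble and rises with
    the network's own probability for the scribbled class."""
    # Euclidean distance to the nearest scribble pixel (geodesic in the paper).
    dist = distance_transform_edt(~scribble_mask.astype(bool))
    return np.exp(-alpha * dist) * prob
```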


Subject(s)
Brain Neoplasms , Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Neuroimaging , Probability , Research Design , Image Processing, Computer-Assisted
8.
Plant Methods ; 19(1): 128, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37974271

ABSTRACT

BACKGROUND: With the emergence of deep-learning methods, tools are needed to capture and standardize image annotations made by experimentalists. In developmental biology, cell lineages are generally reconstructed from time-lapse data. However, some tissues need to be fixed to be accessible or to improve the staining, and in this case classical software does not offer the possibility of generating any lineage. Because of their rigid cell walls, plants present the advantage of keeping traces of the cell division history over successive generations in their cell patterns. To record this information despite having only a static image, dedicated tools are required. RESULTS: We developed an interface to assist users in building and editing a lineage tree from a 3D labeled image. Each cell within the tree can be tagged. From the created tree, the cells of a sub-tree or the cells sharing the same tag can be extracted. The tree can be exported in a format compatible with dedicated software for advanced graph visualization and manipulation. CONCLUSIONS: The TreeJ plugin for ImageJ/Fiji allows the user to generate and manipulate a lineage tree structure. The tree is compatible with other software for analyzing the tree organization at the graphical level and at the cell pattern level. The source code is available at https://github.com/L-EL/TreeJ .
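
TreeJ itself is an ImageJ/Fiji (Java) plugin; purely as an illustration of the kind of lineage-tree structure it manipulates (tagged cells, sub-tree extraction, tag queries), a small Python sketch follows. The field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CellNode:
    label: int                      # label id of the cell in the 3D labeled image
    tag: str | None = None          # optional user-defined tag
    children: list["CellNode"] = field(default_factory=list)

    def add_child(self, child: "CellNode") -> None:
        self.children.append(child)

    def subtree(self):
        """All cells descending from this node, itself included."""
        yield self
        for c in self.children:
            yield from c.subtree()

    def with_tag(self, tag: str) -> list["CellNode"]:
        """Extract the cells of the sub-tree sharing a given tag."""
        return [n for n in self.subtree() if n.tag == tag]
```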

9.
Front Nutr ; 10: 1191962, 2023.
Article in English | MEDLINE | ID: mdl-37575335

ABSTRACT

Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to errors and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed - a desktop application called Image to Nutrients (I2N) - to process sensor-detected eating events and the images captured during these eating events by a wearable sensor. I2N offers multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items, and it estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making the nutritional analysis of food images efficient. The tool is also flexible, allowing for nutritional analysis of images even when sensor signals are not available.

10.
Front Big Data ; 6: 1149523, 2023.
Article in English | MEDLINE | ID: mdl-37469440

ABSTRACT

Color similarity has been a key feature for content-based image retrieval by contemporary search engines such as Google. In this study, we compare the visual content information of images, obtained through color histograms, with their corresponding hashtag sets in the case of Instagram posts. In previous studies, we had concluded that less than 25% of Instagram hashtags are related to the actual visual content of the image they accompany; the use of Instagram images' corresponding hashtags for automatic image annotation is therefore questionable. In this study, we answer this question through a computational comparison of images' low-level characteristics with the semantic and syntactic information of their corresponding hashtags. The main conclusion of our study on 26 different subjects (concepts) is that color histograms and filtered hashtag sets, although related, are better seen as complementary sources for image retrieval and automatic image annotation.
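
For context, a joint RGB color histogram and the histogram-intersection similarity commonly used in content-based retrieval can be computed as in the sketch below (NumPy, 8 bins per channel). The bin count and the choice of intersection as the similarity measure are assumptions for illustration, not necessarily the exact configuration used in the study.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Joint RGB histogram (bins^3 bins) of an H x W x 3 uint8 image, L1-normalised."""
    h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())
```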

11.
Front Immunol ; 14: 1021638, 2023.
Article in English | MEDLINE | ID: mdl-37359539

ABSTRACT

Neutrophil extracellular traps (NETs), pathogen-ensnaring structures formed by neutrophils expelling their DNA into the environment, are believed to play an important role in immunity and autoimmune diseases. In recent years, growing attention has been devoted to developing software tools to quantify NETs in fluorescent microscopy images. However, current solutions require large, manually prepared training data sets, are difficult to use for users without a background in computer science, or have limited capabilities. To overcome these problems, we developed Trapalyzer, a computer program for automatic quantification of NETs. Trapalyzer analyzes fluorescent microscopy images of samples double-stained with a cell-permeable and a cell-impermeable dye, such as the popular combination of Hoechst 33342 and SYTOX™ Green. The program is designed with an emphasis on software ergonomics and is accompanied by step-by-step tutorials to make its use easy and intuitive. Installation and configuration of the software take less than half an hour for an untrained user. In addition to NETs, Trapalyzer detects, classifies and counts neutrophils at different stages of NET formation, allowing greater insight into this process. It is the first tool that makes this possible without large training data sets, while attaining a classification precision on par with state-of-the-art machine learning algorithms. As an example application, we show how to use Trapalyzer to study NET release in a neutrophil-bacteria co-culture. Here, after configuration, Trapalyzer processed 121 images and detected and classified 16,000 ROIs in approximately three minutes on a personal computer. The software and usage tutorials are available at https://github.com/Czaki/Trapalyzer.
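
Trapalyzer's actual classifier is configurable and more elaborate; as a toy illustration of how double-staining supports classification, the sketch below assigns an ROI to a class from its two dye-channel intensities and its area. All thresholds and class names are hypothetical.

```python
def classify_roi(permeable_intensity: float, impermeable_intensity: float,
                 area: float, *, net_area: float = 300.0,
                 impermeable_thr: float = 0.4) -> str:
    """Toy two-channel rule set: the cell-impermeable dye only stains objects
    with a compromised membrane, and very large impermeable-positive objects
    are treated as NET candidates. Thresholds are illustrative only."""
    if impermeable_intensity < impermeable_thr:
        return "intact neutrophil"
    if area > net_area:
        return "NET candidate"
    return "membrane-compromised neutrophil"
```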


Subject(s)
Extracellular Traps , Neutrophils , Software , Algorithms , Microscopy, Fluorescence/methods
12.
Front Public Health ; 11: 1044525, 2023.
Article in English | MEDLINE | ID: mdl-36908475

ABSTRACT

Introduction: In light of the potential for missed diagnoses and misdiagnoses of spinal diseases caused by differences in experience and by fatigue, this paper investigates the use of artificial intelligence technology for the auxiliary diagnosis of spinal diseases. Methods: The LabelImg tool was used by clinically experienced doctors to label the MRIs of 604 patients. Then, in order to select an appropriate object detection algorithm, deep transfer learning models of YOLOv3, YOLOv5, and PP-YOLOv2 were created and trained on the Baidu PaddlePaddle framework. The experimental results showed that the PP-YOLOv2 model achieved a 90.08% overall accuracy in the diagnosis of normal, IVD bulge, and spondylolisthesis cases, which was 27.5 and 3.9% higher than YOLOv3 and YOLOv5, respectively. Finally, intelligent spine auxiliary diagnostic software with visualization, based on the PP-YOLOv2 model, was created and made available to the doctors in spine and osteopathic surgery at Guilin People's Hospital. Results and discussion: This software automatically provides an auxiliary diagnosis in 14.5 s on a standard computer, much faster than doctors' diagnosis of the human spine, which typically takes 10 min, and its accuracy of 98% is comparable to that of experienced doctors across various diagnostic methods. It significantly improves doctors' working efficiency, reduces missed diagnoses and misdiagnoses, and demonstrates the efficacy of the developed intelligent spinal auxiliary diagnosis software.


Subject(s)
Deep Learning , Spinal Diseases , Humans , Artificial Intelligence , Magnetic Resonance Imaging/methods , Spine
13.
Islets ; 15(1): 2189873, 2023 12 31.
Article in English | MEDLINE | ID: mdl-36987915

ABSTRACT

We previously developed a deep learning-based web service (IsletNet) for the automated counting of isolated pancreatic islets. The neural network training is limited by the absence of consensus on the ground truth annotations. Here, we present a platform (IsletSwipe) for the exchange of graphical opinions among experts to facilitate consensus formation. The platform consists of a web interface and a mobile application. In a small pilot study, we demonstrate the functionalities and the use case scenarios of the platform. Nine experts from three centers validated the drawing tools, tested the precision and consistency of expert contour drawing, and evaluated the user experience. Eight experts from two centers proceeded to evaluate additional images to demonstrate the following two use case scenarios. The Validation scenario involves an automated selection of images and islets for expert scrutiny. It is scalable (more experts, images, and islets may readily be added) and can be applied to independent validation of islet contours from various sources. The Inquiry scenario serves the ground truth-generating expert in seeking assistance from peers to achieve consensus on challenging cases during preparation for IsletNet training. This scenario is limited to a small number of manually selected images and islets. The experts gained an opportunity to influence IsletNet training and to compare other experts' opinions with their own. The ground truth-generating expert obtained feedback for future IsletNet training. IsletSwipe is a suitable tool for consensus finding. Experts from additional centers are welcome to participate.


Subject(s)
Islets of Langerhans Transplantation , Islets of Langerhans , Expert Testimony , Pilot Projects , Islets of Langerhans Transplantation/methods , Neural Networks, Computer
14.
Micromachines (Basel) ; 14(2)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36838142

ABSTRACT

In the past few years, object detection has attracted a lot of attention in the context of human-robot collaboration and Industry 5.0 due to enormous quality improvements in deep learning technologies. In many applications, object detection models have to be able to quickly adapt to a changing environment, i.e., to learn new objects. A crucial but challenging prerequisite for this is the automatic generation of new training data which currently still limits the broad application of object detection methods in industrial manufacturing. In this work, we discuss how to adapt state-of-the-art object detection methods for the task of automatic bounding box annotation in a use case where the background is homogeneous and the object's label is provided by a human. We compare an adapted version of Faster R-CNN and the Scaled-YOLOv4-p5 architecture and show that both can be trained to distinguish unknown objects from a complex but homogeneous background using only a small amount of training data. In contrast to most other state-of-the-art methods for bounding box labeling, our proposed method neither requires human verification, a predefined set of classes, nor a very large manually annotated dataset. Our method outperforms the state-of-the-art, transformer-based object discovery method LOST on our simple fruits dataset by large margins.
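
The full pipeline uses adapted Faster R-CNN and Scaled-YOLOv4-p5 detectors, but the underlying premise, that a homogeneous background makes the object easy to separate, can be illustrated with a much cruder baseline: threshold the distance to a reference background colour and take the bounding box of the remaining pixels. This NumPy sketch is an illustrative stand-in, not the authors' method.

```python
import numpy as np

def auto_bbox(image: np.ndarray, background: np.ndarray, thr: float = 25.0):
    """Derive a bounding box for a single object in front of a homogeneous
    background by thresholding the per-pixel distance to a reference
    background colour. Returns (x0, y0, x1, y1) or None if nothing is found."""
    diff = np.linalg.norm(image.astype(float) - background.astype(float), axis=-1)
    ys, xs = np.nonzero(diff > thr)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```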

15.
MethodsX ; 10: 102040, 2023.
Article in English | MEDLINE | ID: mdl-36793672

ABSTRACT

The use of very high-resolution (VHR) optical satellites is gaining momentum in the field of wildlife monitoring, particularly for whales, as this technology is showing potential for monitoring less studied regions. However, surveying large areas using VHR optical satellite imagery requires the development of automated systems to detect targets. Machine learning approaches require large training datasets of annotated images. Here we propose a standardised workflow to annotate VHR optical satellite imagery using ESRI ArcMap 10.8 and ESRI ArcGIS Pro 2.5, using cetaceans as a case study, to develop AI-ready annotations.
• A step-by-step protocol to review VHR optical satellite images and annotate the features of interest.
• A step-by-step protocol to create bounding boxes encompassing the features of interest.
• A step-by-step guide to clip the satellite image using bounding boxes to create image chips (see the sketch below).
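
The protocols above are carried out in ArcMap/ArcGIS Pro; as a language-agnostic illustration of the final chipping step, the sketch below cuts image chips from a scene array given pixel-space bounding boxes. The padding parameter and box format are assumptions for illustration.

```python
import numpy as np

def clip_chips(image: np.ndarray, boxes: list[tuple[int, int, int, int]],
               pad: int = 0) -> list[np.ndarray]:
    """Cut image chips from a large scene given (x0, y0, x1, y1) boxes in
    pixel coordinates, mirroring the ArcGIS clipping step in plain NumPy."""
    h, w = image.shape[:2]
    chips = []
    for x0, y0, x1, y1 in boxes:
        x0, y0 = max(0, x0 - pad), max(0, y0 - pad)
        x1, y1 = min(w, x1 + pad), min(h, y1 + pad)
        chips.append(image[y0:y1, x0:x1].copy())
    return chips
```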

16.
Int J Comput Assist Radiol Surg ; 18(2): 379-394, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36048319

ABSTRACT

PURPOSE: Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden. METHODS: We develop a contour-based AID algorithm which uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve the boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustments of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading. RESULTS: For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e., a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set. CONCLUSION: Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than the SOTA AID methods for organ segmentation from volumetric medical images. The good shape-learning ability and flexible boundary-adjustment function make it suitable for fast annotation of organ structures with regular shapes.
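
A minimal sketch of the boundary-versus-voxel representation at the heart of this approach is given below, using scikit-image to extract a contour from a binary mask and to rasterise an (edited) contour back into a mask. It illustrates the representation only; the paper's contour segmentation network and interactive adjustment tools are not reproduced.

```python
import numpy as np
from skimage.measure import find_contours
from skimage.draw import polygon

def mask_to_contour(mask: np.ndarray) -> np.ndarray:
    """Longest iso-contour of a binary mask, as an (N, 2) array of (row, col) points."""
    contours = find_contours(mask.astype(float), 0.5)
    return max(contours, key=len)

def contour_to_mask(contour: np.ndarray, shape: tuple[int, int]) -> np.ndarray:
    """Rasterise an (adjusted) contour back into a filled voxel-style label."""
    mask = np.zeros(shape, dtype=bool)
    rr, cc = polygon(contour[:, 0], contour[:, 1], shape=shape)
    mask[rr, cc] = True
    return mask
```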


Subject(s)
Deep Learning , Humans , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Algorithms , Image Processing, Computer-Assisted/methods
17.
J Digit Imaging ; 36(1): 373-378, 2023 02.
Article in English | MEDLINE | ID: mdl-36344635

ABSTRACT

Lack of reliable measures of cutaneous chronic graft-versus-host disease (cGVHD) remains a significant challenge. Non-expert assistance in marking photographs of active disease could aid the development of automated segmentation algorithms, but validated metrics to evaluate training effects are lacking. We studied absolute and relative error of marked body surface area (BSA), redness, and the Dice index as potential metrics of non-expert improvement. Three non-experts underwent an extensive training program led by a board-certified dermatologist to mark cGVHD in photographs. At the end of the 4-month training, the dermatologist confirmed that each trainee had learned to accurately mark cGVHD. The trainees' inter- and intra-rater intraclass correlation coefficient estimates were "substantial" to "almost perfect" for both BSA and total redness. For fifteen 3D photos of patients with cGVHD, the trainees' median absolute (relative) BSA error compared to expert marking dropped from 20 cm2 (29%) pre-training to 14 cm2 (24%) post-training. Total redness error decreased from 122 a*·cm2 (26%) to 95 a*·cm2 (21%). By contrast, median Dice index did not reflect improvement (0.76 to 0.75). Both absolute and relative BSA and redness errors similarly and stably reflected improvements from this training program, which the Dice index failed to capture.
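
The three candidate metrics are simple to compute; a short NumPy sketch of the Dice index and of absolute/relative marked-area error against an expert reference is given below. The cm²-per-pixel scale factor is a placeholder for whatever calibration the 3D photographs provide.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice index between two binary masks (1.0 if both are empty)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def area_errors(trainee_mask: np.ndarray, expert_mask: np.ndarray,
                cm2_per_pixel: float) -> tuple[float, float]:
    """Absolute (cm^2) and relative (%) marked-BSA error against the expert."""
    t = trainee_mask.sum() * cm2_per_pixel
    e = expert_mask.sum() * cm2_per_pixel
    return abs(t - e), abs(t - e) / e * 100.0
```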


Subject(s)
Bronchiolitis Obliterans Syndrome , Graft vs Host Disease , Humans , Algorithms , Skin , Chronic Disease
18.
Med Image Comput Comput Assist Interv ; 14225: 497-507, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38529367

ABSTRACT

Multi-class cell segmentation in high-resolution, giga-pixel whole slide images (WSI) is critical for various clinical applications. Training such an AI model typically requires labor-intensive, pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocyte and mesangial cells) with the naked eye. In this study, we assess the feasibility of democratizing pathological AI deployment by using only lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) we propose a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) the proposed method integrates giga-pixel-level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, achieving significantly better performance with 3 lay annotators than with 2 experienced pathologists; (3) a deep corrective learning (learning with imperfect labels) method is proposed to further improve segmentation performance using partially annotated noisy data. In our experiments, the learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, which is better than conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of a pathological segmentation deep model to the lay annotator level, which consequently scales up the learning process similarly to a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.

19.
Arkh Patol ; 84(6): 67-73, 2022.
Article in Russian | MEDLINE | ID: mdl-36469721

ABSTRACT

OBJECTIVE: To develop original methodological approaches to the annotation and labeling of histological images for the problem of automatic segmentation of the layers of the stomach wall. MATERIAL AND METHODS: Three image collections were used in the study: NCT-CRC-HE-100K, CRC-VAL-HE-7K, and part of the PATH-DT-MSU collection. The used part of the original PATH-DT-MSU collection contains 20 histological images obtained using a high-performance digital scanning microscope. Each image is a fragment of the stomach wall, cut from the surgical material of gastric cancer and stained with hematoxylin and eosin. Images were obtained using a Leica Aperio AT2 scanning microscope (Leica Microsystems Inc., Germany), and annotations were made using Aperio ImageScope 12.3.3 (Leica Microsystems Inc., Germany). RESULTS: A labeling system is proposed that includes 5 classes (tissue types): areas of gastric adenocarcinoma (TUM), unchanged areas of the lamina propria (LP), unchanged areas of the muscular lamina of the mucosa (MM), a class of underlying tissues (AT) including areas of the submucosa, the proper muscular layer of the stomach and the subserosal sections, and the image background (BG). The advantage of this labeling technique is that it ensures highly efficient recognition of the muscular lamina (MM), a natural "line" separating the lamina propria of the mucous membrane from all other underlying layers of the stomach wall. The disadvantages of the technique include the small number of classes, which leads to insufficiently detailed automatic segmentation. CONCLUSION: In the course of the study, an original technique for labeling and annotating images was developed that includes 5 classes (tissue types). This technique is effective at the initial stages of training algorithms for the classification and segmentation of histological images. Further stages in the development of a real diagnostic algorithm to automatically determine the depth of invasion of gastric cancer will require correction and further development of the presented labeling and annotation method.


Subject(s)
Stomach Neoplasms , Humans , Stomach Neoplasms/pathology , Stomach/pathology , Eosine Yellowish-(YS)
20.
Diagnostics (Basel) ; 12(12)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36553118

ABSTRACT

Consistent annotation of data is a prerequisite for the successful training and testing of artificial intelligence-based decision support systems in radiology. This can be obtained by standardizing terminology when annotating diagnostic images. The purpose of this study was to evaluate the annotation consistency among radiologists when using a novel diagnostic labeling scheme for chest X-rays. Six radiologists with experience ranging from one to sixteen years annotated a set of 100 fully anonymized chest X-rays. The blinded radiologists annotated on two separate occasions. Statistical analyses were done using Randolph's kappa and PABAK, and the proportions of specific agreement were calculated. Fair-to-excellent agreement was found for all labels among the annotators (Randolph's kappa, 0.40-0.99). The PABAK ranged from 0.12 to 1 for the two-reader inter-rater agreement and 0.26 to 1 for the intra-rater agreement. Descriptive and broad labels achieved the highest proportion of positive agreement in both the inter- and intra-reader analyses. Annotating findings with specific, interpretive labels was found to be difficult for less experienced radiologists. Annotating images with descriptive labels may increase agreement between radiologists with different experience levels compared to annotation with interpretive labels.
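
For reference, PABAK for two raters on a binary label and the proportion of specific (positive) agreement reduce to short formulas, sketched below for a single label; applying them per label and per reader pair, as in the study, is a straightforward loop.

```python
def pabak(labels_a: list[bool], labels_b: list[bool]) -> float:
    """Prevalence- and bias-adjusted kappa for two raters: 2 * p_observed - 1."""
    agree = sum(x == y for x, y in zip(labels_a, labels_b))
    return 2.0 * agree / len(labels_a) - 1.0

def positive_agreement(labels_a: list[bool], labels_b: list[bool]) -> float:
    """Proportion of specific (positive) agreement: 2a / (2a + b + c),
    where a = both positive, b and c = one rater positive only."""
    a = sum(x and y for x, y in zip(labels_a, labels_b))
    b = sum(x and not y for x, y in zip(labels_a, labels_b))
    c = sum((not x) and y for x, y in zip(labels_a, labels_b))
    return 2.0 * a / (2.0 * a + b + c) if (a or b or c) else 1.0
```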
