1.
Ann Med Surg (Lond) ; 86(5): 2437-2441, 2024 May.
Article in English | MEDLINE | ID: mdl-38694288

ABSTRACT

Introduction: To explore the feasibility and safety of retroperitoneal laparoscopic partial nephrectomy (RLPN) with selective artery clamping (SAC) in patients with renal cell carcinoma (RCC).
Methods: The authors recruited three men and two women who underwent RLPN for T1 RCC between December 2022 and May 2023 at a tertiary hospital. The median patient age was 32 years (range, 25-70 years), and tumour size ranged from 3 to 4.5 cm. The R.E.N.A.L scores were 4x, 5p, 8a, 5a, and 8ah. The median preoperative eGFR was 96.9 (range, 74.3-105.2). Renal computed tomography angiography was performed before surgery to evaluate the arterial branches. The operation time, number of clamped arteries, warm ischaemia time (WIT), intraoperative blood loss, RCC type, postoperative hospital stay, changes in renal function, and complications were evaluated. The follow-up duration was 6 months.
Results: The median operation time was 120 (range, 75-150) minutes. One artery was clamped in four patients, while three were clamped in one patient. The median WIT was 22 (range, 15-30) min, and the median blood loss was 150 (range, 100-300) ml. No complications were recorded, and the resection margin was negative in all patients. The median decrease in eGFR was 6% (range, 4-30%).
Conclusions: RLPN with SAC for T1 RCC is safe and feasible in clinical practice.

2.
Sensors (Basel) ; 23(11)2023 May 27.
Article in English | MEDLINE | ID: mdl-37299848

ABSTRACT

Human activity recognition (HAR) is an important research problem in computer vision, widely applied to human-machine interaction, monitoring, and similar applications. In particular, HAR based on the human skeleton enables intuitive applications, so establishing the current state of these studies is important for selecting solutions and developing commercial products. In this paper, we present a full survey of deep learning for human activity recognition from three-dimensional (3D) human skeleton data. Our survey is organized around four families of deep networks and the feature vectors they consume: Recurrent Neural Networks (RNNs), which use extracted activity-sequence features; Convolutional Neural Networks (CNNs), which use feature vectors obtained by projecting the skeleton into image space; Graph Convolutional Networks (GCNs), which use features extracted from the skeleton graph and its spatio-temporal structure; and Hybrid Deep Neural Networks (Hybrid-DNNs), which combine several other feature types. The survey covers models, databases, metrics, and results from 2019 to March 2023, presented in chronological order. In addition, we carried out a comparative study of HAR from 3D human skeletons on the KLHA3D 102 and KLYOGA3D datasets, and we analyze and discuss the results obtained with CNN-based, GCN-based, and Hybrid-DNN-based deep networks.


Subject(s)
Deep Learning , Humans , Neural Networks, Computer , Databases, Factual , Human Activities , Skeleton
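The abstract above notes that CNN-based HAR consumes the skeleton projected into image space. A minimal sketch of that encoding, assuming a (frames × joints × xyz) input; the function name and normalization scheme here are illustrative, not taken from the surveyed papers:

```python
import numpy as np

def skeleton_to_image(seq):
    """Encode a 3D skeleton sequence as an image-like tensor.

    seq: array of shape (T, J, 3) -- T frames, J joints, (x, y, z) coordinates.
    Returns a (T, J, 3) float array normalized per coordinate channel to
    [0, 1], which a CNN can consume as a T x J "image" with 3 channels.
    """
    seq = np.asarray(seq, dtype=np.float64)
    lo = seq.min(axis=(0, 1), keepdims=True)   # per-channel minimum
    hi = seq.max(axis=(0, 1), keepdims=True)   # per-channel maximum
    return (seq - lo) / np.maximum(hi - lo, 1e-8)
```

Variants in the literature differ mainly in how joints and frames are ordered and how coordinates are mapped to pixel intensities, but the shape of the idea is the same.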
3.
Sensors (Basel) ; 23(6)2023 Mar 20.
Article in English | MEDLINE | ID: mdl-36991971

ABSTRACT

Hand detection and classification is a very important pre-processing step in building applications based on three-dimensional (3D) hand pose estimation and hand activity recognition. To automatically limit the hand data area on egocentric vision (EV) datasets, and in particular to trace the development and performance of the "You Only Look Once" (YOLO) network over the past seven years, we propose a study comparing the efficiency of hand detection and classification across the YOLO-family networks. This study addresses the following problems: (1) systematizing the architectures, advantages, and disadvantages of YOLO-family networks from version (v)1 to v7; (2) preparing ground-truth data for pre-trained and evaluation models of hand detection and classification on EV datasets (FPHAB, HOI4D, RehabHand); (3) fine-tuning hand detection and classification models based on the YOLO-family networks and evaluating them on the EV datasets. Hand detection and classification results with the YOLOv7 network and its variants were the best across all three datasets. The YOLOv7-w6 results are as follows: precision P = 97% on FPHAB, P = 95% on HOI4D, and P above 95% on RehabHand, all at an IoU threshold (ThreshIOU) of 0.5; YOLOv7-w6 processes 60 fps at a resolution of 1280 × 1280 pixels, and YOLOv7 reaches 133 fps at 640 × 640 pixels.


Subject(s)
Hand , Neural Networks, Computer , Humans
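The evaluation above counts a detection as correct when its overlap with the ground truth reaches an IoU threshold of 0.5. A minimal sketch of that criterion for axis-aligned boxes; the function names are illustrative, not from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, thresh=0.5):
    """A prediction counts as a true positive when IoU >= thresh."""
    return iou(pred_box, gt_box) >= thresh
```

Precision at ThreshIOU = 0.5 is then the fraction of predicted boxes that pass this test against a matched ground-truth box.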
4.
Sensors (Basel) ; 22(14)2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35891099

ABSTRACT

Three-dimensional human pose estimation is widely applied in sports, robotics, and healthcare. In the past five years, CNN-based studies of 3D human pose estimation have been numerous and have yielded impressive results; however, they often focus only on improving estimation accuracy. In this paper, we propose a fast, unified end-to-end model for estimating 3D human pose, called YOLOv5-HR-TCM (YOLOv5-HRNet-Temporal Convolution Model). Our model follows the 2D-to-3D lifting approach to 3D human pose estimation while attending to each step of the pipeline: person detection, 2D human pose estimation, and 3D human pose estimation. The proposed model combines best practices at each stage. It is evaluated on the Human 3.6M dataset and compared with other methods at each step, achieving high accuracy without sacrificing processing speed: the whole pipeline runs at 3.146 FPS on a low-end computer. In addition, we propose a sports scoring application based on the deviation angle between the estimated 3D human posture and a standard (reference) posture. The average deviation angle evaluated on the Human 3.6M dataset (Protocol #1-Pro #1) is 8.2 degrees.


Subject(s)
Posture , Robotics , Humans
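The scoring application above compares joint angles between an estimated 3D pose and a reference pose. A minimal sketch of one way to compute such a deviation, assuming poses are given as (joints × xyz) arrays and angles are measured at a middle joint between two limb segments; the function names and the averaging scheme are illustrative, not the paper's exact method:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) between segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def deviation(est_pose, ref_pose, joint_triples):
    """Mean absolute joint-angle difference (degrees) between two poses.

    est_pose, ref_pose: arrays of shape (J, 3).
    joint_triples: index triples (a, b, c) defining each angle at joint b.
    """
    diffs = [abs(joint_angle(*est_pose[list(t)]) - joint_angle(*ref_pose[list(t)]))
             for t in joint_triples]
    return float(np.mean(diffs))
```

With a skeleton's limb triples (e.g. shoulder-elbow-wrist), the resulting average is a degree-valued score comparable in spirit to the 8.2-degree figure reported above.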
5.
Sensors (Basel) ; 21(24)2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34960491

ABSTRACT

Human segmentation and tracking often build on the outcome of person detection in video, so segmentation and tracking results depend heavily on the quality of person detection. With the advent of Convolutional Neural Networks (CNNs), excellent results have been achieved in this field. Segmentation and tracking of people in video have significant applications in monitoring and in estimating human pose in 2D images and 3D space. In this paper, we survey studies, methods, datasets, and results for human segmentation and tracking in video; we also touch upon person detection, as it affects both tasks. The survey is detailed down to source-code paths. The MADS (Martial Arts, Dancing and Sports) dataset comprises fast and complex activities and was published for the task of human pose estimation; however, before the human pose can be determined, the person must first be detected and segmented in the video. We therefore publish a mask dataset, MASK MADS, comprising 28,000 mask images, to evaluate human segmentation and tracking in video. We also evaluate many recently published CNN methods for segmenting and tracking people on the MADS dataset.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans
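Evaluating segmentation against a mask dataset like the one above typically reduces to comparing predicted and ground-truth binary masks per frame. A minimal sketch of mask IoU, the standard overlap measure for this; the function name and empty-mask convention are illustrative, not from the paper:

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two boolean segmentation masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return float(inter / union) if union else 1.0
```

Averaging this score over all frames (and over tracked identities, for tracking) yields the kind of per-method numbers a benchmark on MASK MADS would report.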