Results 1 - 3 of 3
1.
Hand (N Y); 15589447241235340, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551109

ABSTRACT

BACKGROUND: The lumbrical muscles comprise 4 intrinsic muscles of the hand and are involved in flexion of the metacarpophalangeal joint (MCPJ) and extension of the proximal interphalangeal and distal interphalangeal joints. The purpose of this study was to investigate the anatomical mechanics of the lumbrical muscles of the index, middle, ring, and small fingers.
METHODS: We evaluated 25 cadaver arms and measured the distance between the MCPJ and the fingertip, the distance between the MCPJ and the lumbrical muscle insertion, and the distance between the MCPJ and the most proximal lumbrical muscle origin. From these measurements we calculated the needed force, the insertion ratio (combined length of the proximal, middle, and distal phalanges divided by the MCPJ-to-insertion distance), and the lumbrical muscle length.
RESULTS: The force differed significantly between all fingers except between the index and ring fingers (P = .34). Muscle length differed significantly between most of the fingers, except between the index and middle fingers (P = .24) and between the index and ring fingers (P = .20). There was no significant difference in insertion ratio.
CONCLUSIONS: Our study suggests that the anatomical mechanics underlying the motor function of the lumbrical muscles are similar in all fingers. This may further imply that movements are equally precise in all fingers, allowing the fingers to coordinate with one another and thus supporting adequate hand function.
LEVEL OF EVIDENCE: IV.
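The derived quantities in this abstract can be illustrated with a small numerical sketch. The snippet below is not the authors' actual computation: it assumes a simple lever model about the MCPJ in which the relative lumbrical force needed to balance a unit load at the fingertip scales with the MCPJ-to-fingertip distance divided by the MCPJ-to-insertion distance, and all measurement values are hypothetical.

```python
# Minimal sketch of the abstract's derived quantities (hypothetical values, in mm).
# Assumption: "needed force" is approximated with a simple lever model about the MCPJ,
# i.e. relative force ~ (MCPJ-to-fingertip distance) / (MCPJ-to-insertion distance).

def insertion_ratio(phalanx_lengths_mm, mcpj_to_insertion_mm):
    """Combined phalanx length divided by the MCPJ-to-insertion distance."""
    return sum(phalanx_lengths_mm) / mcpj_to_insertion_mm

def relative_force(mcpj_to_fingertip_mm, mcpj_to_insertion_mm):
    """Force relative to a unit fingertip load, under the lever-model assumption."""
    return mcpj_to_fingertip_mm / mcpj_to_insertion_mm

# Hypothetical index-finger measurements.
phalanges = (45.0, 25.0, 18.0)   # proximal, middle, distal phalanx lengths
mcpj_to_insertion = 40.0         # MCPJ to lumbrical insertion
mcpj_to_fingertip = 88.0         # MCPJ to fingertip

print(insertion_ratio(phalanges, mcpj_to_insertion))      # ~2.2
print(relative_force(mcpj_to_fingertip, mcpj_to_insertion))  # ~2.2
```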

2.
Plast Reconstr Surg; 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38194624

ABSTRACT

SUMMARY: The impact of clinical prediction models within artificial intelligence (AI) and machine learning (ML) is significant. With its ability to analyze vast amounts of data and identify complex patterns, machine learning has the potential to improve and implement evidence-based plastic, reconstructive, and hand surgery. Among other applications, it can predict the diagnosis, prognosis, and outcomes of individual patients. Such modeling aids daily clinical decision making, currently most often as decision support. The purpose of this paper is therefore to provide a practice guideline for plastic surgeons implementing AI in clinical decision making or setting up AI research to develop clinical prediction models, using the 7-step approach and the ABCD validation steps of Steyerberg et al. Second, we describe two important protocols for AI research that are still in development: (1) the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and (2) the PROBAST checklist to assess potential biases.
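As a rough illustration of what developing and validating a clinical prediction model can look like in practice, the sketch below fits a logistic regression on synthetic data and reports two commonly used validation metrics, the c-statistic (discrimination) and the calibration slope. It is a generic example under assumed data and metric choices, not the workflow of Steyerberg et al. or of this paper.

```python
# Generic sketch of clinical prediction model development and validation
# on synthetic data (illustrative only; not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # hypothetical predictors
logit = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2])   # hypothetical true effects
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # binary outcome

# Split into development and validation sets.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Model development.
model = LogisticRegression().fit(X_dev, y_dev)

# Validation: discrimination (c-statistic / AUROC).
p_val = model.predict_proba(X_val)[:, 1]
print("c-statistic:", roc_auc_score(y_val, p_val))

# Validation: calibration slope (regress observed outcomes on the linear predictor).
lp = np.log(p_val / (1 - p_val)).reshape(-1, 1)
cal = LogisticRegression().fit(lp, y_val)
print("calibration slope:", cal.coef_[0][0])
```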

3.
Front Med (Lausanne); 8: 661309, 2021.
Article in English | MEDLINE | ID: mdl-34381793

ABSTRACT

Background: The inclusion of facial and bodily cues (clinical gestalt) in machine learning (ML) models improves the assessment of patients' health status, as shown in genetic syndromes and acute coronary syndrome. It is unknown whether the inclusion of clinical gestalt improves ML-based classification of acutely ill patients. As in previous research on ML analysis of medical images, simulated or augmented data may be used to assess the usability of clinical gestalt.
Objective: To assess whether a deep learning algorithm trained on a dataset of simulated and augmented facial photographs reflecting acutely ill patients can distinguish between healthy and LPS-infused, acutely ill individuals.
Methods: Photographs from twenty-six volunteers whose facial features were manipulated to resemble a state of acute illness were used to extract features of illness and to generate a synthetic dataset of acutely ill photographs, using a neural transfer convolutional neural network (NT-CNN) for data augmentation. Four distinct CNNs were then trained on different parts of the facial photographs and concatenated into one final, stacked CNN, which classified individuals as healthy or acutely ill. Finally, the stacked CNN was validated on an external dataset of volunteers injected with lipopolysaccharide (LPS).
Results: In the external validation set, the four individual feature models distinguished acutely ill patients with sensitivities ranging from 10.5% (95% CI, 1.3-33.1%; skin model) to 89.4% (66.9-98.7%; nose model). Specificity ranged from 42.1% (20.3-66.5%; nose model) to 94.7% (73.9-99.9%; skin model). The stacked model combining all four facial features achieved an area under the receiver operating characteristic curve (AUROC) of 0.67 (0.62-0.71) and distinguished acutely ill patients with a sensitivity of 100% (82.35-100.00%) and a specificity of 42.11% (20.25-66.50%).
Conclusion: A deep learning algorithm trained on a synthetic, augmented dataset of facial photographs distinguished between healthy and simulated acutely ill individuals, demonstrating that synthetically generated data can be used to develop algorithms for health conditions in which large datasets are difficult to obtain. These results support the potential of facial feature analysis algorithms to support the diagnosis of acute illness.
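The "stacked" architecture described in the Methods, in which several per-region CNNs are combined into one classifier, can be sketched in a few lines. The example below is a minimal illustration in PyTorch, assuming hypothetical region names, crop sizes, and layer widths; it shows the general idea of concatenating per-region features for a binary healthy versus acutely ill classification, not the authors' exact model.

```python
# Minimal sketch of a stacked CNN combining four facial-region branches.
# Region names, crop sizes, and layer widths are hypothetical.
import torch
import torch.nn as nn

class RegionBranch(nn.Module):
    """Small CNN feature extractor for one facial-region crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 32)

class StackedClassifier(nn.Module):
    """Concatenates the four branch features and predicts healthy vs acutely ill."""
    def __init__(self, regions=("mouth", "skin", "nose", "eyes")):
        super().__init__()
        self.branches = nn.ModuleDict({r: RegionBranch() for r in regions})
        self.head = nn.Linear(32 * len(regions), 2)

    def forward(self, crops):  # crops: dict mapping region name -> (batch, 3, H, W)
        feats = [self.branches[r](crops[r]) for r in self.branches]
        return self.head(torch.cat(feats, dim=1))

# Hypothetical usage with random 64x64 crops.
model = StackedClassifier()
crops = {r: torch.randn(4, 3, 64, 64) for r in ("mouth", "skin", "nose", "eyes")}
logits = model(crops)  # (4, 2) class scores
```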
