Results 1 - 2 of 2
1.
Laryngoscope; 133(9): 2413-2416, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36444914

ABSTRACT

OBJECTIVES: To determine whether machine learning can be used for objective assessment of the aesthetic outcomes of auricular reconstructive surgery.

METHODS: Images of normal and reconstructed auricles were obtained from internet image search engines. Convolutional neural networks were constructed to identify auricles in 2D images (an auto-segmentation task) and to evaluate whether an ear was normal or reconstructed (a binary classification task). Each image was then assigned a percent score for "normal" ear appearance based on the confidence of the classification.

RESULTS: Images of 1115 ears (600 normal and 515 reconstructed) were obtained. The auto-segmentation task identified auricles with 95.30% accuracy compared with manually segmented auricles. The binary classification task achieved 89.22% accuracy in identifying reconstructed ears. When classification confidence was used to assign percent scores for "normal" appearance, reconstructed ears received scores ranging from 2% (least like normal ears) to 98% (most like normal ears).

CONCLUSION: Image-based analysis using machine learning can offer objective assessment free of patient or surgeon bias. This methodology could be adapted for use by surgeons to assess the quality of operative outcomes in clinical and research settings.

LEVEL OF EVIDENCE: 4 Laryngoscope, 133:2413-2416, 2023.


Subjects
Congenital Microtia, Ear Auricle, Plastic Surgery Procedures, Humans, Ear, External/surgery, Congenital Microtia/surgery, Ear Auricle/surgery, Esthetics
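The scoring idea in the abstract above, mapping a binary classifier's confidence to a 0-100% "normal appearance" score, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `normal_score` function and the example logit values are assumptions.

```python
import math

def normal_score(logit: float) -> float:
    """Map a binary classifier's raw logit to a 0-100% 'normal appearance'
    score via the sigmoid function (illustrative sketch only)."""
    return 100.0 / (1.0 + math.exp(-logit))

# Hypothetical logits: strongly 'reconstructed', uncertain, strongly 'normal'
print([round(normal_score(z)) for z in (-3.9, 0.0, 3.9)])  # -> [2, 50, 98]
```

Under this mapping, a confidently "reconstructed" prediction lands near the 2% end of the scale and a confidently "normal" one near 98%, matching the score range reported in the abstract.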
2.
Article in English | MEDLINE | ID: mdl-32318651

ABSTRACT

Activity-oriented cameras are increasingly being used to provide visual confirmation of specific hand-related activities in real-world settings. However, recent studies have shown that bystander privacy concerns limit participants' willingness to wear a camera. Researchers have investigated different image obfuscation methods as an approach to enhance bystander privacy; however, these methods may have varying effects on the visual confirmation utility of the image, which we define as the ability of a human viewer to interpret the activity of the wearer in the image. Visual confirmation utility is needed to annotate and validate hand-related activities for several behavior-based applications, particularly in cases where a human-in-the-loop method is needed for labeling (e.g., annotating gestures that cannot yet be detected automatically). We propose a new type of obfuscation, activity-oriented partial obfuscation, as a methodological contribution for researchers interested in obtaining visual confirmation of hand-related activities in the wild. We tested the effects of this approach by collecting ten diverse and realistic video scenarios in which the wearer performed hand-related activities while bystanders performed activities that could be of concern if recorded. We then conducted an online experiment with 367 participants to evaluate the effect of varying degrees of obfuscation on bystander privacy and visual confirmation utility. Our results show that activity-oriented partial obfuscation (1) maintains visual confirmation of the wearer's hand-related activity, especially when an object is present in the hand, even when extreme filters are applied, and (2) significantly reduces bystander concerns and enhances bystander privacy. Informed by our analysis, we further discuss the impact of the filter method used in activity-oriented partial obfuscation on bystander privacy and concerns.
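One possible partial-obfuscation filter of the kind the abstract describes can be sketched as: obfuscate the whole frame, then paste back the unmodified region around the wearer's hand. The `pixelate_except` function, the bounding-box representation, and the block size below are illustrative assumptions; the study evaluates several filter methods, not this specific code.

```python
import numpy as np

def pixelate_except(frame: np.ndarray, keep_box: tuple, block: int = 16) -> np.ndarray:
    """Pixelate everything outside keep_box = (y0, y1, x0, x1).

    Illustrative sketch of activity-oriented partial obfuscation: the whole
    frame is replaced by per-block means (a pixelation filter), then the
    sharp hand region is restored so the wearer's activity stays readable.
    """
    h, w = frame.shape[:2]
    out = frame.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Replace each block with its mean color (per channel if color)
            out[y:y + block, x:x + block] = frame[y:y + block, x:x + block].mean(axis=(0, 1))
    y0, y1, x0, x1 = keep_box
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]  # restore the unobfuscated hand region
    return out
```

A blur or edge filter could be substituted for the per-block mean; the key design point is that the obfuscation is applied everywhere except the activity region, trading bystander detail for wearer-activity legibility.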
