Results 1 - 18 of 18
1.
Ann Nucl Med ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717535

ABSTRACT

OBJECTIVE: In preclinical studies, high-throughput positron emission tomography (PET) imaging, i.e., the simultaneous scanning of multiple animals, can reduce the time spent on animal experiments, the cost of PET tracers, and the risks associated with tracer synthesis. The image quality achieved in high-throughput imaging is known to depend on the PET system. Herein, we investigated the influence of a large field of view (FOV) PET scanner on high-throughput imaging. METHODS: Using a small-animal PET scanner with a large FOV, we compared the image quality obtained when scanning four objects simultaneously with that obtained when scanning a single object, using both phantoms and animals. Image quality was assessed with uniformity, recovery coefficient (RC), and spillover ratio (SOR), which are indicators of image noise, spatial resolution, and quantitative precision, respectively. For the phantom study, we used the NEMA NU 4-2008 image quality phantom and evaluated uniformity, RC, and SOR; for the animal study, we used Wistar rats and evaluated spillover in the heart and kidney. RESULTS: In the phantom study, scanning four phantoms had little effect on image quality, especially SOR, compared with scanning one phantom. Likewise, in the animal study, scanning four rats had little effect on spillover from the heart muscle and kidney cortex compared with scanning one rat. CONCLUSIONS: This study demonstrated that an animal PET scanner with a large FOV is suitable for high-throughput imaging. Thus, a large FOV PET scanner can support drug discovery and bridging research through rapid pharmacological and pathological evaluation.
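As a rough illustration of how the NEMA-style metrics in this abstract are computed, the recovery coefficient and spillover ratio reduce to simple ratios of region-of-interest (ROI) statistics; the ROI values below are hypothetical, not taken from the study.

```python
def recovery_coefficient(hot_rod_max, uniform_mean):
    # RC: maximum value in a hot-rod ROI divided by the mean of the uniform region
    return hot_rod_max / uniform_mean

def spillover_ratio(cold_insert_mean, uniform_mean):
    # SOR: mean of a cold (water- or air-filled) insert divided by the uniform-region mean
    return cold_insert_mean / uniform_mean

rc = recovery_coefficient(hot_rod_max=9.2, uniform_mean=10.0)   # 0.92
sor = spillover_ratio(cold_insert_mean=0.8, uniform_mean=10.0)  # 0.08
```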

2.
Phys Med Biol ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749457

ABSTRACT

In positron emission tomography (PET) reconstruction, the integration of time-of-flight (TOF) information, known as TOF-PET, has been a major research focus. Compared with traditional reconstruction methods, the introduction of TOF enhances the signal-to-noise ratio (SNR) of images. TOF precision is characterized by the full width at half maximum (FWHM) of the timing-error distribution and its offset from the ground truth, referred to as the coincidence time resolution (CTR) and bias, respectively. This study proposes a network combining a Transformer and a convolutional neural network (CNN) to extract TOF information from detector waveforms, using event waveform pairs as inputs. This approach integrates the global self-attention mechanism of the Transformer, which focuses on temporal relationships, with the local receptive field of the CNN. The combination of global and local information allows the network to assign greater weight to the rising edges of waveforms, thereby extracting valuable temporal information for precise TOF predictions. Experiments were conducted using lutetium yttrium oxyorthosilicate (LYSO) scintillators and silicon photomultiplier (SiPM) detectors. The network was trained and tested using cropped waveform datasets. Compared with the constant fraction discriminator (CFD), a CNN, a CNN with attention, long short-term memory (LSTM), and a Transformer, our network achieved an average CTR of 189 ps, a reduction of 82 ps (more than 30%), 13 ps (6.4%), 12 ps (6.0%), 16 ps (7.8%), and 9 ps (4.6%), respectively. Additionally, reductions of 10.3, 8.7, 6.7, and 4 ps in average bias were achieved compared with the CNN, CNN with attention, LSTM, and Transformer. This work demonstrates the potential of applying the Transformer to PET TOF estimation using real experimental data. Through the integration of CNN and Transformer with local and global attention, it achieves optimal performance, presenting a novel direction for future research in this field.
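The CTR and bias figures quoted above can be read as the FWHM and mean of the timing-error distribution. A minimal sketch, assuming Gaussian errors (the picosecond values in the test usage are hypothetical):

```python
import math
import statistics

def ctr_and_bias(predicted_ps, true_ps):
    # Timing errors between predicted and ground-truth event time differences
    errors = [p - t for p, t in zip(predicted_ps, true_ps)]
    sigma = statistics.pstdev(errors)               # spread of the error distribution
    fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma   # CTR, assuming a Gaussian error
    bias = statistics.fmean(errors)                 # mean offset from ground truth
    return fwhm, bias
```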

3.
Front Robot AI ; 11: 1240408, 2024.
Article in English | MEDLINE | ID: mdl-38590970

ABSTRACT

In recent years, virtual idols have garnered considerable attention because they can perform activities similar to those of real idols. However, as they are fictitious idols with no physical presence, they cannot engage in physical interactions such as a handshake. Combining a robotic hand with a display showing the virtual idol is one method of solving this problem. However, even when a physical handshake is possible, the form of handshake that can effectively induce the desired behavior is unclear. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted step-wise experiments. The series of experiments revealed that a handshake by the robotic hand increased the feeling of intimacy toward the virtual idol and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy. Moreover, the handshake style peculiar to idols, in which the hand keeps holding the user's hand after the conversation, further increased the feeling of intimacy toward the virtual idol.

5.
Radiol Phys Technol ; 17(1): 24-46, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38319563

ABSTRACT

This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
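As a reminder of what the iterative methods surveyed here look like in practice, the classic maximum-likelihood expectation maximization (MLEM) update can be sketched in a few lines for a toy system matrix (the matrix and counts below are illustrative only):

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Toy MLEM: A[i, j] = probability that emission in pixel j is detected
    in bin i, y = measured counts per bin. Returns a non-negative image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
        x = x * (A.T @ ratio) / sens         # multiplicative EM update
    return x

A = np.array([[0.9, 0.1], [0.1, 0.9]])
y = A @ np.array([4.0, 2.0])                 # noiseless data from a known image
x = mlem(A, y)                               # converges toward [4, 2]
```

With noiseless data and a well-conditioned system, the estimate approaches the true image; with real noisy data, early stopping or regularization controls the noise, which is where the deep learning methods reviewed above come in.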


Subjects
Deep Learning ; Image Processing, Computer-Assisted ; Image Processing, Computer-Assisted/methods ; Positron-Emission Tomography/methods ; Neural Networks, Computer ; Algorithms ; Phantoms, Imaging
6.
Sci Rep ; 14(1): 758, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191647

ABSTRACT

Cough is known as a protective reflex that keeps the airway free from harmful substances. Although brain activity during cough has previously been examined mainly by functional magnetic resonance imaging (fMRI) with model analysis, this method does not capture real brain activity during cough. To measure brain activity during cough accurately, we conducted whole-brain scans during different coughing tasks while correcting for head motion using a restraint-free positron emission tomography (PET) system. Twenty-four healthy right-handed males underwent multiple PET scans with [15O]H2O. Four tasks were performed during the scans: "resting"; "voluntary cough (VC)", which simply repeated spontaneous coughing; "induced cough (IC)", in which participants coughed in response to an acid stimulus in the cough-inducing method with tartaric acid (CiTA); and "suppressed cough (SC)", in which coughing was suppressed despite CiTA. Whole-brain analyses of the motion-corrected data revealed that VC chiefly activated the cerebellum, extending to the pons. In contrast, the CiTA-related tasks (IC and SC) activated higher sensory regions of the cerebral cortex and associated brain regions. These results suggest that brain activity during simple cough is controlled chiefly by infratentorial areas, whereas manipulating cough predominantly requires higher sensory brain regions to allow top-down control of information from the periphery.


Subjects
Cough ; Tomography, X-Ray Computed ; Male ; Humans ; Brain/diagnostic imaging ; Cerebellum ; Cerebral Cortex
7.
IEEE Trans Med Imaging ; 43(5): 1654-1663, 2024 May.
Article in English | MEDLINE | ID: mdl-38109238

ABSTRACT

Direct positron emission imaging (dPEI), which does not require a mathematical reconstruction step, is a next-generation molecular imaging modality. To maximize the practical applicability of the dPEI system in clinical practice, we introduce a novel reconstruction-free image-formation method called direct µCompton imaging, which directly localizes the interaction positions of Compton scattering of the annihilation photons in three-dimensional space, utilizing the same compact geometry of ultrafast time-of-flight radiation detectors as dPEI. This unique imaging method not only provides anatomical information about an object but can also be applied to the attenuation correction of dPEI images. Evaluations through Monte Carlo simulation showed that hybrid functional and anatomical images can be acquired using this multimodal imaging system. By fusing the images, it is possible to access various object data simultaneously, ensuring a synergistic effect between the two imaging methodologies. In addition, attenuation correction improves the quantification of dPEI images. The realization of a fully reconstruction-free imaging system, from image generation to quantitative correction, provides a new perspective in molecular imaging.


Subjects
Image Processing, Computer-Assisted ; Monte Carlo Method ; Phantoms, Imaging ; Positron-Emission Tomography ; Image Processing, Computer-Assisted/methods ; Positron-Emission Tomography/methods ; Positron-Emission Tomography/instrumentation ; Algorithms ; Humans ; Computer Simulation
8.
Phys Med Biol ; 68(15)2023 07 21.
Article in English | MEDLINE | ID: mdl-37406637

ABSTRACT

Objective. Deep image prior (DIP) has recently attracted attention owing to its use in unsupervised positron emission tomography (PET) image reconstruction, which does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction has not previously been feasible because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization into block iteration and sequential learning over an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated the proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicate that the proposed method can produce high-quality images without a prior training dataset. Thus, it could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
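For readers unfamiliar with the relative difference penalty mentioned above, a 1-D sketch of its usual form follows (adjacent-pixel neighbors only; the full formulation sums over a weighted 3-D neighborhood):

```python
import numpy as np

def rdp_1d(x, gamma=2.0, eps=1e-12):
    """Relative difference penalty over adjacent pixel pairs:
    sum (xj - xk)^2 / (xj + xk + gamma*|xj - xk|).
    gamma controls edge preservation: larger gamma penalizes big jumps less."""
    d = x[1:] - x[:-1]
    s = x[1:] + x[:-1]
    return float(np.sum(d**2 / (s + gamma * np.abs(d) + eps)))

rdp_1d(np.array([1.0, 1.0, 1.0]))  # uniform image: penalty is 0
```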


Subjects
Fluorodeoxyglucose F18 ; Image Processing, Computer-Assisted ; Humans ; Image Processing, Computer-Assisted/methods ; Tomography, X-Ray Computed ; Positron-Emission Tomography/methods ; Algorithms ; Phantoms, Imaging
9.
IEEE Trans Med Imaging ; 42(6): 1822-1834, 2023 06.
Article in English | MEDLINE | ID: mdl-37022039

ABSTRACT

List-mode positron emission tomography (PET) image reconstruction is an important tool for PET scanners with many lines of response and additional information such as time-of-flight and depth-of-interaction. Deep learning is one possible solution for enhancing the quality of PET image reconstruction. However, the application of deep learning techniques to list-mode PET image reconstruction has not progressed because list data is a sequence of bit codes unsuitable for processing by convolutional neural networks (CNNs). In this study, we propose a novel list-mode PET image reconstruction method using an unsupervised CNN called deep image prior (DIP); this is the first attempt to integrate list-mode PET image reconstruction with a CNN. The proposed list-mode DIP reconstruction (LM-DIPRecon) method alternately iterates the regularized list-mode dynamic row-action maximum-likelihood algorithm (LM-DRAMA) and a magnetic resonance imaging-conditioned DIP (MR-DIP) using the alternating direction method of multipliers. We evaluated LM-DIPRecon using both simulation and clinical data, and it achieved sharper images and better tradeoff curves between contrast and noise than the LM-DRAMA, MR-DIP, and sinogram-based DIPRecon methods. These results indicate that LM-DIPRecon is useful for quantitative PET imaging with limited events while preserving accurate raw-data information. In addition, as list data has finer temporal information than dynamic sinograms, list-mode deep image prior reconstruction is expected to be useful for 4D PET imaging and motion correction.
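The "sequence of bit codes" nature of list data can be illustrated with a toy event word; the field layout below (two 16-bit crystal IDs plus a signed 16-bit TOF bin packed into one integer) is purely hypothetical, since real scanners use vendor-specific formats:

```python
def pack_event(crystal_a, crystal_b, tof_bin):
    # Pack two 16-bit crystal IDs and a signed 16-bit TOF bin into one word
    return (crystal_a << 32) | (crystal_b << 16) | (tof_bin & 0xFFFF)

def unpack_event(word):
    crystal_a = (word >> 32) & 0xFFFF
    crystal_b = (word >> 16) & 0xFFFF
    tof_bin = word & 0xFFFF
    if tof_bin >= 0x8000:          # restore the sign of the TOF field
        tof_bin -= 0x10000
    return crystal_a, crystal_b, tof_bin

unpack_event(pack_event(513, 14877, -3))  # round-trips to (513, 14877, -3)
```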


Subjects
Image Processing, Computer-Assisted ; Positron-Emission Tomography ; Image Processing, Computer-Assisted/methods ; Positron-Emission Tomography/methods ; Motion ; Computer Simulation ; Algorithms ; Phantoms, Imaging
10.
Ann Nucl Med ; 36(8): 746-755, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35698016

ABSTRACT

OBJECTIVE: Various motion correction (MC) algorithms for positron emission tomography (PET) have been proposed to improve diagnostic performance and to support research in brain activity and neurology. We have incorporated an MC system based on optical motion tracking into a brain-dedicated time-of-flight PET scanner. In this study, we evaluate the performance characteristics of the developed PET scanner when performing MC, in accordance with the standards and guidelines for brain PET scanners. METHODS: We evaluated the spatial resolution, scatter fraction, count-rate characteristics, sensitivity, and image quality of PET images. MC performance was evaluated in terms of the spatial resolution and image quality, which are affected by movement. RESULTS: In the basic performance evaluation, the average spatial resolution by iterative reconstruction was 2.2 mm at the 10 mm offset position. The measured peak noise-equivalent count rate was 38.0 kcps at 16.7 kBq/mL. The scatter fraction and system sensitivity were 43.9% and 22.4 cps/(Bq/mL), respectively. The image contrast recovery was between 43.2% (10 mm sphere) and 72.0% (37 mm sphere). In the MC performance evaluation, the average spatial resolution was 2.7 mm at the 10 mm offset position when the phantom stage with the point source translated by ±15 mm along the y-axis, and the image contrast recovery was between 34.2% (10 mm sphere) and 66.8% (37 mm sphere). CONCLUSIONS: The images reconstructed using MC were restored to nearly the same state as those acquired at rest. Therefore, we conclude that this scanner can observe more natural brain activity.
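The count-rate figures reported here follow standard NEMA definitions; as a quick reference, the noise-equivalent count rate and scatter fraction are computed from the trues (T), scatters (S), and randoms (R) rates. The rates below are hypothetical (the second example is merely chosen so the ratio matches the 43.9% quoted above):

```python
def necr(trues, scatters, randoms):
    # Noise-equivalent count rate: T^2 / (T + S + R)
    return trues**2 / (trues + scatters + randoms)

def scatter_fraction(trues, scatters):
    # Fraction of scattered events among true-plus-scattered coincidences: S / (T + S)
    return scatters / (trues + scatters)

necr(100.0, 60.0, 40.0)       # 100^2 / 200 = 50.0
scatter_fraction(56.1, 43.9)  # 0.439, i.e., a 43.9% scatter fraction
```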


Subjects
Positron-Emission Tomography ; Tomography, X-Ray Computed ; Brain/diagnostic imaging ; Head ; Humans ; Phantoms, Imaging ; Positron-Emission Tomography/methods
11.
Phys Med Biol ; 67(4)2022 02 11.
Article in English | MEDLINE | ID: mdl-35100575

ABSTRACT

Objective. Convolutional neural networks (CNNs) are a strong tool for improving the coincidence time resolution (CTR) of time-of-flight (TOF) positron emission tomography detectors. However, many signal waveforms from multiple source positions are required for CNN training. Furthermore, there is concern that TOF estimation is biased near the edge of the training space, despite the reduced estimation variance (i.e., timing uncertainty). Approach. We propose a simple method for unbiased TOF estimation that combines a conventional leading-edge discriminator (LED) with a CNN that can be trained with waveforms collected from a single source position. The proposed method estimates and corrects the time-difference error calculated by the LED rather than the absolute time difference. This model can eliminate the TOF estimation bias, as the combination with the LED converts the distribution of the label data from discrete values at each position into a continuous symmetric distribution. Main results. Evaluation results using signal waveforms collected from scintillation detectors show that the proposed method can correctly estimate all source positions without bias after training on a single source position. Moreover, the proposed method improves the CTR of the conventional LED. Significance. We believe that the improved CTR will not only increase the signal-to-noise ratio but will also contribute significantly to direct positron emission imaging.
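A leading-edge discriminator of the kind combined with the CNN here can be sketched as a threshold crossing with linear interpolation between samples (the threshold and sampling period are arbitrary choices for illustration):

```python
def leading_edge_time(samples, threshold, dt=1.0):
    """Return the interpolated time at which the waveform first rises through
    `threshold`, in units of the sampling period dt; None if it never does."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # Linear interpolation between the two bracketing samples
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None

leading_edge_time([0.0, 0.0, 1.0, 3.0, 4.0], threshold=2.0)  # crosses at t = 2.5
```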


Subjects
Photons ; Scintillation Counting ; Neural Networks, Computer ; Positron-Emission Tomography/methods ; Scintillation Counting/methods ; Signal-To-Noise Ratio
13.
Med Image Anal ; 74: 102226, 2021 12.
Article in English | MEDLINE | ID: mdl-34563861

ABSTRACT

Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep-decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB / 0.886 ± 0.007), compared with Gaussian filtering (26.68 ± 0.10 dB / 0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB / 0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB / 0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB / 0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can reduce PET scan times and PET tracer doses considerably without any impact on patients.
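The peak signal-to-noise ratio figures compared above follow the standard definition; a minimal sketch for flattened image arrays (the values below are illustrative):

```python
import math

def psnr(reference, test, peak):
    # PSNR in dB: 10*log10(peak^2 / MSE) between a reference and a test image
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

psnr([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], peak=2.0)  # 10*log10(4) ≈ 6.02 dB
```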


Subjects
Image Processing, Computer-Assisted ; Positron-Emission Tomography ; Fluorodeoxyglucose F18 ; Humans ; Neural Networks, Computer ; Signal-To-Noise Ratio
14.
Radiol Phys Technol ; 13(2): 160-169, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32358643

ABSTRACT

It is often difficult to distinguish between benign and malignant pulmonary nodules using image diagnosis alone. A biopsy is performed when malignancy is suspected based on CT examination; however, biopsies are highly invasive, and patients with benign nodules may undergo unnecessary procedures. In this study, we performed automated classification of pulmonary nodules using a three-dimensional convolutional neural network (3DCNN). In addition, to increase the amount of training data, we utilized generative adversarial networks (GANs), a deep learning technique, as a data augmentation method. In this approach, three-dimensional regions of different sizes centered on pulmonary nodules were extracted from CT images, and a large number of pseudo pulmonary nodules were synthesized using a 3DGAN. The 3DCNN has a multi-scale structure in which the multiple regions around each nodule are input and integrated in the final layer. During the training of the multi-scale 3DCNN, pretraining was first performed using the 3DGAN-synthesized nodules, and the pulmonary nodules were then classified by fine-tuning the pretrained model using real nodules. In an evaluation involving 60 cases with pathological diagnoses confirmed by biopsy, the sensitivity was 90.9% and the specificity was 74.1%. The classification accuracy was improved compared with training using only real nodules without pretraining. The 2DCNN results of our previous study were slightly better than the 3DCNN results. However, this shows that even though a 3DCNN is difficult to train with limited data, as is the case for medical images, classification accuracy can be improved by a GAN.
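The multi-scale extraction of 3D regions around each nodule can be sketched as simple zero-padded cropping; the patch edge lengths below are hypothetical, not those used in the study:

```python
import numpy as np

def multiscale_patches(volume, center, sizes=(8, 16, 32)):
    """Crop cubic regions of several edge lengths centered on a nodule,
    zero-padding at the volume borders so every patch has its full size."""
    cz, cy, cx = center
    patches = []
    for s in sizes:
        h = s // 2
        padded = np.pad(volume, h, mode="constant")
        z, y, x = cz + h, cy + h, cx + h     # center shifted by the padding
        patches.append(padded[z - h:z + h, y - h:y + h, x - h:x + h])
    return patches

patches = multiscale_patches(np.zeros((64, 64, 64)), center=(32, 32, 32))
[p.shape for p in patches]  # [(8, 8, 8), (16, 16, 16), (32, 32, 32)]
```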


Subjects
Imaging, Three-Dimensional/methods ; Lung Neoplasms/diagnostic imaging ; Neural Networks, Computer ; Humans ; Tomography, X-Ray Computed
15.
Int J Comput Assist Radiol Surg ; 15(1): 173-178, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31732864

ABSTRACT

PURPOSE: Early detection and treatment of lung cancer is of great importance. However, pulmonary-nodule classification using CT images alone is difficult. To address this, the authors previously proposed a pulmonary-nodule classification method based on a deep convolutional neural network (DCNN) and generative adversarial networks (GAN). In that method, classification was performed exclusively using axial cross sections of pulmonary nodules. During actual medical examinations, however, a comprehensive judgment can only be made by observing various pulmonary-nodule cross sections. In the present study, a comprehensive analysis was performed by extending the previously proposed DCNN- and GAN-based automatic classification method to multiple cross sections of pulmonary nodules. METHODS: Using the proposed method, CT images of 60 cases with pathological diagnoses confirmed by biopsy were analyzed. First, multiplanar images of each pulmonary nodule were generated, and classification training was performed for three DCNNs. Pretraining was initially performed using GAN-generated nodule images, followed by fine-tuning of each pretrained DCNN using the original nodule images. RESULTS: In the evaluation, the specificity was 77.8% and the sensitivity was 93.9%; the specificity improved by 11.1% without any reduction in sensitivity compared with our previous report. CONCLUSION: This study reports the development of a comprehensive analysis method that classifies pulmonary nodules at multiple sections using a GAN and DCNN. The effectiveness of the proposed discrimination method based on multiplanar images is improved compared with that of the authors' previous study. In addition, the possibility of enhancing classification accuracy by using GAN-generated images, instead of conventional data augmentation, for pretraining on medical datasets containing relatively few images has also been demonstrated.


Subjects
Early Detection of Cancer/methods ; Lung Neoplasms/classification ; Neural Networks, Computer ; Solitary Pulmonary Nodule/classification ; Tomography, X-Ray Computed/methods ; Humans ; Lung Neoplasms/diagnosis ; Reproducibility of Results ; Solitary Pulmonary Nodule/diagnosis
16.
Radiol Phys Technol ; 13(1): 27-36, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31686300

ABSTRACT

In digital mammography, which is used for the early detection of breast tumors, lesions may be overlooked owing to overlap between normal tissue and lesions. Digital breast tomosynthesis, by contrast, can acquire three-dimensional images, so tissue overlap is reduced and the shape and distribution of lesions can be identified more easily. However, it is often difficult to distinguish between benign and malignant breast lesions on images, and the larger number of acquired images complicates radiological interpretation and can reduce diagnostic accuracy. In this study, we developed an automated classification method for diagnosing breast lesions on digital breast tomosynthesis images using radiomics to comprehensively analyze the radiological images. We extracted an analysis area centered on the lesion and calculated 70 radiomic features, including the shape of the lesion, presence of spicula, and texture information. We compared the accuracy obtained by inputting the radiomic features into four classifiers (support vector machine, random forest, naïve Bayes, and multi-layer perceptron), and the final classification result was output by the classifier with the highest accuracy. To confirm the effectiveness of the proposed method, we used 24 cases with pathological diagnoses confirmed by biopsy. We also compared the classification results with and without dimension reduction using the least absolute shrinkage and selection operator (LASSO). When the support vector machine was used as the classifier, the correct identification rate for benign tumors was 55% and that for malignant tumors was 84%, the best results among the classifiers. These results indicate that the proposed method may help in more accurately diagnosing cases that are difficult to classify on images.


Subjects
Breast Neoplasms/diagnostic imaging ; Machine Learning ; Mammography/methods ; Pattern Recognition, Automated/methods ; Algorithms ; Bayes Theorem ; Breast/diagnostic imaging ; Diagnosis, Computer-Assisted ; False Positive Reactions ; Female ; Humans ; Image Processing, Computer-Assisted ; ROC Curve ; Radiographic Image Interpretation, Computer-Assisted/methods ; Reproducibility of Results ; Support Vector Machine
17.
Biomed Res Int ; 2019: 6051939, 2019.
Article in English | MEDLINE | ID: mdl-30719445

ABSTRACT

Lung cancer is a leading cause of death worldwide. Although computed tomography (CT) examinations are frequently used for lung cancer diagnosis, it can be difficult to distinguish between benign and malignant pulmonary nodules on the basis of CT images alone. Therefore, a bronchoscopic biopsy may be conducted if malignancy is suspected following a CT examination. However, biopsies are highly invasive, and patients with benign nodules may undergo many unnecessary biopsies. To prevent this, an imaging diagnosis with high classification accuracy is essential. In this study, we investigate the automated classification of pulmonary nodules in CT images using a deep convolutional neural network (DCNN). We use generative adversarial networks (GANs) to generate additional images when only small amounts of data are available, a common problem in medical research, and evaluate whether the classification accuracy is improved by generating a large number of new pulmonary nodule images with the GAN. Using the proposed method, CT images of 60 cases with pathological diagnoses confirmed by biopsy are analyzed. The benign nodules assessed in this study are difficult for radiologists to differentiate because malignancy cannot be ruled out. A volume of interest centered on the pulmonary nodule is extracted from the CT images, and further images are created from axial sections and augmented data. The DCNN is trained using nodule images generated by the GAN and then fine-tuned using the actual nodule images so that it can distinguish between benign and malignant nodules. This pretraining and fine-tuning process makes it possible to correctly classify 66.7% of benign nodules and 93.9% of malignant nodules, improving the classification accuracy by approximately 20% compared with training using only the original images.
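The sensitivity and specificity figures used throughout these classification studies follow from confusion-matrix counts; the counts below are hypothetical, chosen only so the ratios mirror the 93.9%/66.7% reported here:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: fraction of malignant (positive) cases correctly identified
    # Specificity: fraction of benign (negative) cases correctly identified
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=31, fn=2, tn=18, fp=9)
# sens = 31/33 ≈ 0.939, spec = 18/27 ≈ 0.667
```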


Subjects
Lung/diagnostic imaging ; Lung/pathology ; Solitary Pulmonary Nodule/diagnostic imaging ; Solitary Pulmonary Nodule/pathology ; Algorithms ; Female ; Humans ; Lung Neoplasms/diagnostic imaging ; Lung Neoplasms/pathology ; Male ; Neural Networks, Computer ; Radiographic Image Interpretation, Computer-Assisted/methods ; Tomography, X-Ray Computed/methods
18.
Front Robot AI ; 6: 85, 2019.
Article in English | MEDLINE | ID: mdl-33501100

ABSTRACT

Communication robots, such as robotic salespeople and guide robots, are increasingly becoming involved in various aspects of people's everyday lives. However, it is still unclear what types of robot behavior are most effective for such purposes. In this research, we focused on a robotic salesperson. We believe that people often ignore what such robots have to say owing to their weak social presence; thus, these robots must behave in ways that attract attention, encouraging people to nod or reply when the robots speak. To identify suitable behaviors, we conducted two experiments. First, we conducted a field experiment in a shop on a traditional Kyoto shopping street to observe customers' real-world interactions with a robotic salesperson. We found that the first impression given by the robot had a crucial influence on its subsequent conversations with most customer groups, and that it was important for the robot to indicate, early in an interaction, that it could understand how much attention customers were paying to it if it was to persuade them to respond to what it said. Although the field experiment enabled us to observe natural interactions, it also included many external factors. To validate some of our findings without these factors, we conducted a laboratory experiment to investigate whether having the robot look back at participants when they looked at it increased their perception that the robot was aware of their actions. The results supported the findings of the field experiment. Thus, we conclude that demonstrating that a robot can recognize and respond to human behavior is important if it is to engage with people and persuade them to nod and reply to its comments.
