ABSTRACT
Since the beginning of the COVID-19 pandemic, many works have been published proposing solutions to the problems that arose in this scenario. One of the topics that has attracted the most attention is the development of computer-based strategies to detect COVID-19 from thoracic medical imaging, such as chest X-rays (CXR) and computed tomography (CT) scans. A search for published works on this theme easily returns thousands of results. This is partly explained by the fact that the most severe worldwide pandemic in recent history emerged amid major technological advances, including the technical means to handle the large amount of data produced in this context. Although several of these works describe important advances, others merely apply well-known methods and techniques without a more substantial or critical contribution. Hence, identifying the works with the most relevant contributions is not a trivial task. The number of citations a paper receives is probably the most straightforward and intuitive measure of its impact on the research community. To help researchers in this scenario, we present a review of the top-100 most cited papers in this field of investigation according to the Google Scholar search engine. We evaluate the distribution of the top-100 papers with respect to several important aspects: the type of medical imaging explored, learning settings, segmentation strategy, explainable artificial intelligence (XAI), and, finally, dataset and code availability.
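To make the kind of approach surveyed here concrete, the sketch below shows a minimal transfer-learning classifier of the sort many of the reviewed papers build on: an ImageNet-pretrained CNN fine-tuned to separate COVID-19 from non-COVID chest X-rays. It assumes PyTorch and torchvision; the two-class setup, the random batch, and the hyperparameters are illustrative and not taken from any specific reviewed work.

import torch
import torch.nn as nn
from torchvision import models

# Illustrative two-class setup (COVID-19 vs. non-COVID CXR); labels are hypothetical.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on a random batch standing in for preprocessed CXR images.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()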
Subjects
COVID-19 , Artificial Intelligence , COVID-19/diagnostic imaging , Humans , Pandemics , SARS-CoV-2 , Tomography, X-Ray Computed/methods , X-Rays

ABSTRACT
Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelengths has gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been shown to be an alternative when the iris trait is not available due to occlusion or low image resolution. However, the periocular trait does not offer the high uniqueness of the iris trait. Thus, datasets containing many subjects are essential to assess a biometric system's capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and by attributes of the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all of these factors, in this work we present a new periocular dataset containing samples from 1122 subjects, acquired in 3 sessions with 196 different mobile devices. The images were captured in unconstrained environments with a single instruction to the participants: to place their eyes in a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on Multi-class Classification, Multi-task Learning, Pairwise Filters Networks, and Siamese Networks. The results achieved under the closed- and open-world protocols, for both the identification and verification tasks, show that this area still needs further research and development.
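As an illustration of one of the benchmarked settings, the sketch below shows a minimal Siamese verification pipeline in PyTorch: twin networks with shared weights embed a pair of periocular images, and a contrastive loss pulls genuine pairs together and pushes impostor pairs apart. This is a generic sketch under assumed choices (backbone, input size, embedding dimension, margin), not the exact architectures or training setup used in the benchmark.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    # Small CNN mapping a periocular image to an L2-normalized embedding.
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)

class SiameseNet(nn.Module):
    # Twin branches share the same weights: one EmbeddingNet applied to both inputs.
    def __init__(self):
        super().__init__()
        self.embed = EmbeddingNet()

    def forward(self, a, b):
        return self.embed(a), self.embed(b)

def contrastive_loss(ea, eb, label, margin=1.0):
    # label = 1 for a genuine pair (same subject), 0 for an impostor pair.
    d = F.pairwise_distance(ea, eb)
    return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

# Usage on a toy batch of pairs standing in for 224x224 RGB periocular crops.
model = SiameseNet()
a, b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
ea, eb = model(a, b)
loss = contrastive_loss(ea, eb, labels)

At test time, verification reduces to thresholding the distance between the two embeddings, which is what the closed- and open-world protocols evaluate with genuine and impostor pairs.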