Results 1 - 3 of 3
1.
Sensors (Basel) ; 21(4)2021 Feb 14.
Article in English | MEDLINE | ID: mdl-33672934

ABSTRACT

Applications related to smart cities require virtual cities in the experimental development stage. To build a virtual city that is close to a real city, a large number of human models of various types must be created. To reduce the cost of acquiring models, this paper proposes a method to reconstruct 3D human meshes from single images captured with an ordinary camera. It presents a method for reconstructing the complete mesh of the human body from a single RGB image using a generative adversarial network consisting of a newly designed shape-pose-based generator (based on deep convolutional neural networks) and an enhanced multi-source discriminator. Using a machine learning approach, the reliance on multiple sensors is reduced and 3D human meshes can be recovered with a single camera, thereby reducing the cost of building smart cities. The proposed method achieves an accuracy of 92.1% in body shape recovery and can process 34 images per second. It significantly improves performance compared with previous state-of-the-art approaches. Given a single-view image of various humans, our results can be used to generate various 3D human models, which facilitates 3D human modeling work for simulating virtual cities. Since the method also recovers the poses of the humans in the image, various human poses can be created by providing corresponding images with specific human poses.
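To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of a shape-pose generator paired with a multi-source-style discriminator in PyTorch; the SMPL-like parameter sizes (10 shape, 72 pose coefficients), the 224x224 input, and all layer widths are assumptions for illustration.

    import torch
    import torch.nn as nn

    class ShapePoseGenerator(nn.Module):
        """CNN encoder over a single RGB image with separate shape and pose heads."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.shape_head = nn.Linear(64, 10)   # body shape coefficients (assumed size)
            self.pose_head = nn.Linear(64, 72)    # per-joint pose parameters (assumed size)

        def forward(self, img):
            feat = self.encoder(img)
            return self.shape_head(feat), self.pose_head(feat)

    class MultiSourceDiscriminator(nn.Module):
        """Scores (shape, pose) pairs as real (e.g. from scan/mocap data) or generated."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(10 + 72, 128), nn.ReLU(), nn.Linear(128, 1))

        def forward(self, shape, pose):
            return self.net(torch.cat([shape, pose], dim=-1))

    # Usage sketch: predict shape/pose parameters from one image
    shape, pose = ShapePoseGenerator()(torch.randn(1, 3, 224, 224))

In a full adversarial setup the generator's parameters would feed a mesh model, and the discriminator loss would push predicted shape and pose toward the distribution of real human bodies.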


Subject(s)
Human Body , Image Processing, Computer-Assisted , Cities , Humans , Neural Networks, Computer , User-Computer Interface
2.
Saudi Pharm J ; 29(8): 843-856, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34408545

ABSTRACT

The current study focuses on the development and evaluation of nano lipidic carriers (NLCs) for codelivery of sorafenib (SRF) and ganoderic acid (GA) to treat hepatocellular carcinoma (HCC). The dual drug-loaded NLCs were prepared by a hot microemulsion technique, with SRF and GA as the drugs, Precirol ATO5 and Capmul PG8 as the lipids, and Solutol HS15 and ethanol as the surfactant and cosolvent. The optimized drug-loaded NLCs were extensively characterized through in vitro and in vivo studies. The optimized formulation had a particle size of 29.28 nm, an entrapment efficiency of 93.1%, and a loading capacity of 14.21%. In vitro drug release studies revealed that >64% of the drug was released in the first 6 h. The enzymatic stability analysis revealed the stable nature of the NLCs at various gastric pH values, while accelerated stability analysis at 25 °C/60% RH indicated an insignificant effect of the studied conditions on the particle size, entrapment efficiency, and loading capacity of the NLCs. Cytotoxicity assays on HepG2 cells indicated higher cytotoxicity of the SRF- and GA-loaded NLCs compared with the free drugs (p < 0.05). Furthermore, the optimized formulation suppressed the development of hepatic nodules in Wistar rats and significantly reduced the levels of hepatic enzymes and nonhepatic elements against DEN intoxication. The SRF- and GA-loaded NLCs also showed a significant effect in suppressing tumor growth and inflammatory cytokines in the experimental study. Further, histopathology of rats treated with the SRF- and GA-loaded NLCs and DEN showed an absence of necrosis, apoptosis, and disorganized hepatic parenchyma compared with the other treated groups of rats. Overall, the dual drug-loaded NLCs outperformed the plain drugs in terms of chemoprotection, implying superior therapeutic action and, most significantly, elimination of the hepatic toxicity induced by DEN in the Wistar rat model.
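For readers unfamiliar with the reported metrics, the short Python sketch below applies common textbook definitions of entrapment efficiency and loading capacity; the mass values are invented example numbers chosen only to reproduce figures of the same order as those reported, not data from the study, and the exact loading-capacity denominator used by the authors is not stated in the abstract.

    # Illustrative only: standard EE%/LC% formulas with made-up example masses
    def entrapment_efficiency(total_drug_mg, free_drug_mg):
        # EE% = entrapped drug / total drug added x 100
        return (total_drug_mg - free_drug_mg) / total_drug_mg * 100

    def loading_capacity(total_drug_mg, free_drug_mg, lipid_mg):
        # One common definition: LC% = entrapped drug / lipid mass x 100
        return (total_drug_mg - free_drug_mg) / lipid_mg * 100

    print(entrapment_efficiency(10.0, 0.69))    # ~93.1% for these example masses
    print(loading_capacity(10.0, 0.69, 65.5))   # ~14.2% for these example masses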

3.
Sensors (Basel) ; 19(20)2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31615164

ABSTRACT

Nowadays, deep learning methods based on virtual environments are widely applied to research and technology development for the smart sensors and devices of autonomous vehicles. Learning various driving environments in advance is important for handling unexpected situations that can occur in the real world and for continuing to drive without accidents. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering various possible real-world situations. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle, or a scenario analysis process must be conducted by experts. However, both approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a deep-learning-based scenario generation method that creates scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method extracts multiple events from a video taken on a real road using deep learning and reproduces those events in a virtual simulator. First, a Faster Region-based Convolutional Neural Network (Faster R-CNN) extracts bounding boxes for each object in the driving video. Second, high-level event bounding boxes are calculated. Third, Long-term Recurrent Convolutional Networks (LRCN) classify the type of each extracted event. Finally, all event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments using real driving video data and a virtual simulator were conducted. The deep learning models achieved an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in a virtual simulator for the smart sensors and devices of an autonomous vehicle.
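The following is a minimal sketch of the described detection-then-classification pipeline, not the paper's implementation: it uses torchvision's pretrained Faster R-CNN for per-frame object detection and a simple CNN+LSTM stand-in for the LRCN event classifier. The event classes, clip length, and all layer sizes are assumptions for illustration.

    import torch
    import torch.nn as nn
    import torchvision

    # Step 1 (sketch): per-frame object detection with a pretrained Faster R-CNN
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    class LRCNEventClassifier(nn.Module):
        """CNN+LSTM over a short clip of an event region, classifying the event type."""
        def __init__(self, num_event_types=4):   # e.g. cut-in, braking, pedestrian, lane change (assumed)
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # 16*4*4 = 256 features per frame
            )
            self.lstm = nn.LSTM(256, 128, batch_first=True)
            self.head = nn.Linear(128, num_event_types)

        def forward(self, clip):                  # clip: (B, T, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])          # classify from the last time step

    # Usage sketch: detections per frame, then event classification over a clip
    with torch.no_grad():
        detections = detector([torch.rand(3, 480, 640)])   # list of dicts: 'boxes', 'labels', 'scores'
    clip_logits = LRCNEventClassifier()(torch.rand(1, 8, 3, 64, 64))

In the paper's pipeline the per-frame boxes would first be grouped into high-level event regions before classification, and the classified events would then be scripted into the virtual simulator.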
