Results 1 - 20 of 23
1.
Nature; 623(7987): 522-530, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37968527

ABSTRACT

Recreating complex structures and functions of natural organisms in a synthetic form is a long-standing goal for humanity1. The aim is to create actuated systems with high spatial resolutions and complex material arrangements that range from elastic to rigid. Traditional manufacturing processes struggle to fabricate such complex systems2. It remains an open challenge to fabricate functional systems automatically and quickly with a wide range of elastic properties, resolutions, and integrated actuation and sensing channels2,3. We propose an inkjet deposition process called vision-controlled jetting that can create complex systems and robots. Hereby, a scanning system captures the three-dimensional print geometry and enables a digital feedback loop, which eliminates the need for mechanical planarizers. This contactless process allows us to use continuously curing chemistries and, therefore, print a broader range of material families and elastic moduli. The advances in material properties are characterized by standardized tests comparing our printed materials to the state-of-the-art. We directly fabricated a wide range of complex high-resolution composite systems and robots: tendon-driven hands, pneumatically actuated walking manipulators, pumps that mimic a heart and metamaterial structures. Our approach provides an automated, scalable, high-throughput process to manufacture high-resolution, functional multimaterial systems.


Subjects
Printing, Three-Dimensional; Robotics; Humans; Elastic Modulus; Robotics/instrumentation; Robotics/methods; Feedback; Biomimetic Materials/chemical synthesis; Biomimetic Materials/chemistry
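The digital feedback loop described in this abstract can be illustrated with a toy height-map correction step (a minimal sketch, not the authors' controller; array sizes, units, and the per-pass deposition cap are assumptions):

```python
import numpy as np

def next_layer_deposition(target_height, scanned_height, max_drop_height=0.03):
    """Toy feedback step for a jetting process (units: mm).

    target_height  : desired cumulative height map after the next pass
    scanned_height : height map measured by the 3D scanner after the last pass
    Returns a per-pixel deposition map, clipped to what one pass can add.
    """
    residual = target_height - scanned_height           # material still missing
    residual = np.clip(residual, 0.0, max_drop_height)  # never remove material, cap per-pass add
    return residual

# Example: a flat 2 mm target and a slightly uneven scan of the part so far.
target = np.full((64, 64), 2.00)
scan = 1.98 + 0.02 * np.random.rand(64, 64)
deposition = next_layer_deposition(target, scan)
print(deposition.mean(), deposition.max())
```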
2.
Nature; 591(7849): 234-239, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33692557

ABSTRACT

The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact on virtual and augmented reality, human-computer interaction, education and training. Computer-generated holography (CGH) enables high-spatio-angular-resolution 3D projection via numerical simulation of diffraction and interference1. Yet, existing physically based methods fail to produce holograms with both per-pixel focal control and accurate occlusion2,3. The computationally taxing Fresnel diffraction simulation further places an explicit trade-off between image quality and runtime, making dynamic holography impractical4. Here we demonstrate a deep-learning-based CGH pipeline capable of synthesizing a photorealistic colour 3D hologram from a single RGB-depth image in real time. Our convolutional neural network (CNN) is extremely memory efficient (below 620 kilobytes) and runs at 60 hertz for a resolution of 1,920 × 1,080 pixels on a single consumer-grade graphics processing unit. Leveraging low-power on-device artificial intelligence acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at 1.1 hertz) and edge (Google Edge TPU at 2.0 hertz) devices, promising real-time performance in future-generation virtual and augmented-reality mobile headsets. We enable this pipeline by introducing a large-scale CGH dataset (MIT-CGH-4K) with 4,000 pairs of RGB-depth images and corresponding 3D holograms. Our CNN is trained with differentiable wave-based loss functions5 and physically approximates Fresnel diffraction. With an anti-aliasing phase-only encoding method, we experimentally demonstrate speckle-free, natural-looking, high-resolution 3D holograms. Our learning-based approach and the Fresnel hologram dataset will help to unlock the full potential of holography and enable applications in metasurface design6,7, optical and acoustic tweezer-based microscopic manipulation8-10, holographic microscopy11 and single-exposure volumetric 3D printing12,13.


Subjects
Computer Graphics; Computer Systems; Holography/methods; Holography/standards; Neural Networks, Computer; Animals; Augmented Reality; Color; Datasets as Topic; Deep Learning; Microscopy; Optical Tweezers; Printing, Three-Dimensional; Time Factors; Virtual Reality
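The Fresnel diffraction simulation that this abstract identifies as the computational bottleneck can be made concrete with a standard angular-spectrum propagator (a generic NumPy sketch, not the paper's CNN pipeline; wavelength, pixel pitch, and propagation distance are arbitrary example values):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Evanescent components (negative argument) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a random phase-only hologram by 20 mm at 532 nm, 8 um pixels.
phase = np.random.uniform(0, 2 * np.pi, (1080, 1920))
recon = angular_spectrum_propagate(np.exp(1j * phase), 532e-9, 8e-6, 0.02)
intensity = np.abs(recon) ** 2
```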
3.
Annu Rev Biomed Eng; 26(1): 331-355, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38959390

ABSTRACT

Recent advancements in soft electronic skin (e-skin) have led to the development of human-like devices that reproduce the skin's functions and physical attributes. These devices are being explored for applications in robotic prostheses as well as for collecting biopotentials for disease diagnosis and treatment, as exemplified by biomedical e-skins. More recently, machine learning (ML) has been utilized to enhance device control accuracy and data processing efficiency. The convergence of e-skin technologies with ML is promoting their translation into clinical practice, especially in healthcare. This review highlights the latest developments in ML-reinforced e-skin devices for robotic prostheses and biomedical instrumentations. We first describe technological breakthroughs in state-of-the-art e-skin devices, emphasizing technologies that achieve skin-like properties. We then introduce ML methods adopted for control optimization and pattern recognition, followed by practical applications that converge the two technologies. Lastly, we briefly discuss the challenges this interdisciplinary research encounters in its clinical and industrial transition.


Subjects
Machine Learning; Robotics; Wearable Electronic Devices; Humans; Robotics/methods; Skin; Equipment Design; Biomedical Engineering/methods
4.
Nature; 569(7758): 698-702, 2019 May.
Article in English | MEDLINE | ID: mdl-31142856

ABSTRACT

Humans can feel, weigh and grasp diverse objects, and simultaneously infer their material properties while applying the right amount of force-a challenging set of tasks for a modern robot1. Mechanoreceptor networks that provide sensory feedback and enable the dexterity of the human grasp2 remain difficult to replicate in robots. Whereas computer-vision-based robot grasping strategies3-5 have progressed substantially with the abundance of visual data and emerging machine-learning tools, there are as yet no equivalent sensing platforms and large-scale datasets with which to probe the use of the tactile information that humans rely on when grasping objects. Studying the mechanics of how humans grasp objects will complement vision-based robotic object handling. Importantly, the inability to record and analyse tactile signals currently limits our understanding of the role of tactile information in the human grasp itself-for example, how tactile maps are used to identify objects and infer their properties is unknown6. Here we use a scalable tactile glove and deep convolutional neural networks to show that sensors uniformly distributed over the hand can be used to identify individual objects, estimate their weight and explore the typical tactile patterns that emerge while grasping objects. The sensor array (548 sensors) is assembled on a knitted glove, and consists of a piezoresistive film connected by a network of conductive thread electrodes that are passively probed. Using a low-cost (about US$10) scalable tactile glove sensor array, we record a large-scale tactile dataset with 135,000 frames, each covering the full hand, while interacting with 26 different objects. This set of interactions with different objects reveals the key correspondences between different regions of a human hand while it is manipulating objects. Insights from the tactile signatures of the human grasp-through the lens of an artificial analogue of the natural mechanoreceptor network-can thus aid the future design of prosthetics7, robot grasping tools and human-robot interactions1,8-10.


Subjects
Clothing; Data Analysis; Hand Strength/physiology; Hand/physiology; Neural Networks, Computer; Touch/physiology; Datasets as Topic; Humans; Man-Machine Systems
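A minimal sketch of the kind of classifier this abstract describes, assuming the 548 taxel readings are resampled onto a regular 32x32 grid (an illustrative simplification, not the authors' network; layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Toy CNN that classifies one tactile pressure frame into 26 object classes."""
    def __init__(self, n_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of 8 single-channel pressure frames.
frames = torch.rand(8, 1, 32, 32)
logits = TactileCNN()(frames)
print(logits.shape)  # torch.Size([8, 26])
```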
5.
Int J Comput Vis; 132(4): 1148-1166, 2024.
Article in English | MEDLINE | ID: mdl-38549787

ABSTRACT

Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Current methods are unable to explicitly model in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight human heads under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with individual latent spaces for identity and illumination. The prior model is learned in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model for rendering One-Light-at-A-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination. This work is an extension of Rao et al. (VoRF: Volumetric Relightable Faces 2022). We provide extensive evaluation and ablative studies of our model and also provide an application, where any face can be relighted using textual input.
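The conditioning scheme described above, a coordinate-based MLP with separate identity and illumination latent codes, can be sketched as follows (layer widths, latent sizes, and output features are assumptions, not VoRF's actual architecture):

```python
import torch
import torch.nn as nn

class ConditionedFieldMLP(nn.Module):
    """Toy coordinate MLP: (xyz, identity code, illumination code) -> (density, feature)."""
    def __init__(self, id_dim=64, light_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + id_dim + light_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 16),  # 1 density value + 16-dim appearance feature
        )

    def forward(self, xyz, z_id, z_light):
        h = torch.cat([xyz,
                       z_id.expand(xyz.shape[0], -1),
                       z_light.expand(xyz.shape[0], -1)], dim=-1)
        out = self.net(h)
        return out[:, :1], out[:, 1:]   # density, feature

# Example: query 1024 sample points for one identity under one illumination code.
model = ConditionedFieldMLP()
density, feat = model(torch.rand(1024, 3), torch.randn(1, 64), torch.randn(1, 32))
```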

7.
Phys Rev Lett; 124(11): 114301, 2020 Mar 20.
Article in English | MEDLINE | ID: mdl-32242717

ABSTRACT

An elastic cloak is a coating material that can be applied to an arbitrary inclusion to make it indistinguishable from the background medium. Cloaking against elastic disturbances, in particular, has been demonstrated using several designs and gauges. None, however, tolerate the coexistence of normal and shear stresses due to a shortage of physical realization of transformation-invariant elastic materials. Here, we overcome this limitation to design and fabricate a new class of polar materials with a distribution of body torque that exhibits asymmetric stresses. A static cloak for full two-dimensional elasticity is thus constructed based on the transformation method. The proposed cloak is made of a functionally graded multilayered lattice embedded in an isotropic continuum background. While one layer is tailored to produce a target elastic behavior, the other layers impose a set of kinematic constraints equivalent to a distribution of body torque that breaks the stress symmetry. Experimental testing under static compressive and shear loads demonstrates encouraging cloaking performance in good agreement with our theoretical prediction. The work sets a precedent in the field of transformation elasticity and should find applications in mechanical stress shielding and stealth technologies.

8.
Sci Adv; 10(5): eadk4284, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38306429

ABSTRACT

The conflict between stiffness and toughness is a fundamental problem in engineering materials design. However, the systematic discovery of microstructured composites with optimal stiffness-toughness trade-offs has never been demonstrated, hindered by the discrepancies between simulation and reality and the lack of data-efficient exploration of the entire Pareto front. We introduce a generalizable pipeline that integrates physical experiments, numerical simulations, and artificial neural networks to address both challenges. Without any prescribed expert knowledge of material design, our approach implements a nested-loop proposal-validation workflow to bridge the simulation-to-reality gap and find microstructured composites that are stiff and tough with high sample efficiency. Further analysis of Pareto-optimal designs allows us to automatically identify existing toughness enhancement mechanisms, which were previously found through trial and error or biomimicry. On a broader scale, our method provides a blueprint for computational design in various research areas beyond solid mechanics, such as polymer chemistry, fluid dynamics, meteorology, and robotics.
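The Pareto-optimality criterion at the heart of this search can be stated as a short non-dominated filter over measured (stiffness, toughness) pairs (a generic helper, not the paper's nested-loop pipeline; the example values are made up):

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated points when both objectives are maximized.

    points: array of shape (n, 2) with columns (stiffness, toughness).
    """
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # A point is dominated if some other point is at least as good in both
        # objectives and strictly better in at least one.
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Example: four hypothetical composites; the third is dominated by the first.
designs = np.array([[3.0, 1.2], [1.5, 2.5], [2.5, 1.0], [2.0, 2.0]])
print(pareto_front(designs))  # [ True  True False  True]
```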

9.
Nat Commun; 15(1): 868, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38286796

ABSTRACT

Human-machine interfaces for capturing, conveying, and sharing tactile information across time and space hold immense potential for healthcare, augmented and virtual reality, human-robot collaboration, and skill development. To realize this potential, such interfaces should be wearable, unobtrusive, and scalable regarding both resolution and body coverage. Taking a step towards this vision, we present a textile-based wearable human-machine interface with integrated tactile sensors and vibrotactile haptic actuators that are digitally designed and rapidly fabricated. We leverage a digital embroidery machine to seamlessly embed piezoresistive force sensors and arrays of vibrotactile actuators into textiles in a customizable, scalable, and modular manner. We use this process to create gloves that can record, reproduce, and transfer tactile interactions. User studies investigate how people perceive the sensations reproduced by our gloves with integrated vibrotactile haptic actuators. To improve the effectiveness of tactile interaction transfer, we develop a machine-learning pipeline that adaptively models how each individual user reacts to haptic sensations and then optimizes haptic feedback parameters. Our interface showcases adaptive tactile interaction transfer through the implementation of three end-to-end systems: alleviating tactile occlusion, guiding people to perform physical skills, and enabling responsive robot teleoperation.


Subjects
Touch Perception; User-Computer Interface; Humans; Touch; Textiles; Feedback
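A toy version of the per-user adaptation step mentioned above: fit a simple response model of reported sensation versus commanded actuator amplitude, then invert it to hit a target sensation (the linear-response assumption and all numbers are illustrative, not the paper's learned model):

```python
import numpy as np

def fit_user_response(amplitudes, reported):
    """Least-squares fit: reported_intensity ~= a * amplitude + b."""
    a, b = np.polyfit(amplitudes, reported, deg=1)
    return a, b

def amplitude_for_target(target, a, b, lo=0.0, hi=1.0):
    """Invert the fitted model and clamp to the actuator's valid range."""
    return float(np.clip((target - b) / a, lo, hi))

# Example calibration data for one user (commanded amplitude vs. 0-10 rating).
amps = np.array([0.2, 0.4, 0.6, 0.8])
ratings = np.array([1.5, 3.8, 6.1, 8.0])
a, b = fit_user_response(amps, ratings)
print(amplitude_for_target(5.0, a, b))  # amplitude expected to feel like "5"
```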
10.
Sci Data; 11(1): 343, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38580698

ABSTRACT

The sports industry is witnessing an increasing trend of utilizing multiple synchronized sensors for player data collection, enabling personalized training systems with multi-perspective real-time feedback. Badminton could benefit from these various sensors, but there is a scarcity of comprehensive badminton action datasets for analysis and training feedback. Addressing this gap, this paper introduces a multi-sensor badminton dataset for forehand clear and backhand drive strokes, based on interviews with coaches for optimal usability. The dataset covers various skill levels, including beginners, intermediates, and experts, providing resources for understanding biomechanics across skill levels. It encompasses 7,763 badminton swings from 25 players, featuring sensor data on eye tracking, body tracking, muscle signals, and foot pressure. The dataset also includes video recordings, detailed annotations on stroke type, skill level, sound, ball landing, and hitting location, as well as survey and interview data. We validated our dataset by applying a proof-of-concept machine learning model to all annotation data, demonstrating its comprehensive applicability in advanced badminton training and research.


Subjects
Athletic Performance; Racquet Sports; Wearable Electronic Devices; Biomechanical Phenomena; Lower Extremity; Racquet Sports/physiology; Humans
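A sketch of the kind of proof-of-concept model the abstract refers to, using stand-in per-swing feature vectors and a generic classifier (feature dimensions, class labels, and data are assumptions, not the released dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: each swing summarized into a fixed-length feature vector
# (e.g. joint-angle and foot-pressure statistics); two stroke classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))         # 500 swings x 40 aggregated features
y = rng.integers(0, 2, size=500)       # 0 = forehand clear, 1 = backhand drive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # near chance on random stand-in data
```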
11.
Article in English | MEDLINE | ID: mdl-38515455

ABSTRACT

Wearable devices like smartwatches and smart wristbands have gained substantial popularity in recent years. However, their small interfaces create inconvenience and limit computing functionality. To fill this gap, we propose ViWatch, which enables robust finger interactions under deployment variations, and relies on a single IMU sensor that is ubiquitous in COTS smartwatches. To this end, we design an unsupervised Siamese adversarial learning method. We built a real-time system on commodity smartwatches and tested it with over one hundred volunteers. Results show that the system accuracy is about 97% over a week. In addition, it is resistant to deployment variations such as different hand shapes, finger activity strengths, and smartwatch positions on the wrist. We also developed a number of mobile applications using our interactive system and conducted a user study where all participants preferred our unsupervised approach to supervised calibration. A demonstration of ViWatch is shown at https://youtu.be/N5-ggvy2qfI.
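The Siamese learning ingredient named above can be sketched as a small encoder trained with a contrastive loss over IMU window pairs (window length, channel count, and margin are assumptions, not ViWatch's actual model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IMUEncoder(nn.Module):
    """Embed a 6-channel IMU window (accelerometer + gyroscope) into a 32-d vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(6, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(32, 32),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same, margin=0.5):
    """Pull embeddings of same-gesture pairs together, push different pairs apart."""
    d = (z1 - z2).norm(dim=-1)
    return (same * d**2 + (1 - same) * F.relu(margin - d)**2).mean()

# Example: a batch of 16 window pairs, 6 channels x 100 samples each.
enc = IMUEncoder()
x1, x2 = torch.rand(16, 6, 100), torch.rand(16, 6, 100)
same = torch.randint(0, 2, (16,)).float()
loss = contrastive_loss(enc(x1), enc(x2), same)
loss.backward()
```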

12.
Light Sci Appl; 11(1): 247, 2022 Aug 03.
Article in English | MEDLINE | ID: mdl-35922407

ABSTRACT

Computer-generated holography (CGH) provides volumetric control of coherent wavefront and is fundamental to applications such as volumetric 3D displays, lithography, neural photostimulation, and optical/acoustic trapping. Recently, deep learning-based methods emerged as promising computational paradigms for CGH synthesis that overcome the quality-runtime tradeoff in conventional simulation/optimization-based methods. Yet, the quality of the predicted hologram is intrinsically bounded by the dataset's quality. Here we introduce a new hologram dataset, MIT-CGH-4K-V2, that uses a layered depth image as a data-efficient volumetric 3D input and a two-stage supervised+unsupervised training protocol for direct synthesis of high-quality 3D phase-only holograms. The proposed system also corrects vision aberration, allowing customization for end-users. We experimentally show photorealistic 3D holographic projections and discuss relevant spatial light modulator calibration procedures. Our method runs in real-time on a consumer GPU and 5 FPS on an iPhone 13 Pro, promising drastically enhanced performance for the applications above.
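For context on phase-only output, the classic double-phase encoding turns a complex field into a phase-only pattern by interleaving two phase solutions on a checkerboard; this is a standard baseline sketch, not the paper's method:

```python
import numpy as np

def double_phase_encode(field):
    """Encode a complex field as a phase-only pattern via double-phase (checkerboard) coding."""
    amp = np.abs(field)
    amp = amp / (amp.max() + 1e-12)       # normalize so arccos is well defined
    phs = np.angle(field)
    offset = np.arccos(amp)
    theta_a = phs + offset
    theta_b = phs - offset
    # Interleave the two phase solutions on a checkerboard grid.
    yy, xx = np.indices(field.shape)
    checker = (xx + yy) % 2 == 0
    return np.where(checker, theta_a, theta_b)

phase_only = double_phase_encode(np.random.randn(256, 256) + 1j * np.random.randn(256, 256))
```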

13.
Adv Sci (Weinh); 9(23): e2101864, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35678650

ABSTRACT

Polymers are widely studied materials with diverse properties and applications determined by molecular structures. It is essential to represent these structures clearly and explore the full space of achievable chemical designs. However, existing approaches cannot offer comprehensive design models for polymers because of their inherent scale and structural complexity. Here, a parametric, context-sensitive grammar designed specifically for polymers (PolyGrammar) is proposed. Using the symbolic hypergraph representation and 14 simple production rules, PolyGrammar can represent and generate all valid polyurethane structures. An algorithm is presented to translate any polyurethane structure from the popular Simplified Molecular-Input Line-entry System (SMILES) string format into the PolyGrammar representation. The representative power of PolyGrammar is tested by translating a dataset of over 600 polyurethane samples collected from the literature. Furthermore, it is shown that PolyGrammar can be easily extended to other copolymers and homopolymers. By offering a complete, explicit representation scheme and an explainable generative model with validity guarantees, PolyGrammar takes an essential step toward a more comprehensive and practical system for polymer discovery and exploration. As the first bridge between formal languages and chemistry, PolyGrammar also serves as a critical blueprint to inform the design of similar grammars for other chemistries, including organic and inorganic molecules.
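The production-rule idea can be illustrated with a toy string-rewriting grammar that grows a schematic chain of alternating units (hypothetical rules and symbols for illustration only; PolyGrammar itself is a context-sensitive hypergraph grammar with 14 rules):

```python
import random

# Hypothetical production rules: the nonterminal CHAIN is rewritten until only
# terminal units remain.  PolyGrammar itself operates on hypergraphs, not strings.
RULES = {
    "CHAIN": [
        ["ISO", "POLYOL", "CHAIN"],   # extend the chain by one repeat unit
        ["ISO", "POLYOL"],            # terminate the chain
    ],
}

def derive(symbol, rng):
    """Expand a symbol by recursively applying randomly chosen production rules."""
    if symbol not in RULES:
        return [symbol]               # terminal unit, emit as-is
    out = []
    for s in rng.choice(RULES[symbol]):
        out.extend(derive(s, rng))
    return out

rng = random.Random(0)
print("-".join(derive("CHAIN", rng)))   # e.g. ISO-POLYOL-ISO-POLYOL
```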

14.
Sci Adv; 7(42): eabf7435, 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-34652949

ABSTRACT

Additive manufacturing has become one of the forefront technologies in fabrication, enabling products impossible to manufacture before. Although many materials exist for additive manufacturing, most suffer from performance trade-offs. Current materials are designed with inefficient human-driven intuition-based methods, leaving them short of optimal solutions. We propose a machine learning approach to accelerating the discovery of additive manufacturing materials with optimal trade-offs in mechanical performance. A multiobjective optimization algorithm automatically guides the experimental design by proposing how to mix primary formulations to create better performing materials. The algorithm is coupled with a semiautonomous fabrication platform to substantially reduce the number of performed experiments and overall time to solution. Without prior knowledge of the primary formulations, the proposed methodology autonomously uncovers 12 optimal formulations and enlarges the discovered performance space 288 times after only 30 experimental iterations. This methodology could be easily generalized to other material design systems and enable automated discovery.
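A stripped-down sketch of the propose-evaluate-archive loop described above, with a stand-in evaluate() function in place of the fabrication platform and random proposals in place of the actual multiobjective optimizer (only the Pareto bookkeeping is meant literally):

```python
import numpy as np

def evaluate(mix):
    """Stand-in for printing and testing a formulation: returns two objectives.
    In the real system this is a physical experiment, not a formula."""
    return np.array([mix @ [1.0, 0.2, 0.5], mix @ [0.3, 1.0, 0.4]]) + 0.01 * np.random.randn(2)

def dominates(a, b):
    return np.all(a >= b) and np.any(a > b)

archive = []  # list of (mixing_ratios, objectives) on the current Pareto front
rng = np.random.default_rng(0)
for _ in range(30):                          # 30 experimental iterations, as in the abstract
    mix = rng.dirichlet(np.ones(3))          # propose ratios of 3 primary formulations
    obj = evaluate(mix)
    if not any(dominates(o, obj) for _, o in archive):
        archive = [(m, o) for m, o in archive if not dominates(obj, o)]
        archive.append((mix, obj))

print(len(archive), "non-dominated formulations found")
```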

15.
ACS Nano; 14(8): 10413-10420, 2020 Aug 25.
Article in English | MEDLINE | ID: mdl-32806046

ABSTRACT

Refractory metals and their carbides possess extraordinary chemical and temperature resilience and exceptional mechanical strength. Yet, they are notoriously difficult to employ in additive manufacturing, due to the high temperatures needed for processing. State-of-the-art approaches to manufacture these materials generally require either a high-energy laser or an electron beam, as well as ventilation to protect the metal powder from combustion. Here, we present a versatile manufacturing process that utilizes tar as both a light absorber and antioxidant binder to sinter thin films of aluminum, copper, nickel, molybdenum, and tungsten powder using a low-power (<2 W) CO2 laser in air. Films of sintered Al/Cu/Ni metals have sheet resistances of ∼10⁻¹ ohm/sq, while laser-sintered Mo/W-tar thin films form carbide phases. Several devices are demonstrated, including laser-sintered porous copper with a stable response to large strain (3.0) after 150 cycles, and a laser-processed Mo/MoC(1-x) filament that reaches T ∼1000 °C in open air at 12 V. These results show that tar-mediated laser sintering represents a possible low-energy, cost-effective route for engineering refractory materials and one that can easily be extended to additive manufacturing processes.

16.
Sci Adv; 5(7): eaaw1160, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31309144

ABSTRACT

Upcoming actuation systems will be required to perform multiple tightly coupled functions analogous to their natural counterparts; e.g., the ability to control displacements and high-resolution appearance simultaneously is necessary for mimicking the camouflage seen in cuttlefish. Creating integrated actuation systems is challenging owing to the combined complexity of generating high-dimensional designs and developing multifunctional materials and their associated fabrication processes. Here, we present a complete toolkit consisting of multiobjective topology optimization (for design synthesis) and multimaterial drop-on-demand three-dimensional printing for fabricating complex actuators (>10⁶ design dimensions). The actuators consist of soft and rigid polymers and a magnetic nanoparticle/polymer composite that responds to a magnetic field. The topology optimizer assigns materials for individual voxels (volume elements) while simultaneously optimizing for physical deflection and high-resolution appearance. Unifying a topology optimization-based design strategy with a multimaterial fabrication process enables the creation of complex actuators and provides a promising route toward automated, goal-driven fabrication.
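One standard ingredient of voxel-level material assignment is a penalized interpolation between candidate materials; the sketch below shows the common SIMP scheme as generic background, not the authors' multiobjective formulation (moduli and exponent are illustrative):

```python
import numpy as np

def simp_modulus(rho, e_soft=1.0, e_rigid=1000.0, p=3.0):
    """SIMP-style interpolation between a soft and a rigid material.

    rho : per-voxel design variable in [0, 1] (0 -> soft, 1 -> rigid)
    p   : penalization exponent pushing the optimizer toward 0/1 designs
    """
    return e_soft + (rho ** p) * (e_rigid - e_soft)

# Example: a 16x16x16 voxel design field and its interpolated stiffness.
rho = np.random.rand(16, 16, 16)
E = simp_modulus(rho)
print(E.min(), E.max())
```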

17.
Sci Adv; 4(1): eaao7005, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29376124

ABSTRACT

Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties.
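The clustering step can be illustrated with a toy example that groups microstructures by (randomly generated) homogenized property vectors using k-means (the real system clusters by topology and then extracts parameterized templates; properties and counts here are stand-ins):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in homogenized properties for 2,000 sampled microstructures:
# (Young's modulus, Poisson's ratio, volume fraction), all normalized.
rng = np.random.default_rng(0)
props = rng.random((2000, 3))

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(props)
for k in range(5):
    members = props[labels == k]
    print(f"family {k}: {len(members)} structures, mean properties {members.mean(axis=0).round(2)}")
```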

18.
IEEE Comput Graph Appl; 37(3): 34-42, 2017.
Article in English | MEDLINE | ID: mdl-28459670

ABSTRACT

The FabSquare system is a personal fabrication method that lets users fabricate objects by molding photopolymers inside a 3D printed mold. The molds are printed with UV-transparent materials that allow for UV curing (the polymerization and solidification of the fluid content). The molds can be repeatedly reused to fabricate identical objects or create new objects with identical geometry, but different components. Because the necessary equipment is easily obtainable and affordable, the FabSquare approach is suitable for ordinary users in nonspecialized labs, allowing them to rapidly fabricate a range of objects.
https://extras.computer.org/extra/mcg2017030034s1.mp4
https://extras.computer.org/extra/mcg2017030034s2.pdf

19.
ACS Appl Mater Interfaces; 9(37): 32290-32298, 2017 Sep 20.
Article in English | MEDLINE | ID: mdl-28825288

ABSTRACT

Self-transforming structures are gaining prominence due to their general ability to adopt programmed shapes, each tailored for specific functions. Composites that self-fold have so far relied on stimuli-responsive mechanisms focused on reversible shape change. Integrating additional functions within these composites can rapidly enhance their practical applicability; however, this remains a challenging problem. Here, we demonstrate a method for spontaneous folding of three-dimensional (3D)-printed composites with embedded electronics at room temperature. The composite is printed using a multimaterial 3D-printing process with no external processing steps. Upon peeling from the print platform, the composite shapes itself using the residual forces resulting from polymer swelling during the layer-by-layer fabrication process. As a specific example, electrochromic elements are printed within the composite and can be electrically controlled through its folded legs. Our shape-transformation scheme provides a route to transform planar electronics into nonplanar geometries containing overhangs. Integrating electronics within complex 3D shapes can enable new applications in sensing and robotics.

20.
IEEE Comput Graph Appl; 33(6): 48-57, 2013.
Article in English | MEDLINE | ID: mdl-24808130

ABSTRACT

A new method fabricates custom surface reflectance and spatially varying bidirectional reflectance distribution functions (svBRDFs). Researchers optimize a microgeometry for a range of normal distribution functions and simulate the resulting surface's effective reflectance. Using the simulation's results, they reproduce an input svBRDF's appearance by distributing the microgeometry on the printed material's surface. This method lets people print svBRDFs on planar samples with current 3D printing technology, even with a limited set of printing materials. It extends naturally to printing svBRDFs on arbitrary shapes.
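The normal distribution functions mentioned above can be made concrete with a standard microfacet NDF; below is the common GGX (Trowbridge-Reitz) form, included as generic background rather than the paper's printing-specific optimization:

```python
import numpy as np

def ggx_ndf(cos_theta_h, alpha):
    """GGX (Trowbridge-Reitz) normal distribution function.

    cos_theta_h : cosine of the angle between the half-vector and the surface normal
    alpha       : roughness parameter (smaller = shinier)
    """
    c2 = np.clip(cos_theta_h, 0.0, 1.0) ** 2
    denom = c2 * (alpha**2 - 1.0) + 1.0
    return alpha**2 / (np.pi * denom**2)

# A rough and a glossy lobe evaluated over a range of half-vector angles.
theta = np.linspace(0, np.pi / 2, 5)
print(ggx_ndf(np.cos(theta), alpha=0.5))
print(ggx_ndf(np.cos(theta), alpha=0.1))
```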
