ABSTRACT
Additive manufacturing is an expanding multidisciplinary field encompassing applications including medical devices1, aerospace components2, microfabrication strategies3,4 and artificial organs5. Among additive manufacturing approaches, light-based printing technologies, including two-photon polymerization6, projection micro stereolithography7,8 and volumetric printing9-14, have garnered significant attention due to their speed, resolution or potential applications for biofabrication. Here we introduce dynamic interface printing, a new 3D printing approach that leverages an acoustically modulated, constrained air-liquid boundary to rapidly generate centimetre-scale 3D structures within tens of seconds. Unlike volumetric approaches, this process eliminates the need for intricate feedback systems, specialized chemistry or complex optics while maintaining rapid printing speeds. We demonstrate the versatility of this technique across a broad array of materials and intricate geometries, including those that would be impossible to print with conventional layer-by-layer methods. In doing so, we demonstrate the rapid fabrication of complex structures in situ, overprinting, structural parallelization and biofabrication utility. Moreover, we show that the formation of surface waves at the air-liquid boundary enables enhanced mass transport, improves material flexibility and permits 3D particle patterning. We therefore anticipate that this approach will be invaluable for applications where high resolution, scalable throughput and biocompatible printing are required.
Subject(s)
Three-Dimensional Printing, Air, Acoustics, Bioprinting/methods, Tissue Engineering/methods, Time Factors
ABSTRACT
Fabrics, by virtue of their composition and structure, have traditionally been used as acoustic absorbers1,2. Here, inspired by the auditory system3, we introduce a fabric that operates as a sensitive audible microphone while retaining the traditional qualities of fabrics, such as machine washability and draping. The fabric medium is composed of high-Young's modulus textile yarns in the weft of a cotton warp, converting tenuous 10⁻⁷-atmosphere pressure waves at audible frequencies into lower-order mechanical vibration modes. Woven into the fabric is a thermally drawn composite piezoelectric fibre that conforms to the fabric and converts the mechanical vibrations into electrical signals. Key to the fibre sensitivity is an elastomeric cladding that concentrates the mechanical stress in a piezocomposite layer with a high piezoelectric charge coefficient of approximately 46 picocoulombs per newton, a result of the thermal drawing process. Concurrent measurements of electric output and spatial vibration patterns in response to audible acoustic excitation reveal that fabric vibrational modes with nanometre amplitude displacement are the source of the electrical output of the fibre. With the fibre subsuming less than 0.1% of the fabric by volume, a single fibre draw enables tens of square metres of fabric microphone. Three different applications exemplify the usefulness of this study: a woven shirt with dual acoustic fibres measures the precise direction of an acoustic impulse, bidirectional communications are established between two fabrics working as sound emitters and receivers, and a shirt auscultates cardiac sound signals.
Subject(s)
Textiles, Vibration, Wearable Electronic Devices, Acoustics, Dietary Fiber, Heart Auscultation
ABSTRACT
The loss of elastic stability (buckling) can lead to catastrophic failure in the context of traditional engineering structures. Conversely, in nature, buckling often serves a desirable function, such as in the prey-trapping mechanism of the Venus fly trap (Dionaea muscipula). This paper investigates the buckling-enabled sound production in the wingbeat-powered (aeroelastic) tymbals of Yponomeuta moths. The hindwings of Yponomeuta possess a striated band of ridges that snap through sequentially during the up- and downstroke of the wingbeat cycle-a process reminiscent of cellular buckling in compressed slender shells. As a result, bursts of ultrasonic clicks are produced that deter predators (i.e. bats). Using various biological and mechanical characterization techniques, we show that wing camber changes during the wingbeat cycle act as the single actuation mechanism that causes buckling to propagate sequentially through each stria on the tymbal. The snap-through of each stria excites a bald patch of the wing's membrane, thereby amplifying sound pressure levels and radiating sound at the resonant frequencies of the patch. In addition, the interaction of phased tymbal clicks from the two wings enhances the directivity of the acoustic signal strength, suggesting an improvement in acoustic protection. These findings unveil the acousto-mechanics of Yponomeuta tymbals and uncover their buckling-driven evolutionary origin. We anticipate that through bioinspiration, aeroelastic tymbals will encourage novel developments in the context of multi-stable morphing structures, acoustic structural monitoring, and soft robotics.
Subject(s)
Moths, Sound, Animals, Ultrasonics, Acoustics
ABSTRACT
The development of individuality during learned behavior is a common trait observed across animal species; however, the underlying biological mechanisms remain poorly understood. Similar to human speech, songbirds develop individually unique songs with species-specific traits through vocal learning. In this study, we investigate the developmental and molecular mechanisms underlying individuality in vocal learning by utilizing F1 hybrid songbirds (Taeniopygia guttata crossed with Taeniopygia bichenovii), taking an integrative approach combining experimentally controlled systematic song tutoring, unbiased discriminant analysis of song features, and single-cell transcriptomics. When tutored with songs from both parental species, F1 hybrid individuals exhibit evident diversity in their acquired songs. Approximately 30% of F1 hybrids selectively learn the song of one of the two parental species, while others develop merged songs that combine traits from both species. Vocal acoustic biases during vocal babbling initially appear as individual differences in songs among F1 juveniles and are maintained through the sensitive period of song vocal learning. These vocal acoustic biases emerge independently of the initial auditory experience of hearing the biological father's song and passively tutored songs. We identify individual differences in transcriptional signatures in a subset of cell types, including the glutamatergic neurons projecting from the cortical vocal output nucleus to the hypoglossal nuclei, which are associated with variations in vocal acoustic features. These findings suggest that a genetically predisposed vocal motor bias serves as the initial origin of individual variation in vocal learning, influencing learning constraints and preferences.
Subject(s)
Individuality, Songbirds, Animals, Humans, Genetic Predisposition to Disease, Speech, Acoustics, Bias
ABSTRACT
Motion is the basis of nearly all animal behavior. Evolution has led to some extraordinary specializations of propulsion mechanisms among invertebrates, including the mandibles of the dracula ant and the claw of the pistol shrimp. In contrast, vertebrate skeletal movement is considered to be limited by the speed of muscle, saturating around 250 Hz. Here, we describe the unique propulsion mechanism by which Danionella cerebrum, a miniature cyprinid fish only 12 mm in length, produces high-amplitude sounds exceeding 140 dB (re. 1 µPa, at a distance of one body length). Using a combination of high-speed video, micro-computed tomography (micro-CT), RNA profiling, and finite difference simulations, we found that D. cerebrum employ a unique sound production mechanism that involves a drumming cartilage, a specialized rib, and a dedicated muscle adapted for low fatigue. This apparatus accelerates the drumming cartilage at over 2,000 g, shooting it at the swim bladder to generate a rapid, loud pulse. These pulses are chained together to make calls with either bilaterally alternating or unilateral muscle contractions. D. cerebrum use this remarkable mechanism for acoustic communication with conspecifics.
Subject(s)
Animal Communication, Cyprinidae, Animals, X-Ray Microtomography, Sound, Acoustics, Cyprinidae/genetics
ABSTRACT
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
Subject(s)
Music, Humans, Music/psychology, Sensation, Cross-Cultural Comparison, Acoustics, Emotions, Auditory Perception
ABSTRACT
Changes in behaviour resulting from environmental influences, development and learning1-5 are commonly quantified on the basis of a few hand-picked features2-4,6,7 (for example, the average pitch of acoustic vocalizations3), assuming discrete classes of behaviours (such as distinct vocal syllables)2,3,8-10. However, such methods generalize poorly across different behaviours and model systems and may miss important components of change. Here we present a more-general account of behavioural change that is based on nearest-neighbour statistics11-13, and apply it to song development in a songbird, the zebra finch3. First, we introduce the concept of 'repertoire dating', whereby each rendition of a behaviour (for example, each vocalization) is assigned a repertoire time, reflecting when similar renditions were typical in the behavioural repertoire. Repertoire time isolates the components of vocal variability that are congruent with long-term changes due to vocal learning and development, and stratifies the behavioural repertoire into 'regressions', 'anticipations' and 'typical renditions'. Second, we obtain a holistic, yet low-dimensional, description of vocal change in terms of a stratified 'behavioural trajectory', revealing numerous previously unrecognized components of behavioural change on fast and slow timescales, as well as distinct patterns of overnight consolidation1,2,4,14,15 across the behavioural repertoire. We find that diurnal changes in regressions undergo only weak consolidation, whereas anticipations and typical renditions consolidate fully. Because of its generality, our nonparametric description of how behaviour evolves relative to itself, rather than to a potentially arbitrary, experimenter-defined goal2,3,14,16, appears well suited for comparing learning and change across behaviours and species17,18, as well as biological and artificial systems5.
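The nearest-neighbour idea behind 'repertoire dating' lends itself to a compact illustration. The sketch below is not the authors' implementation: the single pitch-like feature, the neighbour count k, and the linear drift model are assumptions made for the example. Each rendition receives the median production time of its k nearest neighbours in feature space, so renditions typical of an earlier repertoire read as 'regressions' and those typical of a later one as 'anticipations'.

```python
import numpy as np

def repertoire_time(features, times, k=5):
    """Assign each rendition the median production time of its k nearest
    neighbours in feature space (a toy version of 'repertoire dating')."""
    features = np.asarray(features, dtype=float)
    times = np.asarray(times, dtype=float)
    # Pairwise squared Euclidean distances between renditions.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbours per row
    return np.median(times[nn], axis=1)

# Toy data: one acoustic feature drifting slowly over 100 days of practice.
rng = np.random.default_rng(0)
days = np.arange(100.0)
feature = 0.1 * days + rng.normal(0.0, 0.05, size=100)
rt = repertoire_time(feature[:, None], days)
# rt - days > 0 flags 'anticipations'; rt - days < 0 flags 'regressions'.
```

Because the feature drifts monotonically here, repertoire time tracks production time closely; real song data would need a richer feature space and neighbour statistics robust to uneven sampling.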
Subject(s)
Finches/physiology, Learning/physiology, Neurological Models, Psychomotor Performance/physiology, Animal Vocalization/physiology, Acoustics, Animals, Computer Simulation, Statistical Data Interpretation, Male, Time Factors
ABSTRACT
Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.
Subject(s)
Cultural Evolution, Music, Humans, Language, Linguistics, Acoustics
ABSTRACT
Motile cilia are subcellular organelles that propel microorganisms or direct fluid and particulate flow. Thus, cilia are critical to cell survival and human health. The unicellular alga Chlamydomonas reinhardtii is widely used to investigate the mechanisms underlying ciliary beating and coordination. However, freely swimming cells are difficult to image with sufficient resolution to capture cilia motion, necessitating that the cell body be held during experiments. Acoustic confinement is a compelling alternative to use of a micropipette, or to magnetic, electrical, and optical trapping that may modify the cells and affect their behavior. Here we report a label-free acoustic microfluidic method to confine single, cilia-driven swimming cells in space without limiting their rotational degrees of freedom. Our platform integrates a surface acoustic wave (SAW) actuator and bulk acoustic wave (BAW) trapping array to enable multiplexed analysis with high spatial resolution and trapping forces that are strong enough to hold individual microswimmers. The hybrid BAW/SAW acoustic tweezers employ high-efficiency mode conversion to achieve submicron image resolution while compensating for parasitic system losses to immersion oil in contact with the microfluidic chip. We use the platform to quantify cilia and cell body motion for wildtype biciliate cells, investigating effects of environmental variables like temperature and viscosity on ciliary beating, synchronization, and three-dimensional helical swimming. We confirm and expand upon the existing understanding of these phenomena, for example determining that increasing viscosity promotes asynchronous beating. Beyond establishing our approach to studying microswimmers, we demonstrate a unique ability to mechanically perturb cells via rapid acoustic positioning.
Subject(s)
Acoustics, Swimming, Humans, Sound, Cilia, Cell Body
ABSTRACT
How humans and animals segregate sensory information into discrete, behaviorally meaningful categories is one of the hallmark questions in neuroscience. Much of the research around this topic in the auditory system has centered around human speech perception, in which categorical processes result in an enhanced sensitivity for acoustically meaningful differences and a reduced sensitivity for nonmeaningful distinctions. Much less is known about whether nonhuman primates process their species-specific vocalizations in a similar manner. We address this question in the common marmoset, a small arboreal New World primate with a rich vocal repertoire produced across a range of behavioral contexts. We first show that marmosets perceptually categorize their vocalizations in ways that correspond to previously defined call types for this species. Next, we show that marmosets are differentially sensitive to changes in particular acoustic features of their most common call types and that these sensitivity differences are matched to the population statistics of their vocalizations in ways that likely maximize category formation. Finally, we show that marmosets are less sensitive to changes in these acoustic features when within the natural range of variability of their calls, which possibly reflects perceptual specializations which maintain existing call categories. These findings suggest specializations for categorical vocal perception in a New World primate species and pave the way for future studies examining their underlying neural mechanisms.
Subject(s)
Callithrix, Speech Perception, Animals, Humans, Animal Vocalization, Acoustics, Species Specificity
ABSTRACT
The emergence of complex social interactions is predicted to be an important selective force in the diversification of communication systems. Parental care presents a key social context in which to study the evolution of novel signals, as care often requires communication and behavioral coordination between parents and is an evolutionary stepping-stone toward increasingly complex social systems. Anuran amphibians (frogs and toads) are a classic model of acoustic communication and the vocal repertoires of many species have been characterized in the contexts of advertisement, courtship, and aggression, yet quantitative descriptions of calls elicited in the context of parental care are lacking. The biparental poison frog, Ranitomeya imitator, exhibits a remarkable parenting behavior in which females, cued by the calls of their male partners, feed tadpoles unfertilized eggs. Here, we characterized and compared calls across three social contexts, for the first time including a parental care context. We found that egg-feeding calls share some properties with both advertisement and courtship calls but also had unique properties. Multivariate analysis revealed high classification success for advertisement and courtship calls but misclassified nearly half of egg-feeding calls as either advertisement or courtship calls. Egg-feeding and courtship calls both contained less identity information than advertisement calls, as expected for signals used in close-range communication where uncertainty about identity is low and additional signal modalities may be used. Taken together, egg-feeding calls likely borrowed and recombined elements of both ancestral call types to solicit a novel, context-dependent parenting response.
Subject(s)
Anura, Animal Vocalization, Animals, Female, Male, Animal Vocalization/physiology, Anura/physiology, Acoustics, Multivariate Analysis, Cooperative Behavior
ABSTRACT
Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex, in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.
Subject(s)
Speech Intelligibility, Speech Perception, Speech Intelligibility/physiology, Acoustic Stimulation/methods, Speech/physiology, Noise, Acoustics, Magnetoencephalography/methods, Speech Perception/physiology
ABSTRACT
Male crickets attract females by producing calls with their forewings. Louder calls travel further and are more effective at attracting mates. However, crickets are much smaller than the wavelength of their call, and this limits their power output. A small group called tree crickets make acoustic tools called baffles, which reduce acoustic short-circuiting, a source of dipole inefficiency. Here, we ask why baffling is uncommon among crickets. We hypothesize that baffling may be rare because, like other tools, baffles offer insufficient advantage for most species. To test this, we modelled the calling efficiencies of crickets within the full space of possible natural wing sizes and call frequencies, in multiple acoustic environments. We then generated efficiency landscapes, within which we plotted 112 cricket species across 7 phylogenetic clades. We found that all sampled crickets, in all conditions, could gain efficiency from tool use. Surprisingly, we also found that calling from the ground significantly increased efficiency, with or without a baffle, by as much as an order of magnitude. We found that the ground provides some reduction of acoustic short-circuiting but also halves the air volume within which sound is radiated. It simultaneously reflects sound upwards, allowing recapture of a significant amount of acoustic energy through constructive interference. Thus, using the ground as a reflective baffle is an effective strategy for increasing calling efficiency. Indeed, theory suggests that this increase in efficiency is accessible not just to crickets but to all acoustically communicating animals, whether they are dipole or monopole sound sources.
Subject(s)
Cricket Sport, Gryllidae, Animals, Female, Phylogeny, Acoustics, Sound, Animal Wings, Animal Vocalization, Acoustic Stimulation
ABSTRACT
The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex, while gerbils either performed a two-alternative forced-choice auditory discrimination task or while they passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transition of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
Subject(s)
Auditory Cortex, Parietal Lobe, Animals, Parietal Lobe/physiology, Auditory Perception/physiology, Auditory Cortex/physiology, Acoustic Stimulation, Acoustics, Gerbillinae
ABSTRACT
Humans can recognize differences in sound intensity of up to 6 orders of magnitude. However, it is not clear how this is achieved and what enables our auditory systems to encode such a gradient. Özçete & Moser (2021) report in this issue that the key to this lies in the synaptic heterogeneity within individual sensory cells in the inner ear.
Subject(s)
Acoustics, Humans
ABSTRACT
Measuring cellular and tissue mechanics inside intact living organisms is essential for interrogating the roles of force in physiological and disease processes. Current agents for studying the mechanobiology of intact, living organisms are limited by poor light penetration and material stability. Magnetomotive ultrasound is an emerging modality for real-time in vivo imaging of tissue mechanics. Nonetheless, it has poor sensitivity and spatiotemporal resolution. Here we describe magneto-gas vesicles (MGVs), protein nanostructures based on gas vesicles and magnetic nanoparticles that produce differential ultrasound signals in response to varying mechanical properties of surrounding tissues. These hybrid nanomaterials significantly improve signal strength and detection sensitivity. Furthermore, MGVs enable non-invasive, long-term and quantitative measurements of mechanical properties within three-dimensional tissues and in vivo fibrosis models. Using MGVs as novel contrast agents, we demonstrate their potential for non-invasive imaging of tissue elasticity, offering insights into mechanobiology and its application to disease diagnosis and treatment.
Subject(s)
Nanoparticles, Nanostructures, Diagnostic Imaging/methods, Proteins/chemistry, Acoustics, Nanoparticles/chemistry
ABSTRACT
Where's Whaledo is a software toolkit that uses a combination of automated processes and user interfaces to greatly accelerate the process of reconstructing animal tracks from arrays of passive acoustic recording devices. Passive acoustic localization is a non-invasive yet powerful way to contribute to species conservation. By tracking animals through their acoustic signals, important information on diving patterns, movement behavior, habitat use, and feeding dynamics can be obtained. This method is useful for helping to understand habitat use, observe behavioral responses to noise, and develop potential mitigation strategies. Animal tracking using passive acoustic localization requires an acoustic array to detect signals of interest, associate detections on various receivers, and estimate the most likely source location by using the time difference of arrival (TDOA) of sounds on multiple receivers. Where's Whaledo combines data from two small-aperture volumetric arrays and a variable number of individual receivers. In a case study conducted in the Tanner Basin off Southern California, we demonstrate the effectiveness of Where's Whaledo in localizing groups of Ziphius cavirostris. We reconstruct the tracks of six individual animals vocalizing concurrently and identify Ziphius cavirostris tracks even when they are obscured by a large pod of vocalizing dolphins.
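The TDOA localization step described above admits a minimal sketch. This is not the Where's Whaledo implementation: the receiver geometry, the nominal 1500 m/s sound speed, and the brute-force least-squares grid search are assumptions made for the example.

```python
import numpy as np

C = 1500.0  # assumed nominal speed of sound in seawater, m/s

def tdoa_localize(receivers, tdoas, grid):
    """Grid-search source localization from time differences of arrival.

    receivers : (M, 3) receiver positions in metres
    tdoas     : (M-1,) arrival-time differences relative to receiver 0, s
    grid      : (N, 3) candidate source positions
    Returns the grid point whose predicted TDOAs best match (least squares).
    """
    d = np.linalg.norm(grid[:, None, :] - receivers[None, :, :], axis=2)
    pred = (d[:, 1:] - d[:, :1]) / C      # predicted TDOAs vs receiver 0
    err = ((pred - tdoas) ** 2).sum(axis=1)
    return grid[np.argmin(err)]

# Toy example: four receivers, a known source, and noise-free TDOAs.
rx = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100.0]])
src = np.array([40.0, 60.0, 20.0])
d = np.linalg.norm(src - rx, axis=1)
tdoas = (d[1:] - d[0]) / C
grid = np.stack(np.meshgrid(np.arange(0, 101, 5.0),
                            np.arange(0, 101, 5.0),
                            np.arange(0, 101, 5.0)), -1).reshape(-1, 3)
est = tdoa_localize(rx, tdoas, grid)
```

A real array adds receiver-position and clock uncertainties, so the shape of the least-squares error surface, rather than a single argmin, is what carries the localization confidence.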
Subject(s)
Software, Animal Vocalization, Animals, Animal Vocalization/physiology, Computational Biology/methods, Dolphins/physiology, Acoustics
ABSTRACT
Science-fiction movies portray volumetric systems that provide not only visual but also tactile and audible three-dimensional (3D) content. Displays based on swept-volume surfaces1,2, holography3, optophoretics4, plasmonics5 or lenticular lenslets6 can create 3D visual content without the need for glasses or additional instrumentation. However, they are slow, have limited persistence-of-vision capabilities and, most importantly, rely on operating principles that cannot produce tactile and auditive content as well. Here we present the multimodal acoustic trap display (MATD): a levitating volumetric display that can simultaneously deliver visual, auditory and tactile content, using acoustophoresis as the single operating principle. Our system traps a particle acoustically and illuminates it with red, green and blue light to control its colour as it quickly scans the display volume. Using time multiplexing with a secondary trap, amplitude modulation and phase minimization, the MATD delivers simultaneous auditive and tactile content. The system demonstrates particle speeds of up to 8.75 metres per second and 3.75 metres per second in the vertical and horizontal directions, respectively, offering particle manipulation capabilities superior to those of other optical or acoustic approaches demonstrated until now. In addition, our technique offers opportunities for non-contact, high-speed manipulation of matter, with applications in computational fabrication7 and biomedicine8.
Subject(s)
Auditory Perception, Touch, Visual Perception, Acoustic Stimulation, Acoustics, Humans
ABSTRACT
Various physical tweezers for manipulating liquid droplets based on optical, electrical, magnetic, acoustic, or other external fields have emerged and revolutionized research and application in medical, biological, and environmental fields. Despite notable progress, the existing modalities for droplet control and manipulation are still limited by the need for responsive additives and by relatively poor controllability of droplet motion behaviors, such as distance, velocity, and direction. Herein, we report a versatile droplet electrostatic tweezer (DEST) for remotely and programmatically trapping or guiding liquid droplets under diverse conditions, such as in open and closed spaces, on flat and tilted surfaces, and in an oil medium. DEST, leveraging the Coulomb attraction force arising from its electrostatic induction on a droplet, can manipulate droplets of various compositions, volumes, and arrays on various substrates, offering a potential platform for a series of applications, such as high-throughput surface-enhanced Raman spectroscopy detection with a single measurement taking less than 20 s.
Subject(s)
Optical Tweezers, Static Electricity, Acoustics, Magnetism, Raman Spectroscopy
ABSTRACT
Culture, a pillar of the remarkable ecological success of humans, is increasingly recognized as a powerful force structuring nonhuman animal populations. A key gap between these two types of culture is quantitative evidence of symbolic markers-seemingly arbitrary traits that function as reliable indicators of cultural group membership to conspecifics. Using acoustic data collected from 23 Pacific Ocean locations, we provide quantitative evidence that certain sperm whale acoustic signals exhibit spatial patterns consistent with a symbolic marker function. Culture segments sperm whale populations into behaviorally distinct clans, which are defined based on dialects of stereotyped click patterns (codas). We classified 23,429 codas into types using contaminated mixture models and hierarchically clustered coda repertoires into seven clans based on similarities in coda usage; then we evaluated whether coda usage varied with geographic distance within clans or with spatial overlap between clans. Similarities in within-clan usage of both "identity codas" (coda types diagnostic of clan identity) and "nonidentity codas" (coda types used by multiple clans) decrease as space between repertoire recording locations increases. However, between-clan similarity in identity, but not nonidentity, coda usage decreases as clan spatial overlap increases. This matches expectations if sympatry is related to a measurable pressure to diversify to make cultural divisions sharper, thereby providing evidence that identity codas function as symbolic markers of clan identity. Our study provides quantitative evidence of arbitrary traits, resembling human ethnic markers, conveying cultural identity outside of humans, and highlights remarkable similarities in the distributions of human ethnolinguistic groups and sperm whale clans.
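The clan-repertoire comparison described above can be illustrated with a minimal sketch. This is not the contaminated-mixture-model pipeline of the study: representing each repertoire as a coda-type histogram and comparing repertoires with a Bhattacharyya-style overlap are simplifying assumptions made for the example.

```python
import numpy as np

def usage_vector(coda_types, n_types):
    """Repertoire as a normalized histogram of coda-type usage."""
    counts = np.bincount(coda_types, minlength=n_types).astype(float)
    return counts / counts.sum()

def repertoire_similarity(p, q):
    """Bhattacharyya-style overlap between two usage distributions."""
    return float(np.sqrt(p * q).sum())

# Toy example: two 'clans' favouring different identity coda types.
rng = np.random.default_rng(1)
clan_a = rng.choice(6, size=500, p=[.4, .3, .1, .1, .05, .05])
clan_b = rng.choice(6, size=500, p=[.05, .05, .1, .1, .3, .4])
pa, pb = usage_vector(clan_a, 6), usage_vector(clan_b, 6)
within = repertoire_similarity(pa, pa)
between = repertoire_similarity(pa, pb)
```

Within-repertoire similarity is 1 by construction, and the between-clan value falls as usage of the 'identity' coda types diverges, which is the pattern the study tests against spatial overlap.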