ABSTRACT
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with stimulation: stimulating the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas stimulation of nonprimary cortex in the superior temporal gyrus had the opposite effects. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential, independent role for the nonprimary auditory cortex in speech processing.
Subject(s)
Auditory Cortex/physiology , Speech/physiology , Audiometry, Pure-Tone , Electrodes , Electronic Data Processing , Humans , Phonetics , Pitch Perception , Reaction Time/physiology , Temporal Lobe/physiology
ABSTRACT
Most deaths from cancer are explained by metastasis, and yet large-scale metastasis research has been impractical owing to the complexity of in vivo models. Here we introduce an in vivo barcoding strategy that is capable of determining the metastatic potential of human cancer cell lines in mouse xenografts at scale. We validated the robustness, scalability and reproducibility of the method and applied it to 500 cell lines [1,2] spanning 21 types of solid tumour. We created a first-generation metastasis map (MetMap) that reveals organ-specific patterns of metastasis, enabling these patterns to be associated with clinical and genomic features. We demonstrate the utility of MetMap by investigating the molecular basis of breast cancers capable of metastasizing to the brain, a principal cause of death in patients with this type of cancer. Breast cancers capable of metastasizing to the brain showed evidence of altered lipid metabolism. Perturbation of lipid metabolism in these cells curbed brain metastasis development, suggesting a therapeutic strategy to combat the disease and demonstrating the utility of MetMap as a resource to support metastasis research.
Subject(s)
Breast Neoplasms/pathology , Cell Movement , Neoplasm Metastasis/pathology , Organ Specificity , Animals , Brain Neoplasms/genetics , Brain Neoplasms/metabolism , Brain Neoplasms/pathology , Brain Neoplasms/secondary , Breast Neoplasms/genetics , Breast Neoplasms/metabolism , Cell Line, Tumor , Electronic Data Processing , Female , Heterografts , Humans , Lipid Metabolism/genetics , Mice , Molecular Typing , Mutation , Neoplasm Metastasis/genetics , Neoplasm Transplantation , Pilot Projects
ABSTRACT
MOTIVATION: The process of analyzing high-throughput sequencing data often requires the identification and extraction of specific target sequences. Such tasks include identifying cellular barcodes and UMIs in single-cell data, or extracting specific genetic variants for genotyping. However, existing tools that perform these functions are often task-specific, such as only demultiplexing barcodes for a dedicated type of experiment, or are not tolerant to noise in the sequencing data. RESULTS: To overcome these limitations, we developed Flexiplex, a versatile and fast sequence searching and demultiplexing tool for omics data, which is based on the Levenshtein distance and thus allows imperfect matches. We demonstrate Flexiplex's application on three use cases: identifying cell-line-specific sequences in Illumina short-read single-cell data, and discovering and demultiplexing cellular barcodes from noisy long-read single-cell RNA-seq data. We show that Flexiplex achieves an excellent balance of accuracy and computational efficiency compared to leading task-specific tools. AVAILABILITY AND IMPLEMENTATION: Flexiplex is available at https://davidsongroup.github.io/flexiplex/.
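To illustrate the core idea of Levenshtein-based imperfect matching, here is a minimal Python sketch of the technique (not the Flexiplex implementation; the function names and parameters are ours):

    # Locate a barcode in a noisy read by minimizing edit distance over windows.
    def levenshtein(a, b):
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def best_match(read, barcode, max_dist=2):
        """Slide the barcode along the read; report the best imperfect match."""
        best_start, best_dist = None, max_dist + 1
        for start in range(len(read) - len(barcode) + 1):
            d = levenshtein(read[start:start + len(barcode)], barcode)
            if d < best_dist:
                best_start, best_dist = start, d
        return None if best_start is None else (best_start, best_dist)

    print(best_match("TTACGGTAGCAACCT", "CGGTAGGA"))  # tolerates small errors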
Subject(s)
Search Engine , Software , Sequence Analysis, DNA , High-Throughput Nucleotide Sequencing , Electronic Data Processing
ABSTRACT
Metacognitive frameworks such as processing fluency suggest that people respond more favorably to simple, common language than to complex, technical language. Information that is simple and nontechnical is easier to process than complex information and therefore tends to elicit more engagement. In two studies covering 12 field samples (total n = 1,064,533), we establish and replicate this simpler-is-better phenomenon by demonstrating that people engage more with nontechnical language when giving their time and attention (e.g., simple online language tends to receive more social engagements). However, people respond to complex language when giving their money (e.g., complex language within charitable giving campaigns and grant abstracts tends to receive more money). This evidence suggests that people apply the heuristic of complex language differently depending on whether the target involves time or money. These results underscore the value of language as a lens into social and psychological processes and of computational methods for measuring text patterns at scale.
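As a hedged illustration of measuring language simplicity computationally (our toy example, not the authors' pipeline; the syllable counter is a crude vowel-group heuristic):

    import re

    def rough_syllables(word):
        # Approximate syllables as runs of vowels; assumption for demonstration.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """Standard Flesch formula: higher scores indicate simpler text."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n = max(1, len(words))
        syllables = sum(rough_syllables(w) for w in words)
        return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

    print(flesch_reading_ease("We help kids read. Donate today."))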
Subject(s)
Comprehension , Data Mining , Electronic Data Processing , Psychological Tests/standards , Female , Humans , Language , Male , Recognition, Psychology
ABSTRACT
BACKGROUND: Barcode information management systems (BIMS) have been implemented in operating rooms to improve the quality of medical care and administrative efficiency. Previous research has demonstrated that the Agile development model is extensively used in the development and management of information systems. However, the effect of information systems on staff acceptance has not been examined within the context of clinical medical information management systems. OBJECTIVE: This study aimed to explore the effects and acceptance of implementing a BIMS in comparison to the original information system (OIS) among operating and supply room staff. METHODS: This study used a comparative cohort design. A total of 80 staff members from the operating and supply rooms of a Northern Taiwan medical center were recruited. Data collection, conducted from January 2020 to August 2020 using a mobile-based structured questionnaire, included participant characteristics and the Information Management System Scale. SPSS (version 20.0, IBM Corp) for Windows (Microsoft Corporation) was used for data analysis. Descriptive statistics included mean, SD, frequency, and percentage. Differences between groups were analyzed using the Mann-Whitney U test and Kruskal-Wallis test, with a P value <.05 considered statistically significant. RESULTS: The results indicated that the BIMS generally achieved higher scores in key elements of system success, system quality, information quality, perceived system use, perceived ease of use, perceived usefulness, and overall quality score; none of these differences were statistically significant (P>.05), with the system quality subscale being closest to significance (P=.06). Nurses showed significantly better perceived system use than technicians (mean 1.58, SD 4.78 vs mean -1.19, SD 6.24; P=.02). Significant differences in perceived usefulness were found based on educational level (P=.04) and experience with OIS (P=.03), with junior college-educated nurses and those with over 6 years of OIS experience reporting the highest perceived usefulness. CONCLUSIONS: The study demonstrates that using the Agile development model for BIMS is advantageous for clinical environments. The high acceptance among operating room staff underscores its practicality and broader adoption potential. It advocates for continued exploration of technology-driven solutions to enhance health care delivery and optimize clinical workflows.
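A minimal sketch of the nonparametric group comparison reported above, with hypothetical scores standing in for the study's data:

    from scipy.stats import mannwhitneyu

    nurses      = [5, 3, 7, 2, 6, 4, 8, 1, 5, 6]   # hypothetical scores
    technicians = [-2, 0, -4, 3, -1, -3, 2, -5]    # hypothetical scores

    stat, p = mannwhitneyu(nurses, technicians, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")  # P < .05 would mirror the reported effect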
Subject(s)
Operating Rooms , Humans , Operating Rooms/standards , Taiwan , Adult , Female , Male , Electronic Data Processing/methods , Information Management , Surveys and Questionnaires , Cohort Studies , Middle Aged
ABSTRACT
In this paper, we present the development of a low-cost distributed computing pipeline for cotton plant phenotyping using Raspberry Pi, Hadoop, and deep learning. Specifically, we use a cluster of several Raspberry Pis in a primary-replica distributed architecture using the Apache Hadoop ecosystem and a pre-trained Tiny-YOLOv4 model for cotton bloom detection from our past work. We feed cotton image data collected from a research field in Tifton, GA, into our cluster's distributed file system for robust file access and distributed, parallel processing. We then submit job requests to our cluster from our client to process cotton image data in a distributed and parallel fashion, from pre-processing to bloom detection and spatio-temporal map creation. Additionally, we present a comparison of our four-node cluster performance with centralized, one-, two-, and three-node clusters. This work is the first to develop a distributed computing pipeline for high-throughput cotton phenotyping in field-based agriculture.
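A hedged sketch of how such a client might stage images and launch a map-only detection job on the cluster; the HDFS paths, streaming-jar location, and the detect_blooms.py mapper are illustrative assumptions, not the authors' actual scripts:

    import subprocess

    # Stage the cotton image data into the cluster's distributed file system.
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", "/cotton/images"], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "-f", "field_2022/", "/cotton/images"],
                   check=True)

    # Submit a Hadoop Streaming job; detection is per image, so it is map-only.
    subprocess.run([
        "hadoop", "jar", "/opt/hadoop/share/hadoop/tools/lib/hadoop-streaming.jar",
        "-files", "detect_blooms.py",            # ship the mapper to worker nodes
        "-input", "/cotton/images/manifest.txt", # one image path per line
        "-output", "/cotton/detections",
        "-mapper", "python3 detect_blooms.py",   # runs Tiny-YOLOv4 per image
        "-numReduceTasks", "0",
    ], check=True)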
Subject(s)
Gossypium , Phenotype , Humans , Electronic Data Processing
ABSTRACT
Pregnancy monitoring is essential for the health of both pregnant women and fetuses. According to a WHO (World Health Organization) report, there were an estimated 287,000 maternal deaths worldwide in 2020. Regular hospital check-ups, although well established, are a burden for pregnant women because of frequent travelling or hospitalization. Home-based, long-term, non-invasive health monitoring is therefore an active research area. In recent years, with the development of wearable sensors and related data-processing technologies, pregnancy monitoring has become increasingly convenient. This article presents a review of recent research in wearable sensors, physiological data processing, and artificial intelligence (AI) for pregnancy monitoring. The wearable sensors mainly focus on physiological signals such as electrocardiogram (ECG), uterine contraction (UC), and fetal movement (FM), as well as multimodal pregnancy-monitoring systems. The data processing involves data transmission, pre-processing, and the application of threshold-based and AI-based algorithms. AI proves to be a powerful tool for early detection, smart diagnosis, and lifelong well-being in pregnancy monitoring. In this review, some improvements are proposed for the future health monitoring of pregnant women. The rollout of smart wearables and the introduction of AI have shown remarkable potential in pregnancy monitoring despite challenges in accuracy, data privacy, and user compliance.
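As a hedged toy example of the threshold-based algorithms the review covers, the following sketch flags sustained excursions above baseline in a synthetic contraction-like signal (all parameters are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 600, 6000)                      # 10 min sampled at 10 Hz
    signal = 5 + rng.normal(0, 0.5, t.size)
    signal[1200:1500] += 20 * np.hanning(300)          # one synthetic contraction

    above = signal > np.median(signal) + 5             # amplitude threshold
    idx = np.flatnonzero(above)
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    # Keep only excursions lasting >= 10 s (100 samples) to suppress noise spikes.
    events = [(t[r[0]], t[r[-1]]) for r in runs if r.size >= 100]
    print("detected events (start s, end s):", events)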
Subject(s)
Artificial Intelligence , Wearable Electronic Devices , Humans , Pregnancy , Female , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods , Electrocardiography/methods , Electrocardiography/instrumentation , Algorithms , Electronic Data Processing/methods , Uterine Contraction/physiology
ABSTRACT
BACKGROUND: We explore an innovative approach of transforming patient information leaflets (PILs) into Quick Response (QR) code-linked patient information videos (PIVs) in ophthalmology. Our objectives were to assess the subjective utility of a PIV on glaucoma and to analyse the use of QR codes as a delivery method. METHODS: A prospective study was conducted in Ninewells Hospital, NHS Tayside. A glaucoma PIV was created and linked to a QR code provided to 130 glaucoma patients. Pre- and post-video questionnaires evaluated the patients' perception of using a QR code and the subjective improvement in their understanding of glaucoma. RESULTS: Of the 102 responses collected, 55% of patients had no prior experience with QR codes. However, 81% of patients were able to watch the PIV. The average view duration was 3 min 26 s, with 82.5% view retention. A statistically significant improvement in glaucoma knowledge was observed across all six areas questioned (p < 0.001) using a 5-point Likert scale. Overall, 70% of patients preferred PIVs over PILs, and 77% acknowledged that PIVs could be a sustainable alternative. CONCLUSION: QR codes for delivering PIVs were well received, with patients finding them easy to use. Our PIV on glaucoma effectively enhanced patients' understanding of the condition.
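For context, generating such a QR code is a one-liner with the open-source Python qrcode package (our illustration; the study does not state its generation tooling, and the URL is a placeholder):

    import qrcode  # pip install qrcode[pil]

    img = qrcode.make("https://example.org/glaucoma-piv")  # placeholder video URL
    img.save("glaucoma_piv_qr.png")                        # print on leaflets or posters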
Subject(s)
Audiovisual Aids , Electronic Data Processing , Health Communication , Information Dissemination , Ophthalmology , Patient Education as Topic , Aged , Female , Humans , Male , Glaucoma , Health Communication/methods , Information Dissemination/methods , Ophthalmology/methods , Patient Education as Topic/methods , Prospective Studies , Surveys and Questionnaires , Pamphlets
ABSTRACT
Drugs are an integral part of modern society, but along with their therapeutic effects, drugs can also cause adverse effects ranging from mild to morbid. Pharmacovigilance is the process of collection, detection, assessment, monitoring and prevention of adverse drug events, both in clinical trials and in the post-marketing phase. The recent increase in previously unknown adverse events, known as signals, has raised the need for an ideal system for monitoring and detecting potential signals in a timely manner. Signal management comprises techniques to systematically identify individual case safety reports. Automated signal detection relies heavily on data mining of spontaneous reporting systems, such as reports from health care professionals, observational studies, the medical literature, and social media. If a signal is not managed properly, it can become an identified risk associated with the drug, which can be hazardous to patient safety, may have fatal outcomes, and may impact the health care system adversely. Once a signal is detected quantitatively, it can be processed further by the signal management team for qualitative analysis and further evaluation. The main components of automated signal detection are data extraction, acquisition, selection, analysis, and evaluation. This system must be developed in the correct format and context, which ultimately emphasizes the quality of the data collected and leads to optimal decision-making based on scientific evaluation.
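As a hedged illustration of the quantitative detection step, the following sketch computes the proportional reporting ratio (PRR), one widely used disproportionality statistic for spontaneous reporting data; the counts are invented:

    # 2x2 contingency of spontaneous reports (hypothetical numbers).
    a = 25     # drug of interest AND event of interest
    b = 975    # drug of interest, other events
    c = 300    # other drugs, event of interest
    d = 99700  # other drugs, other events

    prr = (a / (a + b)) / (c / (c + d))
    print(f"PRR = {prr:.2f}")  # values well above 1 (e.g., >2) often prompt review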
Subject(s)
Adverse Drug Reaction Reporting Systems , Data Mining , Databases, Factual , Electronic Data Processing , Pharmacovigilance , Humans
ABSTRACT
The rapid increase of genome data brought by gene sequencing technologies poses a massive challenge to data processing. To solve the problems caused by enormous data volumes and complex computing requirements, researchers have proposed many methods and tools, which can be divided into three types: big data storage, efficient algorithm design and parallel computing. The purpose of this review is to investigate popular parallel programming technologies for genome sequence processing. Three common parallel computing models are introduced according to their hardware architectures; each is classified into two or three subtypes and further analyzed in terms of its features. Then, parallel computing for genome sequence processing is discussed with four common applications: genome sequence alignment, single nucleotide polymorphism calling, genome sequence preprocessing, and pattern detection and searching. For each application, the background is first introduced, and then representative tools and algorithms are summarized in terms of principle, hardware platform and computing efficiency. The programming model of each hardware platform and application provides a reference for researchers choosing high-performance computing tools. Finally, we discuss the limitations and future trends of parallel computing technologies.
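A minimal Python sketch of the shared-memory parallel model discussed here (illustrative only; production tools use C/C++, MPI or GPUs): chunks of a genome sequence are processed concurrently by a process pool.

    from multiprocessing import Pool

    def gc_fraction(seq):
        """Per-chunk GC content; stands in for any per-chunk sequence analysis."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / max(1, len(seq))

    if __name__ == "__main__":
        chunks = ["ACGTACGGCC", "TTTTAAAACG", "GGGCGCGCAT", "ATATATGCGC"]
        with Pool(processes=4) as pool:           # e.g., one worker per core
            print(pool.map(gc_fraction, chunks))  # chunks processed in parallel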
Subject(s)
Electronic Data Processing/methods , Genome, Human , Genomics/methods , Polymorphism, Single Nucleotide , Sequence Alignment/methods , Algorithms , Base Sequence/genetics , Chromosome Mapping/methods , High-Throughput Nucleotide Sequencing/methods , Humans , Information Storage and Retrieval , Software , Whole Genome Sequencing/methods
ABSTRACT
MOTIVATION: Single-cell sequencing methods provide previously impossible resolution into the transcriptome of individual cells. Cell hashing reduces single-cell sequencing costs by increasing capacity on droplet-based platforms. Cell hashing methods rely on demultiplexing algorithms to accurately classify droplets; however, assumptions underlying these algorithms limit accuracy of demultiplexing, ultimately impacting the quality of single-cell sequencing analyses. RESULTS: We present Bimodal Flexible Fitting (BFF) demultiplexing algorithms BFFcluster and BFFraw, a novel class of algorithms that rely on the single inviolable assumption that barcode count distributions are bimodal. We integrated these and other algorithms into cellhashR, a new R package that provides integrated QC and a single command to execute and compare multiple demultiplexing algorithms. We demonstrate that BFFcluster demultiplexing is both tunable and insensitive to issues with poorly behaved data that can confound other algorithms. Using two well-characterized reference datasets, we demonstrate that demultiplexing with BFF algorithms is accurate and consistent for both well-behaved and poorly behaved input data. AVAILABILITY AND IMPLEMENTATION: cellhashR is available as an R package at https://github.com/BimberLab/cellhashR. cellhashR version 1.0.3 was used for the analyses in this manuscript and is archived on Zenodo at https://www.doi.org/10.5281/zenodo.6402477. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
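A hedged sketch of the bimodality assumption at the heart of BFF (not the cellhashR implementation): a tiny one-dimensional two-means on log counts places a cutoff in the gap between the background and positive modes.

    import numpy as np

    def bimodal_cutoff(counts, iters=50):
        logc = np.log10(np.asarray(counts, dtype=float) + 1.0)
        lo, hi = logc.min(), logc.max()           # initial centre guesses
        for _ in range(iters):                    # 1-D two-means iterations
            near_lo = np.abs(logc - lo) < np.abs(logc - hi)
            lo, hi = logc[near_lo].mean(), logc[~near_lo].mean()
        return 10 ** ((lo + hi) / 2) - 1.0        # cutoff between the two modes

    rng = np.random.default_rng(1)
    counts = np.concatenate([rng.lognormal(1.0, 0.4, 5000),   # background droplets
                             rng.lognormal(4.0, 0.5, 1000)])  # true positives
    print(f"suggested cutoff ~ {bimodal_cutoff(counts):.0f} counts")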
Subject(s)
Algorithms , Software , Electronic Data Processing , Sequence Analysis , Single-Cell Analysis
ABSTRACT
Engagement with scientific manuscripts is frequently facilitated by Twitter and other social media platforms. As such, the demographics of a paper's social media audience provide a wealth of information about how scholarly research is transmitted, consumed, and interpreted by online communities. By paying attention to public perceptions of their publications, scientists can learn whether their research is stimulating positive scholarly and public thought. They can also become aware of potentially negative patterns of interest from groups that misinterpret their work in harmful ways, either willfully or unintentionally, and devise strategies for altering their messaging to mitigate these impacts. In this study, we collected 331,696 Twitter posts referencing 1,800 highly tweeted bioRxiv preprints and leveraged topic modeling to infer the characteristics of various communities engaging with each preprint on Twitter. We agnostically learned the characteristics of these audience sectors from keywords each user's followers provide in their Twitter biographies. We estimate that 96% of the preprints analyzed are dominated by academic audiences on Twitter, suggesting that social media attention does not always correspond to greater public exposure. We further demonstrate how our audience segmentation method can quantify the level of interest from nonspecialist audience sectors such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, conspiracy theorists, journalists, religious groups, and political constituencies. Surprisingly, we also found that 10% of the preprints analyzed have sizable (>5%) audience sectors that are associated with right-wing white nationalist communities. Although none of these preprints appear to intentionally espouse any right-wing extremist messages, cases exist in which extremist appropriation comprises more than 50% of the tweets referencing a given preprint. These results present unique opportunities for improving and contextualizing the public discourse surrounding scientific research.
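A hedged, minimal sketch of the audience-inference idea (scikit-learn LDA over follower-biography keywords; the biographies below are invented, and the authors' actual model may differ):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    bios = [
        "phd student computational biology genomics",
        "dog mom rescue pups treats",
        "bitcoin trader crypto hodl",
        "postdoc neuroscience brain imaging",
    ]  # hypothetical follower biographies

    X = CountVectorizer(stop_words="english").fit_transform(bios)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(lda.transform(X))  # per-bio topic mixtures -> audience sector estimates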
Subject(s)
Databases as Topic , Publications , Science , Social Change , Social Media , Academies and Institutes/organization & administration , Academies and Institutes/standards , Academies and Institutes/statistics & numerical data , Access to Information , Databases as Topic/organization & administration , Databases as Topic/standards , Databases as Topic/statistics & numerical data , Electronic Data Processing/organization & administration , Electronic Data Processing/standards , Electronic Data Processing/statistics & numerical data , Humans , Information Literacy , Internet/organization & administration , Internet/standards , Internet/statistics & numerical data , Political Activism , Publications/classification , Publications/standards , Publications/statistics & numerical data , Publications/supply & distribution , Science/organization & administration , Science/standards , Science/statistics & numerical data , Social Media/organization & administration , Social Media/standards , Social Media/statistics & numerical data
ABSTRACT
Using computer vision through artificial intelligence (AI) is one of the main technological advances in dentistry. However, the existing literature on the practical application of AI for detecting cephalometric landmarks of orthodontic interest in digital images is heterogeneous, and there is no consensus regarding accuracy and precision. Thus, this review evaluated the use of artificial intelligence for detecting cephalometric landmarks in digital imaging examinations and compared it to manual annotation of landmarks. An electronic search was performed in nine databases to find studies that analyzed the detection of cephalometric landmarks in digital imaging examinations with AI and manual landmarking. Two reviewers selected the studies, extracted the data, and assessed the risk of bias using QUADAS-2. Random-effects meta-analyses determined the agreement and precision of AI compared to manual detection at a 95% confidence interval. The electronic search located 7410 studies, of which 40 were included. Only three studies presented a low risk of bias for all domains evaluated. The meta-analysis showed AI agreement rates of 79% (95% CI: 76-82%, I² = 99%) and 90% (95% CI: 87-92%, I² = 99%) for the thresholds of 2 and 3 mm, respectively, with a mean divergence of 2.05 (95% CI: 1.41-2.69, I² = 10%) compared to manual landmarking. The menton cephalometric landmark showed the lowest divergence between both methods (SMD, 1.17; 95% CI, 0.82-1.53; I² = 0%). Based on very low certainty of evidence, the application of AI was promising for automatically detecting cephalometric landmarks, but further studies should focus on testing its strength and validity in different samples.
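For readers unfamiliar with the pooling step, here is a hedged sketch of a DerSimonian-Laird random-effects meta-analysis; the per-study agreement rates and variances below are invented, not the review's data:

    import numpy as np

    y = np.array([0.78, 0.81, 0.76, 0.84])      # per-study agreement (hypothetical)
    v = np.array([0.002, 0.001, 0.003, 0.002])  # per-study variances (hypothetical)

    w = 1 / v
    ybar = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    Q = np.sum(w * (y - ybar) ** 2)              # Cochran's heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled = {pooled:.3f}, "
          f"95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")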
Subject(s)
Algorithms , Artificial Intelligence , Humans , Reproducibility of Results , Cephalometry/methods , Electronic Data Processing
ABSTRACT
Preventing medication errors remains a priority in nursing education. The implementation of Barcode Medication Administration (BCMA) systems is one strategy that has been used to reduce medication errors. Practice using BCMA in simulated settings may enhance the transfer of these skills to the clinical practice setting. However, the purchase of BCMA educational products available for nursing students can be cost prohibitive for many nursing programs. To overcome the barrier of cost, an interdisciplinary and innovative collaborative approach was used to create a fully functional low-cost BCMA system.
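A toy sketch of the verification logic at the heart of any BCMA system (our simplification for teaching; the identifiers and data structures are invented, not the authors' design):

    # Active orders keyed by (patient barcode, medication barcode).
    orders = {
        ("PT-1001", "NDC-0093-7146"): {"drug": "metoprolol 25 mg", "route": "PO"},
    }

    def verify_administration(patient_barcode, med_barcode):
        order = orders.get((patient_barcode, med_barcode))
        if order is None:
            return "ALERT: no active order for this patient/medication pair"
        return f"OK to administer: {order['drug']} ({order['route']})"

    print(verify_administration("PT-1001", "NDC-0093-7146"))  # matches the order
    print(verify_administration("PT-1001", "NDC-9999-0000"))  # triggers an alert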
Subject(s)
Education, Nursing , Electronic Data Processing , Humans , Medication Errors/prevention & control , Interdisciplinary Studies , Computers
ABSTRACT
BACKGROUND: Modern mass spectrometry has revolutionized the detection and analysis of metabolites but has likewise caused data volumes to skyrocket, with metabolomics repositories filling up with thousands of datasets. While there are many software tools for the analysis of individual experiments with a few to dozens of chromatograms, there is a demand for a contemporary software solution capable of processing and analyzing hundreds or even thousands of experiments in an integrative manner with standardized workflows. RESULTS: Here, we introduce MetHoS, an automated web-based software platform for the processing, storage and analysis of large amounts of mass spectrometry-based metabolomics data originating from different metabolomics studies. MetHoS is based on Big Data frameworks to enable parallel processing, distributed storage and distributed analysis of even larger data sets across clusters of computers in a highly scalable manner. It has been designed to allow the processing and analysis of any number of experiments and samples in an integrative manner. In order to demonstrate the capabilities of MetHoS, thousands of experiments were downloaded from the MetaboLights database and used to perform large-scale processing, storage and statistical analysis in a proof-of-concept study. CONCLUSIONS: MetHoS is suitable for the large-scale processing, storage and analysis of metabolomics data aimed at untargeted metabolomic analyses. It is freely available at: https://methos.cebitec.uni-bielefeld.de/ . Users interested in analyzing their own data are encouraged to apply for an account.
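A hedged sketch of the distributed-analysis pattern such a platform builds on (PySpark here; the MetHoS stack may differ, and the CSV columns run_id, metabolite and intensity are assumptions):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("metabolomics-aggregate").getOrCreate()

    # Read peak tables from many runs in parallel across the cluster.
    peaks = spark.read.csv("hdfs:///metabolomics/peaks/*.csv",
                           header=True, inferSchema=True)

    # Aggregate per metabolite across thousands of experiments.
    summary = (peaks.groupBy("metabolite")
                    .agg(F.mean("intensity").alias("mean_intensity"),
                         F.count("run_id").alias("n_runs")))
    summary.show(10)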
Subject(s)
Metabolomics , Software , Electronic Data Processing , Mass Spectrometry , Metabolomics/methods , Workflow
ABSTRACT
BACKGROUND: Barcode-based multiplexing methods can be used to increase throughput and reduce batch effects in large single-cell genomics studies. Despite advantages in flexibility of sample collection and scale, there are additional complications in the data deconvolution steps required to assign each cell to its originating sample. RESULTS: To meet computational needs for efficient sample deconvolution, we developed the tools BarCounter and BarMixer, which compute barcode counts and deconvolute mixed single-cell data into sample-specific files, respectively. Together, these tools are implemented as the BarWare pipeline to support demultiplexing from large sequencing projects with many wells of hashed 10x Genomics scRNA-seq data. CONCLUSIONS: BarWare is a modular set of tools linked by shell scripting: BarCounter, a computationally efficient barcode sequence quantification tool implemented in C; and BarMixer, an R package for identification of barcoded populations, merging barcoded data from multiple wells, and quality-control reporting related to scRNA-seq data. These tools and a self-contained implementation of the pipeline are freely available for non-commercial use at https://github.com/AllenInstitute/BarWare-pipeline .
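For intuition, here is a hedged Python sketch of the counting step a tool like BarCounter performs (the real tool is written in C; the barcode position and length are assumptions):

    from collections import Counter

    def count_barcodes(fastq_path, whitelist, bc_len=15):
        """Tally whitelisted barcodes found at the start of each read."""
        counts = Counter()
        with open(fastq_path) as fq:
            for i, line in enumerate(fq):
                if i % 4 == 1:                  # FASTQ: sequence is line 2 of 4
                    barcode = line[:bc_len]
                    if barcode in whitelist:
                        counts[barcode] += 1
        return counts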
Subject(s)
Genomics , Software , Electronic Data Processing , Genomics/methods , Quality Control
ABSTRACT
Multimodal magnetic resonance imaging (MRI) has accelerated human neuroscience by fostering the analysis of brain microstructure, geometry, function, and connectivity across multiple scales and in living brains. The richness and complexity of multimodal neuroimaging, however, demands processing methods to integrate information across modalities and to consolidate findings across different spatial scales. Here, we present micapipe, an open processing pipeline for multimodal MRI datasets. Based on BIDS-conform input data, micapipe can generate i) structural connectomes derived from diffusion tractography, ii) functional connectomes derived from resting-state signal correlations, iii) geodesic distance matrices that quantify cortico-cortical proximity, and iv) microstructural profile covariance matrices that assess inter-regional similarity in cortical myelin proxies. The above matrices can be automatically generated across 18 established cortical parcellations (100-1000 parcels), in addition to subcortical and cerebellar parcellations, allowing researchers to replicate findings easily across different spatial scales. Results are represented on three different surface spaces (native, conte69, fsaverage5), and outputs are BIDS-conform. Processed outputs can be quality controlled at the individual and group level. micapipe was tested on several datasets and is available at https://github.com/MICA-MNI/micapipe, documented at https://micapipe.readthedocs.io/, and containerized as a BIDS App http://bids-apps.neuroimaging.io/apps/. We hope that micapipe will foster robust and integrative studies of human brain microstructure, morphology, function, and connectivity.
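As a hedged sketch of output (ii), a functional connectome is essentially the parcel-by-parcel correlation matrix of resting-state time series (random data here stand in for the pipeline's parcellated signals):

    import numpy as np

    rng = np.random.default_rng(42)
    timeseries = rng.standard_normal((100, 400))   # 100 parcels, 400 volumes

    fc = np.corrcoef(timeseries)                   # 100 x 100 functional connectome
    np.fill_diagonal(fc, 0)                        # drop self-correlations
    print(fc.shape, f"edge range: [{fc.min():.2f}, {fc.max():.2f}]")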
Subject(s)
Connectome , Electronic Data Processing , Neuroimaging , Software , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Connectome/methods , Diffusion Tensor Imaging , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Software/standards , Electronic Data Processing/methods , Electronic Data Processing/standards
ABSTRACT
Quantum guessing games form a versatile framework for studying different tasks of information processing. A quantum guessing game with posterior information uses quantum systems to encode messages and classical communication to give partial information after a quantum measurement has been performed. We present a general framework for quantum guessing games with posterior information and derive structure and reduction theorems that make it possible to analyze any such game. We formalize the symmetry of guessing games and characterize the optimal measurements in cases where the symmetry is related to an irreducible representation. The application of guessing games to incompatibility detection is reviewed and clarified. All the main concepts and results presented are demonstrated with examples.
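For orientation, the standard figure of merit in such games is the optimal guessing probability (our notation, assumed rather than quoted from the paper): for a state ensemble {p_x, rho_x},

    % Minimum-error guessing probability; the maximization runs over POVMs {M_x}.
    P_{\mathrm{guess}} = \max_{\{M_x\}} \sum_{x} p_x \, \mathrm{Tr}\!\left[\rho_x M_x\right]

With posterior information, the guess may additionally depend on a classical message y announced after the measurement, which can only increase the attainable probability.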
Subject(s)
Electronic Data Processing , Game Theory , Communication
ABSTRACT
Barcoding and pooling cells for processing as a composite sample are critical to minimize technical variability in multiplex technologies. Fluorescent cell barcoding has been established as a standard method for multiplexing in flow cytometry analysis. In parallel, mass-tag barcoding is routinely used to label cells for mass cytometry. Barcode reagents currently used label intracellular proteins in fixed and permeabilized cells and, therefore, are not suitable for studies with live cells in long-term culture prior to analysis. In this study, we report the development of fluorescent palladium-based hybrid-tag nanotrackers to barcode live cells for flow and mass cytometry dual-modal readout. We describe the preparation, physicochemical characterization, efficiency of cell internalization, and durability of these nanotrackers in live cells cultured over time. In addition, we demonstrate their compatibility with standardized cytometry reagents and protocols. Finally, we validated these nanotrackers for drug response assays during a long-term coculture experiment with two barcoded cell lines. This method represents a new and widely applicable advance for fluorescent and mass-tag barcoding that is independent of protein expression levels and can be used to label cells before long-term drug studies.
Subject(s)
Electronic Data Processing , Fluorescent Dyes , Cell Line , Flow Cytometry/methods , Fluorescent Dyes/chemistry , Proteomics
ABSTRACT
The biological nervous system possesses a powerful information processing capability and needs only a partial signal stimulation to perceive the entire signal. Likewise, the hardware implementation of an information processing system with similar capabilities is of great significance for reducing the dimensionality of sensor data and improving processing efficiency. Here, it is reported that indium-gallium-zinc-oxide thin-film phototransistors exhibit optoelectronic switching and light-tunable synaptic characteristics for in-sensor compression and computing. Phototransistor arrays can compress the signal while sensing it, realizing in-sensor compression. Additionally, a reservoir computing network can be implemented via phototransistors for in-sensor computing. By integrating these two systems, a neuromorphic system for high-efficiency in-sensor compression and computing is demonstrated. The results reveal that even when the signal is compressed by 50%, the recognition accuracy of the reconstructed signal still reaches ≈96%. This work paves the way for efficient information processing in human-computer interaction and the Internet of Things.
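A hedged software analogue of the reservoir computing network described here (a toy echo state network in NumPy; in the paper the reservoir is realized physically by the phototransistor dynamics):

    import numpy as np

    rng = np.random.default_rng(7)
    n_in, n_res = 4, 100
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1: echo state

    def reservoir_states(inputs):
        """Drive the fixed random reservoir with an input sequence."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:                           # inputs: sequence of n_in vectors
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Only a linear readout over the states is trained (e.g., ridge regression).
    states = reservoir_states(rng.standard_normal((20, n_in)))
    print(states.shape)                            # (timesteps, reservoir units)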