ABSTRACT
The BDB database (http://immunet.cn/bdb) is an update of the MimoDB database, which was previously described in the 2012 Nucleic Acids Research Database issue. The database has been rebranded as BDB, short for Biopanning Data Bank, and aims to be a portal for biopanning results from combinatorial peptide libraries. Last updated in July 2015, BDB contains 2904 sets of biopanning data collected from 1322 peer-reviewed papers, comprising 25,786 peptide sequences, 1704 targets, 492 known templates, 447 peptide libraries and 310 crystal structures of target-template or target-peptide complexes. All data stored in BDB were revisited, and information on peptide affinity, measurement method and procedures was added for 2298 peptides from 411 sets of biopanning data reported in 246 published papers. In addition, a more professional and user-friendly web interface was implemented, a more detailed help system was designed, and a new on-the-fly data visualization tool and a series of data analysis tools were integrated. With these new data and tools, we expect the BDB database to become a major resource for scholars using phage display, with improved utility for biopanning and related scientific communities.
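As an illustration only, and not part of the BDB toolset, the following sketch shows the kind of downstream analysis such biopanning data supports: computing per-position amino-acid frequencies for peptides selected in one biopanning experiment. The FASTA records and sequences are invented for the example.

```python
# Illustrative sketch only; not part of BDB. Assumes peptides from one
# biopanning set have been exported in FASTA format (sequences are made up).
from collections import Counter

fasta = """>pep1
HAIYPRH
>pep2
HSWYWLR
>pep3
HTSLLRH
"""

# Parse FASTA records into plain sequences.
peptides = [line.strip() for line in fasta.splitlines()
            if line and not line.startswith(">")]

# Per-position amino-acid frequencies for equal-length peptides
# (7-mers here, as in many phage display libraries).
length = len(peptides[0])
for pos in range(length):
    counts = Counter(p[pos] for p in peptides)
    total = sum(counts.values())
    freqs = {aa: round(n / total, 2) for aa, n in counts.items()}
    print(f"position {pos + 1}: {freqs}")
```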
Subjects
Chemical Compound Databases, Peptide Library, Peptides/chemistry, Cell Surface Display Techniques, Internet, Software
ABSTRACT
Background: The failure of high-flow nasal cannula (HFNC) oxygen therapy can necessitate endotracheal intubation, making timely prediction of the intubation risk following HFNC therapy crucial for reducing mortality due to delayed intubation. Objectives: To investigate the accuracy of ChatGPT in predicting the endotracheal intubation risk within 48 h following HFNC therapy and to compare it with the predictive accuracy of specialist and non-specialist physicians. Methods: We conducted a prospective multicenter cohort study based on data from 71 adult patients who received HFNC therapy. For each patient, baseline data and physiological parameters after 6 h of HFNC therapy were recorded to create a 6-alternative forced-choice questionnaire asking participants to predict the 48-h endotracheal intubation risk on a scale from 1 to 6, with higher scores indicating greater risk. GPT-3.5, GPT-4.0, respiratory and critical care specialist physicians, and non-specialist physicians each completed the same questionnaires (N = 71). We then used the Youden index to determine the optimal diagnostic cutoff point for each predictor and for the 6-h ROX index, and compared their predictive performance using receiver operating characteristic (ROC) analysis. Results: The optimal diagnostic cutoff points were ≥4 for both GPT-4.0 and specialist physicians. GPT-4.0 demonstrated a precision of 76.1%, with a specificity of 78.6% (95% CI = 52.4-92.4%) and a sensitivity of 75.4% (95% CI = 62.9-84.8%). In comparison, the precision of specialist physicians was 80.3%, with a specificity of 71.4% (95% CI = 45.4-88.3%) and a sensitivity of 82.5% (95% CI = 70.6-90.2%). For GPT-3.5 and non-specialist physicians, the optimal diagnostic cutoff points were ≥5, with precisions of 73.2% and 64.8%, respectively. The area under the curve (AUC) in ROC analysis for GPT-4.0 was 0.821 (95% CI = 0.698-0.943), which was the highest among the predictors and significantly higher than that of non-specialist physicians [0.662 (95% CI = 0.518-0.805), P = 0.011]. Conclusion: GPT-4.0 achieves an accuracy comparable to that of specialist physicians in predicting the 48-h endotracheal intubation risk following HFNC therapy, based on patient baseline data and physiological parameters after 6 h of HFNC therapy.
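For readers unfamiliar with the analysis, here is a minimal sketch, with invented data rather than the study's, of how a Youden-index cutoff and ROC AUC can be computed for 1-6 risk ratings against observed 48-h intubation outcomes.

```python
# Minimal sketch (not the study's analysis code). Assumes `scores` are 1-6
# risk ratings from one predictor and `intubated` are the observed 48-h
# outcomes (1 = intubated, 0 = not); all values here are made up.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

scores = np.array([4, 2, 5, 6, 1, 3, 5, 4, 2, 6])      # hypothetical ratings
intubated = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])   # hypothetical outcomes

fpr, tpr, thresholds = roc_curve(intubated, scores)
youden = tpr - fpr                  # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)

print(f"optimal cutoff: score >= {thresholds[best]:.0f}")
print(f"sensitivity: {tpr[best]:.3f}, specificity: {1 - fpr[best]:.3f}")
print(f"AUC: {roc_auc_score(intubated, scores):.3f}")
```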
ABSTRACT
To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e., manifolds) in support of multiple categorization behaviors. Here, we tested this hypothesis by analyzing these transformations reflected in dynamic MEG source activity while individual participants actively categorized the same stimuli according to different tasks: face expression, face gender, pedestrian gender, and vehicle type. Results reveal three transformation stages guided by the prefrontal cortex. At stage 1 (high-dimensional, 50-120 ms), occipital sources represent both task-relevant and task-irrelevant stimulus features; task-relevant features advance into higher ventral/dorsal regions, whereas task-irrelevant features halt at the occipital-temporal junction. At stage 2 (121-150 ms), stimulus feature representations reduce to lower-dimensional manifolds, which then transform into the task-relevant features underlying categorization behavior during stage 3 (161-350 ms). Our findings shed light on how the brain's network mechanisms transform high-dimensional inputs into specific feature manifolds that support multiple categorization behaviors.
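As a toy illustration of what reduction to a lower-dimensional manifold means in practice, and not the authors' pipeline, the sketch below uses PCA to count how many components are needed to explain most of the variance of simulated source activity in two time windows; the array shapes, simulated data and 90% variance threshold are assumptions.

```python
# Toy sketch: estimate representational dimensionality per time window with PCA.
# Shapes, data, and the 90% variance threshold are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_sources = 200, 50

def n_components_for(window_data, threshold=0.90):
    """Number of PCA components needed to explain `threshold` of the variance."""
    pca = PCA().fit(window_data)
    return int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), threshold) + 1)

# Simulated source activity averaged within two time windows (trials x sources).
early = rng.normal(size=(n_trials, n_sources))                            # broad, high-dimensional
late = rng.normal(size=(n_trials, 3)) @ rng.normal(size=(3, n_sources))   # low-rank structure
late += 0.05 * rng.normal(size=(n_trials, n_sources))                     # small noise

print("early window dimensionality:", n_components_for(early))
print("late window dimensionality:", n_components_for(late))
```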
Subjects
Occipital Lobe, Humans, Male, Female, Adult, Occipital Lobe/physiology, Young Adult, Prefrontal Cortex/physiology, Magnetoencephalography
ABSTRACT
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions achieve complex dynamic signaling tasks, by revealing the rich information embedded in facial expressions.
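To make the perception-based, data-driven logic concrete, here is a highly simplified sketch rather than the authors' model: random facial-movement patterns (action-unit amplitudes) are presented on each trial, and a receiver's classifications are used to estimate which movements drive the "anger" response. The action-unit names, trial data and simulated receiver are all invented.

```python
# Simplified reverse-correlation sketch: which randomly sampled facial
# movements (action units, AUs) are associated with a receiver's "anger"
# responses? All data and the simulated receiver are illustrative.
import numpy as np

rng = np.random.default_rng(1)
aus = ["brow_lowerer", "lid_tightener", "lip_presser", "smile", "jaw_drop"]
n_trials = 2000

# Random AU amplitudes (0 to 1) presented on each trial.
stimuli = rng.uniform(0, 1, size=(n_trials, len(aus)))

# Simulated receiver: responds "anger" when brow lowerer + lip presser are strong.
responded_anger = (stimuli[:, 0] + stimuli[:, 2]) > 1.4

# Classification image: mean AU amplitude on "anger" trials minus the overall mean.
kernel = stimuli[responded_anger].mean(axis=0) - stimuli.mean(axis=0)
for au, w in sorted(zip(aus, kernel), key=lambda t: -t[1]):
    print(f"{au:>14s}: {w:+.3f}")
```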
Subjects
Emotions, Facial Expression, Humans, Anger, Fear, Happiness
ABSTRACT
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results, based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms, show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
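The idea of shared, multiplexed signal components can be illustrated with a toy comparison of two modeled facial expressions, one for an emotion category and one for a broad dimension, each represented here simply as a set of action units; the sets are invented for illustration.

```python
# Toy illustration of shared ("multiplexed") facial signal components:
# compare the action units (AUs) modeled for a category vs. a dimension.
# AU sets are invented for illustration only.
anger_category = {"brow_lowerer", "lid_tightener", "lip_presser", "nose_wrinkler"}
negative_high_arousal = {"brow_lowerer", "lid_tightener", "upper_lid_raiser"}

shared = anger_category & negative_high_arousal        # candidate multiplexed components
jaccard = len(shared) / len(anger_category | negative_high_arousal)

print("shared components:", sorted(shared))
print(f"Jaccard overlap: {jaccard:.2f}")
```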
Subjects
Emotions, Facial Expression, Anger, Arousal, Face, Humans
ABSTRACT
CRISPR-Cas (clustered regularly interspaced short palindromic repeats-CRISPR-associated proteins) adaptive immune systems have been found in many bacteria and most archaea. These systems are encoded by cas (CRISPR-associated) operons with extremely diverse architectures. The most crucial step in characterizing cas operon composition is the identification of cas genes or Cas proteins. With the continuing growth in newly sequenced archaeal and bacterial genomes, recognition of new Cas proteins is becoming possible, which not only provides candidates for novel genome-editing tools but also helps us better understand the prokaryotic immune system. Here, we describe HMMCAS, a web service for the detection of CRISPR-associated structural and functional domains in protein sequences. HMMCAS uses the hmmscan similarity search algorithm in HMMER 3.1 to provide a fast, interactive service based on a comprehensive collection of hidden Markov models of Cas protein families. It can accurately identify Cas proteins, including fusion proteins such as the Cas1-Cas4 fusion protein in Candidatus Chloracidobacterium thermophilum B (Cab. thermophilum B). HMMCAS can also find putative cas operons and determine which type they belong to. HMMCAS is freely available at http://i.uestc.edu.cn/hmmcas.
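The sketch below illustrates the kind of hmmscan call that underlies such a service; it is not the HMMCAS source code, and the profile database cas_profiles.hmm (built and pressed from Cas-family alignments) and query file proteins.faa are placeholders.

```python
# Illustrative wrapper around HMMER's hmmscan (not HMMCAS source code).
# `cas_profiles.hmm` stands in for an hmmpress-ed database of Cas-family HMMs
# and `proteins.faa` for the query protein sequences; both are placeholders.
import subprocess

def scan_for_cas(hmm_db="cas_profiles.hmm", query="proteins.faa",
                 table_out="cas_hits.domtblout", evalue=1e-5):
    """Run hmmscan and return hits as (query, profile, e-value) tuples."""
    subprocess.run(
        ["hmmscan", "--domtblout", table_out, "-E", str(evalue), hmm_db, query],
        check=True, capture_output=True,
    )
    hits = []
    with open(table_out) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.split()
            # domtblout columns: target (profile) name is field 0,
            # query name is field 3, full-sequence E-value is field 6.
            hits.append((fields[3], fields[0], float(fields[6])))
    return hits

if __name__ == "__main__":
    for query, profile, ev in scan_for_cas():
        print(f"{query}\t{profile}\t{ev:.2e}")
```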
Subjects
CRISPR-Cas Systems, Computational Biology/methods, Software, Acidobacteria/genetics, Algorithms, Archaea/genetics, Archaeal Proteins/chemistry, Bacteria/genetics, Bacterial Proteins/chemistry, Archaeal Genome, Bacterial Genome, Internet, Markov Chains, Methanocaldococcus/genetics, Mimiviridae/genetics, Operon, Phylogeny, Protein Domains, Proteome, Proteomics
ABSTRACT
Database URL: The BDB database is available at http://immunet.cn/bdb.