ABSTRACT
BACKGROUND: Natural language processing (NLP) can facilitate research utilizing data from electronic health records (EHRs). Large language models can potentially improve NLP applications leveraging EHR notes. The objective of this study was to assess the performance of zero-shot learning using Chat Generative Pre-trained Transformer 4 (ChatGPT-4) for extraction of symptoms and signs, and to compare its performance with baseline machine learning and rule-based methods developed using annotated data. METHODS AND RESULTS: From unstructured clinical notes in the national EHR data of the Veterans healthcare system, we extracted 1999 text snippets containing relevant keywords for heart failure symptoms and signs, which were then annotated by two clinicians. We also created 102 synthetic snippets that were semantically similar to snippets randomly selected from the original 1999 snippets. We applied zero-shot learning with ChatGPT-4 to the symptom and sign extraction task on the synthetic snippets, using two different forms of prompt engineering. For comparison, baseline models using machine learning and rule-based methods were trained on the original 1999 annotated text snippets and then used to classify the 102 synthetic snippets. The best zero-shot learning application achieved 90.6% precision, 100% recall, and 95% F1 score, outperforming the best baseline method, which achieved 54.9% precision, 82.4% recall, and 65.5% F1 score. Prompt style and temperature settings influenced zero-shot learning performance. CONCLUSIONS: Zero-shot learning utilizing ChatGPT-4 significantly outperformed traditional machine learning and rule-based NLP. Prompt type and temperature settings affected zero-shot learning performance. These findings suggest a more efficient means of symptom and sign extraction than traditional machine learning and rule-based methods.
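As an illustration of the kind of zero-shot prompt-based extraction described above, the following minimal Python sketch sends a clinical snippet to a chat LLM through the OpenAI SDK. The prompt wording, the snippet, and the model identifier are assumptions for illustration only, not the study's actual prompts, data, or settings.

```python
# Minimal zero-shot extraction sketch (assumed prompt, snippet, and model name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = ("Patient reports worsening dyspnea on exertion and "
           "2+ pitting edema of both lower extremities.")  # illustrative snippet

prompt = (
    "You are extracting heart failure symptoms and signs from a clinical note.\n"
    "List every symptom or sign asserted as present, one per line; "
    "reply NONE if there are none.\n\n"
    f"Snippet: {snippet}"
)

response = client.chat.completions.create(
    model="gpt-4",      # the study used ChatGPT-4; the exact deployment name may differ
    temperature=0,      # temperature influenced performance in the study
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Requesting structured output (e.g., JSON) or adding demonstrations are common variants of the same prompt-engineering choices the abstract refers to.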
ABSTRACT
Power circuit breakers (CBs) are vital for the control and protection of power systems, yet diagnosing their faults accurately remains a challenge due to the diversity of fault types and the complexity of their structures. Traditional data-driven methods, although effective, require extensive labeled data for each fault class, limiting their applicability in real-world scenarios where many faults are unseen. This paper addresses these limitations by introducing symptom description transfer-based zero-shot fault diagnosis (SDT-ZSFD), a method that leverages zero-shot learning for fault diagnosis. Our approach constructs a fault symptom description (FSD) framework, which embeds a fault symptom layer between the feature layer and the label layer to facilitate knowledge transfer from seen to unseen fault classes. The method utilizes current and acceleration signals collected during CB operation to extract features. By applying sparse principal component analysis to these signals, we derive high-quality features that are mapped to the FSD framework, enabling effective zero-shot learning. Our method achieves a satisfactory recognition rate by accurately diagnosing unseen faults based on these symptoms. This approach not only overcomes the data scarcity problem but also holds potential for practical applications in power system maintenance. The SDT-ZSFD method offers a reliable solution for CB fault diagnosis and provides a foundation for future improvements in symptom-based zero-shot diagnostic mechanisms and algorithmic robustness.
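The feature-extraction step can be pictured with an off-the-shelf sparse PCA, as in the hedged sketch below; the signal matrix, component count, and regularization strength are placeholders rather than the SDT-ZSFD configuration.

```python
# Sparse-PCA feature sketch for CB signals (shapes and parameters are assumptions).
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
signals = rng.standard_normal((60, 512))   # stand-in current/acceleration windows

spca = SparsePCA(n_components=8, alpha=1.0, random_state=0)
features = spca.fit_transform(signals)     # (60, 8) sparse components per sample

# In the SDT-ZSFD framework these features would then be mapped to the fault
# symptom description (FSD) layer, which links seen and unseen fault classes.
print(features.shape)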
ABSTRACT
Cell surface proteins serve as primary drug targets and cell identity markers. Techniques such as CITE-seq (cellular indexing of transcriptomes and epitopes by sequencing) have enabled the simultaneous quantification of surface protein abundance and transcript expression within individual cells. The published data have been utilized to train machine learning models for predicting surface protein abundance solely from transcript expression. However, the small scale of proteins predicted and the poor generalization ability of these computational approaches across diverse contexts (e.g., different tissues/disease states) impede their widespread adoption. Here, we propose SPIDER (surface protein prediction using deep ensembles from single-cell RNA sequencing), a context-agnostic zero-shot deep ensemble model, which enables large-scale protein abundance prediction and generalizes better to various contexts. Comprehensive benchmarking shows that SPIDER outperforms other state-of-the-art methods. Using the predicted surface abundance of >2,500 proteins from single-cell transcriptomes, we demonstrate the broad applications of SPIDER, including cell type annotation, biomarker/target identification, and cell-cell interaction analysis in hepatocellular carcinoma and colorectal cancer. A record of this paper's transparent peer review process is included in the supplemental information.
Subjects
Membrane Proteins; Single-Cell Analysis; Transcriptome; Humans; Single-Cell Analysis/methods; Transcriptome/genetics; Membrane Proteins/genetics; Membrane Proteins/metabolism; Computational Biology/methods; Sequence Analysis, RNA/methods; Gene Expression Profiling/methods
ABSTRACT
In real industrial settings, collecting and labeling samples of concurrent abnormal control chart patterns (CCPs) is challenging, which hinders the effectiveness of current CCP recognition (CCPR) methods. This paper introduces zero-shot learning into quality control, proposing an intelligent model for recognizing zero-shot concurrent CCPs (C-CCPs). A multiscale ordinal pattern (OP) feature that accounts for the sequential relationships in the data is proposed. Drawing on expert knowledge, an attribute description space (ADS) is established to infer from single CCPs to C-CCPs: the ADS is embedded between features and labels, and the attribute classifier associates the features and attributes of CCPs. Experimental results demonstrate an accuracy of 98.73% on 11 unseen C-CCPs and an overall accuracy of 98.89% on all 19 CCPs, without any C-CCP samples in training. Compared with other features, the multiscale OP feature has the best recognition performance on unseen C-CCPs.
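To make the feature concrete, here is a small sketch of a multiscale ordinal-pattern histogram for a control chart series; the embedding dimension, scales, and coarse-graining rule are assumptions, not necessarily those used in the paper.

```python
# Multiscale ordinal-pattern (OP) feature sketch (parameters are assumptions).
import itertools
import numpy as np

def ordinal_pattern_hist(x, m=3):
    """Relative frequency of each of the m! ordinal (permutation) patterns."""
    perms = list(itertools.permutations(range(m)))
    counts = dict.fromkeys(perms, 0)
    for i in range(len(x) - m + 1):
        counts[tuple(int(j) for j in np.argsort(x[i:i + m]))] += 1
    total = max(len(x) - m + 1, 1)
    return np.array([counts[p] / total for p in perms])

def multiscale_op_feature(x, scales=(1, 2, 3), m=3):
    feats = []
    for s in scales:
        coarse = x[: len(x) // s * s].reshape(-1, s).mean(axis=1)  # coarse-grain at scale s
        feats.append(ordinal_pattern_hist(coarse, m))
    return np.concatenate(feats)

series = np.random.default_rng(1).standard_normal(200)   # stand-in CCP sample
print(multiscale_op_feature(series).shape)                # 3 scales x 3! patterns = 18
```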
ABSTRACT
Ensuring the stability of high-voltage circuit breakers (HVCBs) is crucial for maintaining an uninterrupted supply of electricity. Existing fault diagnosis methods typically rely on extensive labeled datasets, which are challenging to obtain due to the unique operational contexts and complex mechanical structures of HVCBs. Additionally, these methods often cater to specific HVCB models and lack generalizability across different types, limiting their practical applicability. To address these challenges, we propose a novel cross-domain zero-shot learning (CDZSL) approach specifically designed for HVCB fault diagnosis. This approach incorporates an adaptive weighted fusion strategy that combines vibration and current signals. To bypass the constraints of manual fault semantics, we develop an automatic semantic construction method. Furthermore, a multi-channel residual convolutional neural network is engineered to distill deep, low-level features, ensuring robust cross-domain diagnostic capabilities. Our model is further enhanced with a local subspace embedding technique that effectively aligns semantic features within the embedding space. Comprehensive experimental evaluations demonstrate the superior performance of our CDZSL approach in diagnosing faults across various HVCB types.
ABSTRACT
A data-driven approach to defect identification requires many labeled samples for model training, yet new defects tend to appear during data acquisition cycles, which can lead to a lack of labeled samples for these new defects. To address this problem, we propose a zero-shot pipeline blockage detection and identification method based on stacking ensemble learning. The experimental signals were first decomposed using variational mode decomposition (VMD), and the information entropy of each intrinsic mode function (IMF) component was calculated to construct the feature sets. Second, the attribute matrix was established according to the attribute descriptions of the defect categories, and a stacking ensemble attribute learner was used to learn the attributes from the defect features. Finally, defect identification was accomplished by comparing similarity against the attribute matrix. The experimental results show that target defects can be identified even without targeted training samples. The model showed good classification performance on the six sets of experimental data, and its average recognition accuracy for unknown defect categories reached 72.5%.
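A hedged sketch of the two later steps, entropy features from IMF components and attribute-based matching, is shown below; the VMD decomposition itself is omitted, and all matrices, attribute columns, and the cosine-similarity rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Entropy features + attribute-matrix matching sketch (all values are placeholders).
import numpy as np

def shannon_entropy(imf, bins=32):
    """Information entropy of one IMF component (one feature per component)."""
    p, _ = np.histogram(imf, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
imfs = rng.standard_normal((4, 1024))                  # stand-in for 4 VMD components
features = np.array([shannon_entropy(c) for c in imfs])

# Attribute matrix: rows = defect classes (incl. unseen), columns = attribute descriptions.
attribute_matrix = np.array([
    [1, 0, 1],   # seen defect A
    [0, 1, 1],   # seen defect B
    [1, 1, 0],   # unseen defect, described only by its attributes
])
predicted_attributes = np.array([0.9, 0.8, 0.2])       # stand-in output of the attribute learner

# Assign the class whose attribute description is most similar (cosine similarity).
sims = attribute_matrix @ predicted_attributes / (
    np.linalg.norm(attribute_matrix, axis=1) * np.linalg.norm(predicted_attributes))
print("entropy features:", features.round(2), "-> predicted class:", int(np.argmax(sims)))
```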
ABSTRACT
PURPOSE: In order to produce a surgical gesture recognition system that can support a wide variety of procedures, either a very large annotated dataset must be acquired, or fitted models must generalize to new labels (so-called zero-shot capability). In this paper we investigate the feasibility of the latter option. METHODS: Leveraging the Bridge-Prompt framework, we prompt-tune a pre-trained vision-text model (CLIP) for gesture recognition in surgical videos. This allows the use of extensive outside video data as well as text, label meta-data, and weakly supervised contrastive losses. RESULTS: Our experiments show that the prompt-based video encoder outperforms standard encoders in surgical gesture recognition tasks. Notably, it displays strong performance in zero-shot scenarios, where gestures/tasks that were not provided during the encoder training phase are included in the prediction phase. Additionally, we measure the benefit of including text descriptions in the feature-extractor training scheme. CONCLUSION: Bridge-Prompt and similar pre-trained, prompt-tuned video encoder models provide strong visual representations for surgical robotics, especially in gesture recognition tasks. Given the diverse range of surgical tasks (gestures), the ability of these models to transfer zero-shot, without any task- (gesture-) specific retraining, makes them invaluable.
ABSTRACT
Fine-grained visual categorization in the zero-shot setting is a challenging problem in the computer vision community. It requires algorithms to accurately identify fine-grained categories that do not appear during the training phase and have high visual similarity to each other. Existing methods usually address this problem by using attribute information as intermediate knowledge, which provides sufficient fine-grained characteristics of categories and can be transferred from seen categories to unseen categories. However, learning attribute visual features is not trivial for two reasons: (i) the visual information about attributes of different types may interfere with each other's visual feature learning, and (ii) the visual characteristics of the same attribute may vary across categories. To solve these issues, we propose a Multi-Group Multi-Stream attribute Attention network (MGMSA), which not only separates the feature learning of attributes of different types, but also isolates the learning of attribute visual features for categories with large differences in attribute appearance. This avoids interference between uncorrelated attributes and helps to learn category-specific attribute-related visual features, which is beneficial for distinguishing fine-grained categories with subtle visual differences. Extensive experiments on benchmark datasets show that MGMSA achieves state-of-the-art performance on attribute prediction and fine-grained zero-shot learning.
Subjects
Algorithms; Attention; Neural Networks, Computer; Attention/physiology; Humans; Machine Learning; Visual Perception/physiology
ABSTRACT
Supervised named entity recognition (NER) in the biomedical domain depends on large sets of texts annotated with the named entities of interest. Creating such datasets can be time-consuming and expensive, and extracting new entity types requires additional annotation and retraining of the model. This paper proposes a method for zero- and few-shot NER in the biomedical domain to address these challenges. The method is based on transforming the task of multi-class token classification into binary token classification and pre-training on a large number of datasets and biomedical entities, which allows the model to learn semantic relations between the given and potentially novel named entity labels. With a fine-tuned PubMedBERT-based model, we achieved average F1 scores of 35.44% for zero-shot NER, 50.10% for one-shot NER, 69.94% for 10-shot NER, and 79.51% for 100-shot NER on 9 diverse evaluated biomedical entities. The results demonstrate the effectiveness of the proposed method for recognizing new biomedical entities with no or a limited number of examples, outperforming previous transformer-based methods and remaining comparable to GPT-3-based models while using over 1000 times fewer parameters. We make the models and developed code publicly available.
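The core reformulation, binary token classification conditioned on the entity label, can be sketched with Hugging Face Transformers as below. The base checkpoint and the label-plus-sentence pair encoding are assumptions, and the classification head here is untrained; the authors' released models would supply the pre-trained head and may use a different input convention.

```python
# Zero-shot NER as binary token classification (sketch; checkpoint and input
# format are assumptions, and the head below is randomly initialized).
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=2)

entity_label = "Disease"   # the label name conditions the binary decision
sentence = "The patient was diagnosed with type 2 diabetes mellitus."

inputs = tokenizer(entity_label, sentence, return_tensors="pt")   # label/text pair
with torch.no_grad():
    logits = model(**inputs).logits          # (1, seq_len, 2): outside vs. inside entity
is_entity = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([t for t, flag in zip(tokens, is_entity) if flag == 1])
```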
Subjects
Semantics; Natural Language Processing; Humans; Data Mining/methods; Algorithms
ABSTRACT
Large language models (LLMs) are sophisticated AI-driven models trained on vast sources of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors. Despite their flexibility, a significant challenge arises because most users must transmit their data to the servers of the companies operating these models. Utilizing ChatGPT or similar models online may inadvertently expose sensitive information to the risk of data breaches. Therefore, implementing open-source, smaller-scale LLMs within a secure local network becomes a crucial step for organizations where data privacy and protection have the highest priority, such as regulatory agencies. As a feasibility evaluation, we implemented a series of open-source LLMs within a regulatory agency's local network and assessed their performance on specific tasks involving the extraction of relevant clinical pharmacology information from regulatory drug labels. Our research shows that some models work well in the context of few- or zero-shot learning, achieving performance comparable to, or even better than, neural network models that needed thousands of training samples. One of the models was selected to address a real-world issue of finding intrinsic factors that affect drugs' clinical exposure without any training or fine-tuning. On a dataset of over 700,000 sentences, the model achieved a 78.5% accuracy rate. Our work points to the possibility of implementing open-source LLMs within a secure local network and using these models to perform various natural language processing tasks when large numbers of training examples are unavailable.
Subjects
Natural Language Processing; Humans; Neural Networks, Computer; Machine Learning
ABSTRACT
BACKGROUND: Understanding the multifaceted nature of health outcomes requires a comprehensive examination of the social, economic, and environmental determinants that shape individual well-being. Among these determinants, behavioral factors play a crucial role, particularly the consumption patterns of psychoactive substances, which have important implications for public health. The Global Burden of Disease Study shows a growing impact of substance use on disability-adjusted life years. Successful identification of patients' substance use information equips clinical care teams to address substance-related issues more effectively, enabling targeted support and ultimately improving patient outcomes. OBJECTIVE: Traditional natural language processing methods face limitations in accurately parsing the diverse clinical language associated with substance use. Large language models offer promise in overcoming these challenges by adapting to diverse language patterns. This study investigates the application of the generative pretrained transformer (GPT) model, specifically GPT-3.5, for extracting tobacco, alcohol, and substance use information from patient discharge summaries in zero-shot and few-shot learning settings. This study contributes to the evolving landscape of health care informatics by showcasing the potential of advanced language models in extracting nuanced information critical for enhancing patient care. METHODS: The main data source for this analysis is the Medical Information Mart for Intensive Care III data set; among all notes in this data set, we focused on discharge summaries. Prompt engineering was undertaken, involving an iterative exploration of diverse prompts. Leveraging carefully curated examples and refined prompts, we investigated the model's proficiency under both zero-shot and few-shot prompting strategies. RESULTS: The results show GPT's varying effectiveness in identifying mentions of tobacco, alcohol, and substance use across learning scenarios. Zero-shot learning showed high accuracy in identifying substance use, whereas few-shot learning reduced accuracy overall but improved identification of substance use status, enhancing recall and F1 score at the expense of lower precision. CONCLUSIONS: The excellence of zero-shot learning in precisely extracting text spans mentioning substance use demonstrates its effectiveness in situations in which comprehensive recall is important. Conversely, few-shot learning offers advantages when accurately determining the status of substance use is the primary focus, even if it involves a trade-off in precision. These results contribute to the enhancement of early detection and intervention strategies, help tailor treatment plans with greater precision, and ultimately contribute to a holistic understanding of patient health profiles. By integrating these artificial intelligence-driven methods into electronic health record systems, clinicians can gain immediate, comprehensive insights into substance use, shaping interventions that are not only timely but also more personalized and effective.
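For illustration, a few-shot variant of such prompting might be assembled as below with the OpenAI chat API; the system instruction, the in-context demonstration, the summaries, and the model name are invented placeholders, not the study's prompts or MIMIC-III text.

```python
# Few-shot substance-use extraction sketch (all prompts and examples are placeholders).
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "Identify mentions of tobacco, alcohol, and substance use in the "
                "discharge summary and state each status (current, former, denies)."},
    # One in-context demonstration (few-shot); zero-shot simply omits this pair.
    {"role": "user", "content": "Summary: Former smoker, quit 10 years ago. Denies alcohol."},
    {"role": "assistant", "content": "Tobacco: former. Alcohol: denies. Substance use: not mentioned."},
    {"role": "user", "content": "Summary: Drinks 2-3 beers nightly. No illicit drug use. Never smoker."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo", temperature=0, messages=messages)
print(response.choices[0].message.content)
```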
ABSTRACT
Objective. Left ventricular hypertrophy (LVH) is the thickening of the left ventricular wall of the heart. The objective of this study is to develop a novel approach for the accurate assessment of LVH severity, addressing the limitations of traditional manual grading systems. Approach. We propose the Multi-purpose Siamese Weighted Euclidean Distance Model (MSWED), which utilizes convolutional Siamese neural networks and zero-shot/few-shot learning techniques. Unlike traditional methods, our model introduces a cutoff distance-based approach for zero-shot learning, enhancing accuracy. We also incorporate a weighted Euclidean distance targeting informative regions within echocardiograms. Main results. We collected comprehensive datasets labeled by experienced echocardiographers, including normal hearts and various levels of LVH severity. Our model outperforms existing techniques, demonstrating significant precision enhancement, with improvements of up to 13% for zero-shot and few-shot learning approaches. Significance. Accurate assessment of LVH severity is crucial for clinical prognosis and treatment decisions. Our proposed MSWED model offers a more reliable and efficient solution than traditional grading systems, reducing subjectivity and errors while providing enhanced precision in severity classification.
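The decision rule implied by a weighted Euclidean distance with a zero-shot cutoff can be sketched as follows; the embeddings, weights, class names, and cutoff value are stand-ins rather than the trained MSWED model.

```python
# Weighted Euclidean distance with a cutoff for an unseen class (all values are stand-ins).
import torch

def weighted_euclidean(a, b, w):
    return torch.sqrt(((a - b) ** 2 * w).sum(dim=-1))

emb_dim = 16
weights = torch.rand(emb_dim)                      # learned per-dimension weights (stand-in)
references = {"normal": torch.rand(emb_dim),       # class prototype embeddings (stand-ins)
              "mild_lvh": torch.rand(emb_dim),
              "moderate_lvh": torch.rand(emb_dim)}
cutoff = 1.5                                       # assumed cutoff distance

query = torch.rand(emb_dim)                        # embedding of a new echocardiogram
dists = {c: weighted_euclidean(query, r, weights).item() for c, r in references.items()}
best_class, best_dist = min(dists.items(), key=lambda kv: kv[1])
# Farther than the cutoff from every reference -> assign to the unseen severity class.
print("severe_lvh (unseen)" if best_dist > cutoff else best_class)
```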
Subjects
Hypertrophy, Left Ventricular; Neural Networks, Computer; Humans; Hypertrophy, Left Ventricular/diagnostic imaging; Hypertrophy, Left Ventricular/physiopathology; Echocardiography; Image Processing, Computer-Assisted/methods
ABSTRACT
This study addresses the challenging problem of incremental few-shot object detection (iFSOD) toward online adaptive detection. iFSOD aims to learn novel categories in a sequential manner, with detection eventually performed on all learned categories, and only a few training samples are available for each sequential novel class. In this study, we propose an efficient yet suitably simple framework, Expandable-RCNN, as a solution to the iFSOD problem, which allows new classes to be added online and sequentially with zero retraining of the base network. We achieve this by adapting Faster R-CNN to the few-shot learning scenario with two elegant components that effectively address overfitting and category bias. First, an IoU-aware weight imprinting strategy is proposed to directly determine the classifier weights for incremental novel classes and the background class; it requires zero training and thus avoids the notorious overfitting issue in few-shot learning. Second, since this zero-retraining imprinting approach may lead to undesired category bias in the classifier, we develop a bias correction module for iFSOD, named the group soft-max layer (GSL), which efficiently calibrates the biased predictions of the imprinted classifier to organically improve classification performance for the few-shot classes, preventing catastrophic forgetting. Extensive experiments on MS-COCO show that our method significantly outperforms the state-of-the-art method ONCE by 5.9 points on commonly encountered few-shot classes.
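Weight imprinting itself is simple to sketch: the classifier weight of a novel class is set to the normalized mean embedding of its few support samples, with no gradient updates. The dimensions below are arbitrary, and the IoU-aware weighting and group soft-max layer of Expandable-RCNN are not reproduced.

```python
# Plain weight-imprinting sketch for one novel class (dimensions are arbitrary).
import torch
import torch.nn.functional as F

feat_dim, num_base_classes = 256, 20
classifier = F.normalize(torch.randn(num_base_classes, feat_dim), dim=1)  # existing class weights

support_feats = torch.randn(5, feat_dim)                    # 5 shots of the novel class (stand-in)
novel_weight = F.normalize(support_feats.mean(dim=0), dim=0)

classifier = torch.cat([classifier, novel_weight[None, :]]) # expand with zero retraining
scores = F.normalize(torch.randn(1, feat_dim), dim=1) @ classifier.T
print("predicted class:", scores.argmax(dim=1).item())
```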
ABSTRACT
BACKGROUND: We investigated the potential of an imaging-aware GPT-4-based chatbot in providing diagnoses based on imaging descriptions of abdominal pathologies. METHODS: Utilizing zero-shot learning via the LlamaIndex framework, GPT-4 was enhanced with the 96 documents from the Radiographics Top 10 Reading List on gastrointestinal imaging, creating a gastrointestinal imaging-aware chatbot (GIA-CB). To assess its diagnostic capability, 50 cases covering a variety of abdominal pathologies were created, comprising radiological findings in fluoroscopy, MRI, and CT. We compared the GIA-CB to the generic GPT-4 chatbot (g-CB) in providing the primary and 2 additional differential diagnoses, using interpretations from senior-level radiologists as ground truth. The trustworthiness of the GIA-CB was evaluated by investigating the source documents provided by the knowledge-retrieval mechanism. The Mann-Whitney U test was employed. RESULTS: The GIA-CB demonstrated a high capability to identify the most appropriate differential diagnosis, in 39/50 cases (78%), significantly surpassing the g-CB, with 27/50 cases (54%) (p = 0.006). Notably, the GIA-CB offered the primary differential within its top 3 differential diagnoses in 45/50 cases (90%) versus 37/50 cases (74%) for the g-CB (p = 0.022), and always with appropriate explanations. The median response time was 29.8 s for the GIA-CB and 15.7 s for the g-CB, and the mean cost per case was $0.15 and $0.02, respectively. CONCLUSIONS: The GIA-CB not only provided accurate diagnoses for gastrointestinal pathologies, but also gave direct access to the source documents, providing insight into the decision-making process, a step towards trustworthy and explainable AI. Integrating context-specific data into AI models can support evidence-based clinical decision-making. RELEVANCE STATEMENT: A context-aware GPT-4 chatbot demonstrates high accuracy in providing differential diagnoses based on imaging descriptions, surpassing the generic GPT-4. It provided formulated rationale and source excerpts supporting the diagnoses, thus enhancing trustworthy decision support. KEY POINTS: • Knowledge retrieval enhances differential diagnoses in a gastrointestinal imaging-aware chatbot (GIA-CB). • The GIA-CB outperformed its generic counterpart, providing formulated rationale and source excerpts. • The GIA-CB has the potential to pave the way for AI-assisted decision support systems.
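A retrieval-augmented chatbot of this kind can be assembled in a few lines with LlamaIndex, as in the hedged sketch below; the module paths follow recent llama-index releases and may differ by version, the document folder and query are placeholders, and pointing the default LLM at GPT-4 is configured separately.

```python
# Retrieval-augmented GPT-4 sketch with LlamaIndex (paths and query are placeholders).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./gi_reading_list").load_data()  # local reference PDFs
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query(
    "Fluoroscopy shows a bird-beak narrowing at the gastroesophageal junction "
    "with a dilated esophagus. What are the top 3 differential diagnoses?"
)
print(response)               # generated answer
print(response.source_nodes)  # retrieved source excerpts supporting the answer
```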
Subjects
Artificial Intelligence; Gastrointestinal Diseases; Proof of Concept Study; Humans; Diagnosis, Differential; Gastrointestinal Diseases/diagnostic imaging
ABSTRACT
Generalized zero-shot learning (GZSL) aims to recognize both seen and unseen classes while only samples from seen classes are available for training. Mainstream methods mitigate the lack of unseen training data by synthesizing visual samples of unseen classes. However, the sample generator is learned with only seen-class samples, and semantic descriptions of unseen classes are merely fed to the pre-trained generator to produce unseen data; the generator is therefore biased towards seen categories, and the quality of unseen-class generation, in both precision and diversity, remains the main learning challenge. To this end, we propose Prototype-Guided Generation for Generalized Zero-Shot Learning (PGZSL), which guides sample generation with unseen-class knowledge. First, unseen data generation in PGZSL is guided and rectified by contrastive prototypical anchors with both class semantic consistency and feature discriminability. Second, PGZSL introduces Certainty-Driven Mixup for the generator to enrich the diversity of generated unseen samples while suppressing the generation of uncertain boundary samples. Empirical results on five benchmark datasets show that PGZSL significantly outperforms state-of-the-art methods in both ZSL and GZSL tasks.
Subjects
Machine Learning; Humans; Neural Networks, Computer; Semantics; Algorithms
ABSTRACT
In the field of deep learning, large quantities of data are typically required to effectively train models. This challenge has given rise to techniques like zero-shot learning (ZSL), which trains models on a set of "seen" classes and evaluates them on a set of "unseen" classes. Although ZSL has shown considerable potential, particularly with the employment of generative methods, its generalizability to real-world scenarios remains uncertain. The hypothesis of this work is that the performance of ZSL models is systematically influenced by the chosen "splits"; in particular, the statistical properties of the classes and attributes used in training. In this paper, we test this hypothesis by introducing the concepts of generalizability and robustness in attribute-based ZSL and carry out a variety of experiments to stress-test ZSL models against different splits. Our aim is to lay the groundwork for future research on ZSL models' generalizability, robustness, and practical applications. We evaluate the accuracy of state-of-the-art models on benchmark datasets and identify consistent trends in generalizability and robustness. We analyze how these properties vary based on the dataset type, differentiating between coarse- and fine-grained datasets, and our findings indicate significant room for improvement in both generalizability and robustness. Furthermore, our results demonstrate the effectiveness of dimensionality reduction techniques in improving the performance of state-of-the-art models in fine-grained datasets.
Subjects
Deep Learning; Neural Networks, Computer; Humans; Algorithms; Machine Learning
ABSTRACT
Large language models (LLMs) find increasing applications in many fields. Here, three LLM chatbots (ChatGPT-3.5, ChatGPT-4, and Bard) are assessed in their current, publicly available form for their ability to distinguish Alzheimer's dementia (AD) from cognitively normal (CN) individuals using textual input derived from spontaneous speech recordings. A zero-shot learning approach is used at two levels of independent queries, with the second query (chain-of-thought prompting) eliciting more detailed information than the first. Each LLM chatbot's performance is evaluated on the generated predictions in terms of accuracy, sensitivity, specificity, precision, and F1 score. The LLM chatbots generated a three-class outcome ("AD", "CN", or "Unsure"). When positively identifying AD, Bard produced the highest true-positive rate (89% recall) and highest F1 score (71%), but tended to misidentify CN as AD with high confidence (low "Unsure" rates); for positively identifying CN, GPT-4 achieved the highest true-negative rate at 56% and the highest F1 score (62%), adopting a more diplomatic stance (moderate "Unsure" rates). Overall, the three LLM chatbots can identify AD vs. CN above chance level, but do not currently satisfy the requirements for clinical application.
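One practical detail is how a three-class output ("AD", "CN", "Unsure") is scored against binary ground truth; the sketch below uses one possible convention, counting "Unsure" as an error, with made-up placeholder labels. The paper's exact scoring may differ.

```python
# Scoring three-class chatbot outputs against binary ground truth (placeholder labels).
from sklearn.metrics import precision_recall_fscore_support

truth   = ["AD", "AD", "CN", "CN", "AD", "CN"]          # placeholder ground truth
chatbot = ["AD", "Unsure", "CN", "AD", "AD", "Unsure"]  # placeholder chatbot outputs

# Map "Unsure" to the label opposite the ground truth so it always counts as an error.
pred = [p if p != "Unsure" else ("CN" if t == "AD" else "AD")
        for p, t in zip(chatbot, truth)]

prec, rec, f1, _ = precision_recall_fscore_support(
    truth, pred, labels=["AD"], average=None, zero_division=0)
print(f"AD precision={prec[0]:.2f} recall={rec[0]:.2f} F1={f1[0]:.2f}")
```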
ABSTRACT
Supervised learning-based image classification in computer vision relies on visual samples containing a large amount of labeled information. Considering that it is labor-intensive to collect and label images and construct datasets manually, Zero-Shot Learning (ZSL) achieves knowledge transfer from seen categories to unseen categories by mining auxiliary information, which reduces the dependence on labeled image samples and is one of the current research hotspots in computer vision. However, most ZSL methods fail to properly measure the relationships between classes, or do not consider the differences and similarities between classes at all. In this paper, we propose the Adaptive Relation-Aware Network (ARAN), a novel ZSL approach that incorporates an improved triplet loss from deep metric learning into a VAE-based generative model; this helps to model inter-class and intra-class relationships for the different classes in ZSL datasets and to generate an arbitrary number of high-quality visual features containing more discriminative information. Moreover, we validate the effectiveness and superior performance of ARAN through experimental evaluations under the ZSL and more practical GZSL settings on three popular datasets: AWA2, CUB, and SUN.
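The combination of a VAE objective with a triplet loss on visual features can be sketched in PyTorch as below; the layer sizes, triplet sampling, and loss weighting are assumptions for illustration and do not reproduce ARAN.

```python
# VAE reconstruction/KL objective plus a triplet loss on features (all sizes are assumptions).
import torch
import torch.nn as nn

feat_dim, sem_dim, latent = 2048, 312, 64
encoder = nn.Sequential(nn.Linear(feat_dim + sem_dim, 512), nn.ReLU(), nn.Linear(512, 2 * latent))
decoder = nn.Sequential(nn.Linear(latent + sem_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim))
triplet = nn.TripletMarginLoss(margin=1.0)

x, sem = torch.randn(32, feat_dim), torch.randn(32, sem_dim)     # stand-in feature/semantic batch
mu, logvar = encoder(torch.cat([x, sem], dim=1)).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()             # reparameterization trick
x_rec = decoder(torch.cat([z, sem], dim=1))

recon = nn.functional.mse_loss(x_rec, x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
# Anchor/positive would come from the same class, negative from another (assumed sampler).
anchor, positive, negative = x_rec[:10], x[:10], x[10:20]
loss = recon + kl + triplet(anchor, positive, negative)
loss.backward()
print(float(loss))
```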
ABSTRACT
A vast amount of media-related text data is generated daily in the form of social media posts, news stories or academic articles. These text data provide opportunities for researchers to analyse and understand how substance-related issues are being discussed. The main methods to analyse large text data (content analyses or specifically trained deep-learning models) require substantial manual annotation and resources. A machine-learning approach called 'zero-shot learning' may be quicker, more flexible and require fewer resources. Zero-shot learning uses models trained on large, unlabelled (or weakly labelled) data sets to classify previously unseen data into categories on which the model has not been specifically trained. This means that a pre-existing zero-shot learning model can be used to analyse media-related text data without the need for task-specific annotation or model training. This approach may be particularly important for analysing data that is time critical. This article describes the relatively new concept of zero-shot learning and how it can be applied to text data in substance use research, including a brief practical tutorial.
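One common off-the-shelf route to the kind of zero-shot text classification described here is the Hugging Face zero-shot-classification pipeline backed by an NLI model, as in the sketch below; the model choice, candidate labels, and example post are assumptions, not necessarily what the tutorial uses.

```python
# Zero-shot classification of a social media post (model and labels are assumptions).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "Tried that new vape juice last night, could not sleep at all afterwards."
labels = ["nicotine use", "alcohol use", "cannabis use", "no substance use"]

result = classifier(post, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Because the NLI model was never trained on these specific labels, the same pipeline can be rerun with new candidate labels as research questions change, which is the flexibility the article highlights.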
Subjects
Social Media; Substance-Related Disorders; Humans; Machine Learning
ABSTRACT
PURPOSE: To develop and evaluate methods for (1) reconstructing 3D-quantification using an interleaved Look-Locker acquisition sequence with T2 preparation pulse (3D-QALAS) time-series images using a low-rank subspace method, which enables accurate and rapid T1 and T2 mapping, and (2) improving the fidelity of subspace QALAS by combining scan-specific deep-learning-based reconstruction and subspace modeling. THEORY AND METHODS: A low-rank subspace method for 3D-QALAS (i.e., subspace QALAS) and zero-shot deep-learning subspace method (i.e., Zero-DeepSub) were proposed for rapid and high fidelity T1 and T2 mapping and time-resolved imaging using 3D-QALAS. Using an ISMRM/NIST system phantom, the accuracy and reproducibility of the T1 and T2 maps estimated using the proposed methods were evaluated by comparing them with reference techniques. The reconstruction performance of the proposed subspace QALAS using Zero-DeepSub was evaluated in vivo and compared with conventional QALAS at high reduction factors of up to nine-fold. RESULTS: Phantom experiments showed that subspace QALAS had good linearity with respect to the reference methods while reducing biases and improving precision compared to conventional QALAS, especially for T2 maps. Moreover, in vivo results demonstrated that subspace QALAS had better g-factor maps and could reduce voxel blurring, noise, and artifacts compared to conventional QALAS and showed robust performance at up to nine-fold acceleration with Zero-DeepSub, which enabled whole-brain T1, T2, and PD mapping at 1 mm isotropic resolution within 2 min of scan time. CONCLUSION: The proposed subspace QALAS along with Zero-DeepSub enabled high fidelity and rapid whole-brain multiparametric quantification and time-resolved imaging.
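For orientation, a generic low-rank temporal-subspace formulation of this kind of reconstruction is sketched below; the notation is an assumption and is not taken from the paper.

```latex
% Generic low-rank temporal-subspace reconstruction (notation is illustrative).
\begin{align}
  \mathbf{x}(\mathbf{r}, t) &\approx \sum_{k=1}^{K} \phi_k(t)\, c_k(\mathbf{r})
     \;=\; \boldsymbol{\Phi}\, \mathbf{c}(\mathbf{r}), \\
  \hat{\mathbf{c}} &= \arg\min_{\mathbf{c}}
     \bigl\lVert \mathbf{E}\, \boldsymbol{\Phi}\, \mathbf{c} - \mathbf{y} \bigr\rVert_2^2
     + \lambda\, \mathcal{R}(\mathbf{c}),
\end{align}
```

Here the temporal basis Φ (K components) is typically obtained from an SVD of a simulated signal dictionary, E collects coil sensitivities, Fourier encoding, and undersampling of the measured data y, and R is a generic regularizer; T1 and T2 maps are then estimated from the reconstructed coefficient maps c. How Zero-DeepSub combines the scan-specific deep-learning reconstruction with this subspace model is detailed in the paper itself.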