INDUS: Effective and Efficient Language Models for Scientific Applications
CoRR (2024)
Abstract
Large language models (LLMs) trained on general-domain corpora have shown remarkable results on natural language processing (NLP) tasks. However, previous research has demonstrated that LLMs trained on domain-focused corpora perform better on specialized tasks. Inspired by this insight, we developed INDUS, a comprehensive suite of LLMs tailored to the Earth science, biology, physics, heliophysics, planetary sciences and astrophysics domains and trained on curated scientific corpora drawn from diverse data sources. The suite includes: (1) an encoder model trained with a domain-specific vocabulary and corpora to address natural language understanding tasks, (2) a contrastive-learning-based general text embedding model trained on a diverse set of datasets drawn from multiple sources to address information retrieval tasks, and (3) smaller versions of these models created using knowledge distillation techniques for applications with latency or resource constraints. We also created three new scientific benchmark datasets, namely CLIMATE-CHANGE-NER (entity recognition), NASA-QA (extractive QA) and NASA-IR (IR), to accelerate research in these multi-disciplinary fields. Finally, we show that our models outperform both general-purpose encoders (RoBERTa) and existing domain-specific encoders (SciBERT) on these new tasks as well as existing benchmark tasks in the domains of interest.
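The abstract mentions a contrastive-learning-based text embedding model for retrieval but does not specify the training objective. As a rough illustration only, the sketch below shows a common choice for such models: an InfoNCE-style loss with in-batch negatives over paired texts (e.g., query and relevant passage). The function name, temperature value, and pooling assumptions are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  passage_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """Contrastive (InfoNCE) loss with in-batch negatives.

    query_emb, passage_emb: (batch, dim) embeddings of paired texts,
    e.g., produced by mean-pooling an encoder's token states.
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    # Similarity of every query against every passage in the batch;
    # the diagonal entries are the positive pairs, all other entries
    # serve as negatives.
    logits = q @ p.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```

In this setup both texts are typically encoded by the same (or a shared-weight) encoder, and the temperature and negative-sampling strategy are tuned per dataset; the paper's exact configuration may differ.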