Diffusion on language model embeddings for protein sequence generation
CoRR (2024)
Abstract
Protein design requires a deep understanding of the inherent complexities of
the protein universe. While many efforts lean towards conditional generation or
focus on specific families of proteins, the foundational task of unconditional
generation remains underexplored and undervalued. Here, we explore this pivotal
domain, introducing DiMA, a model that applies continuous diffusion to
embeddings derived from the protein language model ESM-2 to generate amino
acid sequences. DiMA surpasses leading solutions, including autoregressive
transformer-based and discrete diffusion models, and we quantitatively
illustrate the impact of the design choices that lead to its superior
performance. We extensively evaluate the quality, diversity, distribution
similarity, and biological relevance of the generated sequences using multiple
metrics across various modalities. Our approach consistently produces novel,
diverse protein sequences that accurately reflect the inherent structural and
functional diversity of the protein space. This work advances the field of
protein design and sets the stage for conditional models by providing a robust
framework for scalable and high-quality protein sequence generation.
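
The abstract names the core mechanism, continuous diffusion over ESM-2 embeddings, without spelling it out. The sketch below illustrates one training step of Gaussian diffusion on such embeddings; it is not the authors' implementation. The checkpoint name facebook/esm2_t6_8M_UR50D, the linear noise schedule, and the toy MLP denoiser are illustrative assumptions.

```python
# Minimal sketch (assumed details, not DiMA's actual code): one DDPM-style
# training step of continuous Gaussian diffusion on ESM-2 embeddings.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, EsmModel

# Encode a protein sequence into continuous per-residue embeddings.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
encoder = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
tokens = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    x0 = encoder(**tokens).last_hidden_state  # (1, L, d) clean embeddings

# Linear noise schedule; a real system would tune this carefully.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    ab = alphas_bar[t]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

# Hypothetical denoiser: a tiny MLP that regresses the injected noise.
# (A realistic model would be a transformer conditioned on the timestep t.)
d = x0.shape[-1]
denoiser = nn.Sequential(nn.Linear(d, 512), nn.GELU(), nn.Linear(512, d))

# One training step: sample a timestep, corrupt, regress the noise.
t = torch.randint(0, T, (1,))
noise = torch.randn_like(x0)
x_t = q_sample(x0, t, noise)
loss = nn.functional.mse_loss(denoiser(x_t), noise)
loss.backward()
print(f"t={t.item()}  mse={loss.item():.4f}")
```

A complete pipeline would additionally condition the denoiser on the timestep, run the reverse (sampling) process from pure noise, and decode the denoised embeddings back into amino acid sequences; those components are omitted here.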