CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision
CoRR (2024)
Abstract
Binary code representation learning has shown strong performance on
binary analysis tasks. However, existing solutions often transfer poorly,
particularly in few-shot and zero-shot scenarios where few or no training
samples are available for the task at hand. To address this problem, we present CLAP
(Contrastive Language-Assembly Pre-training), which employs natural language
supervision to learn better representations of binary code (i.e., assembly
code) and improve transferability. At its core, our approach achieves superior
transfer learning by effectively aligning binary code with its semantic
explanations in natural language, yielding a model that generates better
embeddings for binary code. To enable this alignment training,
we propose an efficient dataset engine that automatically generates a
large and diverse dataset comprising binary code and corresponding natural-language
explanations. We generated 195 million pairs of binary code and
explanations and trained a prototype of CLAP. Evaluations across
various downstream binary analysis tasks all demonstrate exceptional
performance. Notably, without any task-specific training, CLAP is often
competitive with a fully supervised baseline, showing excellent
transferability. We release our pre-trained model and code at
https://github.com/Hustcw/CLAP.
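
The abstract does not spell out the training objective, but the CLIP-style name suggests a symmetric contrastive (InfoNCE) loss over paired assembly/explanation embeddings. Below is a minimal PyTorch sketch under that assumption; the function name, input tensors, and temperature value are illustrative, not taken from the paper or its released code.

```python
import torch
import torch.nn.functional as F

def clap_style_contrastive_loss(asm_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of (assembly, explanation) pairs.

    Assumption: row i of asm_emb and row i of text_emb come from the same
    pair; every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product equals cosine similarity.
    asm_emb = F.normalize(asm_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(asm_i, text_j) / T.
    logits = asm_emb @ text_emb.t() / temperature

    # Matching pairs lie on the diagonal.
    targets = torch.arange(asm_emb.size(0), device=asm_emb.device)

    # Cross-entropy in both directions: assembly-to-text and text-to-assembly.
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2t + loss_t2a) / 2
```

An objective of this shape would also explain the zero-shot behavior the abstract reports: once assembly and natural-language embeddings share a space, a downstream task can be posed as nearest-neighbor retrieval between a binary function's embedding and the embeddings of textual label descriptions, with no task-specific training.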