OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh

Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate text. However, the datasets used to train these models have not been released, and their collection processes have not been fully specified. We introduce the OBELISC dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELISC, we train an 80 billion parameter vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself.