Masked Capsule Autoencoders
CoRR (2024)
Abstract
We propose Masked Capsule Autoencoders (MCAE), the first Capsule Network that
utilises pretraining in a self-supervised manner. Capsule Networks have emerged
as a powerful alternative to Convolutional Neural Networks (CNNs), and have
shown favourable properties when compared to Vision Transformers (ViT), but
have struggled to effectively learn when presented with more complex data,
leading to Capsule Network models that do not scale to modern tasks. Our
proposed MCAE model alleviates this issue by reformulating the Capsule Network
to use masked image modelling as a pretraining stage before finetuning in a
supervised manner. Across several experiments and ablation studies we
demonstrate that similarly to CNNs and ViTs, Capsule Networks can also benefit
from self-supervised pretraining, paving the way for further advancements in
this neural network domain. For instance, pretraining on Imagenette, a dataset
of 10 classes of ImageNet-sized images, we achieve not only state-of-the-art
results for Capsule Networks but also a 9% improvement
compared to purely supervised training. Thus we propose that Capsule Networks
benefit from and should be trained within a masked image modelling framework,
with a novel capsule decoder, to improve a Capsule Network's performance on
realistic-sized images.
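
The pretrain-then-finetune recipe the abstract describes can be illustrated with a short sketch: patchify an image, hide most patches, encode only the visible ones into capsule-style pose vectors, and train a decoder to reconstruct the hidden patches. Below is a minimal PyTorch sketch under those assumptions; TinyCapsuleEncoder, TinyCapsuleDecoder, mask_patches, and all dimensions are hypothetical stand-ins chosen for illustration, not the paper's MCAE architecture or its novel capsule decoder.

import torch
import torch.nn as nn

class TinyCapsuleEncoder(nn.Module):
    # Hypothetical stand-in: maps each visible patch to a set of capsule pose vectors.
    def __init__(self, patch_dim, num_caps=8, caps_dim=16):
        super().__init__()
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.proj = nn.Linear(patch_dim, num_caps * caps_dim)

    def forward(self, x):                       # x: (B, n_visible, patch_dim)
        B, N, _ = x.shape
        return self.proj(x).view(B, N, self.num_caps, self.caps_dim)

class TinyCapsuleDecoder(nn.Module):
    # Hypothetical stand-in: reconstructs patch pixels from capsule poses.
    def __init__(self, patch_dim, num_caps=8, caps_dim=16):
        super().__init__()
        self.proj = nn.Linear(num_caps * caps_dim, patch_dim)

    def forward(self, caps):                    # caps: (B, N, num_caps, caps_dim)
        B, N = caps.shape[:2]
        return self.proj(caps.reshape(B, N, -1))

def mask_patches(patches, mask_ratio=0.75):
    # Randomly keep a fraction of patches; return visible patches and their indices.
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    keep = torch.rand(B, N).argsort(dim=1)[:, :n_keep]      # (B, n_keep)
    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, keep

# Toy pretraining step on random data standing in for patchified images.
B, N, D = 4, 16, 48                             # batch, patches per image, patch_dim
NUM_CAPS, CAPS_DIM = 8, 16
patches = torch.randn(B, N, D)
encoder = TinyCapsuleEncoder(D, NUM_CAPS, CAPS_DIM)
decoder = TinyCapsuleDecoder(D, NUM_CAPS, CAPS_DIM)
mask_token = nn.Parameter(torch.zeros(1, 1, NUM_CAPS * CAPS_DIM))

visible, keep = mask_patches(patches)
caps = encoder(visible).flatten(2)              # (B, n_keep, NUM_CAPS*CAPS_DIM)
full = mask_token.expand(B, N, -1).clone()      # learned token at every position...
full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, caps.size(-1)), caps)
recon = decoder(full.view(B, N, NUM_CAPS, CAPS_DIM))

hidden = torch.ones(B, N, dtype=torch.bool)
hidden.scatter_(1, keep, False)                 # True where the patch was masked out
loss = ((recon - patches)[hidden] ** 2).mean()  # reconstruct only the hidden patches
loss.backward()
print(f"masked-reconstruction loss: {loss.item():.4f}")

In the usual masked-autoencoder recipe this sketch follows, the decoder is discarded after pretraining and the encoder is finetuned with labels, mirroring the supervised finetuning stage the abstract describes.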