Ferroelectric FET Based In-Memory Computing for Few-Shot Learning

Proceedings of the 2019 on Great Lakes Symposium on VLSI (2019)

Cited by 31
Abstract
As CMOS technology advances, the performance gap between the CPU and main memory has not improved. Furthermore, the hardware deployed for Internet of Things (IoT) applications needs to process ever growing volumes of data, which can further exacerbate the "memory wall". Computing-in-memory (CiM) architectures, where logic and arithmetic operations are performed in memory, can significantly reduce the energy and latency overheads associated with data transfer, and potentially alleviate processor-memory bottlenecks. In this paper, we consider the utility of ternary content addressable memory (TCAM) arrays and CiM arrays based on ferroelectric field effect transistors (FeFETs) to support emerging machine learning models that can learn new classes of data with significantly less training overhead, a property that is highly desirable in IoT applications. Architecturally, we use TCAM and CiM arrays to implement the external memory module in a memory augmented neural network (MANN), which can be used to minimize catastrophic forgetting, a major problem in applications such as lifelong and few-shot learning. As a representative example, we achieve 95.14% accuracy for a few-shot learning task with the Omniglot data set by using a combined L∞ and L1 distance metric computed via a TCAM-CiM cascaded architecture (as opposed to 99.06% accuracy assuming a GPU backed by DRAM). While there is a slight drop in accuracy, the TCAM-CiM approach is 4.34X faster and 4.18X more energy efficient than a CMOS implementation for the same task. The ability of an FeFET to serve as both a compact logic and storage element helps to enable dense CiM and TCAM structures that drive the aforementioned improvements to application-level figures of merit (FOMs).
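To make the cascaded lookup concrete, the sketch below is a minimal software emulation of how a TCAM stage (L∞ pre-selection over all stored keys) might feed a CiM stage (L1 refinement over the surviving candidates) when querying the MANN's external memory. The function name, the `n_candidates` parameter, and the NumPy-based emulation are illustrative assumptions, not the paper's hardware implementation or measured configuration.

```python
import numpy as np

def cascaded_lookup(query, keys, labels, n_candidates=8):
    """Software emulation of a TCAM-CiM cascaded distance lookup.

    Stage 1 (TCAM-like): rank all stored keys by L-infinity distance
    to the query and keep the closest `n_candidates` entries.
    Stage 2 (CiM-like): among those candidates, return the label of
    the key with the smallest L1 distance.
    """
    # L∞ distance from the query to every stored key
    linf = np.max(np.abs(keys - query), axis=1)
    # TCAM-style pre-selection of the nearest candidates under L∞
    candidates = np.argsort(linf)[:n_candidates]
    # CiM-style L1 refinement over the candidate set only
    l1 = np.sum(np.abs(keys[candidates] - query), axis=1)
    return labels[candidates[np.argmin(l1)]]

# Hypothetical usage with random feature vectors standing in for
# learned Omniglot embeddings.
rng = np.random.default_rng(0)
keys = rng.random((128, 64))        # 128 stored feature vectors
labels = rng.integers(0, 5, 128)    # class label per stored key
query = rng.random(64)
print(cascaded_lookup(query, keys, labels))
```

In this emulation, the L∞ stage mirrors a TCAM's parallel match operation (cheap, coarse filtering), while the L1 stage mirrors the CiM array's arithmetic refinement, which is one plausible reading of why the combined metric trades a small amount of accuracy for the reported speed and energy gains.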
Keywords
compute-in-memory, fefet, few-shot learning, lifelong learning, mann, neural networks, tcam