Designing universal causal deep learning models: The geometric (Hyper)transformer

Mathematical Finance (2024)

Abstract
Several problems in stochastic analysis are defined through their geometry, and preserving that geometric structure is essential to generating meaningful predictions. Nevertheless, how to design principled deep learning (DL) models capable of encoding these geometric structures remains largely unknown. We address this open problem by introducing a universal causal geometric DL framework in which the user specifies a suitable pair of metric spaces $\mathcal{X}$ and $\mathcal{Y}$, and our framework returns a DL model capable of causally approximating any "regular" map sending time series in $\mathcal{X}^{\mathbb{Z}}$ to time series in $\mathcal{Y}^{\mathbb{Z}}$ while respecting their forward flow of information throughout time. Suitable geometries on $\mathcal{Y}$ include various (adapted) Wasserstein spaces arising in optimal stopping problems, a variety of statistical manifolds describing the conditional distributions of continuous-time finite-state Markov chains, and all Fréchet spaces admitting a Schauder basis, for example, as in classical finance. Suitable spaces $\mathcal{X}$ are compact subsets of any Euclidean space. All of our results quantitatively express the number of parameters needed for the DL model to achieve a given approximation error as a function of the target map's regularity and of the geometric structure of both $\mathcal{X}$ and $\mathcal{Y}$. Even when any temporal structure is omitted, our universal approximation theorems are the first guarantees that Hölder functions defined between such $\mathcal{X}$ and $\mathcal{Y}$ can be approximated by DL models.
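The paper's construction is not reproduced here, but the hypernetwork idea at its core can be illustrated with a minimal NumPy sketch: a hypernetwork reads a causal window of the input path and emits the weights of a small feedforward "main" network, which then maps the latest observation into the output space. All names and dimensions below are hypothetical, and the plain Euclidean encodings of $\mathcal{X}$ and $\mathcal{Y}$ stand in for the geometry-aware feature maps the paper develops; this is an illustration of the mechanism, not the authors' architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(x, W1, b1, W2, b2):
        # Two-layer ReLU network: x -> ReLU(x W1 + b1) W2 + b2.
        return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

    # Hypothetical sizes: d_in coordinates for points of X, d_out for a
    # chart of Y, w the causal look-back window, h the main net's width.
    d_in, d_out, w, h = 3, 2, 5, 16

    # Number of parameters the hypernetwork must emit for the main network.
    n_gen = (d_in * h + h) + (h * d_out + d_out)

    # Hypernetwork weights (random stand-ins; in practice these are trained).
    H1 = rng.normal(size=(w * d_in, 64)) * 0.1
    c1 = np.zeros(64)
    H2 = rng.normal(size=(64, n_gen)) * 0.1
    c2 = np.zeros(n_gen)

    def step(window):
        # One causal step: the hypernetwork sees only the past/present
        # window, emits the main network's weights, and the main network
        # maps the latest observation into (a chart of) Y.
        theta = mlp(window.reshape(-1), H1, c1, H2, c2)
        i = 0
        W1 = theta[i:i + d_in * h].reshape(d_in, h); i += d_in * h
        b1 = theta[i:i + h]; i += h
        W2 = theta[i:i + h * d_out].reshape(h, d_out); i += h * d_out
        b2 = theta[i:i + d_out]
        return mlp(window[-1], W1, b1, W2, b2)

    # Causally transform a toy X-valued path into a Y-valued path: the
    # output at time t depends only on inputs up to time t.
    xs = rng.normal(size=(20, d_in))
    xs_padded = np.vstack([np.zeros((w - 1, d_in)), xs])
    ys = np.array([step(xs_padded[t:t + w]) for t in range(len(xs))])
    print(ys.shape)  # (20, 2)

Causality is enforced directly by the padded sliding window: nothing after time t ever enters the computation of ys[t].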
Keywords
adapted optimal transport,geometric deep learning,hypernetworks,metric geometry,random projection,stochastic processes,transformer networks,universal approximation