Achievable Error Exponents for Almost Fixed-Length M-ary Classification

2023 IEEE International Symposium on Information Theory (ISIT)

Abstract
We revisit the multiple classification problem and propose a two-phase test, where each phase is a fixed-length test and the second phase proceeds only if a reject option is decided in the first phase. We derive the achievable error exponent under each hypothesis and show that our two-phase test bridges the fixed-length test of Gutman (TIT 1989) and the sequential test of Haghifam, Tan, and Khisti (TIT 2021). In contrast to the fixed-length test of Gutman, which requires an additional reject option, our test with properly chosen parameters achieves error exponents close to those of the sequential test of Haghifam, Tan, and Khisti without a reject option. We generalize the result of Lalitha and Javidi (ISIT 2016) for binary hypothesis testing to the more practical family of M-ary statistical classification problems, where there are more than two possible test outcomes and the generating distribution under each hypothesis is unknown.
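The two-phase structure described above can be sketched in code. The snippet below is a minimal illustrative simulation, not the authors' exact test: it assumes a Gutman-style decision rule based on the generalized Jensen-Shannon (GJS) divergence between empirical distributions, with the threshold `lam`, the phase lengths `n1` and `n2`, and the ratio parameter `alpha` as hypothetical test parameters. Phase 1 is a fixed-length test with a reject option; if it rejects, phase 2 draws additional samples and decides by minimum score, with no further reject.

```python
import math
from collections import Counter

def empirical(seq, alphabet):
    """Empirical distribution of a sequence over a fixed alphabet."""
    counts = Counter(seq)
    n = len(seq)
    return {a: counts[a] / n for a in alphabet}

def kl(p, q):
    """KL divergence D(p || q); terms with p[a] == 0 contribute 0."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p if p[a] > 0)

def gjs(p, q, alpha):
    """Generalized Jensen-Shannon divergence, as in Gutman-style tests."""
    mix = {a: (alpha * p[a] + q[a]) / (1 + alpha) for a in p}
    return alpha * kl(p, mix) + kl(q, mix)

def two_phase_classify(train_seqs, draw_test, n1, n2, lam, alpha):
    """Two-phase M-ary classification sketch.

    Phase 1: fixed-length test of n1 samples with a reject option
    (decide hypothesis i only if it is the unique one with GJS score
    below the threshold lam).  Phase 2: on reject, draw n2 more
    samples and decide by minimum score, without a reject option.
    """
    alphabet = sorted({x for s in train_seqs for x in s})
    trains = [empirical(s, alphabet) for s in train_seqs]

    # Phase 1: fixed-length test with reject
    test1 = draw_test(n1)
    q1 = empirical(test1, alphabet)
    scores = [gjs(p, q1, alpha) for p in trains]
    below = [i for i, s in enumerate(scores) if s < lam]
    if len(below) == 1:
        return below[0]  # confident phase-1 decision

    # Phase 2: reject triggered -> longer sample, decide by argmin
    test2 = test1 + draw_test(n2)
    q2 = empirical(test2, alphabet)
    scores2 = [gjs(p, q2, alpha) for p in trains]
    return min(range(len(trains)), key=lambda i: scores2[i])
```

When the phase-1 threshold is easy to clear, the test usually stops after `n1` samples (matching the fixed-length regime); when it is stringent, most sample paths continue to phase 2, which is how the expected stopping time interpolates toward the sequential regime.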
Keywords
Hypothesis Testing, Classification, Large Deviations, Two-Phase Test, Neyman-Pearson, Bayesian