iEnhancer-CLA: Self-attention-based interpretable model for enhancers and their strength prediction

bioRxiv (2021)

Abstract
Enhancers are a class of non-coding cis-acting DNA elements that play a crucial role in transcriptional regulation during eukaryotic development. Computational methods for predicting enhancers have been developed and achieve satisfactory performance. However, existing methods rely on experience-based feature engineering and lack interpretability, which not only limits the representational ability of the models but also makes it difficult to provide an interpretable analysis of their predictions.

In this paper, we propose a novel deep-learning-based model, iEnhancer-CLA, for identifying enhancers and their strengths. Specifically, iEnhancer-CLA automatically learns one-dimensional sequence features through multiscale convolutional neural networks (CNNs) and employs a self-attention mechanism to represent global features formed by interactions among multiple elements (multibody effects). In particular, the model can provide an interpretable analysis of enhancer motifs and key base signals by decoupling the CNN modules and inspecting the self-attention weights. To avoid the bias of manually chosen hyperparameters, we apply Bayesian optimization to obtain globally optimized model hyperparameters. The results demonstrate that our method outperforms existing predictors in accuracy for identifying enhancers and their strengths. Importantly, our analyses found that the distribution of bases in enhancers is uneven, with base G more enriched, whereas the distribution of bases in non-enhancers is relatively even. This finding contributes to improved prediction performance and facilitates a deeper understanding of the potential functional mechanisms of enhancers.

### Competing Interest Statement

The authors have declared no competing interest.
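The pipeline the abstract describes (one-hot DNA encoding, multiscale convolutions, then self-attention whose weights can be inspected for interpretability) can be sketched as follows. This is not the authors' code: all layer widths, filter counts, and the random weights are illustrative assumptions, and a real model would learn the weights and add a classifier head.

```python
# Minimal NumPy sketch of an iEnhancer-CLA-style pipeline:
# one-hot DNA -> multiscale 1D convolutions -> single-head self-attention.
import numpy as np

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq):
    # (L, 4) one-hot matrix for a DNA sequence
    return np.eye(4)[[BASES.index(b) for b in seq]]

def conv1d_relu(x, kernel):
    # valid 1D convolution with ReLU: x (L, C_in), kernel (k, C_in, C_out)
    k = kernel.shape[0]
    windows = np.stack([x[i:i + k].reshape(-1) for i in range(len(x) - k + 1)])
    return np.maximum(windows @ kernel.reshape(k * x.shape[1], -1), 0)

def self_attention(x):
    # scaled dot-product self-attention; the weight matrix `w` is the
    # per-position attention map one would inspect for interpretability
    d = x.shape[1]
    Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v, w

seq = "ACGTGGGCAGTCGGGAAGCT"
x = one_hot(seq)
# "multiscale" CNN: kernel widths 3 and 5 with 8 filters each,
# outputs truncated to a common length and concatenated channel-wise
feats = [conv1d_relu(x, rng.normal(scale=0.1, size=(k, 4, 8))) for k in (3, 5)]
n = min(f.shape[0] for f in feats)
h = np.concatenate([f[:n] for f in feats], axis=1)
out, attn = self_attention(h)
score = out.mean()  # a pooled scalar a classifier head would act on
```

Each row of `attn` sums to 1 and says how strongly that sequence position attends to every other position, which is what makes the global "multibody" features inspectable.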