
Contextualized Embeddings based Transformer Encoder for Sentence Similarity Modeling in Answer Selection Task

LREC, pp. 5505-5514 (2020)


Abstract

Word embeddings that consider context have attracted great attention for various natural language processing tasks in recent years. In this paper, we utilize contextualized word embeddings with the transformer encoder for sentence similarity modeling in the answer selection task. We present two different approaches (feature-based and fine-tuning-based). […]

Introduction
  • Measuring the similarity between question answering pairs (Yih et al, 2013) is a fundamental problem in the areas of Information Retrieval and Natural Language Processing (NLP).
  • The BERT model (Devlin et al., 2019) can generate contextual embeddings, like ELMo, by utilizing the transformer encoder (Vaswani et al., 2017), and yields very good results on tasks such as named-entity recognition.
  • Since these contextual embeddings can capture a better representation of a sentence by generating the embedding of each word based on its surrounding context, the authors are motivated to use them for sentence similarity modeling in the answer selection task.
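The motivation above can be illustrated with a toy sketch (not the paper's method): a static lookup table assigns one fixed vector per word, while a contextualized embedding of a word depends on its neighbors, so the same word receives different vectors in different sentences. The `STATIC` table and the windowed averaging below are invented purely for illustration.

```python
# Toy illustration: a static lookup gives "bank" one fixed vector,
# while a (crudely) contextualized embedding mixes in neighboring words,
# so "bank" near "river" differs from "bank" near "money".
STATIC = {"bank": (1.0, 0.0), "river": (0.0, 1.0), "money": (0.5, -0.5)}

def contextual_embedding(tokens, i, window=1):
    """Average the static vector of tokens[i] with its neighbors' vectors."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    span = [STATIC[t] for t in tokens[lo:hi]]
    return tuple(sum(vec[d] for vec in span) / len(span) for d in range(2))

river_bank = contextual_embedding(["river", "bank"], 1)  # (0.5, 0.5)
money_bank = contextual_embedding(["money", "bank"], 1)  # (0.75, -0.25)
```

Real contextual models such as ELMo and BERT achieve this with deep bidirectional networks rather than window averaging; the sketch only shows why a single static vector per word cannot disambiguate senses.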
Highlights
  • Measuring the similarity between question answering pairs (Yih et al, 2013) is a fundamental problem in the areas of Information Retrieval and Natural Language Processing (NLP)
  • We observe new state-of-the-art results in all community question answering (CQA) datasets by fine-tuning both Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT Pretraining Approach (RoBERTa) models. Though both Base and Large versions of BERT and RoBERTa provide state-of-the-art results across the CQA datasets, we find that the Large version outperforms the Base version in all of them
  • We find that integrating transformer encoder with contextual embeddings improves the performance by 43.73%, 25.23%, and 26.34% in terms of Mean Average Precision (MAP) and 41.32%, 21.27%, and 24.08% in terms of Mean Reciprocal Rank (MRR) in Embeddings from Language Models (ELMo), BERTBase, and BERTLarge respectively
  • We present two approaches to utilize contextualized embeddings with the transformer encoder for the answer selection task
  • We find that our approach of fine-tuning the pre-trained transformer encoder models for answer selection is very effective even without the leverage of transfer learning from large corpora
  • We observe that combining contextual embeddings with the transformer encoder improves performance over models that use only contextual embeddings
  • We observe that our fine-tuned RoBERTa model sets new state-of-the-art results on all six datasets in terms of MRR
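The MAP and MRR figures quoted above follow the standard definitions of the two ranking metrics, sketched here in plain Python (the function names are ours, not from the paper):

```python
def average_precision(labels):
    """AP for one question: `labels` marks each ranked candidate 1 (correct) or 0."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(labels, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at each correct answer's rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def reciprocal_rank(labels):
    """1 / rank of the first correct candidate, or 0 if none is correct."""
    for rank, relevant in enumerate(labels, start=1):
        if relevant:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_labels_per_question):
    """Mean Average Precision and Mean Reciprocal Rank over all questions."""
    aps = [average_precision(labels) for labels in ranked_labels_per_question]
    rrs = [reciprocal_rank(labels) for labels in ranked_labels_per_question]
    n = len(ranked_labels_per_question)
    return sum(aps) / n, sum(rrs) / n
```

For example, if a question's candidates are ranked [0, 1, 1] (correct answers at ranks 2 and 3), its AP is (1/2 + 2/3)/2 = 7/12 and its reciprocal rank is 1/2.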
Methods
  • To evaluate the effectiveness of the approach, the authors ran experiments on six different datasets.
  • The authors present the description of the datasets, evaluation metrics, the training procedure and parameter settings used in the experiments.
  • The authors used six datasets for the answer selection task, as shown in Table 2.
  • TREC-QA: This dataset is created from the QA track (8-13) of the Text REtrieval Conference (Wang et al., 2007).
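On these datasets, answer selection reduces to ranking each question's candidate answers by a similarity score and then evaluating the ranked list. A minimal word-overlap (Jaccard) baseline, shown below purely to make the task setup concrete (it is not one of the paper's models), illustrates the pipeline:

```python
def jaccard_score(question, answer):
    """Word-overlap similarity between a question and a candidate answer."""
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / len(q | a) if q | a else 0.0

def rank_candidates(question, candidates, score_fn=jaccard_score):
    """Return candidates sorted by score, best first, as the ranked answer list."""
    return sorted(candidates, key=lambda ans: score_fn(question, ans), reverse=True)

question = "who wrote hamlet"
candidates = ["the sky is blue today", "shakespeare wrote hamlet in 1601"]
ranked = rank_candidates(question, candidates)
```

The paper's models replace `jaccard_score` with scores from contextualized-embedding models; the evaluation over the resulting ranked lists is unchanged.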
Results
  • The authors performed extensive experiments to compare the contextualized embeddings based transformer encoder (CETE) with recent progress.
  • The XLNet model did not use the original transformer encoder (Vaswani et al., 2017).
  • It utilized ideas from the Transformer-XL model (Dai et al., 2019) by incorporating the segment recurrence mechanism and the relative positional encoding scheme into pre-training.
  • It proposed permutation-based language modeling to capture bidirectional context.
Conclusion
  • The authors present two approaches to utilize contextualized embeddings with the transformer encoder for the answer selection task.
  • The authors find that the approach of fine-tuning the pre-trained transformer encoder models for answer selection is very effective even without the leverage of transfer learning from large corpora.
  • The authors will investigate the performance of transformer based models on more tasks, such as information retrieval applications (Huang and Hu, 2009; Huang et al, 2003; Yin et al, 2013; Huang et al, 2005), sentiment analysis (Liu et al, 2007; Yu et al, 2012), learning from imbalanced datasets (Liu et al, 2006), named-entity recognition (Bari et al, 2019), and query focused abstractive summarization (Nishida et al, 2019; Nema et al, 2017)
Tables
  • Table 1: An example of the Answer Selection Task. A question along with a list of candidate answers is given; the text in bold font is the correct answer.
  • Table 2: Dataset overview (‘#’ denotes ‘Number of’ and ‘RAW’ indicates the ‘Original’ version).
  • Table 3: Performance comparisons with recent progress on the TREC-QA and WikiQA datasets.
  • Table 4: Performance comparisons with recent progress on the YahooCQA and SemEvalCQA datasets.
Funding
  • This research is supported by the Natural Sciences & Engineering Research Council (NSERC) of Canada and an ORF-RE (Ontario Research Fund-Research Excellence) award in BRAIN Alliance
Study subjects and analysis

datasets: 6
We used six publicly available datasets for the answer selection task, as shown in Table 2: two widely used question answering (QA) datasets, namely TREC-QA and WikiQA, and four community question answering (CQA) datasets, namely YahooCQA, SemEval-2015CQA, SemEval-2016CQA, and SemEval-2017CQA. Previous works based on pre-trained transformer encoders for answer selection (Garg et al., 2019; Lai et al., 2019; Laskar et al., 2019) were only evaluated on the TREC-QA and WikiQA datasets, not on community answer selection datasets such as SemEvalCQA (Nakov et al., 2015; Nakov et al., 2016; Nakov et al., 2017) and YahooCQA (Tay et al., 2017); in comparison, we conduct a series of experiments on all six datasets to investigate the robustness of our approach.

datasets: 5
In the second approach, we fine-tune two pre-trained transformer encoder models for the answer selection task. Based on our experiments on six datasets, we find that the fine-tuning approach outperforms the feature-based approach on all of them. Among our fine-tuning-based models, the Robustly Optimized BERT Pretraining Approach (RoBERTa) model results in new state-of-the-art performance across five datasets, and our fine-tuned RoBERTa model sets new state-of-the-art results on all six datasets in terms of MRR.

QA pairs: 1148
The difference between the two versions of TREC-QA is that the RAW version has some questions for which there is no answer, or only positive or only negative answers, whereas the Cleaned version removes those instances from the development and test sets. As a result, the RAW version contains 1148 QA pairs in the development set and 1517 QA pairs in the test set, whereas the Cleaned version contains 1117 and 1442, respectively. WikiQA is an open-domain QA dataset (Yang et al., 2015) in which the answers were collected from Wikipedia.

samples: 4
Each question in the YahooCQA dataset is associated with at most one correct answer. The negative answers were generated by sampling 4 candidates from the top 1000 hits obtained via a Lucene search. There are 253440, 31680, and 31680 QA pairs in the training, development, and test sets, respectively. SemEval-2015CQA is a CQA dataset created from the Qatar Living forums.

CQA datasets: 4
We show the performance of our models on the four CQA datasets in Table 4. Our proposed approach of integrating the transformer encoder with ELMo or BERT outperforms the baseline on all the CQA datasets, and its performance is comparable to or better than many recent works (Tay et al., 2018; Tymoshenko and Moschitti, 2018; Chen et al., 2018a; Rao et al., 2019).

Keywords: Answer Selection, Transformer Encoder, Contextualized Embeddings, ELMo, BERT, RoBERTa, Deep Learning
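A common way to turn per-token contextual embeddings into a sentence-level similarity, as in feature-based setups, is to pool the token vectors into one sentence vector and compare sentence vectors with cosine similarity. The sketch below assumes token vectors are already available as plain lists of floats; mean pooling is one common choice, not necessarily the paper's exact configuration.

```python
import math

def mean_pool(token_vecs):
    """Collapse a list of per-token vectors into one sentence vector (mean pooling)."""
    dim = len(token_vecs[0])
    return [sum(vec[d] for vec in token_vecs) / len(token_vecs) for d in range(dim)]

def cosine_similarity(u, v):
    """Cosine of the angle between two sentence vectors; near 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

sent_a = mean_pool([[1.0, 0.0], [0.0, 1.0]])   # [0.5, 0.5]
sent_b = mean_pool([[2.0, 2.0]])               # [2.0, 2.0]
sim = cosine_similarity(sent_a, sent_b)        # same direction, so close to 1.0
```

In a fine-tuning setup, by contrast, the question and answer are scored jointly by the model rather than compared as two pooled vectors.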

References
  • Bari, M. S., Joty, S., and Jwalapuram, P. (2019). Zero-resource cross-lingual named entity recognition. arXiv preprint arXiv:1911.09812.
  • Bian, W., Li, S., Yang, Z., Chen, G., and Lin, Z. (2017). A compare-aggregate model with dynamic-clip attention for answer selection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1987–1990.
  • Cer, D., Yang, Y., Kong, S.-y., Hua, N., Limtiaco, N., John, R. S., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., et al. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.
  • Chelba, C., Mikolov, T., Schuster, M., Ge, Q., Brants, T., Koehn, P., and Robinson, T. (2013). One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
  • Chen, Q., Hu, Q., Huang, J. X., He, L., and An, W. (2017). Enhancing recurrent neural networks with positional attention for question answering. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 993–996.
  • Chen, Q., Hu, Q., Huang, J. X., and He, L. (2018a). CA-RNN: Using context-aligned recurrent neural networks for modeling sentence similarity. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence.
  • Chen, Q., Hu, Q., Huang, J. X., and He, L. (2018b). CAN: Enhancing sentence similarity modeling with collaborative and adversarial network. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 815–824.
  • Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q., and Salakhutdinov, R. (2019). Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988.
  • Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186.
  • Garg, S., Vu, T., and Moschitti, A. (2019). TANDA: Transfer and adapt pre-trained transformer models for answer sentence selection. arXiv preprint arXiv:1911.04118.
  • Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.
  • Huang, X. and Hu, Q. (2009). A Bayesian learning approach to promoting diversity in ranking for biomedical information retrieval. In Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 307–314.
  • Huang, X., Peng, F., Schuurmans, D., Cercone, N., and Robertson, S. E. (2003). Applying machine learning to text segmentation for information retrieval. Information Retrieval, 6(3-4):333–362.
  • Huang, X., Zhong, M., and Si, L. (2005). York University at TREC 2005: Genomics track. In Proceedings of the Fourteenth Text REtrieval Conference, TREC.
  • Kamath, S., Grau, B., and Ma, Y. (2019). Predicting and integrating expected answer types into a simple recurrent neural network model for answer sentence selection. In 20th International Conference on Computational Linguistics and Intelligent Text Processing.
  • Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. (2019). Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466.
  • Lai, T., Tran, Q. H., Bui, T., and Kihara, D. (2019). A gated self-attention memory network for answer selection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5955–5961.
  • Laskar, M. T. R., Hoque, E., and Huang, J. (2019). Utilizing bidirectional encoder representations from transformers for answer selection task. In The V AMMCS International Conference: Extended Abstract, page 221.
  • Liu, Y., An, A., and Huang, X. (2006). Boosting prediction accuracy on imbalanced datasets with SVM ensembles. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD, pages 107–118.
  • Liu, Y., Huang, X., An, A., and Yu, X. (2007). ARSA: A sentiment-aware model for predicting sales performance using blogs. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 607–614.
  • Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • Madabushi, H. T., Lee, M., and Barnden, J. (2018). Integrating question classification and deep learning for improved answer selection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3283–3294.
  • Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Nakov, P., Marquez, L., Magdy, W., Moschitti, A., Glass, J., and Randeree, B. (2015). SemEval-2015 task 3: Answer selection in community question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval-2015), pages 269–281.
  • Nakov, P., Marquez, L., Moschitti, A., Magdy, W., Mubarak, H., Freihat, A. A., Glass, J., and Randeree, B. (2016). SemEval-2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 525–545.
  • Nakov, P., Hoogeveen, D., Marquez, L., Moschitti, A., Mubarak, H., Baldwin, T., and Verspoor, K. (2017). SemEval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 27–48.
  • Nema, P., Khapra, M. M., Laha, A., and Ravindran, B. (2017). Diversity driven attention model for query-based abstractive summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1063–1072.
  • Nishida, K., Saito, I., Nishida, K., Shinoda, K., Otsuka, A., Asano, H., and Tomita, J. (2019). Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273–2284.
  • Pennington, J., Socher, R., and Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543.
  • Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237.
  • Peters, M. E., Ruder, S., and Smith, N. A. (2019). To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP, pages 7–14.
  • Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving language understanding by generative pre-training.
  • Rao, J., Liu, L., Tay, Y., Yang, W., Shi, P., and Lin, J. (2019). Bridging the gap between relevance matching and semantic matching for short text similarity modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5373–5384.
  • Santos, C. d., Tan, M., Xiang, B., and Zhou, B. (2016). Attentive pooling networks. arXiv preprint arXiv:1602.03609.
  • Sha, L., Zhang, X., Qian, F., Chang, B., and Sui, Z. (2018). A multi-view fusion neural network for answer selection. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • Tan, M., Santos, C. d., Xiang, B., and Zhou, B. (2015). LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.
  • Tay, Y., Phan, M. C., Tuan, L. A., and Hui, S. C. (2017). Learning to rank question answer pairs with holographic dual LSTM architecture. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 695–704.
  • Tay, Y., Tuan, L. A., and Hui, S. C. (2018). Hyperbolic representation learning for fast and efficient neural question answering. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 583–591.
  • Tymoshenko, K. and Moschitti, A. (2018). Cross-pair text representations for answer sentence selection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2162–2173.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
  • Wan, S., Dras, M., Dale, R., and Paris, C. (2006). Using dependency-based features to take the ‘para-farce’ out of paraphrase. In Proceedings of the Australasian Language Technology Workshop 2006, pages 131–138.
  • Wang, D. and Nyberg, E. (2015). A long short-term memory model for answer sentence selection in question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 707–712.
  • Wang, M., Smith, N. A., and Mitamura, T. (2007). What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 22–32.
  • Wang, Z., Mi, H., and Ittycheriah, A. (2016). Sentence similarity learning by lexical decomposition and composition. arXiv preprint arXiv:1602.07019.
  • Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. (2019). HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
  • Yang, Y., Yih, W.-t., and Meek, C. (2015). WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018.
  • Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
  • Yih, W.-t., Chang, M.-W., Meek, C., and Pastusiak, A. (2013). Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1744–1753.
  • Yin, X., Huang, J. X., Li, Z., and Zhou, X. (2013). A survival modeling approach to biomedical search result diversification using Wikipedia. IEEE Transactions on Knowledge and Data Engineering, 25(6):1201–1212.
  • Yu, X., Liu, Y., Huang, X., and An, A. (2012). Mining online reviews for predicting sales performance: A case study in the movie domain. IEEE Transactions on Knowledge and Data Engineering, 24(4):720–734.
  • Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., and Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27.
Author
Md. Tahmid Rahman Laskar
Enamul Hoque