Large Language Models for Stemming: Promises, Pitfalls and Failures
CoRR (2024)
Abstract
Text stemming is a natural language processing technique that is used to
reduce words to their base form, also known as the root form. The use of
stemming in IR has been shown to often improve the effectiveness of
keyword-matching models such as BM25. However, traditional stemming methods,
focusing solely on individual terms, overlook the richness of contextual
information. Recognizing this gap, in this paper, we investigate the promising
idea of using large language models (LLMs) to stem words by leveraging their
capability for context understanding. To this end, we identify three
avenues, each characterised by different trade-offs in computational
cost, effectiveness, and robustness: (1) use LLMs to stem the vocabulary for a
collection, i.e., the set of unique words that appear in the collection
(vocabulary stemming), (2) use LLMs to stem each document separately
(contextual stemming), and (3) use LLMs to extract from each document entities
that should not be stemmed, then use vocabulary stemming to stem the rest of
the terms (entity-based contextual stemming). Through a series of empirical
experiments, we compare the use of LLMs for stemming with that of traditional
lexical stemmers such as Porter and Krovetz for English text. We find that
while vocabulary stemming and contextual stemming fail to achieve higher
effectiveness than traditional stemmers, entity-based contextual stemming can
achieve higher effectiveness than the Porter stemmer alone, under specific
conditions.
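To make the first avenue concrete, here is a minimal sketch of vocabulary stemming: the collection's unique words are stemmed once and documents are rewritten through the resulting lookup table, so each word costs only one stemming call regardless of how often it occurs. The `simple_stem` function below is an illustrative suffix-stripping stand-in, not the actual Porter or Krovetz algorithm (and not an LLM call); in the paper's setting it would be replaced by a prompt to an LLM.

```python
import re

def simple_stem(word):
    # Toy suffix-stripping rule standing in for a real stemmer
    # (Porter/Krovetz) or an LLM call; purely illustrative.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def vocabulary_stemming(docs, stem=simple_stem):
    """Avenue (1): stem the collection vocabulary once, then map
    every token through the precomputed lookup table."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    vocab = {w for toks in tokenized for w in toks}
    table = {w: stem(w) for w in vocab}  # one call per unique word
    return [[table[w] for w in toks] for toks in tokenized]

docs = ["Stemming reduces words", "The word stemmed forms"]
print(vocabulary_stemming(docs))
# → [['stemm', 'reduc', 'word'], ['the', 'word', 'stemm', 'form']]
```

Contextual stemming (avenue 2) would instead pass each whole document to the stemmer so that, e.g., an entity name could be left unstemmed in one document but stemmed in another; avenue (3) combines the two by first extracting entities to protect, then applying the vocabulary table to the remaining terms.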