Web Similarity in Sets of Search Terms Using Database Queries

CoRR(2020)

Abstract
Normalized web distance (NWD) is a similarity measure, or normalized semantic distance, based on the World Wide Web or another large electronic database, for instance Wikipedia, together with a search engine that returns reliable aggregate page counts. For sets of search terms, the NWD gives a common similarity (common semantics) on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity of the members of a set according to all (upper semi)computable properties. We develop the theory and give classification applications using Amazon, Wikipedia, and the NCBI website of the National Institutes of Health. The last yields new correlations between health hazards. Restricting the NWD to sets of two terms recovers the earlier normalized Google distance (NGD), but no combination of the NGDs of pairs in a set can extract the information that the NWD extracts from the set as a whole. The NWD enables a new contextual (different databases) learning approach, based on Kolmogorov complexity theory, that incorporates knowledge from these databases.
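The page-count computation behind the set-based NWD can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the set NWD takes the form (log max f(x) − log f(X)) / (log N − log min f(x)), where f(x) is the page count for a single term, f(X) is the count of pages containing every term in the set X, and N is the total number of indexed pages; the counts in the usage example are hypothetical.

```python
import math

def nwd(term_counts, joint_count, total_pages):
    """Set-based normalized web distance (illustrative sketch).

    term_counts: individual page counts f(x), one per search term in the set
    joint_count: f(X), number of pages containing all terms simultaneously
    total_pages: N, total number of pages indexed by the search engine
    Returns a value on the 0 (identical) to 1 (completely different) scale.
    """
    logs = [math.log(c) for c in term_counts]
    return (max(logs) - math.log(joint_count)) / \
           (math.log(total_pages) - min(logs))

# Hypothetical counts: two terms co-occurring on many of the same pages
# score close to 0; a singleton set scores exactly 0, since f(X) = f(x).
d = nwd([1000, 800], 500, 10**9)
```

Note that a singleton set gives distance 0 by construction, matching the scale described above, since max, min, and joint count all coincide.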
Keywords
Normalized web distance, Pattern recognition, Data mining, Similarity, Classification, Kolmogorov complexity; CCS: Information systems, World Wide Web, Web searching and information discovery; CCS: Information Retrieval