Scalability of a distributed neural information retrieval system
Abstract
Summary form only given. AURA (Advanced Uncertain Reasoning Architecture) is a generic family of techniques and implementations intended for high-speed approximate search and match operations on large unstructured datasets. AURA technology is fast and economical, and offers capabilities for finding near-matches that are not available with other methods. AURA is based upon a high-performance binary neural network called a correlation matrix memory (CMM). Typically, several CMM elements are used in combination to solve soft or fuzzy pattern-matching problems. AURA takes large volumes of data and constructs a special type of compressed index. AURA finds exact and near-matches between indexed records and a given query, where the query itself may have omissions and errors. The degree of nearness required during matching can be varied through thresholding techniques. The PCI-based PRESENCE (Parallel Structured Neural Computing Engine) card is a hardware-accelerator architecture for the core CMM computations needed in AURA-based applications. The card is designed for use in low-cost workstations and incorporates 128 Mbytes of low-cost DRAM for CMM storage. To investigate the scalability of the distributed AURA system, we implement the word-to-document index of an AURA-based information retrieval system, called MinerTaur, over a distributed PRESENCE CMM.
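To make the CMM idea concrete, the sketch below shows a minimal binary correlation matrix memory used as a toy word-to-document index, with a Willshaw-style threshold on recall so that relaxing the threshold admits near-matches from queries with missing bits. The class, the sparse word codes, and the one-hot document codes are illustrative assumptions for this example, not AURA's or MinerTaur's actual data structures or API.

```python
# Minimal sketch of a binary correlation matrix memory (CMM) acting as a
# word-to-document index. Names and encodings are illustrative assumptions.
import numpy as np

class CMM:
    def __init__(self, input_bits, output_bits):
        # Binary weight matrix: rows index input bits, columns index output bits.
        self.M = np.zeros((input_bits, output_bits), dtype=np.uint8)

    def train(self, x, y):
        # Hebbian-style binary storage: OR in the outer product of the
        # binary input pattern x and the binary output pattern y.
        self.M |= np.outer(x, y).astype(np.uint8)

    def recall(self, x, threshold):
        # Sum the matrix rows selected by the set bits of the query, then
        # threshold. Lowering the threshold accepts near-matches, i.e.
        # queries with omissions or errors.
        sums = x.astype(np.int32) @ self.M.astype(np.int32)
        return (sums >= threshold).astype(np.uint8)

def word_code(bits, n=6):
    # Sparse binary code for a word: the given bit positions are set.
    x = np.zeros(n, dtype=np.uint8)
    x[list(bits)] = 1
    return x

# Toy index: 3 words with 6-bit sparse codes, 4 documents with one-hot codes.
docs = np.eye(4, dtype=np.uint8)
words = {
    "neural": word_code({0, 3}),
    "retrieval": word_code({1, 4}),
    "matrix": word_code({2, 5}),
}

cmm = CMM(input_bits=6, output_bits=4)
cmm.train(words["neural"], docs[0])
cmm.train(words["retrieval"], docs[0])
cmm.train(words["matrix"], docs[2])

# Exact query: threshold equals the number of set bits in the query.
print(cmm.recall(words["neural"], threshold=2))   # recovers document 0
# Noisy query with one bit missing: relax the threshold to find near-matches.
print(cmm.recall(word_code({0}), threshold=1))    # still recovers document 0
```

In a distributed setting of the kind the abstract describes, the weight matrix would be partitioned across PRESENCE cards rather than held in a single NumPy array; this sketch only illustrates the store-and-threshold recall principle.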
Keywords
Scalability, Information retrieval, Coordinate measuring machines, Computer architecture, Concurrent computing, Aircraft, Artificial neural networks, Computer science, Environmental economics, Neural networks