CLucene - a full-featured C++ search engine
API Documentation


Data Structures

Here are the data structures with brief descriptions; a short usage sketch follows the list:
lucene::search::AbstractCachingFilter - Wraps another filter's result and caches it
lucene::util::AbstractDeletor
lucene::analysis::Analyzer - An Analyzer builds TokenStreams, which analyze text
lucene::util::Array< T >
lucene::util::BitSet
lucene::search::BooleanClause
lucene::search::BooleanQuery
lucene::store::BufferedIndexInput - Abstract base class for input from a file in a Directory
lucene::store::BufferedIndexOutput - Base implementation class for buffered IndexOutput
jstreams::BufferedInputStream< T >
lucene::search::CachingWrapperFilter - Wraps another filter's result and caches it
lucene::search::ChainedFilter
lucene::analysis::CharTokenizer - An abstract base class for simple, character-oriented tokenizers
lucene::util::Comparable
lucene::util::Compare
lucene::util::Compare::_base
lucene::util::Compare::Char
lucene::util::Compare::Float
lucene::util::Compare::Int32
lucene::util::Compare::TChar
lucene::util::Compare::Void< _cl >
lucene::document::DateField - Provides support for converting dates to strings and vice-versa
lucene::search::DateFilter - A Filter that restricts search results to a range of time
lucene::document::DateTools
LuceneBase
lucene::search::DefaultSimilarity - Expert: Default scoring implementation
lucene::util::Deletor
lucene::util::Deletor::acArray
lucene::util::Deletor::Array< _kt >
lucene::util::Deletor::ConstNullVal< _type >
lucene::util::Deletor::Dummy
lucene::util::Deletor::DummyFloat
lucene::util::Deletor::DummyInt32
lucene::util::Deletor::NullVal< _type >
lucene::util::Deletor::Object< _kt >
lucene::util::Deletor::tcArray
lucene::util::Deletor::Void< _kt >
lucene::store::Directory - A Directory is a flat list of files
lucene::document::Document - Documents are the unit of indexing and search
lucene::document::DocumentFieldEnumeration
lucene::util::Equals
lucene::util::Equals::Char
lucene::util::Equals::Int32
lucene::util::Equals::TChar
lucene::util::Equals::Void< _cl >
lucene::search::Explanation
lucene::document::Field - A field is a section of a Document
lucene::search::FieldCache - Expert: Maintains caches of term values
lucene::search::FieldCache::StringIndex - Expert: Stores term text values and document ordering data
lucene::search::FieldCacheAuto - A class holding an AUTO field
lucene::search::FieldDoc - Expert: A ScoreDoc which also contains information about how to sort the referenced document
lucene::search::FieldSortedHitQueue - Expert: A hit queue for sorting hits by terms in more than one field
jstreams::FileInputStream
lucene::util::FileReader - A helper class which constructs a FileReader with a specified simple encoding, or a given inputstreamreader
lucene::search::Filter
lucene::search::FilteredTermEnum
lucene::store::FSDirectory - Straightforward implementation of Directory as a directory of files
lucene::store::FSLockFactory
lucene::search::FuzzyQuery
lucene::search::FuzzyTermEnum - FuzzyTermEnum is a subclass of FilteredTermEnum for enumerating all terms that are similar to the specified filter term
lucene::search::HitCollector
lucene::search::HitDoc
lucene::search::Hits - A ranked list of documents, used to hold search results
lucene::store::IndexInput - Abstract base class for input from a file in a Directory
lucene::store::IndexInputStream - JStream InputStream which reads from an IndexInput
lucene::index::IndexModifier - A class to modify an index
lucene::store::IndexOutput - Abstract class for output to a file in a Directory
lucene::index::IndexReader - IndexReader is an abstract class, providing an interface for accessing an index
lucene::search::IndexSearcher - Implements search over a single IndexReader
lucene::index::IndexWriter - An IndexWriter creates and maintains an index
jstreams::InputStreamBuffer< T >
lucene::analysis::ISOLatin1AccentFilter - A filter that replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) with their unaccented equivalents
lucene::analysis::KeywordAnalyzer - "Tokenizes" the entire stream as a single token
lucene::analysis::KeywordTokenizer - Emits the entire input as a single token
lucene::analysis::LengthFilter - Removes words that are too long or too short from the stream
lucene::analysis::LetterTokenizer - A LetterTokenizer is a tokenizer that divides text at non-letters
lucene::store::LockFactory
lucene::analysis::LowerCaseFilter - Normalizes token text to lower case
lucene::analysis::LowerCaseTokenizer - LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together
lucene::store::LuceneLock
lucene::util::Misc - A class containing various functions
lucene::queryParser::MultiFieldQueryParser - A QueryParser which constructs queries to search multiple fields
lucene::index::MultiReader
lucene::search::MultiSearcher - Implements search over a set of Searchables
lucene::search::MultiTermQuery - A Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration
lucene::util::mutexGuard
lucene::store::NoLockFactory
lucene::index::Payload - A Payload is metadata that can be stored together with each occurrence of a term
lucene::analysis::PerFieldAnalyzerWrapper - This analyzer is used to facilitate scenarios where different fields require different analysis techniques
lucene::search::PhraseQuery
lucene::search::PrefixFilter
lucene::search::PrefixQuery - A Query that matches documents containing terms with a specified prefix
lucene::util::PriorityQueue< _type, _valueDeletor > - A PriorityQueue maintains a partial ordering of its elements such that the least element can always be found in constant time
lucene::search::Query - The abstract base class for queries
lucene::search::QueryFilter
lucene::queryParser::QueryParser - CLucene's default query parser
lucene::queryParser::QueryParserBase - Contains default implementations used by QueryParser
lucene::queryParser::QueryToken
lucene::store::RAMDirectory - A memory-resident Directory implementation
lucene::search::RangeFilter
lucene::search::RangeQuery - Constructs a query selecting all terms greater than lowerTerm but less than upperTerm
lucene::util::Reader - An inline wrapper that reads from Jos van den Oever's jstreams
lucene::search::ScoreDoc - Expert: Returned by low-level search implementations
lucene::search::ScoreDocComparator - Expert: Compares two ScoreDoc objects for sorting
lucene::search::ScoreDocComparators
lucene::search::ScoreDocComparators::Float
lucene::search::ScoreDocComparators::IndexOrder
lucene::search::ScoreDocComparators::Int32
lucene::search::ScoreDocComparators::Relevance
lucene::search::ScoreDocComparators::String
lucene::search::Scorer - Expert: Implements scoring for a class of queries
lucene::search::Searchable - The interface for search implementations
lucene::search::Searcher - An abstract base class for search implementations
lucene::search::Similarity - Expert: Scoring API
lucene::analysis::SimpleAnalyzer - An Analyzer that filters LetterTokenizer with LowerCaseFilter
lucene::util::SimpleInputStreamReader - A very simple inputstreamreader implementation
lucene::store::SingleInstanceLockFactory
lucene::search::Sort - Encapsulates sort criteria for returned hits
lucene::search::SortComparator - Abstract base class for sorting hits returned by a Query
lucene::search::SortComparatorSource - Expert: Returns a comparator for sorting ScoreDocs
lucene::search::SortField - Stores information about how to sort documents by terms in an individual field
lucene::analysis::standard::StandardAnalyzer - Filters StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words
lucene::analysis::standard::StandardFilter - Normalizes tokens extracted with StandardTokenizer
lucene::analysis::standard::StandardTokenizer - A grammar-based tokenizer constructed with JavaCC
lucene::analysis::StopAnalyzer - Filters LetterTokenizer with LowerCaseFilter and StopFilter
lucene::analysis::StopFilter - Removes stop words from a token stream
jstreams::StreamBase< T > - Base class for stream read access to many different file types
lucene::util::StringReader - A helper class which constructs the jstreams StringReader
jstreams::StringReader< T >
jstreams::SubInputStream< T >
lucene::index::Term - A Term represents a word from text
lucene::index::Term_Compare
lucene::index::Term_Equals
lucene::index::TermDocs - TermDocs provides an interface for enumerating <document, frequency> pairs for a term
lucene::index::TermEnum
lucene::index::TermFreqVector - Provides access to stored term vector of a document field
lucene::index::TermPositions - TermPositions provides an interface for enumerating the <document, frequency, <position>* > tuples for a term
lucene::index::TermPositionVector - Extends TermFreqVector to provide additional information about positions in which each of the terms is found
lucene::search::TermQuery - A Query that matches documents containing a term
lucene::index::TermVectorOffsetInfo
lucene::analysis::Token - A Token is an occurrence of a term from the text of a field
lucene::analysis::TokenFilter - A TokenFilter is a TokenStream whose input is another token stream
lucene::analysis::Tokenizer - A Tokenizer is a TokenStream whose input is a Reader
lucene::analysis::TokenStream - A TokenStream enumerates the sequence of tokens, either from fields of a document or from query text
lucene::search::TopDocs - Expert: Returned by low-level search implementations
lucene::search::Weight - Expert: Calculate query weights and build query scorers
lucene::analysis::WhitespaceAnalyzer - An Analyzer that uses WhitespaceTokenizer
lucene::analysis::WhitespaceTokenizer - A WhitespaceTokenizer is a tokenizer that divides text at whitespace
lucene::search::WildcardFilter
lucene::search::WildcardQuery - Implements the wildcard search query
lucene::search::WildcardTermEnum - Subclass of FilteredTermEnum for enumerating all terms that match the specified wildcard filter term
lucene::analysis::WordlistLoader - Loader for text files that represent a list of stopwords
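
Most applications touch only a handful of these classes directly. The sketch below ties together StandardAnalyzer, IndexWriter, Document, Field, QueryParser, IndexSearcher and Hits in the usual index-then-search flow. It is a minimal illustration, not a verbatim recipe: the index path and field name are invented for the example, and exact constructor signatures, field flags and memory macros can differ between CLucene releases, so the class pages above remain the authoritative reference.

#include "CLucene.h"

using namespace lucene::analysis;
using namespace lucene::document;
using namespace lucene::index;
using namespace lucene::queryParser;
using namespace lucene::search;

int main() {
    // Assumed index location and field name, chosen for illustration only.
    const char* indexDir = "/tmp/clucene-index";
    standard::StandardAnalyzer analyzer;

    // Indexing: an IndexWriter analyzes Documents made of Fields and
    // writes them into a Directory created from the given path.
    IndexWriter writer(indexDir, &analyzer, true /*create*/);
    Document doc;
    doc.add(*_CLNEW Field(_T("contents"),
                          _T("CLucene is a C++ port of the Lucene search engine"),
                          Field::STORE_YES | Field::INDEX_TOKENIZED));
    writer.addDocument(&doc);
    writer.close();

    // Searching: a QueryParser turns query text into a Query, and an
    // IndexSearcher runs it over the index, returning a ranked Hits list.
    IndexSearcher searcher(indexDir);
    Query* query = QueryParser::parse(_T("lucene"), _T("contents"), &analyzer);
    Hits* hits = searcher.search(query);
    for (size_t i = 0; i < hits->length(); ++i) {
        Document& d = hits->doc(i);   // stored fields of the i-th match
        // d.get(_T("contents")) would return the stored TCHAR* value.
    }

    _CLDELETE(hits);
    _CLDELETE(query);
    searcher.close();
    return 0;
}

The memory conventions shown (_CLNEW for Fields handed over to the owning Document, _CLDELETE for the Query and Hits) follow the patterns used in the CLucene example programs.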

clucene.sourceforge.net