CLucene - a full-featured, c++ search engine
API Documentation
#include <Field.h>
Public Types | |
enum | Store { STORE_YES = 1, STORE_NO = 2, STORE_COMPRESS = 4 } |
enum | Index { INDEX_NO = 16, INDEX_TOKENIZED = 32, INDEX_UNTOKENIZED = 64, INDEX_NONORMS = 128 } |
enum | TermVector { TERMVECTOR_NO = 256, TERMVECTOR_YES = 512, TERMVECTOR_WITH_POSITIONS = TERMVECTOR_YES | 1024, TERMVECTOR_WITH_OFFSETS = TERMVECTOR_YES | 2048, TERMVECTOR_WITH_POSITIONS_OFFSETS = TERMVECTOR_WITH_OFFSETS | TERMVECTOR_WITH_POSITIONS } |
enum | { LAZY_YES = 4096 } |
Public Member Functions | |
Field (const TCHAR *name, const TCHAR *value, int _config) | |
Field (const TCHAR *name, lucene::util::Reader *reader, int _config) | |
Field (const TCHAR *name, jstreams::StreamBase< char > *stream, int _config) | |
~Field () | |
const TCHAR * | name () const |
The name of the field (e.g., "date", "subject", "title", "body") as an interned string. | |
TCHAR * | stringValue () const |
The value of the field as a String, or null. | |
lucene::util::Reader * | readerValue () const |
The value of the field as a reader, or null. | |
jstreams::StreamBase< char > * | streamValue () const |
The value of the field as a stream, or null. | |
lucene::analysis::TokenStream * | tokenStreamValue () const |
The value of the field as a TokenStream, or null. | |
bool | isStored () const |
bool | isIndexed () const |
bool | isTokenized () const |
bool | isCompressed () const |
True if the value of the field is stored and compressed within the index. NOTE: CLucene does not actually support compressed fields; instead, a reader will be returned with a pointer to a SubIndexInputStream. | |
bool | isTermVectorStored () const |
True iff the term or terms used to index this field are stored as a term vector, available from IndexReader#getTermFreqVector(int32_t,TCHAR*). | |
bool | isStoreOffsetWithTermVector () const |
True iff terms are stored as term vector together with their offsets (start and end position in source text). | |
bool | isStorePositionWithTermVector () const |
True iff terms are stored as term vector together with their token positions. | |
float_t | getBoost () const |
Returns the boost factor for hits for this field. | |
void | setBoost (const float_t value) |
Sets the boost factor for hits on this field. | |
bool | isBinary () const |
True if the value of the field is stored as binary. | |
bool | getOmitNorms () const |
True if norms are omitted for this indexed field. | |
void | setOmitNorms (const bool omitNorms) |
Expert: omit normalization factors for this indexed field. | |
bool | isLazy () const |
Indicates whether a Field is Lazy or not. | |
TCHAR * | toString () |
void | setValue (const TCHAR *value) |
void | setValue (lucene::util::Reader *value) |
Expert: change the value of this field. | |
void | setValue (jstreams::StreamBase< char > *value) |
Expert: change the value of this field. | |
void | setValue (lucene::analysis::TokenStream *value) |
Expert: change the value of this field. | |
Protected Member Functions | |
void | setConfig (const uint32_t termVector) |
void | _resetValue () |
Each field has two parts, a name and a value. Values may be free text, provided as a String or as a Reader, or they may be atomic keywords, which are not further processed. Such keywords may be used to represent dates, urls, etc. Fields are optionally stored in the index, so that they may be returned with hits on the document.
PORTING: CLucene doesn't directly support compressed fields. However, it is easy to reproduce this functionality by using the GZip streams in the contrib package. Also note that binary fields are not read immediately in CLucene; a substream is pointed directly to the field's data, in effect creating a lazy-load ability. This means that large fields are best saved in binary format (even if they are text), so that they can be loaded lazily.
STORE_YES |
Store the original field value in the index.
This is useful for short texts like a document's title which should be displayed with the results. The value is stored in its original form, i.e. no analyzer is used before it is stored. |
STORE_NO |
Do not store the field value in the index.
|
STORE_COMPRESS |
Store the original field value in the index in a compressed form.
This is useful for long documents and for binary-valued fields. NOTE: CLucene does not directly support compressed fields; to store one, compress the value yourself (e.g. with the GZip streams in the contrib package) before adding it. |
INDEX_NO |
Do not index the field value.
This field can thus not be searched, but one can still access its contents provided it is stored. |
INDEX_TOKENIZED |
Index the field's value so it can be searched.
An Analyzer will be used to tokenize and possibly further normalize the text before its terms will be stored in the index. This is useful for common text. |
INDEX_UNTOKENIZED |
Index the field's value without using an Analyzer, so it can be searched.
As no analyzer is used the value will be stored as a single term. This is useful for unique Ids like product numbers. |
INDEX_NONORMS |
Index the field's value without an Analyzer, and disable the storing of norms.
No norms means that index-time boosting and field length normalization will be disabled. The benefit is less memory usage as norms take up one byte per indexed field for every document in the index. Note that once you index a given field with norms enabled, disabling norms will have no effect. In other words, for NO_NORMS to have the above described effect on a field, all instances of that field must be indexed with NO_NORMS from the beginning. |
TERMVECTOR_NO |
Do not store term vectors.
|
TERMVECTOR_YES |
Store the term vectors of each document.
A term vector is a list of the document's terms and their number of occurrences in that document. |
TERMVECTOR_WITH_POSITIONS |
Store the term vector + token position information.
|
TERMVECTOR_WITH_OFFSETS |
Store the term vector + Token offset information.
|
TERMVECTOR_WITH_POSITIONS_OFFSETS |
Store the term vector + Token position and offset information.
|
lucene::document::Field::Field(const TCHAR *name, const TCHAR *value, int _config)
lucene::document::Field::Field(const TCHAR *name, lucene::util::Reader *reader, int _config)
lucene::document::Field::Field(const TCHAR *name, jstreams::StreamBase< char > *stream, int _config)
lucene::document::Field::~Field()
const TCHAR* lucene::document::Field::name() const
The name of the field (e.g., "date", "subject", "title", "body", etc.) as an interned string. Returns a reference.
TCHAR* lucene::document::Field::stringValue() const
The value of the field as a String, or null.
If null, the Reader value or binary value is used. Exactly one of stringValue(), readerValue() and streamValue() must be set. Returns a reference.
lucene::util::Reader* lucene::document::Field::readerValue() const
The value of the field as a reader, or null.
If null, the String value or stream value is used. Exactly one of stringValue(), readerValue() and streamValue() must be set.
jstreams::StreamBase<char>* lucene::document::Field::streamValue() const
The value of the field as a stream, or null.
If null, the String value or Reader value is used. Exactly one of stringValue(), readerValue() and streamValue() must be set.
lucene::analysis::TokenStream* lucene::document::Field::tokenStreamValue() const
The value of the field as a TokenStream, or null.
If null, the Reader value, String value, or binary value is used. Exactly one of stringValue(), readerValue(), binaryValue(), and tokenStreamValue() must be set.
bool lucene::document::Field::isStored() const
bool lucene::document::Field::isIndexed() const
bool lucene::document::Field::isTokenized() const
bool lucene::document::Field::isCompressed() const
True if the value of the field is stored and compressed within the index. NOTE: CLucene does not actually support compressed fields; instead, a reader will be returned with a pointer to a SubIndexInputStream.
A GZipInputStream and a UTF8 reader must be used to actually read the content. This flag will only be set if the index was created by another lucene implementation.
bool lucene::document::Field::isTermVectorStored() const
True iff the term or terms used to index this field are stored as a term vector, available from IndexReader#getTermFreqVector(int32_t,TCHAR*).
These methods do not provide access to the original content of the field, only to terms used to index it. If the original content must be preserved, use the stored attribute instead.
bool lucene::document::Field::isStoreOffsetWithTermVector() const
True iff terms are stored as term vector together with their offsets (start and end position in source text).
bool lucene::document::Field::isStorePositionWithTermVector() const
True iff terms are stored as term vector together with their token positions.
float_t lucene::document::Field::getBoost() const
Returns the boost factor for hits for this field.
The default value is 1.0.
Note: this value is not stored directly with the document in the index. Documents returned from IndexReader#document(int) and Hits#doc(int) may thus not have the same value present as when this field was indexed.
void lucene::document::Field::setBoost(const float_t value)
Sets the boost factor for hits on this field.
This value will be multiplied into the score of all hits on this field of this document.
The boost is multiplied by Document#getBoost() of the document containing this field. If a document has multiple fields with the same name, all such values are multiplied together. This product is then multiplied by the value Similarity#lengthNorm(String,int), and rounded by Similarity#encodeNorm(float) before it is stored in the index. One should attempt to ensure that this product does not overflow the range of that encoding.
See also: Similarity::lengthNorm(String, int), Similarity::encodeNorm(float)
bool lucene::document::Field::isBinary() const
True if the value of the field is stored as binary.
bool lucene::document::Field::getOmitNorms() const
True if norms are omitted for this indexed field.
void lucene::document::Field::setOmitNorms(const bool omitNorms)
Expert: if set, omit normalization factors associated with this indexed field.
This effectively disables indexing boosts and length normalization for this field.
bool lucene::document::Field::isLazy() const
Indicates whether a Field is Lazy or not.
The semantics of lazy loading are such that if a Field is lazily loaded, retrieving its values via stringValue() or binaryValue() is only valid as long as the IndexReader that retrieved the Document is still open.
TCHAR* lucene::document::Field::toString()
void lucene::document::Field::setValue(const TCHAR *value)
Expert: change the value of this field. This can be used during indexing to re-use a single Field instance to improve indexing speed by avoiding GC cost of new'ing and reclaiming Field instances. Typically a single Document instance is re-used as well. This helps most on small documents.
Note that you should only use this method after the Field has been consumed (ie, the Document containing this Field has been added to the index). Also, each Field instance should only be used once within a single Document instance. See ImproveIndexingSpeed for details.
void lucene::document::Field::setValue(lucene::util::Reader *value)
Expert: change the value of this field.
See setValue(String).
void lucene::document::Field::setValue(jstreams::StreamBase< char > *value)
Expert: change the value of this field.
See setValue(String).
void lucene::document::Field::setValue(lucene::analysis::TokenStream *value)
Expert: change the value of this field.
See setValue(String).
void lucene::document::Field::setConfig(const uint32_t termVector) [inline, protected]
void lucene::document::Field::_resetValue() [inline, protected]