Can’t deploy the index successfully
- If you used a local (e.g. file:///<path to index directory>) URL, did you copy the index directory to the master and all of the slave nodes?
- Do your slave nodes have enough free disk space for the index on the install volume specified in the Katta config file? Each node needs at least <replication factor> * (<total index size>/<number of nodes>) of free space.
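As a quick sanity check, the per-node requirement from the formula above can be computed in the shell. The numbers below (a 60 GB index, replication factor 3, 6 slave nodes) are made-up example values, not defaults:

```shell
# Hypothetical example values -- substitute your own cluster's numbers.
TOTAL_INDEX_SIZE_GB=60   # total size of the index across all shards
REPLICATION_FACTOR=3     # replication factor used when deploying
NUM_NODES=6              # number of slave nodes

# Minimum free space each slave node needs on the install volume:
PER_NODE_GB=$(( REPLICATION_FACTOR * TOTAL_INDEX_SIZE_GB / NUM_NODES ))
echo "Each node needs at least ${PER_NODE_GB} GB free"
```

Compare the result against `df -h` output on each slave node before deploying.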
- Do your slave nodes have enough unused memory for the index? [It is unclear exactly how much is really needed, but deploys have been seen to hang when a slave had enough disk space but limited RAM.]
- Were you pulling the index shards from S3? Deploys can hang due to a bug in the Hadoop native filesystem support for S3.
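One way to sidestep the S3 issue above is to copy the shards out of S3 yourself and deploy from the copy, so the deploy never touches the S3 filesystem code. This is a sketch, not a tested recipe; the bucket, paths, and index name below are placeholders:

```shell
# Placeholder bucket and paths -- substitute your own.
# Copy the shards from S3 to HDFS first, avoiding the buggy
# s3n:// download path during deploy:
hadoop distcp s3n://my-bucket/my-index hdfs:///katta/my-index

# Then deploy from the HDFS (or local file://) copy instead of S3:
bin/katta addIndex myIndex hdfs:///katta/my-index
```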
Searches return no hits
After you’ve successfully added a Katta index, if your test searches return no hits, try the following checks:
- Run the listIndices and listErrors commands to verify that the index looks good.
- Use Luke to inspect the index shards to make sure they are valid Lucene indexes.
- Confirm that the field name used in the search actually exists in the index, e.g. myfield:sweet.
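The checks above can be run from the Katta command line. This is a sketch; the index name `myIndex`, the field `myfield`, and the term `sweet` are examples, and the command names follow the checklist above:

```shell
# Example index/field names -- substitute your own.
# Verify the index deployed cleanly and look for shard errors:
bin/katta listIndices
bin/katta listErrors myIndex

# Search with an explicit field prefix. If this returns no hits while
# Luke shows documents containing "sweet", the field name in the query
# probably does not match the field name in the index:
bin/katta search myIndex "myfield:sweet"
```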