Searching large data sets in (near) constant time

Formal Metadata

Title
Searching large data sets in (near) constant time
Series Title
Number of Parts
60
Author
Contributors
License
CC Attribution 3.0 Unported:
You may use, adapt, and copy, distribute and transmit the work or content in unchanged or adapted form for any legal purpose, as long as you attribute the work in the manner specified by the author or rights holder.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
In low-latency search environments, queries that produce large result sets are a real pain. Properly ranking a large result set burns a lot of CPU, and such queries have the potential to slow down or even brick your cluster. On the customer side it is questionable whether returning millions of documents makes sense at all, since the customer has to filter them afterwards anyway. These large result sets caused us severe headaches: they significantly reduced the available compute headroom on the nodes of our Solr cluster and even bricked the whole cluster when they arrived in high volume. In this project report we'll guide you through the steps (and the math) of how we:
- constructed index-based random experiments,
- estimated the rough hit count of a query by extrapolating bucket search results (see the sketch below),
- collected and applied static first-phase ranking information,
- used the collected information to filter the result set down to the most relevant documents, returning no more than a given number of documents,
- extrapolated hit and facet counts to mimic the original search result, and
- handled document collapsing and faceting.
We'll walk through the software-architecture aspects as well as the math applied. Although implemented on a Solr search system, the concept can be applied to other search engines as well.
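
The bucket-based hit-count extrapolation mentioned in the list above can be illustrated with a small sketch. The Java class below is not the implementation presented in the talk; it only shows the underlying scaling idea under the assumption that each document was assigned one of NUM_BUCKETS uniformly random bucket ids at index time, so a query restricted to k buckets sees roughly k/NUM_BUCKETS of its hits. All names (HitCountEstimator, NUM_BUCKETS) are hypothetical.

    // Minimal, hypothetical sketch of bucket-based hit-count extrapolation.
    // Assumption (not from the talk): every document carries a uniformly
    // random bucket id in [0, NUM_BUCKETS), so restricting a query to a few
    // buckets yields a proportional sample of its full result set.
    public final class HitCountEstimator {

        // Assumed bucketing granularity; purely illustrative.
        private static final int NUM_BUCKETS = 100;

        // Scale the hits observed in `sampledBuckets` buckets up to an
        // estimate for the whole index.
        static long extrapolateHits(long bucketHits, int sampledBuckets) {
            return Math.round(bucketHits * ((double) NUM_BUCKETS / sampledBuckets));
        }

        public static void main(String[] args) {
            // Example: 1,234 hits observed with the query restricted to
            // 2 of 100 buckets -> roughly 61,700 hits estimated overall.
            System.out.println("Estimated total hits: " + extrapolateHits(1_234, 2));
        }
    }

Sampling more buckets lowers the variance of the estimate, at the cost of ranking more documents in the sampling phase; the same scaling can be applied per facet value to extrapolate facet counts.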