@@ -170,10 +170,10 @@ Those include files might look like::
    # search/includes/blog/post.html
    <div class="post_result">
        <h3><a href="{{ result.object.get_absolute_url }}">{{ result.object.title }}</a></h3>
-
+
        <p>{{ result.object.tease }}</p>
    </div>
-
+
    # search/includes/media/photo.html
    <div class="photo_result">
        <a href="{{ result.object.get_absolute_url }}">
@@ -197,21 +197,23 @@ might looks something like::
Real-Time Search
================

- If your site sees heavy search traffic and up-to-date information is very important,
- Haystack provides a way to constantly keep your index up to date. By using the
- ``RealTimeSearchIndex`` class instead of the ``SearchIndex`` class, Haystack will
- automatically update the index whenever a model is saved/deleted.
+ If your site sees heavy search traffic and up-to-date information is very
+ important, Haystack provides a way to constantly keep your index up to date.
+
+ You can enable the ``RealtimeSignalProcessor`` within your settings, which
+ will allow Haystack to automatically update the index whenever a model is
+ saved/deleted.
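+
+ A minimal sketch of that settings change (``RealtimeSignalProcessor`` is the
+ processor class shipped with Haystack itself)::
+
+     # settings.py
+     HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'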

- You can find more information within the :doc:`searchindex_api` documentation.
+ You can find more information within the :doc:`signal_processors` documentation.


Use Of A Queue For A Better User Experience
===========================================

By default, you have to manually reindex content, Haystack immediately tries to merge
212
214
it into the search index. If you have a write-heavy site, this could mean your
- search engine may spend most of its time churning on constant merges. If you can
- afford a small delay between when a model is saved and when it appears in the
+ search engine may spend most of its time churning on constant merges. If you can
+ afford a small delay between when a model is saved and when it appears in the
search results, queuing these merges is a good idea.

You gain a snappier interface for users as updates go into a queue (a fast
@@ -222,24 +224,40 @@ could live on a completely separate server from your webservers, allowing you
to tune more efficiently.

Implementing this is relatively simple. There are two parts: creating a new
- ``QueuedSearchIndex`` class and creating a queue processing script to handle the
- actual updates.
-
- For the ``QueuedSearchIndex``, simply inherit from the ``SearchIndex`` provided
- by Haystack and override the ``_setup_save``/``_setup_delete`` methods. These
- methods usually attach themselves to their model's ``post_save``/``post_delete``
- signals and call the backend to update or remove a record. You should override
- this behavior and place a message in your queue of choice. At a minimum, you'll
- want to include the model you're indexing and the id of the model within that
- message, so that you can retrieve the proper index from the ``SearchSite`` in
- your consumer. Then alter all of your ``SearchIndex`` classes to inherit from
- this new class. Now all saves/deletes will be handled by the queue and you
- should receive a speed boost.
-
- For the consumer, this is much more specific to the queue used and your desired
- setup. At a minimum, you will need to periodically consume the queue, fetch the
- correct index from the ``SearchSite`` for your application, load the model from
- the message and pass that model to the ``update_object`` or ``remove_object``
- methods on the ``SearchIndex``. Proper grouping, batching and intelligent
- handling are all additional things that could be applied on top to further
+ ``QueuedSignalProcessor`` class and creating a queue processing script to
+ handle the actual updates.
+
+ For the ``QueuedSignalProcessor``, you should inherit from
+ ``haystack.signals.BaseSignalProcessor``, then alter the ``setup``/``teardown``
+ methods to call an enqueuing method instead of directly calling
+ ``handle_save``/``handle_delete``. For example::
+
+     from django.db import models
+
+     from haystack import signals
+
+
+     class QueuedSignalProcessor(signals.BaseSignalProcessor):
+         # Override the built-in.
+         def setup(self):
+             models.signals.post_save.connect(self.enqueue_save)
+             models.signals.post_delete.connect(self.enqueue_delete)
+
+         # Override the built-in.
+         def teardown(self):
+             models.signals.post_save.disconnect(self.enqueue_save)
+             models.signals.post_delete.disconnect(self.enqueue_delete)
+
+         # Add on a queuing method.
+         def enqueue_save(self, sender, instance, **kwargs):
+             # Push the save & information onto queue du jour here...
+             pass
+
+         # Add on a queuing method.
+         def enqueue_delete(self, sender, instance, **kwargs):
+             # Push the delete & information onto queue du jour here...
+             pass
+
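+ To actually use a processor like this, you would point the
+ ``HAYSTACK_SIGNAL_PROCESSOR`` setting at it; the ``myapp.signals`` path below
+ is only a placeholder for wherever you define the class::
+
+     HAYSTACK_SIGNAL_PROCESSOR = 'myapp.signals.QueuedSignalProcessor'
+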
+ For the consumer, this is much more specific to the queue used and your desired
+ setup. At a minimum, you will need to periodically consume the queue, fetch the
+ correct index from the ``SearchSite`` for your application, load the model from
+ the message and pass that model to the ``update_object`` or ``remove_object``
+ methods on the ``SearchIndex``. Proper grouping, batching and intelligent
+ handling are all additional things that could be applied on top to further
improve performance.
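+
+ What follows is only a sketch of such a consumer, not part of Haystack itself.
+ It assumes each queued message is a string such as ``update:blog.post:42`` or
+ ``delete:blog.post:42``, and it looks the index up through the Haystack 2.x
+ ``connections`` registry rather than the older ``SearchSite``; the ``consume``
+ name and the message format are assumptions made purely for illustration::
+
+     from django.apps import apps
+
+     from haystack import connections
+
+
+     def consume(messages):
+         # ``messages`` is whatever your queue client hands back, e.g. a batch
+         # of strings like "update:blog.post:42".
+         for message in messages:
+             action, model_label, pk = message.split(':')
+             model = apps.get_model(model_label)
+             index = connections['default'].get_unified_index().get_index(model)
+
+             if action == 'update':
+                 # Re-fetch the object and merge it into the search index.
+                 index.update_object(model.objects.get(pk=pk))
+             elif action == 'delete':
+                 # The row is already gone, so an unsaved instance carrying the
+                 # primary key is enough for Haystack to build the identifier.
+                 index.remove_object(model(pk=pk))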