I was wondering how to make deep research faster. I have connected it to a single MySQL table with 7-8 million rows, and deep research on it takes around 20 minutes on a local MacBook Pro M3. Is there a way to run these queries in parallel, or are there other improvements?
Thanks for the issue. We'll attempt to add this functionality.
Currently, all tool calls happen sequentially so that they are data-aware; i.e., subsequent questions can be much more intelligent because they build on data from prior answers.
That said, not every call has to be sequential. Instead of a "one-node-at-a-time" graph, we can try to make this a "multiple-nodes-that-interact-at-once" graph.
We'll experiment with ways to make this happen, though no guarantees as this might cause regressions.
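To illustrate the "multiple-nodes-at-once" idea, here is a toy sketch (not the project's actual implementation; all names are illustrative): sub-questions that don't depend on each other's output fan out concurrently, and their merged results become context for the next, still-sequential, data-aware step.

```python
import asyncio

async def run_tool(name: str, question: str) -> str:
    """Stand-in for a real SQL/tool call; replace with the actual tool."""
    await asyncio.sleep(0.01)  # simulate I/O latency
    return f"{name}: answer to {question!r}"

async def research_step(independent_questions: list[str]) -> list[str]:
    # Independent sub-questions run in parallel...
    answers = await asyncio.gather(
        *(run_tool(f"tool{i}", q) for i, q in enumerate(independent_questions))
    )
    # ...and the merged answers would feed the next dependent question.
    return list(answers)

answers = asyncio.run(research_step(["row count", "schema summary"]))
```

The trade-off the maintainers mention still applies: only genuinely independent sub-questions can be batched this way without losing the data-awareness of the sequential chain.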
I wouldn't recommend calling the tools in parallel within a single query, for the reasons @rishsriv outlined above: this is a self-reflective/adaptive RAG chain, where each response is included in the context for subsequent tool calls.
However, you CAN run separate queries in parallel by submitting them independently (I do it through the API at http://localhost:1235/docs#/), although you may have to space out each subsequent request to avoid hitting Anthropic API rate limits, depending on which tier you are on.
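A minimal sketch of that pattern, assuming a JSON POST endpoint (the `/query` path and `{"question": ...}` payload shape are guesses; check http://localhost:1235/docs#/ for the real schema):

```python
import json
import threading
import time
import urllib.request

API_URL = "http://localhost:1235/query"  # hypothetical endpoint; see /docs#/

def build_payload(question: str) -> bytes:
    # Request-body shape is an assumption -- verify against the API docs.
    return json.dumps({"question": question}).encode("utf-8")

def submit(question: str, results: list, idx: int) -> None:
    req = urllib.request.Request(
        API_URL,
        data=build_payload(question),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        results[idx] = json.load(resp)

def run_in_parallel(questions: list[str], stagger_s: float = 5.0) -> list:
    """Fire each query on its own thread, staggering submissions so the
    upstream Anthropic rate limit isn't hit in one burst. Tune stagger_s
    to your API tier."""
    results = [None] * len(questions)
    threads = []
    for i, q in enumerate(questions):
        t = threading.Thread(target=submit, args=(q, results, i))
        t.start()
        threads.append(t)
        time.sleep(stagger_s)
    for t in threads:
        t.join()
    return results
```

Each deep-research run still executes its tool calls sequentially internally; the parallelism here is only across independent top-level queries.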