Added example notebook for handling function calls with reasoning models #1796


Merged: 4 commits into main on Apr 28, 2025

Conversation

@tompakeman-oai (Contributor) commented Apr 25, 2025

Summary

This adds a cookbook demonstrating how to make function calls when using reasoning models such as o4-mini.

  • Handling sequential function calling and reasoning steps in a loop (a minimal sketch of this loop follows below)
  • Handling manual conversation orchestration vs using previous_response_id
  • Inspecting reasoning summaries and token usage
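
For orientation, here is a minimal sketch of the kind of loop the notebook builds, assuming the openai Python SDK's Responses API. MODEL_DEFAULTS, the tool schema, and the get_city_uuid stub are illustrative placeholders rather than code copied from the notebook:

```python
# Minimal sketch (not the notebook's exact code): call a reasoning model via the
# Responses API, execute any function calls it emits, feed the outputs back, and
# repeat until the model returns a final message.
import json
from openai import OpenAI

client = OpenAI()

MODEL_DEFAULTS = {
    "model": "o4-mini",
    "reasoning": {"summary": "auto"},  # request reasoning summaries
}

tools = [{
    "type": "function",
    "name": "get_city_uuid",
    "description": "Look up the internal UUID for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_city_uuid(city: str) -> str:
    # Placeholder implementation for illustration only.
    return f"uuid-{city.lower()}"

conversation = [{
    "role": "user",
    "content": "Which of the last four Olympic host cities has the highest average temperature?",
}]

while True:
    response = client.responses.create(input=conversation, tools=tools, **MODEL_DEFAULTS)
    # Keep the reasoning and function-call items in the conversation so the model
    # can pick up its chain of thought on the next turn.
    conversation += response.output

    function_calls = [item for item in response.output if item.type == "function_call"]
    if not function_calls:
        break  # no more tool calls: the model has produced its final answer

    print("More reasoning required, continuing...")
    for call in function_calls:
        args = json.loads(call.arguments)
        print(f"Invoking tool: {call.name}({args})")
        result = get_city_uuid(**args)
        conversation.append({
            "type": "function_call_output",
            "call_id": call.call_id,
            "output": str(result),
        })

print(response.output_text)
print(response.usage.output_tokens_details.reasoning_tokens, "reasoning tokens used")
```

This is the manual-orchestration variant: the full conversation, including reasoning and function-call items, is resent on each turn rather than relying on previous_response_id.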

When contributing new content, read through our contribution guidelines, and mark the following action items as completed:

  • I have added a new entry in registry.yaml (and, optionally, in authors.yaml) so that my content renders on the cookbook website.
  • I have conducted a self-review of my content based on the contribution guidelines:
    • Relevance: This content is related to building with OpenAI technologies and is useful to others.
    • Uniqueness: I have searched for related examples in the OpenAI Cookbook, and verified that my content offers new insights or unique information compared to existing documentation.
    • Spelling and Grammar: I have checked for spelling or grammatical mistakes.
    • Clarity: I have done a final read-through and verified that my submission is well-organized and easy to understand.
    • Correctness: The information I include is correct and all of my code executes successfully.
    • Completeness: I have explained everything fully, including all necessary references and citations.

We will rate each of these areas on a scale from 1 to 4, and will only accept contributions that score 3 or higher on all areas. Refer to our contribution guidelines for more details.

@pap-openai (Contributor) left a comment


Love the cookbook and the idea! I made a few comments overall on how it could be rephrased or restructured; take some as feedback and feel free to ignore the others, it's not ground truth. I'm wondering whether we couldn't find a more "real" use case that involves reasoning and function calling (e.g. RAG over policies, or mocking up some users and a customer-service scenario where you might need to fetch more information about the user, such as payment history in a payments context, based on the policy plus user information; that basically involves multi-turn function calling with reasoning in between). But I also understand that's more work and this already meets a high quality bar, so I'm happy to approve with the current use case.

Thanks @tompakeman-oai!

# Let's keep track of the response ids in a naive way, in case we want to reverse the conversation and pick up from a previous point
response = client.responses.create(input="Which of the last four Olympic host cities has the highest average temperature?", **MODEL_DEFAULTS)
Contributor


For readability on the cookbook site (avoiding horizontal scroll), it might be better to put each argument on a new line, e.g.:
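
A formatting sketch only, with the same arguments as the excerpt above:

```python
response = client.responses.create(
    input="Which of the last four Olympic host cities has the highest average temperature?",
    **MODEL_DEFAULTS,
)
```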

Contributor


Also, the question looks like a fact-based one that's better suited to web_search than to base knowledge (even if it works here and the answer is true, it could be misleading, since o3 and o4-mini don't support web search).

Contributor (Author)


Good point - I agree this could be misleading. How would you feel about publishing this now to prioritise speed? I'll come up with a better use case for a v2.

Contributor


agree!
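
As a reference for the orchestration point in the excerpt above, a minimal sketch of tracking response ids and using previous_response_id as the alternative to manually resending the history (assuming the openai Python SDK's Responses API; the stored list and the follow-up prompt are illustrative):

```python
# Keep each response id so the conversation can be resumed, or forked from an
# earlier point, without resending the full history.
response_ids = []

response = client.responses.create(
    input="Which of the last four Olympic host cities has the highest average temperature?",
    **MODEL_DEFAULTS,
)
response_ids.append(response.id)

# Later: continue from any stored point by passing its id as previous_response_id.
follow_up = client.responses.create(
    input="And which of those cities has the lowest average temperature?",  # illustrative follow-up
    previous_response_id=response_ids[-1],
    **MODEL_DEFAULTS,
)
response_ids.append(follow_up.id)
```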

"More reasoning required, continuing...\n",
"Invoking tool: get_city_uuid({'city': 'Tokyo'})\n",
"More reasoning required, continuing...\n",
"Invoking tool: get_city_uuid({'city': 'Paris'})\n",
Contributor


Very good example of multiple calls! I'm wondering whether 4o couldn't handle this with parallel function calls, as it looks very simple and each step doesn't involve reasoning. It would involve reasoning if you needed information back from one of the calls to define the next one, which is something 4o can't do by itself (see the sketch after this comment).

I also think 4o wouldn't be reliable at making more than roughly 10 calls, as it might lose some of them in the context along the way, so that can be enough of a differentiator, but it's worth highlighting these use cases in the cookbook text.
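
To illustrate the dependent-call pattern described in the comment, here is a hypothetical pair of tools where the second call's arguments depend on the first call's result (get_average_temperature is an invented stand-in, not from the notebook):

```python
def get_city_uuid(city: str) -> str:
    # Placeholder lookup; in the notebook this returns an internal identifier.
    return f"uuid-{city.lower()}"

def get_average_temperature(city_uuid: str) -> float:
    # Hypothetical second tool: it can only be called once a uuid is known,
    # so the model must reason between the two calls rather than issuing
    # them in parallel. Returns a dummy value here.
    return 0.0
```

With tools like these, the model's first turn can only call get_city_uuid; only after that output is returned can it form the arguments for get_average_temperature, which is the in-between reasoning the comment argues 4o would not do reliably on its own.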

Contributor (Author)


As above - I will try to find a more real-world use case for v2 that requires reasoning and has dependencies between the function outputs and reasoning steps.

@pap-openai (Contributor)

thank you!!

pull bot pushed a commit to SUNDONEcindy/openai-cookbook that referenced this pull request Apr 28, 2025
@tompakeman-oai tompakeman-oai merged commit 8fd8b9b into main Apr 28, 2025
@tompakeman-oai tompakeman-oai deleted the tompakeman branch April 28, 2025 09:31