Support OpenAI encrypted thinking tokens #2127

Open
@lewisarnold

Description

Thank you for the recently added support for reasoning responses (#1142).

In that issue's thread, there was a comment about adding support for the encrypted thinking tokens from OpenAI.

Adding support for these tokens so that they can be properly sent back to the API after a tool call (as OpenAI recommends) would be a great enhancement.
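
For context, here is a rough sketch of the round trip OpenAI describes, using the Responses API directly. The model name, tool definition, and tool result are placeholders for the example; the overall shape follows OpenAI's docs but should be checked against the current SDK:

```python
from openai import OpenAI

client = OpenAI()

# Ask for encrypted reasoning content; with store=False the reasoning only
# survives across turns if we carry it forward ourselves.
first = client.responses.create(
    model="o4-mini",  # placeholder: any reasoning model
    input=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "name": "get_weather",  # hypothetical tool for the example
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    }],
    include=["reasoning.encrypted_content"],
    store=False,
)

# The output contains reasoning items (carrying encrypted_content) plus a
# function call. After running the tool, send those items back unchanged
# together with the tool result so the model can reuse its reasoning.
followup_input = list(first.output) + [{
    "type": "function_call_output",
    "call_id": next(i.call_id for i in first.output if i.type == "function_call"),
    "output": '{"temperature_c": 21}',
}]

second = client.responses.create(
    model="o4-mini",
    input=followup_input,
    include=["reasoning.encrypted_content"],
    store=False,
)
print(second.output_text)
```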

Hi,

I’d like to express interest in supporting extended reasoning models and to contribute a few observations.

Looking at PR #1142, I noticed that the current implementation returns the summary field from OpenAI as the reasoning part. However, OpenAI’s Responses API also supports an optional encrypted_content field (reference, docs), which is useful for including full reasoning traces in follow-up interactions. This is especially recommended when using tool calling (docs).

To support this, I suggest either:

  • Extending ThinkingPart to include the encrypted_content field, or
  • Introducing a new kind of part specifically for encrypted or provider-specific reasoning items.

It’s worth noting that encrypted_content is provider-dependent and may not be interoperable: for example, it can’t be mapped to Anthropic’s redacted_thinking. Still, a unified abstraction might help represent these different formats consistently.
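
To make the first option above concrete, here is a minimal sketch of what an extended part could look like. The class body and field names (encrypted_content, provider_name) are assumptions for illustration, not the library's current definition of ThinkingPart:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Literal


@dataclass
class ThinkingPart:
    """Illustrative only: not the library's actual definition."""

    content: str
    """The readable reasoning summary returned by the provider."""

    encrypted_content: str | None = None
    """Opaque, provider-specific reasoning payload (e.g. OpenAI's
    reasoning.encrypted_content). Not interpretable by other providers;
    it should only be replayed to the provider that produced it."""

    provider_name: str | None = None
    """Which provider produced encrypted_content, so a model class can
    decide whether it is safe to send the payload back."""

    part_kind: Literal['thinking'] = 'thinking'
```

The second option would instead keep ThinkingPart provider-neutral and introduce a separate part kind carrying only the opaque payload and the provider name.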

Happy to help further if there’s interest!

Originally posted by @seangal2 in #907
