In most LLM inference applications, the token generation loop needs to know the EOS token id to determine when to stop. The tokenizer already has this information, but it does not expose any API that the caller can use to retrieve it.
This forces applications/callers to either hardcode the EOS/BOS ids or fall back on other metadata (e.g., the ONNXRuntime GenAI SDK has to duplicate this information in its GenAI config file).
It would be great if such an API could be exposed by the ORTX Tokenizer itself.
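As a rough sketch of one possible shape for this: the two getter functions below are hypothetical (nothing like them exists today) and simply assume the existing ORTX C-style conventions (`extError_t` status returns, out-parameters) for illustration:

```c
// Hypothetical additions to the ORTX tokenizer C API -- the function
// names are illustrative only; the extError_t/extTokenId_t conventions
// are assumed to match the rest of the ORTX C interface.

// Writes the tokenizer's configured EOS token id to *eos_token_id.
extError_t OrtxTokenizerGetEosTokenId(const OrtxTokenizer* tokenizer,
                                      extTokenId_t* eos_token_id);

// Writes the tokenizer's configured BOS token id to *bos_token_id.
extError_t OrtxTokenizerGetBosTokenId(const OrtxTokenizer* tokenizer,
                                      extTokenId_t* bos_token_id);
```

With something along these lines, a generation loop could stop on EOS without hardcoding ids or reading a second config file (`sample_next_token`/`append_token` stand in for application-defined logic):

```c
extTokenId_t eos_id = 0;
if (OrtxTokenizerGetEosTokenId(tokenizer, &eos_id) == kOrtxOK) {
  for (;;) {
    extTokenId_t next_token = sample_next_token(state);  // application-defined
    if (next_token == eos_id) break;  // stop on the tokenizer-reported EOS
    append_token(state, next_token);  // application-defined
  }
}
```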