I extracted the vocab.txt download URL from line 154 of /paddlenlp/transformers/ernie/tokenizer.py: https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh_vocab.txt

I found that lines 12085 and 18006 of this file contain the same character, "$". Why does this happen?

This affects the Hugging Face transformers tokenizer: because vocab.txt contains a duplicate character, the tokenizer loses token id 12084, and when new special tokens are added they are not assigned the correct new ids (in practice, a new special token receives the same id as the last token, [UNK]).
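The failure mode described above can be sketched without the real vocab file. A BERT-style tokenizer assigns token ids by line order when loading vocab.txt into a dict, so a duplicated line overwrites the earlier mapping. The toy vocab below is a made-up miniature (the real duplicate sits at lines 12085 and 18006), but it reproduces the same two symptoms: one id becomes unreachable, and a "new" id computed as `len(vocab)` collides with an existing token's id.

```python
from collections import OrderedDict

# Toy vocab file contents with a duplicated token, mimicking the
# duplicated "$" in ernie_3.0_nano_zh_vocab.txt. Ids are assigned
# by line order, as BERT-style load_vocab() does.
lines = ["[PAD]", "[UNK]", "a", "$", "b", "$"]  # "$" appears twice

vocab = OrderedDict()
for idx, token in enumerate(lines):
    vocab[token] = idx  # the second "$" overwrites the first mapping

# The first "$" id (3) is lost: no token maps to it any more.
assert len(vocab) == 5           # 5 entries, although the file has 6 lines
assert 3 not in vocab.values()   # token id 3 is unreachable

# Adding a special token at id = len(vocab) collides with the id of
# the final vocab entry instead of getting a fresh id one past it.
new_id = len(vocab)
assert new_id == vocab["$"]      # 5 == 5: the "new" id is already taken
```

In the real file the final entry is [UNK], which is why an added special token ends up sharing [UNK]'s id, exactly as reported in the issue.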
Acknowledged, this is a known issue.