Hi, fast_transformer is no longer being actively updated; internal collaboration or community contributions are welcome.
Please describe your question

We are from Baidu's commercial team. Working with the PLATO team, we fine-tuned the pre-trained PLATO model for a commercial scenario, and the generation model is already deployed. We have since warm-started a ranking model from the fine-tuned checkpoint, and have a few deployment questions:

1. PLATO ships its model in encrypted form, and the model conversion step exposes an `encrypted` parameter, but this parameter does not appear anywhere in the PaddleNLP API. Does this have any effect?
2. The generation model uses fast_transformer to accelerate the decoding step. The ranking model only needs a single forward pass per sample to produce one scalar score. Can fast_transformer also be reused to accelerate it at deployment time? (A rough sketch of the single-forward-pass scorer we have in mind is below.)
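For concreteness, here is a minimal sketch of the kind of ranking model we mean: the fine-tuned PLATO backbone plus a linear head that maps a pooled hidden state to a single score, exported as a static graph for serving. This is not our production code; the pretrained name `plato-mini`, the hidden size of 768, and the assumption that `UnifiedTransformerModel.forward` returns the last hidden states of shape `[batch, seq_len, hidden_size]` are placeholders to be adjusted to the actual fine-tuned checkpoint.

```python
# Minimal sketch of the warm-started ranking model described above.
# Assumptions (not taken from the deployed system): pretrained name
# "plato-mini", hidden_size=768, and that UnifiedTransformerModel.forward
# returns the last hidden states with shape [batch, seq_len, hidden_size].
import paddle
import paddle.nn as nn
from paddlenlp.transformers import UnifiedTransformerModel


class PlatoRanker(nn.Layer):
    """PLATO backbone + linear head producing one relevance score per sample."""

    def __init__(self, pretrained_name="plato-mini", hidden_size=768):
        super().__init__()
        # PLATO is exposed in PaddleNLP as UnifiedTransformerModel.
        self.backbone = UnifiedTransformerModel.from_pretrained(pretrained_name)
        self.score_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, token_type_ids, position_ids, attention_mask):
        # Single forward pass over the (context, candidate) pair.
        hidden_states = self.backbone(
            input_ids=input_ids,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            attention_mask=attention_mask,
        )
        # Pool the last token's hidden state and project it to a scalar.
        pooled = hidden_states[:, -1, :]
        return self.score_head(pooled).squeeze(-1)


if __name__ == "__main__":
    ranker = PlatoRanker()
    ranker.eval()
    # Export the scorer to a static graph for serving. There is no
    # autoregressive decoding loop here, only one forward pass per sample.
    static_ranker = paddle.jit.to_static(
        ranker,
        input_spec=[
            paddle.static.InputSpec(shape=[None, None], dtype="int64"),  # input_ids
            paddle.static.InputSpec(shape=[None, None], dtype="int64"),  # token_type_ids
            paddle.static.InputSpec(shape=[None, None], dtype="int64"),  # position_ids
            paddle.static.InputSpec(shape=[None, 1, None, None], dtype="float32"),  # attention_mask
        ],
    )
    paddle.jit.save(static_ranker, "./plato_ranker/inference")
```

Exported this way, scoring is one static-graph forward pass per sample, which is the workload we want to deploy; our understanding is that fast_transformer mainly accelerates the generation decoding loop, so please correct us if it can also help here.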