Insights: PaddlePaddle/PaddleNLP
Overview
- 9 Merged pull requests
- 5 Open pull requests
- 2 Closed issues
- 0 New issues
9 Pull requests merged by 7 people
- [RL] fix bug in qwen fuse qkv (#10821, merged Jul 6, 2025)
- Move setup_fp8.py to FleetY (#10820, merged Jul 6, 2025)
- [RL] disable aistudio download and fix qwen bug (#10819, merged Jul 5, 2025)
- Move FP8 ops to FleetY branch (#10803, merged Jul 5, 2025)
- support dispatch both bf16 & fp8 (#10817, merged Jul 4, 2025)
- [AutoParallel] close dynamic sharding CI test (#10791, merged Jul 4, 2025)
- [LLM] fix_qwen3_moe (#10801, merged Jul 4, 2025)
- Set FP8Linear weight update by inplace add (#10813, merged Jul 4, 2025)
- Fix nccl ut (#10812, merged Jul 4, 2025)
5 Pull requests opened by 5 people
- dispatch support fp8 (#10814, opened Jul 4, 2025)
- Add recompute for post_norm and moe_gate (#10815, opened Jul 4, 2025)
- [Auto-parallel] Fix sp_async and mp_async used together (#10816, opened Jul 4, 2025)
- Fix a series of qwen3moe errors (justin-0704) (#10818, opened Jul 4, 2025)
- Add tokens_zip_unique_add_subbatch and merge_subbatch_cast ops (#10822, opened Jul 6, 2025)
2 Issues closed by 1 person
- [Question]: slm/model_zoo/ernie-3.0-tiny: how should it be modified to handle multiple intents? (#10478, closed Jul 7, 2025)
- [Question]: predict_pointwise in text_matching returns different results on every run (#10459, closed Jul 6, 2025)
13 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- [Bug]: Following the new PP-UIE documentation, running the LLM fine-tuning script llm/run-finetune.py fails with "No module named 'paddlenlp.datasets.json'" (#10543, commented on Jul 6, 2025 • 0 new comments)
- [Question]: Running on GPU fails with: terminate called after throwing an instance of 'thrust::system::system_error' (#10535, commented on Jul 6, 2025 • 0 new comments)
- [Question]: In hierarchical classification, some inference results never reach the lowest level (#10557, commented on Jul 7, 2025 • 0 new comments)
- Dsv3 dev (#10273, commented on Jul 4, 2025 • 0 new comments)
- [LLM] add fuse attention options to LlmMetaConfig (#10542, commented on Jul 6, 2025 • 0 new comments)
- Add gpt3 13b dynamic auto benchmark (#10548, commented on Jul 7, 2025 • 0 new comments)
- Add llama-13b dynamic auto benchmark (#10549, commented on Jul 7, 2025 • 0 new comments)
- [LLM] Modify fuse layout (#10555, commented on Jul 7, 2025 • 0 new comments)
- [Inference] Add new wint2.75/wint2.5 quant type and support DeepseekV3 (#10578, commented on Jul 4, 2025 • 0 new comments)
- add auto_parallel context_parallel strategy (#10722, commented on Jul 4, 2025 • 0 new comments)
- Fix the _save function so that it can save the optimizer parameters (#10789, commented on Jul 7, 2025 • 0 new comments)
- [AutoParallel] Open dynamic sharding CI test (#10793, commented on Jul 4, 2025 • 0 new comments)
- [Auto Parallel] change fused_layers.py to support fused_linear_grad_add/sync_mp_allreduce/sp_async_reduce_scatter at the same time (#10797, commented on Jul 5, 2025 • 0 new comments)