[INTEL_HPU] add e2e auto test cases #10108
Conversation
Thanks for your contribution!
Codecov Report

All modified and coverable lines are covered by tests ✅

❌ Your project status has failed because the head coverage (49.97%) is below the target coverage (58.00%). You can increase the head coverage or adjust the target coverage.

```
@@            Coverage Diff             @@
##           develop   #10108      +/-   ##
===========================================
+ Coverage    49.93%   49.97%   +0.04%
===========================================
  Files          761      757       -4
  Lines       124178   122498    -1680
===========================================
- Hits         62005    61219     -786
+ Misses       62173    61279     -894
===========================================
```

☔ View full report in Codecov by Sentry.
```shell
unset FLAGS_selected_intel_hpus
export GC_KERNEL_PATH=/workspace/pdpd_automation/repo/PaddleCustomDevice/backends/intel_hpu/build/libcustom_tpc_perf_lib.so:/usr/lib/habanalabs/libtpc_kernels.so

python e2e-test-run.py --context pr --data /data/ckpt/ --filter stable --device intel_hpu --junit test_result.xml --platform gaudi2d
python e2e-test-run.py --context bat --data /data/ckpt/ --filter stable --device intel_hpu --junit test_result.xml --platform gaudi2d
python e2e-test-run.py --context sanity --data /data/ckpt/ --filter stable --device intel_hpu --junit test_result.xml --platform gaudi2d
python e2e-test-run.py --context sanity --data /data/ckpt/ --filter stable --device intel_hpu:2 --junit test_result.xml --platform gaudi2d

export PYTHONPATH=$PYTHONPATH:/workspace/pdpd_automation/repo/PaddleNLP/
export FLAGS_intel_hpu_execution_queue_size=10

python export_model.py --model_name_or_path /data/ckpt/meta-llama/Llama-2-7b-chat/ --inference_model --output_path ./inference --dtype bfloat16 --device intel_hpu
python e2e-test-run.py --context sanity --data /data/ckpt/ --filter stable --device intel_hpu:2 --mode static --junit test_result.xml --platform gaudi2d
```

Signed-off-by: Luo, Focus <[email protected]>
PDPD pdpd paddlenlp E2E stable test cases: COMPLETED
PDPD pdpd paddlenlp E2E multi-prompts stable test cases: COMPLETED
LGTM
@vivienfanghuagood @ZHUI
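Each `e2e-test-run.py` invocation above writes a JUnit report via `--junit test_result.xml`. As a minimal sketch of how such a report can be summarized (e.g. for the COMPLETED/failed status lines in CI), assuming the standard JUnit XML layout of `testsuite`/`testcase` elements with `failure`/`error` children; the sample payload and test names below are invented for illustration:

```python
# Hypothetical sketch: summarize a JUnit XML report such as the
# test_result.xml emitted by e2e-test-run.py. The element names follow
# the common JUnit schema; SAMPLE is made-up data, not real PR output.
import xml.etree.ElementTree as ET

SAMPLE = """<testsuite name="e2e" tests="3" failures="1">
  <testcase classname="e2e.stable" name="llama_bf16_sampling" time="15.1"/>
  <testcase classname="e2e.stable" name="llama_bf16_greedy" time="12.3"/>
  <testcase classname="e2e.stable" name="llama_static_export" time="9.8">
    <failure message="output mismatch"/>
  </testcase>
</testsuite>"""

def summarize(xml_text):
    """Return total case count and the names of failed/errored cases."""
    root = ET.fromstring(xml_text)
    cases = root.findall(".//testcase")
    failed = [
        c.get("name")
        for c in cases
        if c.find("failure") is not None or c.find("error") is not None
    ]
    return {"total": len(cases), "failed": failed}

print(summarize(SAMPLE))  # {'total': 3, 'failed': ['llama_static_export']}
```

A summary like this is enough to decide whether to mark the run COMPLETED or surface the failing case names in the PR comment.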