On an i5 laptop with 16 GB of RAM, I ran the following command for `finetune.py` in the `PaddleNLP-develop\PaddleNLP-develop\applications\information_extraction\text` directory:

```shell
python finetune.py --device cpu --logging_steps 10 --save_steps 100 --eval_steps 100 --seed 42 --model_name_or_path uie-base --output_dir ./checkpoint/model_best --train_path data/train.txt --dev_path data/dev.txt --max_seq_len 512 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --num_train_epochs 20 --learning_rate 1e-5 --do_train --do_eval --do_export --export_model_dir ./checkpoint/model_best --overwrite_output_dir --disable_tqdm True --metric_for_best_model eval_f1 --load_best_model_at_end True --save_total_limit 1
```

Training hangs after printing the following output and makes no further progress:

```
[2023-04-06 00:06:14,583] [ INFO] - ***** Running training *****
[2023-04-06 00:06:14,583] [ INFO] - Num examples = 1076
[2023-04-06 00:06:14,583] [ INFO] - Num Epochs = 20
[2023-04-06 00:06:14,583] [ INFO] - Instantaneous batch size per device = 16
[2023-04-06 00:06:14,583] [ INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2023-04-06 00:06:14,583] [ INFO] - Gradient Accumulation steps = 1
[2023-04-06 00:06:14,583] [ INFO] - Total optimization steps = 1360.0
[2023-04-06 00:06:14,583] [ INFO] - Total num train samples = 21520.0
[2023-04-06 00:06:14,599] [ INFO] - Number of trainable parameters = 117946370
```
**wawltor:** What versions of paddle and paddlenlp are you using? Please upgrade them and try again.
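To act on the suggestion above, the installed versions can be checked without even importing the frameworks. A minimal sketch (the `pkg_version` helper is hypothetical; `paddlepaddle` and `paddlenlp` are the usual pip distribution names):

```python
from importlib import metadata

def pkg_version(name: str) -> str:
    """Return the installed version of a pip package, or a placeholder."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("paddlepaddle", "paddlenlp"):
    print(f"{pkg}: {pkg_version(pkg)}")
```

Alternatively, `python -c "import paddle; print(paddle.__version__)"` reports the framework version directly once it is installed.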