
Commit 9351cd4

Your update 34B

1 parent b858f7e commit 9351cd4

10 files changed: +65 -7 lines changed

Readme.md

Lines changed: 4 additions & 4 deletions
@@ -7,7 +7,7 @@
[![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-apps%3Fmetric%3DIntroductory%2520Pass%25401)](https://paperswithcode.com/sota/code-generation-on-apps?metric=Introductory%20Pass%401/motcoder-elevating-large-language-models-with)
[![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-codecontests%3Fmetric%3DTest%2520Set%2520pass%25401)](https://paperswithcode.com/sota/code-generation-on-codecontests?metric=Test%20Set%20pass%401)

-Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness in tackling intricate questions. To overcome this limitation, we present Module-of-Thought Coder (MoTCoder). We introduce a framework for MoT instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, leading to substantial pass@1 improvements of 2.4% on APPS and 4.5% on CodeContests. MoTCoder also achieves significant improvements in self-correction capabilities, surpassing the current SOTA by 3.3%. Additionally, we provide an analysis of the relationship between problem complexity and optimal module decomposition and evaluate the maintainability index, confirming that the code generated by MoTCoder is easier to understand and modify, which can be beneficial for long-term code maintenance and evolution. Our code is available at https://github.com/dvlab-research/MoTCoder.
+Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness in tackling intricate questions. To overcome this limitation, we present Module-of-Thought Coder (MoTCoder). We introduce a framework for MoT instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, leading to substantial pass@1 improvements of 5.9% on APPS and 5.8% on CodeContests. MoTCoder also achieves significant improvements in self-correction capabilities, surpassing the current SOTA by 3.3%. Additionally, we provide an analysis of the relationship between problem complexity and optimal module decomposition and evaluate the maintainability index, confirming that the code generated by MoTCoder is easier to understand and modify, which can be beneficial for long-term code maintenance and evolution. Our code is available at https://github.com/dvlab-research/MoTCoder.

<div style="text-align: center;">
<img src="./imgs/impression.png" alt="impression" />
@@ -56,17 +56,17 @@ Download [MoTCoder-7B-v1.5](https://huggingface.co/JingyaoLi/MoTCoder-7B-v1.5) a
```
2. Run the following script to prepare the test data:
```bash
-python /mnt/nas-alinlp/ljy/MoTCoder/MoTCoder/eval/dataset/data.py
+python eval/dataset/data.py
```

### Run Evaluation
Execute the scripts below to perform the evaluation:
```bash
# For APPS and CodeContests datasets
-bash /mnt/nas-alinlp/ljy/MoTCoder/MoTCoder/eval/scripts/eval_motcoder_apps_codecontests.sh
+bash eval/scripts/eval_motcoder_apps_codecontests.sh

# For Fix
-bash /mnt/nas-alinlp/ljy/MoTCoder/MoTCoder/eval/scripts/eval_motcoder_fix.sh
+bash eval/scripts/eval_motcoder_fix.sh
```

## Training
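
Both hunks above replace machine-specific absolute paths (/mnt/nas-alinlp/...) with repository-relative ones, so the README commands now assume the repository root as the working directory. The data-preparation script itself is not part of this diff; the following is a minimal, hypothetical sketch of what a step like eval/dataset/data.py could do, assuming it pulls the APPS and CodeContests test splits from the Hugging Face Hub and writes one prompt per line. The dataset IDs and column names are assumptions; only the data/prompts/queries_cc.jsonl output path is taken from eval/scripts/eval_motcoder_reflection.sh later in this commit.

```python
# Hypothetical sketch of a test-data preparation step (not the repository's
# actual eval/dataset/data.py). Dataset IDs and column names are assumptions
# and may need adjusting to the installed `datasets` version.
import json
from pathlib import Path

from datasets import load_dataset  # pip install datasets

OUT_DIR = Path("data/prompts")
OUT_DIR.mkdir(parents=True, exist_ok=True)

def dump_prompts(prompts, path):
    """Write one {"prompt": ...} JSON object per line."""
    with open(path, "w") as f:
        for prompt in prompts:
            f.write(json.dumps({"prompt": prompt}) + "\n")

# APPS test split: the problem statement is in the "question" column.
apps = load_dataset("codeparrot/apps", split="test")
dump_prompts(apps["question"], OUT_DIR / "queries_apps.jsonl")

# CodeContests test split: the problem statement is in the "description" column.
cc = load_dataset("deepmind/code_contests", split="test")
dump_prompts(cc["description"], OUT_DIR / "queries_cc.jsonl")
```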

eval/scripts/eval_motcoder_apps_codecontests.sh

Lines changed: 0 additions & 2 deletions
@@ -4,8 +4,6 @@
model_list=(
models/MoTCoder-7B-v1.5
)
-cd /mnt/nas-alinlp/ljy/MoTCoder/
-
echo "$model_id/results/chat-template-queries.jsonl"

for model_id in "${model_list[@]}"; do
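
With the cd into /mnt/nas-alinlp removed, the script resolves models/MoTCoder-7B-v1.5 relative to the directory it is launched from, i.e. the repository root. One way to place the released checkpoint there is via huggingface_hub; this is a convenience sketch, and the README's own download instructions remain the authoritative path.

```python
# Download the released MoTCoder-7B-v1.5 checkpoint into the path expected by
# the evaluation script's model_list. Using snapshot_download here is a
# convenience sketch, not necessarily how the repository documents it.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="JingyaoLi/MoTCoder-7B-v1.5",
    local_dir="models/MoTCoder-7B-v1.5",
)
```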

eval/scripts/eval_motcoder_reflection.sh

Lines changed: 11 additions & 1 deletion
@@ -5,9 +5,19 @@ model_list=(
)
name=cc-chat-template-queries
fix_name=cc-chat-template-queries-reflect_n5
-cd /mnt/nas-alinlp/ljy/MoTCoder/
+
for model_id in "${model_list[@]}"; do
results_path=$model_id/results/$name-results.jsonl
+
+python /mnt/nas-alinlp/ljy/MoTCoder/eval/vllm_gen.py \
+--model_id "$model_id" \
+--data_path /mnt/nas-alinlp/ljy/MoTCoder/data/prompts/queries_cc.jsonl \
+--save_path "$model_id/results/$name.jsonl" \
+--key prompt \
+--batch_size 1000 \
+--apply_chat_template \
+--tensor_parallel_size 8
+
python /mnt/nas-alinlp/ljy/MoTCoder/eval/eval.py \
--data_path "$model_id/results/$name.jsonl" \
--save_path "$model_id/results/$name-metrics.jsonl" \
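
The new vllm_gen.py invocation drives batch generation before scoring. Its implementation is not part of this commit; below is a minimal, hypothetical sketch of a script exposing the same command-line surface (--model_id, --data_path, --save_path, --key, --batch_size, --apply_chat_template, --tensor_parallel_size), assuming it is built on vLLM's offline LLM API with a Hugging Face tokenizer for chat templating. Sampling settings and the output field name are assumptions.

```python
# Hypothetical sketch of a vllm_gen.py-style script; only the flag names are
# taken from the commit, everything else is an assumption.
import argparse
import json

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

def main():
    p = argparse.ArgumentParser()
    p.add_argument("--model_id", required=True)
    p.add_argument("--data_path", required=True)
    p.add_argument("--save_path", required=True)
    p.add_argument("--key", default="prompt")  # JSONL field holding the query text
    p.add_argument("--batch_size", type=int, default=1000)
    p.add_argument("--apply_chat_template", action="store_true")
    p.add_argument("--tensor_parallel_size", type=int, default=1)
    args = p.parse_args()

    rows = [json.loads(line) for line in open(args.data_path)]
    tok = AutoTokenizer.from_pretrained(args.model_id)
    llm = LLM(model=args.model_id, tensor_parallel_size=args.tensor_parallel_size)
    params = SamplingParams(temperature=0.0, max_tokens=2048)

    with open(args.save_path, "w") as out:
        # Submit prompts to vLLM in chunks so very large query files do not
        # have to be passed in a single generate() call.
        for i in range(0, len(rows), args.batch_size):
            chunk = rows[i : i + args.batch_size]
            prompts = [row[args.key] for row in chunk]
            if args.apply_chat_template:
                prompts = [
                    tok.apply_chat_template(
                        [{"role": "user", "content": q}],
                        tokenize=False,
                        add_generation_prompt=True,
                    )
                    for q in prompts
                ]
            for row, result in zip(chunk, llm.generate(prompts, params)):
                row["output"] = result.outputs[0].text
                out.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    main()
```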

imgs/apps.png

-103 KB

imgs/codecontests.png

69.6 KB

imgs/reflection.png

138 KB

train/configs/MoTCoder-32B-Instruct-sft-2e-06-mot.yaml

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+model_name_or_path: /mnt/nas-alinlp/ljy/models/Qwen2.5-Coder-32B-Instruct
+stage: sft
+do_train: true
+finetuning_type: full
+deepspeed: examples/deepspeed/ds_z3_config.json
+dataset: motcode
+template: qwen
+cutoff_len: 2048
+overwrite_cache: true
+preprocessing_num_workers: 16
+output_dir: /mnt/nas-alinlp/ljy/MoTCoder/output/Qwen2.5-Coder-32B-Instruct-sft-2e-06-mot
+logging_steps: 1
+save_strategy: steps
+plot_loss: true
+save_steps: 200
+per_device_train_batch_size: 4
+gradient_accumulation_steps: 4
+learning_rate: 2.0e-06
+num_train_epochs: 1
+lr_scheduler_type: cosine
+warmup_ratio: 0.03
+bf16: true
+ddp_timeout: 180000000
train/configs/MoTCoder-72B-Instruct-sft-2e-06-mot.yaml

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+model_name_or_path: /mnt/nas-alinlp/ljy/models/Qwen2.5-72B-Instruct
+stage: sft
+do_train: true
+finetuning_type: full
+deepspeed: examples/deepspeed/ds_z3_config.json
+dataset: motcode
+template: qwen
+cutoff_len: 2048
+overwrite_cache: true
+preprocessing_num_workers: 16
+output_dir: /mnt/nas-alinlp/ljy/MoTCoder/output/Qwen2.5-72B-Instruct-sft-2e-06-mot
+logging_steps: 1
+save_strategy: steps
+plot_loss: true
+save_steps: 200
+per_device_train_batch_size: 4
+gradient_accumulation_steps: 4
+learning_rate: 2.0e-06
+num_train_epochs: 1
+lr_scheduler_type: cosine
+warmup_ratio: 0.03
+bf16: true
+ddp_timeout: 180000000
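
Both YAML files are LLaMA-Factory full-parameter SFT recipes and differ only in model_name_or_path and output_dir. Because per_device_train_batch_size and gradient_accumulation_steps are both 4, the effective global batch size depends on the GPU count, which this commit does not record; a quick back-of-the-envelope check, assuming a single 8-GPU node:

```python
# Effective global batch size for the SFT configs above.
# The GPU count is an assumption (single 8-GPU node), not part of the commit.
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
num_gpus = 8  # assumed

global_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(global_batch_size)  # 128 sequences per optimizer step
```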

train/scripts/run_32b.sh

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+
+llamafactory-cli train train/configs/MoTCoder-32B-Instruct-sft-2e-06-mot.yaml

train/scripts/run_72b.sh

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+
+llamafactory-cli train train/configs/MoTCoder-72B-Instruct-sft-2e-06-mot.yaml
