
Commit 21bc231

use alert formatting for notes in readme
1 parent 8710ec2 · commit 21bc231

File tree

1 file changed: +13 −7 lines changed


README.md

Lines changed: 13 additions & 7 deletions
@@ -98,7 +98,8 @@ Throughout the entire training process, we did not experience any irrecoverable
 
 </div>
 
-**NOTE: The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.**
+> [!NOTE]
+> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
 
 To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How_to Run_Locally](#6-how-to-run-locally).
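For reference, the syntax this commit adopts is GitHub-flavored Markdown's blockquote alerts. A minimal sketch of the five alert types GitHub renders (descriptions paraphrased; only `NOTE` is used in this commit):

```markdown
> [!NOTE]
> Useful information that users should know, even when skimming.

> [!TIP]
> Helpful advice for doing things better or more easily.

> [!IMPORTANT]
> Key information users need in order to achieve their goal.

> [!WARNING]
> Urgent information that needs immediate attention to avoid problems.

> [!CAUTION]
> Advice about risks or negative outcomes of certain actions.
```

A nice property of this syntax: renderers that do not support the extension degrade to a plain blockquote whose first line is the literal text `[!NOTE]`, so the note body stays readable outside GitHub as well.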

@@ -151,8 +152,9 @@ For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md
 
 </div>
 
-Note: Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks.
-For more evaluation details, please check our paper.
+> [!NOTE]
+> Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks.
+> For more evaluation details, please check our paper.
 
 #### Context Window
 <p align="center">
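The hunk above also folds two adjacent lines into a single alert. GitHub treats consecutive `>`-prefixed lines as one blockquote, so an alert body can span multiple lines; a minimal sketch with placeholder text (not from the README):

```markdown
> [!NOTE]
> First line of the note.
> A continuation line that renders inside the same alert box.
```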
@@ -193,10 +195,11 @@ Evaluation results on the ``Needle In A Haystack`` (NIAH) tests. DeepSeek-V3 pe
 | | C-Eval (EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | **86.5** |
 | | C-SimpleQA (Correct) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | **64.8** |
 
-Note: All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
-
 </div>
 
+> [!NOTE]
+> All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
+
 
 #### Open Ended Generation Evaluation

@@ -213,9 +216,11 @@ Note: All models are evaluated in a configuration that limits the output length
 | Claude-Sonnet-3.5-1022 | 85.2 | 52.0 |
 | DeepSeek-V3 | **85.5** | **70.0** |
 
-Note: English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
 </div>
 
+> [!NOTE]
+> English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
+
 
 ## 5. Chat Website & API Platform
 You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in)
@@ -243,7 +248,8 @@ cd inference
 python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
 ```
 
-**NOTE: Hugging Face's Transformers has not been directly supported yet.**
+> [!NOTE]
+> Hugging Face's Transformers has not been directly supported yet.
 
 ### 6.1 Inference with DeepSeek-Infer Demo (example only)
