# 💫 StarCoder

💫 StarCoder is a language model (LM) trained on source code and natural-language text. Its training data incorporates more than 80 programming languages as well as text extracted from GitHub issues, commits, and notebooks. This repository showcases how to get an overview of this LM's capabilities.
# Table of Contents
1. [Quickstart](#quickstart)
    - [Installation](#installation)
    - [Code generation with StarCoder](#code-generation)
2. [Fine-tuning](#fine-tuning)
    - [Step by step installation with conda](#step-by-step-installation-with-conda)
# Quickstart

StarCoder was trained on GitHub code, so it can be used to perform text generation, that is, completing the implementation of a function or inferring the following characters in a line of code. This can be done with the help of the `transformers` library.
## Installation
First, install all the libraries listed in `requirements.txt`:
```bash
pip install -r requirements.txt
```
## Code generation
The code generation pipeline is as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
```
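With the model and tokenizer loaded, generation is a single call. Below is a minimal usage sketch; the prompt `def print_hello_world():` and the `max_new_tokens` value are illustrative choices, not settings prescribed by this README:

```python
# Encode a code prompt, generate a completion, and decode it back to text
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```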
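If you are unsure whether a GPU is available at runtime, one common pattern is to pick the device dynamically. This is a small sketch using standard PyTorch, not part of the original pipeline above:

```python
import torch

# Fall back to CPU when no CUDA device is present
device = "cuda" if torch.cuda.is_available() else "cpu"
```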