Commit 024309b

Merge pull request #469 from alexrudall/assistants-readme-fixes

Assistants README fixes

2 parents: 89b72ba + 2545871

File tree

1 file changed: +21 -18 lines

README.md

Lines changed: 21 additions & 18 deletions
@@ -101,7 +101,10 @@ require "openai"
 For a quick test you can pass your token directly to a new client:
 
 ```ruby
-client = OpenAI::Client.new(access_token: "access_token_goes_here")
+client = OpenAI::Client.new(
+  access_token: "access_token_goes_here",
+  log_errors: true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
+)
 ```
 
 ### With Config
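As a usage note on the new initializer: a minimal sketch of how it reads with a real key pulled from the environment (the `OPENAI_API_KEY` variable name is an assumption for this example):

```ruby
require "openai"

# Sketch only: same shape as the updated README example, with the key
# read from an assumed OPENAI_API_KEY environment variable.
client = OpenAI::Client.new(
  access_token: ENV.fetch("OPENAI_API_KEY"),
  log_errors: true # Development only: prints errors returned by OpenAI.
)
```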
@@ -110,8 +113,9 @@ For a more robust setup, you can configure the gem with your API keys, for examp
 
 ```ruby
 OpenAI.configure do |config|
-    config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
-    config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+  config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+  config.log_errors = true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
 end
 ```
 
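Once configured globally like this, clients pick the settings up automatically, so a hedged sketch of the call site is simply:

```ruby
require "openai"

# Assumes the OpenAI.configure block above has already run (e.g. in a
# Rails initializer); the client inherits access_token, organization_id
# and log_errors from the global config.
client = OpenAI::Client.new
```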
@@ -342,12 +346,12 @@ puts response.dig("choices", 0, "message", "content")
 
 #### JSON Mode
 
-You can set the response_format to ask for responses in JSON (at least for `gpt-3.5-turbo-1106`):
+You can set the response_format to ask for responses in JSON:
 
 ```ruby
 response = client.chat(
   parameters: {
-    model: "gpt-3.5-turbo-1106",
+    model: "gpt-3.5-turbo",
     response_format: { type: "json_object" },
     messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
     temperature: 0.7,
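Since `json_object` mode returns the JSON as a string inside the message content, a minimal sketch of extracting and parsing it (stdlib `json` only; assumes `response` is the Hash returned by `client.chat` above):

```ruby
require "json"

# The model's reply arrives as a JSON-encoded string in the message content.
raw = response.dig("choices", 0, "message", "content")
data = JSON.parse(raw)
puts data.inspect
```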
@@ -367,7 +371,7 @@ You can stream it as well!
 ```ruby
 response = client.chat(
   parameters: {
-    model: "gpt-3.5-turbo-1106",
+    model: "gpt-3.5-turbo",
     messages: [{ role: "user", content: "Can I have some JSON please?"}],
     response_format: { type: "json_object" },
     stream: proc do |chunk, _bytesize|
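For the streaming variant, the usual pattern is to accumulate the content deltas from each chunk and parse once the stream ends; a sketch under the assumption that `client` is configured as above:

```ruby
require "json"

# Collect streamed deltas into one buffer, then parse the finished JSON.
buffer = +""
client.chat(
  parameters: {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Can I have some JSON please?" }],
    response_format: { type: "json_object" },
    stream: proc do |chunk, _bytesize|
      buffer << (chunk.dig("choices", 0, "delta", "content") || "")
    end
  }
)
puts JSON.parse(buffer).inspect
```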
@@ -564,7 +568,7 @@ These files are in JSONL format, with each line representing the output or error
   "id": "chatcmpl-abc123",
   "object": "chat.completion",
   "created": 1677858242,
-  "model": "gpt-3.5-turbo-0301",
+  "model": "gpt-3.5-turbo",
   "choices": [
     {
       "index": 0,
@@ -660,16 +664,19 @@ To create a new assistant:
 ```ruby
 response = client.assistants.create(
     parameters: {
-        model: "gpt-3.5-turbo-1106", # Retrieve via client.models.list. Assistants need 'gpt-3.5-turbo-1106' or later.
+        model: "gpt-3.5-turbo",
         name: "OpenAI-Ruby test assistant",
         description: nil,
-        instructions: "You are a helpful assistant for coding a OpenAI API client using the OpenAI-Ruby gem.",
+        instructions: "You are a Ruby dev bot. When asked a question, write and run Ruby code to answer the question",
         tools: [
-          { type: 'retrieval' }, # Allow access to files attached using file_ids
-          { type: 'code_interpreter' }, # Allow access to Python code interpreter
+          { type: "code_interpreter" },
         ],
-        "file_ids": ["file-123"], # See Files section above for how to upload files
-        "metadata": { my_internal_version_id: '1.0.0' }
+        tool_resources: {
+          "code_interpreter": {
+            "file_ids": [] # See Files section above for how to upload files
+          }
+        },
+        "metadata": { my_internal_version_id: "1.0.0" }
     })
 assistant_id = response["id"]
 ```
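As a quick usage check, the new assistant can be read back by id; a sketch assuming the `client` and `assistant_id` from the example above:

```ruby
# Retrieve the assistant we just created and inspect its configuration.
assistant = client.assistants.retrieve(id: assistant_id)
puts assistant["name"]          # => "OpenAI-Ruby test assistant"
puts assistant["tools"].inspect # => [{"type"=>"code_interpreter"}]
```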
@@ -851,11 +858,7 @@ client.runs.list(thread_id: thread_id, parameters: { order: "asc", limit: 3 })
 You can also create a thread and run in one call like this:
 
 ```ruby
-response = client.threads.create_and_run(
-    parameters: {
-        model: 'gpt-3.5-turbo',
-        messages: [{ role: 'user', content: "What's deep learning?"}]
-    })
+response = client.runs.create_thread_and_run(parameters: { assistant_id: assistant_id })
 run_id = response['id']
 thread_id = response['thread_id']
 ```
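Since `create_thread_and_run` returns immediately, the run still has to be polled to completion before the reply can be read; a minimal sketch assuming the `client`, `run_id` and `thread_id` from the example above:

```ruby
# Poll until the run reaches a terminal status, then read the latest message.
loop do
  status = client.runs.retrieve(thread_id: thread_id, id: run_id)["status"]
  break if %w[completed failed cancelled expired].include?(status)
  sleep 1
end

messages = client.messages.list(thread_id: thread_id)
puts messages.dig("data", 0, "content", 0, "text", "value")
```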
