Commit e30f5c2

Merge pull request alexrudall#344 from atesgoral/raise-exceptions

Raise exceptions

2 parents 181ffa5 + 37df302

9 files changed: +150 -382 lines

README.md

Lines changed: 18 additions & 6 deletions
````diff
@@ -161,9 +161,9 @@ client.models.retrieve(id: "text-ada-001")
 - text-babbage-001
 - text-curie-001
 
-### ChatGPT
+### Chat
 
-ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
+GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
 ```ruby
 response = client.chat(
````
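The hunk cuts off at the start of the code block; for reference, here is a minimal sketch of the chat call it opens, with parameters mirroring the README's own example (treat the model name and values as illustrative):

```ruby
client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])

response = client.chat(
  parameters: {
    model: "gpt-3.5-turbo", # illustrative model name
    messages: [{ role: "user", content: "Hello!" }],
    temperature: 0.7
  }
)
puts response.dig("choices", 0, "message", "content")
# => "Hello! How may I assist you today?"
```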
````diff
@@ -176,11 +176,11 @@ puts response.dig("choices", 0, "message", "content")
 # => "Hello! How may I assist you today?"
 ```
 
-### Streaming ChatGPT
+### Streaming Chat
 
-[Quick guide to streaming ChatGPT with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
+[Quick guide to streaming Chat with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
 
-You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of text chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will pass that to your proc as a Hash.
+You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of completion chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will raise a Faraday error.
 
 ```ruby
 client.chat(
````
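This hunk also truncates its example; here is a hedged sketch of the streaming call the rewritten paragraph describes, reusing the `client` from the sketch above (model and prompt are illustrative):

```ruby
client.chat(
  parameters: {
    model: "gpt-3.5-turbo", # illustrative model name
    messages: [{ role: "user", content: "Describe a character called Anna!" }],
    # The proc receives each chunk parsed as a Hash; per this commit, HTTP
    # errors now raise Faraday errors instead of reaching the proc.
    stream: proc do |chunk, _bytesize|
      print chunk.dig("choices", 0, "delta", "content")
    end
  }
)
```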
````diff
@@ -195,7 +195,7 @@ client.chat(
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```
 
-Note: the API docs state that token usage is included in the streamed chat chunk objects, but this doesn't currently appear to be the case. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).
+Note: the OpenAI API currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). We think that each call to the stream proc corresponds to a single token, so you can also try counting the number of calls to the proc to get the completion token count.
 
 ### Functions
 
````
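A sketch of the counting idea from the rewritten note, under its own stated assumption that each call to the stream proc corresponds to one token:

```ruby
# Rough completion token count; assumes one proc call per content token,
# which the README note itself presents as a guess.
token_count = 0

client.chat(
  parameters: {
    model: "gpt-3.5-turbo", # illustrative model name
    messages: [{ role: "user", content: "Hello!" }],
    stream: proc do |chunk, _bytesize|
      token_count += 1 if chunk.dig("choices", 0, "delta", "content")
    end
  }
)

puts "~#{token_count} completion tokens"
```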

````diff
@@ -455,6 +455,18 @@ puts response["text"]
 # => "Transcription of the text"
 ```
 
+#### Errors
+
+HTTP errors can be caught like this:
+
+```
+begin
+  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+rescue Faraday::Error => e
+  raise "Got a Faraday error: #{e}"
+end
+```
+
 ## Development
 
 After checking out the repo, run `bin/setup` to install dependencies. You can run `bin/console` for an interactive prompt that will allow you to experiment.
````
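A usage note beyond what the diff adds: `Faraday::Error` carries the HTTP response, so a rescue block can inspect it rather than re-raise. In the Faraday versions we have checked, `e.response` is a Hash with `:status` and `:body` keys; treat that exact shape as an assumption:

```ruby
begin
  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
rescue Faraday::Error => e
  # Assumed shape: { status: 404, headers: {...}, body: "..." }.
  warn "HTTP #{e.response[:status]}: #{e.response[:body]}" if e.response
  raise
end
```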

lib/openai/http.rb

Lines changed: 7 additions & 13 deletions
```diff
@@ -50,34 +50,28 @@ def to_json(string)
     # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
     # be a data object or an error object as described in the OpenAI API documentation.
     #
-    # If the JSON object for a given data or error message is invalid, it is ignored.
-    #
     # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
     # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
     def to_json_stream(user_proc:)
       parser = EventStreamParser::Parser.new
 
       proc do |chunk, _bytes, env|
         if env && env.status != 200
-          emit_json(json: chunk, user_proc: user_proc)
-        else
-          parser.feed(chunk) do |_type, data|
-            emit_json(json: data, user_proc: user_proc) unless data == "[DONE]"
-          end
+          raise_error = Faraday::Response::RaiseError.new
+          raise_error.on_complete(env.merge(body: JSON.parse(chunk)))
         end
-      end
-    end
 
-    def emit_json(json:, user_proc:)
-      user_proc.call(JSON.parse(json))
-    rescue JSON::ParserError
-      # Ignore invalid JSON.
+        parser.feed(chunk) do |_type, data|
+          user_proc.call(JSON.parse(data)) unless data == "[DONE]"
+        end
+      end
     end
 
     def conn(multipart: false)
       Faraday.new do |f|
         f.options[:timeout] = @request_timeout
         f.request(:multipart) if multipart
+        f.response :raise_error
       end
     end
 
```
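The net effect of this hunk: non-200 responses now raise Faraday exceptions on both paths. Plain requests get the `:raise_error` middleware on the connection, and the streaming path invokes `Faraday::Response::RaiseError#on_complete` by hand, presumably so the raised error carries the parsed JSON error body that the middleware would otherwise never see. A hedged sketch of the resulting calling pattern (model and prompt are illustrative):

```ruby
begin
  client.chat(
    parameters: {
      model: "gpt-3.5-turbo", # illustrative model name
      messages: [{ role: "user", content: "Hello!" }],
      stream: proc { |chunk, _bytesize| print chunk.dig("choices", 0, "delta", "content") }
    }
  )
rescue Faraday::Error => e
  # After this commit, API errors surface as exceptions even mid-stream.
  warn "OpenAI request failed: #{e.message}"
end
```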

spec/fixtures/cassettes/finetune_completions_i_love_mondays.yml

Lines changed: 0 additions & 175 deletions
This file was deleted.

spec/fixtures/cassettes/finetune_completions_i_love_mondays_create.yml

Lines changed: 0 additions & 125 deletions
This file was deleted.
