
Commit 5ff5952

Merge pull request alexrudall#269 from ch0ngxian/main
Move audio-related methods from Client.rb to Audio.rb
2 parents ca2ce46 + b386494 commit 5ff5952

10 files changed: +93 -67 lines

CHANGELOG.md

Lines changed: 12 additions & 0 deletions
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [5.0.0] - 2023-08-14
+
+### Added
+
+- Support multi-tenant use of the gem! Each client now holds its own config, so you can create unlimited clients in the same project, for example to Azure and OpenAI, or for different headers, access keys, etc.
+- [BREAKING-ish] This change should only break your usage of ruby-openai if you are directly calling class methods like `OpenAI::Client.get` for some reason, as they are now instance methods. Normal usage of the gem should be unaffected, just you can make new clients and they'll keep their own config if you want, overriding the global config.
+- Huge thanks to [@petergoldstein](https://github.com/petergoldstein) for his original work on this, [@cthulhu](https://github.com/cthulhu) for testing and many others for reviews and suggestions.
+
+### Changed
+
+- [BREAKING] Move audio related method to Audio model from Client model. You will need to update your code to handle this change, changing `client.translate` to `client.audio.translate` and `client.transcribe` to `client.audio.transcribe`.
+
 ## [4.3.2] - 2023-08-14
 
 ### Fixed
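To make the two 5.0.0 entries above concrete, here is a minimal sketch of per-client configuration and the relocated audio call. The access tokens, the second client's `uri_base`, and the file path are placeholders rather than values from this commit, and the sketch assumes the same configuration keys the gem's global configuration already exposes.

```ruby
require "openai"

# Two clients in the same process, each holding its own config
# (placeholder credentials and endpoint for illustration only).
openai_client = OpenAI::Client.new(access_token: "sk-placeholder-openai-token")
other_client = OpenAI::Client.new(
  access_token: "placeholder-azure-key",
  uri_base: "https://example-resource.openai.azure.com/"
)

# Audio endpoints now live under the audio namespace (the [BREAKING] change):
response = openai_client.audio.transcribe(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb")
  }
)
puts response["text"]
```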

Gemfile.lock

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (4.3.2)
+    ruby-openai (5.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)

README.md

Lines changed: 2 additions & 2 deletions
@@ -417,7 +417,7 @@ Whisper is a speech to text model that can be used to generate text based on aud
 The translations API takes as input the audio file in any of the supported languages and transcribes the audio into English.
 
 ```ruby
-response = client.translate(
+response = client.audio.translate(
     parameters: {
         model: "whisper-1",
         file: File.open("path_to_file", "rb"),
@@ -431,7 +431,7 @@ puts response["text"]
 The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
 
 ```ruby
-response = client.transcribe(
+response = client.audio.transcribe(
     parameters: {
         model: "whisper-1",
         file: File.open("path_to_file", "rb"),

lib/openai.rb

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@
 require_relative "openai/finetunes"
 require_relative "openai/images"
 require_relative "openai/models"
+require_relative "openai/audio"
 require_relative "openai/version"
 
 module OpenAI

lib/openai/audio.rb

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+module OpenAI
+  class Audio
+    def initialize(client:)
+      @client = client
+    end
+
+    def transcribe(parameters: {})
+      @client.multipart_post(path: "/audio/transcriptions", parameters: parameters)
+    end
+
+    def translate(parameters: {})
+      @client.multipart_post(path: "/audio/translations", parameters: parameters)
+    end
+  end
+end
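The new class is a thin wrapper: both methods forward to the owning client's `multipart_post` with the matching endpoint path, so per-client configuration still applies. A short sketch of the two equivalent ways to reach it (token and file path are placeholders); the `client.audio` accessor is added in the next hunk.

```ruby
client = OpenAI::Client.new(access_token: "sk-placeholder-token")

# Via the memoized accessor added to Client in this commit...
response = client.audio.translate(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb")
  }
)

# ...or by building the wrapper by hand; both end up calling
# client.multipart_post(path: "/audio/translations", parameters: ...).
audio = OpenAI::Audio.new(client: client)
response = audio.translate(
  parameters: {
    model: "whisper-1",
    file: File.open("path_to_file", "rb")
  }
)
puts response["text"]
```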

lib/openai/client.rb

Lines changed: 4 additions & 8 deletions
@@ -37,6 +37,10 @@ def embeddings(parameters: {})
       json_post(path: "/embeddings", parameters: parameters)
     end
 
+    def audio
+      @audio ||= OpenAI::Audio.new(client: self)
+    end
+
     def files
       @files ||= OpenAI::Files.new(client: self)
     end
@@ -57,14 +61,6 @@ def moderations(parameters: {})
       json_post(path: "/moderations", parameters: parameters)
     end
 
-    def transcribe(parameters: {})
-      multipart_post(path: "/audio/transcriptions", parameters: parameters)
-    end
-
-    def translate(parameters: {})
-      multipart_post(path: "/audio/translations", parameters: parameters)
-    end
-
     def azure?
       @api_type&.to_sym == :azure
     end
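The `audio` accessor follows the same memoized pattern as the `files` accessor visible in the same hunk: `@audio ||= ...` builds one wrapper per client and reuses it. A quick sketch of what that means for callers (token is a placeholder):

```ruby
client = OpenAI::Client.new(access_token: "sk-placeholder-token")

# Repeated calls return the same OpenAI::Audio instance for this client.
client.audio.equal?(client.audio) # => true

# The top-level helpers removed above are gone, so on 5.0.0 the old call
# raises NoMethodError; use client.audio.transcribe instead.
# client.transcribe(parameters: { model: "whisper-1", file: File.open("path_to_file", "rb") })
```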

lib/openai/version.rb

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "4.3.2".freeze
+  VERSION = "5.0.0".freeze
 end

spec/fixtures/cassettes/whisper-1_translate.yml renamed to spec/fixtures/cassettes/audio_whisper-1_transcribe.yml

Lines changed: 8 additions & 8 deletions

spec/fixtures/cassettes/whisper-1_transcribe.yml renamed to spec/fixtures/cassettes/audio_whisper-1_translate.yml

Lines changed: 8 additions & 8 deletions

spec/openai/client/audio_spec.rb

Lines changed: 41 additions & 39 deletions
@@ -1,54 +1,56 @@
 RSpec.describe OpenAI::Client do
-  describe "#transcribe" do
-    context "with audio", :vcr do
-      let(:filename) { "audio_sample.mp3" }
-      let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
+  describe "#audio" do
+    describe "#transcribe" do
+      context "with audio", :vcr do
+        let(:filename) { "audio_sample.mp3" }
+        let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
 
-      let(:response) do
-        OpenAI::Client.new.transcribe(
-          parameters: {
-            model: model,
-            file: File.open(audio, "rb")
-          }
-        )
-      end
-      let(:content) { response["text"] }
-      let(:cassette) { "#{model} transcribe".downcase }
+        let(:response) do
+          OpenAI::Client.new.audio.transcribe(
+            parameters: {
+              model: model,
+              file: File.open(audio, "rb")
+            }
+          )
+        end
+        let(:content) { response["text"] }
+        let(:cassette) { "audio #{model} transcribe".downcase }
 
-      context "with model: whisper-1" do
-        let(:model) { "whisper-1" }
+        context "with model: whisper-1" do
+          let(:model) { "whisper-1" }
 
-        it "succeeds" do
-          VCR.use_cassette(cassette) do
-            expect(content.empty?).to eq(false)
+          it "succeeds" do
+            VCR.use_cassette(cassette) do
+              expect(content.empty?).to eq(false)
+            end
           end
         end
       end
     end
-  end
 
-  describe "#translate" do
-    context "with audio", :vcr do
-      let(:filename) { "audio_sample.mp3" }
-      let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
+    describe "#translate" do
+      context "with audio", :vcr do
+        let(:filename) { "audio_sample.mp3" }
+        let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
 
-      let(:response) do
-        OpenAI::Client.new.translate(
-          parameters: {
-            model: model,
-            file: File.open(audio, "rb")
-          }
-        )
-      end
-      let(:content) { response["text"] }
-      let(:cassette) { "#{model} translate".downcase }
+        let(:response) do
+          OpenAI::Client.new.audio.translate(
+            parameters: {
+              model: model,
+              file: File.open(audio, "rb")
+            }
+          )
+        end
+        let(:content) { response["text"] }
+        let(:cassette) { "audio #{model} translate".downcase }
 
-      context "with model: whisper-1" do
-        let(:model) { "whisper-1" }
+        context "with model: whisper-1" do
+          let(:model) { "whisper-1" }
 
-        it "succeeds" do
-          VCR.use_cassette(cassette) do
-            expect(content.empty?).to eq(false)
+          it "succeeds" do
+            VCR.use_cassette(cassette) do
+              expect(content.empty?).to eq(false)
+            end
           end
         end
       end
