Commit 433aa68 — Rename Audios to Audio

1 parent 1cd4be9

9 files changed: 29 additions, 29 deletions

CHANGELOG.md (1 addition, 1 deletion)

```diff
@@ -9,7 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### Changed
 
-- [BREAKING] Move audios related method to Audios model from Client model. You will need to update your code to handle this change, changing `client.translate` to `client.audios.translate` and `client.transcribe` to `client.audios.transcribe`
+- [BREAKING] Move audio related method to Audio model from Client model. You will need to update your code to handle this change, changing `client.translate` to `client.audio.translate` and `client.transcribe` to `client.audio.transcribe`.
 
 ## [4.3.2] - 2023-08-14
 
```
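The breaking change described in the CHANGELOG is purely a call-site rename. A minimal sketch of the migration, using stand-in classes so the snippet runs without an API key or the gem installed (`StubAudio`, `StubClient`, and the canned return values are illustrative only; the real gem performs HTTP requests):

```ruby
# Stand-ins illustrating the Audios -> Audio rename at call sites.
class StubAudio
  def translate(parameters: {})
    { "text" => "translated via #{parameters[:model]}" }
  end

  def transcribe(parameters: {})
    { "text" => "transcribed via #{parameters[:model]}" }
  end
end

class StubClient
  def audio
    @audio ||= StubAudio.new
  end
end

client = StubClient.new

# Old call sites (ruby-openai <= 4.3.2) looked like:
#   client.audios.translate(parameters: { model: "whisper-1", file: file })
# New call sites (>= 5.0.0) go through client.audio instead:
response = client.audio.translate(parameters: { model: "whisper-1" })
puts response["text"]
```

Only the accessor name changes; the `parameters:` hash passed to `translate`/`transcribe` is untouched by this release.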

Gemfile.lock (1 addition, 1 deletion)

```diff
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (4.3.2)
+    ruby-openai (5.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
 
```

README.md (2 additions, 2 deletions)

````diff
@@ -417,7 +417,7 @@ Whisper is a speech to text model that can be used to generate text based on aud
 The translations API takes as input the audio file in any of the supported languages and transcribes the audio into English.
 
 ```ruby
-response = client.audios.translate(
+response = client.audio.translate(
     parameters: {
         model: "whisper-1",
         file: File.open("path_to_file", "rb"),
@@ -431,7 +431,7 @@ puts response["text"]
 The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
 
 ```ruby
-response = client.audios.transcribe(
+response = client.audio.transcribe(
     parameters: {
         model: "whisper-1",
         file: File.open("path_to_file", "rb"),
````

lib/openai.rb (1 addition, 1 deletion)

```diff
@@ -7,7 +7,7 @@
 require_relative "openai/finetunes"
 require_relative "openai/images"
 require_relative "openai/models"
-require_relative "openai/audios"
+require_relative "openai/audio"
 require_relative "openai/version"
 
 module OpenAI
```

lib/openai/audios.rb renamed to lib/openai/audio.rb (1 addition, 1 deletion)

```diff
@@ -1,5 +1,5 @@
 module OpenAI
-  class Audios
+  class Audio
     def initialize(access_token: nil, organization_id: nil)
       OpenAI.configuration.access_token = access_token if access_token
       OpenAI.configuration.organization_id = organization_id if organization_id
```

lib/openai/client.rb (2 additions, 2 deletions)

```diff
@@ -57,8 +57,8 @@ def moderations(parameters: {})
     json_post(path: "/moderations", parameters: parameters)
   end
 
-  def audios
-    @audios ||= OpenAI::Audios.new
+  def audio
+    @audio ||= OpenAI::Audio.new
   end
 
   def azure?
```
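The renamed accessor keeps the same lazily-memoized pattern as before: `||=` builds the `OpenAI::Audio` instance on first use and caches it in an instance variable. A runnable sketch of that pattern, with simplified stand-in names (`DemoOpenAI`, the canned `transcribe` return value) that are not part of the real gem:

```ruby
# Sketch of the lazily-memoized accessor pattern used by Client#audio.
module DemoOpenAI
  class Audio
    def transcribe(parameters: {})
      # The real gem posts a multipart request here; this stub just echoes.
      "transcribed with #{parameters[:model]}"
    end
  end

  class Client
    # `||=` assigns only when @audio is nil or false, so every call after
    # the first returns the same cached Audio instance.
    def audio
      @audio ||= Audio.new
    end
  end
end

client = DemoOpenAI::Client.new
puts client.audio.equal?(client.audio)                          # same object both times
puts client.audio.transcribe(parameters: { model: "whisper-1" })
```

Because the memoization lives on the client, swapping `@audios` for `@audio` here is enough; no other state has to move.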

spec/fixtures/cassettes/audios_whisper-1_translate.yml renamed to spec/fixtures/cassettes/audio_whisper-1_transcribe.yml (8 additions, 8 deletions; generated file, diff not rendered by default)

spec/fixtures/cassettes/audios_whisper-1_transcribe.yml renamed to spec/fixtures/cassettes/audio_whisper-1_translate.yml (8 additions, 8 deletions; generated file, diff not rendered by default)

spec/openai/client/audios_spec.rb renamed to spec/openai/client/audio_spec.rb (5 additions, 5 deletions)

```diff
@@ -1,20 +1,20 @@
 RSpec.describe OpenAI::Client do
-  describe "#audios" do
+  describe "#audio" do
     describe "#transcribe" do
       context "with audio", :vcr do
         let(:filename) { "audio_sample.mp3" }
         let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
 
         let(:response) do
-          OpenAI::Client.new.audios.transcribe(
+          OpenAI::Client.new.audio.transcribe(
             parameters: {
               model: model,
               file: File.open(audio, "rb")
             }
           )
         end
         let(:content) { response["text"] }
-        let(:cassette) { "audios #{model} transcribe".downcase }
+        let(:cassette) { "audio #{model} transcribe".downcase }
 
         context "with model: whisper-1" do
           let(:model) { "whisper-1" }
@@ -34,15 +34,15 @@
         let(:audio) { File.join(RSPEC_ROOT, "fixtures/files", filename) }
 
         let(:response) do
-          OpenAI::Client.new.audios.translate(
+          OpenAI::Client.new.audio.translate(
             parameters: {
               model: model,
               file: File.open(audio, "rb")
             }
           )
         end
         let(:content) { response["text"] }
-        let(:cassette) { "audios #{model} translate".downcase }
+        let(:cassette) { "audio #{model} translate".downcase }
 
         context "with model: whisper-1" do
           let(:model) { "whisper-1" }
```
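The `cassette` rename in the spec explains the fixture renames above: the spec builds a human-readable cassette name, and VCR derives the `.yml` filename from it. A small sketch of that mapping (the underscore conversion is an assumption about this repo's VCR configuration, not something shown in the diff):

```ruby
# Sketch: how the spec's cassette description could map to the renamed
# fixture file spec/fixtures/cassettes/audio_whisper-1_transcribe.yml.
model = "whisper-1"
cassette = "audio #{model} transcribe".downcase  # matches the spec's let(:cassette)

# Assumed filename derivation: spaces become underscores, ".yml" is appended.
filename = cassette.tr(" ", "_") + ".yml"
puts filename # => audio_whisper-1_transcribe.yml
```

This is why renaming the describe block and cassette strings from "audios" to "audio" forces the matching renames of the two cassette fixtures.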
