Commit 03e76a7

Merge pull request alexrudall#526 from alexrudall/7.2.0
7.2.0
2 parents 5fcf898 + 5505307 commit 03e76a7

File tree

6 files changed · +14933 −6 lines changed

CHANGELOG.md

Lines changed: 8 additions & 0 deletions

```diff
@@ -5,6 +5,14 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [7.2.0] - 2024-10-10
+
+### Added
+
+- Add ability to pass parameters to Files#list endpoint - thanks to [@parterburn](https://github.com/parterburn)!
+- Add Velvet observability platform to README - thanks to [@philipithomas](https://github.com/philipithomas)
+- Add Assistants::Messages#delete endpoint - thanks to [@mochetts](https://github.com/mochetts)!
+
 ## [7.1.0] - 2024-06-10
 
 ### Added
```
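One of the additions above lets Files#list accept parameters. As a stdlib-only illustration (not the gem's code), passing list parameters amounts to appending a query string to the endpoint path; the parameter values below are hypothetical examples:

```ruby
require "uri"

# Illustrative only: how list parameters become a query string on the path.
params = { purpose: "fine-tune", limit: 100 }
query = URI.encode_www_form(params)
path = "/v1/files?#{query}"
puts path
```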

Gemfile.lock

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (7.1.0)
+    ruby-openai (7.2.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
```

README.md

Lines changed: 12 additions & 3 deletions
````diff
@@ -139,7 +139,9 @@ client = OpenAI::Client.new(access_token: "access_token_goes_here")
 
 #### Custom timeout or base URI
 
-The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to the `request_timeout` when initializing the client. You can also change the base URI used for all requests, eg. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code) or [Velvet](https://docs.usevelvet.com/docs/getting-started), and add arbitrary other headers e.g. for [openai-caching-proxy-worker](https://github.com/6/openai-caching-proxy-worker):
+- The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to the `request_timeout` when initializing the client.
+- You can also change the base URI used for all requests, eg. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code) or [Velvet](https://docs.usevelvet.com/docs/getting-started)
+- You can also add arbitrary other headers e.g. for [openai-caching-proxy-worker](https://github.com/6/openai-caching-proxy-worker), eg.:
 
 ```ruby
 client = OpenAI::Client.new(
````
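The hunk above splits one paragraph into three bullets: `request_timeout`, a custom base URI, and extra headers. A hedged, stdlib-only sketch of what the base-URI and extra-header options amount to when a request is assembled — the proxy URL and header value are illustrative, and this is not the gem's implementation:

```ruby
require "uri"

# Illustrative sketch: a custom base URI is prepended to each endpoint
# path, and extra headers are merged into the defaults before sending.
uri_base = "https://your-proxy.example.com/v1" # e.g. an observability proxy
path = "/chat/completions"
default_headers = { "Content-Type" => "application/json" }
extra_headers = { "X-Proxy-TTL" => "43200" } # e.g. for openai-caching-proxy-worker

full_uri = URI.parse(uri_base + path)
headers = default_headers.merge(extra_headers)
puts full_uri
puts headers["X-Proxy-TTL"]
```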
````diff
@@ -326,7 +328,7 @@ client.chat(
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```
 
-Note: In order to get usage information, you can provide the [`stream_options` parameter](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options) and OpenAI will provide a final chunk with the usage. Here is an example:
+Note: In order to get usage information, you can provide the [`stream_options` parameter](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options) and OpenAI will provide a final chunk with the usage. Here is an example:
 
 ```ruby
 stream_proc = proc { |chunk, _bytesize| puts "--------------"; puts chunk.inspect; }
````
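The note in the hunk above says OpenAI appends a final chunk carrying usage when `stream_options` requests it. A minimal stdlib-only sketch of how a stream proc might capture that chunk — the payload here is simulated, not a recorded API response:

```ruby
require "json"

# A proc that remembers the usage object when a chunk carries one.
usage = nil
stream_proc = proc do |chunk, _bytesize|
  usage = chunk["usage"] if chunk["usage"]
end

# Simulated final chunk of the kind described above; values are illustrative.
final_chunk = JSON.parse(
  '{"choices": [], "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}}'
)
stream_proc.call(final_chunk, 120)
puts usage["total_tokens"]
```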
````diff
@@ -547,9 +549,11 @@ puts response.dig("data", 0, "embedding")
 ```
 
 ### Batches
+
 The Batches endpoint allows you to create and manage large batches of API requests to run asynchronously. Currently, the supported endpoints for batches are `/v1/chat/completions` (Chat Completions API) and `/v1/embeddings` (Embeddings API).
 
 To use the Batches endpoint, you need to first upload a JSONL file containing the batch requests using the Files endpoint. The file must be uploaded with the purpose set to `batch`. Each line in the JSONL file represents a single request and should have the following format:
+
 ```json
 {
   "custom_id": "request-1",
````
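Each batch request described in the hunk above is one JSON object per line of the `.jsonl` file. A stdlib-only sketch of composing such a line — the model name and message content are illustrative:

```ruby
require "json"

# Build one line of a batch .jsonl file in the format described above.
request = {
  "custom_id" => "request-1",
  "method"    => "POST",
  "url"       => "/v1/chat/completions",
  "body"      => {
    "model"    => "gpt-4o-mini",
    "messages" => [{ "role" => "user", "content" => "Hello!" }]
  }
}
line = JSON.generate(request) # each request occupies exactly one line
puts JSON.parse(line)["custom_id"]
```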
````diff
@@ -633,7 +637,9 @@ These files are in JSONL format, with each line representing the output or error
 If a request fails with a non-HTTP error, the error object will contain more information about the cause of the failure.
 
 ### Files
+
 #### For fine-tuning purposes
+
 Put your data in a `.jsonl` file like this:
 
 ```json
````
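A fine-tuning `.jsonl` file likewise holds one JSON object per line. A stdlib-only sketch of one chat-format training line — the messages are made-up examples:

```ruby
require "json"

# One illustrative line of a fine-tuning .jsonl file (chat format).
example = {
  "messages" => [
    { "role" => "system", "content" => "You are a helpful assistant." },
    { "role" => "user", "content" => "What is 2 + 2?" },
    { "role" => "assistant", "content" => "4" }
  ]
}
line = JSON.generate(example)
puts JSON.parse(line)["messages"].length
```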
````diff
@@ -666,7 +672,6 @@ my_file = File.open("path/to/file.pdf", "rb")
 client.files.upload(parameters: { file: my_file, purpose: "assistants" })
 ```
 
-
 See supported file types on [API documentation](https://platform.openai.com/docs/assistants/tools/file-search/supported-files).
 
 ### Finetunes
````
````diff
@@ -722,6 +727,7 @@ client.finetunes.list_events(id: fine_tune_id)
 ```
 
 ### Vector Stores
+
 Vector Store objects give the File Search tool the ability to search your files.
 
 You can create a new vector store:
````
````diff
@@ -767,6 +773,7 @@ client.vector_stores.delete(id: vector_store_id)
 ```
 
 ### Vector Store Files
+
 Vector store files represent files inside a vector store.
 
 You can create a new vector store file by attaching a File to a vector store.
````
````diff
@@ -805,9 +812,11 @@ client.vector_store_files.delete(
   id: vector_store_file_id
 )
 ```
+
 Note: This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
 
 ### Vector Store File Batches
+
 Vector store file batches represent operations to add multiple files to a vector store.
 
 You can create a new vector store file batch by attaching multiple Files to a vector store.
````

lib/openai/version.rb

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "7.1.0".freeze
+  VERSION = "7.2.0".freeze
 end
```

0 commit comments