
release: 2.0.0 #466

Open · wants to merge 9 commits into base `main`
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.6.1"
".": "2.0.0"
}
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 95
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-0ee6b36cf3cc278cef4199a6aec5f7d530a6c1f17a74830037e96d50ca1edc50.yml
openapi_spec_hash: e8ec5f46bc0655b34f292422d58a60f6
config_hash: d9b6b6e6bc85744663e300eebc482067
configured_endpoints: 99
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-794a6ed3c3d3d77887564755168056af8a426b17cf1ec721e3a300503dc22a41.yml
openapi_spec_hash: 25a81c220713cd5b0bafc221d1dfa79a
config_hash: 0b768ed1b56c6d82816f0fa40dc4aaf5
41 changes: 41 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,46 @@
# Changelog

## 2.0.0 (2025-05-10)

Full Changelog: [v1.6.1...v2.0.0](https://github.com/openai/openai-java/compare/v1.6.1...v2.0.0)

### ⚠ BREAKING CHANGES

* **client:** change precision of some numeric types
* **client:** extract auto pagination to shared classes
* **client:** **Migration:**
  - If you were referencing the `AutoPager` class on a specific `*Page` or `*PageAsync` type, reference the shared `AutoPager` and `AutoPagerAsync` types under the `core` package instead.
  - `AutoPagerAsync` now has different usage. You can call `.subscribe(...)` on the returned object to get called back for each page item. You can also call `onCompleteFuture()` to get a future that completes when all items have been processed. Finally, you can call `.close()` on the returned object to stop auto-paginating early.
  - If you were referencing `getNextPage` or `getNextPageParams`:
    - Swap to `nextPage()` and `nextPageParams()`.
    - Note that both now return non-optional types (call `hasNextPage()` before calling them, since they throw if it's impossible to get another page).
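The `getNextPage()` → `hasNextPage()`/`nextPage()` change above can be sketched with a minimal stand-in pager. The `Page` class below is hypothetical, not one of the SDK's `*Page` types; it only mirrors the new method shapes:

```java
import java.util.List;

// Hypothetical stand-in for an SDK *Page type, illustrating the 2.0.0 contract:
// nextPage() returns a non-optional page and throws when none exists, so callers
// must check hasNextPage() first.
class Page {
    private final List<String> items;
    private final Page next;

    Page(List<String> items, Page next) {
        this.items = items;
        this.next = next;
    }

    List<String> items() {
        return items;
    }

    boolean hasNextPage() {
        return next != null;
    }

    Page nextPage() {
        if (next == null) {
            throw new IllegalStateException("no next page; call hasNextPage() first");
        }
        return next;
    }
}

public class MigrationSketch {
    public static void main(String[] args) {
        Page page = new Page(List.of("job-1", "job-2"), new Page(List.of("job-3"), null));

        // Old pattern: page = page.getNextPage().orElse(null) until null.
        // New pattern: loop while hasNextPage(), then call nextPage().
        while (true) {
            for (String item : page.items()) {
                System.out.println(item);
            }
            if (!page.hasNextPage()) {
                break;
            }
            page = page.nextPage();
        }
    }
}
```

The same loop shape is what the manual-pagination section of the README now documents.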

### Features

* **api:** Add reinforcement fine-tuning api support ([d243892](https://github.com/openai/openai-java/commit/d2438923c2f53a76879464ab3816732b5c4b5718))
* **client:** allow providing some params positionally ([7200cf6](https://github.com/openai/openai-java/commit/7200cf61d31fcc16b15d01cd83d3a0bcc53eba4d))
* **client:** extract auto pagination to shared classes ([f623bca](https://github.com/openai/openai-java/commit/f623bcac1e66ed15f8ba6c89375468b764cb900f))


### Bug Fixes

* add missing `deploymentModel` params ([bb85d0d](https://github.com/openai/openai-java/commit/bb85d0d1a899b3981f5c1f818dc4200939cb571d))
* merge conflict ([4587737](https://github.com/openai/openai-java/commit/458773748bba0efefce9e67d17b8d2879338cb61))


### Chores

* **internal:** fix custom code ([1da6c92](https://github.com/openai/openai-java/commit/1da6c92fc964837aec19a3688ffb9a1089b3d91c))


### Documentation

* remove or fix invalid readme examples ([4bf868a](https://github.com/openai/openai-java/commit/4bf868a717f9f782cd2f288bffc74bb24b2bb0e7))


### Refactors

* **client:** change precision of some numeric types ([6cdb671](https://github.com/openai/openai-java/commit/6cdb6717e047e96f9f4186bec3beca0744f27a3a))

## 1.6.1 (2025-05-08)

Full Changelog: [v1.6.0...v1.6.1](https://github.com/openai/openai-java/compare/v1.6.0...v1.6.1)
106 changes: 74 additions & 32 deletions README.md
@@ -2,16 +2,16 @@

<!-- x-release-please-start-version -->

[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/1.6.1)
[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/1.6.1/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/1.6.1)
[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/2.0.0)
[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/2.0.0/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/2.0.0)

<!-- x-release-please-end -->

The OpenAI Java SDK provides convenient access to the [OpenAI REST API](https://platform.openai.com/docs) from applications written in Java.

<!-- x-release-please-start-version -->

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/1.6.1).
The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/2.0.0).

<!-- x-release-please-end -->

@@ -22,7 +22,7 @@ The REST API documentation can be found on [platform.openai.com](https://platfor
### Gradle

```kotlin
implementation("com.openai:openai-java:1.6.1")
implementation("com.openai:openai-java:2.0.0")
```

### Maven
@@ -31,7 +31,7 @@ implementation("com.openai:openai-java:1.6.1")
<dependency>
<groupId>com.openai</groupId>
<artifactId>openai-java</artifactId>
<version>1.6.1</version>
<version>2.0.0</version>
</dependency>
```

@@ -412,10 +412,7 @@ These methods return [`HttpResponse`](openai-java-core/src/main/kotlin/com/opena
import com.openai.core.http.HttpResponse;
import com.openai.models.files.FileContentParams;

FileContentParams params = FileContentParams.builder()
.fileId("file_id")
.build();
HttpResponse response = client.files().content(params);
HttpResponse response = client.files().content("file_id");
```

To save the response content to a file, use the [`Files.copy(...)`](https://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html#copy-java.io.InputStream-java.nio.file.Path-java.nio.file.CopyOption...-) method:
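As a self-contained sketch of that pattern, using an in-memory stream as a stand-in for the response body (it is an assumption here that the SDK exposes the content as an `InputStream`):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SaveResponseToFile {
    public static void main(String[] args) throws Exception {
        // Stand-in for the HTTP response content; with the real SDK this would
        // come from the response object rather than a byte array.
        InputStream body = new ByteArrayInputStream("file contents".getBytes());

        // Files.copy streams the InputStream to disk without buffering the
        // whole body in memory.
        Path target = Files.createTempFile("openai-download", ".txt");
        Files.copy(body, target, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(Files.readString(target)); // prints: file contents
    }
}
```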
@@ -528,53 +525,101 @@ The SDK throws custom unchecked exception types:

## Pagination

For methods that return a paginated list of results, this library provides convenient ways to access the results either one page at a time, or item-by-item across all pages.
The SDK defines methods that return paginated lists of results. It provides convenient ways to access the results either one page at a time or item-by-item across all pages.

### Auto-pagination

To iterate through all results across all pages, you can use `autoPager`, which automatically handles fetching more pages for you:
To iterate through all results across all pages, use the `autoPager()` method, which automatically fetches more pages as needed.

### Synchronous
When using the synchronous client, the method returns an [`Iterable`](https://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html):

```java
import com.openai.models.finetuning.jobs.FineTuningJob;
import com.openai.models.finetuning.jobs.JobListPage;

// As an Iterable:
JobListPage page = client.fineTuning().jobs().list(params);
JobListPage page = client.fineTuning().jobs().list();

// Process as an Iterable
for (FineTuningJob job : page.autoPager()) {
System.out.println(job);
};
}

// As a Stream:
client.fineTuning().jobs().list(params).autoPager().stream()
// Process as a Stream
page.autoPager()
.stream()
.limit(50)
.forEach(job -> System.out.println(job));
```

### Asynchronous
When using the asynchronous client, the method returns an [`AsyncStreamResponse`](openai-java-core/src/main/kotlin/com/openai/core/http/AsyncStreamResponse.kt):

```java
// Using forEach, which returns CompletableFuture<Void>:
asyncClient.fineTuning().jobs().list(params).autoPager()
.forEach(job -> System.out.println(job), executor);
import com.openai.core.http.AsyncStreamResponse;
import com.openai.models.finetuning.jobs.FineTuningJob;
import com.openai.models.finetuning.jobs.JobListPageAsync;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

CompletableFuture<JobListPageAsync> pageFuture = client.async().fineTuning().jobs().list();

pageFuture.thenAccept(page -> page.autoPager().subscribe(job -> {
System.out.println(job);
}));

// If you need to handle errors or completion of the stream
pageFuture.thenAccept(page -> page.autoPager().subscribe(new AsyncStreamResponse.Handler<>() {
@Override
public void onNext(FineTuningJob job) {
System.out.println(job);
}

@Override
public void onComplete(Optional<Throwable> error) {
if (error.isPresent()) {
System.out.println("Something went wrong!");
throw new RuntimeException(error.get());
} else {
System.out.println("No more!");
}
}
}));

// Or use futures
pageFuture.thenAccept(page -> page.autoPager()
.subscribe(job -> {
System.out.println(job);
})
.onCompleteFuture()
.whenComplete((unused, error) -> {
if (error != null) {
System.out.println("Something went wrong!");
throw new RuntimeException(error);
} else {
System.out.println("No more!");
}
}));
```

### Manual pagination

If none of the above helpers meet your needs, you can also manually request pages one-by-one. A page of results has a `data()` method to fetch the list of objects, as well as top-level `response` and other methods to fetch top-level data about the page. It also has `hasNextPage`, `getNextPage`, and `getNextPageParams` methods to help with pagination.
To access individual page items and manually request the next page, use the `items()`,
`hasNextPage()`, and `nextPage()` methods:

```java
import com.openai.models.finetuning.jobs.FineTuningJob;
import com.openai.models.finetuning.jobs.JobListPage;

JobListPage page = client.fineTuning().jobs().list(params);
while (page != null) {
for (FineTuningJob job : page.data()) {
JobListPage page = client.fineTuning().jobs().list();
while (true) {
for (FineTuningJob job : page.items()) {
System.out.println(job);
}

page = page.getNextPage().orElse(null);
if (!page.hasNextPage()) {
break;
}

page = page.nextPage();
}
```

@@ -657,9 +702,7 @@ Requests time out after 10 minutes by default.
To set a custom timeout, configure the method call using the `timeout` method:

```java
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletion chatCompletion = client.chat().completions().create(
params, RequestOptions.builder().timeout(Duration.ofSeconds(30)).build()
@@ -775,11 +818,12 @@ To set a documented parameter or property to an undocumented or not yet supporte

```java
import com.openai.core.JsonValue;
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletionCreateParams params = ChatCompletionCreateParams.builder()
.addUserMessage("Say this is a test")
.model(JsonValue.from(42))
.messages(JsonValue.from(42))
.model(ChatModel.GPT_4_1)
.build();
```

@@ -909,9 +953,7 @@ ChatCompletion chatCompletion = client.chat().completions().create(params).validate();
Or configure the method call to validate the response using the `responseValidation` method:

```java
import com.openai.models.ChatModel;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

ChatCompletion chatCompletion = client.chat().completions().create(
params, RequestOptions.builder().responseValidation(true).build()
2 changes: 1 addition & 1 deletion build.gradle.kts
@@ -8,7 +8,7 @@ repositories {

allprojects {
group = "com.openai"
version = "1.6.1" // x-release-please-version
version = "2.0.0" // x-release-please-version
}

subprojects {
@@ -11,6 +11,7 @@ import com.openai.services.blocking.EmbeddingService
import com.openai.services.blocking.EvalService
import com.openai.services.blocking.FileService
import com.openai.services.blocking.FineTuningService
import com.openai.services.blocking.GraderService
import com.openai.services.blocking.ImageService
import com.openai.services.blocking.ModelService
import com.openai.services.blocking.ModerationService
@@ -65,6 +66,8 @@ interface OpenAIClient {

fun fineTuning(): FineTuningService

fun graders(): GraderService

fun vectorStores(): VectorStoreService

fun beta(): BetaService
@@ -111,6 +114,8 @@ interface OpenAIClient {

fun fineTuning(): FineTuningService.WithRawResponse

fun graders(): GraderService.WithRawResponse

fun vectorStores(): VectorStoreService.WithRawResponse

fun beta(): BetaService.WithRawResponse
@@ -11,6 +11,7 @@ import com.openai.services.async.EmbeddingServiceAsync
import com.openai.services.async.EvalServiceAsync
import com.openai.services.async.FileServiceAsync
import com.openai.services.async.FineTuningServiceAsync
import com.openai.services.async.GraderServiceAsync
import com.openai.services.async.ImageServiceAsync
import com.openai.services.async.ModelServiceAsync
import com.openai.services.async.ModerationServiceAsync
@@ -65,6 +66,8 @@ interface OpenAIClientAsync {

fun fineTuning(): FineTuningServiceAsync

fun graders(): GraderServiceAsync

fun vectorStores(): VectorStoreServiceAsync

fun beta(): BetaServiceAsync
@@ -111,6 +114,8 @@ interface OpenAIClientAsync {

fun fineTuning(): FineTuningServiceAsync.WithRawResponse

fun graders(): GraderServiceAsync.WithRawResponse

fun vectorStores(): VectorStoreServiceAsync.WithRawResponse

fun beta(): BetaServiceAsync.WithRawResponse
@@ -22,6 +22,8 @@ import com.openai.services.async.FileServiceAsync
import com.openai.services.async.FileServiceAsyncImpl
import com.openai.services.async.FineTuningServiceAsync
import com.openai.services.async.FineTuningServiceAsyncImpl
import com.openai.services.async.GraderServiceAsync
import com.openai.services.async.GraderServiceAsyncImpl
import com.openai.services.async.ImageServiceAsync
import com.openai.services.async.ImageServiceAsyncImpl
import com.openai.services.async.ModelServiceAsync
@@ -84,6 +86,10 @@ class OpenAIClientAsyncImpl(private val clientOptions: ClientOptions) : OpenAICl
FineTuningServiceAsyncImpl(clientOptionsWithUserAgent)
}

private val graders: GraderServiceAsync by lazy {
GraderServiceAsyncImpl(clientOptionsWithUserAgent)
}

private val vectorStores: VectorStoreServiceAsync by lazy {
VectorStoreServiceAsyncImpl(clientOptionsWithUserAgent)
}
@@ -126,6 +132,8 @@ class OpenAIClientAsyncImpl(private val clientOptions: ClientOptions) : OpenAICl

override fun fineTuning(): FineTuningServiceAsync = fineTuning

override fun graders(): GraderServiceAsync = graders

override fun vectorStores(): VectorStoreServiceAsync = vectorStores

override fun beta(): BetaServiceAsync = beta
@@ -179,6 +187,10 @@ class OpenAIClientAsyncImpl(private val clientOptions: ClientOptions) : OpenAICl
FineTuningServiceAsyncImpl.WithRawResponseImpl(clientOptions)
}

private val graders: GraderServiceAsync.WithRawResponse by lazy {
GraderServiceAsyncImpl.WithRawResponseImpl(clientOptions)
}

private val vectorStores: VectorStoreServiceAsync.WithRawResponse by lazy {
VectorStoreServiceAsyncImpl.WithRawResponseImpl(clientOptions)
}
@@ -221,6 +233,8 @@ class OpenAIClientAsyncImpl(private val clientOptions: ClientOptions) : OpenAICl

override fun fineTuning(): FineTuningServiceAsync.WithRawResponse = fineTuning

override fun graders(): GraderServiceAsync.WithRawResponse = graders

override fun vectorStores(): VectorStoreServiceAsync.WithRawResponse = vectorStores

override fun beta(): BetaServiceAsync.WithRawResponse = beta