[FLINK-39533][s3] Use abort() instead of drain on close/seek when remaining bytes exceed threshold in NativeS3InputStream #28012
Samrat002 wants to merge 2 commits into apache:master from
Conversation
cc: @gaborgsomogyi
```java
 */
private void releaseStream() {
    // Drop the wrapper without closing it; closing would trigger the drain path.
    bufferedStream = null;
```
What makes sure that system resources which are normally freed in close() will be handled properly?
In the revised approach, both bufferedStream.close() and currentStream.close() are still called. The abort() call placed before them terminates the underlying HTTP connection, so when BufferedInputStream.close() delegates to ResponseInputStream.close(), the connection is already dead, and no drain occurs. BufferedInputStream itself holds only a byte[] heap buffer with no native resources. The JVM GCs it upon dereferencing. The currentStream.close() call handles any remaining SDK resource cleanup (connection pool return, etc.) after the abort.
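The lifecycle described above can be modeled with a small stand-in class (FakeResponseStream, its field names, and its drain-on-close behavior are illustrative stand-ins for the AWS SDK's ResponseInputStream and Apache HttpClient, not the real types):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

// Stand-in for the SDK stream: close() drains the body unless abort() ran first.
class FakeResponseStream extends InputStream {
    boolean aborted = false;
    long drained = 0;
    long remaining;

    FakeResponseStream(long remaining) { this.remaining = remaining; }

    @Override public int read() { return remaining-- > 0 ? 0 : -1; }

    // Models ResponseInputStream.abort(): kills the connection, nothing left to drain.
    void abort() { aborted = true; remaining = 0; }

    // Models the HttpClient drain path: close() consumes the rest of the body.
    @Override public void close() {
        while (!aborted && remaining > 0) { drained++; remaining--; }
    }
}

public class ReleaseOrderDemo {
    public static void main(String[] args) throws IOException {
        FakeResponseStream current = new FakeResponseStream(1_000_000);
        BufferedInputStream buffered = new BufferedInputStream(current);
        // Release order described in the comment: abort first, then close.
        current.abort();
        buffered.close(); // delegates to current.close(); connection already dead
        System.out.println("drained=" + current.drained); // prints drained=0
    }
}
```

Without the abort() call, the same close() would have pulled all 1,000,000 remaining bytes through the drain loop.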
Can we test that somehow? I mean missing this can cause quite some leaks
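One way to test it, sketched with a hand-rolled spy (all names hypothetical): assert that the release path still invokes close() on the SDK stream, so pooled resources are returned, and that zero bytes were drained beforehand.

```java
import java.io.IOException;
import java.io.InputStream;

// Hand-rolled spy: records whether abort() and close() ran, and counts the
// bytes a drain would have pulled over the network.
class SpyStream extends InputStream {
    boolean aborted, closed;
    long drained;
    long remaining = 1_000_000;

    void abort() { aborted = true; remaining = 0; }

    @Override public int read() { return remaining-- > 0 ? 0 : -1; }

    @Override public void close() {
        while (!aborted && remaining > 0) { drained++; remaining--; }
        closed = true;
    }
}

public class LeakCheck {
    public static void main(String[] args) throws IOException {
        SpyStream spy = new SpyStream();
        // Release path under test: abort first, then close for pool cleanup.
        spy.abort();
        spy.close();
        // close() still ran (no leaked resources) and nothing was drained.
        System.out.println(spy.closed + " " + spy.drained); // prints: true 0
    }
}
```

A real regression test would inject such a spy behind the ResponseInputStream seam and run it through seek()/close() on a large object.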
```java
byte[] tail = new byte[20];
assertThat(in.read(tail, 0, 20)).isEqualTo(6);
assertThat(in.getPos()).isEqualTo(256);
// read past EOF
```
AFAIK Hadoop throws in such a case, doesn't it?
Hadoop's S3AInputStream.seek() throws EOFException for negative positions with the message "Cannot seek to a negative offset". Here the implementation throws IOException with the message "Cannot seek to negative position: ", which matches the Hadoop contract, since EOFException is a subclass of IOException. The test verifies isInstanceOf(IOException.class), so it covers both.
I mean more that it throws EOFException when in.read() is called but there is no data.
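For context on this point: the base java.io.InputStream contract returns -1 at EOF rather than throwing, and as far as I know Hadoop's EOFException-on-read comes from the readFully/seek paths, not from plain read(). A quick sketch of the plain-read contract:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class EofContractDemo {
    public static void main(String[] args) throws Exception {
        // 6 bytes of data, as in the quoted test above.
        InputStream in = new ByteArrayInputStream(new byte[6]);
        byte[] buf = new byte[20];
        System.out.println(in.read(buf, 0, 20)); // prints 6: short read
        System.out.println(in.read(buf, 0, 20)); // prints -1: EOF, no exception
    }
}
```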
```java
if (bufferedStream != null) {
    try {
        bufferedStream.close();
    } catch (IOException e) {
        LOG.warn("Error closing buffered stream for {}/{}", bucketName, key, e);
    } finally {
        bufferedStream = null;
    }
}
if (currentStream != null) {
    try {
        currentStream.close();
    } catch (IOException e) {
        LOG.warn("Error closing S3 response stream for {}/{}", bucketName, key, e);
    } finally {
        currentStream = null;
    }
}
```
While we're hanging around, can we collapse this into one or more functions? The same pattern appears further down with some tiny diffs.
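A sketch of one possible collapse (closeAndLog is a hypothetical name; returning null lets the caller clear its field in the same assignment):

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;

public class CloseHelperSketch {
    // Hypothetical helper: close quietly, log on failure, always return null
    // so the caller can clear its field in one assignment.
    static <T extends Closeable> T closeAndLog(T stream, String what) {
        if (stream != null) {
            try {
                stream.close();
            } catch (IOException e) {
                System.err.println("Error closing " + what + ": " + e.getMessage());
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ByteArrayInputStream buffered = new ByteArrayInputStream(new byte[0]);
        // Each duplicated try/catch/finally block becomes a single line:
        buffered = closeAndLog(buffered, "buffered stream");
        System.out.println(buffered); // prints: null
    }
}
```

In the quoted code above, both blocks would then reduce to one assignment each, with the bucket/key context folded into the description string.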
```java
 * @see ResponseInputStream#abort()
 */
private void abortCurrentStream() {
    if (currentStream != null) {
```
If currentStream is guarded, then the function itself must take the guard. Otherwise simply moving a lock at an upper call site will break things silently.
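The point about guarding inside the function can be sketched as follows (the lock field, its type, and the method body are hypothetical; a reentrant lock keeps the function safe even when a caller already holds it):

```java
import java.util.concurrent.locks.ReentrantLock;

public class GuardSketch {
    private final ReentrantLock lock = new ReentrantLock(); // hypothetical guard
    private AutoCloseable currentStream;

    // The function takes the guard itself, so moving or removing a lock at an
    // upper call site cannot silently leave currentStream unprotected.
    void abortCurrentStream() {
        lock.lock();
        try {
            if (currentStream != null) {
                // abort logic would go here
                currentStream = null;
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        GuardSketch s = new GuardSketch();
        s.abortCurrentStream(); // safe with or without an outer lock held
        System.out.println("ok");
    }
}
```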
What is the purpose of the change
NativeS3InputStream calls ResponseInputStream.close() when releasing streams during seek(), skip(), and close() operations. Apache HttpClient's close() implementation drains all remaining bytes from the response body to enable HTTP connection reuse. For large S3 objects where only a small portion was read (e.g., checkpoint metadata from a multi-GB state file), this drains potentially gigabytes of data over the network, causing severe latency during checkpoint restore and seek-heavy read patterns.
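The threshold mentioned in the title reduces to a small policy decision; a sketch under assumed names (DRAIN_THRESHOLD, its value, and shouldAbort are hypothetical, not the PR's actual constants):

```java
public class ReleasePolicy {
    // Hypothetical cutoff: below this, draining is cheaper than losing the
    // pooled connection; above it, abort() avoids the network transfer.
    static final long DRAIN_THRESHOLD = 64 * 1024;

    // Decide whether to abort (kill the connection) or close (drain and
    // return the connection to the pool) based on unread bytes.
    static boolean shouldAbort(long contentLength, long position) {
        return contentLength - position > DRAIN_THRESHOLD;
    }

    public static void main(String[] args) {
        // A few KB read from a 1 GiB object: abort, don't drain gigabytes.
        System.out.println(shouldAbort(1L << 30, 4096)); // prints: true
        // Nearly fully consumed small object: drain and reuse the connection.
        System.out.println(shouldAbort(8192, 8000));     // prints: false
    }
}
```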
The AWS SDK v2 ResponseInputStream JavaDoc explicitly recommends calling abort() when the remaining data is not needed. This PR replaces close() with abort() in the stream release path.

Brief change log
- Added a releaseStream() method to NativeS3InputStream that calls abort() instead of close() on the underlying ResponseInputStream, and drops the BufferedInputStream wrapper without closing it (closing would delegate to the drain path)
- openStreamAtCurrentPosition() and close() now use releaseStream() for stream cleanup
- Added NativeS3InputStreamTest with 8 tests covering abort lifecycle, data correctness, position tracking, and error paths

Verifying this change
This change added tests and can be verified as follows:
- Unit tests in NativeS3InputStreamTest
- Manually validated end-to-end on a local Flink 2.3-SNAPSHOT cluster with a stateful job writing checkpoints (up to 199 MB) to S3, triggering a savepoint, restoring from it, and confirming that checkpoints completed successfully after the restore with zero S3/stream errors
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no

Documentation
Was generative AI tooling used to co-author this PR?