What happened?
This affects versions 2.35-2.47, though it is most severe in 2.44-2.47, where #26520 causes guaranteed exceptions.
When BigQuery returns an exception while we attempt to append to a Storage Write API stream, we retry the current append on a new stream. In doing so, we abandon the existing stream, so any messages already appended to that stream but not yet committed never reach BigQuery, resulting in data consistency issues (data loss).
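The failure mode can be sketched roughly as follows. This is a hypothetical, simplified illustration of the retry pattern described above, not the actual Beam sink code; `WriteStream`, `StreamFactory`, and `writeBatches` are made-up names, and the finalize/commit step is a simplification of how appended rows become visible:

public class AbandonedStreamSketch {
  /** Hypothetical stand-in for a Storage Write API stream. */
  interface WriteStream {
    void append(java.util.List<String> rows) throws Exception; // may fail transiently
    void finalizeAndCommit() throws Exception;                  // makes appended rows visible
  }

  interface StreamFactory {
    WriteStream newStream();
  }

  static void writeBatches(StreamFactory factory, java.util.List<java.util.List<String>> batches)
      throws Exception {
    WriteStream stream = factory.newStream();
    for (java.util.List<String> batch : batches) {
      try {
        stream.append(batch);
      } catch (Exception e) {
        // The current batch is retried on a brand-new stream, but the old stream is
        // dropped without being finalized/committed: every batch already appended to
        // it since the last commit never becomes visible in BigQuery.
        stream = factory.newStream();
        stream.append(batch);
      }
    }
    // Only rows appended to the stream we ended up with are committed.
    stream.finalizeAndCommit();
  }
}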
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StorageApiAbandonedStreamRepro {
  public static void main(String[] args) {
    BigQueryOptions options = PipelineOptionsFactory.fromArgs(args).create().as(BigQueryOptions.class);
    // Lower the record-count threshold so appends happen after only a few records.
    options.setStorageApiAppendThresholdRecordCount(5);
    Pipeline p = Pipeline.create(options);
    p.apply("ReadLines", TextIO.read().from("gs://apache-beam-samples/shakespeare/kinglear.txt"))
        .apply("Save Events To BigQuery", BigQueryIO.<String>write()
            .to("google.com:clouddfe:reprodataset.reprotable")
            .withFormatFunction(s -> new TableRow().set("words", s))
            .withMethod(Write.Method.STORAGE_WRITE_API)
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
    p.run();
  }
}
The snippet above reproduces the issue on versions 2.44-2.47.
Issue Priority
Priority: 1 (data loss / total loss of function)
Issue Components
- Component: Python SDK
- Component: Java SDK
- Component: Go SDK
- Component: Typescript SDK
- Component: IO connector
- Component: Beam examples
- Component: Beam playground
- Component: Beam katas
- Component: Website
- Component: Spark Runner
- Component: Flink Runner
- Component: Samza Runner
- Component: Twister2 Runner
- Component: Hazelcast Jet Runner
- Component: Google Cloud Dataflow Runner