Class StreamsBuilder
StreamsBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology.
The processing logic (Topology) must be defined in a deterministic way: the order in which all operators are added must be predictable and identical across all application instances, since two topologies are only identical if all operators were added in the same order. If different KafkaStreams instances of the same application build different topologies, the result may be incompatible runtime code and unexpected results or errors.
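For illustration, a minimal sketch of the intended usage; the topic names, application id, and bootstrap server below are hypothetical placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    StreamsBuilder builder = new StreamsBuilder();
    // Operators must be added in the same order on every application instance.
    builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
           .mapValues(value -> value.toUpperCase())
           .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));
    Topology topology = builder.build();

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    KafkaStreams streams = new KafkaStreams(topology, props);
    streams.start();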
Constructor Summary
Constructors
StreamsBuilder()
    Create a StreamsBuilder instance.
StreamsBuilder(TopologyConfig topologyConfigs)
    Create a StreamsBuilder instance with the given topology configs.
-
Method Summary
<KIn,VIn> StreamsBuilder  addGlobalStore(StoreBuilder<?> storeBuilder, String topic, Consumed<KIn,VIn> consumed, ProcessorSupplier<KIn,VIn,Void,Void> stateUpdateSupplier)
    Adds a global StateStore to the topology.
StreamsBuilder  addStateStore(StoreBuilder<?> builder)
    Adds a state store to the underlying Topology.
Topology  build()
    Returns the Topology that represents the specified processing logic.
Topology  build(Properties props)
    Returns the Topology that represents the specified processing logic and accepts a Properties instance used to indicate whether to optimize the topology or not.
<K,V> GlobalKTable<K,V>  globalTable(String topic)
    Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V>  globalTable(String topic, Consumed<K,V> consumed)
    Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V>  globalTable(String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
    Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V>  globalTable(String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
    Create a GlobalKTable for the specified topic.
<K,V> KStream<K,V>  stream(String topic)
    Create a KStream from the specified topic.
<K,V> KStream<K,V>  stream(String topic, Consumed<K,V> consumed)
    Create a KStream from the specified topic.
<K,V> KStream<K,V>  stream(Collection<String> topics)
    Create a KStream from the specified topics.
<K,V> KStream<K,V>  stream(Collection<String> topics, Consumed<K,V> consumed)
    Create a KStream from the specified topics.
<K,V> KStream<K,V>  stream(Pattern topicPattern)
    Create a KStream from the specified topic pattern.
<K,V> KStream<K,V>  stream(Pattern topicPattern, Consumed<K,V> consumed)
    Create a KStream from the specified topic pattern.
<K,V> KTable<K,V>  table(String topic)
    Create a KTable for the specified topic.
<K,V> KTable<K,V>  table(String topic, Consumed<K,V> consumed)
    Create a KTable for the specified topic.
<K,V> KTable<K,V>  table(String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
    Create a KTable for the specified topic.
<K,V> KTable<K,V>  table(String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
    Create a KTable for the specified topic.
-
Constructor Details
-
StreamsBuilder
public StreamsBuilder() -
StreamsBuilder
public StreamsBuilder(TopologyConfig topologyConfigs)
Create a StreamsBuilder instance.
- Parameters:
topologyConfigs - the streams configs that apply at the topology level. Please refer to TopologyConfig for more detail.
-
-
Method Details
-
stream
public <K,V> KStream<K,V> stream(String topic)
Create a KStream from the specified topic. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used.
Note that the specified input topic must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topic - the topic name; cannot be null
- Returns:
- a KStream for the specified topic
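A minimal sketch, assuming a hypothetical topic name and default serdes configured in the application config:

    // Default key/value serdes from the StreamsConfig are used.
    KStream<String, String> textLines = builder.stream("plain-text-input");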
-
stream
public <K,V> KStream<K,V> stream(String topic, Consumed<K,V> consumed)
Create a KStream from the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers, as defined by the options in Consumed, are used.
Note that the specified input topic must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
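A sketch of this overload, assuming a hypothetical topic with String keys and Long values and an explicit timestamp extractor:

    KStream<String, Long> metrics = builder.stream(
        "metrics-input",
        Consumed.with(Serdes.String(), Serdes.Long())
                .withTimestampExtractor(new WallclockTimestampExtractor()));
-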
stream
public <K,V> KStream<K,V> stream(Collection<String> topics)
Create a KStream from the specified topics. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topics - the topic names; must contain at least one topic name
- Returns:
- a KStream for the specified topics
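For example, merging two hypothetical topics into one stream (records from different topics are interleaved with no ordering guarantee):

    KStream<String, String> merged =
        builder.stream(List.of("events-a", "events-b"));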
-
stream
public <K,V> KStream<K,V> stream(Collection<String> topics, Consumed<K,V> consumed)
Create a KStream from the specified topics. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers, as defined by the options in Consumed, are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
-
stream
public <K,V> KStream<K,V> stream(Pattern topicPattern)
Create a KStream from the specified topic pattern. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them, and there is no ordering guarantee between records from different topics. This also means that the work will not be parallelized for multiple topics, and the number of tasks will scale with the maximum partition count of any matching topic rather than the total number of partitions across all topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topicPattern - the pattern to match for topic names
- Returns:
- a KStream for topics matching the regex pattern
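For example, subscribing to every topic that matches a hypothetical naming convention:

    // Reads from all topics matching the pattern, e.g. "orders-eu" and "orders-us".
    KStream<String, String> allOrders = builder.stream(Pattern.compile("orders-.*"));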
-
stream
public <K,V> KStream<K,V> stream(Pattern topicPattern, Consumed<K,V> consumed)
Create a KStream from the specified topic pattern. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers, as defined by the options in Consumed, are used.
If multiple topics are matched by the specified pattern, the created KStream will read data from all of them, and there is no ordering guarantee between records from different topics. This also means that the work will not be parallelized for multiple topics, and the number of tasks will scale with the maximum partition count of any matching topic rather than the total number of partitions across all topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
-
table
public <K,V> KTable<K,V> table(String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a KTable for the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers, as defined by the options in Consumed, are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore using the given Materialized instance. An internal changelog topic is created by default. Because the source topic can be used for recovery, you can avoid creating the changelog topic by setting "topology.optimization" to "all" in the StreamsConfig.
You should only specify serdes in the Consumed instance as these will also be used to overwrite the serdes in Materialized, i.e.,

    streamsBuilder.table(topic, Consumed.with(Serdes.String(), Serdes.String()), Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(storeName))

To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    StoreQueryParameters<ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>>> storeQueryParams =
        StoreQueryParameters.fromNameAndType(queryableStoreName, QueryableStoreTypes.timestampedKeyValueStore());
    ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore = streams.store(storeQueryParams);
    K key = "some-key";
    ValueAndTimestamp<V> valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.metadataForAllStreamsClients() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(String topic)
Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. An internal changelog topic is created by default. Because the source topic can be used for recovery, you can avoid creating the changelog topic by setting "topology.optimization" to "all" in the StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
- Returns:
- a KTable for the specified topic
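A minimal sketch with a hypothetical topic; the default serdes from the config apply and the internal store name is not queryable:

    KTable<String, String> userProfiles = builder.table("user-profiles");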
-
table
public <K,V> KTable<K,V> table(String topic, Consumed<K,V> consumed)
Create a KTable for the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers, as defined by the options in Consumed, are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. An internal changelog topic is created by default. Because the source topic can be used for recovery, you can avoid creating the changelog topic by setting "topology.optimization" to "all" in the StreamsConfig.
-
table
public <K,V> KTable<K,V> table(String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a KTable for the specified topic. The default "auto.offset.reset" strategy as specified in the config is used. Key and value deserializers as defined by the options in Materialized are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore using the Materialized instance. An internal changelog topic is created by default. Because the source topic can be used for recovery, you can avoid creating the changelog topic by setting "topology.optimization" to "all" in the StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
- a KTable for the specified topic
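A sketch of this overload with a hypothetical topic and store name; serdes are supplied through Materialized since there is no Consumed parameter:

    KTable<String, Long> inventory = builder.table(
        "inventory",
        Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("inventory-store")
            .withKeySerde(Serdes.String())
            .withValueSerde(Serdes.Long()));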
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(String topic, Consumed<K,V> consumed)
Create a GlobalKTable for the specified topic. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig or Consumed. Furthermore, GlobalKTable cannot be a versioned state store.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters
- Returns:
- a GlobalKTable for the specified topic
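For example, with a hypothetical topic holding String keys and values:

    GlobalKTable<String, String> zipRegions = builder.globalTable(
        "zip-regions", Consumed.with(Serdes.String(), Serdes.String()));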
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(String topic)
Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig. Furthermore, GlobalKTable cannot be a versioned state store.
- Parameters:
topic - the topic name; cannot be null
- Returns:
- a GlobalKTable for the specified topic
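A minimal sketch with a hypothetical topic; the default serdes from the config apply:

    GlobalKTable<String, String> countryCodes = builder.globalTable("country-codes");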
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a GlobalKTable for the specified topic. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore configured with the provided instance of Materialized. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
You should only specify serdes in the Consumed instance as these will also be used to overwrite the serdes in Materialized, i.e.,

    streamsBuilder.globalTable(topic, Consumed.with(Serdes.String(), Serdes.String()), Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(storeName))

To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    StoreQueryParameters<ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>>> storeQueryParams =
        StoreQueryParameters.fromNameAndType(queryableStoreName, QueryableStoreTypes.timestampedKeyValueStore());
    ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore = streams.store(storeQueryParams);
    K key = "some-key";
    ValueAndTimestamp<V> valueForKey = localStore.get(key);

Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig or Consumed. Furthermore, GlobalKTable cannot be a versioned state store.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
- a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a GlobalKTable for the specified topic. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore configured with the provided instance of Materialized. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    StoreQueryParameters<ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>>> storeQueryParams =
        StoreQueryParameters.fromNameAndType(queryableStoreName, QueryableStoreTypes.timestampedKeyValueStore());
    ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore = streams.store(storeQueryParams);
    K key = "some-key";
    ValueAndTimestamp<V> valueForKey = localStore.get(key);

Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig. Furthermore, GlobalKTable cannot be a versioned state store.
- Parameters:
topic - the topic name; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
- a GlobalKTable for the specified topic
-
addStateStore
public StreamsBuilder addStateStore(StoreBuilder<?> builder)
Adds a state store to the underlying Topology.
It is required to connect state stores to Processors or ValueTransformers before they can be used.
- Parameters:
builder - the builder used to obtain the StateStore instance
- Returns:
- itself
- Throws:
TopologyException - if the state store supplier is already added
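A sketch of adding and connecting a store; the store name is hypothetical and MyCountingProcessor stands in for a user-defined Processor implementation:

    StoreBuilder<KeyValueStore<String, Long>> countsStore =
        Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("my-counts"),
            Serdes.String(),
            Serdes.Long());
    builder.addStateStore(countsStore);

    // Connect the store to a processor by name so the processor may access it.
    builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
           .process(MyCountingProcessor::new, "my-counts");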
-
addGlobalStore
public <KIn,VIn> StreamsBuilder addGlobalStore(StoreBuilder<?> storeBuilder, String topic, Consumed<KIn,VIn> consumed, ProcessorSupplier<KIn,VIn,Void,Void> stateUpdateSupplier)
Adds a global StateStore to the topology. The StateStore sources its data from all partitions of the provided input topic. There will be exactly one instance of this StateStore per Kafka Streams instance.
A SourceNode with the provided sourceName will be added to consume the data arriving from the partitions of the input topic.
The provided ProcessorSupplier will be used to create a Processor that will receive all records forwarded from the SourceNode. The supplier should always generate a new instance. Creating a single Processor object and returning the same object reference in ProcessorSupplier.get() is a violation of the supplier pattern and leads to runtime exceptions. This Processor should be used to keep the StateStore up-to-date. The default TimestampExtractor as specified in the config is used.
It is not required to connect a global store to Processors or ValueTransformers; those have read-only access to all global stores by default.
- Parameters:
storeBuilder - user defined StoreBuilder; cannot be null
topic - the topic to source the data from
consumed - the instance of Consumed used to define optional parameters; cannot be null
stateUpdateSupplier - the instance of ProcessorSupplier
- Returns:
- itself
- Throws:
TopologyException - if the processor of state is already registered
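A sketch under the assumption of a hypothetical config topic and a user-defined GlobalConfigUpdater processor that writes each incoming record into the store; note that a global store's builder must have changelogging disabled:

    builder.addGlobalStore(
        Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("global-config"),
            Serdes.String(),
            Serdes.String())
            .withLoggingDisabled(),                      // global stores must not have a changelog
        "config-topic",
        Consumed.with(Serdes.String(), Serdes.String()),
        () -> new GlobalConfigUpdater("global-config")); // hypothetical Processor keeping the store up-to-date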
-
build
public Topology build()
Returns the Topology that represents the specified processing logic. Note that using this method means no optimizations are performed.
- Returns:
- the Topology that represents the specified processing logic
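For example (assuming props holds the application configuration):

    Topology topology = builder.build();
    System.out.println(topology.describe()); // inspect the wiring before starting
    KafkaStreams streams = new KafkaStreams(topology, props);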
-
build
public Topology build(Properties props)
Returns the Topology that represents the specified processing logic and accepts a Properties instance used to indicate whether to optimize the topology or not.
- Parameters:
props - the Properties used for building a possibly optimized topology
- Returns:
- the Topology that represents the specified processing logic
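For example, to opt in to topology optimizations (the application id and bootstrap server are placeholders):

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE); // "all"

    Topology optimized = builder.build(props);
    KafkaStreams streams = new KafkaStreams(optimized, props);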
-