Kind: SGStream
listKind: SGStreamList
plural: sgstreams
singular: sgstream
shortNames: sgstr
The SGStream custom resource represents a stream of Change Data Capture (CDC) events from a source database to a target service.
Example:

```yaml
apiVersion: stackgres.io/v1alpha1
kind: SGStream
metadata:
  name: cloudevent
spec:
  source:
    type: SGCluster
    sgCluster:
      name: my-cluster
  target:
    type: CloudEvent
    cloudEvent:
      binding: http
      format: json
      http:
        url: cloudevent-service
```
| Property | Description |
|---|---|
| `apiVersion` string | `stackgres.io/v1alpha1` Constraints: required, immutable |
| `kind` string | `SGStream` Constraints: required, immutable |
| `metadata` object | Refer to the Kubernetes API documentation for the fields of the `metadata` field. Constraints: required, updatable |
| `spec` object | Specification of the desired behavior of a StackGres stream. A stream represents the process of performing a change data capture (CDC) operation on a data source, generating a stream of events with information about the changes happening (or that happened) to the database in real time (or from the beginning). See the `spec` section below for details. Constraints: required, updatable |
| `status` object | Status of a StackGres stream. Constraints: optional, updatable |
Specification of the desired behavior of a StackGres stream.
A stream represents the process of performing a change data capture (CDC) operation on a data source that generates a stream of events containing information about the changes happening (or happened) to the database in real time (or from the beginning).
The stream allows specifying different types for the target of the CDC operation. See SGStream.spec.target.type.
The stream performs two distinct operations to generate data source changes for the target: an initial snapshot of the existing data, followed by streaming of subsequent changes.
The CDC is performed using Debezium Engine. SGStream extends the functionality of Debezium by providing a custom signaling channel that allows sending signals by simply adding an annotation to the SGStream resource. To send a signal, create an annotation with the following format:
```yaml
metadata:
  annotations:
    debezium-signal.stackgres.io/<signal type>: <signal data>
```
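For instance, Debezium's standard `log` signal (which writes the signal's message to the connector log) could be sent through this channel. The payload below is illustrative:

```yaml
metadata:
  annotations:
    # Illustrative example: a Debezium "log" signal; the annotation value is
    # the signal data, here the message to write to the connector's log.
    debezium-signal.stackgres.io/log: '{"message": "Checkpoint reached"}'
```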
Also, SGStream provides the following custom signal implementations:

- `tombstone`: allows stopping Debezium streaming and the SGStream completely. This signal is useful to end the streaming gracefully, allowing for the removal of the logical slot created by Debezium.
- `command`: allows executing any SQL command on the target database. Only available when the target type is `SGCluster`.

| Property | Description |
|---|---|
| `pods` object | The configuration for the SGStream Pod. Constraints: required, updatable |
| `source` object | The data source of the stream to which change data capture will be applied. Constraints: required, updatable |
| `target` object | The target of this stream. Constraints: required, updatable |
| `debeziumEngineProperties` object | See https://debezium.io/documentation/reference/stable/development/engine.html#engine-properties Each property is converted from `myPropertyName` to `my.property.name` Constraints: optional, updatable |
| `maxRetries` integer | The maximum number of retries the streaming operation is allowed to do after a failure. A value of … Constraints: optional, updatable |
| `metadata` object | Metadata information for stream created resources. Constraints: optional, updatable |
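The camel-case-to-dotted conversion applied to `debeziumEngineProperties` keys can be sketched as follows. This helper is not part of StackGres; it only mirrors the naming rule documented above:

```python
import re

def to_debezium_property(name: str) -> str:
    """Convert a camelCase key like myPropertyName to my.property.name."""
    # Insert a dot before each uppercase letter, then lowercase the result.
    return re.sub(r"([A-Z])", r".\1", name).lower()

print(to_debezium_property("myPropertyName"))  # my.property.name
print(to_debezium_property("snapshotMode"))    # snapshot.mode
```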
The configuration for SGStream Pod
| Property | Description |
|---|---|
| `persistentVolume` object | Pod's persistent volume configuration. Constraints: required, updatable |
| `resources` object | The resources assigned to the stream container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Constraints: optional, updatable |
| `scheduling` object | Pod custom scheduling, affinity and topology spread constraints configuration. Constraints: optional, updatable |
Pod’s persistent volume configuration.
Example:

```yaml
apiVersion: stackgres.io/v1alpha1
kind: SGStream
metadata:
  name: stackgres
spec:
  pods:
    persistentVolume:
      size: '5Gi'
      storageClass: default
```
| Property | Description |
|---|---|
| `size` string | Size of the PersistentVolume for the stream Pod. This size is specified either in mebibytes, gibibytes or tebibytes (multiples of 2^20, 2^30 or 2^40, respectively). Constraints: required, updatable |
| `storageClass` string | Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolume for the stream. Constraints: optional, updatable |
The resources assigned to the stream container.
See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| Property | Description |
|---|---|
| `claims` []object | Claims lists the names of resources, defined in `spec.resourceClaims`, that are used by this container. This field depends on the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Constraints: optional, updatable |
| `limits` map[string]string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Constraints: optional, updatable |
| `requests` map[string]string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Constraints: optional, updatable |
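Since `requests` and `limits` follow the standard Kubernetes resource syntax, a minimal sketch of the field looks like the following (all values are illustrative):

```yaml
spec:
  pods:
    resources:
      requests:
        cpu: '500m'     # hypothetical minimum CPU for the stream container
        memory: '512Mi'
      limits:
        cpu: '1'
        memory: '1Gi'
```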
ResourceClaim references one entry in PodSpec.ResourceClaims.
| Property | Description |
|---|---|
| `name` string | Name must match the name of one entry in `pod.spec.resourceClaims` of the Pod where this field is used. It makes that resource available inside a container. Constraints: required, updatable |
| `request` string | Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available; otherwise only the result of this request. Constraints: optional, updatable |
Pod custom scheduling, affinity and topology spread constraints configuration.
| Property | Description |
|---|---|
| `nodeAffinity` object | Node affinity is a group of node affinity scheduling rules. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeaffinity-v1-core Constraints: optional, updatable |
| `nodeSelector` map[string]string | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ Constraints: optional, updatable |
| `podAffinity` object | Pod affinity is a group of inter pod affinity scheduling rules. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podaffinity-v1-core Constraints: optional, updatable |
| `podAntiAffinity` object | Pod anti affinity is a group of inter pod anti affinity scheduling rules. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podantiaffinity-v1-core Constraints: optional, updatable |
| `priorityClassName` string | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities, with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be the default, or zero if there is no default. Constraints: optional, updatable |
| `tolerations` []object | If specified, the pod's tolerations. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#toleration-v1-core Constraints: optional, updatable |
| `topologySpreadConstraints` []object | TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. The scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Constraints: optional, updatable |
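A minimal sketch of the `scheduling` field, assuming hypothetical node labels and taints:

```yaml
spec:
  pods:
    scheduling:
      nodeSelector:
        disktype: ssd        # hypothetical node label
      tolerations:
      - key: dedicated       # hypothetical taint key
        operator: Equal
        value: streams
        effect: NoSchedule
```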
Node affinity is a group of node affinity scheduling rules.
See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeaffinity-v1-core
| Property | Description |
|---|---|
| `preferredDuringSchedulingIgnoredDuringExecution` []object | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Constraints: optional, updatable |
| `requiredDuringSchedulingIgnoredDuringExecution` object | A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Constraints: optional, updatable |
An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
| Property | Description |
|---|---|
| `preference` object | A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Constraints: required, updatable |
| `weight` integer | Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. Constraints: required, updatable. Format: int32 |
A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
| Property | Description |
|---|---|
| `matchExpressions` []object | A list of node selector requirements by node's labels. Constraints: optional, updatable |
| `matchFields` []object | A list of node selector requirements by node's fields. Constraints: optional, updatable |
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | The label key that the selector applies to. Constraints: required, updatable |
| `operator` string | Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Constraints: required, updatable |
| `values` []string | An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | The label key that the selector applies to. Constraints: required, updatable |
| `operator` string | Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Constraints: required, updatable |
| `values` []string | An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.
| Property | Description |
|---|---|
| `nodeSelectorTerms` []object | Required. A list of node selector terms. The terms are ORed. Constraints: required, updatable |
A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
| Property | Description |
|---|---|
| `matchExpressions` []object | A list of node selector requirements by node's labels. Constraints: optional, updatable |
| `matchFields` []object | A list of node selector requirements by node's fields. Constraints: optional, updatable |
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | The label key that the selector applies to. Constraints: required, updatable |
| `operator` string | Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Constraints: required, updatable |
| `values` []string | An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | The label key that the selector applies to. Constraints: required, updatable |
| `operator` string | Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Constraints: required, updatable |
| `values` []string | An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
Pod affinity is a group of inter pod affinity scheduling rules.
See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podaffinity-v1-core
| Property | Description |
|---|---|
| `preferredDuringSchedulingIgnoredDuringExecution` []object | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Constraints: optional, updatable |
| `requiredDuringSchedulingIgnoredDuringExecution` []object | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Constraints: optional, updatable |
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

| Property | Description |
|---|---|
| `podAffinityTerm` object | Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running. Constraints: required, updatable |
| `weight` integer | Weight associated with matching the corresponding podAffinityTerm, in the range 1-100. Constraints: required, updatable. Format: int32 |
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.

| Property | Description |
|---|---|
| `topologyKey` string | This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. Constraints: required, updatable |
| `labelSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `matchLabelKeys` []string | MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key in (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `mismatchLabelKeys` []string | MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key notin (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `namespaceSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `namespaces` []string | namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. A null or empty namespaces list and null namespaceSelector means "this pod's namespace". Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
| Property | Description |
|---|---|
| `matchExpressions` []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. Constraints: optional, updatable |
| `matchLabels` map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | key is the label key that the selector applies to. Constraints: required, updatable |
| `operator` string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. Constraints: required, updatable |
| `values` []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
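The equivalence between `matchLabels` and `matchExpressions` described above can be sketched as follows (the `app: my-app` label is illustrative):

```yaml
# Selector using matchLabels:
labelSelector:
  matchLabels:
    app: my-app
---
# Equivalent selector using matchExpressions:
labelSelector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - my-app
```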
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
| Property | Description |
|---|---|
| `matchExpressions` []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. Constraints: optional, updatable |
| `matchLabels` map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | key is the label key that the selector applies to. Constraints: required, updatable |
| `operator` string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. Constraints: required, updatable |
| `values` []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.

| Property | Description |
|---|---|
| `topologyKey` string | This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. Constraints: required, updatable |
| `labelSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `matchLabelKeys` []string | MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key in (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `mismatchLabelKeys` []string | MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key notin (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `namespaceSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `namespaces` []string | namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. A null or empty namespaces list and null namespaceSelector means "this pod's namespace". Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
| Property | Description |
|---|---|
| `matchExpressions` []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. Constraints: optional, updatable |
| `matchLabels` map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | key is the label key that the selector applies to. Constraints: required, updatable |
| `operator` string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. Constraints: required, updatable |
| `values` []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
| Property | Description |
|---|---|
| `matchExpressions` []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. Constraints: optional, updatable |
| `matchLabels` map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Description |
|---|---|
| `key` string | key is the label key that the selector applies to. Constraints: required, updatable |
| `operator` string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. Constraints: required, updatable |
| `values` []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. Constraints: optional, updatable |
Pod anti affinity is a group of inter pod anti affinity scheduling rules.
See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podantiaffinity-v1-core
| Property | Description |
|---|---|
| `preferredDuringSchedulingIgnoredDuringExecution` []object | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and subtracting "weight" from the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Constraints: optional, updatable |
| `requiredDuringSchedulingIgnoredDuringExecution` []object | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Constraints: optional, updatable |
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

| Property | Description |
|---|---|
| `podAffinityTerm` object | Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running. Constraints: required, updatable |
| `weight` integer | Weight associated with matching the corresponding podAffinityTerm, in the range 1-100. Constraints: required, updatable. Format: int32 |
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.

| Property | Description |
|---|---|
| `topologyKey` string | This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. Constraints: required, updatable |
| `labelSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `matchLabelKeys` []string | MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key in (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `mismatchLabelKeys` []string | MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels; those key-value labels are merged with labelSelector as `key notin (value)` to select the group of existing pods which will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. Constraints: optional, updatable |
| `namespaceSelector` object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Constraints: optional, updatable |
| `namespaces` []string | namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. A null or empty namespaces list and null namespaceSelector means "this pod's namespace". Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Property |
Description |
|---|---|
| matchExpressions []object |
matchExpressions is a list of label selector requirements. The requirements are ANDed.
Constraints: optional, updatable |
| matchLabels map[string]string |
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Property |
Description |
|---|---|
| key string |
key is the label key that the selector applies to.
Constraints: required, updatable |
| operator string |
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
Constraints: required, updatable |
| values []string |
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Property |
Description |
|---|---|
| matchExpressions []object |
matchExpressions is a list of label selector requirements. The requirements are ANDed.
Constraints: optional, updatable |
| matchLabels map[string]string |
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Property |
Description |
|---|---|
| key string |
key is the label key that the selector applies to.
Constraints: required, updatable |
| operator string |
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
Constraints: required, updatable |
| values []string |
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
Constraints: optional, updatable |
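The AND semantics of the selector fields above can be illustrated with a short sketch (label keys and values are hypothetical):

```yaml
# Matches pods labeled app=payments whose tier is backend or worker.
# matchLabels and matchExpressions are ANDed together.
labelSelector:
  matchLabels:
    app: payments
  matchExpressions:
  - key: tier
    operator: In
    values: ["backend", "worker"]
```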
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running.
Property |
Description |
|---|---|
| topologyKey string |
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
Constraints: required, updatable |
| labelSelector object |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Constraints: optional, updatable |
| matchLabelKeys []string |
MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with
labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn’t set.Constraints: optional, updatable |
| mismatchLabelKeys []string |
MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with
labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn’t set.Constraints: optional, updatable |
| namespaceSelector object |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Constraints: optional, updatable |
| namespaces []string |
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Property |
Description |
|---|---|
| matchExpressions []object |
matchExpressions is a list of label selector requirements. The requirements are ANDed.
Constraints: optional, updatable |
| matchLabels map[string]string |
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Property |
Description |
|---|---|
| key string |
key is the label key that the selector applies to.
Constraints: required, updatable |
| operator string |
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
Constraints: required, updatable |
| values []string |
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Property |
Description |
|---|---|
| matchExpressions []object |
matchExpressions is a list of label selector requirements. The requirements are ANDed.
Constraints: optional, updatable |
| matchLabels map[string]string |
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Property |
Description |
|---|---|
| key string |
key is the label key that the selector applies to.
Constraints: required, updatable |
| operator string |
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
Constraints: required, updatable |
| values []string |
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
Constraints: optional, updatable |
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator specified by the operator field.
Property |
Description |
|---|---|
| effect string |
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
Constraints: optional, updatable |
| key string |
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
Constraints: optional, updatable |
| operator string |
Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
Constraints: optional, updatable |
| tolerationSeconds integer |
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
Constraints: optional, updatable Format: int64 |
| value string |
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
Constraints: optional, updatable |
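As a sketch of the toleration fields above (taint keys and values are hypothetical):

```yaml
# Tolerates the dedicated=streams:NoSchedule taint indefinitely, and a
# NoExecute unreachable taint for up to 5 minutes before eviction.
tolerations:
- key: dedicated
  operator: Equal
  value: streams
  effect: NoSchedule
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```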
TopologySpreadConstraint specifies how to spread matching pods among the given topology.
Property |
Description |
|---|---|
| maxSkew integer |
MaxSkew describes the degree to which pods may be unevenly distributed. When
whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It’s a required field. Default value is 1 and 0 is not allowed.Constraints: required, updatable Format: int32 |
| topologyKey string |
TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
Constraints: required, updatable |
| whenUnsatisfiable string |
WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,
but giving higher precedence to topologies that would help reduce the
skew.
A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.
Constraints: required, updatable |
| labelSelector object |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Constraints: optional, updatable |
| matchLabelKeys []string |
MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn’t set. Keys that don’t exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). Constraints: optional, updatable |
| minDomains integer |
MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats “global minimum” as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won’t schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. Constraints: optional, updatable Format: int32 |
| nodeAffinityPolicy string |
NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy. Constraints: optional, updatable |
| nodeTaintsPolicy string |
NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy. Constraints: optional, updatable |
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Property |
Description |
|---|---|
| matchExpressions []object |
matchExpressions is a list of label selector requirements. The requirements are ANDed.
Constraints: optional, updatable |
| matchLabels map[string]string |
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
Constraints: optional, updatable |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Property |
Description |
|---|---|
| key string |
key is the label key that the selector applies to.
Constraints: required, updatable |
| operator string |
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
Constraints: required, updatable |
| values []string |
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
Constraints: optional, updatable |
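A minimal sketch of a topology spread constraint using the fields above (the app=stream label is hypothetical):

```yaml
# Spread matching pods evenly across zones; tolerate at most a difference
# of one pod between the most and least loaded zones.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: stream
```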
The data source of the stream to which change data capture will be applied.
Property |
Description |
|---|---|
| type string |
The type of data source. Available data source types are SGCluster and Postgres.
Constraints: required, updatable |
| postgres object |
The configuration of the data source required when type is
Postgres.
Constraints: optional, updatable |
| sgCluster object |
The configuration of the data source required when type is
SGCluster.
Constraints: optional, updatable |
The configuration of the data source required when type is Postgres.
Property |
Description |
|---|---|
| host string |
The hostname of the Postgres instance.
Constraints: required, updatable |
| database string |
The name of the target database to which the CDC process will connect.
If not specified the default postgres database will be targeted.
Constraints: optional, updatable |
| debeziumProperties object |
Specific properties of the Debezium Postgres connector.
Each property name is converted from myPropertyName to my.property.name
Constraints: optional, updatable |
| excludes []string |
A list of regular expressions that allow matching one or more
<schema>.<table>.<column> entries to be filtered out before sending to the target.
This property is mutually exclusive with includes. Constraints: optional, updatable |
| includes []string |
A list of regular expressions that allow matching one or more
<schema>.<table>.<column> entries to be filtered before sending to the target.
This property is mutually exclusive with excludes. Constraints: optional, updatable |
| password object |
The password used by the CDC process to connect to the database.
If not specified the default superuser password will be used.
Constraints: optional, updatable |
| port integer |
The port of the Postgres instance. If not specified, port 5432 will be used.
Constraints: optional, updatable |
| username object |
The username used by the CDC process to connect to the database.
If not specified the default superuser username (by default postgres) will be used.
Constraints: optional, updatable |
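Putting the Postgres source fields together, a hedged sketch of spec.source (hostname, database, regular expression and Secret references are hypothetical, and the exact shape of the username and password objects is assumed to be a Secret key selector):

```yaml
source:
  type: Postgres
  postgres:
    host: postgres.example.com
    port: 5432
    database: orders
    username:
      name: postgres-credentials   # assumed Secret selector shape
      key: username
    password:
      name: postgres-credentials
      key: password
    includes:
    - public\.orders\..*           # only columns of public.orders
```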
Specific properties of the Debezium Postgres connector.
Each property name is converted from myPropertyName to my.property.name
Property |
Description |
|---|---|
| binaryHandlingMode string |
Default `bytes`. Specifies how binary (bytea) columns should be represented in change events: bytes represents binary data as a byte array, base64 as a base64-encoded string, base64-url-safe as a base64-url-safe-encoded string, and hex as a hex-encoded (base16) string.
Constraints: optional, updatable |
| columnMaskHash map[string]map[string][]string |
An optional section that allows specifying, for a hash algorithm and a salt, a list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
The hash algorithm (e.g. SHA-256) type and configuration.
Constraints: optional, updatable |
| columnMaskHashV2 map[string]map[string][]string |
Similar to columnMaskHash but using hashing strategy version 2.
Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems.
The hash algorithm (e.g. SHA-256) type and configuration.
Constraints: optional, updatable |
| columnMaskWithLengthChars []string |
An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk (*) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string.
The fully-qualified name of a column observes the following format: schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
Constraints: optional, updatable |
| columnPropagateSourceType []string |
Default `[.*]`. An optional list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the fields __debezium.source.column.type and __debezium.source.column.length to the schema of event records.
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
Constraints: optional, updatable |
| columnTruncateToLengthChars []string |
An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars.
The fully-qualified name of a column observes the following format: schemaName.tableName.columnName.
Constraints: optional, updatable |
| converters map[string]map[string]string |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use.
You must set the converters property to enable the connector to use a custom converter.
For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface.
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter.
Each property is converted from myPropertyName to my.property.name
Constraints: optional, updatable |
| customMetricTags map[string]string |
The custom metric tags accept key-value pairs that customize the MBean object name; they are appended to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag.
Constraints: optional, updatable |
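For instance, a hypothetical pair of tags (keys and values are illustrative):

```yaml
customMetricTags:
  env: production
  team: data-platform
```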
| databaseInitialStatements []string |
A list of SQL statements that the connector executes when it establishes a JDBC connection to the database.
The connector may establish JDBC connections at its own discretion. Consequently, this property is useful for configuration of session parameters only, and not for executing DML statements.
The connector does not execute these statements when it creates a connection for reading the transaction log.
Constraints: optional, updatable |
| databaseQueryTimeoutMs integer |
Default `0`. Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to 0 (zero) to remove the timeout limit.
Constraints: optional, updatable |
| datatypePropagateSourceType []string |
Default `[.*]`. An optional list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the extra fields __debezium.source.column.type and __debezium.source.column.length in their schema.
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName.
To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name.
For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings.
Constraints: optional, updatable |
| decimalHandlingMode string |
Default `precise`. Specifies how the connector should handle values for DECIMAL and NUMERIC columns: precise represents values exactly, using java.math.BigDecimal values encoded in binary form in change events; double represents values using double values, which might result in a loss of precision; string encodes values as formatted strings, which are easy to consume but result in the loss of semantic information about the real type.
Constraints: optional, updatable |
| errorsMaxRetries integer |
Default `-1`. Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
Set one of the following options: -1 to retry indefinitely, 0 to disable retries, or a positive integer to set the maximum number of retries before the connector fails.
Constraints: optional, updatable |
| eventProcessingFailureHandlingMode string |
Default `fail`. Specifies how the connector should react to exceptions during processing of events: fail propagates the exception and stops the connector, warn logs the offset of the problematic event and skips it, skip skips the problematic event entirely.
Constraints: optional, updatable |
| extendedHeadersEnabled boolean |
Default `true`. This property specifies whether Debezium adds context headers with the prefix __debezium.context. to the messages that it emits.
These headers are required by the OpenLineage integration and provide metadata that enables downstream processing systems to track and identify the sources of change events.
The property adds the following headers:
Constraints: optional, updatable |
| fieldNameAdjustmentMode string |
Default `none`. Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings: none applies no adjustment, avro replaces characters that cannot be used in Avro names with underscores, avro_unicode replaces underscores and characters that cannot be used in Avro names with corresponding unicode escapes such as _uxxxx.
For more information, see Avro naming.
Constraints: optional, updatable |
| flushLsnSource boolean |
Default `true`. Determines whether the connector should commit the LSN of the processed records in the source Postgres database so that the WAL logs can be deleted. Specify false if you don't want the connector to do this. Note that if set to false the LSN will not be acknowledged by Debezium; as a result WAL logs will not be cleared, which might cause disk space issues. The user is expected to handle the acknowledgement of the LSN outside Debezium.
Constraints: optional, updatable |
| guardrailCollectionsLimitAction string |
Default `warn`. Specifies the action to trigger if the number of tables that the connector captures exceeds the number that you specify in the guardrailCollectionsMax property. Set the property to one of the following values:
Constraints: optional, updatable |
| guardrailCollectionsMax integer |
Default `0`. Specifies the maximum number of tables that the connector can capture. Exceeding this limit triggers the action specified by guardrailCollectionsLimitAction. Set this property to 0 to prevent the connector from triggering guardrail actions.
Constraints: optional, updatable |
| heartbeatActionQuery string |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message.
This is useful for resolving the situation described in WAL disk space consumption, where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing WAL records and thus acknowledging WAL positions with the database. To address this situation, create a heartbeat table in the low-traffic database, and set this property to a statement that inserts records into that table, for example:
This allows the connector to receive changes from the low-traffic database and acknowledge their LSNs, which prevents unbounded WAL growth on the database host.
Constraints: optional, updatable |
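The elided statement can be sketched as follows (the heartbeat table name is hypothetical):

```yaml
debeziumProperties:
  heartbeatIntervalMs: 10000
  # Inserts into a dedicated table in the low-traffic database so that its
  # LSN is acknowledged and WAL can be reclaimed.
  heartbeatActionQuery: "INSERT INTO debezium_heartbeat (ts) VALUES (now())"
```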
| heartbeatIntervalMs integer |
Default `0`. Controls how frequently the connector sends heartbeat messages to a target topic. The default behavior is that the connector does not send heartbeat messages.
Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages.
Heartbeat messages are needed when there are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. In this situation, the connector reads from the database transaction log as usual but rarely emits change records to the target. This means that no offset updates are committed to the target and the connector does not have an opportunity to send the latest retrieved LSN to the database. The database retains WAL files that contain events that have already been processed by the connector. Sending heartbeat messages enables the connector to send the latest retrieved LSN to the database, which allows the database to reclaim disk space being used by no longer needed WAL files.
Constraints: optional, updatable |
| hstoreHandlingMode string |
Default `json`. Specifies how the connector should handle values for hstore columns:
Constraints: optional, updatable |
| includeUnknownDatatypes boolean |
Default `true`. Specifies connector behavior when the connector encounters a field whose data type is unknown. When this property is set to `false`, the connector omits the field from the change event and logs a warning.
Set this property to true if you want the change event to contain an opaque binary representation of the field. This lets consumers decode the field. You can control the exact representation by setting the binaryHandlingMode property.
Constraints: optional, updatable |
| incrementalSnapshotChunkSize integer |
Default `1024`. The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
Constraints: optional, updatable |
| incrementalSnapshotWatermarkingStrategy string |
Default `insert_insert`. Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
You can specify one of the following options:
Constraints: optional, updatable |
| intervalHandlingMode string |
Default `numeric`. Specifies how the connector should handle values for interval columns:
Constraints: optional, updatable |
| maxBatchSize integer |
Default `2048`. Positive integer value that specifies the maximum size of each batch of events that the connector processes.
Constraints: optional, updatable |
| maxQueueSize integer |
Default `8192`. Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes/sends them. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write/send them, or when the target becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize.
Constraints: optional, updatable |
| maxQueueSizeInBytes integer |
Default `0`. A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value.
If maxQueueSize is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set maxQueueSize=1000, and maxQueueSizeInBytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
Constraints: optional, updatable |
| messageKeyColumns []string |
A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that are published to the topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: `<fully-qualified_table_name>:<key_column>,<key_column>`.
Constraints: optional, updatable |
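For instance (the table and column names below are hypothetical), a composite key for a table that lacks a primary key could be declared as:

```yaml
# Sketch: use customer_id plus order_id as the message key for
# a hypothetical table inventory.orders.
debeziumProperties:
  messageKeyColumns:
    - "inventory.orders:customer_id,order_id"
```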
| messagePrefixExcludeList []string |
An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you do not want the connector to capture. When this property is set, the connector does not capture logical decoding messages that use the specified prefixes. All other messages are captured.
To exclude all logical decoding messages, set the value of this property to an expression that matches every prefix. To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set the `messagePrefixIncludeList` property. For information about the structure of message events and about their ordering semantics, see message events.
Constraints: optional, updatable |
| messagePrefixIncludeList []string |
An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you want the connector to capture. By default, the connector captures all logical decoding messages. When this property is set, the connector captures only logical decoding messages with the prefixes specified by the property. All other logical decoding messages are excluded.
To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set the `messagePrefixExcludeList` property. For information about the structure of message events and about their ordering semantics, see message events.
Constraints: optional, updatable |
| moneyFractionDigits integer |
Default `2`. Specifies how many decimal digits should be used when converting the Postgres money type to java.math.BigDecimal, which represents the values in change events. Applicable only when decimalHandlingMode is set to precise.
Constraints: optional, updatable |
| notificationEnabledChannels []string |
List of notification channel names that are enabled for the connector. By default, the following channels are available: sink, log and jmx. Optionally, you can also implement a custom notification channel.
Constraints: optional, updatable |
| pluginName string |
Default `pgoutput`. The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. Supported values are decoderbufs and pgoutput.
Constraints: optional, updatable |
| pollIntervalMs integer |
Default `500`. Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events.
Constraints: optional, updatable |
| provideTransactionMetadata boolean |
Default `false`. Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see Transaction metadata.
Constraints: optional, updatable |
| publicationAutocreateMode string |
Default `all_tables`. Applies only when streaming changes by using the pgoutput plug-in. The setting determines how creation of a publication should work. Specify one of the following values:
Constraints: optional, updatable |
| publicationName string |
Default: the stream name, with any character not matching `[a-zA-Z0-9]` changed to the `_` character. The name of the PostgreSQL publication created for streaming changes when using pgoutput. This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined.
Constraints: optional, updatable |
| publishViaPartitionRoot boolean |
Default `false`. Specifies how the connector captures and emits events for changes that it captures from partitioned tables. This setting applies only if the publicationAutocreateMode property is set to all_tables or filtered, and Debezium creates the publication for the captured tables.
Set one of the following options:
Constraints: optional, updatable |
| readOnly boolean |
Default `false`. Specifies whether a connector writes watermarks to the signal data collection to track the progress of an incremental snapshot. Set the value to true to enable a connector that has a read-only connection to the database to use an incremental snapshot watermarking strategy that does not require writing to the signal data collection.
Constraints: optional, updatable |
| replicaIdentityAutosetValues []string |
Determines the replica identity value at the table level.
This option overwrites the existing value in the database. A comma-separated list of regular expressions that match fully-qualified table names, together with the replica identity value to be used for each table.
Each expression must match the pattern `<fully-qualified table name>:<replica identity>`.
Constraints: optional, updatable |
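As an illustrative sketch (the schema and table names are placeholders), this could look like:

```yaml
# Sketch: set REPLICA IDENTITY FULL for every table in the public schema,
# and DEFAULT for a hypothetical inventory.orders table.
debeziumProperties:
  replicaIdentityAutosetValues:
    - "public.*:FULL"
    - "inventory.orders:DEFAULT"
```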
| retriableRestartConnectorWaitMs integer |
Default `10000` (10 seconds). The number of milliseconds to wait before restarting a connector after a retriable error occurs.
Constraints: optional, updatable |
| schemaNameAdjustmentMode string |
Default `none`. Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
Constraints: optional, updatable |
| schemaRefreshMode string |
Default `columns_diff`. Specify the conditions that trigger a refresh of the in-memory schema for a table.
This setting can significantly improve connector performance if there are frequently-updated tables that have TOASTed data that are rarely part of updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.
Constraints: optional, updatable |
| signalDataCollection string |
Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: `<schemaName>.<tableName>`.
Constraints: optional, updatable |
| signalEnabledChannels []string |
Default `[sgstream-annotations]`. List of the signaling channel names that are enabled for the connector. By default, the following channels are available: sgstream-annotations, source, kafka, file and jmx. Optionally, you can also implement a custom signaling channel.
Constraints: optional, updatable |
| skipMessagesWithoutChange boolean |
Default `false`. Specifies whether to skip publishing messages when there is no change in included columns. This essentially filters out messages when there is no change in the columns included per the includes or excludes fields. Note: this only works when the REPLICA IDENTITY of the table is set to FULL.
Constraints: optional, updatable |
| skippedOperations []string |
Default `none`. A list of operation types that will be skipped during streaming.
The operations include:
Constraints: optional, updatable |
| slotDropOnStop boolean |
Default `true`. Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. When set to `false`, the replication slot remains configured for the connector when it stops; when the connector restarts, having the same replication slot enables the connector to start processing where it left off. Set to `true` only in testing or development environments: dropping the slot allows the database to discard WAL segments, and when the connector restarts it performs a new snapshot or continues from a persistent offset in the target offsets topic.
Constraints: optional, updatable |
| slotFailover boolean |
Default `false`. Specifies whether the connector creates a failover slot. If you omit this setting, or if the primary server runs PostgreSQL 16 or earlier, the connector does not create a failover slot.
Failover slots are available only in PostgreSQL 17 and later. Constraints: optional, updatable |
| slotMaxRetries integer |
Default `6`. If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect.
Constraints: optional, updatable |
| slotName string |
Default: the stream name, with any character not matching `[a-zA-Z0-9]` changed to the `_` character. The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring.
Slot names must conform to PostgreSQL replication slot naming rules, which state: “Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character.”
Constraints: optional, updatable |
| slotRetryDelayMs integer |
Default `10000` (10 seconds). The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot.
Constraints: optional, updatable |
| slotStreamParams map[string]string |
Parameters to pass to the configured logical decoding plug-in.
Constraints: optional, updatable |
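A minimal sketch of passing plug-in parameters; which parameter names are valid depends entirely on the logical decoding plug-in in use, so the keys below are illustrative only:

```yaml
# Sketch: forward options to the logical decoding plug-in.
# The parameter names/values shown are hypothetical examples.
debeziumProperties:
  slotStreamParams:
    add-tables: "public.table1,public.table2"
```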
| snapshotDelayMs integer |
An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
Constraints: optional, updatable |
| snapshotFetchSize integer |
Default `10240`. During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch.
Constraints: optional, updatable |
| snapshotIncludeCollectionList []string |
Default: all captured tables are included. An optional list of regular expressions that match the fully-qualified names (`<schemaName>.<tableName>`) of the tables to include in a snapshot.
Constraints: optional, updatable |
| snapshotIsolationMode string |
Default `serializable`. Specifies the transaction isolation level and the type of locking, if any, that the connector applies when it reads data during an initial snapshot or ad hoc blocking snapshot.
Each isolation level strikes a different balance between optimizing concurrency and performance on the one hand, and maximizing data consistency and accuracy on the other. Snapshots that use stricter isolation levels result in higher quality, more consistent data, but the cost of the improvement is decreased performance due to longer lock times and fewer concurrent transactions. Less restrictive isolation levels can increase efficiency, but at the expense of inconsistent data. For more information about transaction isolation levels in PostgreSQL, see the PostgreSQL documentation. Specify one of the following isolation levels:
Constraints: optional, updatable |
| snapshotLockTimeoutMs integer |
Default `10000`. Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. How the connector performs snapshots provides details.
Constraints: optional, updatable |
| snapshotLockingMode string |
Default `none`. Specifies how the connector holds locks on tables while performing a schema snapshot. Set one of the following options:
Constraints: optional, updatable |
| snapshotLockingModeCustomName string |
When snapshotLockingMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the `io.debezium.spi.snapshot.SnapshotLock` interface. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotMaxThreads integer |
Default `1`. Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. This feature is incubating.
Constraints: optional, updatable |
| snapshotMode string |
Default `initial`. Specifies the criteria for performing a snapshot when the connector starts:
For more information, see the table of snapshot.mode options.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotData boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table data when it performs a snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotOnDataError boolean |
Default `false`. If the snapshotMode is set to configuration_based, this property specifies whether the connector attempts to snapshot table data if it does not find the last committed offset in the transaction log. Set the value to true to instruct the connector to perform a new snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotOnSchemaError boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table schema in a snapshot if the schema history topic is not available.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotSchema boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes the table schema when it performs a snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedStartStream boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector begins to stream change events after a snapshot completes.
Constraints: optional, updatable |
| snapshotModeCustomName string |
When snapshotMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the `io.debezium.spi.snapshot.Snapshotter` interface. The provided implementation is called after a connector restart to determine whether to perform a snapshot. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotQueryMode string |
Default `select_all`. Specifies how the connector queries data while performing a snapshot. Set one of the following options:
Constraints: optional, updatable |
| snapshotQueryModeCustomName string |
When snapshotQueryMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the `io.debezium.spi.snapshot.SnapshotQuery` interface. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotSelectStatementOverrides map[string]string |
Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property maps fully-qualified table names, in the form `<schemaName>.<tableName>`, to the SELECT statement to use when adding that table's rows to the snapshot.
For example, given an override whose SELECT statement filters on a soft-delete column delete_flag, the resulting snapshot includes only the records for which delete_flag = 0.
Constraints: optional, updatable |
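The soft-delete filter described above could be sketched like this (the schema, table, and column names are assumptions for illustration):

```yaml
# Sketch: during the snapshot, select only non-deleted rows from
# a hypothetical table customers.orders.
debeziumProperties:
  snapshotSelectStatementOverrides:
    "customers.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0"
```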
| statusUpdateIntervalMs integer |
Default `10000`. Frequency for sending replication connection status updates to the server, given in milliseconds. The property also controls how frequently the database status is checked to detect a dead connection in case the database was shut down.
Constraints: optional, updatable |
| streamingDelayMs integer |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the offsetFlushIntervalMs property that is set for the Kafka Connect worker.
Constraints: optional, updatable |
| timePrecisionMode string |
Default `adaptive`. Time, date, and timestamps can be represented with different kinds of precision:
Constraints: optional, updatable |
| tombstonesOnDelete boolean |
Default `true`. Controls whether a delete event is followed by a tombstone event.
After a source record is deleted, emitting a tombstone event (the default behavior) makes it possible to completely remove all events that pertain to the key of the deleted row, in case log compaction is enabled for the topic.
Constraints: optional, updatable |
| topicCacheSize integer |
Default `10000`. The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name corresponding to a given data collection.
Constraints: optional, updatable |
| topicDelimiter string |
Default `.`. Specifies the delimiter for the topic name.
Constraints: optional, updatable |
| topicHeartbeatPrefix string |
Default `__debezium-heartbeat`. Controls the name of the topic to which the connector sends heartbeat messages. For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
Constraints: optional, updatable |
| topicNamingStrategy string |
Default `io.debezium.schema.SchemaTopicNamingStrategy`. The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events, etc.
Constraints: optional, updatable |
| topicTransaction string |
Default `transaction`. Controls the name of the topic to which the connector sends transaction metadata messages. For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
Constraints: optional, updatable |
| unavailableValuePlaceholder string |
Default `__debezium_unavailable_value`. Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of unavailable.value.placeholder starts with the hex: prefix, it is expected that the rest of the string represents hexadecimally encoded octets. For more information, see toasted values.
Constraints: optional, updatable |
| xminFetchIntervalMs integer |
Default `0`. How often, in milliseconds, the XMIN will be read from the replication slot. The XMIN value provides the lower bounds of where a new replication slot could start from. The default value of 0 disables XMIN tracking.
Constraints: optional, updatable |
The password used by the CDC process to connect to the database.
If not specified, the default superuser password will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the password is stored.
Constraints: required, updatable |
| name string |
The Secret name where the password is stored.
Constraints: required, updatable |
The username used by the CDC process to connect to the database.
If not specified, the default superuser username (by default postgres) will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the username is stored.
Constraints: required, updatable |
| name string |
The Secret name where the username is stored.
Constraints: required, updatable |
The configuration of the data source required when type is SGCluster.
Property |
Description |
|---|---|
| name string |
The target SGCluster name.
Constraints: required, updatable |
| database string |
The database name to which the CDC process will connect.
If not specified, the default postgres database will be targeted.
Constraints: optional, updatable |
| debeziumProperties object |
Specific properties of the Debezium Postgres connector.
Each property is converted from myPropertyName to my.property.name
Constraints: optional, updatable |
| excludes []string |
A list of regular expressions that allow matching one or more
<schema>.<table>.<column> entries to be filtered out before sending to the target.
This property is mutually exclusive with the `includes` property. Constraints: optional, updatable |
| includes []string |
A list of regular expressions that allow matching one or more
<schema>.<table>.<column> entries to be filtered before sending to the target.
This property is mutually exclusive with the `excludes` property. Constraints: optional, updatable |
| password object |
The password used by the CDC process to connect to the database.
If not specified, the default superuser password will be used.
Constraints: optional, updatable |
| skipDropReplicationSlotAndPublicationOnTombstone boolean |
When set to `true`, the replication slot and publication will not be dropped after receiving the tombstone signal. Constraints: optional, updatable |
| username object |
The username used by the CDC process to connect to the database.
If not specified, the default superuser username (by default postgres) will be used.
Constraints: optional, updatable |
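Putting the properties above together, a source section could look like the following sketch (the cluster name, database, Secret name, and regular expressions are placeholders, not defaults):

```yaml
# Sketch of an SGStream source targeting an SGCluster, with
# credentials read from a hypothetical Secret and a column filter.
source:
  type: SGCluster
  sgCluster:
    name: my-cluster
    database: sales                # hypothetical database name
    includes:
      - "public\\.orders\\..*"     # capture all columns of public.orders
    username:
      name: stream-credentials     # hypothetical Secret name
      key: username
    password:
      name: stream-credentials
      key: password
```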
Specific properties of the Debezium Postgres connector.
Each property is converted from myPropertyName to my.property.name
Property |
Description |
|---|---|
| binaryHandlingMode string |
Default `bytes`. Specifies how binary (bytea) columns should be represented in change events:
Constraints: optional, updatable |
| columnMaskHash map[string]map[string][]string |
An optional section that allows specifying, for a hash algorithm and a salt, a list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form `<schemaName>.<tableName>.<columnName>`.
The hash algorithm (e.g. SHA-256) type and configuration.
Constraints: optional, updatable |
| columnMaskHashV2 map[string]map[string][]string |
Similar to columnMaskHash but using hashing strategy version 2.
Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems.
The hash algorithm (e.g. SHA-256) type and configuration.
Constraints: optional, updatable |
| columnMaskWithLengthChars []string |
An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk (*) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string.
The fully-qualified name of a column observes the following format: schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
Constraints: optional, updatable |
| columnPropagateSourceType []string |
Default `[.*]`. An optional list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
Constraints: optional, updatable |
| columnTruncateToLengthChars []string |
An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars.
The fully-qualified name of a column observes the following format: `<schemaName>.<tableName>.<columnName>`.
Constraints: optional, updatable |
| converters map[string]map[string]string |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use.
You must set the converters property to enable the connector to use a custom converter.
For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface.
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter.
Each property is converted from myPropertyName to my.property.name
Constraints: optional, updatable |
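As a sketch of the structure described above (the converter class and its `schemaName` parameter are hypothetical; only the `type` key is required):

```yaml
# Sketch: register a custom converter under the symbolic name "isbn".
# type must point at a class implementing the converter interface;
# any other keys are passed to that converter as configuration.
debeziumProperties:
  converters:
    isbn:
      type: "io.example.IsbnConverter"   # hypothetical converter class
      schemaName: "isbn"
```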
| customMetricTags map[string]string |
Accepts key-value pairs used to customize the MBean object name; the tags are appended to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value is that tag's value.
Constraints: optional, updatable |
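For example (tag names and values are illustrative), two tags could be added like this:

```yaml
# Sketch: append database=sales,table=orders to the MBean object name.
debeziumProperties:
  customMetricTags:
    database: sales
    table: orders
```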
| databaseInitialStatements []string |
A list of SQL statements that the connector executes when it establishes a JDBC connection to the database.
The connector may establish JDBC connections at its own discretion. Consequently, this property is useful for configuration of session parameters only, and not for executing DML statements.
The connector does not execute these statements when it creates a connection for reading the transaction log.
Constraints: optional, updatable |
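Since these statements run on each JDBC connection the connector opens, a typical use is setting session parameters, as in this sketch (the statements shown are illustrative):

```yaml
# Sketch: configure session parameters on every JDBC connection.
# Do not use this property for DML statements.
debeziumProperties:
  databaseInitialStatements:
    - "SET statement_timeout = 0"
    - "SET lock_timeout = '10s'"
```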
| databaseQueryTimeoutMs integer |
Default `0`. Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to 0 (zero) to remove the timeout limit.
Constraints: optional, updatable |
| datatypePropagateSourceType []string |
Default `[.*]`. An optional list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName.
To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name.
For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings.
Constraints: optional, updatable |
| decimalHandlingMode string |
Default
precise. Specifies how the connector should handle values for DECIMAL and NUMERIC columns:
Constraints: optional, updatable |
| errorsMaxRetries integer |
Default `-1`. Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
Set one of the following options:
Constraints: optional, updatable |
| eventProcessingFailureHandlingMode string |
Default `fail`. Specifies how the connector should react to exceptions during processing of events:
Constraints: optional, updatable |
| extendedHeadersEnabled boolean |
Default `true`. This property specifies whether Debezium adds context headers with the prefix __debezium.context. to the messages that it emits.
These headers are required by the OpenLineage integration and provide metadata that enables downstream processing systems to track and identify the sources of change events.
The property adds following headers:
Constraints: optional, updatable |
| fieldNameAdjustmentMode string |
Default `none`. Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
For more information, see Avro naming.
Constraints: optional, updatable |
| flushLsnSource boolean |
Default `true`. Determines whether the connector commits the LSN of the processed records in the source PostgreSQL database so that the WAL logs can be deleted. Specify false if you don’t want the connector to do this. Note that if set to false, the LSN is not acknowledged by Debezium, and as a result the WAL logs are not cleared, which might result in disk space issues. In that case the user is expected to handle the acknowledgement of the LSN outside Debezium.
Constraints: optional, updatable |
| guardrailCollectionsLimitAction string |
Default `warn`. Specifies the action to trigger if the number of tables that the connector captures exceeds the number that you specify in the guardrailCollectionsMax property. Set the property to one of the following values:
Constraints: optional, updatable |
| guardrailCollectionsMax integer |
Default `0`. Specifies the maximum number of tables that the connector can capture. Exceeding this limit triggers the action specified by guardrailCollectionsLimitAction. Set this property to 0 to prevent the connector from triggering guardrail actions.
Constraints: optional, updatable |
| heartbeatActionQuery string |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message.
This is useful for resolving the situation described in WAL disk space consumption, where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing WAL records and thus acknowledging WAL positions with the database. To address this situation, create a heartbeat table in the low-traffic database, and set this property to a statement that inserts records into that table, for example:
This allows the connector to receive changes from the low-traffic database and acknowledge their LSNs, which prevents unbounded WAL growth on the database host.
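As a sketch, assuming a heartbeat table has been created for this purpose in the low-traffic database (the schema, table, and column names here are hypothetical, and the enclosing path within the SGStream spec is omitted):

```yaml
# Debezium property fragment; assumes a table public.debezium_heartbeat(ts timestamptz) exists.
heartbeatIntervalMs: 60000   # send a heartbeat every 60 seconds
heartbeatActionQuery: "INSERT INTO public.debezium_heartbeat (ts) VALUES (now())"
```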
Constraints: optional, updatable |
| heartbeatIntervalMs integer |
Default `0`. Controls how frequently the connector sends heartbeat messages to a target topic. The default behavior is that the connector does not send heartbeat messages.
Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages.
Heartbeat messages are needed when there are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. In this situation, the connector reads from the database transaction log as usual but rarely emits change records to the target. This means that no offset updates are committed to the target and the connector does not have an opportunity to send the latest retrieved LSN to the database. The database retains WAL files that contain events that have already been processed by the connector. Sending heartbeat messages enables the connector to send the latest retrieved LSN to the database, which allows the database to reclaim disk space being used by WAL files that are no longer needed.
Constraints: optional, updatable |
| hstoreHandlingMode string |
Default `json`. Specifies how the connector should handle values for hstore columns:
Constraints: optional, updatable |
| includeUnknownDatatypes boolean |
Default `true`. Specifies connector behavior when the connector encounters a field whose data type is unknown. When set to false, the connector omits the field from the change event and logs a warning.
Set this property to true if you want the change event to contain an opaque binary representation of the field. This lets consumers decode the field. You can control the exact representation by setting the binaryHandlingMode property.
Constraints: optional, updatable |
| incrementalSnapshotChunkSize integer |
Default `1024`. The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
Constraints: optional, updatable |
| incrementalSnapshotWatermarkingStrategy string |
Default `insert_insert`. Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
You can specify one of the following options:
Constraints: optional, updatable |
| intervalHandlingMode string |
Default `numeric`. Specifies how the connector should handle values for interval columns:
Constraints: optional, updatable |
| maxBatchSize integer |
Default `2048`. Positive integer value that specifies the maximum size of each batch of events that the connector processes.
Constraints: optional, updatable |
| maxQueueSize integer |
Default `8192`. Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes or sends them. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write or send them, or when the target becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize.
Constraints: optional, updatable |
| maxQueueSizeInBytes integer |
Default `0`. A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value.
If maxQueueSize is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set maxQueueSize=1000, and maxQueueSizeInBytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
Constraints: optional, updatable |
| messageKeyColumns []string |
A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that are published to the topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fully-qualified_tableName>:<keyColumn>,<keyColumn>.
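For instance, a minimal sketch (the table and column names below are hypothetical):

```yaml
messageKeyColumns:
  - "inventory.purchaseorders:pk1,pk2"   # key for inventory.purchaseorders is (pk1, pk2)
  - "inventory.orderlines:orderId"       # single-column custom key
```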
Constraints: optional, updatable |
| messagePrefixExcludeList []string |
An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you do not want the connector to capture. When this property is set, the connector does not capture logical decoding messages that use the specified prefixes. All other messages are captured.
To exclude all logical decoding messages, set the value of this property to .*.
To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set the messagePrefixIncludeList property.
For information about the structure of message events and about their ordering semantics, see message events.
Constraints: optional, updatable |
| messagePrefixIncludeList []string |
An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you want the connector to capture. By default, the connector captures all logical decoding messages. When this property is set, the connector captures only the logical decoding messages with the prefixes specified by the property. All other logical decoding messages are excluded.
To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix. If you include this property in the configuration, do not also set the messagePrefixExcludeList property.
For information about the structure of message events and about their ordering semantics, see message events.
Constraints: optional, updatable |
| moneyFractionDigits integer |
Default `2`. Specifies how many decimal digits should be used when converting the PostgreSQL money type to java.math.BigDecimal, which represents the values in change events. Applicable only when decimalHandlingMode is set to precise.
Constraints: optional, updatable |
| notificationEnabledChannels []string |
List of notification channel names that are enabled for the connector. By default, the following channels are available: sink, log and jmx. Optionally, you can also implement a custom notification channel.
Constraints: optional, updatable |
| pluginName string |
Default `pgoutput`. The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. Supported values are decoderbufs and pgoutput.
Constraints: optional, updatable |
| pollIntervalMs integer |
Default `500`. Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events.
Constraints: optional, updatable |
| provideTransactionMetadata boolean |
Default `false`. Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see Transaction metadata.
Constraints: optional, updatable |
| publicationAutocreateMode string |
Default `all_tables`. Applies only when streaming changes by using the pgoutput plug-in. The setting determines how creation of a publication should work. Specify one of the following values:
Constraints: optional, updatable |
| publicationName string |
Default: derived from the SGStream name, with any character that does not match [a-zA-Z0-9] changed to the _ character. The name of the PostgreSQL publication created for streaming changes when using pgoutput. This publication is created at start-up if it does not already exist, and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined.
Constraints: optional, updatable |
| publishViaPartitionRoot boolean |
Default `false`. Specifies how the connector captures and emits events for changes that it captures from partitioned tables. This setting applies only if the publicationAutocreateMode property is set to all_tables or filtered, and Debezium creates the publication for the captured tables.
Set one of the following options:
Constraints: optional, updatable |
| readOnly boolean |
Default `false`. Specifies whether a connector writes watermarks to the signal data collection to track the progress of an incremental snapshot. Set the value to true to enable a connector that has a read-only connection to the database to use an incremental snapshot watermarking strategy that does not require writing to the signal data collection.
Constraints: optional, updatable |
| replicaIdentityAutosetValues []string |
The setting determines the value of the replica identity at the table level.
This option overwrites the existing value in the database. A comma-separated list of regular expressions that match fully-qualified table names, together with the replica identity value to be used for each table.
Each expression must match the pattern '<fully-qualified table name>:<replica identity>'.
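A minimal sketch, using hypothetical table names:

```yaml
replicaIdentityAutosetValues:
  - "public.orders:FULL"       # set REPLICA IDENTITY FULL for public.orders
  - "public.audit_.*:DEFAULT"  # regular expressions match fully-qualified table names
```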
Constraints: optional, updatable |
| retriableRestartConnectorWaitMs integer |
Default `10000` (10 seconds). The number of milliseconds to wait before restarting a connector after a retriable error occurs.
Constraints: optional, updatable |
| schemaNameAdjustmentMode string |
Default `none`. Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
Constraints: optional, updatable |
| schemaRefreshMode string |
Default `columns_diff`. Specifies the conditions that trigger a refresh of the in-memory schema for a table.
This setting can significantly improve connector performance if there are frequently-updated tables that have TOASTed data that are rarely part of updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.
Constraints: optional, updatable |
| signalDataCollection string |
Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: <schemaName>.<tableName>.
Constraints: optional, updatable |
| signalEnabledChannels []string |
Default `[sgstream-annotations]`. List of the signaling channel names that are enabled for the connector. By default, the following channels are available: sgstream-annotations, source, kafka, file and jmx. Optionally, you can also implement a custom signaling channel.
Constraints: optional, updatable |
| skipMessagesWithoutChange boolean |
Default `false`. Specifies whether to skip publishing messages when there is no change in included columns. This essentially filters out messages when there is no change in the columns included as per the includes or excludes fields. Note: this only works when the REPLICA IDENTITY of the table is set to FULL.
Constraints: optional, updatable |
| skippedOperations []string |
Default `none`. A list of operation types that will be skipped during streaming.
The operations include:
Constraints: optional, updatable |
| slotDropOnStop boolean |
Default `true`. Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. When set to false, the replication slot remains configured when the connector stops; when the connector restarts, having the same replication slot enables it to start processing where it left off. Set to true only in testing or development environments: dropping the slot allows the database to discard WAL segments, and when the connector restarts it performs a new snapshot or continues from a persistent offset in the target offsets topic.
Constraints: optional, updatable |
| slotFailover boolean |
Default `false`. Specifies whether the connector creates a failover slot. If you omit this setting, or if the primary server runs PostgreSQL 16 or earlier, the connector does not create a failover slot.
Constraints: optional, updatable |
| slotMaxRetries integer |
Default `6`. If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect.
Constraints: optional, updatable |
| slotName string |
Default: derived from the SGStream name, with any character that does not match [a-zA-Z0-9] changed to the _ character. The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring.
Slot names must conform to PostgreSQL replication slot naming rules, which state: “Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character.”
Constraints: optional, updatable |
| slotRetryDelayMs integer |
Default `10000` (10 seconds). The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot.
Constraints: optional, updatable |
| slotStreamParams map[string]string |
Parameters to pass to the configured logical decoding plug-in.
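A sketch of what this map might look like; the available parameters depend on the configured plug-in (the parameter names below follow the example in the Debezium documentation for the decoderbufs plug-in):

```yaml
slotStreamParams:
  add-tables: "public.table1,public.table2"
  include-lsn: "true"
```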
Constraints: optional, updatable |
| snapshotDelayMs integer |
An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
Constraints: optional, updatable |
| snapshotFetchSize integer |
Default `10240`. During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch.
Constraints: optional, updatable |
| snapshotIncludeCollectionList []string |
An optional list of regular expressions that match the fully-qualified names (<schemaName>.<tableName>) of the tables to include in a snapshot. By default, the snapshot includes all captured tables.
Constraints: optional, updatable |
| snapshotIsolationMode string |
Default `serializable`. Specifies the transaction isolation level and the type of locking, if any, that the connector applies when it reads data during an initial snapshot or ad hoc blocking snapshot.
Each isolation level strikes a different balance between optimizing concurrency and performance on the one hand, and maximizing data consistency and accuracy on the other. Snapshots that use stricter isolation levels result in higher quality, more consistent data, but the cost of the improvement is decreased performance due to longer lock times and fewer concurrent transactions. Less restrictive isolation levels can increase efficiency, but at the expense of inconsistent data. For more information about transaction isolation levels in PostgreSQL, see the PostgreSQL documentation. Specify one of the following isolation levels:
Constraints: optional, updatable |
| snapshotLockTimeoutMs integer |
Default `10000`. Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. How the connector performs snapshots provides details.
Constraints: optional, updatable |
| snapshotLockingMode string |
Default `none`. Specifies how the connector holds locks on tables while performing a schema snapshot. Set one of the following options:
Constraints: optional, updatable |
| snapshotLockingModeCustomName string |
When snapshotLockingMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotLock’ interface. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotMaxThreads integer |
Default `1`. Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. This feature is incubating.
Constraints: optional, updatable |
| snapshotMode string |
Default `initial`. Specifies the criteria for performing a snapshot when the connector starts:
For more information, see the table of snapshot.mode options.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotData boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table data when it performs a snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotOnDataError boolean |
Default `false`. If the snapshotMode is set to configuration_based, this property specifies whether the connector attempts to snapshot table data if it does not find the last committed offset in the transaction log. Set the value to true to instruct the connector to perform a new snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotOnSchemaError boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table schema in a snapshot if the schema history topic is not available.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedSnapshotSchema boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes the table schema when it performs a snapshot.
Constraints: optional, updatable |
| snapshotModeConfigurationBasedStartStream boolean |
Default `false`. If the snapshotMode is set to configuration_based, set this property to specify whether the connector begins to stream change events after a snapshot completes.
Constraints: optional, updatable |
| snapshotModeCustomName string |
When snapshotMode is set as custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.Snapshotter’ interface. The provided implementation is called after a connector restart to determine whether to perform a snapshot. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotQueryMode string |
Default `select_all`. Specifies how the connector queries data while performing a snapshot. Set one of the following options:
Constraints: optional, updatable |
| snapshotQueryModeCustomName string |
When snapshotQueryMode is set as custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotQuery’ interface. For more information, see custom snapshotter SPI.
Constraints: optional, updatable |
| snapshotSelectStatementOverrides map[string]string |
Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property is a map: each key is a fully-qualified table name in the form <schemaName>.<tableName>, and each value is the SELECT statement to use when retrieving data from that table during a snapshot. For example, specifying a SELECT statement with a WHERE delete_flag = 0 clause causes the resulting snapshot to include only the records for which delete_flag = 0.
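For example, a sketch using a hypothetical customers.orders table with a soft-delete column:

```yaml
snapshotSelectStatementOverrides:
  # only snapshot rows that are not soft-deleted
  "customers.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
```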
Constraints: optional, updatable |
| statusUpdateIntervalMs integer |
Default `10000`. Frequency for sending replication connection status updates to the server, given in milliseconds. The property also controls how frequently the database status is checked to detect a dead connection in case the database was shut down.
Constraints: optional, updatable |
| streamingDelayMs integer |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the offsetFlushIntervalMs property that is set for the Kafka Connect worker.
Constraints: optional, updatable |
| timePrecisionMode string |
Default `adaptive`. Time, date, and timestamps can be represented with different kinds of precision:
Constraints: optional, updatable |
| tombstonesOnDelete boolean |
Default `true`. Controls whether a delete event is followed by a tombstone event.
After a source record is deleted, emitting a tombstone event (the default behavior) makes it possible to completely delete all events that pertain to the key of the deleted row, in case log compaction is enabled for the topic.
Constraints: optional, updatable |
| topicCacheSize integer |
Default `10000`. The size of the bounded concurrent hash map used to hold topic names. This cache helps to determine the topic name corresponding to a given data collection.
Constraints: optional, updatable |
| topicDelimiter string |
Default `.`. Specifies the delimiter for the topic name.
Constraints: optional, updatable |
| topicHeartbeatPrefix string |
Default `__debezium-heartbeat`. Controls the name of the topic to which the connector sends heartbeat messages. For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
Constraints: optional, updatable |
| topicNamingStrategy string |
Default `io.debezium.schema.SchemaTopicNamingStrategy`. The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events.
Constraints: optional, updatable |
| topicTransaction string |
Default `transaction`. Controls the name of the topic to which the connector sends transaction metadata messages. For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
Constraints: optional, updatable |
| unavailableValuePlaceholder string |
Default `__debezium_unavailable_value`. Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of unavailableValuePlaceholder starts with the hex: prefix, it is expected that the rest of the string represents hexadecimally encoded octets. For more information, see toasted values.
Constraints: optional, updatable |
| xminFetchIntervalMs integer |
Default `0`. How often, in milliseconds, the XMIN is read from the replication slot. The XMIN value provides the lower bound of where a new replication slot could start from. The default value of 0 disables XMIN tracking.
Constraints: optional, updatable |
The password used by the CDC process to connect to the database.
If not specified, the default superuser password will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the password is stored.
Constraints: required, updatable |
| name string |
The Secret name where the password is stored.
Constraints: required, updatable |
The username used by the CDC process to connect to the database.
If not specified, the default superuser username (by default postgres) will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the username is stored.
Constraints: required, updatable |
| name string |
The Secret name where the username is stored.
Constraints: required, updatable |
The target of this stream.
Property |
Description |
|---|---|
| type string |
Indicates the type of target of this stream. Possible values are:
Constraints: required, updatable |
| cloudEvent object |
Configuration section for CloudEvent target type.
Constraints: optional, updatable |
| pgLambda object |
Configuration section for PgLambda target type.
Constraints: optional, updatable |
| sgCluster object |
The configuration of the data target required when type is SGCluster.
Constraints: optional, updatable |
Configuration section for CloudEvent target type.
Property |
Description |
|---|---|
| binding string |
The CloudEvent binding (http by default).
Only http is supported at the moment.
Constraints: optional, updatable |
| format string |
The CloudEvent format (json by default).
Only json is supported at the moment.
Constraints: optional, updatable |
| http object |
The http binding configuration.
Constraints: optional, updatable |
The http binding configuration.
Property |
Description |
|---|---|
| url string |
The URL used to send the CloudEvents to the endpoint.
Constraints: required, updatable |
| connectTimeout string |
Set the connect timeout.
Value 0 represents infinity (default). Negative values are not allowed.
Constraints: optional, updatable |
| headers map[string]string |
Headers to include when sending CloudEvents to the endpoint.
Constraints: optional, updatable |
| readTimeout string |
Set the read timeout. The value is the timeout to read a response.
Value 0 represents infinity (default). Negative values are not allowed.
Constraints: optional, updatable |
| retryBackoffDelay integer |
The maximum amount of delay in seconds after an error before retrying again.
The initial delay will use 10% of this value and then increase the value exponentially up to the maximum amount of seconds specified with this field.
Constraints: optional, updatable Default: 60 |
| retryLimit integer |
Set the retry limit. When set, after an error the event will be sent again up to the specified number of times. When not set, after an error the event will be sent again without a limit.
Constraints: optional, updatable |
| skipHostnameVerification boolean |
When true, disables hostname verification.
Constraints: optional, updatable |
Configuration section for PgLambda target type.
Property |
Description |
|---|---|
| knative object |
Knative Service configuration.
Constraints: optional, updatable |
| script string |
Script to execute. This field is mutually exclusive with the scriptFrom field.
Constraints: optional, updatable |
| scriptFrom object |
Reference to either a Kubernetes Secret or a ConfigMap that contains the script to execute. This field is mutually exclusive with the script field.
Fields secretKeyRef and configMapKeyRef are mutually exclusive, and one of them is required.
Constraints: optional, updatable |
| scriptType string |
The PgLambda script format (javascript by default).
Constraints: optional, updatable |
Knative Service configuration.
Property |
Description |
|---|---|
| annotations map[string]string |
Annotations to set on the Knative Service.
Constraints: optional, updatable |
| http object |
PgLambda uses a CloudEvent http binding to send events to the Knative Service. This section allows modifying the configuration of this binding.
Constraints: optional, updatable |
| labels map[string]string |
Labels to set on the Knative Service.
Constraints: optional, updatable |
PgLambda uses a CloudEvent http binding to send events to the Knative Service. This section allows modifying the configuration of this binding.
Property |
Description |
|---|---|
| connectTimeout string |
Set the connect timeout.
Value 0 represents infinity (default). Negative values are not allowed.
Constraints: optional, updatable |
| headers map[string]string |
Headers to include when sending CloudEvents to the endpoint.
Constraints: optional, updatable |
| readTimeout string |
Set the read timeout. The value is the timeout to read a response.
Value 0 represents infinity (default). Negative values are not allowed.
Constraints: optional, updatable |
| retryBackoffDelay integer |
The maximum amount of delay in seconds after an error before retrying again.
The initial delay will use 10% of this value and then increase the value exponentially up to the maximum amount of seconds specified with this field.
Constraints: optional, updatable Default: 60 |
| retryLimit integer |
Set the retry limit. When set, after an error the event will be sent again up to the specified number of times. When not set, after an error the event will be sent again without a limit.
Constraints: optional, updatable |
| skipHostnameVerification boolean |
When true, disables hostname verification.
Constraints: optional, updatable |
| url string |
The URL used to send the CloudEvents to the endpoint.
Constraints: optional, updatable |
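Putting the binding options together, a hedged sketch follows; the spec.target.pgLambda.knative path, the timeout string format, and all values are assumptions for illustration.

```yaml
# Illustrative sketch only: the spec.target.pgLambda.knative path and all
# values (including the timeout string format) are assumptions.
spec:
  target:
    type: PgLambda
    pgLambda:
      knative:
        annotations:
          autoscaling.knative.dev/minScale: "1"   # hypothetical annotation
        http:
          connectTimeout: "10"    # 0 means infinity (the default)
          readTimeout: "30"
          retryBackoffDelay: 60   # initial delay is 10% of this, growing exponentially
          retryLimit: 5           # retry at most 5 times after an error
          headers:
            X-Source: sgstream    # hypothetical header
```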
Reference to either a Kubernetes Secret or a ConfigMap that contains the script to execute. This field is mutually exclusive with script field.
Fields secretKeyRef and configMapKeyRef are mutually exclusive, and one of them is required.
Property |
Description |
|---|---|
| configMapKeyRef object |
A ConfigMap reference that contains the script to execute. This field is mutually exclusive with
secretKeyRef field.
Constraints: optional, updatable |
| secretKeyRef object |
A Kubernetes SecretKeySelector that contains the script to execute. This field is mutually exclusive with
configMapKeyRef field.
Constraints: optional, updatable |
A ConfigMap reference that contains the script to execute. This field is mutually exclusive with secretKeyRef field.
Property |
Description |
|---|---|
| key string |
The key name within the ConfigMap that contains the script to execute.
Constraints: optional, updatable |
| name string |
The name of the ConfigMap that contains the script to execute.
Constraints: optional, updatable |
A Kubernetes SecretKeySelector that contains the script to execute. This field is mutually exclusive with configMapKeyRef field.
Property |
Description |
|---|---|
| key string |
The key of the secret to select from. Must be a valid secret key.
Constraints: optional, updatable |
| name string |
Name of the referent. More information.
Constraints: optional, updatable |
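For example, the script can be loaded from a ConfigMap via scriptFrom; in this sketch the ConfigMap name and key are hypothetical, and the spec.target.pgLambda path is assumed.

```yaml
# Illustrative sketch only: ConfigMap name and key are hypothetical, and the
# spec.target.pgLambda path is assumed.
spec:
  target:
    type: PgLambda
    pgLambda:
      scriptFrom:
        configMapKeyRef:          # mutually exclusive with secretKeyRef
          name: pglambda-scripts  # ConfigMap containing the script
          key: handler.js         # key within the ConfigMap
```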
The configuration of the data target required when type is SGCluster.
Property |
Description |
|---|---|
| name string |
The target SGCluster name.
Constraints: required, updatable |
| database string |
The target database name to which the data will be migrated.
If not specified the default postgres database will be targeted.
Constraints: optional, updatable |
| ddlImportRoleSkipFilter string |
Allows setting a SIMILAR TO regular expression that matches the names of the roles to skip during the DDL import.
When not set and the source is an SGCluster, it matches the superuser, replicator and authenticator usernames.
Constraints: optional, updatable |
| debeziumProperties object |
Specific properties of the Debezium JDBC sink.
See https://debezium.io/documentation/reference/stable/connectors/jdbc.html#jdbc-connector-configuration Each property is converted from myPropertyName to my.property.name
Constraints: optional, updatable |
| password object |
The password used by the CDC sink process to connect to the database.
If not specified the default superuser password will be used.
Constraints: optional, updatable |
| skipDdlImport boolean |
When true, disables DDL import; tables will be created on demand by Debezium.
Constraints: optional, updatable |
| skipDropIndexesAndConstraints boolean |
When true, disables dropping indexes and constraints. Indexes and constraints are dropped in order to improve snapshotting performance.
Constraints: optional, updatable |
| skipDropPrimaryKeys boolean |
When true, disables dropping primary keys. Primary keys are dropped to improve snapshotting performance. This option must be set to true when using incremental snapshotting.
Constraints: optional, updatable |
| skipRestoreIndexesAfterSnapshot boolean |
When true, disables restoring indexes on the first non-snapshot event. This option must be set to true when using incremental snapshotting. It is ignored when skipDropIndexesAndConstraints is set to true.
Constraints: optional, updatable |
| username object |
The username used by the CDC sink process to connect to the database.
If not specified the default superuser username (by default postgres) will be used.
Constraints: optional, updatable |
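A minimal target configuration using the fields from the table above might look as follows; the cluster and database names are hypothetical.

```yaml
# Illustrative sketch only: cluster and database names are hypothetical.
spec:
  target:
    type: SGCluster
    sgCluster:
      name: target-cluster
      database: appdb                        # defaults to postgres when omitted
      skipDropPrimaryKeys: true              # required for incremental snapshotting
      skipRestoreIndexesAfterSnapshot: true  # also required for incremental snapshotting
```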
Specific properties of the Debezium JDBC sink.
See https://debezium.io/documentation/reference/stable/connectors/jdbc.html#jdbc-connector-configuration
Each property is converted from myPropertyName to my.property.name
Property |
Description |
|---|---|
| batchSize integer |
Default
500. Specifies how many records to attempt to batch together into the destination table.
Constraints: optional, updatable |
| collectionNameFormat string |
Default
${topic}. Specifies a string pattern that the connector uses to construct the names of destination tables.
When the property is set to ${topic}, SGStream writes the event record to a destination table with a name that matches the name of the source topic.
You can also configure this property to extract values from specific fields in incoming event records and then use those values to dynamically generate the names of target tables. This ability to generate table names from values in the message source would otherwise require the use of a custom single message transformation (SMT).
To configure the property to dynamically generate the names of destination tables, set its value to a pattern such as ${source._field_}. When you specify this type of pattern, the connector extracts values from the source block of the Debezium change event, and then uses those values to construct the table name. For example, you might set the value of the property to the pattern ${source.schema}_${source.table}. Based on this pattern, if the connector reads an event in which the schema field in the source block contains the value user, and the table field contains the value tab, the connector writes the event record to a table with the name user_tab.
Constraints: optional, updatable |
| collectionNamingStrategy string |
Default
io.stackgres.stream.jobs.migration.StreamMigrationTableNamingStrategy. Specifies the fully-qualified class name of a TableNamingStrategy implementation that the connector uses to resolve table names from incoming event topic names.
The default behavior is to:
Constraints: optional, updatable |
| columnNamingStrategy string |
Default
io.debezium.connector.jdbc.naming.DefaultColumnNamingStrategy. Specifies the fully-qualified class name of a ColumnNamingStrategy implementation that the connector uses to resolve column names from event field names.
By default, the connector uses the field name as the column name.
Constraints: optional, updatable |
| connectionPoolAcquire_increment integer |
Default
32. Specifies the number of connections that the connector attempts to acquire if the connection pool exceeds its maximum size.
Constraints: optional, updatable |
| connectionPoolMax_size integer |
Default
32. Specifies the maximum number of concurrent connections that the pool maintains.
Constraints: optional, updatable |
| connectionPoolMin_size integer |
Default
5. Specifies the minimum number of connections in the pool.
Constraints: optional, updatable |
| connectionPoolTimeout integer |
Default
1800. Specifies the number of seconds that an unused connection is kept before it is discarded.
Constraints: optional, updatable |
| connectionRestartOnErrors boolean |
Default
false. Specifies whether the connector retries after a transient JDBC connection error.
When enabled (true), the connector treats connection issues (such as socket closures or timeouts) as retriable, allowing it to retry processing instead of failing the task. This reduces downtime and improves resilience against temporary disruptions.
Constraints: optional, updatable |
| connectionUrlParameters string |
Parameters that are set in the JDBC connection URL. See https://jdbc.postgresql.org/documentation/use/
Constraints: optional, updatable |
| deleteEnabled boolean |
Default
true. Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database. Use of this option requires that you set the primaryKeyMode to record_key.
Constraints: optional, updatable |
| detectInsertMode boolean |
Default
true. When enabled, the insertMode parameter is ignored and the insert mode is detected from the record hints.
Constraints: optional, updatable |
| dialectPostgresPostgisSchema string |
Default
public. Specifies the schema name where the PostgreSQL PostGIS extension is installed. The default is public; however, if the PostGIS extension was installed in another schema, this property should be used to specify the alternate schema name.
Constraints: optional, updatable |
| dialectSqlserverIdentityInsert boolean |
Default
false. Specifies whether the connector automatically sets an IDENTITY_INSERT before an INSERT or UPSERT operation into the identity column of SQL Server tables, and then unsets it immediately after the operation. When the default setting (false) is in effect, an INSERT or UPSERT operation into the IDENTITY column of a table results in a SQL exception.
Constraints: optional, updatable |
| flushMaxRetries integer |
Default
5. Specifies the maximum number of retries that the connector performs after an attempt to flush changes to the target database results in certain database errors. If the number of retries exceeds the retry value, the sink connector enters a FAILED state.
Constraints: optional, updatable |
| flushRetryDelayMs integer |
Default
1000. Specifies the number of milliseconds that the connector waits to retry a flush operation that failed.
Constraints: optional, updatable |
| insertMode string |
Default
upsert. Specifies the strategy used to insert events into the database. The available options are insert, update, and upsert.
Constraints: optional, updatable |
| primaryKeyFields []string |
Either the name of the primary key column or a comma-separated list of fields to derive the primary key from.
When
primaryKeyMode is set to record_key and the event’s key is a primitive type, it is expected that this property specifies the column name to be used for the key.
When the primaryKeyMode is set to record_key with a non-primitive key, or to record_value, it is expected that this property specifies a comma-separated list of field names from either the key or value. If the primaryKeyMode is set to record_key with a non-primitive key, or to record_value, and this property is not specified, the connector derives the primary key from all fields of either the record key or record value, depending on the specified mode.
Constraints: optional, updatable |
| primaryKeyMode string |
Default
record_key. Specifies how the connector resolves the primary key columns from the event.
none: Specifies that no primary key columns are created.
record_key: Specifies that the primary key columns are sourced from the event’s record key. If the record key is a primitive type, the primaryKeyFields property is required to specify the name of the primary key column. If the record key is a struct type, the primaryKeyFields property is optional, and can be used to specify a subset of columns from the event’s key as the table’s primary key.
record_value: Specifies that the primary key columns are sourced from the event’s value. You can set the primaryKeyFields property to define the primary key as a subset of fields from the event’s value; otherwise all fields are used by default.
Constraints: optional, updatable |
| quoteIdentifiers boolean |
Default
true. Specifies whether generated SQL statements use quotation marks to delimit table and column names. See the Quoting and case sensitivity section for more details.
Constraints: optional, updatable |
| removePlaceholders boolean |
Default
true. When true the placeholders are removed from the records.
Constraints: optional, updatable |
| schemaEvolution string |
Default
basic. Specifies how the connector evolves the destination table schemas. For more information, see Schema evolution. The following options are available:
none: Specifies that the connector does not evolve the destination schema.
basic: Specifies that basic evolution occurs. The connector adds missing columns to the table by comparing the incoming event’s record schema to the database table structure.
Constraints: optional, updatable |
| truncateEnabled boolean |
Default
true. Specifies whether the connector processes TRUNCATE events and truncates the corresponding tables from the database.
Although support for TRUNCATE statements has been available in Db2 since version 9.7, currently, the JDBC connector is unable to process standard TRUNCATE events that the Db2 connector emits.
To ensure that the JDBC connector can process TRUNCATE events received from Db2, perform the truncation by using an alternative to the standard TRUNCATE TABLE statement, for example: ALTER TABLE <table_name> ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE
The user account that submits the preceding query requires ALTER privileges on the table to be truncated.
Constraints: optional, updatable |
| useReductionBuffer boolean |
Specifies whether to enable the Debezium JDBC connector’s reduction buffer. When disabled (the default), each incoming event is processed individually; when enabled, events that share the same primary key are reduced so that only the last state of each record is flushed.
To optimize query processing in a PostgreSQL sink database when the reduction buffer is enabled, you must also enable the database to execute the batched queries by adding the reWriteBatchedInserts=true parameter to the JDBC connection URL (see connectionUrlParameters).
Constraints: optional, updatable |
| useTimeZone string |
Default
UTC. Specifies the timezone used when inserting JDBC temporal values.
Constraints: optional, updatable |
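As noted above, each camelCase key is converted to the dotted Debezium property name (myPropertyName becomes my.property.name). A sketch of the mapping, with illustrative values:

```yaml
# Illustrative sketch only: values are examples, not recommendations.
spec:
  target:
    sgCluster:
      debeziumProperties:
        batchSize: 1000             # -> batch.size
        insertMode: upsert          # -> insert.mode
        primaryKeyMode: record_key  # -> primary.key.mode
        useTimeZone: UTC            # -> use.time.zone
```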
The password used by the CDC sink process to connect to the database.
If not specified the default superuser password will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the password is stored.
Constraints: required, updatable |
| name string |
The Secret name where the password is stored.
Constraints: required, updatable |
The username used by the CDC sink process to connect to the database.
If not specified the default superuser username (by default postgres) will be used.
Property |
Description |
|---|---|
| key string |
The Secret key where the username is stored.
Constraints: required, updatable |
| name string |
The Secret name where the username is stored.
Constraints: required, updatable |
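The username and password selectors above can point at a single Secret, as in the following sketch; the Secret name and keys are hypothetical.

```yaml
# Illustrative sketch only: the Secret name and keys are hypothetical.
apiVersion: v1
kind: Secret
metadata:
  name: stream-sink-credentials
stringData:
  username: streamuser
  password: changeme
---
# referenced from the SGStream spec:
spec:
  target:
    sgCluster:
      username:
        name: stream-sink-credentials   # Secret name
        key: username                   # Secret key
      password:
        name: stream-sink-credentials
        key: password
```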
Specific properties of the Debezium engine.
See https://debezium.io/documentation/reference/stable/development/engine.html#engine-properties Each property is converted from myPropertyName to my.property.name
Property |
Description |
|---|---|
| errorsMaxRetries integer |
Default
-1. The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries).
Constraints: optional, updatable |
| errorsRetryDelayInitialMs integer |
Default
300. Initial delay (in ms) for retries when encountering connection errors. This value will be doubled upon every retry but won’t exceed errorsRetryDelayMaxMs.
Constraints: optional, updatable |
| errorsRetryDelayMaxMs integer |
Default
10000. Max delay (in ms) between retries when encountering connection errors.
Constraints: optional, updatable |
| offsetCommitPolicy string |
Default
io.debezium.engine.spi.OffsetCommitPolicy.PeriodicCommitOffsetPolicy. The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface OffsetCommitPolicy. The default is a periodic commit policy based upon time intervals.
Constraints: optional, updatable |
| offsetFlushIntervalMs integer |
Default
60000. Interval at which to try committing offsets. The default is 1 minute.
Constraints: optional, updatable |
| offsetFlushTimeoutMs integer |
Default
5000. Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.
Constraints: optional, updatable |
| predicates map[string]map[string]string |
Predicates can be applied to transformations to make the transformations optional.
Constraints: optional, updatable |
| recordProcessingOrder string |
Default `ORDERED`.
Determines how the records should be produced.
The non-sequential processing of the Constraints: optional, updatable |
| recordProcessingShutdownTimeoutMs integer |
Default
1000. Maximum time in milliseconds to wait for processing submitted records after a task shutdown is called.Constraints: optional, updatable |
| recordProcessingThreads integer |
The number of threads that are available to process change event records. If no value is specified (the default), the engine uses the Java ThreadPoolExecutor to dynamically adjust the number of threads based on the current workload; the maximum number of threads is the number of CPU cores on the given machine. If a value is specified, the engine uses the Java fixed thread pool method to create a thread pool with the specified number of threads. To use all available cores on the given machine, set the placeholder value AVAILABLE_CORES.
Constraints: optional, updatable |
| recordProcessingWithSerialConsumer boolean |
Default
false. Specifies whether the default ChangeConsumer should be created from the provided Consumer, resulting in serial Consumer processing. This option has no effect if you specified the ChangeConsumer interface when you used the API to create the engine.Constraints: optional, updatable |
| taskManagementTimeoutMs integer |
Default
180000. Time, in milliseconds, that the engine waits for a task’s lifecycle management operations (starting and stopping) to complete.Constraints: optional, updatable |
| transforms map[string]map[string]string |
Before the messages are delivered to the handler it is possible to run them through a pipeline of Kafka Connect Simple Message Transforms (SMT). Each SMT can pass the message unchanged, modify it or filter it out. The chain is configured using property transforms. The property contains a list of logical names of the transformations to be applied (the specified keys). Properties transforms.<logical_name>.type then defines the name of the implementation class for each transformation and transforms.<logical_name>.* configuration options that are passed to the transformation.
Constraints: optional, updatable |
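The transforms and predicates maps follow the standard Kafka Connect SMT configuration. A hedged sketch: the spec.debeziumEngineProperties path is assumed, and the predicate name and topic pattern are hypothetical; the SMT and predicate classes shown are standard Debezium and Kafka Connect ones.

```yaml
# Illustrative sketch only: field path, predicate name and topic pattern
# are assumptions.
spec:
  debeziumEngineProperties:
    transforms:
      unwrap:
        type: io.debezium.transforms.ExtractNewRecordState
        predicate: onlyDataTopics    # apply only when the predicate matches
    predicates:
      onlyDataTopics:
        type: org.apache.kafka.connect.transforms.predicates.TopicNameMatches
        pattern: my-cluster\..*      # hypothetical topic pattern
```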
Metadata information for stream created resources.
Property |
Description |
|---|---|
| annotations object |
Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
Constraints: optional, updatable |
| labels object |
Custom Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) to be passed to resources created and managed by StackGres.
Constraints: optional, updatable |
Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
Example:
apiVersion: stackgres.io/v1alpha1
kind: SGStream
metadata:
name: stackgres
spec:
metadata:
annotations:
pods:
key: value
Property |
Description |
|---|---|
| allResources map[string]string |
Custom Kubernetes annotations.
Constraints: optional, updatable |
| pods map[string]string |
Custom Kubernetes annotations.
Constraints: optional, updatable |
| serviceAccount map[string]string |
Custom Kubernetes annotations.
Constraints: optional, updatable |
Custom Kubernetes labels to be passed to resources created and managed by StackGres.
Example:
apiVersion: stackgres.io/v1alpha1
kind: SGStream
metadata:
name: stackgres
spec:
metadata:
labels:
pods:
customLabel: customLabelValue
Property |
Description |
|---|---|
| allResources map[string]string |
Custom Kubernetes labels.
Constraints: optional, updatable |
| pods map[string]string |
Custom Kubernetes labels.
Constraints: optional, updatable |
| serviceAccount map[string]string |
Custom Kubernetes labels.
Constraints: optional, updatable |
Status of a StackGres stream.
Property |
Description |
|---|---|
| conditions []object |
Possible conditions are:
Constraints: optional, updatable |
| events object |
Events status
Constraints: optional, updatable |
| failure string |
The failure message
Constraints: optional, updatable |
| snapshot object |
Snapshot status
Constraints: optional, updatable |
| streaming object |
Streaming status
Constraints: optional, updatable |
Property |
Description |
|---|---|
| lastTransitionTime string |
Last time the condition transitioned from one status to another.
Constraints: optional, updatable |
| message string |
A human-readable message indicating details about the transition.
Constraints: optional, updatable |
| reason string |
The reason for the condition last transition.
Constraints: optional, updatable |
| status string |
Status of the condition, one of
True, False or Unknown.Constraints: optional, updatable |
| type string |
Type of deployment condition.
Constraints: optional, updatable |
Events status
Property |
Description |
|---|---|
| lastErrorSeen string |
The last error seen sending events that this stream has seen since the last start or metrics reset.
Constraints: optional, updatable |
| lastEventSent string |
The last event that the stream has sent since the last start or metrics reset.
Constraints: optional, updatable |
| lastEventWasSent boolean |
It is true if the last event that the stream has tried to send since the last start or metrics reset was sent successfully.
Constraints: optional, updatable |
| totalNumberOfErrorsSeen integer |
The total number of errors sending events that this stream has seen since the last start or metrics reset.
Constraints: optional, updatable |
| totalNumberOfEventsSent integer |
The total number of events that this stream has sent since the last start or metrics reset.
Constraints: optional, updatable |
Snapshot status
Property |
Description |
|---|---|
| capturedTables []string |
The list of tables that are captured by the connector.
Constraints: optional, updatable |
| chunkFrom string |
The lower bound of the primary key set defining the current chunk.
Constraints: optional, updatable |
| chunkId string |
The identifier of the current snapshot chunk.
Constraints: optional, updatable |
| chunkTo string |
The upper bound of the primary key set defining the current chunk.
Constraints: optional, updatable |
| currentQueueSizeInBytes integer |
The current volume, in bytes, of records in the queue.
Constraints: optional, updatable |
| lastEvent string |
The last snapshot event that the connector has read.
Constraints: optional, updatable |
| maxQueueSizeInBytes integer |
The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.
Constraints: optional, updatable |
| milliSecondsSinceLastEvent integer |
The number of milliseconds since the connector has read and processed the most recent event.
Constraints: optional, updatable |
| numberOfEventsFiltered integer |
The number of events that have been filtered by include/exclude list filtering rules configured on the connector.
Constraints: optional, updatable |
| queueRemainingCapacity integer |
The free capacity of the queue used to cache events from the snapshotter.
Constraints: optional, updatable |
| queueTotalCapacity integer |
The length of the queue used to cache events from the snapshotter.
Constraints: optional, updatable |
| remainingTableCount integer |
The number of tables that the snapshot has yet to copy.
Constraints: optional, updatable |
| rowsScanned map[string]integer |
Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.
Constraints: optional, updatable |
| snapshotAborted boolean |
Whether the snapshot was aborted.
Constraints: optional, updatable |
| snapshotCompleted boolean |
Whether the snapshot completed.
Constraints: optional, updatable |
| snapshotDurationInSeconds integer |
The total number of seconds that the snapshot has taken so far, even if not complete. Includes also time when snapshot was paused.
Constraints: optional, updatable |
| snapshotPaused boolean |
Whether the snapshot was paused.
Constraints: optional, updatable |
| snapshotPausedDurationInSeconds integer |
The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up.
Constraints: optional, updatable |
| snapshotRunning boolean |
Whether the snapshot was started.
Constraints: optional, updatable |
| tableFrom string |
The lower bound of the primary key set of the currently snapshotted table.
Constraints: optional, updatable |
| tableTo string |
The upper bound of the primary key set of the currently snapshotted table.
Constraints: optional, updatable |
| totalNumberOfEventsSeen integer |
The total number of events that this connector has seen since last started or reset.
Constraints: optional, updatable |
| totalTableCount integer |
The total number of tables that are being included in the snapshot.
Constraints: optional, updatable |
Streaming status
Property |
Description |
|---|---|
| capturedTables []string |
The list of tables that are captured by the connector.
Constraints: optional, updatable |
| connected boolean |
Flag that denotes whether the connector is currently connected to the database server.
Constraints: optional, updatable |
| currentQueueSizeInBytes integer |
The current volume, in bytes, of records in the queue.
Constraints: optional, updatable |
| lastEvent string |
The last streaming event that the connector has read.
Constraints: optional, updatable |
| lastTransactionId string |
Transaction identifier of the last processed transaction.
Constraints: optional, updatable |
| maxQueueSizeInBytes integer |
The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.
Constraints: optional, updatable |
| milliSecondsBehindSource integer |
The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running.
Constraints: optional, updatable |
| milliSecondsSinceLastEvent integer |
The number of milliseconds since the connector has read and processed the most recent event.
Constraints: optional, updatable |
| numberOfCommittedTransactions integer |
The number of processed transactions that were committed.
Constraints: optional, updatable |
| numberOfEventsFiltered integer |
The number of events that have been filtered by include/exclude list filtering rules configured on the connector.
Constraints: optional, updatable |
| queueRemainingCapacity integer |
The free capacity of the queue used to cache events from the streamer.
Constraints: optional, updatable |
| queueTotalCapacity integer |
The length of the queue used to cache events from the streamer.
Constraints: optional, updatable |
| sourceEventPosition map[string]string |
The coordinates of the last received event.
Constraints: optional, updatable |
| totalNumberOfCreateEventsSeen integer |
The total number of create events that this connector has seen since the last start or metrics reset.
Constraints: optional, updatable |
| totalNumberOfDeleteEventsSeen integer |
The total number of delete events that this connector has seen since the last start or metrics reset.
Constraints: optional, updatable |
| totalNumberOfEventsSeen integer |
The total number of events that this connector has seen since the last start or metrics reset.
Constraints: optional, updatable |
| totalNumberOfUpdateEventsSeen integer |
The total number of update events that this connector has seen since the last start or metrics reset.
Constraints: optional, updatable |