SGStream


Kind: SGStream

listKind: SGStreamList

plural: sgstreams

singular: sgstream

shortNames: sgstr


The SGStream custom resource represents a stream of Change Data Capture (CDC) events from a source database to a target service.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGStream
metadata:
  name: cloudevent
spec:
  source:
    type: SGCluster
    sgCluster:
      name: my-cluster
  target:
    type: CloudEvent
    cloudEvent:
      binding: http
      format: json
      http:
        url: cloudevent-service
Property | Type | Description

apiVersion string stackgres.io/v1alpha1
kind string SGStream
metadata object Refer to the Kubernetes API documentation for the fields of the metadata field.
spec object Specification of the desired behavior of a StackGres stream.

A stream represents the process of performing a change data capture (CDC) operation on a data source, generating a stream of events containing information about the changes happening (or that have happened) to the database in real time (or from the beginning).

The stream allows specifying different types of target for the CDC operation. See SGStream.spec.target.type.

The stream performs two distinct operations to generate data source changes for the target:

  • Snapshotting: captures the content of the data source at a specific point in time and streams it as if it were a series of changes, providing a stream of events that, aggregated, reproduces the data source from the beginning of its existence.
  • Streaming: captures the changes that are happening in the data source in real time and streams them continuously.

The CDC is performed using the Debezium Engine. SGStream extends Debezium's functionality by providing a custom signaling channel that allows sending signals by simply adding an annotation to the SGStream resource. To send a signal, create an annotation with the following format:

metadata:
  annotations:
    debezium-signal.stackgres.io/<signal type>: <signal data>

SGStream also provides the following custom signal implementations:

  • tombstone: stops the Debezium streaming and the SGStream completely. This signal is useful to end the streaming gracefully, allowing for the removal of the logical replication slot created by Debezium.
  • command: executes any SQL command on the target database. Only available when the target type is SGCluster.
status object Status of a StackGres stream.

SGStream.spec

↩ Parent

Specification of the desired behavior of a StackGres stream.

A stream represents the process of performing a change data capture (CDC) operation on a data source, generating a stream of events containing information about the changes happening (or that have happened) to the database in real time (or from the beginning).

The stream allows specifying different types of target for the CDC operation. See SGStream.spec.target.type.

The stream performs two distinct operations to generate data source changes for the target:

  • Snapshotting: captures the content of the data source at a specific point in time and streams it as if it were a series of changes, providing a stream of events that, aggregated, reproduces the data source from the beginning of its existence.
  • Streaming: captures the changes that are happening in the data source in real time and streams them continuously.

The CDC is performed using the Debezium Engine. SGStream extends Debezium's functionality by providing a custom signaling channel that allows sending signals by simply adding an annotation to the SGStream resource. To send a signal, create an annotation with the following format:

metadata:
  annotations:
    debezium-signal.stackgres.io/<signal type>: <signal data>

SGStream also provides the following custom signal implementations:

  • tombstone: stops the Debezium streaming and the SGStream completely. This signal is useful to end the streaming gracefully, allowing for the removal of the logical replication slot created by Debezium (see the example after this list).
  • command: executes any SQL command on the target database. Only available when the target type is SGCluster.
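
For example, a minimal sketch of sending the tombstone signal by annotating the SGStream resource; the empty string value is an assumption here, since the payload expected by each signal type is defined by Debezium:

metadata:
  annotations:
    debezium-signal.stackgres.io/tombstone: ""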

Property | Type | Description

source object The data source of the stream to which change data capture will be applied.
target object The target of this stream.
debeziumEngineProperties object See https://debezium.io/documentation/reference/stable/development/engine.html#engine-properties. Each property is converted from myPropertyName to my.property.name
maxRetries integer The maximum number of retries the streaming operation is allowed to do after a failure.

A value of 0 (zero) means no retries are made. A value of -1 means retries are unlimited. Defaults to: -1.

pods object The configuration for the SGStream Pod.
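
For illustration, a hedged sketch of a complete spec combining these properties (names are hypothetical; offsetFlushIntervalMs is assumed to map to the Debezium Engine property offset.flush.interval.ms):

spec:
  source:
    type: SGCluster
    sgCluster:
      name: my-cluster
  target:
    type: CloudEvent
    cloudEvent:
      binding: http
      format: json
      http:
        url: cloudevent-service
  maxRetries: 5                    # give up after 5 retries instead of the unlimited default (-1)
  debeziumEngineProperties:
    offsetFlushIntervalMs: 10000   # converted to offset.flush.interval.ms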

SGStream.spec.source

↩ Parent

The data source of the stream to which change data capture will be applied.

Property | Type | Description

type string The type of data source. Available data source types are:

  • SGCluster: an SGCluster in the same namespace
  • Postgres: any Postgres instance
postgres object The configuration of the data source required when type is Postgres.
sgCluster object The configuration of the data source required when type is SGCluster.

SGStream.spec.source.postgres

↩ Parent

The configuration of the data source required when type is Postgres.

Property | Type | Description

host string The hostname of the Postgres instance.
database string The database name to which the CDC process will connect.

If not specified the default postgres database will be targeted.

debeziumProperties object Specific properties of the Debezium Postgres connector.

See https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties

Each property is converted from myPropertyName to my.property.name

excludes []string A list of regular expressions matching one or more <schema>.<table>.<column> entries to be filtered out before sending to the target.

This property is mutually exclusive with includes.

includes []string A list of regular expressions matching one or more <schema>.<table>.<column> entries to be included when sending to the target.

This property is mutually exclusive with excludes.

password object The password used by the CDC process to connect to the database.

If not specified the default superuser password will be used.

port integer The port of the Postgres instance. When not specified port 5432 will be used.
username object The username used by the CDC process to connect to the database.

If not specified the default superuser username (by default postgres) will be used.
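
Putting these together, a hedged sketch of a Postgres source; the hostname, database, and regular expression are hypothetical, and username and password fall back to the superuser credentials when omitted:

source:
  type: Postgres
  postgres:
    host: postgres.example.com
    port: 5432        # the default when not specified
    database: mydb    # defaults to postgres when not specified
    includes:
      - 'public\.orders\..*'   # only entries under the public.orders table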

SGStream.spec.source.postgres.debeziumProperties

↩ Parent

Specific properties of the Debezium Postgres connector.

See https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties

Each property is converted from myPropertyName to my.property.name

Property | Type | Description

binaryHandlingMode string Default bytes. Specifies how binary (bytea) columns should be represented in change events:

  • bytes represents binary data as byte array.

  • base64 represents binary data as base64-encoded strings.

  • base64-url-safe represents binary data as base64-url-safe-encoded strings.

  • hex represents binary data as hex-encoded (base16) strings.

columnMaskHash map[string]map[string]map[string][]string An optional section that allows specifying, for a hash algorithm and a salt, a list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName. To match the name of a column Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. In the following example, CzQMA0cB5K is a randomly selected salt. columnMaskHash.SHA-256.CzQMA0cB5K=[inventory.orders.customerName,inventory.shipment.customerName] If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. Depending on the hash algorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked.
columnMaskHashV2 map[string]map[string]map[string][]string Similar to columnMaskHash but using hashing strategy version 2. Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems.
columnMaskWithLengthChars []string An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk (*) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration.
columnPropagateSourceType []string Default [.*]. An optional list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:

  • __debezium.source.column.type

  • __debezium.source.column.length

  • __debezium.source.column.scale

These parameters propagate a column’s original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.

    columnTruncateToLengthChars []string An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration.
    converters map[string]map[string]string Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example,

    isbn:
      type: io.debezium.test.IsbnConverter
      schemaName: io.debezium.postgresql.type.Isbn
    

    You must set the converters property to enable the connector to use a custom converter. For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface. If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. Each property is converted from myPropertyName to my.property.name

    customMetricTags map[string]string Accepts key-value pairs that customize the MBean object name; the tags are appended to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example:

    customMetricTags:
      k1: v1
      k2: v2
    

    databaseInitialStatements []string A list of SQL statements that the connector executes when it establishes a JDBC connection to the database. The connector may establish JDBC connections at its own discretion. Consequently, this property is useful for configuration of session parameters only, and not for executing DML statements. The connector does not execute these statements when it creates a connection for reading the transaction log.
    datatypePropagateSourceType []string Default `[.*]`. An optional list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
    • __debezium.source.column.type
    • __debezium.source.column.length
    • __debezium.source.column.scale

    These parameters propagate a column’s original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name. For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings.

    decimalHandlingMode string Default precise. Specifies how the connector should handle values for DECIMAL and NUMERIC columns:

    • precise: represents values by using java.math.BigDecimal in binary form in change events.
    • double: represents values by using double values, which might result in a loss of precision but which is easier to use.
    • string: encodes values as formatted strings, which are easy to consume but semantic information about the real type is lost. For more information, see Decimal types.
    errorsMaxRetries integer Default -1. Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.

    Set one of the following options:

    • -1: No limit. The connector always restarts automatically, and retries the operation, regardless of the number of previous failures.

    • 0: Disabled. The connector fails immediately, and never retries the operation. User intervention is required to restart the connector.

    • > 0: The connector restarts automatically until it reaches the specified maximum number of retries. After the next failure, the connector stops, and user intervention is required to restart it.

    eventProcessingFailureHandlingMode string Default fail. Specifies how the connector should react to exceptions during processing of events:

  • fail: propagates the exception, indicates the offset of the problematic event, and causes the connector to stop.

  • warn: logs the offset of the problematic event, skips that event, and continues processing.

  • skip: skips the problematic event and continues processing.

    fieldNameAdjustmentMode string Default none. Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:

  • none does not apply any adjustment.

  • avro replaces the characters that cannot be used in the Avro type name with underscore.

  • avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java

    For more information, see Avro naming.

    flushLsnSource boolean Default true. Determines whether the connector should commit the LSN of the processed records in the source postgres database so that the WAL logs can be deleted. Specify false if you don’t want the connector to do this. Please note that if set to false LSN will not be acknowledged by Debezium and as a result WAL logs will not be cleared which might result in disk space issues. User is expected to handle the acknowledgement of LSN outside Debezium.
    heartbeatActionQuery string Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. This is useful for resolving the situation described in WAL disk space consumption, where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing WAL records and thus acknowledging WAL positions with the database. To address this situation, create a heartbeat table in the low-traffic database, and set this property to a statement that inserts records into that table, for example:

    INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')
    

    This allows the connector to receive changes from the low-traffic database and acknowledge their LSNs, which prevents unbounded WAL growth on the database host.

    heartbeatIntervalMs integer Default 0. Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages. Heartbeat messages are needed when there are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. In this situation, the connector reads from the database transaction log as usual but rarely emits change records to Kafka. This means that no offset updates are committed to Kafka and the connector does not have an opportunity to send the latest retrieved LSN to the database. The database retains WAL files that contain events that have already been processed by the connector. Sending heartbeat messages enables the connector to send the latest retrieved LSN to the database, which allows the database to reclaim disk space being used by no longer needed WAL files.
    hstoreHandlingMode string Default json. Specifies how the connector should handle values for hstore columns:

    • map: represents values by using MAP.
    • json: represents values by using json string. This setting encodes values as formatted strings such as {“key” : “val”}. For more information, see PostgreSQL HSTORE type.
    includeUnknownDatatypes boolean Default true. Specifies connector behavior when the connector encounters a field whose data type is unknown. The default behavior is that the connector omits the field from the change event and logs a warning. Set this property to true if you want the change event to contain an opaque binary representation of the field. This lets consumers decode the field. You can control the exact representation by setting the binaryHandlingMode property.

    NOTE: Consumers risk backward compatibility issues when includeUnknownDatatypes is set to true. Not only may the database-specific binary representation change between releases, but if the data type is eventually supported by Debezium, the data type will be sent downstream in a logical type, which would require adjustments by consumers. In general, when encountering unsupported data types, create a feature request so that support can be added.

    incrementalSnapshotChunkSize integer Default 1024. The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
    incrementalSnapshotWatermarkingStrategy string Default insert_insert. Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.

    You can specify one of the following options:

    • insert_insert: When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads during the snapshot, it writes an entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, Debezium inserts a second entry to record the closing of the window.

    • insert_delete: When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads, it writes a single entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, this entry is removed. No entry is created for the signal to close the snapshot window. Set this option to prevent rapid growth of the signaling data collection.

    intervalHandlingMode string Default numeric. Specifies how the connector should handle values for interval columns:

  • numeric: represents intervals using approximate number of microseconds.

  • string: represents intervals exactly by using the string pattern representation P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S. For example: P1Y2M3DT4H5M6.78S. For more information, see PostgreSQL basic types.

    maxBatchSize integer Default `2048`. Positive integer value that specifies the maximum size of each batch of events that the connector processes.
    maxQueueSize integer Default `8192`. Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize.
    maxQueueSizeInBytes integer Default `0`. A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. If maxQueueSize is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set maxQueueSize=1000, and maxQueueSizeInBytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
    messageKeyColumns []string A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fully-qualified_tableName>:<keyColumn>,<keyColumn>. To base a table key on multiple column names, insert commas between the column names. Each fully-qualified table name is a regular expression in the following format: <schemaName>.<tableName>. The property can include entries for multiple tables. Use a semicolon to separate table entries in the list. The following example sets the message key for the tables inventory.customers and purchase.orders: inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4 For the table inventory.customer, the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any schema, the columns pk3 and pk4 serve as the message key. There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key. Note that having this property set and REPLICA IDENTITY set to DEFAULT on the tables, will cause the tombstone events to not be created properly if the key columns are not part of the primary key of the table. Setting REPLICA IDENTITY to FULL is the only solution.
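
    For instance, a hedged sketch of this property in SGStream's camelCase YAML form, assuming one <table>:<columns> entry per list element and reusing the names from the example above:

    messageKeyColumns:
      - 'inventory.customers:pk1,pk2'
      - '(.*).purchaseorders:pk3,pk4'
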
    moneyFractionDigits integer Default `2`. Specifies how many decimal digits should be used when converting Postgres money type to java.math.BigDecimal, which represents the values in change events. Applicable only when decimalHandlingMode is set to precise.
    notificationEnabledChannels []string List of notification channel names that are enabled for the connector. By default, the following channels are available: sink, log and jmx. Optionally, you can also implement a [custom notification channel](https://debezium.io/documentation/reference/stable/configuration/signalling.html#debezium-signaling-enabling-custom-signaling-channel).
    pluginName string Default `pgoutput`. The name of the [PostgreSQL logical decoding plug-in](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-output-plugin) installed on the PostgreSQL server. Supported values are decoderbufs, and pgoutput.
    pollIntervalMs integer Default `500`. Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 500 milliseconds.
    provideTransactionMetadata boolean Default `false`. Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see [Transaction metadata](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-transaction-metadata).
    publicationAutocreateMode string Default `all_tables`. Applies only when streaming changes by using [the pgoutput plug-in](https://www.postgresql.org/docs/current/sql-createpublication.html). The setting determines how creation of a [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) should work. Specify one of the following values:
    • all_tables - If a publication exists, the connector uses it. If a publication does not exist, the connector creates a publication for all tables in the database for which the connector is capturing changes. For the connector to create a publication it must access the database through a database user account that has permission to create publications and perform replications. You grant the required permission by using the following SQL command CREATE PUBLICATION <publication_name> FOR ALL TABLES;.

    • disabled - The connector does not attempt to create a publication. A database administrator or the user configured to perform replications must have created the publication before running the connector. If the connector cannot find the publication, the connector throws an exception and stops.

    • filtered - If a publication exists, the connector uses it. If no publication exists, the connector creates a new publication for tables that match the current filter configuration as specified by the schema.include.list, schema.exclude.list, and table.include.list, and table.exclude.list connector configuration properties. For example: CREATE PUBLICATION <publication_name> FOR TABLE <tbl1, tbl2, tbl3>. If the publication exists, the connector updates the publication for tables that match the current filter configuration. For example: ALTER PUBLICATION <publication_name> SET TABLE <tbl1, tbl2, tbl3>.

    publicationName string Default . (with all characters that are not [a-zA-Z0-9] changed to _ character). The name of the PostgreSQL publication created for streaming changes when using pgoutput. This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined.
    replicaIdentityAutosetValues []string The setting determines the value for replica identity at table level. This option will overwrite the existing value in the database. A comma-separated list of regular expressions that match fully-qualified tables and the replica identity value to be used in the table. Each expression must match the pattern <fully-qualified table name>:<replica identity value>, where the table name could be defined as (SCHEMA_NAME.TABLE_NAME), and the replica identity values are: DEFAULT - Records the old values of the columns of the primary key, if any. This is the default for non-system tables. INDEX index_name - Records the old values of the columns covered by the named index, that must be unique, not partial, not deferrable, and include only columns marked NOT NULL. If this index is dropped, the behavior is the same as NOTHING. FULL - Records the old values of all columns in the row. NOTHING - Records no information about the old row. This is the default for system tables. For example, schema1.*:FULL,schema2.table2:NOTHING,schema2.table3:INDEX idx_name
    retriableRestartConnectorWaitMs integer Default 10000 (10 seconds). The number of milliseconds to wait before restarting a connector after a retriable error occurs.
    schemaNameAdjustmentMode string Default none. Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:

  • none does not apply any adjustment.

  • avro replaces the characters that cannot be used in the Avro type name with underscore.

  • avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java

    schemaRefreshMode string Default columns_diff. Specifies the conditions that trigger a refresh of the in-memory schema for a table.

  • columns_diff: is the safest mode. It ensures that the in-memory schema stays in sync with the database table’s schema at all times.

  • columns_diff_exclude_unchanged_toast: instructs the connector to refresh the in-memory schema cache if there is a discrepancy with the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy.

    This setting can significantly improve connector performance if there are frequently-updated tables that have TOASTed data that are rarely part of updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.

    signalDataCollection string Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: <schemaName>.<tableName>.
    signalEnabledChannels []string Default [sgstream-annotations]. List of the signaling channel names that are enabled for the connector. By default, the following channels are available: sgstream-annotations, source, kafka, file and jmx. Optionally, you can also implement a custom signaling channel.
    skipMessagesWithoutChange boolean Default false. Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per includes or excludes fields. Note: Only works when REPLICA IDENTITY of the table is set to FULL
    skippedOperations []string Default none. A list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, d for deletes, t for truncates, and none to not skip any operations. By default, no operations are skipped.
    slotDropOnStop boolean Default true. Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. The default behavior is that the replication slot remains configured for the connector when the connector stops. When the connector restarts, having the same replication slot enables the connector to start processing where it left off. Set to true only in testing or development environments. Dropping the slot allows the database to discard WAL segments. When the connector restarts it performs a new snapshot or it can continue from a persistent offset in the Kafka Connect offsets topic.
    slotMaxRetries integer Default 6. If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect.
    slotName string Default . (with all characters that are not [a-zA-Z0-9] changed to _ character). The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring.

    Slot names must conform to PostgreSQL replication slot naming rules, which state: “Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character.”

    slotRetryDelayMs integer Default 10000 (10 seconds). The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot.
    slotStreamParams map[string]string Parameters to pass to the configured logical decoding plug-in. For example:

    slotStreamParams:
      add-tables: "public.table,public.table2"
      include-lsn: "true"
    

    snapshotDelayMs integer An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
    snapshotFetchSize integer Default `10240`. During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch.
    snapshotIncludeCollectionList []string An optional list of regular expressions that match the fully-qualified names (schemaName.tableName) of the tables to include in a snapshot. The specified items must be named in the connector’s table.include.list property. This property takes effect only if the connector’s snapshotMode property is set to a value other than `never`. This property does not affect the behavior of incremental snapshots. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
    snapshotLockTimeoutMs integer Default `10000`. Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. [How the connector performs snapshots](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-snapshots) provides details.
    snapshotLockingMode string Default `none`. Specifies how the connector holds locks on tables while performing a schema snapshot. Set one of the following options:
    • shared: The connector holds a table lock that prevents exclusive table access during the initial phase of the snapshot in which database schemas and other metadata are read. After the initial phase, the snapshot no longer requires table locks.
    • none: The connector avoids locks entirely.

    WARNING: Do not use this mode if schema changes might occur during the snapshot.

    • custom: The connector performs a snapshot according to the implementation specified by the snapshotLockingModeCustomName property, which is a custom implementation of the io.debezium.spi.snapshot.SnapshotLock interface.

    snapshotLockingModeCustomName string When snapshotLockingMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotLock’ interface. For more information, see custom snapshotter SPI.
    snapshotMaxThreads integer Default 1. Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. This feature is incubating.
    snapshotMode string Default initial. Specifies the criteria for performing a snapshot when the connector starts:

  • always - The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes.

  • initial - The connector performs a snapshot only when no offsets have been recorded for the logical server name.

  • initial_only - The connector performs an initial snapshot and then stops, without processing any subsequent changes.

  • no_data - The connector never performs snapshots. When a connector is configured this way, after it starts, it behaves as follows: If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN is stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. Use this snapshot mode only when you know all data of interest is still reflected in the WAL.

  • never - Deprecated, see no_data.

  • when_needed - After the connector starts, it performs a snapshot only if it detects one of the following circumstances: It cannot detect any topic offsets. A previously recorded offset specifies a log position that is not available on the server.

  • configuration_based - With this option, you control snapshot behavior through a set of connector properties that have the prefix ‘snapshotModeConfigurationBased’.

  • custom - The connector performs a snapshot according to the implementation specified by the snapshotModeCustomName property, which defines a custom implementation of the io.debezium.spi.snapshot.Snapshotter interface.

    For more information, see the table of snapshot.mode options.

    snapshotModeConfigurationBasedSnapshotData boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table data when it performs a snapshot.
    snapshotModeConfigurationBasedSnapshotOnDataError boolean Default false. If the snapshotMode is set to configuration_based, this property specifies whether the connector attempts to snapshot table data if it does not find the last committed offset in the transaction log. Set the value to true to instruct the connector to perform a new snapshot.
    snapshotModeConfigurationBasedSnapshotOnSchemaError boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table schema in a snapshot if the schema history topic is not available.
    snapshotModeConfigurationBasedSnapshotSchema boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes the table schema when it performs a snapshot.
    snapshotModeConfigurationBasedStartStream boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector begins to stream change events after a snapshot completes.
    snapshotModeCustomName string When snapshotMode is set as custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.Snapshotter’ interface. The provided implementation is called after a connector restart to determine whether to perform a snapshot. For more information, see custom snapshotter SPI.
    snapshotQueryMode string Default select_all. Specifies how the connector queries data while performing a snapshot. Set one of the following options:

    • select_all: The connector performs a select all query by default, optionally adjusting the columns selected based on the column include and exclude list configurations.
    • custom: The connector performs a snapshot query according to the implementation specified by the snapshotQueryModeCustomName property, which defines a custom implementation of the io.debezium.spi.snapshot.SnapshotQuery interface. This setting enables you to manage snapshot content in a more flexible manner compared to using the snapshotSelectStatementOverrides property.
    snapshotQueryModeCustomName string When snapshotQueryMode is set as custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotQuery’ interface. For more information, see custom snapshotter SPI.
    snapshotSelectStatementOverrides map[string]string Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log. The property contains a hierarchy of fully-qualified table names in the form <schemaName>.<tableName>. For example,
    snapshotSelectStatementOverrides: 
      "customers.orders": "SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC"
    

    In the resulting snapshot, the connector includes only the records for which delete_flag = 0.

    statusUpdateIntervalMs integer Default 10000. Frequency for sending replication connection status updates to the server, given in milliseconds. The property also controls how frequently the database status is checked to detect a dead connection in case the database was shut down.
    timePrecisionMode string Default adaptive. Time, date, and timestamps can be represented with different kinds of precision:

    • adaptive: captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type.

    • adaptive_time_microseconds: captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type. An exception is TIME type fields, which are always captured as microseconds.

    • connect: always represents time and timestamp values by using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. For more information, see temporal values.

    tombstonesOnDelete boolean Default true. Controls whether a delete event is followed by a tombstone event.

  • true - a delete operation is represented by a delete event and a subsequent tombstone event.

  • false - only a delete event is emitted.

    After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic.

    topicCacheSize integer Default 10000. The size of the bounded concurrent hash map that is used for holding the topic names. This cache helps to determine the topic name corresponding to a given data collection.
    topicDelimiter string Default ".". Specifies the delimiter for the topic name.
    topicHeartbeatPrefix string Default __debezium-heartbeat. Controls the name of the topic to which the connector sends heartbeat messages. For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
    topicNamingStrategy string Default io.debezium.schema.SchemaTopicNamingStrategy. The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to SchemaTopicNamingStrategy.
    topicTransaction string Default transaction. Controls the name of the topic to which the connector sends transaction metadata messages. For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
    unavailableValuePlaceholder string Default __debezium_unavailable_value. Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of unavailable.value.placeholder starts with the hex: prefix it is expected that the rest of the string represents hexadecimally encoded octets. For more information, see toasted values.
    xminFetchIntervalMs integer Default 0. How often, in milliseconds, the XMIN will be read from the replication slot. The XMIN value provides the lower bounds of where a new replication slot could start from. The default value of 0 disables XMIN tracking.
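
    Putting a few of these together, a hedged sketch under spec.source.postgres; the values are hypothetical, and each camelCase key is converted to its dotted Debezium name (for example, snapshotMode becomes snapshot.mode):

    debeziumProperties:
      snapshotMode: initial      # converted to snapshot.mode
      pluginName: pgoutput       # converted to plugin.name
      slotDropOnStop: false      # keep the replication slot on graceful stop
      skippedOperations: ["t"]   # skip truncate events
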
    SGStream.spec.source.postgres.password

    ↩ Parent

    The password used by the CDC process to connect to the database.

    If not specified the default superuser password will be used.

    Property | Type | Description

    key string The Secret key where the password is stored.
    name string The Secret name where the password is stored.
    SGStream.spec.source.postgres.username

    ↩ Parent

    The username used by the CDC process to connect to the database.

    If not specified the default superuser username (by default postgres) will be used.

    Property | Type | Description

    key string The Secret key where the username is stored.
    name string The Secret name where the username is stored.
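
    For example, a hedged sketch referencing a hypothetical Secret named my-db-credentials for both fields:

    username:
      name: my-db-credentials
      key: username
    password:
      name: my-db-credentials
      key: password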

    SGStream.spec.source.sgCluster

    ↩ Parent

    The configuration of the data source required when type is SGCluster.

    Property | Type | Description

    name string The target SGCluster name.
    database string The database name to which the CDC process will connect.

    If not specified the default postgres database will be targeted.

    debeziumProperties object Specific properties of the Debezium Postgres connector.

    See https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties

    Each property is converted from myPropertyName to my.property.name

    excludes []string A list of regular expressions matching one or more <schema>.<table>.<column> entries to be filtered out before sending to the target.

    This property is mutually exclusive with includes.

    includes []string A list of regular expressions matching one or more <schema>.<table>.<column> entries to be included when sending to the target.

    This property is mutually exclusive with excludes.

    password object The password used by the CDC process to connect to the database.

    If not specified the default superuser password will be used.

    username object The username used by the CDC process to connect to the database.

    If not specified the default superuser username (by default postgres) will be used.
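
    A hedged sketch of an SGCluster source; the cluster name, database, and regular expression are hypothetical, and credentials default to the superuser when omitted:

    source:
      type: SGCluster
      sgCluster:
        name: my-cluster
        database: mydb
        excludes:
          - 'public\.audit_log\..*'   # filter out all entries under public.audit_log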

    SGStream.spec.source.sgCluster.debeziumProperties

    ↩ Parent

    Specific properties of the Debezium Postgres connector.

    See https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-connector-properties

    Each property is converted from myPropertyName to my.property.name

    Property | Type | Description

    binaryHandlingMode string Default bytes. Specifies how binary (bytea) columns should be represented in change events:

    • bytes represents binary data as byte array.

    • base64 represents binary data as base64-encoded strings.

    • base64-url-safe represents binary data as base64-url-safe-encoded strings.

    • hex represents binary data as hex-encoded (base16) strings.

    columnMaskHash map[string]map[string]map[string][]string An optional section that allows specifying, for a hash algorithm and a salt, a list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName. To match the name of a column Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms. A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. In the following example, CzQMA0cB5K is a randomly selected salt. columnMaskHash.SHA-256.CzQMA0cB5K=[inventory.orders.customerName,inventory.shipment.customerName] If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. Depending on the hash algorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked.
    columnMaskHashV2 map[string]map[string]map[string][]string Similar to columnMaskHash but using hashing strategy version 2. Hashing strategy version 2 should be used to ensure fidelity if the value is being hashed in different places or systems.
    columnMaskWithLengthChars []string An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk (*) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration.
    columnPropagateSourceType []string Default [.*]. An optional list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:

  • __debezium.source.column.type

  • __debezium.source.column.length

  • __debezium.source.column.scale

    These parameters propagate a column’s original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.

    columnTruncateToLengthChars []string An optional list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration.
    converters map[string]map[string]string Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example,

    isbn:
      type: io.debezium.test.IsbnConverter
      schemaName: io.debezium.postgresql.type.Isbn
    

    You must set the converters property to enable the connector to use a custom converter. For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface. If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. Each property is converted from myPropertyName to my.property.name

    customMetricTags map[string]string The custom metric tags accept key-value pairs used to customize the MBean object name; they are appended to the end of the regular name. Each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example:

    customMetricTags:
      k1: v1
      k2: v2
    

    databaseInitialStatements []string A list of SQL statements that the connector executes when it establishes a JDBC connection to the database. The connector may establish JDBC connections at its own discretion. Consequently, this property is useful for configuration of session parameters only, and not for executing DML statements. The connector does not execute these statements when it creates a connection for reading the transaction log.
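
    For example, a minimal sketch that configures session parameters on each new JDBC connection (the statements shown are illustrative):

    databaseInitialStatements:
      - "SET statement_timeout = 0"
      - "SET lock_timeout = '10s'"
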
    datatypePropagateSourceType []string Default `[.*]`. An optional list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
    • __debezium.source.column.type
    • __debezium.source.column.length
    • __debezium.source.column.scale

    These parameters propagate a column’s original type name and length (for variable-width types), respectively. Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases. The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. To match the name of a data type, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name. For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings.

    decimalHandlingMode string Default precise. Specifies how the connector should handle values for DECIMAL and NUMERIC columns:

    • precise: represents values by using java.math.BigDecimal in binary form in change events.
    • double: represents values by using double values, which might result in a loss of precision but which is easier to use.
    • string: encodes values as formatted strings, which are easy to consume but semantic information about the real type is lost. For more information, see Decimal types.
    errorsMaxRetries integer Default -1. Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.

    Set one of the following options:

    • -1: No limit. The connector always restarts automatically, and retries the operation, regardless of the number of previous failures.

    • 0: Disabled. The connector fails immediately, and never retries the operation. User intervention is required to restart the connector.

    • > 0: The connector restarts automatically until it reaches the specified maximum number of retries. After the next failure, the connector stops, and user intervention is required to restart it.

    eventProcessingFailureHandlingMode string Default fail. Specifies how the connector should react to exceptions during processing of events:

  • fail: propagates the exception, indicates the offset of the problematic event, and causes the connector to stop.

  • warn: logs the offset of the problematic event, skips that event, and continues processing.

  • skip: skips the problematic event and continues processing.

    fieldNameAdjustmentMode string Default none. Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:

  • none does not apply any adjustment.

  • avro replaces the characters that cannot be used in the Avro type name with underscore.

  • avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java

    For more information, see Avro naming.

    flushLsnSource boolean Default true. Determines whether the connector should commit the LSN of the processed records in the source PostgreSQL database so that the WAL logs can be deleted. Specify false if you don’t want the connector to do this. Note that if set to false the LSN will not be acknowledged by Debezium and as a result WAL logs will not be cleared, which might result in disk space issues. The user is expected to handle the acknowledgement of the LSN outside Debezium.
    heartbeatActionQuery string Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. This is useful for resolving the situation described in WAL disk space consumption, where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing WAL records and thus acknowledging WAL positions with the database. To address this situation, create a heartbeat table in the low-traffic database, and set this property to a statement that inserts records into that table, for example:

    INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')
    

    This allows the connector to receive changes from the low-traffic database and acknowledge their LSNs, which prevents unbounded WAL growth on the database host.

    heartbeatIntervalMs integer Default 0. Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages. Heartbeat messages are needed when there are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. In this situation, the connector reads from the database transaction log as usual but rarely emits change records to Kafka. This means that no offset updates are committed to Kafka and the connector does not have an opportunity to send the latest retrieved LSN to the database. The database retains WAL files that contain events that have already been processed by the connector. Sending heartbeat messages enables the connector to send the latest retrieved LSN to the database, which allows the database to reclaim disk space being used by no longer needed WAL files.
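
    For example, a minimal sketch combining both heartbeat properties (the interval value and the heartbeat table are illustrative; the table must exist in the source database):

    heartbeatIntervalMs: 10000
    heartbeatActionQuery: "INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')"
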
    hstoreHandlingMode string Default json. Specifies how the connector should handle values for hstore columns:

    • map: represents values by using MAP.
    • json: represents values by using a JSON string. This setting encodes values as formatted strings such as {"key": "val"}. For more information, see PostgreSQL HSTORE type.
    includeUnknownDatatypes boolean Default true. Specifies connector behavior when the connector encounters a field whose data type is unknown. When set to false the connector omits the field from the change event and logs a warning. Set this property to true if you want the change event to contain an opaque binary representation of the field. This lets consumers decode the field. You can control the exact representation by setting the binaryHandlingMode property.

    NOTE: Consumers risk backward compatibility issues when includeUnknownDatatypes is set to true. Not only may the database-specific binary representation change between releases, but if the data type is eventually supported by Debezium, the data type will be sent downstream in a logical type, which would require adjustments by consumers. In general, when encountering unsupported data types, create a feature request so that support can be added.

    incrementalSnapshotChunkSize integer Default 1024. The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
    incrementalSnapshotWatermarkingStrategy string Default insert_insert. Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.

    You can specify one of the following options:

    • insert_insert: When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads during the snapshot, it writes an entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, Debezium inserts a second entry to record the closing of the window.

    • insert_delete: When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads, it writes a single entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, this entry is removed. No entry is created for the signal to close the snapshot window. Set this option to prevent rapid growth of the signaling data collection.

    intervalHandlingMode string Default numeric. Specifies how the connector should handle values for interval columns:

  • numeric: represents intervals using approximate number of microseconds.

  • string: represents intervals exactly by using the string pattern representation P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S. For example: P1Y2M3DT4H5M6.78S. For more information, see PostgreSQL basic types.

    maxBatchSize integer Default `2048`. Positive integer value that specifies the maximum size of each batch of events that the connector processes.
    maxQueueSize integer Default `8192`. Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize.
    maxQueueSizeInBytes integer Default `0`. A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. If maxQueueSize is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set maxQueueSize=1000, and maxQueueSizeInBytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
    messageKeyColumns []string A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables. By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format: <fully-qualified_tableName>:<keyColumn>,<keyColumn>. To base a table key on multiple column names, insert commas between the column names. Each fully-qualified table name is a regular expression in the following format: <schemaName>.<tableName>. The property can include entries for multiple tables. Use a semicolon to separate table entries in the list. The following example sets the message key for the tables inventory.customers and purchase.orders: inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4 For the table inventory.customers, the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any schema, the columns pk3 and pk4 serve as the message key. There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key. Note that setting this property while REPLICA IDENTITY is set to DEFAULT on the tables will cause the tombstone events to not be created properly if the key columns are not part of the primary key of the table. Setting REPLICA IDENTITY to FULL is the only solution.
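
    Since the field is a list, each table entry can presumably be given as its own item; a sketch based on the example above:

    messageKeyColumns:
      - "inventory.customers:pk1,pk2"
      - "(.*).purchaseorders:pk3,pk4"
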
    moneyFractionDigits integer Default `2`. Specifies how many decimal digits should be used when converting Postgres money type to java.math.BigDecimal, which represents the values in change events. Applicable only when decimalHandlingMode is set to precise.
    notificationEnabledChannels []string List of notification channel names that are enabled for the connector. By default, the following channels are available: sink, log and jmx. Optionally, you can also implement a [custom notification channel](https://debezium.io/documentation/reference/stable/configuration/signalling.html#debezium-signaling-enabling-custom-signaling-channel).
    pluginName string Default `pgoutput`. The name of the [PostgreSQL logical decoding plug-in](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-output-plugin) installed on the PostgreSQL server. Supported values are decoderbufs, and pgoutput.
    pollIntervalMs integer Default `500`. Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events.
    provideTransactionMetadata boolean Default `false`. Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see [Transaction metadata](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-transaction-metadata).
    publicationAutocreateMode string Default `all_tables`. Applies only when streaming changes by using [the pgoutput plug-in](https://www.postgresql.org/docs/current/sql-createpublication.html). The setting determines how creation of a [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) should work. Specify one of the following values:
    • all_tables - If a publication exists, the connector uses it. If a publication does not exist, the connector creates a publication for all tables in the database for which the connector is capturing changes. For the connector to create a publication it must access the database through a database user account that has permission to create publications and perform replications. You grant the required permission by using the following SQL command CREATE PUBLICATION <publication_name> FOR ALL TABLES;.

    • disabled - The connector does not attempt to create a publication. A database administrator or the user configured to perform replications must have created the publication before running the connector. If the connector cannot find the publication, the connector throws an exception and stops.

    • filtered - If a publication exists, the connector uses it. If no publication exists, the connector creates a new publication for tables that match the current filter configuration as specified by the schema.include.list, schema.exclude.list, table.include.list, and table.exclude.list connector configuration properties. For example: CREATE PUBLICATION <publication_name> FOR TABLE <tbl1, tbl2, tbl3>. If the publication exists, the connector updates the publication for tables that match the current filter configuration. For example: ALTER PUBLICATION <publication_name> SET TABLE <tbl1, tbl2, tbl3>.

    publicationName string Default . (with all characters that are not [a-zA-Z0-9] changed to _ character). The name of the PostgreSQL publication created for streaming changes when using pgoutput. This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined.
    replicaIdentityAutosetValues []string The setting determines the value for replica identity at table level. This option will overwrite the existing value in the database. A comma-separated list of regular expressions that match fully-qualified tables and the replica identity value to be used in each table. Each expression must match the pattern <fully-qualified table name>:<replica identity>, where the table name could be defined as (SCHEMA_NAME.TABLE_NAME), and the replica identity values are:

    • DEFAULT - Records the old values of the columns of the primary key, if any. This is the default for non-system tables.

    • INDEX index_name - Records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked NOT NULL. If this index is dropped, the behavior is the same as NOTHING.

    • FULL - Records the old values of all columns in the row.

    • NOTHING - Records no information about the old row. This is the default for system tables.

    For example, schema1.*:FULL,schema2.table2:NOTHING,schema2.table3:INDEX idx_name
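
    Since the field is a list, the example above can presumably be written with one pattern per item:

    replicaIdentityAutosetValues:
      - "schema1.*:FULL"
      - "schema2.table2:NOTHING"
      - "schema2.table3:INDEX idx_name"
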
    retriableRestartConnectorWaitMs integer Default 10000 (10 seconds). The number of milliseconds to wait before restarting a connector after a retriable error occurs.
    schemaNameAdjustmentMode string Default none. Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:

  • none does not apply any adjustment.

  • avro replaces the characters that cannot be used in the Avro type name with underscore.

  • avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java

    schemaRefreshMode string Default columns_diff. Specifies the conditions that trigger a refresh of the in-memory schema for a table:

  • columns_diff: is the safest mode. It ensures that the in-memory schema stays in sync with the database table’s schema at all times.

  • columns_diff_exclude_unchanged_toast: instructs the connector to refresh the in-memory schema cache if there is a discrepancy with the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy.

    This setting can significantly improve connector performance if there are frequently-updated tables that have TOASTed data that are rarely part of updates. However, it is possible for the in-memory schema to become outdated if TOASTable columns are dropped from the table.

    signalDataCollection string Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: <schemaName>.<tableName>.
    signalEnabledChannels []string Default [sgstream-annotations]. List of the signaling channel names that are enabled for the connector. By default, the following channels are available: sgstream-annotations, source, kafka, file and jmx. Optionally, you can also implement a custom signaling channel.
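
    For example, a minimal sketch that enables the source signaling channel in addition to the default channel (the signaling table name is illustrative and must exist in the source database):

    signalEnabledChannels:
      - sgstream-annotations
      - source
    signalDataCollection: public.debezium_signals
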
    skipMessagesWithoutChange boolean Default false. Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per includes or excludes fields. Note: Only works when REPLICA IDENTITY of the table is set to FULL
    skippedOperations []string Default none. A list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, d for deletes, t for truncates, and none to not skip any operations. By default, no operations are skipped.
    slotDropOnStop boolean Default true. Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. Dropping the slot allows the database to discard WAL segments. When set to false the replication slot remains configured for the connector when the connector stops; when the connector restarts, having the same replication slot enables the connector to start processing where it left off, either by performing a new snapshot or by continuing from a persistent offset in the Kafka Connect offsets topic.
    slotMaxRetries integer Default 6. If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect.
    slotName string Default . (with all characters that are not [a-zA-Z0-9] changed to _ character). The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring.

    Slot names must conform to PostgreSQL replication slot naming rules, which state: “Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character.”

    slotRetryDelayMs integer Default 10000 (10 seconds). The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot.
    slotStreamParams map[string]string Parameters to pass to the configured logical decoding plug-in. For example:

    slotStreamParams:
      add-tables: "public.table,public.table2"
      include-lsn: "true"
    

    snapshotDelayMs integer An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
    snapshotFetchSize integer Default `10240`. During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch.
    snapshotIncludeCollectionList []string Default . An optional list of regular expressions that match the fully-qualified names (<schemaName>.<tableName>) of the tables to include in a snapshot. The specified items must be named in the connector’s table.include.list property. This property takes effect only if the connector’s snapshotMode property is set to a value other than `never`. This property does not affect the behavior of incremental snapshots. To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
    snapshotLockTimeoutMs integer Default `10000`. Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. [How the connector performs snapshots](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-snapshots) provides details.
    snapshotLockingMode string Default `none`. Specifies how the connector holds locks on tables while performing a schema snapshot. Set one of the following options:
    • shared: The connector holds a table lock that prevents exclusive table access during the initial phase of the snapshot in which database schemas and other metadata are read. After the initial phase, the snapshot no longer requires table locks.
    • none: The connector avoids locks entirely.

    WARNING: Do not use this mode if schema changes might occur during the snapshot.

    • custom: The connector performs a snapshot according to the implementation specified by the snapshotLockingModeCustomName property, which is a custom implementation of the io.debezium.spi.snapshot.SnapshotLock interface.

    snapshotLockingModeCustomName string When snapshotLockingMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotLock’ interface. For more information, see custom snapshotter SPI.
    snapshotMaxThreads integer Default 1. Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. This feature is incubating.
    snapshotMode string Default initial. Specifies the criteria for performing a snapshot when the connector starts:

  • always - The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes.

  • initial - The connector performs a snapshot only when no offsets have been recorded for the logical server name.

  • initial_only - The connector performs an initial snapshot and then stops, without processing any subsequent changes.

  • no_data - The connector never performs snapshots. When a connector is configured this way, after it starts, it behaves as follows: If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN is stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. Use this snapshot mode only when you know all data of interest is still reflected in the WAL.

  • never - Deprecated. See no_data.

  • when_needed - After the connector starts, it performs a snapshot only if it detects one of the following circumstances: It cannot detect any topic offsets. A previously recorded offset specifies a log position that is not available on the server.

  • configuration_based - With this option, you control snapshot behavior through a set of connector properties that have the prefix ‘snapshotModeConfigurationBased’.

  • custom - The connector performs a snapshot according to the implementation specified by the snapshotModeCustomName property, which defines a custom implementation of the io.debezium.spi.snapshot.Snapshotter interface.

    For more information, see the table of snapshot.mode options.

    snapshotModeConfigurationBasedSnapshotData boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table data when it performs a snapshot.
    snapshotModeConfigurationBasedSnapshotOnDataError boolean Default false. If the snapshotMode is set to configuration_based, this property specifies whether the connector attempts to snapshot table data if it does not find the last committed offset in the transaction log. Set the value to true to instruct the connector to perform a new snapshot.
    snapshotModeConfigurationBasedSnapshotOnSchemaError boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes table schema in a snapshot if the schema history topic is not available.
    snapshotModeConfigurationBasedSnapshotSchema boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector includes the table schema when it performs a snapshot.
    snapshotModeConfigurationBasedStartStream boolean Default false. If the snapshotMode is set to configuration_based, set this property to specify whether the connector begins to stream change events after a snapshot completes.
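
    For example, a minimal sketch of a configuration-based snapshot that captures schema and data and then streams changes (the combination of values is illustrative):

    snapshotMode: configuration_based
    snapshotModeConfigurationBasedSnapshotSchema: true
    snapshotModeConfigurationBasedSnapshotData: true
    snapshotModeConfigurationBasedStartStream: true
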
    snapshotModeCustomName string When snapshotMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.Snapshotter’ interface. The provided implementation is called after a connector restart to determine whether to perform a snapshot. For more information, see custom snapshotter SPI.
    snapshotQueryMode string Default select_all. Specifies how the connector queries data while performing a snapshot. Set one of the following options:

    • select_all: The connector performs a select all query by default, optionally adjusting the columns selected based on the column include and exclude list configurations.
    • custom: The connector performs a snapshot query according to the implementation specified by the snapshotQueryModeCustomName property, which defines a custom implementation of the io.debezium.spi.snapshot.SnapshotQuery interface. This setting enables you to manage snapshot content in a more flexible manner compared to using the snapshotSelectStatementOverrides property.
    snapshotQueryModeCustomName string When snapshotQueryMode is set to custom, use this setting to specify the name of the custom implementation provided in the name() method that is defined by the ‘io.debezium.spi.snapshot.SnapshotQuery’ interface. For more information, see custom snapshotter SPI.
    snapshotSelectStatementOverrides map[string]string Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log. The property contains a hierarchy of fully-qualified table names in the form <schemaName>.<tableName>. For example,
    snapshotSelectStatementOverrides:
      "customers.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
    

    In the resulting snapshot, the connector includes only the records for which delete_flag = 0.

    statusUpdateIntervalMs integer Default 10000. Frequency for sending replication connection status updates to the server, given in milliseconds. The property also controls how frequently the database status is checked to detect a dead connection in case the database was shut down.
    timePrecisionMode string Default adaptive. Time, date, and timestamps can be represented with different kinds of precision:

    • adaptive: captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type.

    • adaptive_time_microseconds: captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type. An exception is TIME type fields, which are always captured as microseconds.

    • connect: always represents time and timestamp values by using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. For more information, see temporal values.

    tombstonesOnDelete boolean Default true. Controls whether a delete event is followed by a tombstone event.

  • true - a delete operation is represented by a delete event and a subsequent tombstone event.

  • false - only a delete event is emitted.

    After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic.

    topicCacheSize integer Default 10000. The size of the bounded concurrent hash map that is used for holding the topic names. This cache helps to determine the topic name corresponding to a given data collection.
    topicDelimiter string Default `.`. Specifies the delimiter for topic names.
    topicHeartbeatPrefix string Default __debezium-heartbeat. Controls the name of the topic to which the connector sends heartbeat messages. For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
    topicNamingStrategy string Default io.debezium.schema.SchemaTopicNamingStrategy. The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events.
    topicTransaction string Default transaction. Controls the name of the topic to which the connector sends transaction metadata messages. For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
    unavailableValuePlaceholder string Default __debezium_unavailable_value. Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of unavailable.value.placeholder starts with the hex: prefix it is expected that the rest of the string represents hexadecimally encoded octets. For more information, see toasted values.
    xminFetchIntervalMs integer Default 0. How often, in milliseconds, the XMIN will be read from the replication slot. The XMIN value provides the lower bounds of where a new replication slot could start from. The default value of 0 disables XMIN tracking.
    SGStream.spec.source.sgCluster.password

    ↩ Parent

    The password used by the CDC process to connect to the database.

    If not specified the default superuser password will be used.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The Secret key where the password is stored.
    name string The Secret name where the password is stored.
    SGStream.spec.source.sgCluster.username

    ↩ Parent

    The username used by the CDC process to connect to the database.

    If not specified the default superuser username (by default postgres) will be used.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The Secret key where the username is stored.
    name string The Secret name where the username is stored.

    SGStream.spec.target

    ↩ Parent

    The target of this stream.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    type string Indicates the type of target of this stream. Possible values are:

    • CloudEvent: events will be sent to a cloud event receiver.
    • PgLambda: events will trigger the execution of a lambda script by integrating with Knative Service (Knative must be already installed).
    • SGCluster: events will be written to an SGCluster as a sink, allowing migration of data.
    cloudEvent object Configuration section for CloudEvent target type.
    pgLambda object Configuration section for PgLambda target type.
    sgCluster object The configuration of the data target required when type is SGCluster.

    SGStream.spec.target.cloudEvent

    ↩ Parent

    Configuration section for CloudEvent target type.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    binding string The CloudEvent binding (http by default).

    Only http is supported at the moment.

    format string The CloudEvent format (json by default).

    Only json is supported at the moment.

    http object The http binding configuration.
    SGStream.spec.target.cloudEvent.http

    ↩ Parent

    The http binding configuration.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    url string The URL used to send the CloudEvents to the endpoint.
    connectTimeout string Set the connect timeout.

    Value 0 represents infinity (default). Negative values are not allowed.

    headers map[string]string Headers to include when sending CloudEvents to the endpoint.
    readTimeout string Set the read timeout. The value is the timeout to read a response.

    Value 0 represents infinity (default). Negative values are not allowed.

    retryBackoffDelay integer The maximum amount of delay in seconds after an error before retrying again.

    The initial delay will use 10% of this value and then increase the value exponentially up to the maximum amount of seconds specified with this field.

    Default: 60

    retryLimit integer Set the retry limit. When set, after an error the event will be sent again up to the specified number of times. When not set, after an error the event will be sent again without a limit.
    skipHostnameVerification boolean When true, disables hostname verification.
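
    For example, a minimal sketch of a CloudEvent target (the URL, header, and retry values are illustrative):

    target:
      type: CloudEvent
      cloudEvent:
        binding: http
        format: json
        http:
          url: "http://cloudevent-receiver.example.com"
          headers:
            Authorization: "Bearer <token>"
          retryBackoffDelay: 30
          retryLimit: 5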

    SGStream.spec.target.pgLambda

    ↩ Parent

    Configuration section for PgLambda target type.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    knative object Knative Service configuration.
    script string Script to execute. This field is mutually exclusive with scriptFrom field.
    scriptFrom object Reference to either a Kubernetes Secret or a ConfigMap that contains the script to execute. This field is mutually exclusive with script field.

    Fields secretKeyRef and configMapKeyRef are mutually exclusive, and one of them is required.

    scriptType string The PgLambda script format (javascript by default).

    SGStream.spec.target.pgLambda.knative

    ↩ Parent

    Knative Service configuration.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    annotations map[string]string Annotations to set on the Knative Service.
    http object PgLambda uses a CloudEvent http binding to send events to the Knative Service. This section allows modifying the configuration of this binding.
    labels map[string]string Labels to set on the Knative Service.
    SGStream.spec.target.pgLambda.knative.http

    ↩ Parent

    PgLambda uses a CloudEvent http binding to send events to the Knative Service. This section allows modifying the configuration of this binding.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    connectTimeout string Set the connect timeout.

    Value 0 represents infinity (default). Negative values are not allowed.

    headers map[string]string Headers to include when sending CloudEvents to the endpoint.
    readTimeout string Set the read timeout. The value is the timeout to read a response.

    Value 0 represents infinity (default). Negative values are not allowed.

    retryBackoffDelay integer The maximum amount of delay in seconds after an error before retrying again.

    The initial delay will use 10% of this value and then increase the value exponentially up to the maximum amount of seconds specified with this field.

    Default: 60

    retryLimit integer Set the retry limit. When set, after an error the event will be sent again up to the specified number of times. When not set, after an error the event will be sent again without a limit.
    skipHostnameVerification boolean When true, disables hostname verification.
    url string The URL used to send the CloudEvents to the endpoint.
    SGStream.spec.target.pgLambda.scriptFrom

    ↩ Parent

    Reference to either a Kubernetes Secret or a ConfigMap that contains the script to execute. This field is mutually exclusive with script field.

    Fields secretKeyRef and configMapKeyRef are mutually exclusive, and one of them is required.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    configMapKeyRef object A ConfigMap reference that contains the script to execute. This field is mutually exclusive with secretKeyRef field.
    secretKeyRef object A Kubernetes SecretKeySelector that contains the script to execute. This field is mutually exclusive with configMapKeyRef field.
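
    For example, a minimal sketch referencing a script stored in a ConfigMap (the ConfigMap name and key are illustrative):

    scriptFrom:
      configMapKeyRef:
        name: my-lambda-script
        key: script.js
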
    SGStream.spec.target.pgLambda.scriptFrom.configMapKeyRef

    ↩ Parent

    A ConfigMap reference that contains the script to execute. This field is mutually exclusive with secretKeyRef field.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The key name within the ConfigMap that contains the script to execute.
    name string The name of the ConfigMap that contains the script to execute.
    SGStream.spec.target.pgLambda.scriptFrom.secretKeyRef

    ↩ Parent

    A Kubernetes SecretKeySelector that contains the script to execute. This field is mutually exclusive with configMapKeyRef field.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The key of the secret to select from. Must be a valid secret key.
    name string Name of the referent.

    SGStream.spec.target.sgCluster

    ↩ Parent

    The configuration of the data target required when type is SGCluster.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    name string The target SGCluster name.
    database string The target database name to which the data will be migrated.

    If not specified the default postgres database will be targeted.

    ddlImportRoleSkipFilter string Allows setting a SIMILAR TO regular expression to match the names of the roles to skip during the import of DDL.

    When not set and the source is an SGCluster, it will match the superuser, replicator, and authenticator usernames.

    debeziumProperties object Specific properties of the Debezium JDBC sink.

    See https://debezium.io/documentation/reference/stable/connectors/jdbc.html#jdbc-connector-configuration

    Each property is converted from myPropertyName to my.property.name

    password object The password used by the CDC sink process to connect to the database.

    If not specified the default superuser password will be used.

    skipDdlImport boolean When true, disables the import of DDL; tables will be created on demand by Debezium.
    username object The username used by the CDC sink process to connect to the database.

    If not specified the default superuser username (by default postgres) will be used.
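
    For example, a minimal sketch of an SGCluster target using a Secret for credentials (the cluster, database, and Secret names are illustrative):

    target:
      type: SGCluster
      sgCluster:
        name: my-target-cluster
        database: appdb
        username:
          name: target-credentials
          key: username
        password:
          name: target-credentials
          key: password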

    SGStream.spec.target.sgCluster.debeziumProperties

    ↩ Parent

    Specific properties of the Debezium JDBC sink.

    See https://debezium.io/documentation/reference/stable/connectors/jdbc.html#jdbc-connector-configuration

    Each property is converted from myPropertyName to my.property.name

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    batchSize integer Default 500. Specifies how many records to attempt to batch together into the destination table.

    Note that if you set consumerMaxPollRecords in the Connect worker properties to a value lower than batchSize, batch processing will be capped by consumerMaxPollRecords and the desired batchSize won’t be reached. You can also configure the connector’s underlying consumer’s maxPollRecords using consumerOverrideMaxPollRecords in the connector configuration.

    columnNamingStrategy string Default io.debezium.connector.jdbc.naming.DefaultColumnNamingStrategy. Specifies the fully-qualified class name of a ColumnNamingStrategy implementation that the connector uses to resolve column names from event field names. By default, the connector uses the field name as the column name.
    connectionPoolAcquire_increment integer Default 32. Specifies the number of connections that the connector attempts to acquire if the connection pool exceeds its maximum size.
    connectionPoolMax_size integer Default 32. Specifies the maximum number of concurrent connections that the pool maintains.
    connectionPoolMin_size integer Default 5. Specifies the minimum number of connections in the pool.
    connectionPoolTimeout integer Default 1800. Specifies the number of seconds that an unused connection is kept before it is discarded.
    databaseTime_zone string Default UTC. Specifies the timezone used when inserting JDBC temporal values.
    deleteEnabled boolean Default true. Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database. Use of this option requires that you set the primaryKeyMode to record_key.
    dialectPostgresPostgisSchema string Default public. Specifies the schema name where the PostgreSQL PostGIS extension is installed. The default is public; however, if the PostGIS extension was installed in another schema, this property should be used to specify the alternate schema name.
    dialectSqlserverIdentityInsert boolean Default false. Specifies whether the connector automatically sets an IDENTITY_INSERT before an INSERT or UPSERT operation into the identity column of SQL Server tables, and then unsets it immediately after the operation. When the default setting (false) is in effect, an INSERT or UPSERT operation into the IDENTITY column of a table results in a SQL exception.
    insertMode string Default upsert. Specifies the strategy used to insert events into the database. The following options are available:

    • insert: Specifies that all events should construct INSERT-based SQL statements. Use this option only when no primary key is used, or when you can be certain that no updates can occur to rows with existing primary key values.
    • update: Specifies that all events should construct UPDATE-based SQL statements. Use this option only when you can be certain that the connector receives only events that apply to existing rows.
    • upsert: Specifies that the connector adds events to the table using upsert semantics. That is, if the primary key does not exist, the connector performs an INSERT operation, and if the key does exist, the connector performs an UPDATE operation. When idempotent writes are required, the connector should be configured to use this option.
    primaryKeyFields []string Either the name of the primary key column or a comma-separated list of fields to derive the primary key from. When primaryKeyMode is set to record_key and the event’s key is a primitive type, it is expected that this property specifies the column name to be used for the key. When the primaryKeyMode is set to record_key with a non-primitive key, or record_value, it is expected that this property specifies a comma-separated list of field names from either the key or value. If the primaryKeyMode is set to record_key with a non-primitive key, or record_value, and this property is not specified, the connector derives the primary key from all fields of either the record key or record value, depending on the specified mode.
    primaryKeyMode string Default record_key. Specifies how the connector resolves the primary key columns from the event.
  • none: Specifies that no primary key columns are created.
  • record_key: Specifies that the primary key columns are sourced from the event’s record key. If the record key is a primitive type, the primaryKeyFields property is required to specify the name of the primary key column. If the record key is a struct type, the primaryKeyFields property is optional, and can be used to specify a subset of columns from the event’s key as the table’s primary key.
  • record_value: Specifies that the primary key columns are sourced from the event’s value. You can set the primaryKeyFields property to define the primary key as a subset of fields from the event’s value; otherwise all fields are used by default.
    quoteIdentifiers boolean Default true. Specifies whether generated SQL statements use quotation marks to delimit table and column names. See the Quoting and case sensitivity section for more details.
    schemaEvolution string Default basic. Specifies how the connector evolves the destination table schemas. For more information, see Schema evolution. The following options are available:

    • none: Specifies that the connector does not evolve the destination schema.
    • basic: Specifies that basic evolution occurs. The connector adds missing columns to the table by comparing the incoming event’s record schema to the database table structure.
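
    For example, a minimal sketch of JDBC sink tuning combining the properties above (values are illustrative):

    debeziumProperties:
      insertMode: upsert
      primaryKeyMode: record_key
      deleteEnabled: true
      schemaEvolution: basic
      batchSize: 1000
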
    tableNameFormat string Default ${original}. Specifies a string that determines how the destination table name is formatted, based on the topic name of the event. The placeholder ${original} is replaced with the schema name and the table name separated by a point character (.).
    tableNamingStrategy string Default io.stackgres.stream.jobs.migration.StreamMigrationTableNamingStrategy. Specifies the fully-qualified class name of a TableNamingStrategy implementation that the connector uses to resolve table names from incoming event topic names. The default behavior is to:
  • Replace the ${topic} placeholder in the tableNameFormat configuration property with the event’s topic.
  • Sanitize the table name by replacing dots (.) with underscores (_).
    truncateEnabled boolean Default true. Specifies whether the connector processes TRUNCATE events and truncates the corresponding tables from the database. Although support for TRUNCATE statements has been available in Db2 since version 9.7, currently, the JDBC connector is unable to process standard TRUNCATE events that the Db2 connector emits. To ensure that the JDBC connector can process TRUNCATE events received from Db2, perform the truncation by using an alternative to the standard TRUNCATE TABLE statement. For example:
    ALTER TABLE <table_name> ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE
    

    The user account that submits the preceding query requires ALTER privileges on the table to be truncated.

    SGStream.spec.target.sgCluster.password

    ↩ Parent

    The password used by the CDC sink process to connect to the database.

    If not specified the default superuser password will be used.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The Secret key where the password is stored.
    name string The Secret name where the password is stored.
    SGStream.spec.target.sgCluster.username

    ↩ Parent

    The username used by the CDC sink process to connect to the database.

    If not specified the default superuser username (by default postgres) will be used.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The Secret key where the username is stored.
    name string The Secret name where the username is stored.

    SGStream.spec.debeziumEngineProperties

    ↩ Parent

    See https://debezium.io/documentation/reference/stable/development/engine.html#engine-properties Each property is converted from myPropertyName to my.property.name

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    errorsMaxRetries integer Default -1. The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries).
    errorsRetryDelayInitialMs integer Default 300. Initial delay (in ms) for retries when encountering connection errors. This value will be doubled upon every retry but won’t exceed errorsRetryDelayMaxMs.
    errorsRetryDelayMaxMs integer Default 10000. Max delay (in ms) between retries when encountering connection errors.
    offsetCommitPolicy string Default io.debezium.engine.spi.OffsetCommitPolicy.PeriodicCommitOffsetPolicy. The name of the Java class of the commit policy. It defines when an offset commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface OffsetCommitPolicy. The default is a periodic commit policy based upon time intervals.
    offsetFlushIntervalMs integer Default 60000. Interval at which to try committing offsets. The default is 1 minute.
    offsetFlushTimeoutMs integer Default 5000. Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds.
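
    For example, a minimal sketch of the engine retry and offset settings (values are illustrative):

    debeziumEngineProperties:
      errorsMaxRetries: 5
      errorsRetryDelayInitialMs: 300
      errorsRetryDelayMaxMs: 10000
      offsetFlushIntervalMs: 10000
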
    predicates map[string]map[string]string Predicates can be applied to transformations to make the transformations optional.

    An example of the configuration is:

    predicates:
      headerExists: # (1)
        type: "org.apache.kafka.connect.transforms.predicates.HasHeaderKey" # (2)
        name: "header.name" # (3)
    transforms:
      filter: # (4)
        type: "io.debezium.embedded.ExampleFilterTransform" # (5)
        predicate: "headerExists" # (6)
        negate: "true" # (7)
    
    1. One predicate is defined - headerExists
    2. Implementation of the headerExists predicate is org.apache.kafka.connect.transforms.predicates.HasHeaderKey
    3. The headerExists predicate has one configuration option - name
    4. One transformation is defined - filter
    5. Implementation of the filter transformation is io.debezium.embedded.ExampleFilterTransform
    6. The filter transformation requires the predicate headerExists
    7. The filter transformation expects the value of the predicate to be negated, making the predicate determine if the header does not exist
    transforms map[string]map[string]string Before the messages are delivered to the handler it is possible to run them through a pipeline of Kafka Connect Simple Message Transforms (SMT). Each SMT can pass the message unchanged, modify it or filter it out. The chain is configured using the property transforms. The map keys are the logical names of the transformations to be applied. For each transformation, the type key defines the name of the implementation class, and the remaining keys are configuration options that are passed to the transformation.

    An example of the configuration is:

    transforms: # (1)
      router:
        type: "org.apache.kafka.connect.transforms.RegexRouter" # (2)
        regex: "(.*)" # (3)
        replacement: "trf$1" # (3)
      filter:
        type: "io.debezium.embedded.ExampleFilterTransform" # (4)
    
    1. Two transformations are defined - filter and router
    2. Implementation of the router transformation is org.apache.kafka.connect.transforms.RegexRouter
    3. The router transformation has two configuration options - regex and replacement
    4. Implementation of the filter transformation is io.debezium.embedded.ExampleFilterTransform

    SGStream.spec.pods

    ↩ Parent

    The configuration for the SGStream Pod.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    persistentVolume object Pod’s persistent volume configuration.


    resources object The resources assigned to the stream container.

    See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

    scheduling object Pod custom scheduling, affinity and topology spread constraints configuration.

    SGStream.spec.pods.persistentVolume

    ↩ Parent

    Pod’s persistent volume configuration.

    Example:

    apiVersion: stackgres.io/v1alpha1
    kind: SGStream
    metadata:
      name: stackgres
    spec:
      pods:
        persistentVolume:
          size: '5Gi'
          storageClass: default
    

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    size string Size of the PersistentVolume for the stream Pod. This size is specified either in Mebibytes, Gibibytes or Tebibytes (multiples of 2^20, 2^30 or 2^40, respectively).
    storageClass string Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolume for the stream.

    SGStream.spec.pods.resources

    ↩ Parent

    The resources assigned to the stream container.

    See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    claims []object Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.

    This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.

    This field is immutable. It can only be set for containers.

    limits map[string]string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    requests map[string]string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
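
    For example, a sketch (with illustrative values) that requests half a CPU and one gibibyte of memory for the stream container, capped at one CPU and two gibibytes:

    apiVersion: stackgres.io/v1alpha1
    kind: SGStream
    metadata:
      name: stackgres
    spec:
      pods:
        resources:
          requests:
            cpu: '500m'
            memory: '1Gi'
          limits:
            cpu: '1'
            memory: '2Gi'
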
    SGStream.spec.pods.resources.claims[index]

    ↩ Parent

    ResourceClaim references one entry in PodSpec.ResourceClaims.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.

    SGStream.spec.pods.scheduling

    ↩ Parent

    Pod custom scheduling, affinity and topology spread constraints configuration.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    nodeAffinity object Node affinity is a group of node affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#nodeaffinity-v1-core

    nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#podaffinity-v1-core

    podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#podantiaffinity-v1-core

    priorityClassName string If specified, indicates the pod’s priority. “system-node-critical” and “system-cluster-critical” are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.
    tolerations []object If specified, the pod’s tolerations.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#toleration-v1-core

    topologySpreadConstraints []object TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#topologyspreadconstraint-v1-core
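
    A sketch of a scheduling configuration combining several of these fields; the label key/value, priority class name, and taint key are illustrative:

    spec:
      pods:
        scheduling:
          nodeSelector:
            disktype: ssd
          priorityClassName: high-priority
          tolerations:
          - key: dedicated
            operator: Equal
            value: cdc
            effect: NoSchedule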

    SGStream.spec.pods.scheduling.nodeAffinity

    ↩ Parent

    Node affinity is a group of node affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#nodeaffinity-v1-core

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    preferredDuringSchedulingIgnoredDuringExecution []object The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
    requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.
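
    A sketch of a node affinity that requires amd64 nodes and prefers a given zone (the zone value is illustrative):

    spec:
      pods:
        scheduling:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values: ["amd64"]
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50
              preference:
                matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values: ["zone-a"]
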
    SGStream.spec.pods.scheduling.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[index]

    ↩ Parent

    An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
    weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.

    Format: int32
    SGStream.spec.pods.scheduling.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].preference

    ↩ Parent

    A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object A list of node selector requirements by node’s labels.
    matchFields []object A list of node selector requirements by node’s fields.
    SGStream.spec.pods.scheduling.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].preference.matchExpressions[index]

    ↩ Parent

    A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The label key that the selector applies to.
    operator string Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
    values []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].preference.matchFields[index]

    ↩ Parent

    A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The label key that the selector applies to.
    operator string Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
    values []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution

    ↩ Parent

    A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    nodeSelectorTerms []object Required. A list of node selector terms. The terms are ORed.
    SGStream.spec.pods.scheduling.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[index]

    ↩ Parent

    A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object A list of node selector requirements by node’s labels.
    matchFields []object A list of node selector requirements by node’s fields.
    SGStream.spec.pods.scheduling.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[index].matchExpressions[index]

    ↩ Parent

    A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The label key that the selector applies to.
    operator string Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
    values []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[index].matchFields[index]

    ↩ Parent

    A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string The label key that the selector applies to.
    operator string Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
    values []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAffinity

    ↩ Parent

    Pod affinity is a group of inter pod affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#podaffinity-v1-core

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    preferredDuringSchedulingIgnoredDuringExecution []object The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
    requiredDuringSchedulingIgnoredDuringExecution []object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
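
    A sketch of a pod affinity that co-locates the stream Pod on the same node as the Pods of a given application (the app label is illustrative):

    spec:
      pods:
        scheduling:
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: my-cluster
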
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index]

    ↩ Parent

    The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.
    weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

    Format: int32
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm

    ↩ Parent

    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
    labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    mismatchLabelKeys []string MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    namespaceSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    namespaces []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.labelSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
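
    For example, a sketch of a selector (usable within an affinity term) that matches Pods labeled app=my-app whose tier label is either backend or worker; label names and values are illustrative:

    labelSelector:
      matchLabels:
        app: my-app
      matchExpressions:
      - key: tier
        operator: In
        values: ["backend", "worker"]
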
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.labelSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.namespaceSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.namespaceSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[index]

    ↩ Parent

    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
    labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    mismatchLabelKeys []string MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    namespaceSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    namespaces []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
    SGStream.spec.pods.scheduling.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].labelSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].labelSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].namespaceSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].namespaceSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAntiAffinity

    ↩ Parent

    Pod anti affinity is a group of inter pod anti affinity scheduling rules.

    See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#podantiaffinity-v1-core

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    preferredDuringSchedulingIgnoredDuringExecution []object The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
    requiredDuringSchedulingIgnoredDuringExecution []object If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
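
    A sketch of a pod anti-affinity that prefers placing the stream Pod on a different node than the Pods of a given application (the app label is illustrative):

    spec:
      pods:
        scheduling:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: my-cluster
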
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index]

    ↩ Parent

    The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.
    weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

    Format: int32
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm

    ↩ Parent

    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
    labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    mismatchLabelKeys []string MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    namespaceSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    namespaces []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.labelSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.labelSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.namespaceSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[index].podAffinityTerm.namespaceSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[index]

    ↩ Parent

    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
    labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    mismatchLabelKeys []string MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod’s pod (anti) affinity. Keys that don’t exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn’t set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
    namespaceSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    namespaces []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
    SGStream.spec.pods.scheduling.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].labelSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].labelSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].namespaceSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[index].namespaceSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
    SGStream.spec.pods.scheduling.tolerations[index]

    ↩ Parent

    The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
    key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
    operator string Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
    tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.

    Format: int64
    value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
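
    For example, a sketch of a toleration (under spec.pods.scheduling.tolerations) that lets the stream Pod remain for up to five minutes on a node that becomes unreachable:

    tolerations:
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
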
    SGStream.spec.pods.scheduling.topologySpreadConstraints[index]

    ↩ Parent

    TopologySpreadConstraint specifies how to spread matching pods among the given topology.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It’s a required field. Default value is 1 and 0 is not allowed.

    Format: int32
    topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
    whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.
    labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn’t set. Keys that don’t exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.

    This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).

    minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats “global minimum” as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won’t schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

    For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.

    This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).

    Format: int32

    nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

    If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

    nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

    If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
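
    A sketch of a constraint (under spec.pods.scheduling.topologySpreadConstraints) that spreads matching Pods evenly across zones while still scheduling when the constraint cannot be satisfied; the app label is illustrative:

    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: my-stream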

    SGStream.spec.pods.scheduling.topologySpreadConstraints[index].labelSelector

    ↩ Parent

    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
    matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
    SGStream.spec.pods.scheduling.topologySpreadConstraints[index].labelSelector.matchExpressions[index]

    ↩ Parent

    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    key string key is the label key that the selector applies to.
    operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
    values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

    SGStream.status

    ↩ Parent

    Status of a StackGres stream.

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    conditions []object Possible conditions are:

    • Running: to indicate when the operation is actually running
    • Completed: to indicate when the operation has completed successfully
    • Failed: to indicate when the operation has failed
    events object Events status
    failure string The failure message
    snapshot object Snapshot status
    streaming object Streaming status

    SGStream.status.conditions[index]

    ↩ Parent

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    lastTransitionTime string Last time the condition transitioned from one status to another.
    message string A human-readable message indicating details about the transition.
    reason string The reason for the condition last transition.
    status string Status of the condition, one of True, False or Unknown.
    type string Type of the condition.
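
    For illustration only, a condition entry has the following shape; the reason and message values below are hypothetical, as the actual strings are set by the operator:

    status:
      conditions:
      - type: Completed
        status: "True"
        lastTransitionTime: "2024-01-01T00:00:00Z"
        reason: OperationCompleted # hypothetical value
        message: "The operation has completed successfully" # hypothetical value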

    SGStream.status.events

    ↩ Parent

    Events status

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    lastErrorSeen string The last error seen while sending events since the last start or metrics reset.
    lastEventSent string The last event that the stream has sent since the last start or metrics reset.
    lastEventWasSent boolean It is true if the last event that the stream has tried to send since the last start or metrics reset was sent successfully.
    totalNumberOfErrorsSeen integer The total number of errors sending events that this stream has seen since the last start or metrics reset.
    totalNumberOfEventsSent integer The total number of events that this stream has sent since the last start or metrics reset.

    SGStream.status.snapshot

    ↩ Parent

    Snapshot status

    Property
    Required
    Updatable
    May Require Restart
    Type
    Description


    capturedTables []string The list of tables that are captured by the connector.
    chunkFrom string The lower bound of the primary key set defining the current chunk.
    chunkId string The identifier of the current snapshot chunk.
    chunkTo string The upper bound of the primary key set defining the current chunk.
    currentQueueSizeInBytes integer The current volume, in bytes, of records in the queue.
    lastEvent string The last snapshot event that the connector has read.
    maxQueueSizeInBytes integer The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.
    milliSecondsSinceLastEvent integer The number of milliseconds since the connector has read and processed the most recent event.
    numberOfEventsFiltered integer The number of events that have been filtered by include/exclude list filtering rules configured on the connector.
    queueRemainingCapacity integer The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop.
    queueTotalCapacity integer The length the queue used to pass events between the snapshotter and the main Kafka Connect loop.
    remainingTableCount integer The number of tables that the snapshot has yet to copy.
    rowsScanned map[string]integer Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.
    snapshotAborted boolean Whether the snapshot was aborted.
    snapshotCompleted boolean Whether the snapshot completed.
    snapshotDurationInSeconds integer The total number of seconds that the snapshot has taken so far, even if not complete. Includes also time when snapshot was paused.
    snapshotPaused boolean Whether the snapshot was paused.
    snapshotPausedDurationInSeconds integer The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up.
    snapshotRunning boolean Whether the snapshot was started.
    tableFrom string The lower bound of the primary key set of the currently snapshotted table.
    tableTo string The upper bound of the primary key set of the currently snapshotted table.
    totalNumberOfEventsSeen integer The total number of events that this connector has seen since last started or reset.
    totalTableCount integer The total number of tables that are being included in the snapshot.
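remainingTableCount together with totalTableCount gives a rough progress indicator for an ongoing snapshot. A minimal sketch, assuming the cloudevent stream:

```sh
# Print "<remaining>/<total>" tables still to be copied by the snapshot.
kubectl get sgstream cloudevent \
  -o jsonpath='{.status.snapshot.remainingTableCount}{"/"}{.status.snapshot.totalTableCount}{" tables remaining"}{"\n"}'
```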

    SGStream.status.streaming

    ↩ Parent

    Streaming status

| Property | Type | Description |
|----------|------|-------------|
| capturedTables | []string | The list of tables that are captured by the connector. |
| connected | boolean | Whether the connector is currently connected to the database server. |
| currentQueueSizeInBytes | integer | The current volume, in bytes, of records in the queue. |
| lastEvent | string | The last streaming event that the connector has read. |
| lastTransactionId | string | Transaction identifier of the last processed transaction. |
| maxQueueSizeInBytes | integer | The maximum buffer size of the queue, in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. |
| milliSecondsBehindSource | integer | The number of milliseconds between the last change event’s timestamp and the connector processing it. The value incorporates any difference between the clocks on the machines where the database server and the connector are running. |
| milliSecondsSinceLastEvent | integer | The number of milliseconds since the connector read and processed the most recent event. |
| numberOfCommittedTransactions | integer | The number of processed transactions that were committed. |
| numberOfEventsFiltered | integer | The number of events that have been filtered by the include/exclude list filtering rules configured on the connector. |
| queueRemainingCapacity | integer | The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
| queueTotalCapacity | integer | The total capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
| sourceEventPosition | map[string]string | The coordinates of the last received event. |
| totalNumberOfCreateEventsSeen | integer | The total number of create events that the connector has seen since the last start or metrics reset. |
| totalNumberOfDeleteEventsSeen | integer | The total number of delete events that the connector has seen since the last start or metrics reset. |
| totalNumberOfEventsSeen | integer | The total number of events that the connector has seen since the last start or metrics reset. |
| totalNumberOfUpdateEventsSeen | integer | The total number of update events that the connector has seen since the last start or metrics reset. |
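milliSecondsBehindSource is the closest thing to an end-to-end lag gauge while streaming, keeping in mind the clock-skew caveat noted above. A sketch, assuming the cloudevent stream:

```sh
# Approximate replication lag in milliseconds.
kubectl get sgstream cloudevent \
  -o jsonpath='{.status.streaming.milliSecondsBehindSource}{" ms behind source"}{"\n"}'
```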