SGShardedCluster


Kind: SGShardedCluster

listKind: SGShardedClusterList

plural: sgshardedclusters

singular: sgshardedcluster

shortNames: sgscl


StackGres PostgreSQL sharded clusters are created using the SGShardedCluster Custom Resource.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  postgres:
    version: 'latest'
  coordinator:
    instances: 1
    pods:
      persistentVolume:
        size: '5Gi'
  shards:
    clusters: 2
    instancesPerCluster: 1
    pods:
      persistentVolume:
        size: '5Gi'

See also the Sharded Cluster Creation section.

Property | Required | Updatable | May Require Restart | Type | Description

apiVersion string stackgres.io/v1alpha1
kind string SGShardedCluster
metadata object Refer to the Kubernetes API documentation for the fields of the metadata field.
spec object Specification of the desired behavior of a StackGres sharded cluster.
status object Current status of a StackGres sharded cluster.

SGShardedCluster.spec

↩ Parent

Specification of the desired behavior of a StackGres sharded cluster.

Property | Required | Updatable | May Require Restart | Type | Description

coordinator object The coordinator is a StackGres cluster responsible for coordinating data storage and access from the shards.
database string The database name that will be created and used across all nodes, and where “partitioned” (distributed) tables will live.
postgres object This section allows configuring Postgres features.
shards object The shards are a group of StackGres clusters where the partitioned data chunks are stored.

When referring to the cluster in the descriptions below, it applies to any shard’s StackGres cluster.

configurations object Sharded cluster custom configurations.


distributedLogs object StackGres features a functionality for all pods to send Postgres, Patroni and PgBouncer logs to a central (distributed) location, which is in turn another Postgres database. Logs can then be accessed via SQL interface or from the web UI. This section controls whether to enable this feature or not. If not enabled, logs are sent to the pod's standard output.
initialData object Sharded cluster initialization data options. The sharded cluster may be initialized empty, or from a sharded backup restoration.

This field can only be set on creation.

metadata object Metadata information for resources created by any cluster.
nonProductionOptions object
postgresServices object Kubernetes services created or managed by StackGres.
profile string The profile allows changing, in a convenient place, a set of configuration defaults that affect how the cluster is generated.

All those defaults can be overwritten by setting the corresponding fields.

Available profiles are:

  • production:

    Prevents two Pods from running in the same Node (sets .spec.nonProductionOptions.disableClusterPodAntiAffinity to false by default). Sets both limits and requests, using the SGInstanceProfile, for the patroni container that runs both Patroni and Postgres (sets .spec.nonProductionOptions.disablePatroniResourceRequirements to false by default). Sets requests, using the referenced SGInstanceProfile, for sidecar containers other than patroni (sets .spec.nonProductionOptions.disableClusterResourceRequirements to false by default).

  • testing:

    Allows two Pods to run in the same Node (sets .spec.nonProductionOptions.disableClusterPodAntiAffinity to true by default). Sets both limits and requests, using the SGInstanceProfile, for the patroni container that runs both Patroni and Postgres (sets .spec.nonProductionOptions.disablePatroniResourceRequirements to false by default). Sets requests, using the referenced SGInstanceProfile, for sidecar containers other than patroni (sets .spec.nonProductionOptions.disableClusterResourceRequirements to false by default).

  • development:

    Allows two Pods to run in the same Node (sets .spec.nonProductionOptions.disableClusterPodAntiAffinity to true by default). Unsets both limits and requests for the patroni container that runs both Patroni and Postgres (sets .spec.nonProductionOptions.disablePatroniResourceRequirements to true by default). Unsets requests for sidecar containers other than patroni (sets .spec.nonProductionOptions.disableClusterResourceRequirements to true by default).

Changing this field may require a restart.

Default: production

prometheusAutobind boolean If enabled, a ServiceMonitor is created for each Prometheus instance found in order to collect metrics.
replication object This section allows to configure the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

type string The sharding technology that will be used for the sharded cluster.

Currently the only possible value for this field is citus.
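
For illustration, here is a minimal sketch combining the spec-level fields described above; the database name shard_db and the profile value are hypothetical choices:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  type: citus
  database: shard_db
  profile: testing
  postgres:
    version: 'latest'
  coordinator:
    instances: 1
    pods:
      persistentVolume:
        size: '5Gi'
  shards:
    clusters: 2
    instancesPerCluster: 1
    pods:
      persistentVolume:
        size: '5Gi'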

SGShardedCluster.spec.coordinator

↩ Parent

The coordinator is a StackGres cluster responsible for coordinating data storage and access from the shards.

Property | Required | Updatable | May Require Restart | Type | Description

instances integer Number of StackGres instances for the cluster. Each instance contains one Postgres server. Out of all of the Postgres servers, one is elected as the primary, the rest remain as read-only replicas.

Minimum: 1
Maximum: 16
pods object Cluster pod’s configuration.
configurations object Coordinator custom configurations.
managedSql object This section allows referencing SQL scripts that will be applied to the cluster live.
metadata object Metadata information for resources created by the coordinator cluster.
replication object This section allows to configure the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

sgInstanceProfile string Name of the SGInstanceProfile.

A SGInstanceProfile defines CPU and memory limits. Must exist before creating a cluster.

When no profile is set, a default (1 core, 2 GiB RAM) one is used.

Changing this field may require a restart.

SGShardedCluster.spec.coordinator.pods

↩ Parent

Cluster pod’s configuration.

Property | Required | Updatable | May Require Restart | Type | Description

persistentVolume object Pod’s persistent volume configuration.
customContainers []object A list of custom application containers that run within the coordinator cluster’s Pods (see the example after this table).

The names used in this section will be prefixed with the string custom-, so when referencing them in the .spec.containers section of SGInstanceProfile the same prefix has to be prepended.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customInitContainers []object A list of custom application init containers that run within the coordinator cluster’s Pods. The custom init containers will run, following the defined sequence, at the end of the cluster’s Pods init containers.

The names used in this section will be prefixed with the string custom-, so when referencing them in the .spec.containers section of SGInstanceProfile the same prefix has to be prepended.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customVolumes []object A list of custom volumes that may be used along with any container defined in the customInitContainers or customContainers sections for the coordinator.

The names used in this section will be prefixed with the string custom-, so when referencing them in the customInitContainers or customContainers sections the same prefix has to be prepended.

Only the following volume types are allowed: configMap, downwardAPI, emptyDir, gitRepo, glusterfs, hostPath, nfs, projected and secret.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#volume-v1-core

disableConnectionPooling boolean If set to true, avoids creating a connection pooling (using PgBouncer) sidecar.

Changing this field may require a restart.

disableMetricsExporter boolean If set to true, avoids creating the Prometheus exporter sidecar. Recommended when there’s no intention to use Prometheus for monitoring.
disablePostgresUtil boolean If set to true, avoids creating the postgres-util sidecar. This sidecar contains usual Postgres administration utilities that are not present in the main (patroni) container, like psql. Only disable if you know what you are doing.

Changing this field may require a restart.

managementPolicy string managementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.
resources object Pod custom resources configuration.
scheduling object Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.
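
As a sketch of the custom- prefix rule described above, the following hypothetical fragment defines a custom volume named scripts and a custom container that mounts it; the container must reference the volume as custom-scripts, since the declared name gets the custom- prefix (the ConfigMap name my-scripts is an assumption):

spec:
  coordinator:
    pods:
      customVolumes:
        - name: scripts
          configMap:
            name: my-scripts
      customContainers:
        - name: util
          image: busybox
          command: ['sleep', 'infinity']
          volumeMounts:
            - name: custom-scripts
              mountPath: /scripts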

SGShardedCluster.spec.coordinator.pods.persistentVolume

↩ Parent

Pod’s persistent volume configuration.

Property | Required | Updatable | May Require Restart | Type | Description

size string Size of the PersistentVolume set for each instance of the cluster. This size is specified either in Mebibytes, Gibibytes or Tebibytes (multiples of 2^20, 2^30 or 2^40, respectively).
storageClass string Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolumes for the instances of the cluster.
SGShardedCluster.spec.coordinator.pods.resources

↩ Parent

Pod custom resources configuration.

Property | Required | Updatable | May Require Restart | Type | Description

disableResourcesRequestsSplitFromTotal boolean When set to true the resources requests values in fields SGInstanceProfile.spec.requests.cpu and SGInstanceProfile.spec.requests.memory will represent the resources requests of the patroni container and the total resources requests calculated by adding the resources requests of all the containers (including the patroni container).

Changing this field may require a restart.

enableClusterLimitsRequirements boolean When enabled, resource limits for containers other than the patroni container will be set just like for the patroni container, as specified in the SGInstanceProfile.

Changing this field may require a restart.
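
A minimal sketch of the resources options above, enabling limits for non-patroni containers while keeping the default requests split behavior:

spec:
  coordinator:
    pods:
      resources:
        disableResourcesRequestsSplitFromTotal: false
        enableClusterLimitsRequirements: true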

SGShardedCluster.spec.coordinator.pods.scheduling

↩ Parent

Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.

Property | Required | Updatable | May Require Restart | Type | Description

backup object Backup Pod custom scheduling and affinity configuration.
nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

topologySpreadConstraints []object TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.
SGShardedCluster.spec.coordinator.pods.scheduling.backup

↩ Parent

Backup Pod custom scheduling and affinity configuration.

Property | Required | Updatable | May Require Restart | Type | Description

nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

SGShardedCluster.spec.coordinator.pods.scheduling.topologySpreadConstraints[index]

↩ Parent

TopologySpreadConstraint specifies how to spread matching pods among the given topology.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#topologyspreadconstraint-v1-core

Property | Required | Updatable | May Require Restart | Type | Description

maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It’s a required field. Default value is 1 and 0 is not allowed.

Format: int32
topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.


labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.

This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).

Format: int32

nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
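
As an illustration of the fields above, a hypothetical constraint that spreads the coordinator Pods evenly across zones (the label app: StackGresCluster is an assumption about how the Pods are labeled):

spec:
  coordinator:
    pods:
      scheduling:
        topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: StackGresCluster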

SGShardedCluster.spec.coordinator.pods.scheduling.topologySpreadConstraints[index].labelSelector

↩ Parent

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

Property | Required | Updatable | May Require Restart | Type | Description

matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
SGShardedCluster.spec.coordinator.pods.scheduling.topologySpreadConstraints[index].labelSelector.matchExpressions[index]

↩ Parent

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

Property | Required | Updatable | May Require Restart | Type | Description

key string key is the label key that the selector applies to.
operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

SGShardedCluster.spec.coordinator.configurations

↩ Parent

Coordinator custom configurations.

Property | Required | Updatable | May Require Restart | Type | Description

sgPoolingConfig string Name of the SGPoolingConfig used for this cluster. Each pod contains a sidecar with a connection pooler (currently: PgBouncer).

If not set, a default configuration will be used. Disabling connection pooling altogether is possible by setting the disableConnectionPooling property of the pods object to true.

Changing this field may require a restart.

sgPostgresConfig string Name of the SGPostgresConfig used for the cluster. It must exist. When not set, a default Postgres config, for the major version selected, is used.

Changing this field may require a restart.
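
For example, a sketch referencing existing configuration resources by name (pgconfig1 and poolconfig1 are hypothetical SGPostgresConfig and SGPoolingConfig names):

spec:
  coordinator:
    configurations:
      sgPostgresConfig: pgconfig1
      sgPoolingConfig: poolconfig1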

SGShardedCluster.spec.coordinator.managedSql

↩ Parent

This section allows referencing SQL scripts that will be applied to the cluster live.

Property | Required | Updatable | May Require Restart | Type | Description

continueOnSGScriptError boolean If true, a failure in any entry of any SGScript will not prevent subsequent SGScripts from being executed. By default it is false.
scripts []object A list of script references that will be executed in sequence.
SGShardedCluster.spec.coordinator.managedSql.scripts[index]

↩ Parent

A script reference. Each version of each entry of the referenced script will be executed exactly once, following the sequence defined in the referenced script and skipping any script entry that has already been executed.

Property | Required | Updatable | May Require Restart | Type | Description

id integer The id is immutable and must be unique across all the SGScript entries. It is replaced by the operator and is used to identify the SGScript entry.
sgScript string A reference to an SGScript.
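
A minimal sketch of a managedSql section referencing a single script (init-tables is a hypothetical SGScript name):

spec:
  coordinator:
    managedSql:
      continueOnSGScriptError: false
      scripts:
        - sgScript: init-tables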

SGShardedCluster.spec.coordinator.metadata

↩ Parent

Metadata information for resources created by the coordinator cluster.

Property | Required | Updatable | May Require Restart | Type | Description

annotations object Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
labels object Custom Kubernetes labels to be passed to resources created and managed by StackGres.
SGShardedCluster.spec.coordinator.metadata.annotations

↩ Parent

Custom Kubernetes annotations to be passed to resources created and managed by StackGres.

Property | Required | Updatable | May Require Restart | Type | Description

allResources map[string]string Annotations to attach to any resource created or managed by StackGres.
clusterPods map[string]string Annotations to attach to pods created or managed by StackGres.
primaryService map[string]string Custom Kubernetes annotations passed to the -primary service.
replicasService map[string]string Custom Kubernetes annotations passed to the -replicas service.
services map[string]string Annotations to attach to all services created or managed by StackGres.
SGShardedCluster.spec.coordinator.metadata.labels

↩ Parent

Custom Kubernetes labels to be passed to resources created and managed by StackGres.

Property | Required | Updatable | May Require Restart | Type | Description

clusterPods map[string]string Labels to attach to Pods created or managed by StackGres.
services map[string]string Labels to attach to Services and Endpoints created or managed by StackGres.
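
For illustration, a sketch that attaches a custom annotation to every resource of the coordinator and a custom label to its Pods (the keys and values are hypothetical):

spec:
  coordinator:
    metadata:
      annotations:
        allResources:
          owner: team-db
      labels:
        clusterPods:
          tier: database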

SGShardedCluster.spec.coordinator.replication

↩ Parent

This section allows to configure the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

Property | Required | Updatable | May Require Restart | Type | Description

mode string The replication mode applied to the whole cluster. Possible values are:

  • async
  • sync
  • strict-sync
  • sync-all
  • strict-sync-all

async

When in asynchronous mode the cluster is allowed to lose some committed transactions. When the primary server fails or becomes unavailable for any other reason a sufficiently healthy standby will automatically be promoted to primary. Any transactions that have not been replicated to that standby remain in a “forked timeline” on the primary, and are effectively unrecoverable (the data is still there, but recovering it requires a manual recovery effort by data recovery specialists).

sync

When in synchronous mode a standby will not be promoted unless it is certain that the standby contains all transactions that may have returned a successful commit status to client (clients can change the behavior per transaction using PostgreSQL’s synchronous_commit setting. Transactions with synchronous_commit values of off and local may be lost on fail over, but will not be blocked by replication delays). This means that the system may be unavailable for writes even though some servers are available. System administrators can still use manual failover commands to promote a standby even if it results in transaction loss.

Synchronous mode does not guarantee multi node durability of commits under all circumstances. When no suitable standby is available, primary server will still accept writes, but does not guarantee their replication. When the primary fails in this mode no standby will be promoted. When the host that used to be the primary comes back it will get promoted automatically, unless system administrator performed a manual failover. This behavior makes synchronous mode usable with 2 node clusters.

When synchronous mode is used and a standby crashes, commits will block until the primary is switched to standalone mode. Manually shutting down or restarting a standby will not cause a commit service interruption. Standby will signal the primary to release itself from synchronous standby duties before PostgreSQL shutdown is initiated.

strict-sync

When it is absolutely necessary to guarantee that each write is stored durably on at least two nodes, use the strict synchronous mode. This mode prevents synchronous replication from being switched off on the primary when no synchronous standby candidates are available. As a downside, the primary will not be available for writes (unless the Postgres transaction explicitly turns off the synchronous_mode parameter), blocking all client write requests until at least one synchronous replica comes up.

Note: Because of the way synchronous replication is implemented in PostgreSQL it is still possible to lose transactions even when using strict synchronous mode. If the PostgreSQL backend is cancelled while waiting to acknowledge replication (as a result of packet cancellation due to client timeout or backend failure) transaction changes become visible for other backends. Such changes are not yet replicated and may be lost in case of standby promotion.

sync-all

The same as sync, but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

strict-sync-all

The same as strict-sync, but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

Default: sync-all

syncInstances integer Number of synchronous standby instances. Must be less than the total number of instances. It is set to 1 by default. Only settable if mode is sync or strict-sync.

Minimum: 1
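
A minimal sketch of synchronous replication for the coordinator; since mode is sync, syncInstances may be set explicitly:

spec:
  coordinator:
    replication:
      mode: sync
      syncInstances: 1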

SGShardedCluster.spec.postgres

↩ Parent

This section allows configuring Postgres features.

Property | Required | Updatable | May Require Restart | Type | Description

version string Postgres version used on the cluster. It is either of:

  • The string ‘latest’, which automatically sets the latest major.minor Postgres version.
  • A major version, like ‘14’ or ‘13’, which sets that major version and the latest minor version.
  • A specific major.minor version, like ‘14.4’.
extensions []object StackGres supports deploying extensions at runtime by simply adding an entry to this array. A deployed extension still requires creation in a database using the CREATE EXTENSION statement. After an extension is deployed correctly it will be present until removed and the cluster restarted.

A cluster restart is required for:

  • Extensions that require adding an entry to the shared_preload_libraries configuration parameter.
  • Upgrading extensions that overwrite any file that is not the extension’s control file or the extension’s script file.
  • Removing extensions. Until the cluster is restarted, a removed extension will still be available.
  • Installing extensions that require an extra mount. After installation the cluster will require a restart.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  postgres:
    extensions:
      - {name: 'timescaledb', version: '2.3.1'}

flavor string Postgres flavor used on the cluster. It is either of:

  • babelfish: uses Babelfish for PostgreSQL (https://babelfish-for-postgresql.github.io/babelfish-for-postgresql/).

If not specified, vanilla Postgres will be used for the cluster.

This field can only be set on creation.

ssl object This section allows using SSL when connecting to Postgres.


SGShardedCluster.spec.postgres.extensions[index]

↩ Parent

Property | Required | Updatable | May Require Restart | Type | Description

name string The name of the extension to deploy.
publisher string The id of the publisher of the extension to deploy. If not specified, com.ongres will be used by default.
repository string The repository base URL from where to obtain the extension to deploy.

This field is filled by the operator.

version string The version of the extension to deploy. If not specified, the version of the stable channel will be used by default.

SGShardedCluster.spec.postgres.ssl

↩ Parent

This section allows using SSL when connecting to Postgres.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  postgres:
    ssl:
      enabled: true
      certificateSecretKeySelector:
        name: stackgres-secrets
        key: cert
      privateKeySecretKeySelector:
        name: stackgres-secrets
        key: key

Property | Required | Updatable | May Require Restart | Type | Description

certificateSecretKeySelector object Secret key selector for the certificate or certificate chain used for SSL connections.
enabled boolean Allows enabling SSL for connections to Postgres. By default it is true.

If true, the certificate and private key will be auto-generated unless the fields certificateSecretKeySelector and privateKeySecretKeySelector are specified.

privateKeySecretKeySelector object Secret key selector for the private key used for SSL connections.
SGShardedCluster.spec.postgres.ssl.certificateSecretKeySelector

↩ Parent

Secret key selector for the certificate or certificate chain used for SSL connections.

Property | Required | Updatable | May Require Restart | Type | Description

key string The key of the Secret that contains the certificate or certificate chain for SSL connections.
name string The name of the Secret that contains the certificate or certificate chain for SSL connections.
SGShardedCluster.spec.postgres.ssl.privateKeySecretKeySelector

↩ Parent

Secret key selector for the private key used for SSL connections.

Property | Required | Updatable | May Require Restart | Type | Description

key string The key of the Secret that contains the private key for SSL connections.
name string The name of the Secret that contains the private key for SSL connections.

SGShardedCluster.spec.shards

↩ Parent

The shards are a group of StackGres clusters where the partitioned data chunks are stored.

When referring to the cluster in the descriptions below, it applies to any shard’s StackGres cluster.

Property | Required | Updatable | May Require Restart | Type | Description

clusters integer Number of shard StackGres clusters.

Minimum: 1
Maximum: 16
instancesPerCluster integer Number of StackGres instances per shard’s StackGres cluster. Each instance contains one Postgres server. Out of all of the Postgres servers, one is elected as the primary, the rest remain as read-only replicas.

Minimum: 1
Maximum: 16
pods object Cluster pod’s configuration.
configurations object Shards custom configurations.
managedSql object This section allows referencing SQL scripts that will be applied to the cluster live.
metadata object Metadata information for resources created by the shards clusters.
overrides []object Any shard can be overridden by this section.
replication object This section allows to configure the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

sgInstanceProfile string Name of the SGInstanceProfile.

A SGInstanceProfile defines CPU and memory limits. Must exist before creating a cluster.

When no profile is set, a default (1 core, 2 GiB RAM) one is used.

Changing this field may require a restart.

SGShardedCluster.spec.shards.pods

↩ Parent

Cluster pod’s configuration.

Property | Required | Updatable | May Require Restart | Type | Description

persistentVolume object Pod’s persistent volume configuration.
customContainers []object A list of custom application containers that run within the shards cluster’s Pods.

The names used in this section will be prefixed with the string custom-, so when referencing them in the .spec.containers section of SGInstanceProfile the same prefix has to be prepended.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customInitContainers []object A list of custom application init containers that run within the shards cluster’s Pods. The custom init containers will run, following the defined sequence, at the end of the cluster’s Pods init containers.

The names used in this section will be prefixed with the string custom-, so when referencing them in the .spec.containers section of SGInstanceProfile the same prefix has to be prepended.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customVolumes []object A list of custom volumes that may be used along with any container defined in the customInitContainers or customContainers sections for the shards.

The names used in this section will be prefixed with the string custom-, so when referencing them in the customInitContainers or customContainers sections the same prefix has to be prepended.

Only the following volume types are allowed: configMap, downwardAPI, emptyDir, gitRepo, glusterfs, hostPath, nfs, projected and secret.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#volume-v1-core

disableConnectionPooling boolean If set to true, avoids creating a connection pooling (using PgBouncer) sidecar.

Changing this field may require a restart.

disableMetricsExporter boolean If set to true, avoids creating the Prometheus exporter sidecar. Recommended when there’s no intention to use Prometheus for monitoring.

Changing this field may require a restart.

disablePostgresUtil boolean If set to true, avoids creating the postgres-util sidecar. This sidecar contains usual Postgres administration utilities that are not present in the main (patroni) container, like psql. Only disable if you know what you are doing.

Changing this field may require a restart.

managementPolicy string managementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.
resources object Pod custom resources configuration.
scheduling object Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.

SGShardedCluster.spec.shards.pods.persistentVolume

↩ Parent

Pod’s persistent volume configuration.

Property | Required | Updatable | May Require Restart | Type | Description

size string Size of the PersistentVolume set for each instance of the cluster. This size is specified either in Mebibytes, Gibibytes or Tebibytes (multiples of 2^20, 2^30 or 2^40, respectively).
storageClass string Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolumes for the instances of the cluster.
SGShardedCluster.spec.shards.pods.resources

↩ Parent

Pod custom resources configuration.

Property | Required | Updatable | May Require Restart | Type | Description

disableResourcesRequestsSplitFromTotal boolean When set to true the resources requests values in fields SGInstanceProfile.spec.requests.cpu and SGInstanceProfile.spec.requests.memory will represent the resources requests of the patroni container and the total resources requests calculated by adding the resources requests of all the containers (including the patroni container).

Changing this field may require a restart.

enableClusterLimitsRequirements boolean When enabled, resource limits for containers other than the patroni container will be set just like for the patroni container, as specified in the SGInstanceProfile.

Changing this field may require a restart.

SGShardedCluster.spec.shards.pods.scheduling

↩ Parent

Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.

Property | Required | Updatable | May Require Restart | Type | Description

backup object Backup Pod custom scheduling and affinity configuration.
nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

topologySpreadConstraints []object TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.
SGShardedCluster.spec.shards.pods.scheduling.backup

↩ Parent

Backup Pod custom scheduling and affinity configuration.

Property | Required | Updatable | May Require Restart | Type | Description

nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

SGShardedCluster.spec.shards.pods.scheduling.topologySpreadConstraints[index]

↩ Parent

TopologySpreadConstraint specifies how to spread matching pods among the given topology.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#topologyspreadconstraint-v1-core

Property | Required | Updatable | May Require Restart | Type | Description

maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It’s a required field. Default value is 1 and 0 is not allowed.

Format: int32
topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.


labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.

This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).

Format: int32

nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

SGShardedCluster.spec.shards.pods.scheduling.topologySpreadConstraints[index].labelSelector

↩ Parent

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

Property | Required | Updatable | May Require Restart | Type | Description

matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
SGShardedCluster.spec.shards.pods.scheduling.topologySpreadConstraints[index].labelSelector.matchExpressions[index]

↩ Parent

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

Property | Required | Updatable | May Require Restart | Type | Description

key string key is the label key that the selector applies to.
operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

SGShardedCluster.spec.shards.configurations

↩ Parent

Shards custom configurations.

Property | Required | Updatable | May Require Restart | Type | Description

sgPoolingConfig string Name of the SGPoolingConfig used for this cluster. Each pod contains a sidecar with a connection pooler (currently: PgBouncer).

If not set, a default configuration will be used. Disabling connection pooling altogether is possible by setting the disableConnectionPooling property of the pods object to true.

Changing this field may require a restart.

sgPostgresConfig string Name of the SGPostgresConfig used for the cluster. It must exist. When not set, a default Postgres config, for the major version selected, is used.

Changing this field may require a restart.

SGShardedCluster.spec.shards.managedSql

↩ Parent

This section allows referencing SQL scripts that will be applied to the cluster live.

Property | Required | Updatable | May Require Restart | Type | Description

continueOnSGScriptError boolean If true, a failure in any entry of any SGScript will not prevent subsequent SGScripts from being executed. By default it is false.
scripts []object A list of script references that will be executed in sequence.
SGShardedCluster.spec.shards.managedSql.scripts[index]

↩ Parent

A script reference. Each version of each entry of the referenced script will be executed exactly once, following the sequence defined in the referenced script and skipping any script entry that has already been executed.

Property | Required | Updatable | May Require Restart | Type | Description

id integer The id is immutable and must be unique across all the SGScript entries. It is replaced by the operator and is used to identify the SGScript entry.
sgScript string A reference to an SGScript.

SGShardedCluster.spec.shards.metadata

↩ Parent

Metadata information for resources created by the shards clusters.

Property | Required | Updatable | May Require Restart | Type | Description

annotations object Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
labels object Custom Kubernetes labels to be passed to resources created and managed by StackGres.
SGShardedCluster.spec.shards.metadata.annotations

↩ Parent

Custom Kubernetes annotations to be passed to resources created and managed by StackGres.

Property | Required | Updatable | May Require Restart | Type | Description

allResources map[string]string Annotations to attach to any resource created or managed by StackGres.
clusterPods map[string]string Annotations to attach to pods created or managed by StackGres.
primaryService map[string]string Custom Kubernetes annotations passed to the -primary service.
replicasService map[string]string Custom Kubernetes annotations passed to the -replicas service.
services map[string]string Annotations to attach to all services created or managed by StackGres.
SGShardedCluster.spec.shards.metadata.labels

↩ Parent

Custom Kubernetes labels to be passed to resources created and managed by StackGres.

Property | Required | Updatable | May Require Restart | Type | Description

clusterPods map[string]string Labels to attach to Pods created or managed by StackGres.
services map[string]string Labels to attach to Services and Endpoints created or managed by StackGres.

SGShardedCluster.spec.shards.overrides[index]

↩ Parent

Any shard can be overridden by this section.

Property | Required | Updatable | May Require Restart | Type | Description

index integer Identifier of the shard StackGres cluster to override (starting from 0).

Minimum: 0
Maximum: 15
configurations object Shards custom configurations.
instancesPerCluster integer Number of StackGres instances per shard’s StackGres cluster. Each instance contains one Postgres server. Out of all of the Postgres servers, one is elected as the primary, the rest remain as read-only replicas.

Minimum: 1
Maximum: 16
managedSql object This section allows referencing SQL scripts that will be applied to the cluster live.
metadata object Metadata information for resources created by the shards clusters.
pods object Cluster pod’s configuration.
replication object This section allows to configure the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

sgInstanceProfile string Name of the SGInstanceProfile. A SGInstanceProfile defines CPU and memory limits. Must exist before creating a cluster. When no profile is set, a default (currently: 1 core, 2 GiB RAM) one is used.
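
As a sketch, the following hypothetical overrides section gives shard 0 two instances and a different instance profile (size-m is an assumed SGInstanceProfile name), while the remaining shards keep the defaults from .spec.shards:

spec:
  shards:
    clusters: 3
    instancesPerCluster: 1
    overrides:
      - index: 0
        instancesPerCluster: 2
        sgInstanceProfile: size-m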
SGShardedCluster.spec.shards.overrides[index].configurations

↩ Parent

Shards custom configurations.

Property | Required | Updatable | May Require Restart | Type | Description

sgPoolingConfig string Name of the SGPoolingConfig used for this cluster. Each pod contains a sidecar with a connection pooler (currently: PgBouncer).

If not set, a default configuration will be used. Disabling connection pooling altogether is possible by setting the disableConnectionPooling property of the pods object to true.

sgPostgresConfig string Name of the SGPostgresConfig used for the cluster. It must exist. When not set, a default Postgres config, for the major version selected, is used.
SGShardedCluster.spec.shards.overrides[index].managedSql

↩ Parent

This section allows referencing SQL scripts that will be applied to the cluster live.

Property
Required
Updatable
May Require Restart
Type
Description

continueOnSGScriptError boolean If true, a failing entry in any SGScript will not prevent subsequent SGScript entries from being executed. Defaults to false.
scripts []object A list of script references that will be executed in sequence.
SGShardedCluster.spec.shards.overrides[index].managedSql.scripts[index]

↩ Parent

A script reference. Each version of each entry of the referenced script will be executed exactly once, following the sequence defined in the referenced script and skipping any script entry that has already been executed.

Property
Required
Updatable
May Require Restart
Type
Description

id integer The id is immutable and must be unique across all the SGScript entries. It is filled in by the operator and is used to identify the SGScript entry.
sgScript string A reference to an SGScript.
SGShardedCluster.spec.shards.overrides[index].metadata

↩ Parent

Metadata information from shards cluster created resources.

Property
Required
Updatable
May Require Restart
Type
Description

annotations object Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
labels object Custom Kubernetes labels to be passed to resources created and managed by StackGres.
SGShardedCluster.spec.shards.overrides[index].metadata.annotations

↩ Parent

Custom Kubernetes annotations to be passed to resources created and managed by StackGres.

Property
Required
Updatable
May Require Restart
Type
Description

allResources map[string]string Annotations to attach to any resource created or managed by StackGres.
clusterPods map[string]string Annotations to attach to pods created or managed by StackGres.
primaryService map[string]string Custom Kubernetes annotations passed to the -primary service.
replicasService map[string]string Custom Kubernetes annotations passed to the -replicas service.
services map[string]string Annotations to attach to all services created or managed by StackGres.
SGShardedCluster.spec.shards.overrides[index].metadata.labels

↩ Parent

Custom Kubernetes labels to be passed to resources created and managed by StackGres.

Property
Required
Updatable
May Require Restart
Type
Description

clusterPods map[string]string Labels to attach to Pods created or managed by StackGres.
services map[string]string Labels to attach to Services and Endpoints created or managed by StackGres.
SGShardedCluster.spec.shards.overrides[index].pods

↩ Parent

Cluster pod’s configuration.

Property
Required
Updatable
May Require Restart
Type
Description

customContainers []object A list of custom application containers that run within the shards cluster’s Pods.

The names used in this section will be prefixed with the string custom-, so that when referencing them in the .spec.containers section of SGInstanceProfile the names have to be prepended with the same prefix.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customInitContainers []object A list of custom application init containers that run within the shards cluster’s Pods. The custom init containers will run, following the defined sequence, at the end of the cluster’s Pods init containers.

The names used in this section will be prefixed with the string custom-, so that when referencing them in the .spec.containers section of SGInstanceProfile the names have to be prepended with the same prefix.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core

customVolumes []object A list of custom volumes that may be used along with any container defined in customInitContainers or customContainers sections for the shards.

The names used in this section will be prefixed with the string custom-, so that when referencing them in the customInitContainers or customContainers sections the names have to be prepended with the same prefix (see the sketch after this table).

Only the following volume types are allowed: configMap, downwardAPI, emptyDir, gitRepo, glusterfs, hostPath, nfs, projected and secret

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#volume-v1-core

disableConnectionPooling boolean If set to true, avoids creating a connection pooling (using PgBouncer) sidecar.

Changing this field may require a restart.

disableMetricsExporter boolean If set to true, avoids creating the Prometheus exporter sidecar. Recommended when there’s no intention to use Prometheus for monitoring.
disablePostgresUtil boolean If set to true, avoids creating the postgres-util sidecar. This sidecar contains usual Postgres administration utilities that are not present in the main (patroni) container, like psql. Only disable if you know what you are doing.

Changing this field may require a restart.

managementPolicy string managementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.
persistentVolume object Pod’s persistent volume configuration.
resources object Pod custom resources configuration.
scheduling object Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.
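
For illustration, a sketch of the custom- prefix rule using hypothetical names: the volume defined as scratch is referenced as custom-scratch from the custom container, and the container defined as sidecar would be referenced as custom-sidecar in the .spec.containers section of an SGInstanceProfile:

spec:
  shards:
    overrides:
    - index: 0
      pods:
        customVolumes:
        - name: scratch
          emptyDir: {}
        customContainers:
        - name: sidecar
          image: busybox
          command: ['sh', '-c', 'tail -f /dev/null']
          volumeMounts:
          - name: custom-scratch
            mountPath: /scratch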

SGShardedCluster.spec.shards.overrides[index].pods.persistentVolume

↩ Parent

Pod’s persistent volume configuration.

Property
Required
Updatable
May Require Restart
Type
Description

size string Size of the PersistentVolume set for each instance of the cluster. This size is specified either in Mebibytes, Gibibytes or Tebibytes (multiples of 2^20, 2^30 or 2^40, respectively).
storageClass string Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolumes for the instances of the cluster.
SGShardedCluster.spec.shards.overrides[index].pods.resources

↩ Parent

Pod custom resources configuration.

Property
Required
Updatable
May Require Restart
Type
Description

disableResourcesRequestsSplitFromTotal boolean When set to true the resources requests values in fields SGInstanceProfile.spec.requests.cpu and SGInstanceProfile.spec.requests.memory will represent the resources requests of the patroni container instead of the total resources requests calculated by adding the resources requests of all the containers (including the patroni container).

Changing this field may require a restart.

enableClusterLimitsRequirements boolean When enabled, resource limits for containers other than the patroni container will be set just like for the patroni container, as specified in the SGInstanceProfile.

Changing this field may require a restart.

SGShardedCluster.spec.shards.overrides[index].pods.scheduling

↩ Parent

Pod custom scheduling, affinity and topology spread constraints configuration.

Changing this field may require a restart.

Property
Required
Updatable
May Require Restart
Type
Description

backup object Backup Pod custom scheduling and affinity configuration.
nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

topologySpreadConstraints []object TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.
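
For illustration, a sketch of a scheduling configuration using hypothetical node labels and taints:

spec:
  shards:
    overrides:
    - index: 0
      pods:
        scheduling:
          nodeSelector:
            disktype: ssd
          tolerations:
          - key: 'dedicated'
            operator: 'Equal'
            value: 'postgres'
            effect: 'NoSchedule'
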
SGShardedCluster.spec.shards.overrides[index].pods.scheduling.backup

↩ Parent

Backup Pod custom scheduling and affinity configuration.

Property
Required
Updatable
May Require Restart
Type
Description

nodeAffinity object Node affinity is a group of node affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core

nodeSelector map[string]string NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

podAffinity object Pod affinity is a group of inter pod affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core

podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core

priorityClassName string Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
tolerations []object If specified, the pod’s tolerations.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core

SGShardedCluster.spec.shards.overrides[index].pods.scheduling.topologySpreadConstraints[index]

↩ Parent

TopologySpreadConstraint specifies how to spread matching pods among the given topology.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#topologyspreadconstraint-v1-core

Property
Required
Updatable
May Require Restart
Type
Description

maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It’s a required field. Default value is 1 and 0 is not allowed.

Format: int32
topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.


labelSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
matchLabelKeys []string MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.

This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).

Format: int32

nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.

If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
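
For illustration, a fragment of the pods scheduling section with a constraint that spreads Pods evenly across zones, assuming a hypothetical app: my-sharded-cluster Pod label:

scheduling:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-sharded-cluster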

SGShardedCluster.spec.shards.overrides[index].pods.scheduling.topologySpreadConstraints[index].labelSelector

↩ Parent

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

Property
Required
Updatable
May Require Restart
Type
Description

matchExpressions []object matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchLabels map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
SGShardedCluster.spec.shards.overrides[index].pods.scheduling.topologySpreadConstraints[index].labelSelector.matchExpressions[index]

↩ Parent

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

Property
Required
Updatable
May Require Restart
Type
Description

key string key is the label key that the selector applies to.
operator string operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
values []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
SGShardedCluster.spec.shards.overrides[index].replication

↩ Parent

This section allows configuring the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

Property
Required
Updatable
May Require Restart
Type
Description

mode string The replication mode applied to the whole cluster. Possible values are:

  • async (default)
  • sync
  • strict-sync
  • sync-all
  • strict-sync-all

async

When in asynchronous mode the cluster is allowed to lose some committed transactions. When the primary server fails or becomes unavailable for any other reason a sufficiently healthy standby will automatically be promoted to primary. Any transactions that have not been replicated to that standby remain in a “forked timeline” on the primary, and are effectively unrecoverable (the data is still there, but recovering it requires a manual recovery effort by data recovery specialists).

sync

When in synchronous mode a standby will not be promoted unless it is certain that the standby contains all transactions that may have returned a successful commit status to client (clients can change the behavior per transaction using PostgreSQL’s synchronous_commit setting. Transactions with synchronous_commit values of off and local may be lost on fail over, but will not be blocked by replication delays). This means that the system may be unavailable for writes even though some servers are available. System administrators can still use manual failover commands to promote a standby even if it results in transaction loss.

Synchronous mode does not guarantee multi node durability of commits under all circumstances. When no suitable standby is available, primary server will still accept writes, but does not guarantee their replication. When the primary fails in this mode no standby will be promoted. When the host that used to be the primary comes back it will get promoted automatically, unless system administrator performed a manual failover. This behavior makes synchronous mode usable with 2 node clusters.

When synchronous mode is used and a standby crashes, commits will block until the primary is switched to standalone mode. Manually shutting down or restarting a standby will not cause a commit service interruption. Standby will signal the primary to release itself from synchronous standby duties before PostgreSQL shutdown is initiated.

strict-sync

When it is absolutely necessary to guarantee that each write is stored durably on at least two nodes, use the strict synchronous mode. This mode prevents synchronous replication to be switched off on the primary when no synchronous standby candidates are available. As a downside, the primary will not be available for writes (unless the Postgres transaction explicitly turns off synchronous_mode parameter), blocking all client write requests until at least one synchronous replica comes up.

Note: Because of the way synchronous replication is implemented in PostgreSQL it is still possible to lose transactions even when using strict synchronous mode. If the PostgreSQL backend is cancelled while waiting to acknowledge replication (as a result of packet cancellation due to client timeout or backend failure) transaction changes become visible for other backends. Such changes are not yet replicated and may be lost in case of standby promotion.

sync-all

The same as sync but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

strict-sync-all

The same as strict-sync but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

Default: async

syncInstances integer Number of synchronous standby instances. Must be less than the total number of instances. It is set to 1 by default. Only settable if mode is sync or strict-sync.

Minimum: 1
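
For illustration, a sketch that enables synchronous replication with one synchronous standby for the first shard (this requires at least 2 instances in that shard's cluster):

spec:
  shards:
    overrides:
    - index: 0
      replication:
        mode: sync
        syncInstances: 1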

SGShardedCluster.spec.shards.replication

↩ Parent

This section allows configuring the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

Property
Required
Updatable
May Require Restart
Type
Description

mode string The replication mode applied to the whole cluster. Possible values are:

  • async (default)
  • sync
  • strict-sync
  • sync-all
  • strict-sync-all

async

When in asynchronous mode the cluster is allowed to lose some committed transactions. When the primary server fails or becomes unavailable for any other reason a sufficiently healthy standby will automatically be promoted to primary. Any transactions that have not been replicated to that standby remain in a “forked timeline” on the primary, and are effectively unrecoverable (the data is still there, but recovering it requires a manual recovery effort by data recovery specialists).

sync

When in synchronous mode a standby will not be promoted unless it is certain that the standby contains all transactions that may have returned a successful commit status to client (clients can change the behavior per transaction using PostgreSQL’s synchronous_commit setting. Transactions with synchronous_commit values of off and local may be lost on fail over, but will not be blocked by replication delays). This means that the system may be unavailable for writes even though some servers are available. System administrators can still use manual failover commands to promote a standby even if it results in transaction loss.

Synchronous mode does not guarantee multi node durability of commits under all circumstances. When no suitable standby is available, primary server will still accept writes, but does not guarantee their replication. When the primary fails in this mode no standby will be promoted. When the host that used to be the primary comes back it will get promoted automatically, unless system administrator performed a manual failover. This behavior makes synchronous mode usable with 2 node clusters.

When synchronous mode is used and a standby crashes, commits will block until the primary is switched to standalone mode. Manually shutting down or restarting a standby will not cause a commit service interruption. Standby will signal the primary to release itself from synchronous standby duties before PostgreSQL shutdown is initiated.

strict-sync

When it is absolutely necessary to guarantee that each write is stored durably on at least two nodes, use the strict synchronous mode. This mode prevents synchronous replication to be switched off on the primary when no synchronous standby candidates are available. As a downside, the primary will not be available for writes (unless the Postgres transaction explicitly turns off synchronous_mode parameter), blocking all client write requests until at least one synchronous replica comes up.

Note: Because of the way synchronous replication is implemented in PostgreSQL it is still possible to lose transactions even when using strict synchronous mode. If the PostgreSQL backend is cancelled while waiting to acknowledge replication (as a result of packet cancellation due to client timeout or backend failure) transaction changes become visible for other backends. Such changes are not yet replicated and may be lost in case of standby promotion.

sync-all

The same as sync but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

strict-sync-all

The same as strict-sync but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

Default: async

syncInstances integer Number of synchronous standby instances. Must be less than the total number of instances. It is set to 1 by default. Only settable if mode is sync or strict-sync.

Minimum: 1

SGShardedCluster.spec.configurations

↩ Parent

Sharded cluster custom configurations.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  configurations:
    backups:
    - sgObjectStorage: 'backupconf'

Property
Required
Updatable
May Require Restart
Type
Description

backups []object List of sharded backup configurations for this SGShardedCluster
binding object This section allows specifying the properties of the Service Binding spec for the provisioned service. If not specified, defaults will be used.

For more information see https://servicebinding.io/spec/core/1.0.0/

credentials object Allows specifying custom credentials for Postgres users and the Patroni REST API

Changing this field may require a restart.

SGShardedCluster.spec.configurations.backups[index]

↩ Parent

Sharded backup configuration for this SGShardedCluster

Property
Required
Updatable
May Require Restart
Type
Description

sgObjectStorage string Name of the SGObjectStorage to use for the cluster. It defines the location in which the backups will be stored.
compression enum Specifies the backup compression algorithm. Possible options are: lz4, lzma, brotli. The default method is lz4. LZ4 is the fastest method, but compression ratio is the worst. LZMA is way slower, but it compresses backups about 6 times better than LZ4. Brotli is a good trade-off between speed and compression ratio, being about 3 times better than LZ4.

Enum: lz4, lzma, brotli
cronSchedule string Continuous Archiving backups are composed of periodic base backups and all the WAL segments produced in between those base backups for the coordinator and each shard. This parameter specifies at what time and with what frequency to start performing a new base backup.

Use cron syntax (m h dom mon dow) for this parameter, i.e., 5 values separated by spaces:

  • m: minute, 0 to 59.
  • h: hour, 0 to 23.
  • dom: day of month, 1 to 31 (recommended not to set it higher than 28).
  • mon: month, 1 to 12.
  • dow: day of week, 0 to 7 (0 and 7 both represent Sunday).

Also ranges of values (start-end), the symbol * (meaning first-last) or even */N, where N is a number, meaning “every N”, may be used. All times are UTC. It is recommended to avoid 00:00 as the base backup time, to avoid overlapping with any other external operations happening at this time.

If not set, full backups are never performed automatically.

paths []string The paths where the backups are stored. If not set, this field is filled in by the operator.

When provided, it indicates where the backups and WAL files will be stored.

The first path indicates the coordinator’s path and the remaining paths indicate the shards’ paths.

performance object Configuration that affects the backup network and disk usage performance.
retention integer When an automatic retention policy is defined to delete old base backups, this parameter specifies the number of base backups to keep, in a sliding window.

Consequently, the time range covered by backups is periodicity*retention, where periodicity is the separation between backups as specified by the cronSchedule property.

Default is 5.

Minimum: 1
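
For illustration, a fuller sketch of a backup configuration (the schedule, retention and compression values are arbitrary examples):

spec:
  configurations:
    backups:
    - sgObjectStorage: 'backupconf'
      cronSchedule: '0 5 * * *'
      retention: 10
      compression: 'brotli'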

SGShardedCluster.spec.configurations.backups[index].performance

↩ Parent

Configuration that affects the backup network and disk usage performance.

Property
Required
Updatable
May Require Restart
Type
Description

downloadConcurrency integer Backup storage may use several concurrent streams to read the data. This parameter configures the number of parallel streams to use. By default, it’s set to the minimum between the number of files to read and 10.

Minimum: 1
maxDiskBandwidth integer Maximum disk read I/O when performing a backup. In bytes (per second).
maxNetworkBandwidth integer Maximum storage upload bandwidth used when storing a backup. In bytes (per second).
uploadConcurrency integer Backup storage may use several concurrent streams to store the data. This parameter configures the number of parallel streams to use. By default, it’s set to 16.

Minimum: 1
uploadDiskConcurrency integer Backup storage may use several concurrent streams to store the data. This parameter configures the number of parallel streams to use when reading from disk. By default, it’s set to 1.

Minimum: 1

SGShardedCluster.spec.configurations.binding

↩ Parent

This section allows specifying the properties of the Service Binding spec for the provisioned service. If not specified, defaults will be used.

For more information see https://servicebinding.io/spec/core/1.0.0/

Property
Required
Updatable
May Require Restart
Type
Description

database string Allows specifying the database name. If not specified, the default value postgres is used.
password object Allows referencing a Secret that contains the user’s password. If not specified, the superuser password will be used.
provider string The name of the custom provider. If not specified, the default value stackgres is used.
username string Allows specifying the username. If not specified, the superuser username will be used.
SGShardedCluster.spec.configurations.binding.password

↩ Parent

Allows referencing a Secret that contains the user’s password. If not specified, the superuser password will be used.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the Secret
name string The name of the Secret
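
For illustration, a sketch of a binding configuration using hypothetical names:

spec:
  configurations:
    binding:
      database: 'mydb'
      username: 'myuser'
      password:
        name: 'binding-secret'
        key: 'password'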

SGShardedCluster.spec.configurations.credentials

↩ Parent

Allows specifying custom credentials for Postgres users and the Patroni REST API

Changing this field may require a restart.

Property
Required
Updatable
May Require Restart
Type
Description

patroni object Kubernetes SecretKeySelectors that contain the credentials for the Patroni REST API.

Changing this field may require a restart.

users object Kubernetes SecretKeySelectors that contain the credentials of the users.

Changing this field may require a manual modification of the database users to reflect the new values specified.

In particular, you may have to create the users if a username is changed, or alter a password if it is changed. Here are the SQL commands to perform such operations (replace the default usernames with the new ones and *** with their respective passwords):

  • Superuser username changed:
CREATE ROLE postgres;
  • Superuser password changed:
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD '***';
  • Replication username changed:
CREATE ROLE replicator;
  • Replication password changed:
ALTER ROLE replicator WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN REPLICATION NOBYPASSRLS PASSWORD '***';
  • Authenticator username changed:
CREATE ROLE authenticator;
  • Authenticator password changed:
ALTER ROLE authenticator WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD '***';

Changing this field may require a restart.
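
For illustration, a sketch that takes the superuser credentials and the Patroni REST API password from hypothetical Secrets:

spec:
  configurations:
    credentials:
      patroni:
        restApiPassword:
          name: 'patroni-secret'
          key: 'restApiPassword'
      users:
        superuser:
          username:
            name: 'pg-secret'
            key: 'username'
          password:
            name: 'pg-secret'
            key: 'password'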

SGShardedCluster.spec.configurations.credentials.patroni

↩ Parent

Kubernetes SecretKeySelectors that contain the credentials for the Patroni REST API.

Changing this field may require a restart.

Property
Required
Updatable
May Require Restart
Type
Description

restApiPassword object A Kubernetes SecretKeySelector that contains the password for the patroni REST API.
SGShardedCluster.spec.configurations.credentials.patroni.restApiPassword

↩ Parent

A Kubernetes SecretKeySelector that contains the password for the patroni REST API.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users

↩ Parent

Kubernetes SecretKeySelectors that contain the credentials of the users.

Changing this field may require a manual modification of the database users to reflect the new values specified.

In particular, you may have to create the users if a username is changed, or alter a password if it is changed. Here are the SQL commands to perform such operations (replace the default usernames with the new ones and *** with their respective passwords):

  • Superuser username changed:
CREATE ROLE postgres;
  • Superuser password changed:
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD '***';
  • Replication username changed:
CREATE ROLE replicator;
  • Replication password changed:
ALTER ROLE replicator WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN REPLICATION NOBYPASSRLS PASSWORD '***';
  • Authenticator username changed:
CREATE ROLE authenticator;
  • Authenticator password changed:
ALTER ROLE authenticator WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD '***';

Changing this field may require a restart.

Property
Required
Updatable
May Require Restart
Type
Description

authenticator object A Kubernetes SecretKeySelector that contains the credentials of the authenticator user used by pgbouncer to authenticate other users.
replication object A Kubernetes SecretKeySelector that contains the credentials of the replication user used to replicate from the primary cluster and from replicas of this cluster.
superuser object A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user).
SGShardedCluster.spec.configurations.credentials.users.authenticator

↩ Parent

A Kubernetes SecretKeySelector that contains the credentials of the authenticator user used by pgbouncer to authenticate other users.

Property
Required
Updatable
May Require Restart
Type
Description

password object A Kubernetes SecretKeySelector that contains the password of the user.
username object A Kubernetes SecretKeySelector that contains the username of the user.
SGShardedCluster.spec.configurations.credentials.users.authenticator.password

↩ Parent

A Kubernetes SecretKeySelector that contains the password of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users.authenticator.username

↩ Parent

A Kubernetes SecretKeySelector that contains the username of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users.replication

↩ Parent

A Kubernetes SecretKeySelector that contains the credentials of the replication user used to replicate from the primary cluster and from replicas of this cluster.

Property
Required
Updatable
May Require Restart
Type
Description

password object A Kubernetes SecretKeySelector that contains the password of the user.
username object A Kubernetes SecretKeySelector that contains the username of the user.
SGShardedCluster.spec.configurations.credentials.users.replication.password

↩ Parent

A Kubernetes SecretKeySelector that contains the password of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users.replication.username

↩ Parent

A Kubernetes SecretKeySelector that contains the username of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users.superuser

↩ Parent

A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user).

Property
Required
Updatable
May Require Restart
Type
Description

password object A Kubernetes SecretKeySelector that contains the password of the user.
username object A Kubernetes SecretKeySelector that contains the username of the user.
SGShardedCluster.spec.configurations.credentials.users.superuser.password

↩ Parent

A Kubernetes SecretKeySelector that contains the password of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.
SGShardedCluster.spec.configurations.credentials.users.superuser.username

↩ Parent

A Kubernetes SecretKeySelector that contains the username of the user.

Property
Required
Updatable
May Require Restart
Type
Description

key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent.

SGShardedCluster.spec.distributedLogs

↩ Parent

StackGres features a functionality for all pods to send Postgres, Patroni and PgBouncer logs to a central (distributed) location, which is in turn another Postgres database. Logs can then be accessed via SQL interface or from the web UI. This section controls whether to enable this feature or not. If not enabled, logs are sent to the pod’s standard output.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  distributedLogs:
    sgDistributedLogs: distributedlogs

Property
Required
Updatable
May Require Restart
Type
Description

retention string Define a retention window with the syntax <integer> (minutes|hours|days|months) in which log entries are kept. Log entries will be removed once they are older than twice the specified retention window.

When this field is changed, the retention will be applied only to log entries that are newer than the end of the previously specified retention window. If no retention window was previously specified, it is considered to be 7 days. This means that, with a previous retention window of 7 days, the new retention configuration will apply after the UTC timestamp calculated with: SELECT date_trunc('days', now() at time zone 'UTC') - INTERVAL '7 days'.

sgDistributedLogs string Name of the SGDistributedLogs to use for this cluster. It must exist.
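
For illustration, the same example extended with a retention window of 7 days (an arbitrary value):

spec:
  distributedLogs:
    sgDistributedLogs: distributedlogs
    retention: '7 days'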

SGShardedCluster.spec.initialData

↩ Parent

Sharded cluster initialization data options. Sharded cluster may be initialized empty, or from a sharded backup restoration.

This field can only be set on creation.

Property
Required
Updatable
May Require Restart
Type
Description

restore object This section allows restoring a sharded cluster from an existing copy of the metadata and data.

SGShardedCluster.spec.initialData.restore

↩ Parent

This section allows restoring a sharded cluster from an existing copy of the metadata and data.

Property
Required
Updatable
May Require Restart
Type
Description

downloadDiskConcurrency integer The backup fetch process may fetch several streams in parallel. Parallel fetching is enabled when set to a value larger than one.

Minimum: 1

fromBackup object From which sharded backup to restore and how the process is configured


SGShardedCluster.spec.initialData.restore.fromBackup

↩ Parent

From which sharded backup to restore and how the process is configured

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  initialData:
    restore:
      fromBackup:
        name: stackgres-backup
      downloadDiskConcurrency: 1

Property
Required
Updatable
May Require Restart
Type
Description

name string When set to the name of an existing SGShardedBackup, the sharded cluster is initialized by restoring the backup data to it. If not set, the sharded cluster is initialized empty. The selected sharded backup must be in the same namespace.
pointInTimeRecovery object It is possible to restore the database to its state at any time since your backup was taken using Point-in-Time Recovery (PITR), as long as no backup newer than the requested restoration date exists.

Point-in-Time Recovery (PITR) allows restoring the database state to an arbitrary point in time in the past, as long as you specify a backup older than the requested restoration date and no backup newer than the same restoration date exists.

See also: https://www.postgresql.org/docs/current/continuous-archiving.html

targetInclusive boolean Specify the recovery_target_inclusive to stop recovery just after the specified recovery target (true), or just before the recovery target (false). Applies when targetLsn, pointInTimeRecovery, or targetXid is specified. This setting controls whether transactions having exactly the target WAL location (LSN), commit time, or transaction ID, respectively, will be included in the recovery. Default is true.
SGShardedCluster.spec.initialData.restore.fromBackup.pointInTimeRecovery

↩ Parent

It is possible to restore the database to its state at any time since your backup was taken using Point-in-Time Recovery (PITR), as long as no backup newer than the requested restoration date exists.

Point-in-Time Recovery (PITR) allows restoring the database state to an arbitrary point in time in the past, as long as you specify a backup older than the requested restoration date and no backup newer than the same restoration date exists.

See also: https://www.postgresql.org/docs/current/continuous-archiving.html

Property
Required
Updatable
May Require Restart
Type
Description

restoreToTimestamp string An ISO 8601 date in UTC indicating the point in time to which the database has to be restored.
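
For illustration, a sketch of a PITR restore (the timestamp is an arbitrary example):

spec:
  initialData:
    restore:
      fromBackup:
        name: stackgres-backup
        targetInclusive: true
        pointInTimeRecovery:
          restoreToTimestamp: '2023-03-01T10:00:00Z'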

SGShardedCluster.spec.metadata

↩ Parent

Metadata information from any cluster created resources.

Property
Required
Updatable
May Require Restart
Type
Description

annotations object Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
labels object Custom Kubernetes labels to be passed to resources created and managed by StackGres. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

SGShardedCluster.spec.metadata.annotations

↩ Parent

Custom Kubernetes annotations to be passed to resources created and managed by StackGres.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  metadata:
    annotations:
      clusterPods:
        customAnnotations: customAnnotationValue
      primaryService:
        customAnnotations: customAnnotationValue
      replicasService:
        customAnnotations: customAnnotationValue

Property
Required
Updatable
May Require Restart
Type
Description

allResources map[string]string Annotations to attach to any resource created or managed by StackGres.
clusterPods map[string]string Annotations to attach to pods created or managed by StackGres.
primaryService map[string]string Custom Kubernetes annotations passed to the -primary service.
replicasService map[string]string Custom Kubernetes annotations passed to the -replicas service.
services map[string]string Annotations to attach to all services created or managed by StackGres.

SGShardedCluster.spec.metadata.labels

↩ Parent

Custom Kubernetes labels to be passed to resources created and managed by StackGres.

Example:

apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: stackgres
spec:
  metadata:
    labels:
      clusterPods:
        customLabel: customLabelValue
      services:
        customLabel: customLabelValue

Property
Required
Updatable
May Require Restart
Type
Description

clusterPods map[string]string Labels to attach to Pods created or managed by StackGres.
services map[string]string Labels to attach to Services and Endpoints created or managed by StackGres.

SGShardedCluster.spec.nonProductionOptions

↩ Parent

Property
Required
Updatable
May Require Restart
Type
Description

disableClusterPodAntiAffinity boolean It is a best practice, on non-containerized environments, when running production workloads, to run each database server on a different server (virtual or physical), i.e., not to co-locate more than one database server per host.

The same best practice applies to databases on containers. By default, StackGres will not allow running more than one StackGres pod on a given Kubernetes node. Set this property to true to allow more than one StackGres pod per node.

This property default value may be changed depending on the value of field .spec.profile.

disableClusterResourceRequirements boolean It is a best practice, on containerized environments, when running production workloads, to enforce container’s resources requirements.

By default, StackGres will configure resource requirements for all the containers. Set this property to true to prevent StackGres from setting container’s resources requirements (except for patroni container, see disablePatroniResourceRequirements).

This property default value may be changed depending on the value of field .spec.profile.

disablePatroniResourceRequirements boolean It is a best practice, on containerized environments, when running production workloads, to enforce container’s resources requirements.

The same best practice applies to databases on containers. By default, StackGres will configure resource requirements for patroni container. Set this property to true to prevent StackGres from setting patroni container’s resources requirement.

This property default value may be changed depending on the value of field .spec.profile.

enableSetClusterCpuRequests boolean Deprecated: this value is ignored and can be considered as always true.

On containerized environments, when running production workloads, enforcing the container’s CPU requests to be equal to the limits allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less CPU than it requires. It also allows setting a static CPU management policy that guarantees a pod the exclusive usage of CPUs on the node.

By default, StackGres will configure cpu requirements to have the same limit and request for all the containers. Set this property to true to prevent StackGres from setting container’s cpu requirements request equal to the limit (except for the patroni container, see enableSetPatroniCpuRequests) when .spec.requests.containers.<container name>.cpu or .spec.requests.initContainers.<container name>.cpu is configured in the referenced SGInstanceProfile.

Default: false

enableSetClusterMemoryRequests boolean Deprecated: this value is ignored and can be considered as always true.

On containerized environments, when running production workloads, enforcing the container’s memory requests to be equal to the limits allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less memory than it requires.

By default, StackGres will configure memory requirements to have the same limit and request for all the containers. Set this property to true to prevent StackGres from setting container’s memory requirements request equal to the limit (except for the patroni container, see enableSetPatroniMemoryRequests) when .spec.requests.containers.<container name>.memory or .spec.requests.initContainers.<container name>.memory is configured in the referenced SGInstanceProfile.

Default: false

enableSetPatroniCpuRequests boolean Deprecated: this value is ignored and can be considered as always true.

On containerized environments, when running production workloads, enforcing the container’s CPU requests to be equal to the limits allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less CPU than it requires. It also allows setting a static CPU management policy that guarantees a pod the exclusive usage of CPUs on the node.

By default, StackGres will configure cpu requirements to have the same limit and request for the patroni container. Set this property to true to prevent StackGres from setting the patroni container’s cpu requirements request equal to the limit when .spec.requests.cpu is configured in the referenced SGInstanceProfile.

Default: false

enableSetPatroniMemoryRequests boolean Deprecated: this value is ignored and can be considered as always true.

On containerized environments, when running production workloads, enforcing the container’s memory requests to be equal to the limits allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less memory than it requires.

By default, StackGres will configure memory requirements to have the same limit and request for the patroni container. Set this property to true to prevent StackGres from setting the patroni container’s memory requirements request equal to the limit when .spec.requests.memory is configured in the referenced SGInstanceProfile.

Default: false

enabledFeatureGates []string A list of StackGres feature gates to enable (not suitable for a production environment).

Available feature gates are:

  • babelfish-flavor: Allows using the Babelfish flavor.
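
For illustration, a sketch of a non-production setup that allows co-locating Pods on the same node (for example, on a single-node test cluster):

spec:
  nonProductionOptions:
    disableClusterPodAntiAffinity: true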

SGShardedCluster.spec.postgresServices

↩ Parent

Kubernetes services created or managed by StackGres.

Property
Required
Updatable
May Require Restart
Type
Description

coordinator object Configuration for the coordinator services
shards object Configuration for the shards services
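
For illustration, a sketch that enables the -reads service and exposes the coordinator primary through a load balancer, assuming the service objects accept the standard Kubernetes ServiceSpec type field:

spec:
  postgresServices:
    coordinator:
      any:
        enabled: true
      primary:
        type: LoadBalancer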

SGShardedCluster.spec.postgresServices.coordinator

↩ Parent

Configuration for the coordinator services

Property
Required
Updatable
May Require Restart
Type
Description

any object Configure the coordinator service that targets any instance of the coordinator. The service has the same name as the SGShardedCluster plus the -reads suffix.

It provides a stable connection (regardless of node failures) to any Postgres server of the coordinator cluster. Servers are load-balanced via this service.

See also https://kubernetes.io/docs/concepts/services-networking/service/

customPorts []object The list of custom ports that will be exposed by the coordinator services.

The names of custom ports will be prefixed with the string custom- so they do not conflict with ports defined for the coordinator services.

The names of target ports will be prefixed with the string custom- so that the ports that can be referenced in this section will be only those defined under the .spec.pods.customContainers[].ports sections, where names are also prepended with the same prefix.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#serviceport-v1-core

primary object Configure the coordinator service that targets the primary of the coordinator. The service has the same name as the SGShardedCluster.

It provides a stable connection (regardless of primary failures or switchovers) to the read-write Postgres server of the coordinator cluster.

See also https://kubernetes.io/docs/concepts/services-networking/service/

SGShardedCluster.spec.postgresServices.coordinator.any

↩ Parent

Configure the coordinator service that targets any instance of the coordinator. The service has the same name as the SGShardedCluster plus the -reads suffix.

It provides a stable connection (regardless of node failures) to any Postgres server of the coordinator cluster. Servers are load-balanced via this service.

See also https://kubernetes.io/docs/concepts/services-networking/service/

Property
Required
Updatable
May Require Restart
Type
Description

allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
enabled boolean Specify if the service should be created or not.
externalIPs []string externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to “Local”, the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, “Cluster”, uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get “Cluster” semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.


healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set.

Format: int32
internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features).
ipFamilies []string IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName.

This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.

ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be “SingleStack” (a single IP family), “PreferDualStack” (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or “RequireDualStack” (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.
loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. “internal-vip” or “example.com/internal-vip”. Unprefixed names are reserved for end-users. This field can only be set when the Service type is ‘LoadBalancer’. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type ‘LoadBalancer’. Once set, it can not be changed. This field will be wiped when a service is updated to a non ‘LoadBalancer’ type.
loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.
loadBalancerSourceRanges []string If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
sessionAffinity string Supports “ClientIP” and “None”. Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies


sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity.
type enum type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. "NodePort" builds on ClusterIP and allocates a port on every node. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud). More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

Enum: ClusterIP, LoadBalancer, NodePort
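
Example (a sketch combining the LoadBalancer-related fields above; the CIDR is illustrative):

spec:
  postgresServices:
    coordinator:
      any:
        type: LoadBalancer
        allocateLoadBalancerNodePorts: false
        externalTrafficPolicy: Local   # preserve the client source IP; route only to node-local endpoints
        loadBalancerSourceRanges:
        - 10.0.0.0/8                   # example CIDR; adjust to your network
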
SGShardedCluster.spec.postgresServices.coordinator.any.sessionAffinityConfig

↩ Parent

SessionAffinityConfig represents the configurations of session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


clientIP object ClientIPConfig represents the configurations of Client IP based session affinity.
SGShardedCluster.spec.postgresServices.coordinator.any.sessionAffinityConfig.clientIP

↩ Parent

ClientIPConfig represents the configurations of Client IP based session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <=86400 (1 day) if ServiceAffinity == “ClientIP”. Default value is 10800 (3 hours).

Format: int32
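
Example (a sketch that pins client connections through the -reads service to the same instance for one hour):

spec:
  postgresServices:
    coordinator:
      any:
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 3600   # 1 hour; must be >0 and <=86400
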
SGShardedCluster.spec.postgresServices.coordinator.primary

↩ Parent

Configures the service that connects to the primary of the coordinator. The service has the same name as the SGShardedCluster.

It provides a stable connection (regardless of primary failures or switchovers) to the read-write Postgres server of the coordinator cluster.

See also https://kubernetes.io/docs/concepts/services-networking/service/

Property
Required
Updatable
May Require Restart
Type
Description


allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
enabled boolean Specifies whether the service should be created.
externalIPs []string externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to “Local”, the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, “Cluster”, uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get “Cluster” semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.


healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set.

Format: int32
internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features).
ipFamilies []string IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName.

This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.

ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be “SingleStack” (a single IP family), “PreferDualStack” (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or “RequireDualStack” (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.
loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. “internal-vip” or “example.com/internal-vip”. Unprefixed names are reserved for end-users. This field can only be set when the Service type is ‘LoadBalancer’. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type ‘LoadBalancer’. Once set, it can not be changed. This field will be wiped when a service is updated to a non ‘LoadBalancer’ type.
loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.
loadBalancerSourceRanges []string If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
sessionAffinity string Supports “ClientIP” and “None”. Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies


sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity.
type enum type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. "NodePort" builds on ClusterIP and allocates a port on every node. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud). More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

Enum: ClusterIP, LoadBalancer, NodePort
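
Example (an illustration of the dual-stack fields on the primary service, assuming a dual-stack capable cluster):

spec:
  postgresServices:
    coordinator:
      primary:
        type: ClusterIP
        ipFamilyPolicy: PreferDualStack
        ipFamilies:
        - IPv4   # the primary IP family cannot be changed once set
        - IPv6
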
SGShardedCluster.spec.postgresServices.coordinator.primary.sessionAffinityConfig

↩ Parent

SessionAffinityConfig represents the configurations of session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


clientIP object ClientIPConfig represents the configurations of Client IP based session affinity.
SGShardedCluster.spec.postgresServices.coordinator.primary.sessionAffinityConfig.clientIP

↩ Parent

ClientIPConfig represents the configurations of Client IP based session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <=86400 (1 day) if ServiceAffinity == “ClientIP”. Default value is 10800 (3 hours).

Format: int32

SGShardedCluster.spec.postgresServices.shards

↩ Parent

Configuration for the shards services

Property
Required
Updatable
May Require Restart
Type
Description


customPorts []object The list of custom ports that will be exposed by the shards services.

The names of custom ports will be prefixed with the string custom- so they do not conflict with ports defined for the shards services.

The names of target ports will be prefixed with the string custom- so that only the ports defined under the .spec.pods.customContainers[].ports sections, where names are also prepended with the same prefix, can be referenced in this section.

Changing this field may require a restart.

See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#serviceport-v1-core

primaries object Configures the service that connects to any primary in the shards. The service is named after the SGShardedCluster, with the -shards suffix appended.

It provides a stable connection (regardless of primary failures or switchovers) to read-write Postgres servers of any shard cluster. Read-write servers are load-balanced via this service.

See also https://kubernetes.io/docs/concepts/services-networking/service/
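
Example (a minimal sketch combining both shards-level properties; the custom port name and number are hypothetical):

spec:
  postgresServices:
    shards:
      primaries:
        enabled: true
      customPorts:
      - name: metrics        # exposed as custom-metrics on the shards service
        port: 9187
        targetPort: metrics  # resolved against custom-metrics in the custom containers' ports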

SGShardedCluster.spec.postgresServices.shards.primaries

↩ Parent

Configures the service that connects to any primary in the shards. The service is named after the SGShardedCluster, with the -shards suffix appended.

It provides a stable connection (regardless of primary failures or switchovers) to read-write Postgres servers of any shard cluster. Read-write servers are load-balanced via this service.

See also https://kubernetes.io/docs/concepts/services-networking/service/

Property
Required
Updatable
May Require Restart
Type
Description


allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
enabled boolean Specifies whether the service should be created.
externalIPs []string externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to “Local”, the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, “Cluster”, uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get “Cluster” semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.


healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set.

Format: int32
internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features).
ipFamilies []string IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName.

This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.

ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be “SingleStack” (a single IP family), “PreferDualStack” (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or “RequireDualStack” (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.
loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. “internal-vip” or “example.com/internal-vip”. Unprefixed names are reserved for end-users. This field can only be set when the Service type is ‘LoadBalancer’. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type ‘LoadBalancer’. Once set, it can not be changed. This field will be wiped when a service is updated to a non ‘LoadBalancer’ type.
loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.
loadBalancerSourceRanges []string If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
sessionAffinity string Supports “ClientIP” and “None”. Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies


sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity.
type enum type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. "NodePort" builds on ClusterIP and allocates a port on every node. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud). More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

Enum: ClusterIP, LoadBalancer, NodePort
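
Example (a sketch exposing the -shards service on every node's ports; whether this is appropriate depends on your network setup):

spec:
  postgresServices:
    shards:
      primaries:
        type: NodePort
        externalTrafficPolicy: Local   # traffic arriving at a node is delivered only to node-local endpoints
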
SGShardedCluster.spec.postgresServices.shards.primaries.sessionAffinityConfig

↩ Parent

SessionAffinityConfig represents the configurations of session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


clientIP object ClientIPConfig represents the configurations of Client IP based session affinity.
SGShardedCluster.spec.postgresServices.shards.primaries.sessionAffinityConfig.clientIP

↩ Parent

ClientIPConfig represents the configurations of Client IP based session affinity.

Property
Required
Updatable
May Require Restart
Type
Description


timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <=86400 (1 day) if ServiceAffinity == “ClientIP”. Default value is 10800 (3 hours).

Format: int32

SGShardedCluster.spec.replication

↩ Parent

This section allows configuring the global Postgres replication mode.

The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.

The total number of instances is always specified by .spec.instances.

Property
Required
Updatable
May Require Restart
Type
Description


mode string The replication mode applied to the whole cluster. Possible values are:

  • async (default)
  • sync
  • strict-sync
  • sync-all
  • strict-sync-all

async

When in asynchronous mode the cluster is allowed to lose some committed transactions. When the primary server fails or becomes unavailable for any other reason a sufficiently healthy standby will automatically be promoted to primary. Any transactions that have not been replicated to that standby remain in a “forked timeline” on the primary, and are effectively unrecoverable (the data is still there, but recovering it requires a manual recovery effort by data recovery specialists).

sync

When in synchronous mode a standby will not be promoted unless it is certain that the standby contains all transactions that may have returned a successful commit status to the client (clients can change the behavior per transaction using PostgreSQL’s synchronous_commit setting; transactions with synchronous_commit values of off and local may be lost on failover, but will not be blocked by replication delays). This means that the system may be unavailable for writes even though some servers are available. System administrators can still use manual failover commands to promote a standby even if it results in transaction loss.

Synchronous mode does not guarantee multi-node durability of commits under all circumstances. When no suitable standby is available, the primary server will still accept writes, but does not guarantee their replication. When the primary fails in this mode no standby will be promoted. When the host that used to be the primary comes back it will get promoted automatically, unless a system administrator performed a manual failover. This behavior makes synchronous mode usable with two-node clusters.

When synchronous mode is used and a standby crashes, commits will block until the primary is switched to standalone mode. Manually shutting down or restarting a standby will not cause a commit service interruption: the standby will signal the primary to release it from synchronous standby duties before the PostgreSQL shutdown is initiated.

strict-sync

When it is absolutely necessary to guarantee that each write is stored durably on at least two nodes, use the strict synchronous mode. This mode prevents synchronous replication from being switched off on the primary when no synchronous standby candidates are available. As a downside, the primary will not be available for writes (unless the Postgres transaction explicitly turns off the synchronous_mode parameter), blocking all client write requests until at least one synchronous replica comes up.

Note: Because of the way synchronous replication is implemented in PostgreSQL it is still possible to lose transactions even when using strict synchronous mode. If the PostgreSQL backend is cancelled while waiting to acknowledge replication (as a result of packet cancellation due to client timeout or backend failure) transaction changes become visible for other backends. Such changes are not yet replicated and may be lost in case of standby promotion.

sync-all

The same as sync, but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

strict-sync-all

The same as strict-sync, but syncInstances is ignored and the number of synchronous instances is equal to the total number of instances less one.

Default: async

syncInstances integer Number of synchronous standby instances. Must be less than the total number of instances. It is set to 1 by default. Only settable if mode is sync or strict-sync.

Minimum: 1
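
Example (a sketch requiring one synchronous standby):

spec:
  replication:
    mode: sync
    syncInstances: 1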

SGShardedCluster.status

↩ Parent

Current status of a StackGres sharded cluster.

Property
Required
Updatable
May Require Restart
Type
Description


binding object This section follows the schema specified in the Service Binding spec for a provisioned service.

For more information see https://servicebinding.io/spec/core/1.0.0/

clusterStatuses []object The list of cluster statuses.
conditions []object
sgBackups []string The list of SGBackups that compose the SGShardedBackup used to restore the sharded cluster.
toInstallPostgresExtensions []object The list of Postgres extensions to install.

SGShardedCluster.status.binding

↩ Parent

This section follows the schema specified in the Service Binding spec for a provisioned service.

For more information see https://servicebinding.io/spec/core/1.0.0/

Property
Required
Updatable
May Require Restart
Type
Description


name string The name of the Secret as specified in the Service Binding spec for a provisioned service.
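
For illustration, once provisioned the status section might look like the following sketch (the Secret name shown is hypothetical):

status:
  binding:
    name: stackgres-binding   # hypothetical; the referenced Secret carries the binding data per the Service Binding spec

The name can be read with, e.g., kubectl get sgshardedcluster stackgres -o jsonpath='{.status.binding.name}'.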

SGShardedCluster.status.clusterStatuses[index]

↩ Parent

Property
Required
Updatable
May Require Restart
Type
Description


name string The name of the cluster.
pendingRestart boolean Indicates whether the cluster requires a restart.

SGShardedCluster.status.conditions[index]

↩ Parent

Property
Required
Updatable
May Require Restart
Type
Description


lastTransitionTime string Last time the condition transitioned from one status to another.
message string A human readable message indicating details about the transition.
reason string The reason for the condition’s last transition.
status string Status of the condition, one of True, False, Unknown.
type string Type of the condition.

SGShardedCluster.status.toInstallPostgresExtensions[index]

↩ Parent

Property
Required
Updatable
May Require Restart
Type
Description


name string The name of the extension to install.
postgresVersion string The postgres major version of the extension to install.
publisher string The id of the publisher of the extension to install.
repository string The repository base URL from where the extension will be installed from.
version string The version of the extension to install.
build string The build version of the extension to install.
extraMounts []string The extra mounts of the extension to install.