Kind: SGCluster
listKind: SGClusterList
plural: sgclusters
singular: sgcluster
shortNames: sgclu
A StackGres PostgreSQL cluster can be created using an SGCluster custom resource.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  instances: 1
  postgres:
    version: 'latest'
  pods:
    persistentVolume:
      size: '5Gi'
  sgInstanceProfile: 'size-xs'
```
See also the Cluster Creation section.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| apiVersion | ✓ |  |  | string | stackgres.io/v1 |
| kind | ✓ |  |  | string | SGCluster |
| metadata | ✓ | ✓ |  | object | Refer to the Kubernetes API documentation for the fields of the metadata field. |
| spec | ✓ | ✓ |  | object | Specification of the desired behavior of a StackGres cluster. |
| status |  | ✓ |  | object | Current status of a StackGres cluster. |
Specification of the desired behavior of a StackGres cluster.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| instances | ✓ | ✓ |  | integer | Number of StackGres instances for the cluster. Each instance contains one Postgres server. Out of all of the Postgres servers, one is elected as the primary; the rest remain as read-only replicas. Minimum: 1. Maximum: 16. |
| pods | ✓ | ✓ |  | object | Cluster pod's configuration. |
| postgres | ✓ | ✓ |  | object | This section allows configuring Postgres features. |
| configurations |  | ✓ |  | object | Cluster custom configurations. |
| distributedLogs |  | ✓ |  | object | StackGres features a functionality for all pods to send Postgres, Patroni and PgBouncer logs to a central (distributed) location, which is in turn another Postgres database. Logs can then be accessed via a SQL interface or from the web UI. This section controls whether to enable this feature or not. If not enabled, logs are sent to the pod's standard output. |
| initialData |  |  |  | object | Cluster initialization data options. The cluster may be initialized empty, or from a backup restoration. Specifying scripts to run on the database after cluster creation is also possible. This field can only be set on creation. |
| managedSql |  | ✓ |  | object | This section allows referencing SQL scripts that will be applied to the cluster live. |
| metadata |  | ✓ |  | object | Metadata information for resources created by the cluster. |
| nonProductionOptions |  | ✓ |  | object |  |
| postgresServices |  | ✓ |  | object | Kubernetes services created or managed by StackGres. |
| prometheusAutobind |  | ✓ |  | boolean | If enabled, a ServiceMonitor is created for each Prometheus instance found, in order to collect metrics. Default: false. |
| replicateFrom |  | ✓ |  | object | Make the cluster a read-only standby replica, allowing it to replicate from another PostgreSQL instance and act as a relay. Changing this section is allowed in order to fix issues or to change the replication source. Removing this section converts the cluster into a normal cluster, where the standby leader is promoted to a primary instance. |
| replication |  | ✓ |  | object | This section allows configuring the Postgres replication mode and HA role groups. The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups. The total number of instances is always specified by `.spec.instances`. |
| sgInstanceProfile |  | ✓ | ✓ | string | Name of the SGInstanceProfile. An SGInstanceProfile defines CPU and memory limits, and must exist before creating a cluster. When no profile is set, a default one (1 core, 2 GiB RAM) is used. Changing this field may require a restart. |
| toInstallPostgresExtensions |  | ✓ |  | []object | The list of Postgres extensions to install. This section is filled by the operator. |
Cluster pod’s configuration.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| persistentVolume | ✓ | ✓ |  | object | Pod's persistent volume configuration. |
| customContainers |  | ✓ | ✓ | []object | A list of custom application containers that run within the cluster's Pods. The name used in this section will be prefixed with a fixed string. Changing this field may require a restart. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core |
| customInitContainers |  | ✓ | ✓ | []object | A list of custom application init containers that run within the cluster's Pods. The custom init containers will run, following the defined sequence, at the end of the cluster's Pods init containers. The name used in this section will be prefixed with a fixed string. Changing this field may require a restart. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#container-v1-core |
| customVolumes |  | ✓ | ✓ | []object | A list of custom volumes that may be used along with any container defined in the customInitContainers or customContainers sections. The name used in this section will be prefixed with a fixed string. Only the following volume types are allowed: configMap, downwardAPI, emptyDir, gitRepo, glusterfs, hostPath, nfs, projected and secret. Changing this field may require a restart. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#volume-v1-core |
| disableConnectionPooling |  | ✓ | ✓ | boolean | If set to `true`, avoids creating a connection pooling (PgBouncer) sidecar. Changing this field may require a restart. |
| disableMetricsExporter |  | ✓ |  | boolean | If set to `true`, avoids creating the Prometheus exporter sidecar. Recommended when there's no intention to use Prometheus for monitoring. Default: false. |
| disablePostgresUtil |  | ✓ | ✓ | boolean | If set to `true`, avoids creating the postgres-util sidecar. This sidecar contains usual Postgres administration utilities that are not present in the main (patroni) container, like psql. Only disable if you know what you are doing. Changing this field may require a restart. |
| managementPolicy |  | ✓ |  | string | managementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc.) and the controller waits until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel, which creates pods in parallel to match the desired scale without waiting, and on scale down deletes all pods at once. Default: OrderedReady. |
| resources |  | ✓ |  | object | Pod custom resources configuration. |
| scheduling |  | ✓ | ✓ | object | Pod custom scheduling, affinity and topology spread constraints configuration. Changing this field may require a restart. |
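For instance, a minimal sketch (illustrative, not from the original reference; values are assumptions) combining the boolean toggles and managementPolicy documented above:

```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  instances: 2
  postgres:
    version: 'latest'
  pods:
    persistentVolume:
      size: '5Gi'
    disableMetricsExporter: true   # skip the Prometheus exporter sidecar
    disablePostgresUtil: false     # keep the postgres-util sidecar (psql, etc.)
    managementPolicy: Parallel     # create/delete pods without waiting for readiness
```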
Pod’s persistent volume configuration.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  pods:
    persistentVolume:
      size: '5Gi'
      storageClass: default
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| size | ✓ | ✓ |  | string | Size of the PersistentVolume set for each instance of the cluster. This size is specified either in Mebibytes, Gibibytes or Tebibytes (multiples of 2^20, 2^30 or 2^40, respectively). |
| storageClass |  | ✓ |  | string | Name of an existing StorageClass in the Kubernetes cluster, used to create the PersistentVolumes for the instances of the cluster. |
Pod custom resources configuration.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| enableClusterLimitsRequirements |  | ✓ | ✓ | boolean | When enabled, resource limits for containers other than the patroni container will be set just like for the patroni container, as specified in the SGInstanceProfile. Changing this field may require a restart. |
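A fragment of an SGCluster spec (illustrative sketch) enabling limits for all containers:

```yaml
spec:
  pods:
    resources:
      enableClusterLimitsRequirements: true  # apply SGInstanceProfile limits to sidecars too
```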
Pod custom scheduling, affinity and topology spread constraints configuration.
Changing this field may require a restart.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| backup |  | ✓ | ✓ | object | Backup Pod custom scheduling and affinity configuration. |
| nodeAffinity |  | ✓ | ✓ | object | Node affinity is a group of node affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core |
| nodeSelector |  | ✓ | ✓ | map[string]string | NodeSelector is a selector which must be true for the pod to fit on a node, i.e. it must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ |
| podAffinity |  | ✓ | ✓ | object | Pod affinity is a group of inter-pod affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core |
| podAntiAffinity |  | ✓ | ✓ | object | Pod anti-affinity is a group of inter-pod anti-affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core |
| priorityClassName |  | ✓ | ✓ | string | Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority Pods to make scheduling of the pending Pod possible. |
| tolerations |  | ✓ | ✓ | []object | If specified, the pod's tolerations. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core |
| topologySpreadConstraints |  | ✓ | ✓ | []object | TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. The scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. |
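A fragment of an SGCluster spec (illustrative sketch; the node label and taint are hypothetical) showing nodeSelector and tolerations together:

```yaml
spec:
  pods:
    scheduling:
      nodeSelector:
        disktype: ssd            # hypothetical node label
      tolerations:
      - key: dedicated           # hypothetical taint on the target nodes
        operator: Equal
        value: postgres
        effect: NoSchedule
```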
Backup Pod custom scheduling and affinity configuration.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| nodeAffinity |  | ✓ | ✓ | object | Node affinity is a group of node affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#nodeaffinity-v1-core |
| nodeSelector |  | ✓ | ✓ | map[string]string | NodeSelector is a selector which must be true for the pod to fit on a node, i.e. it must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ |
| podAffinity |  | ✓ | ✓ | object | Pod affinity is a group of inter-pod affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podaffinity-v1-core |
| podAntiAffinity |  | ✓ | ✓ | object | Pod anti-affinity is a group of inter-pod anti-affinity scheduling rules. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#podantiaffinity-v1-core |
| priorityClassName |  | ✓ | ✓ | string | Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority Pods to make scheduling of the pending Pod possible. |
| tolerations |  | ✓ | ✓ | []object | If specified, the pod's tolerations. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#toleration-v1-core |
TopologySpreadConstraint specifies how to spread matching pods among the given topology.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| maxSkew | ✓ | ✓ | ✓ | integer | MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain, or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster with MaxSkew set to 1 and pods with the same labelSelector spread as 2/2/1, the global minimum is 1: an incoming pod can only be scheduled to the third zone to become 2/2/2, since scheduling it onto either of the first two zones would make the actual skew (3-1) violate MaxSkew (1). If MaxSkew is 2, the incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. Format: int32. |
| topologyKey | ✓ | ✓ | ✓ | string | TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. E.g., if TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. |
| whenUnsatisfiable | ✓ | ✓ | ✓ | string | WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. DoNotSchedule (default) tells the scheduler not to schedule it. ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster with MaxSkew set to 1 and pods with the same labelSelector spread as 3/1/1: if WhenUnsatisfiable is set to DoNotSchedule, an incoming pod can only be scheduled to one of the two smaller zones, to become 3/2/1 or 3/1/2, since the actual skew there (2-1) satisfies MaxSkew (1). In other words, the cluster can still be imbalanced, but the scheduler won't make it more imbalanced. It's a required field. |
| labelSelector |  | ✓ | ✓ | object | A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. |
| matchLabelKeys |  | ✓ | ✓ | []string | MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the incoming pod labels; those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. |
| minDomains |  | ✓ | ✓ | integer | MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats the "global minimum" as 0, and then the calculation of skew is performed. When the number of eligible domains with matching topology keys equals or is greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, the scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster with MaxSkew set to 2, MinDomains set to 5 and pods with the same labelSelector spread as 2/2/2: the number of domains is less than 5 (MinDomains), so the "global minimum" is treated as 0. In this situation, a new pod with the same labelSelector cannot be scheduled, because the computed skew would be 3 (3 - 0) if the new Pod were scheduled to any of the three zones, violating MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). |
| nodeAffinityPolicy |  | ✓ | ✓ | string | NodeAffinityPolicy indicates how we will treat the Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. Ignore: nodeAffinity/nodeSelector are ignored; all nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. |
| nodeTaintsPolicy |  | ✓ | ✓ | string | NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. Ignore: node taints are ignored; all nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. |
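A fragment of an SGCluster spec (illustrative sketch; the pod label is hypothetical) spreading cluster pods evenly across zones with the fields documented above:

```yaml
spec:
  pods:
    scheduling:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: StackGresCluster   # hypothetical label carried by the cluster's pods
```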
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| matchExpressions |  | ✓ | ✓ | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. |
| matchLabels |  | ✓ | ✓ | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. |
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | key is the label key that the selector applies to. |
| operator | ✓ | ✓ | ✓ | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. |
| values |  | ✓ | ✓ | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. |
This section allows configuring Postgres features.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| version | ✓ | ✓ |  | string | Postgres version used on the cluster. It is either of: |
| extensions |  | ✓ |  | []object | StackGres supports deploying extensions at runtime by simply adding an entry to this array. A deployed extension still requires creation in a database using the `CREATE EXTENSION` statement. A cluster restart is required for: |
| flavor |  |  |  | string | Postgres flavor used on the cluster. It is either of: If not specified, vanilla Postgres will be used for the cluster. This field can only be set on creation. |
| ssl |  | ✓ |  | object | This section allows using SSL when connecting to Postgres. |
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| name | ✓ | ✓ |  | string | The name of the extension to deploy. |
| publisher |  | ✓ |  | string | The id of the publisher of the extension to deploy. If not specified, com.ongres will be used by default. Default: com.ongres. |
| repository |  | ✓ |  | string | The repository base URL from where to obtain the extension to deploy. This section is filled by the operator. |
| version |  | ✓ |  | string | The version of the extension to deploy. If not specified, the version of the stable channel will be used by default, and if only one version is available, that one will be used. |
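A fragment of an SGCluster spec (illustrative sketch; the extension name is an assumption, any extension available in the configured repository works) deploying an extension at runtime:

```yaml
spec:
  postgres:
    version: 'latest'
    extensions:
    - name: postgis   # hypothetical extension; version defaults to the stable channel
```

After the operator deploys it, the extension still has to be created in a database, e.g. `CREATE EXTENSION postgis;`.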
This section allows using SSL when connecting to Postgres.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  postgres:
    ssl:
      enabled: true
      certificateSecretKeySelector:
        name: stackgres-secrets
        key: cert
      privateKeySecretKeySelector:
        name: stackgres-secrets
        key: key
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| certificateSecretKeySelector |  | ✓ |  | object | Secret key selector for the certificate or certificate chain used for SSL connections. |
| enabled |  | ✓ |  | boolean | Allows enabling SSL for connections to Postgres. By default it is false. |
| privateKeySecretKeySelector |  | ✓ |  | object | Secret key selector for the private key used for SSL connections. |
Secret key selector for the certificate or certificate chain used for SSL connections.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ |  | string | The key of the Secret that contains the certificate or certificate chain for SSL connections. |
| name | ✓ | ✓ |  | string | The name of the Secret that contains the certificate or certificate chain for SSL connections. |
Secret key selector for the private key used for SSL connections.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ |  | string | The key of the Secret that contains the private key for SSL connections. |
| name | ✓ | ✓ |  | string | The name of the Secret that contains the private key for SSL connections. |
Cluster custom configurations.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  configurations:
    sgPostgresConfig: 'postgresconf'
    sgPoolingConfig: 'pgbouncerconf'
    backups:
    - sgObjectStorage: 'backupconf'
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| backupPath |  | ✓ |  | string | Deprecated: use `.spec.configurations.backups[].path` instead. The path where the backup is stored. If not set, this field is filled by the operator. When provided, it indicates where the backups and WAL files will be stored. |
| backups |  | ✓ |  | []object | List of backup configurations for this SGCluster. |
| credentials |  | ✓ |  | object | Allows specifying custom credentials for Postgres users and the Patroni REST API. |
| patroni |  | ✓ |  | object | Allows specifying Patroni configuration that will extend the generated one. |
| sgBackupConfig |  | ✓ |  | string | Deprecated: use `.spec.configurations.backups[].sgObjectStorage` instead. Name of the SGBackupConfig to use for the cluster. It defines the backup policy, storage and retention, among others, applied to the cluster. When not set, backup configuration will not be used. |
| sgPoolingConfig |  | ✓ | ✓ | string | Name of the SGPoolingConfig used for this cluster. Each pod contains a sidecar with a connection pooler (currently: PgBouncer). If not set, a default configuration will be used. Disabling connection pooling altogether is possible by setting the disableConnectionPooling property of the pods object to true. Changing this field may require a restart. |
| sgPostgresConfig |  | ✓ | ✓ | string | Name of the SGPostgresConfig used for the cluster. It must exist. When not set, a default Postgres config for the selected major version is used. Changing this field may require a restart. |
Backup configuration for this SGCluster.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| sgObjectStorage | ✓ | ✓ |  | string | Name of the SGObjectStorage to use for the cluster. It defines the location in which the backups will be stored. |
| compression |  | ✓ |  | enum | Specifies the backup compression algorithm. Possible options are: lz4, lzma, brotli. The default method is lz4. LZ4 is the fastest method, but its compression ratio is the worst. LZMA is way slower, but it compresses backups about 6 times better than LZ4. Brotli is a good trade-off between speed and compression ratio, being about 3 times better than LZ4. Enum: lz4, lzma, brotli. |
| cronSchedule |  | ✓ |  | string | Continuous Archiving backups are composed of periodic base backups and all the WAL segments produced in between those base backups. This parameter specifies at what time and with what frequency to start performing a new base backup. Use cron syntax for this parameter; ranges of values are also supported. If not set, full backups are performed each day at 05:00 UTC. |
| path |  | ✓ |  | string | The path where the backup is stored. If not set, this field is filled by the operator. When provided, it indicates where the backups and WAL files will be stored. |
| performance |  | ✓ |  | object | Configuration that affects the backup network and disk usage performance. |
| retention |  | ✓ |  | integer | When an automatic retention policy is defined to delete old base backups, this parameter specifies the number of base backups to keep, in a sliding window. Consequently, the time range covered by backups depends on this value and on the configured cronSchedule. Default is 5. |
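A fragment of an SGCluster spec (illustrative sketch; the SGObjectStorage name, schedule and bandwidth cap are assumptions) combining the backup options documented above:

```yaml
spec:
  configurations:
    backups:
    - sgObjectStorage: 'backupconf'    # must exist in the same namespace
      cronSchedule: '0 5 * * *'        # daily base backup at 05:00 UTC
      retention: 5                     # keep the last 5 base backups
      compression: lz4
      performance:
        uploadConcurrency: 8
        maxNetworkBandwidth: 104857600 # 100 MiB/s upload cap, in bytes per second
```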
Configuration that affects the backup network and disk usage performance.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| downloadConcurrency |  | ✓ |  | integer | Backup storage may use several concurrent streams to read the data. This parameter configures the number of parallel streams to use. By default, it's set to the minimum between the number of files to read and 10. Minimum: 1. |
| maxDiskBandwidth |  | ✓ |  | integer | Maximum disk read I/O when performing a backup, in bytes per second. |
| maxNetworkBandwidth |  | ✓ |  | integer | Maximum storage upload bandwidth used when storing a backup, in bytes per second. |
| uploadConcurrency |  | ✓ |  | integer | Backup storage may use several concurrent streams to store the data. This parameter configures the number of parallel streams to use. By default, it's set to 16. Minimum: 1. |
| uploadDiskConcurrency |  | ✓ |  | integer | Backup storage may use several concurrent streams to store the data. This parameter configures the number of parallel streams to use when reading from disk. By default, it's set to 1. Minimum: 1. |
Allows specifying custom credentials for Postgres users and the Patroni REST API.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| patroni |  | ✓ | ✓ | object | Kubernetes SecretKeySelectors that contain the credentials for the Patroni REST API. Changing this field may require a restart. |
| users |  | ✓ | ✓ | object | Kubernetes SecretKeySelectors that contain the credentials of the users. Changing this field may require a manual modification of the database users to reflect the new values specified; in particular, you may have to create those users if the username is changed, or alter the password if it is changed (see the SQL commands below). Changing this field may require a restart. |
Kubernetes SecretKeySelectors that contain the credentials for the Patroni REST API.
Changing this field may require a restart.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| restApiPassword |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the password for the Patroni REST API. |
A Kubernetes SecretKeySelector that contains the password for the patroni REST API.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |
Kubernetes SecretKeySelectors that contain the credentials of the users.
Changing this field may require a manual modification of the database users to reflect the new values specified. In particular, you may have to create those users if the username is changed, or alter the password if it is changed. Here are the SQL commands to perform such operations (replace the default usernames with the new ones and `***` with their respective passwords):
```sql
CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD '***';
CREATE ROLE replicator;
ALTER ROLE replicator WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN REPLICATION NOBYPASSRLS PASSWORD '***';
CREATE ROLE authenticator;
ALTER ROLE authenticator WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD '***';
```
Changing this field may require a restart.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| authenticator |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the credentials of the authenticator user, used by PgBouncer to authenticate other users. |
| replication |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the credentials of the replication user, used to replicate from the primary cluster and from replicas of this cluster. |
| superuser |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user). |
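A fragment of an SGCluster spec (illustrative sketch; the Secret name and keys are hypothetical) pointing the superuser credentials at an existing Secret:

```yaml
spec:
  configurations:
    credentials:
      users:
        superuser:
          username:
            name: my-cluster-users    # hypothetical Secret in the same namespace
            key: superuser-username
          password:
            name: my-cluster-users
            key: superuser-password
```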
A Kubernetes SecretKeySelector that contains the credentials of the authenticator user, used by PgBouncer to authenticate other users.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| password |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the password of the user. |
| username |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the username of the user. |

A Kubernetes SecretKeySelector that contains the password of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |

A Kubernetes SecretKeySelector that contains the username of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |

A Kubernetes SecretKeySelector that contains the credentials of the replication user, used to replicate from the primary cluster and from replicas of this cluster.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| password |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the password of the user. |
| username |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the username of the user. |

A Kubernetes SecretKeySelector that contains the password of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |

A Kubernetes SecretKeySelector that contains the username of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |

A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user).

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| password |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the password of the user. |
| username |  | ✓ | ✓ | object | A Kubernetes SecretKeySelector that contains the username of the user. |

A Kubernetes SecretKeySelector that contains the password of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |

A Kubernetes SecretKeySelector that contains the username of the user.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key | ✓ | ✓ | ✓ | string | The key of the secret to select from. Must be a valid secret key. |
| name | ✓ | ✓ | ✓ | string | Name of the referent. More information. |
Allows specifying Patroni configuration that will extend the generated one.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| initialConfig |  |  |  | object | Allows specifying Patroni configuration that will overwrite the generated one. This field can only be set on creation. |
StackGres features a functionality for all pods to send Postgres, Patroni and PgBouncer logs to a central (distributed) location, which is in turn another Postgres database. Logs can then be accessed via a SQL interface or from the web UI. This section controls whether to enable this feature or not. If not enabled, logs are sent to the pod's standard output.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  distributedLogs:
    sgDistributedLogs: distributedlogs
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| retention |  | ✓ |  | string | Define a retention window, with the syntax `<integer> (minutes\|hours\|days\|months)`, in which log entries are kept. Log entries will be removed when they get older than twice the specified retention window. When this field is changed, the retention will be applied only to log entries that are newer than the end of the retention window previously specified. If no retention window was previously specified, it is considered to be of 7 days. |
| sgDistributedLogs |  | ✓ |  | string | Name of the SGDistributedLogs to use for this cluster. It must exist. |
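A fragment of an SGCluster spec (illustrative sketch) using the retention syntax documented above:

```yaml
spec:
  distributedLogs:
    sgDistributedLogs: distributedlogs   # SGDistributedLogs resource; must exist
    retention: '7 days'                  # keep roughly one week of log entries
```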
Cluster initialization data options. Cluster may be initialized empty, or from a backup restoration. Specifying scripts to run on the database after cluster creation is also possible.
This field can only be set on creation.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| restore |  |  |  | object | This section allows restoring a cluster from an existing copy of the metadata and data. |
| scripts |  |  |  | []object | Deprecated: use `.spec.managedSql` with SGScript instead. A list of SQL scripts executed in sequence, exactly once, when the database is bootstrapped and/or after the restore is completed. |
This section allows to restore a cluster from an existing copy of the metadata and data.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| downloadDiskConcurrency |  |  |  | integer | The backup fetch process may fetch several streams in parallel. Parallel fetching is enabled when set to a value larger than one. |
| fromBackup |  |  |  | object | From which backup to restore and how the process is configured. |
From which backup to restore and how the process is configured.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  initialData:
    restore:
      fromBackup:
        name: stackgres-backup
      downloadDiskConcurrency: 1
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| name |  |  |  | string | When set to the name of an existing SGBackup, the cluster is initialized by restoring the backup data to it. If not set, the cluster is initialized empty. The selected backup must be in the same namespace. |
| pointInTimeRecovery |  |  |  | object | Using Point-in-Time Recovery (PITR) it is possible to restore the database to its state at any moment in the past, by setting restoreToTimestamp to a value between the timestamps at which your chosen SGBackup and the subsequent one were taken. If the chosen SGBackup is the latest one, the restoreToTimestamp value can be between the timestamp at which that last SGBackup was taken and the current time. See also: https://www.postgresql.org/docs/current/continuous-archiving.html |
| target |  |  |  | string | Specifies the recovery_target, which declares that recovery should end as soon as a consistent state is reached, i.e. as early as possible. When restoring from an online backup, this means the point where taking the backup ended. Technically, this is a string parameter, but 'immediate' is currently the only allowed value. |
| targetInclusive |  |  |  | boolean | Specifies the recovery_target_inclusive setting, to stop recovery just after the specified recovery target (true), or just before the recovery target (false). Applies when targetLsn, pointInTimeRecovery, or targetXid is specified. This setting controls whether transactions having exactly the target WAL location (LSN), commit time, or transaction ID, respectively, will be included in the recovery. Default is true. |
| targetLsn |  |  |  | string | recovery_target_lsn specifies the LSN of the write-ahead log location up to which recovery will proceed. The precise stopping point is also influenced by targetInclusive. This parameter is parsed using the system data type pg_lsn. |
| targetName |  |  |  | string | recovery_target_name specifies the named restore point (created with pg_create_restore_point()) to which recovery will proceed. |
| targetTimeline |  |  |  | string | Specifies the recovery_target_timeline setting, to recover into a particular timeline. The default is to recover along the same timeline that was current when the base backup was taken. Setting this to latest recovers to the latest timeline found in the archive, which is useful in a standby server. Other than that, you only need to set this parameter in complex re-recovery situations, where you need to return to a state that itself was reached after a point-in-time recovery. |
| targetXid |  |  |  | string | recovery_target_xid specifies the transaction ID up to which recovery will proceed. Keep in mind that while transaction IDs are assigned sequentially at transaction start, transactions can complete in a different numeric order. The transactions that will be recovered are those that committed before (and optionally including) the specified one. The precise stopping point is also influenced by targetInclusive. |
| uid |  |  |  | string | When set to the UID of an existing SGBackup, the cluster is initialized by restoring the backup data to it. If not set, the cluster is initialized empty. This field is deprecated. |
Using Point-in-Time Recovery (PITR) it is possible to restore the database to its state at any moment in the past, by setting restoreToTimestamp to a value between the timestamps at which your chosen SGBackup and the subsequent one were taken. If the chosen SGBackup is the latest one, the restoreToTimestamp value can be between the timestamp at which that last SGBackup was taken and the current time.
See also: https://www.postgresql.org/docs/current/continuous-archiving.html

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| restoreToTimestamp |  |  |  | string | An ISO 8601 date in UTC, indicating the point in time to which the database has to be restored. |
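A fragment of an SGCluster spec (illustrative sketch; the backup name and timestamp are hypothetical) restoring to a point in time after the chosen backup was taken:

```yaml
spec:
  initialData:
    restore:
      fromBackup:
        name: stackgres-backup                        # existing SGBackup in the same namespace
        pointInTimeRecovery:
          restoreToTimestamp: '2023-03-01T12:00:00Z'  # ISO 8601, UTC
```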
Deprecated: use `.spec.managedSql` with SGScript instead.
Scripts are executed in auto-commit mode with the user postgres in the specified database (or in the database postgres if not specified).
The fields script and scriptFrom are mutually exclusive and only one of them is required.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| database |  |  |  | string | Database where the script is executed. Defaults to the postgres database, if not specified. |
| name |  |  |  | string | Name of the script. Must be unique across this SGCluster. |
| script |  |  |  | string | Raw SQL script to execute. This field is mutually exclusive with the scriptFrom field. |
| scriptFrom |  |  |  | object | Reference to either a Kubernetes Secret or a ConfigMap that contains the SQL script to execute. This field is mutually exclusive with the script field. |
Reference to either a Kubernetes Secret or a ConfigMap that contains the SQL script to execute. This field is mutually exclusive with the script field.
The fields secretKeyRef and configMapKeyRef are mutually exclusive, and one of them is required.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| configMapKeyRef |  |  |  | object | A ConfigMap reference that contains the SQL script to execute. This field is mutually exclusive with the secretKeyRef field. |
| secretKeyRef |  |  |  | object | A Kubernetes SecretKeySelector that contains the SQL script to execute. This field is mutually exclusive with the configMapKeyRef field. |
A ConfigMap reference that contains the SQL script to execute. This field is mutually exclusive with the secretKeyRef field.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key |  |  |  | string | The key name within the ConfigMap that contains the SQL script to execute. |
| name |  |  |  | string | The name of the ConfigMap that contains the SQL script to execute. |
A Kubernetes SecretKeySelector that contains the SQL script to execute. This field is mutually exclusive with the configMapKeyRef field.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| key |  |  |  | string | The key of the secret to select from. Must be a valid secret key. |
| name |  |  |  | string | Name of the referent. More information. |
This section allows referencing SQL scripts that will be applied to the cluster live.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| continueOnSGScriptError |  | ✓ |  | boolean | If true, a failure in any entry of any SGScript will not prevent subsequent SGScript entries from being executed. By default it is false. Default: false. |
| scripts |  | ✓ |  | []object | A list of script references that will be executed in sequence. |
A script reference. Each version of each entry of the referenced script will be executed exactly once, following the sequence defined in the referenced script and skipping any script entry that has already been executed.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| id |  | ✓ |  | integer | The id is immutable and must be unique across all the SGScript entries. It is replaced by the operator and is used to identify the SGScript entry. |
| sgScript |  | ✓ |  | string | A reference to an SGScript. |
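A fragment of an SGCluster spec (illustrative sketch; the SGScript name is hypothetical) applying a script live:

```yaml
spec:
  managedSql:
    continueOnSGScriptError: false
    scripts:
    - sgScript: init-script   # hypothetical SGScript in the same namespace
```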
Metadata information for resources created by the cluster.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| annotations |  | ✓ |  | object | Custom Kubernetes annotations to be passed to resources created and managed by StackGres. |
| labels |  | ✓ |  | object | Custom Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) to be passed to resources created and managed by StackGres. |
Custom Kubernetes annotations to be passed to resources created and managed by StackGres.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  metadata:
    annotations:
      clusterPods:
        customAnnotations: customAnnotationValue
      primaryService:
        customAnnotations: customAnnotationValue
      replicasService:
        customAnnotations: customAnnotationValue
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| allResources |  | ✓ |  | map[string]string | Annotations to attach to any resource created or managed by StackGres. |
| clusterPods |  | ✓ |  | map[string]string | Annotations to attach to pods created or managed by StackGres. |
| primaryService |  | ✓ |  | map[string]string | Custom Kubernetes annotations passed to the -primary service. |
| replicasService |  | ✓ |  | map[string]string | Custom Kubernetes annotations passed to the -replicas service. |
| services |  | ✓ |  | map[string]string | Annotations to attach to all services created or managed by StackGres. |
Custom Kubernetes labels to be passed to resources created and managed by StackGres.
Example:
```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  metadata:
    labels:
      clusterPods:
        customLabel: customLabelValue
      services:
        customLabel: customLabelValue
```
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| clusterPods |  | ✓ |  | map[string]string | Labels to attach to Pods created or managed by StackGres. |
| services |  | ✓ |  | map[string]string | Labels to attach to Services and Endpoints created or managed by StackGres. |
Non-production options.

| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| disableClusterPodAntiAffinity |  | ✓ | ✓ | boolean | It is a best practice, on non-containerized environments, when running production workloads, to run each database server on a different server (virtual or physical), i.e. not to co-locate more than one database server per host. The same best practice applies to databases on containers. By default, StackGres will not allow running more than one StackGres pod on a given Kubernetes node. Set this property to true to allow more than one StackGres pod per node. Changing this field may require a restart. |
| disableClusterResourceRequirements |  | ✓ | ✓ | boolean | It is a best practice, on containerized environments, when running production workloads, to enforce containers' resource requirements. By default, StackGres will configure resource requirements for all the containers. Set this property to true to prevent StackGres from setting the containers' resource requirements (except for the patroni container; see disablePatroniResourceRequirements). Changing this field may require a restart. |
| disablePatroniResourceRequirements |  | ✓ | ✓ | boolean | It is a best practice, on containerized environments, when running production workloads, to enforce containers' resource requirements. The same best practice applies to databases on containers. By default, StackGres will configure resource requirements for the patroni container. Set this property to true to prevent StackGres from setting the patroni container's resource requirements. Changing this field may require a restart. |
| enableSetClusterCpuRequests |  | ✓ | ✓ | boolean | On containerized environments, when running production workloads, enforcing the container's CPU request to be equal to the limit allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less CPU than it requires; it also allows setting a static CPU management policy that guarantees a pod the usage of exclusive CPUs on the node. By default, StackGres will configure CPU requirements to have the same limit and request for all the containers. Set this property to true to prevent StackGres from setting the containers' CPU request equal to the limit (except for the patroni container; see enableSetPatroniCpuRequests). Changing this field may require a restart. |
| enableSetClusterMemoryRequests |  | ✓ | ✓ | boolean | On containerized environments, when running production workloads, enforcing the container's memory request to be equal to the limit allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less memory than it requires. By default, StackGres will configure memory requirements to have the same limit and request for all the containers. Set this property to true to prevent StackGres from setting the containers' memory request equal to the limit (except for the patroni container; see enableSetPatroniMemoryRequests). Changing this field may require a restart. |
| enableSetPatroniCpuRequests |  | ✓ | ✓ | boolean | On containerized environments, when running production workloads, enforcing the container's CPU request to be equal to the limit allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less CPU than it requires; it also allows setting a static CPU management policy that guarantees a pod the usage of exclusive CPUs on the node. By default, StackGres will configure CPU requirements to have the same limit and request for the patroni container. Set this property to true to prevent StackGres from setting the patroni container's CPU request equal to the limit. Changing this field may require a restart. |
| enableSetPatroniMemoryRequests |  | ✓ | ✓ | boolean | On containerized environments, when running production workloads, enforcing the container's memory request to be equal to the limit allows achieving the highest level of performance. Doing so reduces the chances of leaving the workload with less memory than it requires. By default, StackGres will configure memory requirements to have the same limit and request for the patroni container. Set this property to true to prevent StackGres from setting the patroni container's memory request equal to the limit. Changing this field may require a restart. |
| enabledFeatureGates |  | ✓ |  | []string | A list of StackGres feature gates to enable (not suitable for a production environment). Available feature gates are: |
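A fragment of an SGCluster spec (illustrative sketch) relaxing the anti-affinity rule for a single-node development cluster:

```yaml
spec:
  nonProductionOptions:
    disableClusterPodAntiAffinity: true   # allow several StackGres pods on one node (dev/test only)
```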
Kubernetes services created or managed by StackGres.
| Property | Required | Updatable | May Require Restart | Type | Description |
|---|---|---|---|---|---|
| primary |  | ✓ |  | object | Configure the service for the primary, created with the same name as the SGCluster. It provides a stable connection (regardless of primary failures or switchovers) to the read-write Postgres server of the cluster. See also https://kubernetes.io/docs/concepts/services-networking/service/ |
| replicas |  | ✓ |  | object | Configure the service for any replica, created with the name of the SGCluster plus the -replicas suffix. It provides a stable connection (regardless of replica node failures) to any read-only Postgres server of the cluster. Read-only servers are load-balanced via this service. See also https://kubernetes.io/docs/concepts/services-networking/service/ |
Configure the service for the primary, created with the same name as the SGCluster. It provides a stable connection (regardless of primary failures or switchovers) to the read-write Postgres server of the cluster.
See also https://kubernetes.io/docs/concepts/services-networking/service/
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
allocateLoadBalancerNodePorts | ✓ | boolean |
allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. |
||
customPorts | ✓ | ✓ | []object |
The list of custom ports that will be exposed by the service.
The names of custom ports will be prefixed with the string The names of target ports will be prefixed with the string Changing this field may require a restart. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#serviceport-v1-core
|
|
enabled | ✓ | boolean |
Specify if the service should be created or not. Default: true |
||
externalIPs | ✓ | []string |
externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. |
||
externalTrafficPolicy | ✓ | string |
externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to “Local”, the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, “Cluster”, uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get “Cluster” semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.
|
||
healthCheckNodePort | ✓ | integer |
healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. Format: int32 |
||
internalTrafficPolicy | ✓ | string |
InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). |
||
ipFamilies | ✓ | []string |
IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName.
This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. |
||
ipFamilyPolicy | ✓ | string |
IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be “SingleStack” (a single IP family), “PreferDualStack” (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or “RequireDualStack” (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. |
||
loadBalancerClass | ✓ | string |
loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. “internal-vip” or “example.com/internal-vip”. Unprefixed names are reserved for end-users. This field can only be set when the Service type is ‘LoadBalancer’. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type ‘LoadBalancer’. Once set, it can not be changed. This field will be wiped when a service is updated to a non ‘LoadBalancer’ type. |
||
loadBalancerIP | ✓ | string |
Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version. |
||
loadBalancerSourceRanges | ✓ | []string |
If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ |
||
sessionAffinity | ✓ | string |
Used to maintain session affinity. Supports “ClientIP” and “None”; set to ClientIP to enable client IP based session affinity. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
|
||
sessionAffinityConfig | ✓ | object |
SessionAffinityConfig represents the configurations of session affinity. |
||
type | ✓ | enum |
type determines how the Service is exposed. Defaults to ClusterIP. Valid
options are ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates
a cluster-internal IP address for load-balancing to endpoints.
"NodePort" builds on ClusterIP and allocates a port on every node.
"LoadBalancer" builds on NodePort and creates
an external load-balancer (if supported in the current cloud).
More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Enum: ClusterIP, LoadBalancer, NodePort Default: ClusterIP |
SessionAffinityConfig represents the configurations of session affinity.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
clientIP | ✓ | object |
ClientIPConfig represents the configurations of Client IP based session affinity. |
ClientIPConfig represents the configurations of Client IP based session affinity.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
timeoutSeconds | ✓ | integer |
timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <=86400 (for 1 day) if ServiceAffinity == “ClientIP”. Default value is 10800 (for 3 hours). Format: int32 |
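As an illustration, client IP based session affinity could be enabled via these fields. A minimal sketch, assuming the fields above configure one of the services under spec.postgresServices (shown here for primary); the timeout value is illustrative:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  postgresServices:
    primary:
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 3600 # stick each client to the same endpoint for 1 hour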
Configures the service that connects to any replica. The service is named after the SGCluster, with the -replicas suffix.
It provides a stable connection (regardless of replica node failures) to any read-only Postgres server of the cluster. Read-only servers are load-balanced via this service.
See also https://kubernetes.io/docs/concepts/services-networking/service/
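For example, the replicas service can be exposed through a load balancer using the enabled and type fields documented below. A minimal sketch; the instance count is illustrative:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  instances: 3
  postgresServices:
    replicas:
      enabled: true
      type: LoadBalancer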
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
allocateLoadBalancerNodePorts | ✓ | boolean |
allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. |
||
customPorts | ✓ | ✓ | []object |
The list of custom ports that will be exposed by the service.
Note that StackGres prefixes the names of custom ports, as well as the names of target ports. Changing this field may require a restart. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#serviceport-v1-core
|
|
enabled | ✓ | boolean |
Specify if the service should be created or not. Default: true |
||
externalIPs | ✓ | []string |
externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. |
||
externalTrafficPolicy | ✓ | string |
externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to “Local”, the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, “Cluster”, uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get “Cluster” semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.
|
||
healthCheckNodePort | ✓ | integer |
healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. Format: int32 |
||
internalTrafficPolicy | ✓ | string |
InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). |
||
ipFamilies | ✓ | []string |
IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does not apply to "headless" services. This field will be wiped when updating a Service to type ExternalName.
This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. |
||
ipFamilyPolicy | ✓ | string |
IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be “SingleStack” (a single IP family), “PreferDualStack” (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or “RequireDualStack” (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. |
||
loadBalancerClass | ✓ | string |
loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. “internal-vip” or “example.com/internal-vip”. Unprefixed names are reserved for end-users. This field can only be set when the Service type is ‘LoadBalancer’. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type ‘LoadBalancer’. Once set, it can not be changed. This field will be wiped when a service is updated to a non ‘LoadBalancer’ type. |
||
loadBalancerIP | ✓ | string |
Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version. |
||
loadBalancerSourceRanges | ✓ | []string |
If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ |
||
sessionAffinity | ✓ | string |
Used to maintain session affinity. Supports “ClientIP” and “None”; set to ClientIP to enable client IP based session affinity. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
|
||
sessionAffinityConfig | ✓ | object |
SessionAffinityConfig represents the configurations of session affinity. |
||
type | ✓ | enum |
type determines how the Service is exposed. Defaults to ClusterIP. Valid
options are ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates
a cluster-internal IP address for load-balancing to endpoints.
"NodePort" builds on ClusterIP and allocates a port on every node.
"LoadBalancer" builds on NodePort and creates
an external load-balancer (if supported in the current cloud).
More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Enum: ClusterIP, LoadBalancer, NodePort Default: ClusterIP |
SessionAffinityConfig represents the configurations of session affinity.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
clientIP | ✓ | object |
ClientIPConfig represents the configurations of Client IP based session affinity. |
ClientIPConfig represents the configurations of Client IP based session affinity.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
timeoutSeconds | ✓ | integer |
timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <=86400 (for 1 day) if ServiceAffinity == “ClientIP”. Default value is 10800 (for 3 hours). Format: int32 |
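Custom ports can likewise be added to the replicas service via the customPorts field documented above. A sketch; the port name and numbers are illustrative (remember that StackGres prefixes the port names):
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  postgresServices:
    replicas:
      customPorts:
      - name: my-port # illustrative name; StackGres adds a prefix
        protocol: TCP
        port: 8080
        targetPort: my-port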
Make the cluster a read-only standby replica, allowing it to replicate from another PostgreSQL instance and act as a relay.
Changing this section is allowed in order to fix issues or to change the replication source.
Removing this section converts the cluster into a normal cluster, where the standby leader is promoted to a primary instance.
Example:
From SGCluster instance:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  replicateFrom:
    instance:
      sgCluster: my-cluster
Note: The above example allows replicating from another SGCluster instance that resides in the same namespace and the same K8s cluster.
This option cannot be combined with external instance, storage, or users.
From external instance:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  replicateFrom:
    instance:
      external:
        host: ${HOST_IP}
        port: 5433
    users:
      superuser:
        username:
          name: pg-origin-secret
          key: superuser-username
        password:
          name: pg-origin-secret
          key: superuser-password
      replication:
        username:
          name: pg-origin-secret
          key: replication-username
        password:
          name: pg-origin-secret
          key: replication-password
      authenticator:
        username:
          name: pg-origin-secret
          key: authenticator-username
        password:
          name: pg-origin-secret
          key: authenticator-password
Note: Replace the ${HOST_IP} with the actual IP of the external instance.
From Storage:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  initialData:
    restore:
      fromBackup:
        name: backup-name
  replicateFrom:
    storage:
      path: ${PG_ORIGIN_BACKUP_PATH}
      sgObjectStorage: stackgres-backups
    users:
      superuser:
        username:
          name: pg-origin-secret
          key: superuser-username
        password:
          name: pg-origin-secret
          key: superuser-password
      replication:
        username:
          name: pg-origin-secret
          key: replication-username
        password:
          name: pg-origin-secret
          key: replication-password
      authenticator:
        username:
          name: pg-origin-secret
          key: authenticator-username
        password:
          name: pg-origin-secret
          key: authenticator-password
Note: Using only storage to replicate from requires restoring from a backup in order to bootstrap the database.
Replace the ${PG_ORIGIN_BACKUP_PATH} with the actual path in the object storage where the backups are stored.
From external instance and storage:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  replicateFrom:
    instance:
      external:
        host: ${HOST_IP}
        port: 5433
    storage:
      path: ${PG_ORIGIN_BACKUP_PATH}
      sgObjectStorage: stackgres-backups
    users:
      superuser:
        username:
          name: pg-origin-secret
          key: superuser-username
        password:
          name: pg-origin-secret
          key: superuser-password
      replication:
        username:
          name: pg-origin-secret
          key: replication-username
        password:
          name: pg-origin-secret
          key: replication-password
      authenticator:
        username:
          name: pg-origin-secret
          key: authenticator-username
        password:
          name: pg-origin-secret
          key: authenticator-password
Note: Replace the ${HOST_IP} with the actual IP of the external instance.
Replace the ${PG_ORIGIN_BACKUP_PATH} with the actual path in the object storage where the backups are stored.
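For reference, the pg-origin-secret referenced in the examples above could be created as a plain Kubernetes Secret similar to the following sketch; all usernames and passwords are placeholders that must match the credentials of the source instance:
apiVersion: v1
kind: Secret
metadata:
  name: pg-origin-secret
stringData:
  superuser-username: postgres # placeholder values; replace with the real credentials
  superuser-password: changeme
  replication-username: replicator
  replication-password: changeme
  authenticator-username: authenticator
  authenticator-password: changeme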
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
instance | ✓ | object |
Configure replication from a PostgreSQL instance.
|
||
storage | ✓ | object |
Configure replication from an SGObjectStorage using WAL shipping.
The file structure of the object storage must follow the
WAL-G file structure.
|
||
users | ✓ | object |
Kubernetes SecretKeySelectors that contain the credentials of the users.
|
Configure replication from a PostgreSQL instance.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
external | ✓ | object |
Configure replication from an external PostgreSQL instance.
|
||
sgCluster | ✓ | string |
Configure replication from an SGCluster.
|
Configure replication from an external PostgreSQL instance.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
host | ✓ | ✓ | string |
The host of the PostgreSQL instance to replicate from. |
|
port | ✓ | ✓ | integer |
The port of the PostgreSQL instance to replicate from. |
Configure replication from an SGObjectStorage using WAL shipping.
The file structure of the object storage must follow the WAL-G file structure.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
path | ✓ | ✓ | string |
The path in the SGObjectStorage to replicate from. |
|
sgObjectStorage | ✓ | ✓ | string |
The SGObjectStorage name to replicate from. |
|
performance | ✓ | object |
Configuration that affects the backup network and disk usage performance.
|
Configuration that affects the backup network and disk usage performance.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
downloadConcurrency | ✓ | integer |
Backup storage may use several concurrent streams to read the data. This parameter configures the number of parallel streams to use. By default, it is set to the minimum between the number of files to read and 10.
Minimum: 1 |
||
maxDiskBandwidth | ✓ | integer |
Maximum disk read I/O when performing a backup. In bytes (per second).
|
||
maxNetworkBandwidth | ✓ | integer |
Maximum storage upload bandwidth used when storing a backup. In bytes (per second).
|
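For example, concurrency and bandwidth caps for replication from storage can be tuned as follows. A minimal sketch; the values are illustrative:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  replicateFrom:
    storage:
      path: ${PG_ORIGIN_BACKUP_PATH}
      sgObjectStorage: stackgres-backups
      performance:
        downloadConcurrency: 4 # up to 4 parallel read streams
        maxDiskBandwidth: 104857600 # 100 MiB/s, in bytes per second
        maxNetworkBandwidth: 52428800 # 50 MiB/s, in bytes per second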
Kubernetes SecretKeySelectors that contain the credentials of the users.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
authenticator | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the credentials of the authenticator user used by pgbouncer to authenticate other users.
|
|
replication | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the credentials of the replication user used to replicate from the primary cluster and from replicas of this cluster.
|
|
superuser | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user).
|
A Kubernetes SecretKeySelector that contains the credentials of the authenticator user used by pgbouncer to authenticate other users.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
password | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the password of the user.
|
|
username | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the username of the user.
|
A Kubernetes SecretKeySelector that contains the password of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
A Kubernetes SecretKeySelector that contains the username of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
A Kubernetes SecretKeySelector that contains the credentials of the replication user used to replicate from the primary cluster and from replicas of this cluster.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
password | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the password of the user.
|
|
username | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the username of the user.
|
A Kubernetes SecretKeySelector that contains the password of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
A Kubernetes SecretKeySelector that contains the username of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
A Kubernetes SecretKeySelector that contains the credentials of the superuser (usually the postgres user).
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
password | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the password of the user.
|
|
username | ✓ | ✓ | object |
A Kubernetes SecretKeySelector that contains the username of the user.
|
A Kubernetes SecretKeySelector that contains the password of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
A Kubernetes SecretKeySelector that contains the username of the user.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
key | ✓ | ✓ | string |
The key of the secret to select from. Must be a valid secret key. |
|
name | ✓ | ✓ | string |
Name of the referent. |
This section allows configuring the Postgres replication mode and HA role groups.
The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.
The total number of instances is always specified by .spec.instances.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
groups | ✓ | []object |
StackGres supports replication groups, where a replication group of a specified number of instances can have a different replication role. The main replication group is implicit and contains the total number of instances less the sum of all instances in other replication groups.
|
||
mode | ✓ | string |
The replication mode applied to the whole cluster.
Possible values are:
async
: When in asynchronous mode the cluster is allowed to lose some committed transactions. When the primary server fails or becomes unavailable for any other reason, a sufficiently healthy standby will automatically be promoted to primary. Any transactions that have not been replicated to that standby remain in a “forked timeline” on the primary, and are effectively unrecoverable (the data is still there, but recovering it requires a manual recovery effort by data recovery specialists).
sync
: When in synchronous mode a standby will not be promoted unless it is certain that the standby contains all transactions that may have returned a successful commit status to clients (clients can change the behavior per transaction using PostgreSQL’s synchronous_commit setting). Synchronous mode does not guarantee multi-node durability of commits under all circumstances. When no suitable standby is available, the primary server will still accept writes, but does not guarantee their replication. When the primary fails in this mode, no standby will be promoted. When the host that used to be the primary comes back, it will get promoted automatically, unless a system administrator performed a manual failover. This behavior makes synchronous mode usable with 2-node clusters. When synchronous mode is used and a standby crashes, commits will block until the primary is switched to standalone mode. Manually shutting down or restarting a standby will not cause a commit service interruption: the standby will signal the primary to release itself from synchronous standby duties before PostgreSQL shutdown is initiated.
strict-sync
: When it is absolutely necessary to guarantee that each write is stored durably on at least two nodes, use the strict synchronous mode. This mode prevents synchronous replication from being switched off on the primary when no synchronous standby candidates are available. As a downside, the primary will not be available for writes (unless the Postgres transaction explicitly turns off synchronous_commit), blocking all client write requests until at least one synchronous replica comes up. Note: Because of the way synchronous replication is implemented in PostgreSQL, it is still possible to lose transactions even when using strict synchronous mode. If the PostgreSQL backend is cancelled while waiting to acknowledge replication (as a result of packet cancellation due to client timeout or backend failure), transaction changes become visible to other backends. Such changes are not yet replicated and may be lost in case of standby promotion.
sync-all
: The same as sync, but all replicas are synchronous (the syncInstances field is ignored).
strict-sync-all
: The same as strict-sync, but all replicas are synchronous (the syncInstances field is ignored). |
||
role | ✓ | string |
This role is applied to the instances of the implicit replication group, which is composed of the .spec.instances total less the instances assigned to other replication groups.
Possible values are:
|
||
syncInstances | ✓ | integer |
Number of synchronous standby instances. Must be less than the total number of instances. It is set to 1 by default.
Only settable if mode is sync or strict-sync.
Minimum: 1 Default: 1 |
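For example, a three-instance cluster with one synchronous standby could be declared as follows (a minimal sketch):
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  instances: 3
  replication:
    mode: sync
    syncInstances: 1 # one synchronous standby; must be less than instances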
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
instances | ✓ | ✓ | integer |
Number of StackGres instances for this replication group.
The total number of instances of the cluster is always specified by .spec.instances. |
|
role | ✓ | ✓ | string |
This role is applied to the instances of this replication group.
Possible values are:
|
|
name | ✓ | string |
The name of the replication group. If not set, it defaults to group-<index>. |
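As an illustration, a replication group could be declared as in the following sketch. The group name is arbitrary, and the role value shown is an assumption (see the role description above):
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  instances: 4
  replication:
    groups:
    - name: aux # arbitrary group name
      instances: 1 # the implicit main group keeps the remaining 3 instances
      role: readonly # assumed role value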
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
name | ✓ | ✓ | string |
The name of the extension to install. |
|
postgresVersion | ✓ | ✓ | string |
The postgres major version of the extension to install. |
|
publisher | ✓ | ✓ | string |
The id of the publisher of the extension to install. |
|
repository | ✓ | ✓ | string |
The repository base URL from which the extension will be installed. |
|
version | ✓ | ✓ | string |
The version of the extension to install. |
|
build | ✓ | string |
The build version of the extension to install. |
||
extraMounts | ✓ | []string |
The extra mounts of the extension to install. |
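The table above describes the entries of the Postgres extensions list. A minimal sketch installing one extension, assuming the list lives under spec.postgres.extensions; the extension name and version shown are illustrative:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  postgres:
    version: 'latest'
    extensions:
    - name: postgis # illustrative extension name
      version: '3.3.2' # illustrative version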
Current status of a StackGres cluster.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
arch | ✓ | string |
The architecture on which the cluster has been initialized. |
||
conditions | ✓ | []object |
|
||
dbOps | ✓ | object |
Used by some SGDbOps to indicate the operation configuration and status to the operator.
|
||
labelPrefix | ✓ | string |
The custom prefix that is prepended to all labels. |
||
managedSql | ✓ | object |
This section stores the state of referenced SQL scripts that are applied to the cluster live.
|
||
os | ✓ | string |
The operating system on which the cluster has been initialized. |
||
podStatuses | ✓ | []object |
The list of pod statuses. |
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
lastTransitionTime | ✓ | string |
Last time the condition transitioned from one status to another. |
||
message | ✓ | string |
A human readable message indicating details about the transition. |
||
reason | ✓ | string |
The reason for the condition’s last transition. |
||
status | ✓ | string |
Status of the condition, one of True, False, Unknown. |
||
type | ✓ | string |
Type of deployment condition. |
Used by some SGDbOps to indicate the operation configuration and status to the operator.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
majorVersionUpgrade | ✓ | object |
The major version upgrade configuration and status
|
||
minorVersionUpgrade | ✓ | object |
The minor version upgrade configuration and status
|
||
restart | ✓ | object |
The restart configuration and status
|
||
securityUpgrade | ✓ | object |
The security upgrade configuration and status
|
The major version upgrade configuration and status
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
check | ✓ | boolean |
Run pg_upgrade with check option instead of performing the real upgrade
|
||
clone | ✓ | boolean |
Use --clone option when running pg_upgrade
|
||
dataChecksum | ✓ | boolean |
Indicates if PostgreSQL data checksum is enabled
|
||
encoding | ✓ | string |
The PostgreSQL encoding
|
||
initialInstances | ✓ | []string |
The instances that this operation is targeting
|
||
link | ✓ | boolean |
Use --link option when running pg_upgrade
|
||
locale | ✓ | string |
The PostgreSQL locale
|
||
primaryInstance | ✓ | string |
The primary instance that this operation is targeting
|
||
rollback | ✓ | boolean |
Indicates whether to roll back a previous major version upgrade
|
||
sourceBackupPath | ✓ | string |
The source backup path
|
||
sourcePostgresVersion | ✓ | string |
The source PostgreSQL version
|
||
sourceSgPostgresConfig | ✓ | string |
The source SGPostgresConfig reference
|
||
targetPostgresVersion | ✓ | string |
The target PostgreSQL version
|
The minor version upgrade configuration and status
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
initialInstances | ✓ | []string |
The instances that this operation is targeting
|
||
primaryInstance | ✓ | string |
The primary instance that this operation is targeting
|
||
sourcePostgresVersion | ✓ | string |
Postgres version that is currently running on the cluster
|
||
targetPostgresVersion | ✓ | string |
The desired Postgres version for the cluster
|
The restart configuration and status
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
initialInstances | ✓ | []string |
The instances that this operation is targeting
|
||
primaryInstance | ✓ | string |
The primary instance that this operation is targeting
|
The security upgrade configuration and status
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
initialInstances | ✓ | []string |
The instances that this operation is targeting
|
||
primaryInstance | ✓ | string |
The primary instance that this operation is targeting
|
This section stores the state of referenced SQL scripts that are applied to the cluster live.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
scripts | ✓ | []object |
A list of statuses for script references. |
The status of a script reference.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
completedAt | ✓ | string |
ISO-8601 datetime of when the script execution completed (mutually exclusive with failedAt). |
||
failedAt | ✓ | string |
ISO-8601 datetime of when the script execution failed (mutually exclusive with completedAt). |
||
id | ✓ | integer |
Identifies the associated SGScript entry with the same value in the id field. |
||
scripts | ✓ | []object |
A list of statuses for the script entries of the referenced script. |
||
startedAt | ✓ | string |
ISO-8601 datetime of when the script execution has been started. |
||
updatedAt | ✓ | string |
ISO-8601 datetime of when the last script execution occurred. It is reset each time the referenced SGScript entry is applied. |
The status of a script entry of a referenced script.
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
failure | ✓ | string |
If the script failed, a message describing the failure. |
||
failureCode | ✓ | string |
If the script failed, the error code of the failure. See also https://www.postgresql.org/docs/current/errcodes-appendix.html |
||
id | ✓ | integer |
Identifies the associated script entry with the same value in the id field. |
||
intents | ✓ | integer |
Indicates the number of attempts or failures that have occurred |
||
version | ✓ | integer |
The latest version applied |
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
name | ✓ | ✓ | string |
The name of the pod. |
|
installedPostgresExtensions | ✓ | []object |
The list of Postgres extensions currently installed. |
||
pendingRestart | ✓ | boolean |
Indicates if the pod requires a restart |
||
primary | ✓ | boolean |
Indicates if the pod is the elected primary |
||
replicationGroup | ✓ | integer |
Indicates the replication group this Pod belongs to. |
Property |
Required |
Updatable |
May Require Restart |
Type |
Description |
---|---|---|---|---|---|
name | ✓ | ✓ | string |
The name of the installed extension. |
|
postgresVersion | ✓ | ✓ | string |
The postgres major version of the installed extension. |
|
publisher | ✓ | ✓ | string |
The id of the publisher of the installed extension. |
|
repository | ✓ | ✓ | string |
The repository base URL from which the extension was installed. |
|
version | ✓ | ✓ | string |
The version of the installed extension. |
|
build | ✓ | string |
The build version of the installed extension. |
||
extraMounts | ✓ | []string |
The extra mounts of the installed extension. |
A sidecar container is a container that adds functionality to PostgreSQL or to the cluster infrastructure. Currently StackGres implements the following sidecar containers:
cluster-controller
: This container is always present, and it is not possible to disable it. It serves to reconcile local configurations, collect Pod statuses, and perform local actions (like extension installation, execution of SGScript entries, etc.).
envoy
: This container is always present, and it is not possible to disable it. It serves as an edge proxy from clients to PostgreSQL instances or between PostgreSQL instances. It enables network metrics collection to provide connection statistics.
pgbouncer
: PgBouncer that serves as a connection pooler for the PostgreSQL instances.
prometheus-postgres-exporter
: Postgres exporter that exports metrics for the PostgreSQL instances.
fluent-bit
: Fluent Bit that sends logs to a distributed logs cluster.
postgres-util
: Contains psql and all common PostgreSQL tools in order to perform common administration tasks.
The following example disables all non-essential sidecars:
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: stackgres
spec:
  pods:
    disableConnectionPooling: true
    disableMetricsExporter: true
    disablePostgresUtil: true