This section shows how to create an Oracle Container Engine for Kubernetes (OKE) cluster.
It assumes that you have already created a Virtual Cloud Network (VCN) with the prerequisites for an OKE cluster, and that you have the OCI CLI configured.
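Before you start, you can check that the OCI CLI is configured correctly by running any read-only command, for example listing the availability domains of your tenancy:
oci iam availability-domain list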
The following example creates an OKE cluster in a single availability domain (AD).
1. Create the necessary environment variables, replacing the placeholders with your tenancy information:
export compartment_id=[compartment-OCID]
export vcn_id=[VCN-OCID]
export endpoint_subnet_id=[endpoint-subnet-OCID]
export lb_subnet_id=[loadbalancer-subnet-OCID]
export nodes_subnet_id=[nodes-subnet-OCID]
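If you do not have these OCIDs at hand, you can look them up with the OCI CLI. As a sketch, assuming the VCN and its subnets live in the same compartment, the following commands list them together with their OCIDs:
oci network vcn list \
--compartment-id $compartment_id \
--query 'data[].{name:"display-name",id:id}' \
--output table
oci network subnet list \
--compartment-id $compartment_id \
--vcn-id $vcn_id \
--query 'data[].{name:"display-name",id:id}' \
--output table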
2. Create the Kubernetes cluster:
oci ce cluster create \
--compartment-id $compartment_id \
--kubernetes-version v1.21.5 \
--name stackgres \
--vcn-id $vcn_id \
--endpoint-subnet-id $endpoint_subnet_id \
--service-lb-subnet-ids '["'$lb_subnet_id'"]' \
--endpoint-public-ip-enabled true \
--persistent-volume-freeform-tags '{"stackgres" : "OKE"}'
The output will be similar to this:
{
""opc-work-request-id": "ocid1.clustersworkrequest.oc1.[OCI-Regions].aaaaaaaa2p26em5geexn...""
}
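The cluster is created asynchronously. If you want to wait for it to finish, you can poll the work request returned above (replace the placeholder with the opc-work-request-id value from the output) until its status is SUCCEEDED:
oci ce work-request get \
--work-request-id [work-request-OCID] \
--query data.status \
--raw-output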
3. After the cluster creation completes, create the node pool for the Kubernetes worker nodes:
oci ce node-pool create \
--cluster-id $(oci ce cluster list --compartment-id $compartment_id --name stackgres --lifecycle-state ACTIVE --query data[0].id --raw-output) \
--compartment-id $compartment_id \
--kubernetes-version v1.21.5 \
--name Pool1 \
--node-shape VM.Standard.E4.Flex \
--node-shape-config '{"memoryInGBs": 8.0, "ocpus": 1.0}' \
--node-image-id $(oci compute image list --operating-system 'Oracle Linux' --operating-system-version 7.9 --sort-by TIMECREATED --compartment-id $compartment_id --query data[1].id --raw-output) \
--node-boot-volume-size-in-gbs 50 \
--size 3 \
--placement-configs '[{"availabilityDomain": "'$(oci iam availability-domain list --compartment-id $compartment_id --query data[0].name --raw-output)'", "subnetId": "'$nodes_subnet_id'"}]'
The output will be similar to this:
{
"opc-work-request-id": "ocid1.clustersworkrequest.oc1.[OCI-Regions].aaaaaaaa2p26em5geexn..."
}
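Before running the kubectl commands below, you need a kubeconfig for the new cluster. A minimal sketch, assuming you access the cluster through its public endpoint (which was enabled above):
oci ce cluster create-kubeconfig \
--cluster-id $(oci ce cluster list --compartment-id $compartment_id --name stackgres --lifecycle-state ACTIVE --query data[0].id --raw-output) \
--file $HOME/.kube/config \
--token-version 2.0.0 \
--kube-endpoint PUBLIC_ENDPOINT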
4. After the cluster is provisioned, it is highly recommended to change the default Kubernetes storage class from oci (the legacy FlexVolume-based class) to oci-bv (the CSI block volume class):
kubectl patch storageclass oci -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass oci-bv -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"}}}'
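You can verify that oci-bv is now the default storage class; it should be the entry marked as (default):
kubectl get storageclass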
To clean up the Kubernetes cluster, you may issue the following commands:
Delete the node pool:
oci ce node-pool delete \
--node-pool-id $(oci ce node-pool list --cluster-id $(oci ce cluster list --compartment-id $compartment_id --name stackgres --lifecycle-state ACTIVE --query data[0].id --raw-output) --compartment-id $compartment_id --query data[0].id --raw-output) \
--force
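Node pool deletion is asynchronous, so you may want to confirm it has completed before deleting the cluster. As a sketch, the following list is empty once the node pool is gone:
oci ce node-pool list \
--cluster-id $(oci ce cluster list --compartment-id $compartment_id --name stackgres --lifecycle-state ACTIVE --query data[0].id --raw-output) \
--compartment-id $compartment_id \
--query data[].id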
Delete the Kubernetes Cluster:
oci ce cluster delete \
--cluster-id $(oci ce cluster list --compartment-id $compartment_id --name stackgres --lifecycle-state ACTIVE --query data[0].id --raw-output) \
--force
You may also want to clean up the block volumes used by persistent volumes that may have been created:
Note that this command terminates all block volumes with the freeform tag {"stackgres": "OKE"}. If you have provisioned more than one cluster in the same compartment with the code above, this may delete all of your PV data.
oci bv volume list \
--compartment-id $compartment_id \
--lifecycle-state AVAILABLE \
--query 'data[?"freeform-tags".stackgres == '\''OKE'\''].id' \
| jq -r .[] | xargs -r -n 1 -I % oci bv volume delete --volume-id % --force
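If you want to review which volumes would be affected before deleting anything, you can run the list portion on its own as a dry run:
oci bv volume list \
--compartment-id $compartment_id \
--lifecycle-state AVAILABLE \
--query 'data[?"freeform-tags".stackgres == '\''OKE'\''].id'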