If you have followed the instructions in the previous article, you should be the proud owner of your very own MicroK8s Kubernetes instance. Now the hard work starts. In this article, we will create the Kubernetes configuration manifests necessary to run the reference implementation. In previous articles, when we built a new service, we would edit the reference implementation's docker-compose.yaml file and include our new service configuration. With Kubernetes, the process is more involved. To understand why, we will start by reviewing how Kubernetes differs from Docker Compose.
With Docker Compose, we define an application composed of multiple services in a single file. Each application service maps to a single container image deployed 1-N times. These application services then execute together on a single machine. This approach allows the application to share a single, common network between containers and enables it to be started and stopped with a single command (up/down). While this approach simplifies the process of defining an application from a collection of services, it limits both fault tolerance and scaling. Even though a crashed service can be restarted, there is no recourse when the host itself crashes. Additionally, while we can scale the application by deploying multiple instances of its services, scaling is constrained by the hard limits of the host's memory, CPU, and I/O.
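For reference, a single service entry in a docker-compose.yaml of the kind described above looks roughly like this; the service name, image, and ports here are hypothetical placeholders, not the actual reference implementation configuration:

version: "3"
services:
  order-service:                                     # hypothetical service name
    image: registry.example.com/order-service:1.0    # hypothetical image
    ports:
      - "8080:8080"                                  # host:container port mapping
    environment:
      - LOG_LEVEL=info                               # example configuration value
    depends_on:
      - mongodb                                      # another service defined in the same file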
Kubernetes, on the other hand, is intended to run on multiple machines (referred to as nodes). This is what gives Kubernetes its ability to scale, its high availability, and its fault tolerance. Unfortunately, it comes at the cost of increased configuration complexity. Sadly, we bid farewell to the simplicity of our Docker Compose file and wade into the deep waters of Kubernetes Objects and Kubernetes Manifests.
Kubernetes Objects
In our docker-compose.yaml file, the service is the center of our focus. For every Docker image that composes our application, we add a single service declaration that identifies it by name, its source image, and relevant configuration. With Kubernetes, the service definition becomes more granular. Each Docker Compose service declaration may map to one or more Kubernetes Objects.

What is a Kubernetes Object? A Kubernetes object is any persistent entity in the Kubernetes system that represents the state of the cluster. These entities describe what application components are running, what resources have been made available to the application, and any policies applied to the application.
We declare these objects to Kubernetes, and it works to ensure that those objects exist for the life of the application. Each object is treated as a record of intent: it instructs the Kubernetes system what the desired state of the cluster should be. Each of these objects includes two child objects: the Object Spec and the Status. The Object Spec describes the resource's desired state, and the Status describes the current state of the object within the Kubernetes system. The Kubernetes Control Plane monitors the state of each object and attempts to achieve the target state.
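To make the spec/status split concrete, here is a minimal sketch of a Deployment object (the names and image are hypothetical). We author only the spec; the status block shown in the trailing comment is written back by the Control Plane and is never part of our manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment        # hypothetical name
spec:                             # desired state, declared by us
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.19       # hypothetical image
# status:                         # reported by the Control Plane, never authored by us
#   availableReplicas: 3
#   readyReplicas: 3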
We can create individual objects from the command line (using microk8s kubectl), or we can describe them using Kubernetes manifest files. Both approaches are valid; however, describing objects in manifest files captures all the relevant information in a file that can be easily managed by version control systems and Continuous Integration/Continuous Deployment pipelines.
Kubernetes Manifests
A Kubernetes manifest is a description of an API Object written in YAML or JSON (we will be using YAML exclusively). We use these manifests to create, modify, and delete our application's Kubernetes resources. Each application is composed of a collection of manifest files that describe the required objects. To create a particular resource object, we pass the manifest file to Kubernetes through kubectl using the apply command:

microk8s kubectl apply -f resource-manifest.yaml

To delete the resource, we pass the manifest file to Kubernetes through kubectl using the delete command:

microk8s kubectl delete -f resource-manifest.yaml
Each Kubernetes manifest file mandates the following four fields:
- apiVersion - Specifies which version of the Kubernetes API you are using to create the object. For a complete list of available versions, refer to the Kubernetes API Overview.
- kind - Selects the class of resource object you are declaring. We can list the resource types supported by our Kubernetes instance through kubectl:
microk8s kubectl api-resources
NAME                              SHORTNAMES  APIGROUP                      NAMESPACED  KIND
bindings                                                                    true        Binding
componentstatuses                 cs                                        false       ComponentStatus
configmaps                        cm                                        true        ConfigMap
endpoints                         ep                                        true        Endpoints
events                            ev                                        true        Event
limitranges                       limits                                    true        LimitRange
namespaces                        ns                                        false       Namespace
nodes                             no                                        false       Node
persistentvolumeclaims            pvc                                       true        PersistentVolumeClaim
persistentvolumes                 pv                                        false       PersistentVolume
pods                              po                                        true        Pod
podtemplates                                                                true        PodTemplate
replicationcontrollers            rc                                        true        ReplicationController
resourcequotas                    quota                                     true        ResourceQuota
secrets                                                                     true        Secret
serviceaccounts                   sa                                        true        ServiceAccount
services                          svc                                       true        Service
mutatingwebhookconfigurations                 admissionregistration.k8s.io  false       MutatingWebhookConfiguration
validatingwebhookconfigurations               admissionregistration.k8s.io  false       ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds    apiextensions.k8s.io          false       CustomResourceDefinition
apiservices                                   apiregistration.k8s.io        false       APIService
controllerrevisions                           apps                          true        ControllerRevision
daemonsets                        ds          apps                          true        DaemonSet
deployments                       deploy      apps                          true        Deployment
replicasets                       rs          apps                          true        ReplicaSet
statefulsets                      sts         apps                          true        StatefulSet
tokenreviews                                  authentication.k8s.io         false       TokenReview
localsubjectaccessreviews                     authorization.k8s.io          true        LocalSubjectAccessReview
selfsubjectaccessreviews                      authorization.k8s.io          false       SelfSubjectAccessReview
selfsubjectrulesreviews                       authorization.k8s.io          false       SelfSubjectRulesReview
subjectaccessreviews                          authorization.k8s.io          false       SubjectAccessReview
horizontalpodautoscalers          hpa         autoscaling                   true        HorizontalPodAutoscaler
cronjobs                          cj          batch                         true        CronJob
jobs                                          batch                         true        Job
certificatesigningrequests        csr         certificates.k8s.io           false       CertificateSigningRequest
leases                                        coordination.k8s.io           true        Lease
bgpconfigurations                             crd.projectcalico.org         false       BGPConfiguration
bgppeers                                      crd.projectcalico.org         false       BGPPeer
blockaffinities                               crd.projectcalico.org         false       BlockAffinity
clusterinformations                           crd.projectcalico.org         false       ClusterInformation
felixconfigurations                           crd.projectcalico.org         false       FelixConfiguration
globalnetworkpolicies                         crd.projectcalico.org         false       GlobalNetworkPolicy
globalnetworksets                             crd.projectcalico.org         false       GlobalNetworkSet
hostendpoints                                 crd.projectcalico.org         false       HostEndpoint
ipamblocks                                    crd.projectcalico.org         false       IPAMBlock
ipamconfigs                                   crd.projectcalico.org         false       IPAMConfig
ipamhandles                                   crd.projectcalico.org         false       IPAMHandle
ippools                                       crd.projectcalico.org         false       IPPool
networkpolicies                               crd.projectcalico.org         true        NetworkPolicy
networksets                                   crd.projectcalico.org         true        NetworkSet
endpointslices                                discovery.k8s.io              true        EndpointSlice
events                            ev          events.k8s.io                 true        Event
ingresses                         ing         extensions                    true        Ingress
nodes                                         metrics.k8s.io                false       NodeMetrics
pods                                          metrics.k8s.io                true        PodMetrics
ingressclasses                                networking.k8s.io             false       IngressClass
ingresses                         ing         networking.k8s.io             true        Ingress
networkpolicies                   netpol      networking.k8s.io             true        NetworkPolicy
runtimeclasses                                node.k8s.io                   false       RuntimeClass
poddisruptionbudgets              pdb         policy                        true        PodDisruptionBudget
podsecuritypolicies               psp         policy                        false       PodSecurityPolicy
clusterrolebindings                           rbac.authorization.k8s.io     false       ClusterRoleBinding
clusterroles                                  rbac.authorization.k8s.io     false       ClusterRole
rolebindings                                  rbac.authorization.k8s.io     true        RoleBinding
roles                                         rbac.authorization.k8s.io     true        Role
priorityclasses                   pc          scheduling.k8s.io             false       PriorityClass
csidrivers                                    storage.k8s.io                false       CSIDriver
csinodes                                      storage.k8s.io                false       CSINode
storageclasses                    sc          storage.k8s.io                false       StorageClass
volumeattachments                             storage.k8s.io                false       VolumeAttachment
As you can see from this listing, the API for this instance contains 70 resource types.
- metadata - Data that helps uniquely identify the object, including its name, UID, and optional namespace. The name allows other objects to refer to this object.
- spec - The contents of the spec are determined by the kind field. For detailed descriptions of the available spec fields, you can refer to the online Kubernetes API Overview, or you can request the details from your Kubernetes instance directly through kubectl:
microk8s kubectl explain <api-resource-type>
To view the spec for the Service API object:

microk8s kubectl explain service.spec
KIND:     Service
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     ServiceSpec describes the attributes that a user creates on a service.

FIELDS:
   clusterIP    <string>
     clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   externalIPs    <[]string>
     externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.

   externalName    <string>
     externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service. No proxying will be involved. Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and requires Type to be ExternalName.

   externalTrafficPolicy    <string>
     externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. "Local" preserves the client source IP and avoids a second hop for LoadBalancer and Nodeport type services, but risks potentially imbalanced traffic spreading. "Cluster" obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.

   healthCheckNodePort    <integer>
     healthCheckNodePort specifies the healthcheck nodePort for the service. If not specified, HealthCheckNodePort is created by the service api backend with the allocated nodePort. Will use user-specified nodePort value if specified by the client. Only effects when Type is set to LoadBalancer and ExternalTrafficPolicy is set to Local.

   ipFamily    <string>
     ipFamily specifies whether this Service has a preference for a particular IP family (e.g. IPv4 vs. IPv6) when the IPv6DualStack feature gate is enabled. In a dual-stack cluster, you can specify ipFamily when creating a ClusterIP Service to determine whether the controller will allocate an IPv4 or IPv6 IP for it, and you can specify ipFamily when creating a headless Service to determine whether it will have IPv4 or IPv6 Endpoints. In either case, if you do not specify an ipFamily explicitly, it will default to the cluster's primary IP family. This field is part of an alpha feature, and you should not make any assumptions about its semantics other than those described above. In particular, you should not assume that it can (or cannot) be changed after creation time; that it can only have the values "IPv4" and "IPv6"; or that its current value on a given Service correctly reflects the current state of that Service. (For ClusterIP Services, look at clusterIP to see if the Service is IPv4 or IPv6. For headless Services, look at the endpoints, which may be dual-stack in the future. For ExternalName Services, ipFamily has no meaning, but it may be set to an irrelevant value anyway.)

   loadBalancerIP    <string>
     Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.

   loadBalancerSourceRanges    <[]string>
     If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

   ports    <[]Object>
     The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   publishNotReadyAddresses    <boolean>
     publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior.

   selector    <map[string]string>
     Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/

   sessionAffinity    <string>
     Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   sessionAffinityConfig    <Object>
     sessionAffinityConfig contains the configurations of session affinity.

   topologyKeys    <[]string>
     topologyKeys is a preference-order list of topology keys which implementations of services should use to preferentially sort endpoints when accessing this Service, it can not be used at the same time as externalTrafficPolicy=Local. Topology keys must be valid label keys and at most 16 keys may be specified. Endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no backends for that client and connections should fail. The special value "*" may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list. If this is not specified or empty, no topology constraints will be applied.

   type    <string>
     type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types