Kubernetes and REST
While most users rely on kubectl to interact with Kubernetes, there are use-cases where we need to interact with it programmatically. When we use kubectl, under the covers, it invokes Kubernetes's native REST interface. While it is possible to invoke the REST interface directly, it is often simpler to use a language-specific client library. As a result, the Kubernetes team and the Kubernetes community have produced client libraries for various languages. The Kubernetes team provides clients for dotnet, Go, Haskell, Java, JavaScript, and Python, while the Kubernetes community supports an even more extensive collection of language clients (and, in several cases, competing clients), including Clojure, Lisp, Node.js, Perl, PHP, Ruby, Rust, Scala, and Swift.
Choosing a client library
We have seen that we have many options to choose from when selecting a Kubernetes client. In this article, we will interact with our MicroK8s instance using the Java language. The Java language's popularity has produced several candidates from which to choose. Your first instinct may be to use the Kubernetes team's Java client. After all, who better to create a language-specific client than the authors of Kubernetes itself? While this library is certainly a valid option, it is important to understand how the client is built. The Kubernetes Java client, like its alternative-language siblings, is generated from a common OpenAPI generator script (kubernetes-client/gen). Using this technique, each language client provides a very thin wrapper around the Kubernetes REST API. While using this type of client is undoubtedly preferable to creating our own custom REST client, perhaps there is a better option.
One (very) popular alternative is the Fabric8 Kubernetes Java client. With the Fabric8 Kubernetes Java client, we get much more than a thin wrapper around Kubernetes's REST API. Fabric8 allows us to interact with our Kubernetes instance using a rich Domain Specific Language (DSL). This DSL lets us compose our commands using a fluent API, which reduces code verbosity and leads to more readable code. We can compare the Kubernetes team's client with the Fabric8 client to see the difference. Let's get a list of all pods:
Kubernetes Java Client
V1PodList pods = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null);
Fabric8 Java Client
PodList pods = api.pods().inAnyNamespace().list();
Unless you have a compulsion to count method arguments, the Fabric8 client is easier to write and more enjoyable to read. It also gives us more control over how we create our objects by chaining methods together. To illustrate this, let's create a service with Fabric8:
Service myservice = client.services()
.inNamespace("thinkmicroservices")
.create(new ServiceBuilder()
.withNewMetadata()
.withName("signaling-svc")
.addToLabels("signaling", "another-label")
.endMetadata()
.build()
);
Here we create a new service in the thinkmicroservices namespace using the ServiceBuilder. We configure the ServiceBuilder with a new metadata instance containing the service name and a couple of labels. We call .endMetadata() to complete our metadata construction and call .build() to invoke the builder.
In addition to the DSL, we also get support for mock server testing. Fabric8 provides an extension with a separate DSL for mocking Kubernetes operations, so we can test our code without needing to stand up a Kubernetes instance. Lastly, we also get support for a couple of additional, complementary technologies, including OpenShift, Knative, and Tekton CI/CD.
As you have undoubtedly already suspected, we will be using the Fabric8 Kubernetes Java client in this article.
Kubernetes Configuration information
Before we can begin issuing commands to Kubernetes, our client will need to be configured to locate and securely connect to our instance. For example, in many Kubernetes environments, we can obtain the Kubernetes configuration using the following command:
Terminal
cat ~/.kube/config
However, if you are using MicroK8s, you will need to use a different command:
Terminal
microk8s config
Here we see the contents of our MicroK8s configuration:
Terminal
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBVENDQWVtZ0F3SUJBZ0lKQU4vNGdtNFdxTGJSTUEwR0NTcUdTSWIzRFFFQkN3VUFNQmN4RlRBVEJnTlYKQkFNTURERXdMakUxTWk0eE9ETXVNVEFlRncweU1UQTBNekF4TlRReE16TmFGdzB6TVRBME1qZ3hOVFF4TXpOYQpNQmN4RlRBVEJnTlZCQU1NRERFd0xqRTFNaTR4T0RNdU1UQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFOU3hwR3VwRXNlWk5rN0xubXNGQTJRRElOTUhjWm5lMmU2WjhSMmZVZEZGeEFwaXErZHcKWkVPRnlyc29Wc2ljZ1VRTWtrb043dmxyUkxJdG1xa1RvT2NCbnhJR3M4WHdoUXg2UGRDWVhKVjd5cXlaakMxSwppOHc4Y3RiNWpIOW1MSTBuYXN3WXVhTDZBZHdmcnBmYk01RC9jenBCekttSFlYYVdHb3puYjRYNkppYWlEMWJNCmdZOXN6WW0yN3l3QzFCYVM0WHNid0NWS2dpRTNwa1ZoNFdzNFFoUGtSMzJPNTE4NG5PN0pRVUlCenBIWThySUkKRENpUWVPSnVpTWM3U2kvZDF3cTVNYldERGoxVTlZSVo4Z3NOVEFCbmFUNTc5R3F5OTNBKytMbGVWdlBBWTVpYwo2V05Gd212Z2Z6QlVvb2NERm9abTVLTnJRcnlPZlVBdXVhMENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGSTNBCmJ2UTlWKzJzZHpDVVRJajdsSENKbUx2WU1COEdBMVVkSXdRWU1CYUFGSTNBYnZROVYrMnNkekNVVElqN2xIQ0oKbUx2WU1Bd0dBMVVkRXdRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUJLTlZLT0hESzRXV3NmaApUcnp0NURoa25BbjFCeDVNaGkrUzRiMzdVWHNoc2lHQTJlWFAwMGFOQkZuOTlFenQ1dzJJSE44eStpV3c5SUl5ClV4clRJMHJXNkN5RVoxaHp5ODh0TDkremZvcTdPWjRlRUNqT3phYWNZZ256eDVGQ3FtUml5RVJqenMwV1FMby8KM1luMDhhb2pIVytYSFFnb3ZEZFpZamZ3bG1lY2Y1SEQxZllBVENjYy9UejV2SWo5N2k4ZVNPbVlIWS9IRDV4SQpJUVd0bXNaazY3Y3NyRGhHQmxVU24vS3lVUmtNQ2ZjeGlnblpqMVA2cng3VUM0OTVScHFGQmhSdUVtODZLenFzCmhXUHpjU1owK2hOMklJNzVuaHlkSEpPNk1CNWtpaURLRE5ROXF1Z056RVhjMTNoOWlIS1lxTEhhcUd1VVVvQmoKN095MU5Ebz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.15:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: MFlyQldHMGlQSGovbjZyVUZCQlRNbFRvZ3N4c1hKVE53WUJiSzYzVXRnUT0K
We will copy the contents of the config file and use it when we create our client.
Fabric8 dependency
The first step is, of course, adding the Fabric8 Kubernetes client dependency to our project. The Fabric8 Maven repository contains many Kubernetes-related artifacts, but we will only require:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>${fabric8.kubernetes.java.client.version}</version>
</dependency>
Now that we have that handled, we can create an instance of the client.
Interacting with Kubernetes
Creating the client
The method we choose to create our client will depend on where we intend for it to run. There are three scenarios:
On the Kubernetes host
In this scenario, the client will run colocated with the Kubernetes instance.
try (final KubernetesClient client = new DefaultKubernetesClient()) {
// ...
}
Since the client is running on the same machine, it will have access to the Kubernetes configuration file. The DefaultKubernetesClient class will attempt to access the ~/.kube/config file directly.
On a remote machine
If we are running the client from a remote application, we must provide it with access to the configuration file. We can either read it from a file (e.g., the one we copied from the console earlier):
File file = new File("./client-config");
String kubeconfigContents = Files.readString(file.toPath());
Config config = Config.fromKubeconfig(kubeconfigContents);
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
// ...
}
or we can create a ConfigBuilder and obtain the configuration from a URL:
Config kubeConfig = new ConfigBuilder()
.withMasterUrl("https://192.168.42.20:8443/")
.build();
try (final KubernetesClient client = new DefaultKubernetesClient(kubeConfig)) {
// Do stuff with client
}
which configures the client to connect to the Kubernetes API server at the given URL.
Within a Kubernetes Pod
The last use case is to connect to Kubernetes from within a pod. The code will appear similar to:
try (final KubernetesClient client = new DefaultKubernetesClient()) {
// ...
}
however, rather than reading a ~/.kube/config file, the client will use the ServiceAccount credentials (token and CA certificate) mounted inside the pod.
Examples
The following helper classes demonstrate how to implement several common functions using the Fabric8 client we created in the previous section. Each helper class generally follows the common CRUD pattern when appropriate.
Namespaces
Namespaces allow us to subdivide our cluster into multiple, isolated, virtual clusters. This approach enables us to organize our resources into logical groups, isolate them from resources in other namespaces, and prevent them from exceeding CPU, disk, or object limits.
List Namespaces
We can obtain a list of all namespaces in the cluster with:
public NamespaceList getAllNamespaces() {
return client.namespaces().list();
}
Here we see the fluent API in action. To interact with namespaces, we obtain a handle to the namespaces object directly from our client instance. This pattern is repeated throughout the examples.
To obtain a list of namespaces filtered by a map of labels, we can use:
public NamespaceList getNamespaceListWithLabels(Map<String, String> labelMap) {
return client.namespaces().withLabels(labelMap).list();
}
By adding the withLabels(...) method, we are able to filter which namespaces are returned.
Create a Namespace
We can create a new namespace using:
public Namespace createNamespace(String namespace) {
Namespace newNamespace = new NamespaceBuilder().withNewMetadata().withName(namespace).endMetadata().build();
return client.namespaces().create(newNamespace);
}
Here we pass the create(...) method the new namespace we wish to create.
We can also create a labeled namespace with:
public Namespace createNamespace(String namespace, Map<String, String> labelMap) {
Namespace newNamespace = new NamespaceBuilder().withNewMetadata().withName(namespace).withLabels(labelMap).endMetadata().build();
return client.namespaces().create(newNamespace);
}
Delete a Namespace
When we no longer need the namespace, we can dispose of it with:
public boolean deleteNamespace(String namespace) {
return client.namespaces().withName(namespace).delete();
}
Here, we first obtain the namespace using the withName(...) method and then dispose of it with the delete() call.
Pods
Pods provide a useful abstraction layer above our containers to allow us to deploy one or more containers that are guaranteed to run on the same machine in our cluster.
List pods
To get a list of all the pods in a namespace, we will access the pods() object on our client.
public PodList getAllPodsInNamespace(String namespace) {
return client.pods().inNamespace(namespace).list();
}
After obtaining the pods() object, we filter it using the inNamespace(...) method and then call list() to return the collection of pods.
Create a Pod
Creating a pod is a two-step process. First, we use the PodBuilder class to configure the pod, and then we must create it in a namespace. In this helper method, we create a pod with a single container.
public Pod createPod(String namespace, String podname, String containerName, String imageName, int port) {
Pod pod = new PodBuilder().withNewMetadata().withName(podname).endMetadata()
.withNewSpec()
.addNewContainer()
.withName(containerName)
.withImage(imageName)
.addNewPort()
.withContainerPort(port)
.endPort()
.endContainer()
.endSpec()
.build();
return updatePod(namespace,pod);
}
Here we name the pod by first creating its metadata. We call .withNewMetadata() to instruct the PodBuilder that we are creating a new metadata instance. We then supply the name using the .withName(...) method and close the metadata instance with the .endMetadata() method. This sequence allows us to build the metadata instance within the broader builder instance. This pattern of subordinate builders is common within the PodBuilder and the fluent API in general.
The next step is to define a new Spec instance using the .withNewSpec()/.endSpec() methods. Contained within the Spec, we create a new Container and its associated Port.
Within the container, we declare its name and image using the .withName(...) and .withImage(...) methods, respectively. We then define the container port using the .addNewPort()/.endPort() pair, declaring the container port with the .withContainerPort(...) method inside.
If we wanted to add additional containers to our Pod (for a sidecar container perhaps), we could repeat the steps between the .addNewContainer()/ .endContainer() methods with the appropriate values.
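For example, a minimal sketch of a two-container pod (a main container plus a logging sidecar) might look like the following; the container names and images are purely illustrative:
Pod podWithSidecar = new PodBuilder()
    .withNewMetadata()
        .withName("signaling-pod")
    .endMetadata()
    .withNewSpec()
        // main application container
        .addNewContainer()
            .withName("signaling")
            .withImage("thinkmicroservices/signaling:latest")
            .addNewPort()
                .withContainerPort(8080)
            .endPort()
        .endContainer()
        // sidecar container added by repeating the addNewContainer()/endContainer() pair
        .addNewContainer()
            .withName("log-forwarder")
            .withImage("fluent/fluent-bit:latest")
        .endContainer()
    .endSpec()
    .build();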
Once we have finished configuring the PodBuilder, we invoke the traditional .build() method, and we should have a configured pod instance. This instance, however, is not yet active. We must call the updatePod(...) helper method to deploy it. This helper method is explained in the next section.
Update a Pod
The updatePod() helper method is simply a convenience wrapper.
public Pod updatePod(String namespace, Pod updatePod){
return client.pods().inNamespace(namespace).createOrReplace(updatePod);
}
The method invokes the .createOrReplace(...) method with the configured pod in the declared namespace.
Delete a Pod
When we are done with our pod, we can delete it using the deletePod(...) method.
public boolean deletePod(String namespace, String podname) {
return client.pods().inNamespace(namespace).withName(podname).delete();
}
Here we identify the namespace and podname and call the delete() method. If successful, the method will return true.
Deployments
Deployments let us manage a set of identical pods through a single parent object and scale them up and down.
List deployments
When we need a list of all deployments in a namespace, we can call the DeploymentHelper's getDeploymentsInNamespace method.
public DeploymentList getDeploymentsInNamespace(String namespace) {
return client.apps().deployments().inNamespace(namespace).list();
}
To access deployments, we must first access the apps object on our client. We then filter by namespace with the .inNamespace(...) method before calling .list() to return the collection.
create a deployment
Creating a deployment using the DeploymentHelper requires us to supply quite a large number of arguments. In this helper method, we build a relatively simple deployment.
public Deployment createDeployment(String namespace, String deploymentName, Map<String, String> deploymentLabelMap,
        int replicaCount, Map<String, String> specLabelMap, String containerName, String containerImage, String[] commands,
        Map<String, String> selectorLabelMap) {
Deployment newDeployment = new DeploymentBuilder()
.withNewMetadata()
.withName(deploymentName)
.withNamespace(namespace)
.addToLabels(deploymentLabelMap)
.endMetadata()
.withNewSpec()
.withReplicas(replicaCount)
.withNewTemplate()
.withNewMetadata()
.addToLabels(specLabelMap)
.endMetadata()
.withNewSpec()
.addNewContainer()
.withName(containerName)
.withImage(containerImage)
.withCommand(commands)
.endContainer()
.endSpec()
.endTemplate()
.withNewSelector()
.addToMatchLabels(selectorLabelMap)
.endSelector()
.endSpec()
.build();
return client.apps().deployments().inNamespace(namespace).create(newDeployment);
}
We start with the DeploymentBuilder class. Similar to the PodBuilder, we must supply a new metadata instance as well as a new spec instance. Within the spec instance, we declare our replica count using the .withReplicas(...) method and define a template, which has its own metadata instance containing a label map. We declare our container within the .addNewContainer()/.endContainer() pair and also create a selector using .withNewSelector() and .addToMatchLabels(...). We then call .build() to return our configured Deployment. However, the deployment isn't finished until we call the .create() method.
Rolling Updates for a deployment
When we need to perform a rolling update on our deployment container image, we can call the updateDeployment method.
public Deployment updateDeployment(String namespace, String deploymentName, Map<String, String> containerToImageMap) {
return client.apps().deployments().inNamespace(namespace).withName(deploymentName).rolling().updateImage(containerToImageMap);
}
This method will update our container image with zero downtime by incrementally updating the pod instances with the new container image.
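For example, assuming a DeploymentHelper instance named deploymentHelper and a deployment whose container is named signaling (both names are illustrative), a rolling image update might look like this:
// map each container name to the new image it should run
Map<String, String> containerToImageMap =
        Collections.singletonMap("signaling", "thinkmicroservices/signaling:1.1");

// performs a rolling update of the 'signaling' container in the named deployment
Deployment updated = deploymentHelper.updateDeployment(
        "thinkmicroservices", "signaling-deployment", containerToImageMap);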
Scale a deployment
We can change the number of replicas for an existing deployment with the scaleDeployment(...) method.
public Deployment scaleDeployment(String namespace, String deploymentName, int replicaCount) {
return client.apps().deployments().inNamespace(namespace).withName(deploymentName).edit(
d -> new DeploymentBuilder(d).editSpec().withReplicas(replicaCount).endSpec().build()
);
}
Delete a deployment
When we no longer need our deployment, we can get rid of it using the deleteDeployment(...) method.
public boolean deleteDeployment(String namespace, String deploymentName) {
return client.apps().deployments().inNamespace(namespace).withName(deploymentName).delete();
}
Following the API's standard delete pattern, we filter our deployments by namespace and name and then call the delete() method.
Services
To access a pod, we must define a service to enable networking. The following helper methods illustrate several common service-related operations.
List services
Let's get a list of all the services in a given namespace:
public ServiceList getAllServicesInNamespace(String namespace) {
return client.services().inNamespace(namespace).list();
}
Not much new here. The primary difference is that we are using the services() object.
create a service
To create a service, we will start by using the ServiceBuilder.
public Service createService(String serviceNamespace, String serviceName, String selectorKey, String selectorValue, String portName, ServiceProtocol protocol,
int port, int targetPort, ServiceType type) {
Service service = new ServiceBuilder()
.withNewMetadata()
.withName(serviceName)
.withNamespace(serviceNamespace)
.endMetadata()
.withNewSpec()
.withSelector(Collections.singletonMap(selectorKey, selectorValue))
.addNewPort()
.withName(portName)
.withProtocol(protocol.getProtocol())
.withPort(port)
.withTargetPort(new IntOrString(targetPort))
.endPort()
.withType(type.getType())
.endSpec()
.build();
return client.services().inNamespace(serviceNamespace).withName(serviceName).create(service);
}
After building the metadata object, we create a new spec containing a selector, a named port with the desired protocol, the service port, the target port, and the service type. We then call .build() to generate the service. Once the builder has completed, we must call .create().
update a service
When we need to change a service, we obtain the service instance, make the desired changes, and then call the updateService helper method.
public Service updateService(String namespace, Service newService) {
return client.services().inNamespace(namespace).createOrReplace(newService);
}
Again, we see the use of the createOrReplace(...) method to do the heavy lifting during the update.
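As a sketch, assuming a ServiceHelper instance named serviceHelper that wraps the methods above (the namespace and service name are illustrative), an update might look like this:
// fetch the existing service
Service existing = client.services()
        .inNamespace("thinkmicroservices")
        .withName("signaling-svc")
        .get();

// copy it into a builder, change the service type, and rebuild
Service modified = new ServiceBuilder(existing)
        .editSpec()
            .withType("NodePort")
        .endSpec()
        .build();

// apply the change via createOrReplace(...)
Service result = serviceHelper.updateService("thinkmicroservices", modified);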
Delete a service
When we are finished with the service, we can call the deleteService(...) method.
public boolean deleteService(String namespace, String serviceName) {
return client.services().inNamespace(namespace).withName(serviceName).delete();
}
Once again, we look up the service by namespace and name and call delete().
Jobs
Jobs provide a mechanism for executing specific tasks within the cluster. Each job creates at least one pod and ensures that a specified number of them successfully complete.
List jobs
We can obtain a list of jobs using the JobHelper method getJobsInNamespace(...).
public JobList getJobsInNamespace(String namespace) {
return client.batch().jobs().inNamespace(namespace).list();
}
To access jobs, we must first get an instance of the batch object. We can then filter by namespace and get the collection of jobs using the .list() method.
create a job
Creating a job requires the JobBuilder class. Here we create metadata with the name, labels, and any annotations. We then must create a new spec with the job's container information. We also include an array of commands that will be invoked when the container runs. Additionally, we must include a RestartPolicy to indicate if we want to restart the container if it fails.
public Job createJob(String namespace, String jobName, Map<String, String> labelMap,
        Map<String, String> annotationMap, String containerName, String containerImage,
        String[] containerArgs, RestartPolicy restartPolicy) {
Job job = new JobBuilder()
.withApiVersion(API_BATCH_VERSION_V1)
.withNewMetadata()
.withName(jobName)
.withLabels(labelMap)
.withAnnotations(annotationMap)
.endMetadata()
.withNewSpec()
.withNewTemplate()
.withNewSpec()
.addNewContainer()
.withName(containerName)
.withImage(containerImage)
.withArgs(containerArgs)
.endContainer()
.withRestartPolicy(restartPolicy.getPolicy())
.endSpec()
.endTemplate()
.endSpec()
.build();
return client.batch().jobs().inNamespace(namespace).create(job);
}
Once the builder has completed, we must then call the .create(...) method.
Update a job
We can change a job's configuration and replace it using the updateJob method.
public Job updateJob(String namespace, Job updateJob){
return client.batch().jobs().inNamespace(namespace).createOrReplace(updateJob);
}
The method takes the updated job instance and calls the .createOrReplace(...) method.
Delete a job
Deleting a job requires we supply the deleteJob with both the namespace and name of the job.
public boolean deleteJob(String namespace, String jobName){
return client.batch().jobs().inNamespace(namespace).withName(jobName).delete();
}
The namespace and name allow us to look up the job, and we call the .delete() method to remove it.
Cron Jobs
Cron Jobs are essentially identical to jobs, except they can run at specific times or on a recurring schedule. To schedule a cron job, we supply a schedule string containing the cron expression. This expression contains five fields: minute, hour, day of month, month, and day of week. Here we see several expression examples:

expression | description
--- | ---
* * * * * | run every minute.
0 12 * * ? | run at 12:00 PM every day.
*/5 * * * * | run every 5 minutes.
0 0 1 * * | run at 12:00 AM, on the first day of each month.
List cron jobs
To obtain a list of CronJobs in a particular namespace, the CronHelper class contains the following method:
public CronJobList getCronJobsInNamespace(String namespace) {
return client.batch().cronjobs().inNamespace(namespace).list();
}
create cron jobs
Creating a new CronJob is similar to creating a job with the notable addition of the withSchedule(...) method.
public CronJob createCronJob(String namespace, String cronJobName, Map<String, String> labelMap,
        String cronScheduleString, String containerName, String containerImage,
        String[] containerArgs, RestartPolicy restartPolicy) {
CronJob cronJob1 = new CronJobBuilder()
.withApiVersion(API_BATCH_VERSION_V1)
.withNewMetadata()
.withName(cronJobName)
.withLabels(labelMap)
.endMetadata()
.withNewSpec()
.withSchedule(cronScheduleString)
.withNewJobTemplate()
.withNewSpec()
.withNewTemplate()
.withNewSpec()
.addNewContainer()
.withName(containerName)
.withImage(containerImage)
.withArgs(containerArgs)
.endContainer()
.withRestartPolicy(restartPolicy.getPolicy())
.endSpec()
.endTemplate()
.endSpec()
.endJobTemplate()
.endSpec()
.build();
return client.batch().cronjobs().inNamespace(namespace).create(cronJob1);
}
We supply the withSchedule(...) method with a valid cron expression to indicate when this cron job will fire.
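For example, assuming a CronHelper instance named cronHelper and that the helper's RestartPolicy enum defines an ON_FAILURE constant (all names here are illustrative), a cron job that runs every five minutes could be created like this:
CronJob reportJob = cronHelper.createCronJob(
        "thinkmicroservices",                              // namespace
        "report-cronjob",                                  // cron job name
        Collections.singletonMap("job-kind", "report"),    // labels
        "*/5 * * * *",                                     // run every five minutes
        "report",                                          // container name
        "busybox:latest",                                  // container image
        new String[]{"/bin/sh", "-c", "date"},             // container args
        RestartPolicy.ON_FAILURE);                         // restart policy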
Update cron jobs
We update the cron job the same way we update a regular job.
public CronJob updateJob(String namespace, CronJob updateCronJob) {
return client.batch().cronjobs().inNamespace(namespace).createOrReplace(updateCronJob);
}
After obtaining the cron job instance we wish to update, we modify the instance and call the updateJob(...) helper method to make our changes real.
Delete cron jobs
To delete a cron job, we simply supply the namespace and the cron job name to the deleteCronJob(...) method.
public boolean deleteCronJob(String namespace, String cronJobName) {
return client.batch().cronjobs().inNamespace(namespace).withName(cronJobName).delete();
}
If successful, the boolean result will be true. A failure to delete will return a value of false.
ConfigMaps
ConfigMaps are key-value dictionary objects that store configuration information. These maps can be injected into a container's environment at runtime.
List config maps
We can get a listing of all configuration maps using the ConfigMapHelper getAllConfigMaps() method.
public ConfigMapList getAllConfigMaps() {
return client.configMaps().inAnyNamespace().list();
}
We can filter our listing by namespace using the getConfigMapsByNamespace(...) method:
public ConfigMapList getConfigMapsByNamespace(String namespace) {
return client.configMaps().inNamespace(namespace).list();
}
We can obtain a specific ConfigMap by supplying the namespace and configMapName to the getConfigMap(...) method.
public ConfigMap getConfigMap(String namespace, String configMapName) {
return client.configMaps().inNamespace(namespace).withName(configMapName).get();
}
Create config map
Unlike most Kubernetes objects, ConfigMaps do not contain a spec. Instead, ConfigMaps contain a key-value data container. We can create a ConfigMap by providing the namespace, configMapName, and a configMapData map object containing the desired data.
public ConfigMap createConfigMap(String namespace, String configMapName, Map<String, String> configMapData) {
ConfigMap configMap = new ConfigMapBuilder()
.withNewMetadata()
.withName(configMapName)
.endMetadata()
.addToData(configMapData)
.build();
return client.configMaps().inNamespace(namespace).create(configMap);
}
The createConfigMap(...) method uses the ConfigMapBuilder to create the configured ConfigMap and calls the create(...) method to make it active.
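For example, assuming a ConfigMapHelper instance named configMapHelper (the namespace, name, and keys are illustrative):
// build a simple key-value data set for the ConfigMap
Map<String, String> settings = new HashMap<>();
settings.put("LOG_LEVEL", "info");
settings.put("FEATURE_FLAG", "true");

ConfigMap created = configMapHelper.createConfigMap(
        "thinkmicroservices", "signaling-config", settings);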
Update config maps
We can modify our ConfigMap data and call the updateConfigMap(...) helper method to make our changes active.
public ConfigMap updateConfigMap(String namespace, ConfigMap updatedConfigMap) {
return client.configMaps().inNamespace(namespace).createOrReplace(updatedConfigMap);
}
Delete config maps
Calling the deleteConfigMap(...) method will remove the ConfigMap from the cluster.
public boolean deleteConfigMap(String namespace, String configMapName) {
return client.configMaps().inNamespace(namespace).withName(configMapName).delete();
}
Secrets
Secrets are similar to ConfigMaps but are used to hold sensitive data (e.g., passwords, tokens, keys, etc.). When mounted into a pod, Kubernetes backs secrets with tmpfs volumes rather than writing them to the node's disk, so they won't be left behind after the container is destroyed.
List Secrets
We can programmatically get all secrets using the SecretHelper class's getAllSecrets() method.
public SecretList getAllSecrets() {
return client.secrets().inAnyNamespace().list();
}
We can filter our list by namespace using the getAllSecretsInNamespace(...) method.
public SecretList getAllSecretsInNamespace(String namespace) {
return client.secrets().inNamespace(namespace).list();
}
We can also get a specific Secret using the getSecret(...) method.
public Secret getSecret(String namespace, String secretName) {
return client.secrets().inNamespace(namespace).withName(secretName).get();
}
Create Secrets
The createSecret(...) method arguments are identical to the createConfigMap(...) method.
public Secret createSecret(String namespace, String secretName, Map<String, String> dataMap) {
Secret secret = new SecretBuilder()
.withNewMetadata()
.withName(secretName)
.endMetadata()
.addToData(dataMap)
.build();
return client.secrets().inNamespace(namespace).create(secret);
}
The method uses the SecretBuilder class and then calls the create(...) method.
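Note that the values placed in a Secret's data map are expected to be base64-encoded. As a sketch, assuming a SecretHelper instance named secretHelper (the namespace, secret name, and credential are illustrative):
// Secret 'data' values must be base64-encoded before they are added
Map<String, String> secretData = new HashMap<>();
secretData.put("db-password",
        Base64.getEncoder().encodeToString("s3cr3t".getBytes(StandardCharsets.UTF_8)));

Secret created = secretHelper.createSecret(
        "thinkmicroservices", "db-credentials", secretData);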
Update Secrets
We can modify our Secrets by first obtaining the instance, updating the relevant fields, and then calling the updateSecret(...) method.
public Secret updateSecret(String namespace, Secret updatedSecret) {
return client.secrets().inNamespace(namespace).createOrReplace(updatedSecret);
}
Delete Secrets
Lastly, we can delete a particular Secret by calling the deleteSecret(...) method.
public boolean deleteSecret(String namespace, String secretName) {
return client.secrets().inNamespace(namespace).withName(secretName).delete();
}
If successful, the method will return true. If the method fails, it will return false.
Loading Resources from YAML
While the programmatic creation of resources using the fluent API is awesome, we may already have a YAML description of the resource. Rather than translating the YAML into its fluent counterpart, we can load it directly from a file (or input stream). Each helper class includes a YAML loader. This example is from the NamespaceHelper class.
public Namespace loadNamespaceFromYAML(String filename) throws FileNotFoundException {
return client.namespaces().load(new FileInputStream(filename)).get();
}
Each helper class's YAML loader follows a similar pattern: we get the namespaces object and call load with the supplied filename. The other helper classes simply substitute their corresponding resource type for the namespace.
Obtaining a Resource's YAML
In addition to loading YAML from a filesystem, each helper class also provides a mechanism for generating the YAML representation of an object. In this example, we are getting the YAML for a namespace object.
public String getNamespaceYAML(String namespace, boolean withRuntimeState) throws JsonProcessingException {
Namespace foundNamespace = this.getNamespace(namespace);
if (withRuntimeState) {
return SerializationUtils.dumpAsYaml(foundNamespace);
} else {
return SerializationUtils.dumpWithoutRuntimeStateAsYaml(foundNamespace);
}
}
When the withRuntimeState boolean is true, we get a YAML file that looks like this:
---
apiVersion: "v1"
kind: "Namespace"
metadata:
  creationTimestamp: "2020-11-16T03:06:19.105061394Z"
  generation: 1
  labels:
    key1: "value1"
    key2: "value2"
  name: "default-test-ns"
  resourceVersion: "1"
  uid: "2be32a6e-f68e-4a73-a27b-3198fe77cedd"
When we run it with the withRuntimeState boolean set to false, our YAML file contains no runtime state:
---
apiVersion: "v1"
kind: "Namespace"
metadata:
  labels:
    key1: "value1"
    key2: "value2"
  name: "default-test-ns"
Like the YAML loader methods, each helper class follows a pattern similar to the getNamespaceYAML() method.
While this list of examples is by no means comprehensive, it does demonstrate how easily we can programmatically implement many common Kubernetes actions. For more examples, refer to the Fabric8 Kubernetes Client cheatsheet.
Testing with the Fabric8 mock server
As mentioned earlier, one of the benefits of using Fabric8 is its support for testing via its Kubernetes Mock Server. The mock server allows us to test our client code without the startup and resource costs associated with even the smallest Kubernetes instance. Writing tests that use the mock server is surprisingly simple. Unsurprisingly, the first thing we need to do is add the following dependency:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-server-mock</artifactId>
    <version>${fabric8.kubernetes.server.mock.version}</version>
    <scope>test</scope>
</dependency>
Mock Server annotations
The following code excerpt contains everything we need to add to our test class to enable the mock server client.
@EnableKubernetesMockClient(https=true, crud = true)
public class JobHelperTest {
static KubernetesClient client;
The @EnableKubernetesMockClient annotation will automatically inject the mock client into our unit test's static KubernetesClient field (client). When we call methods on the client field, the mock server will respond. The @EnableKubernetesMockClient annotation includes two elements, https=true and crud=true, which we use to configure the mock client.
By default, https is set to true, while crud is set to false. Setting crud = true in the annotation configures the mock server to persist our CRUD operation models between client invocations. This approach allows us to simulate a working Kubernetes instance (at least as far as the client is concerned). We can then sequence our CRUD-related tests to expect a specific state from a previous test. To sequence our test methods, the example uses JUnit's org.junit.jupiter.api.Order annotation. The following example contains a rudimentary test of the NamespaceHelper class.
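Here is a minimal sketch of such a test, assuming NamespaceHelper accepts a KubernetesClient in its constructor and exposes the createNamespace, getAllNamespaces, and deleteNamespace methods shown earlier:
@EnableKubernetesMockClient(https = true, crud = true)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class NamespaceHelperTest {

    // injected automatically by the mock client extension
    static KubernetesClient client;

    NamespaceHelper namespaceHelper;

    @BeforeEach
    void setUp() {
        // assumes NamespaceHelper wraps the injected client
        namespaceHelper = new NamespaceHelper(client);
    }

    @Test
    @Order(1)
    void createNamespace() {
        Namespace created = namespaceHelper.createNamespace("test-ns");
        assertNotNull(created);
    }

    @Test
    @Order(2)
    void listNamespaces() {
        // crud=true persists the namespace created in the previous test
        NamespaceList namespaces = namespaceHelper.getAllNamespaces();
        assertEquals(1, namespaces.getItems().size());
    }

    @Test
    @Order(3)
    void deleteNamespace() {
        assertTrue(namespaceHelper.deleteNamespace("test-ns"));
    }
}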
We can see from this example that testing code using the Fabric8 mock server requires minimal effort.