
More on labels
As mentioned previously, labels are just simple key-value pairs. They are available on pods, replication controllers, replica sets, services, and more. If you recall our service YAML, nodejs-rc-service.yaml, there was a selector attribute. The selector attribute tells Kubernetes which labels to use when finding the pods to forward traffic to for that service.
K8s allows users to work with labels directly on replication controllers, replica sets, and services. Let's modify our replication controller and service to include a few more labels. Once again, use your favorite editor to create these two files, naming them nodejs-labels-controller.yaml and nodejs-labels-service.yaml, as follows. First, nodejs-labels-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-labels
  labels:
    name: node-js-labels
    app: node-js-express
    deployment: test
spec:
  replicas: 3
  selector:
    name: node-js-labels
    app: node-js-express
    deployment: test
  template:
    metadata:
      labels:
        name: node-js-labels
        app: node-js-express
        deployment: test
    spec:
      containers:
      - name: node-js-labels
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
Next, nodejs-labels-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: node-js-labels
  labels:
    name: node-js-labels
    app: node-js-express
    deployment: test
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    name: node-js-labels
    app: node-js-express
    deployment: test
Create the replication controller and service as follows:
$ kubectl create -f nodejs-labels-controller.yaml
$ kubectl create -f nodejs-labels-service.yaml
Let's take a look at how we can use labels in everyday management. The following table shows us the options available when selecting by label:

Operator           Description
= or ==            Equality: the label's value matches the given value
!=                 Inequality: the label's value does not match the given value
in                 The label's value is in the given set, for example, name in (a, b)
notin              The label's value is not in the given set
key name alone     Existence: the resource simply has a label with the given key
Let's try looking for replicas with test deployments:
$ kubectl get rc -l deployment=test
The following screenshot is the result of the preceding command:

You'll notice that it only returns the replication controller we just started. How about services with a label named component? Use the following command:
$ kubectl get services -l component
The following screenshot is the result of the preceding command:

Here, we see the core Kubernetes service only. Finally, let's just get the node-js servers we started in this chapter. See the following command:
$ kubectl get services -l "name in (node-js,node-js-labels)"
The following screenshot is the result of the preceding command:

Additionally, we can perform management tasks across a number of pods and services. For example, we can kill all replication controllers that are part of the demo deployment (if we had any running), as follows:
$ kubectl delete rc -l deployment=demo
Or, we can kill all services that are part of a production or test deployment (again, only if we have any running), as follows:
$ kubectl delete service -l "deployment in (test, production)"
It's important to note that, while label selection is quite helpful in day-to-day management tasks, it does require proper deployment hygiene on our part. We need to make sure that we have a tagging standard and that it is actively followed in the resource definition files for everything we run on Kubernetes.
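As a sketch of what such a standard might look like, the metadata block below shows one possible convention. The specific keys here (name, app, deployment) follow the examples in this chapter; none of them are special to Kubernetes, and you could just as easily standardize on keys such as env or tier. Consistency across all of your definition files is what matters:

```yaml
# A hypothetical labeling convention applied to every resource we create.
# Kubernetes does not enforce any of these keys; the value comes from
# using the same keys, with agreed-upon values, everywhere.
metadata:
  name: node-js-labels
  labels:
    name: node-js-labels      # unique name for this resource
    app: node-js-express      # the application this resource belongs to
    deployment: test          # which deployment: test or production
```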
Labels also make it easy to create a service for an individual pod on the fly. For example, we can expose one of the node-js pods we started earlier in this chapter directly, as follows:
$ kubectl expose pods node-js-gxkix --port=80 --name=testing-vip --type=LoadBalancer
This will create a service named testing-vip and also a public VIP (load balancer IP) that can be used to access this pod over port 80. There are a number of other optional parameters that can be used; these can be found with the following command: kubectl expose --help.
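Under the hood, the expose command generates a service definition much like the ones we wrote by hand. A rough sketch of the result is shown below; note that kubectl copies the selector from the pod's own labels, so the exact selector keys depend on how the pod was labeled (the name: node-js value here is an assumption based on the pods started earlier in this chapter):

```yaml
# Approximate equivalent of the kubectl expose command above (sketch only).
apiVersion: v1
kind: Service
metadata:
  name: testing-vip
spec:
  type: LoadBalancer
  ports:
  - port: 80                 # the --port value from the command
  selector:
    name: node-js            # inherited from the pod's labels (assumed)
```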