Lab 5 - Kubernetes Networking

Welcome to lab 5. In this session, the following topics are covered:

  • Kubernetes networking basics;
  • Kubernetes Services;
  • Kubernetes NetworkPolicies;
  • Kubernetes Ingresses.

Kubernetes networking basics

Kubernetes manages several network-based communication models:

  1. Pod-to-Pod;
  2. Container-to-container;
  3. Pod-to-Service;
  4. External traffic to Service.

During this practice, you are going to explore the first and fourth models. The main entities we use during this journey are Service, NetworkPolicy and Ingress.

Services

Services are helpful when a single point of access for a Pod or set of Pods is needed. In the previous labs, you have already created simple Services. Let's have a closer look at the one for Redis:

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP #(1)
  selector: #(2)
    app: redis
  ports: #(3)
  - port: 6379
    targetPort: 6379
  1. spec.type defines in which context the app is available. In the case of ClusterIP, the Service exposes the app within the cluster only;
  2. spec.selector ensures the Service forwards traffic to Pods with the provided labels;
  3. spec.ports specifies the list of ports available to the end user (port field) and maps them to the ports of a container (targetPort field).

Ghostfolio accesses the Redis application through the internal network, hence it makes sense to expose Redis cluster-wide only (type is ClusterIP).

Info

There are different options for Service type:

  1. ClusterIP - Kubernetes assigns an IP from a range available within the cluster only;
  2. NodePort - each node in the Kubernetes cluster reserves the specified ports and forwards traffic to the Service;
  3. LoadBalancer - Kubernetes relies on an external load balancer; not covered in the lab;
  4. ExternalName - a Service is mapped to a specified DNS name; not covered in the lab.

ClusterIP

Complete

Inspect an IP address of the echoserver Service (Test Kubernetes from lab3):

kubectl describe service/echoserver
# Name:              echoserver
# Namespace:         default
# ...
# IP:                10.106.242.153
# IPs:               10.106.242.153
# Port:              <unset>  80/TCP
# ...

The IP above is a cluster-scoped IP, meaning it is available only within the cluster.

Verify

You can check if this endpoint actually works with the curl tool:

# replace the IP address with your Service's IP
curl 10.104.159.196:80
# {"host":{"hostname":"...

As we can see, the endpoint is accessible.

ClusterIP is practical when an app should be available only internally, but it is also possible to allow traffic from outside the cluster. For this, an Ingress resource is required; it is reviewed in the second part of the lab.

Complete

In this section, you need to create a ClusterIP-type Service for the ghostfolio app and validate that it works using curl.

NB: Please, use ghostfolio as a Service name.
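A minimal sketch of such a Service is shown below. It assumes the ghostfolio Pods carry the app: ghostfolio label and the container listens on port 3333 (Ghostfolio's default); adjust the selector and ports to match your deployment from the previous labs.

apiVersion: v1
kind: Service
metadata:
  name: ghostfolio
spec:
  type: ClusterIP
  selector:
    app: ghostfolio # assumes this label is set on the ghostfolio Pods
  ports:
  - port: 3333        # assumed Ghostfolio port, change if your container uses another one
    targetPort: 3333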

Info

You can check only the status code of the response:

curl -I <Service IP>:<Service Port>
# HTTP/1.1 200 OK
# ...

NodePort

The second major type of Service is NodePort, which binds selected ports of each cluster node to ports of a Pod. By default, the node port range is 30000-32767.

Complete

Let's create a NodePort service for echoserver:

apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-nodeport
spec:
  type: NodePort
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001

It works like the previous Service, selecting Pods by the app: echoserver label, but uses a different hostname (echoserver-service-nodeport) for discovery.
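For example, assuming the echoserver Pod from lab 3 is still running in the default namespace, the new hostname can be resolved from inside the cluster:

kubectl exec echoserver -- wget -q -O- http://echoserver-service-nodeport:80/
# {"host":{"hostname":"...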

After inspecting it, you can see the network-specific data:

kubectl describe service/echoserver-service-nodeport
# ...
# IPs:                      10.109.11.56
# Port:                     <unset>  80/TCP
# TargetPort:               80/TCP
# NodePort:                 <unset>  30001/TCP
# ...

Verify

The server should now be accessible via the node address:

curl 0.0.0.0:30001
# {"host":{"hostname":"0.0.0.0"...

Info

For convenience, you can create a security group in ETAIS for your tenant with the 30000-32767 port range and assign the group to your VM. After this, you are able to access the service from your browser (port 30001). Example:

NodePort browser test

Complete

In this section, you need to create a NodePort-type Service for the ghostfolio app and validate that it works using curl on a cluster node.

NB: Please, use ghostfolio-nodeport as a Service name and 30002 as a node port.

You can also access the app via browser.
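A sketch to start from is below; it relies on the same assumptions as the ClusterIP example (app: ghostfolio label, port 3333):

apiVersion: v1
kind: Service
metadata:
  name: ghostfolio-nodeport
spec:
  type: NodePort
  selector:
    app: ghostfolio
  ports:
  - port: 3333 # assumed Ghostfolio port
    targetPort: 3333
    nodePort: 30002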

Headless Services

A headless Service is one without an assigned cluster IP. These Services expose each Pod IP separately. Due to this feature, they are useful for stateful applications (StatefulSet): an end user can access specific Pods for writing and others for reading. An example is PostgreSQL leader-follower replication: all writes should go to the leader, while reads can be handled by the followers.

A headless Service doesn't provide load-balancing capabilities, and a user can access a specific Pod using the hostname format <pod-name>.<service-name>.

Complete

Let's create a new PostgreSQL StatefulSet with a slightly modified config:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-hl-test
spec:
  selector:
    matchLabels:
      app: postgresql-hl-test
  serviceName: postgresql-hl
  replicas: 2
  template:
    metadata:
      labels:
        app: postgresql-hl-test
    spec:
      containers:
      - name: postgresql
        image: bitnami/postgresql:15.4.0
        env:
          - name: POSTGRESQL_USERNAME
            valueFrom:
              secretKeyRef:
                name: postgresql-secret
                key: username
          - name: POSTGRESQL_DATABASE
            valueFrom:
              secretKeyRef:
                name: postgresql-secret
                key: database
          - name: POSTGRESQL_PASSWORD
            valueFrom:
              secretKeyRef:
                name: postgresql-secret
                key: postgresPassword
        ports:
        - containerPort: 5432
          name: postgres-port
        volumeMounts:
        - name: postgresql-data
          mountPath: /bitnami/postgresql
      volumes:
        - name: postgresql-data
          emptyDir: {}

The main difference is in these lines:

...
serviceName: postgresql-hl
replicas: 2
...

Also, let's create a headless Service with the aforementioned name:

apiVersion: v1
kind: Service
metadata:
  name: postgresql-hl
spec:
  clusterIP: None
  selector:
    app: postgresql-hl-test
  ports:
  - port: 5432
    targetPort: 5432

Describing the Service shows 2 endpoints for the postgresql Pods, meaning you can access each one separately.
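The description below can be produced with the usual command (the Service name comes from the manifest above):

kubectl describe service/postgresql-hl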

...
IP:                None
IPs:               None
Port:              <unset>  5432/TCP
TargetPort:        5432/TCP
Endpoints:         10.0.1.181:5432,10.0.1.42:5432
...

For testing, let's use the existing postgresql-0 Pod with the included psql client:

kubectl exec -it postgresql-0 -- /bin/bash
# ...
export PGPASSWORD='postgres-password'
psql -h postgresql-hl-test-0.postgresql-hl -U ghostfolio
# psql (15.4)
# Type "help" for help.

# ghostfolio=>

After running these commands, you should see a successful connection to the database.
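You can also confirm that the headless Service tracks both Pod IPs by listing its Endpoints object:

kubectl get endpoints postgresql-hl
# NAME            ENDPOINTS                        AGE
# postgresql-hl   10.0.1.181:5432,10.0.1.42:5432   2m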

Network policy

Service Discovery in Kubernetes

In Kubernetes, service discovery works in 2 ways:

  1. Referring to a Service by its short name (metadata.name). This works only for Services within the same namespace, for example: echoserver
  2. Referring to a Service by its fully qualified name. This works for cross-namespace discovery and requires the <service-name>.<namespace>.svc.<cluster>:<service-port> format. Example: echoserver.default.svc.cluster.local:80

Complete

Until now, we were using the default namespace. Let's create a new namespace

kubectl create namespace k8s-lab5

and deploy an additional echoserver Pod and Service with the same config as before (the Service should have the ClusterIP type).

kubectl apply -f echoserver-pod.yaml -n k8s-lab5
kubectl apply -f echoserver-service.yaml -n k8s-lab5

Verify

When the Pod is up, connect to the echoserver in the default namespace

kubectl exec -it echoserver -- sh

and send a request to the Pod in k8s-lab5:

wget -q -O- http://echoserver.k8s-lab5.svc.cluster.local:80/

In the browser, you can open the Hubble UI, select the default namespace and view traffic coming from one echoserver Pod to another. If you set up Hubble in the 3rd lab, you can find the UI at http://<CONTROL_PLANE_EXTERNAL_IP>:31000/. The external IP is located in the ETAIS portal (Project -> Resources -> VM -> External IP).

Echoserver Hubble UI

NetworkPolicy Resource

Kubernetes uses NetworkPolicy to isolate Pods from unnecessary network connections. Essentially, it allows controlling traffic between selected Pods and:

  • Pods from the same namespace, filtered by labels;
  • All Pods from other namespaces that have the selected labels;
  • All IP addresses in the provided IP CIDR subnet.

Also, policies can work for both incoming (ingress) and outgoing (egress) traffic.

Complete

An example policy allowing access to echoserver in the default namespace is:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: echoserver-network-policy
spec:
  podSelector:
    matchLabels:
      app: echoserver
  policyTypes:
    - Ingress
  ingress:
    - from:
      - namespaceSelector:
          matchLabels: #(1)
            kubernetes.io/metadata.name: k8s-lab5
      - podSelector:
          matchLabels: #(2)
            app: ghostfolio
      ports:
        - protocol: TCP
          port: 80
  1. All the Pods from the k8s-lab5 namespace can access the Pod
  2. All the Pods from the same namespace with the app=ghostfolio label can access the Pod

Create a network policy via the manifest above in the default namespace.

Verify

Try to access http://echoserver.default.svc.cluster.local:80/ endpoint from echoserver Pod in k8s-lab5:

kubectl exec -it -n k8s-lab5 echoserver -- sh

wget -q -O- http://echoserver.default.svc.cluster.local:80/

The result should be successful.

Now, let's see what happens when a user sends a request to the endpoint from a ghostfolio Pod:

kubectl exec -it deployment/ghostfolio -- bash

apt update && apt install -y curl
curl -v http://echoserver:80/

The request should provide a correct response too. The final check is from the host node:

POD_IP=$(kubectl get pod echoserver -o jsonpath='{.status.podIP}')
SERVICE_IP=$(kubectl get service echoserver -o jsonpath={.spec.clusterIP})

curl http://${POD_IP} --connect-timeout 5
# curl: (28) Connection timed out after 5001 milliseconds
curl http://${SERVICE_IP} --connect-timeout 5
# curl: (28) Connection timed out after 5001 milliseconds

The NetworkPolicy doesn't allow this connection because it is not listed in the rules.

You can also view the graph with dropped connections in the Hubble UI. To show traffic coming from the node, uncheck Visual -> Hide remote node and click the new remote-node block. The result should look like this:

Hubble UI connection dropped

Complete

You need to create network policies for the redis and postgresql Pods. For both policies, isolate the Pods from all connections except the ones coming from the ghostfolio Pod within the same namespace.

NB: please use postgresql-network-policy and redis-network-policy as the names, respectively.
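A minimal sketch of the redis policy is shown below. It assumes the Redis Pods carry the app: redis label and the Ghostfolio Pods carry app: ghostfolio, as in the earlier manifests; the postgresql policy follows the same pattern with your PostgreSQL Pod labels and port 5432.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-network-policy
spec:
  podSelector:
    matchLabels:
      app: redis # assumes this label is set on the Redis Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
      - podSelector: # Pods in the same namespace with the ghostfolio label
          matchLabels:
            app: ghostfolio
      ports:
        - protocol: TCP
          port: 6379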

Ingresses

Ingress is a resource that controls external access to the services in a Kubernetes cluster.

In order to process traffic, a cluster needs an ingress controller to be set up. The primary focus of this lab is the NGINX controller, though Kubernetes supports many others.

Essentially, this controller is a Deployment with a NodePort Service exposing ports 80 and 443.

Complete

First of all, let's install the controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml

All the required resources (Deployment, Service, etc.) should be available in the ingress-nginx namespace.

Verify

When you list Pods in the new ingress-nginx namespace, you should see 2 completed Job Pods and a running controller Pod:

kubectl get pods -n ingress-nginx
# NAME                                        READY   STATUS      RESTARTS   AGE
# ingress-nginx-admission-create-pn564        0/1     Completed   0          2m
# ingress-nginx-admission-patch-hvz7h         0/1     Completed   0          2m
# ingress-nginx-controller-79bc9f5df8-rzdqr   1/1     Running     0          2m

Also, 2 services should exist. We are interested in ingress-nginx-controller:

kubectl get service -n ingress-nginx
# NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
# ingress-nginx-controller             NodePort    10.96.155.146   <none>        80:32375/TCP,443:30650/TCP   135m
# ingress-nginx-controller-admission   ClusterIP   10.96.24.140    <none>        443/TCP                      135m

Before we continue, you should check the node port of the Service (32375 in the example):

NODEPORT=$(kubectl get -n ingress-nginx service/ingress-nginx-controller -o jsonpath='{.spec.ports[0].nodePort}')
echo $NODEPORT
# 32375

This is the node port where the NGINX controller listens for incoming traffic. We are going to use it for Ingress validation later.

Complete

NB: Since the existing echoserver Pod is protected by a network policy, let's delete the policy to simplify testing.

kubectl delete networkpolicy echoserver-network-policy

Let's create an Ingress for echoserver:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver-ingress
  labels:
    name: echoserver-ingress
spec:
  ingressClassName: "nginx"
  rules: #(5)
  - host: echoserver.PUBLIC_NODE_IP.nip.io #(1)
    http:
      paths:
      - pathType: Prefix
        path: "/" #(2)
        backend:
          service:
            name: echoserver #(3)
            port:
              number: 80 #(4)
  1. Hostname for the service. Please replace PUBLIC_NODE_IP with the actual public IP of your control-plane node.
  2. Ingress allows different paths for services; in this lab we use only the root path
  3. Service name to look up
  4. Service port to target
  5. nip.io is a wildcard DNS service that resolves the hostname to the IP embedded in its third domain level (PUBLIC_NODE_IP in the example)

Verify

Some time after creation, you will be able to test the Ingress via the browser. Go to http://echoserver.PUBLIC_NODE_IP.nip.io:NODEPORT, where PUBLIC_NODE_IP is the external node IP you used in the Ingress hostname and NODEPORT is the value we discovered before. You should see a response similar to the one from the NodePort section.
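If you prefer the terminal, roughly the same check can be done with curl, using the NODEPORT variable captured earlier and your actual node IP in place of PUBLIC_NODE_IP:

curl http://echoserver.PUBLIC_NODE_IP.nip.io:${NODEPORT}
# {"host":{"hostname":"...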

Complete

For the last task of the lab, you need to create an Ingress resource for ghostfolio with the same hostname format: ghostfolio.PUBLIC_NODE_IP.nip.io. The result should be available publicly too.

NB: please, use ghostfolio-ingress as a name for the Ingress resource.
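A sketch to start from is below; it assumes the ghostfolio Service created earlier in this lab and port 3333, so adjust the backend to match your Service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghostfolio-ingress
  labels:
    name: ghostfolio-ingress
spec:
  ingressClassName: "nginx"
  rules:
  - host: ghostfolio.PUBLIC_NODE_IP.nip.io # replace PUBLIC_NODE_IP with your node's public IP
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: ghostfolio # the ClusterIP Service created earlier
            port:
              number: 3333 # assumed Ghostfolio port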