
22 February 2024

Managing network policies in Kubernetes is crucial for controlling traffic between pods and limiting an attacker's ability to move laterally after compromising an application.

In this article, I will explain how to manage network policies easily, introducing Kyverno as a solution.

Navigating Network Policies in Kubernetes

Brief Introduction to Kubernetes Internal Network

In Kubernetes, pods communicate over an internal network. Each pod has its own unique IP address, enabling direct communication. Kubernetes assigns Cluster IPs to services, providing stable internal addresses for inter-pod communication. You can find more information on how Kubernetes architecture works in the attached article.
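
You can observe both kinds of addresses directly with kubectl (a minimal illustration; the names shown will of course depend on your cluster):

# Each pod's unique IP address appears in the IP column
kubectl get pods -o wide

# Each service's stable ClusterIP appears in the CLUSTER-IP column
kubectl get services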


CNI and Network Policy

CNI (Container Network Interface) plugins define how the container runtime establishes networking for pods. They are mainly used to manage networking within your cluster. Indeed, Kubernetes does not implement CNI plugins itself. That’s why you need to install one yourself. You may already have heard of Azure CNI or AWS-VPC-CNI in managed Kubernetes clusters such as AKS and EKS; these are CNI plugins.

CNI plugins are responsible for setting up the network stack for pods, assigning them an IP address, setting up routes, etc. Some CNI plugins also support network policies. When you create a NetworkPolicy in Kubernetes, the Kubernetes API server will store it, but the actual enforcement of the policy is done by the CNI plugin.
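
This separation of duties matters in practice: if your CNI plugin does not support network policies, the API server will still happily accept and store your NetworkPolicy objects, but nothing will enforce them. Listing the stored objects is therefore not proof of enforcement:

# Lists the NetworkPolicy objects stored by the API server; whether they
# are actually enforced depends entirely on the CNI plugin
kubectl get networkpolicies --all-namespaces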

Using a CNI plugin that supports them is hence mandatory to work with Network Policies in your cluster. It ensures the enforcement of the desired communication rules.

You can find more information on CNI in the attached article from Kim on Padok’s blog.

Here is a list of different CNIs you can use to manage network policies:

  • Calico: Known for its reliability and support for Network Policies. Calico offers its own API to manage network policies. It is powerful but can add a significant cost to your overall cluster expenses.
  • Cilium: Solid CNI plugin for managing Network Policies, which also lets you write policies at OSI layer 7 (the application layer).
  • Weave is another CNI that enforces network policies; Flannel does not enforce them by itself and is usually paired with Calico (as Canal) for that purpose.

In this article we will be using AWS-VPC-CNI, which can now manage network policies.

What are Network Policies Under the Hood?

Network Policies operate as a set of rules applied to the networking layer of a Kubernetes cluster. These rules specify how pods can communicate with each other and external endpoints, adding a layer of control beyond the default behavior.

AWS-VPC-CNI uses eBPF to enforce network policies. eBPF enables customized packet analysis directly in the Linux kernel: with dynamic filters, it grants administrators granular control over traffic, supporting the creation of personalized network policies for Kubernetes.

Its seamless integration enhances existing policies, offering unmatched flexibility, low overhead, and deep visibility into network traffic.

Network Policy Deployment

How to configure AWS CNI to manage network policies

To let AWS-VPC-CNI manage network policies, we need to install it in our cluster and enable its network policy feature.

You can find more information in the AWS documentation on how to install it as a plugin or by using eksctl.

In our case, we used Helm to deploy it:


helm repo add eks https://aws.github.io/eks-charts
helm install aws-vpc-cni eks/aws-vpc-cni \
  --namespace kube-system \
  --set enableNetworkPolicy=true
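
Once installed, you can check that the CNI DaemonSet is running and that network policy support was enabled (a quick sanity check; the exact container names may vary between chart versions):

# The chart deploys the CNI as the aws-node DaemonSet in kube-system
kubectl get daemonset aws-node -n kube-system

# With enableNetworkPolicy=true, a network policy agent container should
# appear alongside the CNI container in the pod template
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[*].name}'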

The Science of Network Policies

Network Policies function by employing labels and selectors to define rules.

Network Policies have two types: Ingress and Egress. In one case, you allow ingress communication to a pod; in the other, you allow egress communication from a pod.

Hence, you may have understood that network policies work on an “allow” basis. Whenever a network policy selects a pod, that pod is denied all traffic of the policy’s type (ingress or egress) except what you explicitly allow.

For example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-dns
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
    - Ingress
  ingress:
    - from:
      - podSelector: {}
        namespaceSelector: {}
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP

This policy allows ingress communication (i.e., traffic to the pod) for pods labeled k8s-app: kube-dns in the kube-system namespace, from all pods in all namespaces, on port 53 (UDP and TCP). All other ingress traffic to those pods is blocked.

This policy means exactly the following:

[Figure: diagram illustrating the allow-kube-dns policy above]
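
You can verify the policy behaves as expected by running a throwaway pod and querying the DNS service (a hypothetical check; the dnsutils image below is just one example of an image that ships nslookup):

# Start a temporary pod and resolve a cluster-internal name through kube-dns;
# this should succeed because port 53 (UDP/TCP) is explicitly allowed
kubectl run -it --rm dns-test \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  -- nslookup kubernetes.default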

The Challenge of Applying Network Policies on a Big Cluster

Applying static Network Policies may be straightforward, but applying them to constantly evolving resources is a headache for SecOps. The constant evolution of clusters leads to significant operational and Network Policy management difficulties, which can sometimes lead SecOps to neglect this aspect of securing Kubernetes clusters.

For example, if I wanted to apply a policy that denies egress traffic to the AWS instance metadata endpoint (you can understand why it is important in this article from Thibault), I would have to apply the following policy as many times as I have namespaces in my cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
  namespace: <ns>
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32

You could say a bash loop would help, but what if a DevOps engineer creates a new namespace in the cluster after my implementation? How do I apply my network policy automatically to the newly created namespace?
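
For completeness, here is roughly what that stopgap loop would look like (a sketch; it assumes the manifest above is saved as deny-metadata-access.yaml with the namespace field removed, so that -n sets it):

# Apply the policy to every namespace that exists right now;
# any namespace created afterwards is not covered
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -n "$ns" -f deny-metadata-access.yaml
done

This covers existing namespaces only, which is exactly the gap Kyverno closes below.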

Now that we have highlighted the difficulties SecOps may encounter when implementing network policies, we'll discuss how Kyverno simplifies the management and automated adaptation of Network Policies in a constantly evolving Kubernetes cluster.

Some CNI integrations, such as Cilium or Calico, offer the possibility to implement non-namespaced, cluster-wide network policies. However, our use case involves using the official networking.k8s.io API for network policies, which is the one supported by AWS-VPC-CNI.

The power of Kyverno in the process of deploying network policies

What is Kyverno?

Kyverno is an open-source Kubernetes-native policy management tool. Kyverno enables users to define and enforce policies for validating, mutating, and generating Kubernetes resources, providing a way to implement policy-as-code within a Kubernetes cluster.

What is interesting here is Kyverno's generate rule capability. According to the Kyverno documentation, a generate rule can be used to create new Kubernetes resources in response to some other event, including resource creation, update, or deletion, or even the creation or update of the policy itself.

This is useful to create supporting resources, such as new RoleBindings or NetworkPolicies for a Namespace, or perform other automation tasks that may either require other tools or be scripted.

Demo

Taking the previous problem, I would like to keep track of my policy that limits access to the instance metadata endpoint. I could use Kyverno to watch for the creation of new namespaces and immediately apply my network policy whenever a namespace is created.

The following policy can be helpful for this use case:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: netpol-deny-metadata-access
spec:
  webhookTimeoutSeconds: 5
  generateExisting: true
  rules:
  - name: deny-metadata-access
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: deny-metadata-access
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        spec:
          podSelector: {}
          egress:
            - to:
                - ipBlock:
                    cidr: 0.0.0.0/0
                    except:
                      - 169.254.169.254/32
          policyTypes:
          - Egress

Here, a Kyverno ClusterPolicy is created to implement the network policy we wrote above.

The network policy is applied with the option synchronize: true, which tells Kyverno to keep track of the monitored resources (here, namespaces). When synchronize is set to true, the generated resource is kept in sync with its source: if it is modified or deleted, Kyverno restores it.

Also, by default, the policy will not be applied to existing trigger resources when it is installed. This behavior can be configured via the generateExisting attribute. Here, the policy will apply not only to new namespaces but also to namespaces that already exist when the policy is installed.

The policy is automatically applied to each tracked namespace thanks to Kyverno's JMESPath templating capability: namespace: "{{request.object.metadata.name}}" resolves to the name of the namespace that triggered the rule.
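
You can check the whole chain end to end by creating a namespace and watching the NetworkPolicy appear (a quick sanity check, assuming the ClusterPolicy above is installed and Kyverno is running):

# Create a fresh namespace; the Kyverno generate rule should fire immediately
kubectl create namespace netpol-demo

# The generated NetworkPolicy should show up in the new namespace
kubectl get networkpolicy -n netpol-demo

# With synchronize: true, even deleting the generated policy is harmless:
# Kyverno recreates it to keep it in sync with the ClusterPolicy
kubectl delete networkpolicy deny-metadata-access -n netpol-demo
kubectl get networkpolicy -n netpol-demo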

Conclusion

To sum up, Kyverno simplifies network policy management in dynamic Kubernetes clusters. Its declarative approach seamlessly adapts to constant changes, providing SecOps with an agile and efficient solution for enforcing network policies.