What’s new in Kubernetes 1.21?

Kubernetes is always evolving. With each new iteration, new features and improvements arrive to make container management easier and more flexible.

Kubernetes 1.21 is about to be released, and it comes packed with novelties! Where do we begin?

This release brings 50 enhancements, up from 43 in Kubernetes 1.20 and 34 in Kubernetes 1.19. Of those 50 enhancements, 15 are graduating to Stable, 14 are existing features that keep improving, and a whopping 19 are completely new.

But what are these new enhancements? Let’s dig in and find out.

A New Memory Manager

Your container deployments depend on memory, and they must use it wisely; otherwise, they could wind up draining your cluster of precious resources and your business of money (remember, on cloud-hosted accounts, you pay for what you use).

The Memory Manager is a new feature in the ecosystem that enables guaranteed memory allocation for pods in the Guaranteed QoS class. It offers two allocation strategies:

  • single-NUMA is intended for high-performance and performance-sensitive applications.
  • multi-NUMA overcomes situations that cannot be managed with the single-NUMA strategy (such as when the amount of memory a pod demands exceeds the single-NUMA node capacity).
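The Memory Manager only applies to pods in the Guaranteed QoS class, i.e. pods whose containers set resource requests equal to limits. A minimal sketch of such a pod (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example     # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # illustrative image
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"               # requests == limits ⇒ Guaranteed QoS
        memory: 2Gi
```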

The Memory Manager initializes a Memory Table collection for each NUMA node (and respective memory types), resulting in Memory Map objects. Memory Table and Memory Maps are constructed like so:

type MemoryTable struct {
        TotalMemSize   uint64 `json:"total"`
        SystemReserved uint64 `json:"systemReserved"`
        Allocatable    uint64 `json:"allocatable"`
        Reserved       uint64 `json:"reserved"`
        Free           uint64 `json:"free"`
}

type NodeState struct {
        NumberOfAssignments int                              `json:"numberOfAssignments"`
        MemoryMap           map[v1.ResourceName]*MemoryTable `json:"memoryMap"`
        Nodes               []int                            `json:"nodes"`
}

type NodeMap map[int]*NodeState

A More Flexible Scheduler

One thing the developers of Kubernetes understand is that not every workload is the same. With the release of 1.21, the scheduler receives two new features:
  • Nominated nodes allow cloud native developers to define a preferred node, using the .status.nominatedNodeName field within a Pod. If the scheduler fails to fit an incoming pod onto its preferred node, it will attempt to preempt lower-priority pods to make room.
  • Pod affinity selector allows developers to define pod affinity within a deployment. This ability lets you constrain which nodes your pods are scheduled on, based on the labels of pods already running there.
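As a sketch, a pod that has been nominated for a node carries that node's name in its status (the node name is illustrative):

```yaml
status:
  nominatedNodeName: node-a   # hypothetical node name
```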

Pod affinity is defined like so:

apiVersion: apps/v1
kind: Deployment
 
…
 
spec:
 
…
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: example-label
            operator: In
            values:
            - label-value
        topologyKey: kubernetes.io/hostname
        namespaces: […]
        namespaceSelector:

ReplicaSet Downscaling

For anyone who manages a Kubernetes deployment, autoscaling is probably one of the most crucial features. The one issue that has plagued Kubernetes autoscaling is downscaling after a load spike passes.

With the release of 1.21, there are now two new downscale strategies, which means you will no longer have to manually check when it comes time to downscale a deployment. Those strategies are:

  • Random Pod selection on ReplicaSet downscale — which uses LogarithmicScaleDown to semi-randomly select pods (based on logarithmic bucketing of pod timestamps) to downscale.
  • ReplicaSet deletion cost makes it possible for you to annotate Pods with controller.kubernetes.io/pod-deletion-cost=X (where X is an integer; it defaults to 0). Pods with a lower deletion cost value will be removed first.
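For example, a pod that is cheap to recreate could be marked for early removal like this (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-warmup-pod    # hypothetical name
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-100"   # removed before higher-cost siblings
```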

Indexed Job

With an Indexed Job, the job controller creates each Pod with an associated index (added as an annotation), ranging from 0 to .spec.completions-1. The job controller creates Pods for the lowest indexes that don't already have active or succeeded pods. If there is more than one pod for an index, the controller removes all but one. Active pods that do not have an index are removed, and finished pods that don't have an index won't count towards failures or successes (and are not removed).
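A sketch of an Indexed Job, assuming the batch/v1 Job API with the IndexedJob feature gate enabled (names and image are illustrative); the index annotation is surfaced to the container via the downward API:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-example          # hypothetical name
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed        # each pod gets a batch.kubernetes.io/job-completion-index annotation
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox           # illustrative image
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
        env:
        - name: JOB_COMPLETION_INDEX
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```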

Network Policy Port Ranges

Before Kubernetes 1.21, you had to write a separate rule for each port in a network policy. Now, you can write a single rule that covers an entire range of ports. This means less work and smaller policy files for your deployments. With the NetworkPolicyEndPort feature gate enabled, you can define a range of ports like so:

spec:
  egress:
  - ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

Topology Aware Hints

This new approach to more optimal network routing replaces the topology-aware routing (#536) introduced in Kubernetes 1.17.

The goal is to provide a flexible mechanism for hinting to components, like kube-proxy, so they can be more efficient when routing traffic. The main use case for this feature is to keep service traffic within the same availability zone.

When enabled via the TopologyAwareHints feature gate, you’ll be able to define hints in an EndpointSlice:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
…
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  hostname: pod-1
  zone: zone-a
  hints:
    forZones:
    - name: "zone-a"

EndpointSlice

The new EndpointSlice API will split endpoints into several Endpoint Slice resources. This solves many problems in the current API that are related to big Endpoints objects. This new API is also designed to support other future features, like multiple IPs per pod.

IPv4/IPv6 dual-stack support

This feature summarizes the work done to natively support dual-stack mode in your cluster, so you can assign both IPv4 and IPv6 addresses to a given pod.

Now that it has graduated to Beta, dual-stack is enabled by default.
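Assuming dual-stack networking is configured on the cluster, a Service can request both address families via its spec (the service name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dual-stack-svc              # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # also: SingleStack, RequireDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: example                    # illustrative selector
  ports:
  - port: 80
```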

APIServer defaulted labels for all namespaces

Because namespaces were not guaranteed to carry any identifying labels, there was no reliable way to select a namespace by name with a label selector. This complicated tasks like writing default network policies and other label-driven namespace functionality in the Kubernetes API.

A new immutable label kubernetes.io/metadata.name has been added to all namespaces, whose value is the namespace name. This label can be used with any namespace selector, like in the previously mentioned NetworkPolicy objects.
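For instance, a NetworkPolicy could allow ingress only from a specific namespace by matching the new label (the policy and namespace names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring       # hypothetical name
spec:
  podSelector: {}                   # applies to all pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring   # select the namespace by its name
```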

Conclusion

And there you have it: a few of the features arriving in Kubernetes 1.21 that should get you excited about this new release. To find out more of what's coming (and going) with Kubernetes 1.21, make sure to check out the full changelog.

Also, check out Migrating from Ingress networking.k8s.io/v1beta1 to /v1.

Amit Chaudhary

SRE at Calibo. Helping OpenSource Community. Co-founder hyCorve limited. Certified Checkbox Unchecker. Connecting bare metal to cloud.
