
Don't forget why you set off

What You Didn't Know About kubectl apply

I usually prefer deploying applications with YAML, and recently I ran into an issue when using kubectl apply. When I use kubectl rollout restart to restart an application, kubectl adds kubectl.kubernetes.io/restartedAt: <current time> to spec.template.metadata.annotations. Later, when I update the YAML file and run kubectl apply, it does not remove kubectl.kubernetes.io/restartedAt: "2022-07-26T11:44:32+08:00" from the annotations.
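This behavior follows from apply's three-way merge between the last-applied configuration, the live object, and the new local file: apply deletes only fields that it previously applied and that have since been removed from the local file. Below is a minimal Python sketch of that rule for a flat annotations map (a simplification for illustration, not kubectl's actual code):

```python
# Simplified sketch of kubectl apply's three-way merge for a flat
# annotations map. NOT kubectl's real implementation: apply deletes
# only keys that were present in the last-applied configuration but
# are gone from the new local config; keys added by other actors
# (such as kubectl rollout restart) are left untouched.

def three_way_merge(last_applied, live, new_config):
    merged = dict(live)
    # Delete keys we previously applied but have removed locally.
    for key in last_applied:
        if key not in new_config:
            merged.pop(key, None)
    # Set or overwrite keys from the new local config.
    merged.update(new_config)
    return merged

last_applied = {"app": "demo"}  # from kubectl.kubernetes.io/last-applied-configuration
live = {"app": "demo",
        "kubectl.kubernetes.io/restartedAt": "2022-07-26T11:44:32+08:00"}  # added by rollout restart
new_config = {"app": "demo", "version": "v2"}  # the updated YAML file

print(three_way_merge(last_applied, live, new_config))
# The restartedAt annotation survives: it never appeared in last_applied,
# so apply has no reason to delete it.
```

Because rollout restart writes the annotation directly to the live object without going through apply, the annotation is invisible to apply's deletion logic.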

My KubeCon China 2023 Summary

First of all, I would like to thank the Karmada community for providing the tickets to KubeCon, where I met core contributors and maintainers of Karmada such as Zhen Chang, Hongcai Ren, and Wei Jiang.

In previous years, attending technology conferences left me with no deep impressions and nothing gained. This time I forced myself to take notes, to deepen my impressions and summarize what I learned.

I am interested in colocation, so the talks I listened to were basically all related to this topic.

Update: all of the KubeCon China 2023 videos have been released. YouTube video list, WeChat subscription account

slides: https://kccncosschn2023.sched.com/?iframe=no

IstioCon China 2023 slides: https://istioconchina2023.sched.com/ and https://github.com/cloudnativeto/academy/tree/master/istiocon-china-2023

Gracefully Changing the DNS Server IP for Nodes in a Kubernetes Cluster Without Impacting Applications

DNS servers are typically stable components of infrastructure and are rarely changed. However, if the IP address of a DNS server needs to be updated, here’s how to change the DNS configuration of Kubernetes nodes.

To change the node’s DNS configuration, follow these steps:

  1. Replace the DNS server IP address directly in the /etc/resolv.conf file on the node.

  2. The step above updates only the /etc/resolv.conf file on the node. It does not update the /etc/resolv.conf inside the NodeLocal DNS and CoreDNS pods.
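Step 1 is a plain text substitution in the node's resolv.conf. A minimal sketch of that transformation (a hypothetical helper; the IP addresses are placeholders):

```python
# Sketch of step 1: swap the nameserver IP in resolv.conf content.
# The IP addresses below are hypothetical placeholders; on a real
# node you would apply this change to /etc/resolv.conf itself.

def replace_nameserver(resolv_conf: str, old_ip: str, new_ip: str) -> str:
    lines = []
    for line in resolv_conf.splitlines():
        if line.strip() == f"nameserver {old_ip}":
            line = f"nameserver {new_ip}"
        lines.append(line)
    return "\n".join(lines)

conf = "nameserver 10.0.0.2\noptions ndots:2"
print(replace_nameserver(conf, "10.0.0.2", "10.0.0.3"))
```

As step 2 notes, the NodeLocal DNS and CoreDNS pods hold their own copies of resolv.conf, captured when the pods were created, so those pods need to be handled separately for the change to take full effect.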

Practice of Karmada for Cluster Resource Synchronization in Disaster Recovery Systems

Karmada is a Kubernetes multi-cluster management system. It allows resources to be distributed across multiple clusters while maintaining the original way of using the API server. It offers features such as cross-cloud multi-cluster management, multi-policy multi-cluster scheduling, cross-cluster fault tolerance for applications, a global unified resource view, multi-cluster service discovery, and Federated Horizontal Pod Autoscaling (FederatedHPA) capabilities. Its design philosophy inherits from Cluster Federation v2 and is currently an open-source CNCF sandbox project. For more details, visit the official website at https://karmada.io/.

Researching How the metadata.generation Value Increases

I used kubebuilder to develop a VPA-related operator. This operator watches all VPA creation, deletion, and update events in the cluster. controller-runtime provides predicates to filter out unnecessary events, and I used predicate.GenerationChangedPredicate to filter out VPA status updates. However, I found that status updates of the VPA (recommended values written by vpa-recommender) still triggered Reconcile.
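One plausible explanation, stated here as an assumption rather than the post's conclusion: when a CRD's status subresource is enabled, the API server bumps metadata.generation only on spec changes; without it, status writes through the main resource also bump generation, so predicate.GenerationChangedPredicate (which passes events whose generation changed) lets them through. A toy model of that interaction:

```python
# Toy model of metadata.generation vs. the status subresource.
# ASSUMPTION for illustration: with the status subresource enabled,
# generation is bumped only for spec changes; without it, any change
# written through the main resource bumps generation.

class Object:
    def __init__(self):
        self.generation = 1
        self.spec = {}
        self.status = {}

def update(obj, spec=None, status=None, status_subresource=True):
    bumped = False
    if spec is not None and spec != obj.spec:
        obj.spec = spec
        bumped = True
    if status is not None and status != obj.status:
        obj.status = status
        if not status_subresource:
            bumped = True  # status writes also bump generation
    if bumped:
        obj.generation += 1

def generation_changed(old_gen, new_gen):
    # Mirrors the idea behind predicate.GenerationChangedPredicate:
    # keep only events where generation differs.
    return old_gen != new_gen

vpa = Object()
old = vpa.generation
update(vpa, status={"recommendation": "200m"}, status_subresource=False)
print(generation_changed(old, vpa.generation))  # True: Reconcile still fires
```

Under this model, a status-only write still changes generation when the subresource is absent, which would explain why the predicate fails to filter the recommender's updates.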

Pods Always Being Scheduled to the Same Node

I encountered a strange phenomenon: job pods generated by Spark are consistently scheduled onto the same node; that is, pods from different jobs all land on one node. This results in an uneven distribution of pods, even though the nodes have no taints and their available resources are similar. The jobs do not set any nodeSelector, nodeAffinity, nodeName, or PodTopologySpread constraints.

Resource Recommendation Algorithms for Crane and VPA

Introduction to VPA

VPA, short for Vertical Pod Autoscaler, is an open-source implementation based on the Google paper Autopilot: Workload Autoscaling at Google. It recommends container resource requests based on historical monitoring data from the containers within pods. In other words, VPA scales by directly modifying the resource requests (and limits, if configured in the VPA resource) of the pod.
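The core idea of recommending from historical data can be illustrated with a small sketch (not VPA's actual implementation; the half-life and percentile below are illustrative, not VPA's exact defaults): weight usage samples with exponential decay so recent samples count more, then take a high percentile of the weighted distribution as the recommended request.

```python
# Minimal sketch of a VPA-style recommendation: exponentially decayed
# sample weights plus a high percentile. Half-life and percentile are
# illustrative values, not VPA's real defaults.

HALF_LIFE_HOURS = 24.0

def recommend(samples, now_hours, percentile=0.9):
    # samples: list of (timestamp_hours, cpu_usage_cores) pairs
    weighted = []
    for ts, usage in samples:
        weight = 0.5 ** ((now_hours - ts) / HALF_LIFE_HOURS)
        weighted.append((usage, weight))
    weighted.sort()  # ascending by usage
    total = sum(w for _, w in weighted)
    acc = 0.0
    for usage, w in weighted:
        acc += w
        if acc >= percentile * total:
            return usage
    return weighted[-1][0]

samples = [(0, 0.2), (12, 0.5), (24, 0.3), (36, 0.8)]
print(recommend(samples, now_hours=48))  # 0.8: the recent high sample dominates
```

Decay keeps the recommendation responsive to recent load, while the high percentile keeps it conservative enough to cover most observed usage.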

Key Benefits:

  1. Increases node resource utilization.
  2. Suitable for long-running, homogeneous applications.

Limitations: