Kubelet Bug: Leftover Sandboxes - Tracing Why a Sandbox Is Never Cleaned Up

While walking through the kubelet pod-removal flow, I ran into a problem: the sandbox of a pod that no longer existed was never cleaned up.

In the previous article, "Why does 'an error occurred when try to find container' appear in the kubelet log", I analyzed the pod-removal flow: the garbageCollector in the kubelet is supposed to clean up the sandboxes of exited pods, yet in practice this sandbox was not removed.

Why did the kubelet pod-removal flow not remove this sandbox? In other words, why did the garbageCollector not clean up the exited sandbox?

By analyzing the kubelet logs together with the code, this article walks through the entire pod-removal flow and pinpoints the root cause of this behavior.

This post is part of a series of articles on the pod-removal flow.

The Kubernetes version in this article is 1.23.10, the runtime is containerd, and the kubelet log level is 4.

A NotReady sandbox had existed for two months, with no associated containers:

# crictl  pods |grep nginx-deployment-bd4476b48-fpgvc
cf8d3a590085c       2 months ago        NotReady            nginx-deployment-bd4476b48-fpgvc                    default             0                   (default)
# crictl  ps |grep nginx-deployment-bd4476b48-fpgvc

# crictl inspectp cf8d3a590085c
{
  "status": {
    "id": "cf8d3a590085cdadf382b354e6475e17501a56f7fe0996218831a2dd03109ab1",
    "metadata": {
      "attempt": 0,
      "name": "nginx-deployment-bd4476b48-fpgvc",
      "namespace": "default",
      "uid": "95d6b80b-77f5-4218-824e-69eec4998c22"
    },
    "state": "SANDBOX_NOTREADY",
    ....
    "info": {
    "pid": 0,
    "processStatus": "deleted",
    "netNamespaceClosed": true,

NotReady means this sandbox has already exited.
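As an aside, a sandbox stuck in this state can be removed by hand with crictl. This is a manual workaround I am noting here, separate from the kubelet flow analyzed below:

# stop the sandbox (a no-op if it has already exited), then remove it
# crictl stopp cf8d3a590085c
# crictl rmp cf8d3a590085c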

The kubelet had been running for three months, i.e. it was never restarted during this pod's lifecycle.

# systemctl status kubelet
 kubelet.service - Kubernetes Kubelet
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-08-21 15:34:07 CST; 3 months 7 days ago

Combing through the kubelet logs (log level 4 here), I found this line appearing over and over:

I1124 05:49:00.241725  190330 pod_workers.go:1251] "Pod worker has been requested for removal but is still not fully terminated" podUID=95d6b80b-77f5-4218-824e-69eec4998c22

Fortunately the kubelet logs from the time the pod was removed still existed, so I started the investigation from the pod-removal process.

The complete logs are available at kubelet log.

podConfig observes that the pod has been deleted, and the podWorker starts processing:

I0919 11:11:20.322601  190330 kubelet.go:2130] "SyncLoop DELETE" source="api" pods=[default/nginx-deployment-bd4476b48-fpgvc]
I0919 11:11:20.322631  190330 pod_workers.go:625] "Pod is marked for graceful deletion, begin teardown" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:20.322658  190330 pod_workers.go:888] "Processing pod event" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=1
I0919 11:11:20.322689  190330 pod_workers.go:1005] "Pod worker has observed request to terminate" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:20.322700  190330 kubelet.go:1795] "syncTerminatingPod enter" pod="default/nginx-deployment-bd4476b48-fpgvc" 

The container is stopped successfully:

I0919 11:11:20.322893  190330 kuberuntime_container.go:719] "Killing container with a grace period override" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 containerName="nginx" containerID="containerd://24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6" gracePeriod=30
I0919 11:11:20.322907  190330 kuberuntime_container.go:723] "Killing container with a grace period" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 containerName="nginx" containerID="containerd://24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6" gracePeriod=30
I0919 11:11:20.323279  190330 event.go:294] "Event occurred" object="default/nginx-deployment-bd4476b48-fpgvc" kind="Pod" apiVersion="v1" type="Normal" reason="Killing" message="Stopping container nginx"
I0919 11:11:20.334936  190330 status_manager.go:685] "Patch status for pod" pod="default/nginx-deployment-bd4476b48-fpgvc" patch="{\"metadata\":{\"uid\":\"95d6b80b-77f5-4218-824e-69eec4998c22\"}}"
I0919 11:11:20.334951  190330 status_manager.go:692] "Status for pod is up-to-date" pod="default/nginx-deployment-bd4476b48-fpgvc" statusVersion=3
I0919 11:11:20.334961  190330 kubelet_pods.go:932] "Pod is terminated, but some containers are still running" pod="default/nginx-deployment-bd4476b48-fpgvc"
I0919 11:11:20.426026  190330 kuberuntime_container.go:732] "Container exited normally" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 containerName="nginx" containerID="containerd://24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6"

PLEG observes the container exit:

I0919 11:11:20.515262  190330 kubelet.go:2152] "SyncLoop (PLEG): event for pod" pod="default/nginx-deployment-bd4476b48-fpgvc" event=&{ID:95d6b80b-77f5-4218-824e-69eec4998c22 Type:ContainerDied Data:24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6}

The podWorker finishes syncTerminatingPod, and the podWorker enters the terminated state:

I0919 11:11:20.706809  190330 kubelet.go:1873] "Pod termination stopped all running containers" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:20.706823  190330 kubelet.go:1875] "syncTerminatingPod exit" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:20.706837  190330 pod_workers.go:1050] "Pod terminated all containers successfully" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:20.706857  190330 pod_workers.go:988] "Processing pod event done" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=1
I0919 11:11:20.706866  190330 pod_workers.go:888] "Processing pod event" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=2

A syncLoop SYNC fires (unlike the pod removal in the previous article, there is an extra syncLoop SYNC here):

I0919 11:11:21.232081  190330 kubelet.go:2171] "SyncLoop (SYNC) pods" total=1 pods=[default/nginx-deployment-bd4476b48-fpgvc]

PLEG observes the sandbox exit:

I0919 11:11:21.519225  190330 kubelet.go:2152] "SyncLoop (PLEG): event for pod" pod="default/nginx-deployment-bd4476b48-fpgvc" event=&{ID:95d6b80b-77f5-4218-824e-69eec4998c22 Type:ContainerDied Data:cf8d3a590085cdadf382b354e6475e17501a56f7fe0996218831a2dd03109ab1}

The podWorker starts executing syncTerminatedPod:

I0919 11:11:21.519247  190330 kubelet.go:1883] "syncTerminatedPod enter" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22

The container is removed:

I0919 11:11:21.519263  190330 kuberuntime_container.go:947] "Removing container" containerID="24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6"
I0919 11:11:21.519272  190330 scope.go:110] "RemoveContainer" containerID="24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6"

The podWorker finishes syncTerminatedPod, and the podWorker's state transitions to finished:

I0919 11:11:21.525367  190330 kubelet.go:1924] "syncTerminatedPod exit" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:21.525378  190330 pod_workers.go:1105] "Pod is complete and the worker can now stop" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:21.525395  190330 pod_workers.go:959] "Processing pod event done" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=2

The pod's status update is observed:

I0919 11:11:21.534874  190330 kubelet.go:2127] "SyncLoop RECONCILE" source="api" pods=[default/nginx-deployment-bd4476b48-fpgvc]

The pod receives another DELETE; since the podWorker is in the finished state, nothing is done:

I0919 11:11:21.546521  190330 kubelet.go:2130] "SyncLoop DELETE" source="api" pods=[default/nginx-deployment-bd4476b48-fpgvc]
I0919 11:11:21.546554  190330 pod_workers.go:611] "Pod is finished processing, no further updates" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22

The pod is removed from the apiserver; since the podWorker is in the finished state, nothing is done:

I0919 11:11:21.551243  190330 kubelet.go:2124] "SyncLoop REMOVE" source="api" pods=[default/nginx-deployment-bd4476b48-fpgvc]
I0919 11:11:21.551265  190330 kubelet.go:1969] "Pod has been deleted and must be killed" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:21.551286  190330 pod_workers.go:611] "Pod is finished processing, no further updates" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22

Housekeeping fires; the podWorker is removed, and then a new podWorker is created:

I0919 11:11:22.230603  190330 kubelet.go:2202] "SyncLoop (housekeeping)"
I0919 11:11:22.237366  190330 kubelet_pods.go:1082] "Clean up pod workers for terminated pods"
I0919 11:11:22.237397  190330 pod_workers.go:1258] "Pod has been terminated and is no longer known to the kubelet, remove all history" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237414  190330 kubelet_pods.go:1111] "Clean up probes for terminated pods"
I0919 11:11:22.237425  190330 kubelet_pods.go:1134] "Clean up orphaned pod containers" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237438  190330 pod_workers.go:571] "Pod is being synced for the first time" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237448  190330 pod_workers.go:620] "Pod is orphaned and must be torn down" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237466  190330 kubelet_pods.go:1148] "Clean up orphaned pod statuses"

(*Kubelet).HandlePodCleanups, executed during housekeeping, hits an error: the call runningRuntimePods, err = kl.containerRuntime.GetPods(false) fails:

E0919 11:11:22.237513  190330 remote_runtime.go:365] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection closed" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
E0919 11:11:22.237546  190330 kuberuntime_sandbox.go:292] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection closed"
E0919 11:11:22.237556  190330 kubelet_pods.go:1156] "Error listing containers" err="rpc error: code = Unavailable desc = connection closed"
E0919 11:11:22.237567  190330 kubelet.go:2204] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection closed"
I0919 11:11:22.237573  190330 pod_workers.go:888] "Processing pod event" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=1

The podWorker starts executing syncTerminatingPod, and then housekeeping ends; note that the podWorker and housekeeping run in two different goroutines:

I0919 11:11:22.237573  190330 pod_workers.go:888] "Processing pod event" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=1
I0919 11:11:22.237575  190330 kubelet.go:2210] "SyncLoop (housekeeping) end"
I0919 11:11:22.237611  190330 pod_workers.go:1005] "Pod worker has observed request to terminate" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237619  190330 kubelet.go:1795] "syncTerminatingPod enter"

Inside syncTerminatingPod, stopping the container fails:

I0919 11:11:22.237686  190330 kuberuntime_container.go:723] "Killing container with a grace period" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 containerName="nginx" containerID="containerd://24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6" gracePeriod=1
E0919 11:11:22.237712  190330 remote_runtime.go:479] "StopContainer from runtime service failed" err="rpc error: code = Unavailable desc = connection closed" containerID="24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6"
E0919 11:11:22.237752  190330 kuberuntime_container.go:728] "Container termination failed with gracePeriod" err="rpc error: code = Unavailable desc = connection closed" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 containerName="nginx" containerID="containerd://24bee860a677b045e22fb764067cee0dbddeaeb2ac68ccd229b26418d24cf2e6" gracePeriod=1

The podWorker's sync attempt completes, but because it hit an error, the podWorker's state does not transition to terminated; it stays terminating:

I0919 11:11:22.237854  190330 kubelet.go:1812] "syncTerminatingPod exit" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
E0919 11:11:22.237866  190330 pod_workers.go:951] "Error syncing pod, skipping" err="[failed to \"KillContainer\" for \"nginx\" with KillContainerError: \"rpc error: code = Unavailable desc = connection closed\", failed to \"KillPodSandbox\" for \"95d6b80b-77f5-4218-824e-69eec4998c22\" with KillPodSandboxError: \"rpc error: code = Unavailable desc = connection closed\"]" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I0919 11:11:22.237880  190330 pod_workers.go:988] "Processing pod event done" pod="default/nginx-deployment-bd4476b48-fpgvc" podUID=95d6b80b-77f5-4218-824e-69eec4998c22 updateType=1

Housekeeping fires again, but this time the podWorker is not removed:

I1124 05:49:00.230558  190330 kubelet.go:2202] "SyncLoop (housekeeping)"
I1124 05:49:00.241674  190330 kubelet_pods.go:1082] "Clean up pod workers for terminated pods"
I1124 05:49:00.241725  190330 pod_workers.go:1251] "Pod worker has been requested for removal but is still not fully terminated" podUID=95d6b80b-77f5-4218-824e-69eec4998c22
I1124 05:49:00.241772  190330 kubelet_pods.go:1111] "Clean up probes for terminated pods"
I1124 05:49:00.241796  190330 kubelet_pods.go:1148] "Clean up orphaned pod statuses"
I1124 05:49:00.244039  190330 kubelet_pods.go:1167] "Clean up orphaned pod directories"
I1124 05:49:00.244247  190330 kubelet_pods.go:1178] "Clean up orphaned mirror pods"
I1124 05:49:00.244258  190330 kubelet_pods.go:1185] "Clean up orphaned pod cgroups"
I1124 05:49:00.244278  190330 kubelet.go:2210] "SyncLoop (housekeeping) end"

After analyzing the logs, several questions emerge:

  1. Why is the podWorker not removed in the subsequent housekeeping runs?
  2. Why does ListPodSandbox fail during housekeeping?
  3. Why is the sandbox not removed?

When housekeeping fires, the following call chain runs; when a pod has already been removed but still has a podWorker, removeTerminatedWorker is executed:

housekeeping--> (*kubelet).HandlePodCleanups--> (*podWorkers).SyncKnownPods--> (*podWorkers).removeTerminatedWorker
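For reference, SyncKnownPods iterates over all known pod sync statuses and invokes removeTerminatedWorker for any pod that is no longer desired. A simplified sketch of that loop (paraphrased from the 1.23 source and slightly condensed, not verbatim):

func (p *podWorkers) SyncKnownPods(desiredPods []*v1.Pod) map[types.UID]PodWorkerState {
	workers := make(map[types.UID]PodWorkerState)
	known := make(map[types.UID]struct{})
	for _, pod := range desiredPods {
		known[pod.UID] = struct{}{}
	}

	p.podLock.Lock()
	defer p.podLock.Unlock()
	p.podsSynced = true
	for uid, status := range p.podSyncStatuses {
		// the pod is no longer in config (or was recreated): try to drop its
		// worker; removeTerminatedWorker below bails out unless the worker
		// has finished
		if _, exists := known[uid]; !exists || status.restartRequested {
			p.removeTerminatedWorker(uid)
		}
		// report each remaining worker's state back to HandlePodCleanups
		switch {
		case !status.terminatedAt.IsZero():
			workers[uid] = TerminatedPod
		case !status.terminatingAt.IsZero():
			workers[uid] = TerminatingPod
		default:
			workers[uid] = SyncPod
		}
	}
	return workers
}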

When the podWorker is not in the finished state, removeTerminatedWorker returns immediately without running the removal logic:

func (p *podWorkers) removeTerminatedWorker(uid types.UID) {
	status, ok := p.podSyncStatuses[uid]
	// return immediately if the uid is not in p.podSyncStatuses
	if !ok {
		// already forgotten, or forgotten too early
		klog.V(4).InfoS("Pod worker has been requested for removal but is not a known pod", "podUID", uid)
		return
	}

	// the pod worker has not finished yet (finished means the podWorker should be removed)
	if !status.finished {
		klog.V(4).InfoS("Pod worker has been requested for removal but is still not fully terminated", "podUID", uid)
		return
	}

	if status.restartRequested {
		klog.V(4).InfoS("Pod has been terminated but another pod with the same UID was created, remove history to allow restart", "podUID", uid)
	} else {
		klog.V(4).InfoS("Pod has been terminated and is no longer known to the kubelet, remove all history", "podUID", uid)
	}
	// delete the uid from p.podSyncStatuses
	delete(p.podSyncStatuses, uid)
	// close the channel for this uid in p.podUpdates, remove the uid from p.podUpdates,
	// and remove the uid from p.lastUndeliveredWorkUpdate
	p.cleanupPodUpdates(uid)

	if p.startedStaticPodsByFullname[status.fullname] == uid {
		delete(p.startedStaticPodsByFullname, status.fullname)
	}
}

The relevant code is here:

housekeeping

HandlePodCleanups

SyncKnownPods

removeTerminatedWorker

After checking containerd's logs and status, I found that containerd kept panicking, after which systemd would restart it:

Nov 11 09:13:01  containerd[385194]: fatal error: concurrent map writes
Nov 11 09:13:01  containerd[385194]: goroutine 12181 [running]:
Nov 11 09:13:01  containerd[385194]: runtime.throw({0x56512ed01267?, 0x0?})
Nov 11 09:13:01  containerd[385194]:         /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc0024fe688 sp=0xc0024fe658 pc=0x56512da81cf1
Nov 11 09:13:01  containerd[385194]: runtime.mapassign(0x56512f1282e0?, 0xc000691ec0?, 0x40?)
Nov 11 09:13:01  containerd[385194]:         /usr/local/go/src/runtime/map.go:595 +0x4d6 fp=0xc0024fe708 sp=0xc0024fe688 pc=0x56512da59e16
Nov 11 09:13:01  containerd[385194]: github.com/containerd/containerd/pkg/cri/store/sandbox.(*Store).UpdateContainerStats(0xc000691f50, {0xc0004a8e80?, 0xc000f6e140?}, 0xc000da3840)
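"fatal error: concurrent map writes" is the Go runtime detecting unsynchronized writes to a map; it is not a recoverable panic, so the whole containerd process dies. A minimal standalone reproduction (my own illustration, unrelated to containerd's actual code):

package main

import "sync"

func main() {
	stats := map[string]int{} // shared map, no lock
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				// two goroutines write concurrently: the runtime throws
				// "fatal error: concurrent map writes" and kills the process
				stats["sandbox"] = j
			}
		}()
	}
	wg.Wait()
}

The conventional fix is to guard the map with a sync.Mutex, or to use sync.Map.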

The containerd version is 1.6.15:

# dpkg -l |grep containerd
ii  containerd.io                               1.6.15-1                                amd64        An open and reliable container runtime

So the ListPodSandbox call failed because containerd crashed and restarted.

This containerd bug was fixed in version 1.6.22 by PR8819, so I upgraded containerd to 1.6.25.

The garbageCollector calls (*containerGC).evictSandboxes to reclaim sandboxes. In the code below, evictNonDeletedPods is false, so a sandbox is reclaimed only if cgc.podStateProvider.ShouldPodContentBeRemoved(podUID) returns true.

pkg/kubelet/kuberuntime/kuberuntime_gc.go#L313-L323

	for podUID, sandboxes := range sandboxesByPod {
		if cgc.podStateProvider.ShouldPodContentBeRemoved(podUID) || (evictNonDeletedPods && cgc.podStateProvider.ShouldPodRuntimeBeRemoved(podUID)) {
			// Remove all evictable sandboxes if the pod has been removed.
			// Note that the latest dead sandbox is also removed if there is
			// already an active one.
			cgc.removeOldestNSandboxes(sandboxes, len(sandboxes))
		} else {
			// Keep latest one if the pod still exists.
			cgc.removeOldestNSandboxes(sandboxes, len(sandboxes)-1)
		}
	}

ShouldPodContentBeRemoved is implemented by podWorkers. Since this pod's podWorker still exists, it returns true only when the pod has been evicted, or when the pod has been deleted and the podWorker is in the terminated state. This pod's podWorker is in the terminating state, so it returns false and the sandbox is never reclaimed.

pkg/kubelet/pod_workers.go#L522-L533

// If the uid is in p.podSyncStatuses: return true when the pod has been evicted, or when
// "the pod has been deleted and is in the terminated state"; otherwise return false.
// Otherwise: return true once "at least one pod worker has run syncPod, i.e. pods have
// been started via UpdatePod()"; otherwise return false.
func (p *podWorkers) ShouldPodContentBeRemoved(uid types.UID) bool {
	p.podLock.Lock()
	defer p.podLock.Unlock()
	if status, ok := p.podSyncStatuses[uid]; ok {
		return status.IsEvicted() || (status.IsDeleted() && status.IsTerminated())
	}
	// a pod that hasn't been sent to the pod worker yet should have no content on disk once we have
	// synced all content.
	return p.podsSynced
}
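Putting the two checks together: the podWorker is removed only once finished is set, and the sandbox is reclaimed only once the pod is both deleted and terminated. A self-contained toy model (my own simplification with made-up types, not kubelet's real API) of how a failed terminating sync deadlocks both checks:

package main

import "fmt"

// toy status holding only the fields relevant to the two checks above
type syncStatus struct {
	deleted    bool // the pod object has been removed from the apiserver
	terminated bool // set only when syncTerminatingPod succeeds
	finished   bool // set only after syncTerminatedPod completes
}

// mirrors removeTerminatedWorker's early return on !finished
func removeWorker(statuses map[string]*syncStatus, uid string) {
	if s, ok := statuses[uid]; ok && !s.finished {
		fmt.Println("requested for removal but still not fully terminated:", uid)
		return
	}
	delete(statuses, uid)
}

// mirrors ShouldPodContentBeRemoved (the eviction case is omitted)
func shouldContentBeRemoved(s *syncStatus) bool {
	return s.deleted && s.terminated
}

func main() {
	statuses := map[string]*syncStatus{}

	// housekeeping recreated a worker for the orphaned pod, and its
	// syncTerminatingPod failed because containerd was restarting, so
	// neither terminated nor finished was ever set
	statuses["95d6b80b"] = &syncStatus{deleted: true}

	removeWorker(statuses, "95d6b80b") // no-op: the worker is not finished
	fmt.Println("GC reclaims sandbox:", shouldContentBeRemoved(statuses["95d6b80b"])) // false, forever
}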

After finding this, I searched the commit history of pkg/kubelet/pod_workers.go in the code repository and discovered that this bug had already been fixed by Clayton Coleman in this commit, i.e. the problem is fixed in 1.27. A Google search also turned up the related issue 107730 that I had missed.

To solve the problem that a pod's terminationGracePeriodSeconds could not be set to 0, Clayton Coleman refactored the podWorker in this commit in version 1.22. The podWorker is extremely complex, with a great many edge cases, which is how this bug was introduced. To fix it, the podWorker was refactored again in a smaller way, which in turn introduced some minor issues, so the subsequent commits are all follow-up fixes.

The source of the problem is (*Kubelet).HandlePodCleanups(), which runs when housekeeping fires. HandlePodCleanups fetches the pod list from a cache; some pods in that list had already stopped and should never have gone through the "Clean up orphaned pod containers" step, yet they did, which recreated their podWorkers.

The other precondition is that the runtime call to stop the container fails, leaving the podWorker in the terminating state (in which removeTerminatedWorker will not remove it).

In short, only when housekeeping reads stale data from the cache and the runtime call to stop the container fails does an exited sandbox escape the garbageCollector.

Conversely, if the runningRuntimePods list were fetched from the runtime in real time, the "Clean up orphaned pod containers" logic would not run, the exited pod would have no podWorker, and the sandbox could be removed by the garbageCollector. That is why this code was later changed to fetch the list of running pods directly from the runtime.

The logic in version 1.23.17:

	// fetch the list of running pods from the runtime cache
	runningRuntimePods, err := kl.runtimeCache.GetPods()
	if err != nil {
		klog.ErrorS(err, "Error listing containers")
		return err
	}
	for _, runningPod := range runningRuntimePods {
		switch workerState, ok := workingPods[runningPod.ID]; {
		// the running pod is also known to the podWorkers and is in either the
		// SyncPod state or the TerminatingPod state
		case ok && workerState == SyncPod, ok && workerState == TerminatingPod:
			// if the pod worker is already in charge of this pod, we don't need to do anything
			continue
		default:
			// If the pod isn't in the set that should be running and isn't already terminating, terminate
			// now. This termination is aggressive because all known pods should already be in a known state
			// (i.e. a removed static pod should already be terminating), so these are pods that were
			// orphaned due to kubelet restart or bugs. Since housekeeping blocks other config changes, we
			// know that another pod wasn't started in the background so we are safe to terminate the
			// unknown pods.
			// if the running pod is not in kl.podManager (i.e. not in the apiserver), call
			// kl.podWorkers.UpdatePod with UpdateType kubetypes.SyncPodKill so the podWorker
			// goes terminating --> terminated
			if _, ok := allPodsByUID[runningPod.ID]; !ok {
				klog.V(3).InfoS("Clean up orphaned pod containers", "podUID", runningPod.ID)
				one := int64(1)
				kl.podWorkers.UpdatePod(UpdatePodOptions{
					UpdateType: kubetypes.SyncPodKill,
					RunningPod: runningPod,
					KillPodOptions: &KillPodOptions{
						PodTerminationGracePeriodSecondsOverride: &one,
					},
				})
			}
		}
	}

The relevant change in version 1.27:

	// Retrieve the list of running containers from the runtime to perform cleanup.
	// We need the latest state to avoid delaying restarts of static pods that reuse
	// a UID.
	if err := kl.runtimeCache.ForceUpdateIfOlder(ctx, kl.clock.Now()); err != nil {
		klog.ErrorS(err, "Error listing containers")
		return err
	}
	runningRuntimePods, err := kl.runtimeCache.GetPods(ctx)
	if err != nil {
		klog.ErrorS(err, "Error listing containers")
		return err
	}
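ForceUpdateIfOlder refreshes the runtime cache whenever its snapshot predates the given timestamp, so housekeeping no longer acts on stale data. A sketch of that behavior (paraphrased from pkg/kubelet/container/runtime_cache.go, not verbatim):

// refresh the cached pod list if it is older than minExpectedCacheTime
func (r *runtimeCache) ForceUpdateIfOlder(ctx context.Context, minExpectedCacheTime time.Time) error {
	r.Lock()
	defer r.Unlock()
	if r.cacheTime.Before(minExpectedCacheTime) {
		return r.updateCache(ctx) // re-list pods from the container runtime
	}
	return nil
}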

Fixing termination and status pod reporting
