Because of the CentOS EOL, we spent last year switching operating systems internally and took the opportunity to migrate from cgroup v1 to cgroup v2. Adapting an older K8s version to cgroup v2, however, surfaced a few problems.
Editor | zouyee
Previously, under cgroup v1, kubelet used -enable_load_reader to expose container cpu load and related monitoring data. Under cgroup v2, however, enabling this flag causes kubelet to panic.
The key log line is:
container.go:422] Could not initialize cpu load reader for "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podXXX.slice": failed to create a netlink based cpuload reader: failed to get netlink family id for task stats: binary.Read: invalid type int32
Note that newer K8s versions hit the same problem.
NOTE: containerd: 1.6.21, K8s: 1.19, kernel: 5.15.0
Technical Background
This section introduces cAdvisor and how kubelet integrates it.
cAdvisor is a powerful container monitoring tool designed for container environments, making it easy to monitor resource usage and analyze performance. It collects, aggregates, processes, and exports information about running containers. cAdvisor supports Docker containers as well as other container runtimes.
Kubelet has built-in cAdvisor support, so users can obtain monitoring metrics for the containers on a node directly from the kubelet component.
K8s 1.19 uses cAdvisor 0.39.3, while the brief walkthrough here is based on version 0.48.1.
Below is the core entry code, annotated for readability; the code lives at /cadvisor/cmd/cadvisor.go. cAdvisor's startup mainly accomplishes the following:
func init() {
optstr := container.AllMetrics.String()
flag.Var(&ignoreMetrics, "disable_metrics", fmt.Sprintf("comma-separated list of `metrics` to be disabled. Options are %s.", optstr))
flag.Var(&enableMetrics, "enable_metrics", fmt.Sprintf("comma-separated list of `metrics` to be enabled. If set, overrides 'disable_metrics'. Options are %s.", optstr))
}
As the code above shows, cAdvisor can selectively enable or disable metrics; AllMetrics consists of the following set:
https://github.com/google/cadvisor/blob/master/container/factory.go#L72
var AllMetrics = MetricSet{
CpuUsageMetrics: struct{}{},
ProcessSchedulerMetrics: struct{}{},
PerCpuUsageMetrics: struct{}{},
MemoryUsageMetrics: struct{}{},
MemoryNumaMetrics: struct{}{},
CpuLoadMetrics: struct{}{},
DiskIOMetrics: struct{}{},
DiskUsageMetrics: struct{}{},
NetworkUsageMetrics: struct{}{},
NetworkTcpUsageMetrics: struct{}{},
NetworkAdvancedTcpUsageMetrics: struct{}{},
NetworkUdpUsageMetrics: struct{}{},
ProcessMetrics: struct{}{},
AppMetrics: struct{}{},
HugetlbUsageMetrics: struct{}{},
PerfMetrics: struct{}{},
ReferencedMemoryMetrics: struct{}{},
CPUTopologyMetrics: struct{}{},
ResctrlMetrics: struct{}{},
CPUSetMetrics: struct{}{},
OOMMetrics: struct{}{},
}
func main() {
...
var includedMetrics container.MetricSet
if len(enableMetrics) > 0 {
includedMetrics = enableMetrics
} else {
includedMetrics = container.AllMetrics.Difference(ignoreMetrics)
}
// The code above resolves which metrics should be enabled
klog.V(1).Infof("enabled metrics: %s", includedMetrics.String())
setMaxProcs()
// Metrics are kept in an in-memory storage backend
memoryStorage, err := NewMemoryStorage()
if err != nil {
klog.Fatalf("Failed to initialize storage driver: %s", err)
}
sysFs := sysfs.NewRealSysFs()
// Core cAdvisor logic; kubelet likewise calls manager.New directly
resourceManager, err := manager.New(memoryStorage, sysFs, manager.HousekeepingConfigFlags, includedMetrics, &collectorHTTPClient, strings.Split(*rawCgroupPrefixWhiteList, ","), strings.Split(*envMetadataWhiteList, ","), *perfEvents, *resctrlInterval)
if err != nil {
klog.Fatalf("Failed to create a manager: %s", err)
}
// Register the externally exposed HTTP handlers.
err = cadvisorhttp.RegisterHandlers(mux, resourceManager, *httpAuthFile, *httpAuthRealm, *httpDigestFile, *httpDigestRealm, *urlBasePrefix)
if err != nil {
klog.Fatalf("Failed to register HTTP handlers: %v", err)
}
// Container label handling; after kubelet 1.28 switched to CRI, this requires kubelet-side changes
containerLabelFunc := metrics.DefaultContainerLabels
if !*storeContainerLabels {
whitelistedLabels := strings.Split(*whitelistedContainerLabels, ",")
// Trim spacing in labels
for i := range whitelistedLabels {
whitelistedLabels[i] = strings.TrimSpace(whitelistedLabels[i])
}
containerLabelFunc = metrics.BaseContainerLabels(whitelistedLabels)
}
...
}
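The enable/disable flag handling above boils down to a set difference. Below is a minimal, self-contained sketch: it models MetricSet as a map keyed by metric name, which mirrors the shape of cAdvisor's map-based MetricSet but is not its exact type.

```go
package main

import "fmt"

// MetricSet is a simplified stand-in for cAdvisor's metric set,
// keyed by metric name.
type MetricSet map[string]struct{}

// Difference returns the metrics in s that are absent from other,
// matching how disable_metrics is subtracted from AllMetrics.
func (s MetricSet) Difference(other MetricSet) MetricSet {
	out := MetricSet{}
	for k := range s {
		if _, ok := other[k]; !ok {
			out[k] = struct{}{}
		}
	}
	return out
}

func main() {
	all := MetricSet{"cpu": {}, "cpuLoad": {}, "disk": {}}
	disabled := MetricSet{"disk": {}}
	included := all.Difference(disabled)
	fmt.Println(len(included)) // prints 2
}
```

When enable_metrics is set it wins outright; otherwise the included set is AllMetrics minus the disabled ones, exactly as in the main() excerpt above.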
Whether the cpu load metric is produced is additionally controlled by the enable_load_reader command-line flag:
https://github.com/google/cadvisor/blob/42bb3d13a0cf9ab80c880a16c4ebb4f36e51b0c9/manager/container.go#L455
if *enableLoadReader {
// Create cpu load reader.
loadReader, err := cpuload.New()
if err != nil {
klog.Warningf("Could not initialize cpu load reader for %q: %s", ref.Name, err)
} else {
cont.loadReader = loadReader
}
}
In Kubernetes, Google's cAdvisor project collects container resource and performance metrics on each node. Inside the kubelet server, cAdvisor monitors all containers under the kubepods cgroup (the default cgroup name; under the systemd driver it carries a .slice suffix). As of 1.29.0-alpha.2, kubelet still offers the following two stats providers (though useLegacyCadvisorStats is now false):
if kubeDeps.useLegacyCadvisorStats {
klet.StatsProvider = stats.NewCadvisorStatsProvider(
klet.cadvisor,
klet.resourceAnalyzer,
klet.podManager,
klet.runtimeCache,
klet.containerRuntime,
klet.statusManager,
hostStatsProvider)
} else {
klet.StatsProvider = stats.NewCRIStatsProvider(
klet.cadvisor,
klet.resourceAnalyzer,
klet.podManager,
klet.runtimeCache,
kubeDeps.RemoteRuntimeService,
kubeDeps.RemoteImageService,
hostStatsProvider,
utilfeature.DefaultFeatureGate.Enabled(features.PodAndContainerStatsFromCRI))
}
kubelet exposes all relevant runtime metrics in Prometheus format under /stats/. As the figure below shows, the cAdvisor service is embedded in the kubelet.
Finally, we can see how the cAdvisor component is initialized inside kubelet:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cadvisor/cadvisor_linux.go#L80
func New(imageFsInfoProvider ImageFsInfoProvider, rootPath string, cgroupRoots []string, usingLegacyStats, localStorageCapacityIsolation bool) (Interface, error) {
sysFs := sysfs.NewRealSysFs()
// These are the metric types kubelet exposes by default
includedMetrics := cadvisormetrics.MetricSet{
...
cadvisormetrics.CpuLoadMetrics: struct{}{},
...
}
// Create the cAdvisor container manager.
m, err := manager.New(memory.New(statsCacheDuration, nil), sysFs, housekeepingConfig, includedMetrics, http.DefaultClient, cgroupRoots, nil /* containerEnvMetadataWhiteList */, "" /* perfEventsFile */, time.Duration(0) /*resctrlInterval*/)
...
This calls cAdvisor's manager.New directly; for more detail, see: 揭开K8s适配CgroupV2内存虚高的迷局.
CPU usage reflects how busy the CPU currently is, while the load average is the number of processes either occupying CPU time or waiting for it over a period; "waiting" here means processes waiting to be woken to run, excluding processes in a wait state.
When diagnosing a machine, combine CPU usage, load average, and task states: for example, low CPU usage with a high load often indicates an IO bottleneck. We will not go deeper into that here.
The metric cAdvisor exposes for this is
container_cpu_load_average_10s
Let's look at how it is computed:
https://github.com/google/cadvisor/blob/master/manager/container.go#L632
// Calculate new smoothed load average using the new sample of runnable threads.
// The decay used ensures that the load will stabilize on a new constant value within
// 10 seconds.
func (cd *containerData) updateLoad(newLoad uint64) {
if cd.loadAvg < 0 {
cd.loadAvg = float64(newLoad) // initialize to the first seen sample for faster stabilization.
} else {
cd.loadAvg = cd.loadAvg*cd.loadDecay + float64(newLoad)*(1.0-cd.loadDecay)
}
}
The formula cd.loadAvg = cd.loadAvg*cd.loadDecay + float64(newLoad)*(1.0-cd.loadDecay) takes the previously computed cd.loadAvg, multiplies it by the decay factor cd.loadDecay, then adds the freshly sampled newLoad multiplied by (1.0-cd.loadDecay), yielding the new cd.loadAvg.
where cont.loadDecay is computed as follows:
https://github.com/google/cadvisor/blob/master/manager/container.go#L453
cont.loadDecay = math.Exp(float64(-cont.housekeepingInterval.Seconds() / 10))
This is a constant derived from housekeepingInterval: the decay window.
For a detailed introduction to container cpu load, see the reference links.
Root Cause Analysis
The cd.loadAvg input for the cpu load average is obtained as follows:
https://github.com/google/cadvisor/blob/master/manager/container.go#L650
if cd.loadReader != nil {
// TODO(vmarmol): Cache this path.
path, err := cd.handler.GetCgroupPath("cpu")
if err == nil {
loadStats, err := cd.loadReader.GetCpuLoad(cd.info.Name, path)
if err != nil {
return fmt.Errorf("failed to get load stat for %q - path %q, error %s", cd.info.Name, path, err)
}
stats.TaskStats = loadStats
cd.updateLoad(loadStats.NrRunning)
// convert to 'milliLoad' to avoid floats and preserve precision.
stats.Cpu.LoadAverage = int32(cd.loadAvg * 1000)
}
}
Digging deeper, netlink is used to fetch the system metrics; the key call path is:
updateStats->GetCpuLoad->getLoadStats->prepareCmdMessage->prepareMessage
From this analysis, cAdvisor sends a CGROUPSTATS_CMD_GET request over a netlink socket to obtain the CPU load information:
Lines 128 to 132 on the `v0.48.1` branch:
cadvisor/utils/cpuload/netlink/netlink.go
func prepareCmdMessage(id uint16, cfd uintptr) (msg netlinkMessage) {
buf := bytes.NewBuffer([]byte{})
addAttribute(buf, unix.CGROUPSTATS_CMD_ATTR_FD, uint32(cfd), 4)
return prepareMessage(id, unix.CGROUPSTATS_CMD_GET, buf.Bytes())
}
The kernel ultimately handles this request in cgroupstats_user_cmd:
/* user->kernel request/get-response */
kernel/taskstats.c#L407
static int cgroupstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
{
int rc = 0;
struct sk_buff *rep_skb;
struct cgroupstats *stats;
struct nlattr *na;
size_t size;
u32 fd;
struct fd f;
na = info->attrs[CGROUPSTATS_CMD_ATTR_FD];
if (!na)
return -EINVAL;
fd = nla_get_u32(info->attrs[CGROUPSTATS_CMD_ATTR_FD]);
f = fdget(fd);
if (!f.file)
return 0;
size = nla_total_size(sizeof(struct cgroupstats));
rc = prepare_reply(info, CGROUPSTATS_CMD_NEW, &rep_skb,
size);
if (rc < 0)
goto err;
na = nla_reserve(rep_skb, CGROUPSTATS_TYPE_CGROUP_STATS,
sizeof(struct cgroupstats));
if (na == NULL) {
nlmsg_free(rep_skb);
rc = -EMSGSIZE;
goto err;
}
stats = nla_data(na);
memset(stats, 0, sizeof(*stats));
rc = cgroupstats_build(stats, f.file->f_path.dentry);
if (rc < 0) {
nlmsg_free(rep_skb);
goto err;
}
rc = send_reply(rep_skb, info);
err:
fdput(f);
return rc;
}
and builds the cgroup stats result in the cgroupstats_build function:
kernel/cgroup/cgroup-v1.c#L699
/**
* cgroupstats_build - build and fill cgroupstats
* @stats: cgroupstats to fill information into
* @dentry: A dentry entry belonging to the cgroup for which stats have
* been requested.
*
* Build and fill cgroupstats so that taskstats can export it to user
* space.
*
* Return: %0 on success or a negative errno code on failure
*/
int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry)
{
...
/* it should be kernfs_node belonging to cgroupfs and is a directory */
if (dentry->d_sb->s_type != &cgroup_fs_type || !kn ||
kernfs_type(kn) != KERNFS_DIR)
return -EINVAL; /* this check is where the EINVAL comes from */
Notice that cgroup_fs_type is the cgroup v1 filesystem type; cgroup v2 is not handled at all. As a result, cgroupstats_build fails this path-type check and returns EINVAL.
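Given that the kernel-side check is keyed on the filesystem type, userspace can detect up front whether a path is on the unified (v2) hierarchy and skip the netlink-based reader entirely. A minimal Go sketch, using the CGROUP2_SUPER_MAGIC constant from the kernel headers (this detection helper is mine, not cAdvisor's):

```go
package main

import (
	"fmt"
	"syscall"
)

// cgroup2SuperMagic is CGROUP2_SUPER_MAGIC from the kernel headers.
const cgroup2SuperMagic = 0x63677270

// isCgroup2 reports whether path sits on a cgroup v2 (unified
// hierarchy) mount, by checking the filesystem magic via statfs.
func isCgroup2(path string) (bool, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return false, err
	}
	return st.Type == cgroup2SuperMagic, nil
}

func main() {
	v2, err := isCgroup2("/sys/fs/cgroup")
	if err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	fmt.Println("cgroup v2:", v2)
}
```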
The kernel community has discussed this issue; here is what Tejun Heo (Meta, cgroup v2 maintainer) had to say:
The exclusion of cgroupstats from v2 interface was intentional due to the duplication and inconsistencies with other statistics. If you need these numbers, please justify and add them to the appropriate cgroupfs stat file.
In short: cgroupstats was deliberately excluded from the v2 interface because of its duplication and inconsistencies with other statistics.
Conclusion
So what is his recommendation?
He suggests using PSI rather than fetching CPU statistics through the CGROUPSTATS_CMD_GET netlink API: read directly from the cpu.pressure, memory.pressure, and io.pressure files. We will cover PSI's progress in the container ecosystem in a later article; containerd already supports PSI-based metrics.
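As an illustration of that suggestion, here is a minimal Go sketch that parses the avg10 field from a cgroup v2 pressure file. The file format ("some"/"full" lines with avg10/avg60/avg300/total fields) is the kernel's documented PSI format, but the helper and its name are made up for this example, and are not containerd's or cAdvisor's API.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePressure extracts the avg10 value for the requested kind
// ("some" or "full") from the content of a cgroup v2 PSI file
// such as cpu.pressure.
func parsePressure(content, kind string) (float64, error) {
	for _, line := range strings.Split(strings.TrimSpace(content), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 0 || fields[0] != kind {
			continue
		}
		for _, f := range fields[1:] {
			if v, ok := strings.CutPrefix(f, "avg10="); ok {
				return strconv.ParseFloat(v, 64)
			}
		}
	}
	return 0, fmt.Errorf("kind %q not found", kind)
}

func main() {
	// In a real program this would be read from e.g.
	// /sys/fs/cgroup/kubepods.slice/.../cpu.pressure
	sample := "some avg10=1.25 avg60=0.80 avg300=0.30 total=123456\n" +
		"full avg10=0.40 avg60=0.20 avg300=0.10 total=6543"
	v, err := parsePressure(sample, "some")
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpu some avg10 = %.2f%%\n", v) // cpu some avg10 = 1.25%
}
```

Unlike the netlink path, this works identically on cgroup v1 (where pressure files exist only if PSI is enabled) and v2, with no taskstats involvement.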
Given the author's limited time, perspective, and knowledge, this article inevitably contains mistakes and omissions; corrections and discussion from readers and industry experts are welcome. The troubleshooting output above has been redacted to community-safe content.
References
1. https://github.com/containerd/cgroups/pull/308
2. https://cloud.tencent.com/developer/article/2329489
3. https://github.com/google/cadvisor/issues/3137
4. https://www.cnblogs.com/vinsent/p/15830271.html
5. https://lore.kernel.org/all/20200910055207.87702-1-zhouchengming@bytedance.com/T/#r50c826a171045e42d0b40a552e0d4d1b2a2bab4d