Hello, I’ve been investigating the following crash with cephfs: ··· According to the state of the ceph_inode_info this means that ceph_dir_is_complete_ordered would return true, and the second condition should also be true since ptr_pos is held in r12 and the dir size is 26496. So the dentry being passed should be entry 2953 % 512 = 393 in the cache_ctl.dentries array. Unfortunately my crashdump excludes the page cache pages and I cannot really see what the contents of the dentries array are.
Could you provide any info on how to further debug this?
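For reference, here is a minimal sketch of the arithmetic behind those numbers, assuming 4096-byte pages and 8-byte struct dentry pointers (so 512 cached pointers per page of the readdir cache); the concrete values are the ones quoted above, not anything read out of the crashdump:

    echo $(( 26496 / 8 ))   # 3312 dentry slots implied by the directory size
    echo $(( 2953 / 512 ))  # 5   -> which page of the readdir cache is being walked
    echo $(( 2953 % 512 ))  # 393 -> the slot in cache_ctl.dentries within that page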
The poster hit this crash while using cephfs, during a readdir operation.
Yan, Zheng has already fixed the bug:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=af5e5eb574776cdf1b756a27cc437bff257e22fe https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a3d714c33632ef6bfdfaacc74ae6ba297b4c5820
However, the fixes were merged into the Linux kernel 4.6 branch, so as far as mainline releases go, only 4.6 or newer contains them.
Once you run into this problem, the only real remedy is to upgrade the kernel.
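As a quick first check on an affected node (a standard command, not something from the thread), confirm whether the running kernel predates the fix:

    uname -r   # anything older than 4.6 (e.g. 4.4.x) can still hit this crash,
               # unless the distro kernel has backported the two commits above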
Hello all, over the past few weeks I’ve been trying to go through the Quick Ceph Deploy tutorial, and I get stuck at the ceph-deploy osd activate ceph02:/dev/sdc ceph03:/dev/sdc step. It never actually seems to activate the OSD and eventually times out:
[ceph03][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
[ceph03][WARNIN] main_activate: path = /dev/sdc
[ceph03][WARNIN] No data was received after 300 seconds, disconnecting…
The poster could not get the OSD to activate during deployment. With help from others, the cause turned out to be a VLAN that had been created on the switch without jumbo packets being allowed, which is what broke the activation.
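To verify this kind of MTU/jumbo-frame mismatch, the usual checks look roughly like the following (the interface name and peer address are placeholders, not values from the thread):

    ip link show dev eth0             # check the MTU configured on the OSD host
    ping -M do -s 8972 -c 3 10.0.0.1  # ~9000-byte frames with DF set; if these fail
                                      # while small pings work, jumbo frames are being
                                      # dropped somewhere on the path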
Another person ran into a similar problem and solved it by upgrading parted (from the 3.1 package in the CentOS 7 base repository):
rpm -Uhv ftp://195.220.108.108/linux/fedora/linux/updates/22/x86_64/p/parted-3.2-16.fc22.x86_64.rpm
parted is usually not the culprit, so only upgrade it once you have actually traced the failure to it; in practice it is rare for an OSD to fail to activate at all.