Comment 12 for bug 1626436

Martin Pitt (pitti) wrote:

This could be related to changes in cgroup handling -- creating a cgroup did not emit any uevent on 4.4, but on 4.8 it does:

  sudo systemctl stop systemd-timesyncd
  udevadm monitor -k # in another terminal
  sudo systemctl start systemd-timesyncd

KERNEL[393.260769] add /kernel/slab/:atA-0000192/cgroup/dentry(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.261031] add /kernel/slab/inode_cache/cgroup/inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.261850] add /kernel/slab/shmem_inode_cache/cgroup/shmem_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.262358] add /kernel/slab/:tA-0000192/cgroup/cred_jar(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.262636] add /kernel/slab/proc_inode_cache/cgroup/proc_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.452990] add /kernel/slab/:tA-0001024/cgroup/mm_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453082] add /kernel/slab/:tA-0000200/cgroup/vm_area_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453251] add /kernel/slab/:tA-0000064/cgroup/anon_vma_chain(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453369] add /kernel/slab/anon_vma/cgroup/anon_vma(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.456909] add /kernel/slab/sock_inode_cache/cgroup/sock_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.457974] add /kernel/slab/:t-0000256/cgroup/kmalloc-256(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.458205] add /kernel/slab/:t-0000512/cgroup/kmalloc-512(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.460718] add /kernel/slab/:tA-0003648/cgroup/task_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.462292] add /kernel/slab/:tA-0000128/cgroup/pid(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.465448] add /kernel/slab/:t-0001024/cgroup/kmalloc-1024(1623:systemd-timesyncd.service) (cgroup)
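
For a less noisy view, the cgroup events can be isolated by subsystem -- a minimal sketch, assuming the --subsystem-match option of udevadm monitor is available in this udev version:

  udevadm monitor -k --subsystem-match=cgroup   # show only cgroup uevents, in another terminal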

Each of these uevents causes udev to dispatch a corresponding worker. At boot there is a *lot* of cgroup activity because many services are being started, while at runtime it is relatively quiet. This is consistent with a slow boot but normal behaviour/feeling at runtime.
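
A rough way to quantify this per service is to count the cgroup uevents around a single restart -- a sketch only; the service name, log path and sleep duration are arbitrary examples:

  sudo udevadm monitor -k --subsystem-match=cgroup > /tmp/cgroup-uevents.log &
  sudo systemctl restart systemd-timesyncd
  sleep 2
  sudo pkill -f 'udevadm monitor'
  grep -c '^KERNEL' /tmp/cgroup-uevents.log   # each counted event gets its own udev worker

Multiplying that count by the number of services started at boot gives an idea of how many extra udev workers the 4.8 behaviour creates during early boot.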