This could potentially be related to a change in cgroup behaviour -- creating a cgroup did not cause any uevent in 4.4, but in 4.8 it does:
sudo systemctl stop systemd-timesyncd
udevadm monitor -k # in another terminal
sudo systemctl start systemd-timesyncd
KERNEL[393.260769] add /kernel/slab/:atA-0000192/cgroup/dentry(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.261031] add /kernel/slab/inode_cache/cgroup/inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.261850] add /kernel/slab/shmem_inode_cache/cgroup/shmem_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.262358] add /kernel/slab/:tA-0000192/cgroup/cred_jar(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.262636] add /kernel/slab/proc_inode_cache/cgroup/proc_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.452990] add /kernel/slab/:tA-0001024/cgroup/mm_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453082] add /kernel/slab/:tA-0000200/cgroup/vm_area_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453251] add /kernel/slab/:tA-0000064/cgroup/anon_vma_chain(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.453369] add /kernel/slab/anon_vma/cgroup/anon_vma(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.456909] add /kernel/slab/sock_inode_cache/cgroup/sock_inode_cache(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.457974] add /kernel/slab/:t-0000256/cgroup/kmalloc-256(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.458205] add /kernel/slab/:t-0000512/cgroup/kmalloc-512(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.460718] add /kernel/slab/:tA-0003648/cgroup/task_struct(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.462292] add /kernel/slab/:tA-0000128/cgroup/pid(1623:systemd-timesyncd.service) (cgroup)
KERNEL[393.465448] add /kernel/slab/:t-0001024/cgroup/kmalloc-1024(1623:systemd-timesyncd.service) (cgroup)
This means a corresponding number of udev workers will be running. At boot there is a *lot* of cgroup activity because many services are being started, while runtime is comparatively quiet. That is consistent with a slow boot but normal behaviour/feeling at runtime.
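If that explanation is right, the rate of cgroup-tagged uevents should be far higher during boot than afterwards. A quick sanity check (a sketch, not part of the original report; the 10-second window is an arbitrary choice) is to count how many events udevadm sees in a fixed window, once early in boot and once at idle:

```shell
# Count kernel uevents tagged with the cgroup subsystem over a short
# window. Run once shortly after boot and once at idle and compare;
# a large difference supports the boot-time-flood theory.
# The 10-second window is arbitrary.
timeout 10 udevadm monitor -k | grep -c '(cgroup)'
```

Each matching line corresponds to one event a udev worker has to process, so the count is a rough proxy for the extra load udevd carries while services start.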