It turns out to be a caching issue.
When a region of the SSD is trimmed, that data is lost. However, for some unfathomable reason, Linux keeps the old data in its caches. As a result, the second dd call returns stale data, making it seem as if the TRIM didn't work, regardless of whether it really did or not.
I am not sure whether this is a kernel bug or not. It should not have any ill effects in practice (after all, who usually tries to read from a freshly trimmed region?), but even so it seems like a waste of cache memory to keep it occupied by invalidated data.
Anyway, drop the caches before issuing the dd command:
echo 1 | sudo tee /proc/sys/vm/drop_caches
sudo dd bs=4096 skip=2224384 count=256 if=/dev/mapper/lubuntu--vg-root | hexdump -C
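Alternatively, you can sidestep the page cache entirely by reading with O_DIRECT, so there is nothing to drop in the first place. A sketch using the same offsets as above; iflag=direct is GNU dd's flag for direct I/O (note that with direct I/O the block size must be a multiple of the device's sector size, which 4096 satisfies):

# Same read as above, but bypassing the page cache via O_DIRECT
sudo dd bs=4096 skip=2224384 count=256 iflag=direct if=/dev/mapper/lubuntu--vg-root | hexdump -C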
Then see whether it still returns y.y.y. or not. If it does not, TRIM has been working as it is supposed to.
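Keep in mind that what a trimmed region reads back as is drive-dependent: only if the SSD advertises deterministic zeroes after TRIM are you guaranteed to get zeros back. Assuming the SSD is /dev/sda (adjust to your device), you can check what it advertises:

# Show the drive's TRIM capabilities, e.g. "Deterministic read ZEROs after TRIM"
sudo hdparm -I /dev/sda | grep -i trim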
Also, the issue_discards setting in LVM is not strictly necessary; it only affects LVM's own operations such as lvremove, lvreduce, etc. Filesystem-based TRIM (e.g. from fstrim or the discard mount option) seems to be passed on by LVM unconditionally.
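For reference, if you do want LVM itself to issue discards when it frees space, the setting lives in the devices section of /etc/lvm/lvm.conf:

# /etc/lvm/lvm.conf
devices {
    # Issue discards to the underlying device when LVM
    # frees space (lvremove, lvreduce, etc.)
    issue_discards = 1
}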