2017-09-03 18:23:59 UTC
I have been running a system based on a tmpfs as '/' and with a
read-only /usr for a while now and am rather happy with that setup. I
added "mount.usr" and similar flags to systemd ages ago, so that I
could configure that setup via kernel parameters. That has worked
great so far.
Recently I saw "systemd.volatile" in the documentation (e.g. here:)
and noticed that "mount.usr*" is no longer documented. So I thought
I'd move over to the new way of doing things. The change was pretty
simple to make: I moved from "root=tmpfs rootfstype=tmpfs
rootflags=defaults mount.usr=/device/path mount.usrflags=ro
mount.usrfstype=somefs" over to "systemd.volatile=yes
root=/device/path rootflags=ro rootfstype=somefs". Much simpler,
nice :-)
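For readability, the two command lines side by side (the documented
spellings are "rootfstype=" and "mount.usrfstype="; /device/path and
somefs are placeholders):

```
# old: tmpfs root plus usr assembled via the mount.usr* parameters
root=tmpfs rootfstype=tmpfs rootflags=defaults \
  mount.usr=/device/path mount.usrflags=ro mount.usrfstype=somefs

# new: read-only root made volatile by systemd
systemd.volatile=yes root=/device/path rootflags=ro rootfstype=somefs
```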
The one pitfall I ran into is that I had to add a "usr" folder into
the usr partition for systemd-volatile-root.service to work. The
system boots well and seems to work nicely with this change.
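The layout change I made can be sketched like this (a hypothetical
helper; "push_down_usr" is my name for it, and the mount point is an
assumption, not something systemd prescribes):

```shell
# push_down_usr: move every existing top-level entry of the mounted usr
# partition into a "usr" subfolder, so that systemd-volatile-root.service
# finds <partition>/usr to attach to the volatile root.
push_down_usr() {
    mnt="$1"
    mkdir -p "$mnt/usr"
    for entry in "$mnt"/*; do
        [ "$entry" = "$mnt/usr" ] && continue   # skip the target itself
        mv "$entry" "$mnt/usr/"
    done
}

# e.g. with the partition mounted read-write at /mnt:
# push_down_usr /mnt
```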
But then I discovered one strange problem: I can not ssh into the root
account anymore. "ssh -v" shows that a connection is established, then
ssh checks for key files in /root/.ssh and does not find anything
there. Doing "ls -alF /root/.ssh" as root does list keys there.
Mounting the same usr partition via "mount.usr*" kernel command line
parameters fixes the ssh login again.
The sshd.service file has no hardening options applied that might
explain the behavior.
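A quick way to check whether sshd sees a different /root than the
interactive shell is to compare mount namespaces (a diagnostic sketch,
not something from the report; the pgrep/nsenter invocations are my
assumptions):

```shell
# Compare the mount namespace of the running sshd with the current shell.
# If the two symlink targets differ, sshd is looking at a different
# filesystem view, which would explain the missing keys.
sshd_pid=$(pgrep -o -x sshd 2>/dev/null || echo "$$")  # fall back to self if sshd is absent
readlink /proc/"$sshd_pid"/ns/mnt
readlink /proc/$$/ns/mnt

# To list the keys from inside sshd's namespace (requires root):
# nsenter -t "$sshd_pid" -m ls -alF /root/.ssh
```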
Calling "systemctl daemon-reload" does not change anything (even when
making sure to stop sshd.socket and all SSH processes before doing
so).
My usr partition does not contain anything but a "usr" folder (with
all the necessary data), now that the typical folders found in /usr
have been pushed down into that folder. There is no /root folder on
it.
Any ideas what might be going wrong here?