Discussion:
Improve boot-time of systemd
f***@gmail.com
2011-03-18 09:35:04 UTC
Hi all,

We did a series of boot-time tests of systemd and found some interesting things:
(Note: the tests were performed on a laptop with a 4-core MIPS CPU, AMD
RS780 chipset, 2 GB of memory, a rotating hard disk with an ext4 filesystem,
Debian squeeze, Linux 2.6.36 with fanotify enabled, and systemd v20, booting
only to the console.)

1. How does readahead affect boot time?
Sadly, we observed a negative effect -- boot time increases by at least 1s.
With bootchart, I see more I/O at boot compared with no readahead;
see the attachment noreadahead-vs-readahead.png.
Thoughts: Maybe we only need to monitor files opened with read access, not
all opened files? P.S. inotify seems sufficient for the job (with one more
step to open each file).
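
A rough sketch of what I mean (error handling trimmed; the watched path is
only an example). Unlike FAN_OPEN, inotify's IN_ACCESS fires only for reads:

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void) {
        char buf[4096];
        int fd = inotify_init1(IN_CLOEXEC);
        if (fd < 0)
                return 1;

        /* Watch a single file as an example; a real collector would add
         * one watch per file (or directory) it is interested in. */
        if (inotify_add_watch(fd, "/etc/ld.so.cache", IN_ACCESS) < 0)
                return 1;

        for (;;) {
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0)
                        break;

                for (char *p = buf; p < buf + n; ) {
                        struct inotify_event *e = (struct inotify_event *) p;
                        /* IN_ACCESS is reported for reads only, so
                         * write-only files never show up here. */
                        if (e->mask & IN_ACCESS)
                                printf("read access on watch %d\n", e->wd);
                        p += sizeof(struct inotify_event) + e->len;
                }
        }
        return 0;
}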

2. udev-settle.service serializes the boot process, see the attachment
udev-settle.png.
I tried to create a hotplug.target (activated after default.target) and
moved udev-settle into it, but this rendered the system unbootable:
systemd depends on udev early on.
Thoughts: devtmpfs is mounted, so all cold-plug jobs could be done
without udev being involved.
IMHO, fast boot doesn't mean getting all services ready in a short time,
but rather popping up a UI as soon as possible. Windows seems to do
hotplug jobs after the user logs in.
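
For reference, what I tried looked roughly like this (reconstructed from
memory, the exact directives may have differed):

/etc/systemd/system/hotplug.target:
[Unit]
Description=Deferred hotplug
After=default.target

plus udev-settle.service re-wired so that it is pulled in only by this
target (WantedBy=hotplug.target in its [Install] section) instead of by the
early-boot targets.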

BTW, bootchart seems not very intuitive (it shows no services, only
processes; also, some processes may be missed if they start and exit
between two successive "ps aux" calls of bootchart). Is it possible to add
a similar feature to systemd?
--
Regards,
- cee1
f***@gmail.com
2011-03-20 05:28:01 UTC
It's ~0.5 sec faster here with readahead on a SSD.
Each run of readahead-replay may cause readahead-collect to record
more blocks to read ahead -- the size of "/.readahead" never shrinks.
I also found that "/.readahead" recorded files like ".xsession-errors",
which are only written and thus should not be read ahead.
Maybe adjusting readahead-collect-done.timer to 5s will help.
The current readahead implementation has some problems:
1. It can't separate *real* block read requests from all read
requests (which include extra blocks read by the kernel's readahead
logic).
2. It just gives advice on how to do the kernel's readahead, which causes
the first read of a file to take more time.

I revisited the "Booting Linux in five seconds" article [1]; AIUI, they
did readahead in a different way:
1. They determine which blocks need to be read ahead via a patch against the kernel.
2. They do the read-ahead (aka replay) by reading each block with the
"idle" I/O scheduler.



Regards,
cee1
-------
[1] http://lwn.net/Articles/299483/ -- Intel's five-second boot: 1s
for the kernel, 1s for early boot, 1s for X, and 2s for the desktop
environment.
Lennart Poettering
2011-03-28 19:53:41 UTC
Post by f***@gmail.com
It's ~0.5 sec faster here with readahead on a SSD.
Each run of readahead-replay may cause readahead-collect to record
more blocks to read ahead -- the size of "/.readahead" never shrinks.
I also found that "/.readahead" recorded files like ".xsession-errors",
which are only written and thus should not be read ahead.
Maybe adjusting readahead-collect-done.timer to 5s will help.
1. It can't separate *real* block read requests from all read
requests (which include extra blocks read by the kernel's readahead
logic).
Shouldn't make a big difference, since on replay we turn off additional
kernel-side readahead.

However, it is true that the file will only ever increase, never
decrease in size.
Post by f***@gmail.com
2. It just gives advice on how to do the kernel's readahead, which causes
the first read of a file to take more time.
Hmm?
Post by f***@gmail.com
I revisited the "Booting Linux in five seconds" article [1]; AIUI, they
1. They determine which blocks need to be read ahead via a patch against the kernel.
Well, the meego readahead implementation uses a kernel patch to store in
each inode struct when it was first read, and then iterates through the
FS hierarchy and reads that value. That is a workable solution if you
plan to run the collector only once at distro build time and on a
limited-size FS, but for a generic distro we basically need to run it on
every boot, which means you end up reiterating through your FS tree at
each boot, and that can take a massive amount of time.

Note that the meego implementation relies on mincore() to determine which
blocks to read ahead, which is precisely what we do. The only difference
is how the list of files to use mincore() on is generated. We use
fanotify (which requires no kernel patch), and they use the inode
timestamp plus FS iteration.
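
For reference, the mincore() step boils down to something like this (a
simplified sketch, not the actual readahead code):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Print which pages of a file are currently in the page cache -- those
 * are the ranges worth recording for readahead on the next boot. */
int main(int argc, char *argv[]) {
        if (argc < 2)
                return 1;

        int fd = open(argv[1], O_RDONLY | O_CLOEXEC);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
                return 1;

        long pgsz = sysconf(_SC_PAGESIZE);
        size_t pages = (st.st_size + pgsz - 1) / pgsz;

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        unsigned char *vec = malloc(pages);
        if (p == MAP_FAILED || !vec)
                return 1;

        if (mincore(p, st.st_size, vec) == 0) {
                for (size_t i = 0; i < pages; i++)
                        if (vec[i] & 1)
                                printf("%s: page %zu (offset %ju) cached\n",
                                       argv[1], i, (uintmax_t) i * pgsz);
        }

        munmap(p, st.st_size);
        free(vec);
        close(fd);
        return 0;
}
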
Post by f***@gmail.com
2. They do the read-ahead (aka replay) by reading each block with the
"idle" I/O scheduler.
We do that too. We use "idle" on SSD, and "realtime" on HDD.
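
For reference, switching the replay process between the two classes is a
single ioprio_set() call; a sketch (there is no glibc wrapper, so it goes
through syscall(); constants as in linux/ioprio.h):

#include <sys/syscall.h>
#include <unistd.h>

/* ioprio_set() has no glibc wrapper; constants as in linux/ioprio.h. */
#define IOPRIO_WHO_PROCESS  1
#define IOPRIO_CLASS_RT     1
#define IOPRIO_CLASS_IDLE   3
#define IOPRIO_CLASS_SHIFT  13
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

/* Put the calling process into the given I/O scheduling class;
 * who == 0 means "the calling process". */
static int set_io_class(int class, int data) {
        return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                       IOPRIO_PRIO_VALUE(class, data));
}

/* e.g. set_io_class(IOPRIO_CLASS_IDLE, 0) before replay on an SSD,
 *      set_io_class(IOPRIO_CLASS_RT, 7)  before replay on a rotating disk. */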

Lennart
--
Lennart Poettering - Red Hat, Inc.
f***@gmail.com
2011-03-29 03:20:45 UTC
Post by Lennart Poettering
Post by f***@gmail.com
1. It can't separate *real* block read requests from all read
requests (which include extra blocks read by the kernel's readahead
logic).
Shouldn't make a big difference, since on replay we turn off additional
kernel-side readahead.
However, it is true that the file will only ever increase, never
decrease in size.
For collect, it can't filter out:
1. Kernel-side readahead, whether that readahead is initiated by the
kernel (when there is no /.readahead data) or by the replay process.
2. Written blocks of files (opened as "r+", "w+", or "a"). The written
blocks already reside in memory at boot time.

IMHO, the kernel lacks an API to report each *real* read request.
E.g., it could be done by tracking each read syscall (mmap seems not easy
to handle, though).
Post by Lennart Poettering
Post by f***@gmail.com
2. It just gives advice on how to do the kernel's readahead, which causes
the first read of a file to take more time.
Hmm?
posix_fadvise(...) may make each read do more readahead (more than the
kernel's own guess), and thus take more time. E.g.:
* Without replay, someone reads part A of file X --> does some work -->
reads part B of file X.
* With replay, both parts A and B of file X are read in one go, so there
is more I/O usage, and other services may spend more time waiting for
I/O. (This can be observed in the bootchart diagram.)

BTW, does posix_fadvise apply globally or just to the process which calls it?
Post by Lennart Poettering
We do that too. We use "idle" on SSD, and "realtime" on HDD.
Why "realtime" on HDD?

BTW, according to my test, the "idle" class is not really *idle*; see the attachment.
That means 'replay' will always impact other processes' I/O. For 'replay'
in the idle I/O class on an HDD, other processes' I/O performance drops by
about half, according to the test.
--
Regards,
- cee1
Lennart Poettering
2011-03-29 15:13:29 UTC
Post by f***@gmail.com
Post by Lennart Poettering
Post by f***@gmail.com
1. It can't separate *real* block read requests from all read
requests (which include extra blocks read by the kernel's readahead
logic).
Shouldn't make a big difference, since on replay we turn off additional
kernel-side readahead.
However, it is true that the file will only ever increase, never
decrease in size.
1. Kernel-side readahead, whether that readahead is initiated by the
kernel (when there is no /.readahead data) or by the replay process.
That is true. But is that really a problem? Usually kernel readahead
should be a useful optimization which shouldn't hurt much. And we will
only apply it once, during the original run. It will not be done again
on replay, since we disable it explicitly then.
Post by f***@gmail.com
2. Written blocks of files (opened as "r+", "w+", or "a"). The written
blocks already reside in memory at boot time.
Actually, now that I am looking into this, it might actually be possible
to distinguish read and write accesses to files by using
FAN_CLOSE_NOWRITE/FAN_CLOSE_WRITE instead of FAN_OPEN. I do wonder,
though, why that isn't symmetric here...
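
Roughly, the collector's mark would then become something like this (a
sketch only, not the actual readahead code):

#include <fcntl.h>
#include <unistd.h>
#include <sys/fanotify.h>

static int collector_fd(void) {
        /* Event fds handed to us will be opened read-only. */
        int fd = fanotify_init(FAN_CLOEXEC | FAN_NONBLOCK, O_RDONLY);
        if (fd < 0)
                return -1;

        /* Mark the whole root mount, but ask only for "closed after
         * read-only access" events; write-only files never show up. */
        if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
                          FAN_CLOSE_NOWRITE, AT_FDCWD, "/") < 0) {
                close(fd);
                return -1;
        }

        return fd;
}
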
Post by f***@gmail.com
IMHO, the kernel lacks an API to report each *real* read request.
E.g., it could be done by tracking each read syscall (mmap seems not easy
to handle, though).
The kernel has quite a number of APIs, for example there is blktrace,
and there are the newer syscall tracing APIs. But fanotify is actually
the most useful of all of them.
Post by f***@gmail.com
Post by Lennart Poettering
Post by f***@gmail.com
2. It just gives advice on how to do the kernel's readahead, which causes
the first read of a file to take more time.
Hmm?
posix_fadvise(...) may make each read do more readahead (more than the
kernel's own guess), and thus take more time. E.g.:
* Without replay, someone reads part A of file X --> does some work -->
reads part B of file X.
* With replay, both parts A and B of file X are read in one go, so there
is more I/O usage, and other services may spend more time waiting for
I/O. (This can be observed in the bootchart diagram.)
The idea of readahead is to load as many IO requests into the kernel as
possible, so that the IO elevator can decide what to read when and
reorder things as it likes and thinks is best.
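
I.e. the replay boils down to queueing hints and letting the kernel sort
out the ordering; a minimal sketch:

#include <fcntl.h>
#include <unistd.h>

/* Queue an asynchronous readahead hint for one recorded range and move
 * on immediately; the I/O elevator decides when, and in what order, the
 * blocks are actually fetched. The pages end up in the shared page
 * cache, so every later reader benefits. */
static void queue_range(const char *path, off_t offset, off_t length) {
        int fd = open(path, O_RDONLY | O_CLOEXEC);
        if (fd < 0)
                return;

        (void) posix_fadvise(fd, offset, length, POSIX_FADV_WILLNEED);
        close(fd);
}
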
Post by f***@gmail.com
BTW, does posix_fadvise apply globally or just to the process which calls it?
The kernel caches each block only once.
Post by f***@gmail.com
Post by Lennart Poettering
We do that too. We use "idle" on SSD, and "realtime" on HDD.
Why "realtime" on HDD?
Because on HDD seeks are very expensive. The idea of readahead is to
rearrange our reads so that no seeks happen, i.e. we read things
linearly in one big chunk. If accesses of other processes are
interleaved with this then disk access will be practically random and
the seeks will hurt.

On SSD seeks are basically free, hence all we do is tell the kernel
early what might be needed later so that it reads it when it has
nothing else to do.
Post by f***@gmail.com
BTW, according to my test, the "idle" class is not really *idle*; see the attachment.
That means 'replay' will always impact other processes' I/O. For 'replay'
in the idle I/O class on an HDD, other processes' I/O performance drops by
about half, according to the test.
That's probably something to fix in the elevator in the kernel?

Lennart
--
Lennart Poettering - Red Hat, Inc.
f***@gmail.com
2011-03-30 03:27:54 UTC
Post by Lennart Poettering
Post by f***@gmail.com
1. It can't separate *real* block read requests from all read
requests (which include extra blocks read by the kernel's readahead
logic).
I guess we need to add the following to systemd-readahead-collect.service:
ConditionPathExists=!/.readahead

Replayed blocks will always be collected again, but when services are
enabled/disabled/installed/removed, read actions relating to the
removed/disabled services needn't be collected.

Each time we enable/disable/install/remove services, /.readahead
should be removed.
Post by Lennart Poettering
The idea of readahead is to load as many IO requests into the kernel as
possible, so that the IO elevator can decide what to read when and
reorder things as it likes and thinks is best.
Well, this actually makes some early services wait for I/O, and then
the whole boot process blocks. See the attachment.
--
Regards,
- cee1
Kay Sievers
2011-03-28 23:43:57 UTC
Post by Kay Sievers
On Fri, Mar 18, 2011 at 14:40, Gustavo Sverzut Barbieri
But it could be improved yes. As you all said, maybe we should handle
udev hotplug in a more throttled way by postponing non-critical
devices and having everything else to be hotplug aware?
That's not really possible; you cannot really make such a list, and you
need to handle all parent devices of all 'interesting' devices
anyway to expose them.
The 'settle' service is only there for broken services. Originally it
wasn't even pulled into the base target but was free-hanging with
nobody getting blocked by it. Lennart pulled it in for a few broken
things and selinux to work, and it ended up blocking the base target
to be on the safe side for non-hotplug aware stuff. We might want to
re-check if that's really what we want.
Udev no longer enables udev-settle.service by default now.
basic.target is no longer blocked by it, and udev's coldplug will run
in the background.

Services that can not cope with today's hotplug world need to
explicitly pull-in udev-settle.service and let it delay their
execution until udev's coldplug run has fully finished.
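
I.e. such a service would carry something like this in its unit file:

[Unit]
Wants=udev-settle.service
After=udev-settle.service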

Alternatively, 'systemctl enable udev-settle.service' will enable it
unconditionally.

Kay
f***@gmail.com
2011-03-29 03:36:50 UTC
Hi Kay,
Post by Kay Sievers
Udev no longer enables udev-settle.service by default now.
basic.target is no longer blocked by it, and udev's coldplug will run
in the background.
To make boot fast, it seems udev's coldplug does too much work -- what I
expect is to coldplug only local block devices and ttys at that stage.
This can save ~2s in my boot test.

Is it possible to support .device unit files?
--
Regards,
- cee1
Lennart Poettering
2011-03-29 14:58:55 UTC
Post by f***@gmail.com
Hi Kay,
Post by Kay Sievers
Udev no longer enables udev-settle.service by default now.
basic.target is no longer blocked by it, and udev's coldplug will run
in the background.
To make boot fast, it seems udev's coldplug does too much work -- what I
expect is to coldplug only local block devices and ttys at that stage.
This can save ~2s in my boot test.
Is it possible to support .device unit files?
Hmm?

Not sure I understand the question, but for a .device unit to show up
in systemd it must be tagged "systemd" in udev, which can only happen
when the device was triggered after udev is started.

Lennart
--
Lennart Poettering - Red Hat, Inc.
f***@gmail.com
2011-03-30 02:28:15 UTC
Post by Lennart Poettering
Post by f***@gmail.com
To make boot fast, it seems udev's coldplug does too much work -- what I
expect is to coldplug only local block devices and ttys at that stage.
This can save ~2s in my boot test.
Is it possible to support .device unit files?
Hmm?
Not sure I understand the question, but for a .device unit to show up
in systemd it must be tagged "systemd" in udev, which can only happen
when the device was triggered after udev is started.
I already knew that currently systemd can only add .device units
from udev. My question was: "Would it be suitable for systemd to add
support for loading .device units from .device unit files in
/lib/systemd/system?"

What I expect is something like:
My_machine.target.wants/dev-sda1.device <or generate from /etc/fstab>
My_machine.target.wants/dev-sda2.device
...
My_machine.target.wants/dev-tty1.device
--
Regards,
- cee1
Lennart Poettering
2011-04-20 00:46:59 UTC
Post by f***@gmail.com
Post by Lennart Poettering
Post by f***@gmail.com
To make boot fast, it seems udev's coldplug does too much work -- what I
expect is to coldplug only local block devices and ttys at that stage.
This can save ~2s in my boot test.
Is it possible to support .device unit files?
Hmm?
Not sure I understand the question, but for a .device unit to show up
in systemd it must be tagged "systemd" in udev, which can only happen
when the device was triggered after udev is started.
I already knew that currently systemd can only add .device units
from udev. My question was: "Would it be suitable for systemd to add
support for loading .device units from .device unit files in
/lib/systemd/system?"
systemd reads .device units just fine from disk. In fact you don't have
to do any kind of configuration for them.

If you do "systemctl start foobar.device" this call will wait for a
device of that name to show up. You can make up any name you want with
this. If such a device never shows up then it might eventually time out
though.
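
So the layout you sketched needs no .device unit files on disk; a target
can simply pull the devices in by name, roughly (untested sketch, device
names taken from your example):

[Unit]
Description=My machine devices
Wants=dev-sda1.device dev-tty1.device
After=dev-sda1.device dev-tty1.device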

Lennart
--
Lennart Poettering - Red Hat, Inc.
Gustavo Sverzut Barbieri
2011-03-22 12:21:54 UTC
Any comments about these 2 points:

On Fri, Mar 18, 2011 at 1:40 PM, Gustavo Sverzut Barbieri
Post by f***@gmail.com
2. udev-settle.service serializes the boot process, see the attachment
udev-settle.png.
I have a feeling that increased parallelism during boot (like starting
fsck/mount as soon as a device becomes available) actually has a negative
effect on consumer-grade devices. The HDD in my notebook simply is not
prepared for it ...
ACK. Maybe we should add some intelligence to systemd's automatic
unit-mount creation and serialize partition mounts of the same device?
For traditional systems it's easy, just make all /dev/sda* depend on
each other, but the world is a bit harder and multiple-device filesystems
like btrfs, or even DM, will screw with that. Ideas? Maybe we could do it
just based on /etc/fstab, sorting dependencies based on /dev/sda* and the
respective mountpoints.
Post by f***@gmail.com
I tried to create a hotplug.target (activated after default.target) and
moved udev-settle into it, but this rendered the system unbootable:
systemd depends on udev early on.
Thoughts: devtmpfs is mounted, so all cold-plug jobs could be done
without udev being involved.
IMHO, fast boot doesn't mean getting all services ready in a short time,
but rather popping up a UI as soon as possible. Windows seems to do
hotplug jobs after the user logs in.
Mandriva uses so-called "speedboot" with sysvinit, where the GUI is
started as soon as possible. It is handcrafted so that only several
device classes are coldplugged, and then the DM is launched effectively
from rc.sysinit already.
Users did mention that boot under systemd actually feels slower than
using sysvinit.
Well, I never tried any distro other than Gentoo on this MacBook, and
here it's kinda fast at 7s to be 100% ready with E17 (I have an
autostart .desktop that writes to /dev/kmsg to measure it): "Startup
finished in 2s 360ms 651us (kernel) + 1s 753ms 783us (userspace) = 4s
114ms 434us."
But it could be improved yes. As you all said, maybe we should handle
udev hotplug in a more throttled way by postponing non-critical
devices and having everything else to be hotplug aware? AFAIK Xorg
will handle new input devices nicely. ConnMan/NetworkManager will
handle network devices nicely. Same for bluez. We could even just
activate these services based on the presence of the devices; at least
E17 will handle daemons appearing later nicely, by means of DBus
NameOwnerChanged.
 1. should we change ConnMan and NetworkManager to work like BlueZ and
be able to be activated/shut down by udev hotplug actions (but
cooperating with systemd; bluetoothd is not, AFAIR);
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
A hackish(?) solution would be to have a BootPriority=(True|False),
set to False by default and True for the services we care about most.
Lower-priority services would be set to "background" priority in IO, CPU
and others, and then reset to their actual values when systemd is
notified. The problem is that we need to notify systemd of that, as it's
not a matter of just starting "gdm", but actually gdm being in a "usable"
state (defined by gdm itself), or the desktop being ready if users use
autologin (like I do). This could also be stated as "system is idle for X
seconds", which would be monitored by systemd itself, and then no
manual notification is required.
--
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--------------------------------------
MSN: ***@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202
f***@gmail.com
2011-03-24 09:19:14 UTC
Hi,
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
I prefer to start UI first, and then UI notifies systemd to start more services.
The original boot sequence seems server-oriented, and I need
desktop-oriented boot sequence -- let most services start AFTER a
usable basic UI popup.
But it could be improved yes. As you all said, maybe we should handle
udev hotplug in a more throttled way by postponing non-critical
devices and having everything else to be hotplug aware?
That's not really possible; you cannot really make such a list, and you
need to handle all parent devices of all 'interesting' devices
anyway to expose them.
Maybe we can add some .device units (this currently doesn't work;
systemd reads device information from udev, and there is no "device
section" support). These units would belong to a fastboot target and
only cover devices of one type.
Device vendors would benefit from such a mechanism.

BTW, I succeeded in booting a udev-less system (modifying ***@.service to
get rid of the dependency on dev-tty*.device). The bootchart diagram (see
attachment) looks good, but there is still a "wait" behind systemd-logger;
any idea?
--
Regards,
- cee1
Andrey Borzenkov
2011-03-24 09:35:45 UTC
Post by f***@gmail.com
Hi,
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
I prefer to start UI first, and then UI notifies systemd to start more services.
The original boot sequence seems server-oriented, and I need
desktop-oriented boot sequence -- let most services start AFTER a
usable basic UI popup.
KDM/GDM get really confused when host name changes after they are
started. And I have seen complaints that displayed host name is wrong.
So it probably should depend at least on network being available.
Gustavo Sverzut Barbieri
2011-03-24 10:20:38 UTC
Post by Andrey Borzenkov
Post by f***@gmail.com
Hi,
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
I prefer to start UI first, and then UI notifies systemd to start more services.
The original boot sequence seems server-oriented, and I need
desktop-oriented boot sequence -- let most services start AFTER a
usable basic UI popup.
KDM/GDM get really confused when host name changes after they are
started. And I have seen complaints that displayed host name is wrong.
So it probably should depend at least on network being available.
That is stupid, as the hostname may change due lots of reasons, maybe
you wifi changed and now you got another home domain from dhcp? What
would they do?
--
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--------------------------------------
MSN: ***@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202
Greg KH
2011-03-24 16:07:32 UTC
Post by Gustavo Sverzut Barbieri
Post by Andrey Borzenkov
Post by f***@gmail.com
Hi,
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
I prefer to start UI first, and then UI notifies systemd to start more services.
The original boot sequence seems server-oriented, and I need
desktop-oriented boot sequence -- let most services start AFTER a
usable basic UI popup.
KDM/GDM get really confused when host name changes after they are
started. And I have seen complaints that displayed host name is wrong.
So it probably should depend at least on network being available.
That is stupid, as the hostname may change due lots of reasons, maybe
you wifi changed and now you got another home domain from dhcp? What
would they do?
They show the old hostname :(
Lennart Poettering
2011-03-28 19:57:18 UTC
Post by Gustavo Sverzut Barbieri
Post by Andrey Borzenkov
Post by f***@gmail.com
Hi,
 2. should we do (or have a way to) force a manual ordering to help
Xorg/DM/WM by avoiding spawn of concurrent services? We know these
have the higher priority, but it's a higher priority only during
startup, later on they should all have the same priority... otherwise
we could just do it by means of systemd's service settings.
I prefer to start UI first, and then UI notifies systemd to start more services.
The original boot sequence seems server-oriented, and I need
desktop-oriented boot sequence -- let most services start AFTER a
usable basic UI popup.
KDM/GDM get really confused when host name changes after they are
started. And I have seen complaints that displayed host name is wrong.
So it probably should depend at least on network being available.
That is stupid, as the hostname may change due lots of reasons, maybe
you wifi changed and now you got another home domain from dhcp? What
would they do?
X does hostname based auth. It's seriously broken. In fact, almost
everything is broken in this context: A) dhcp should not modify the
local hostname. B) X shouldn't be so retarded to use literal host name
strings for authentication purposes. C) KDM shouldn't set up xauth that
way.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-04-20 00:39:12 UTC
Post by f***@gmail.com
I tried to create a hotplug.target (activated after default.target) and
moved udev-settle into it, but this rendered the system unbootable:
systemd depends on udev early on.
Thoughts: devtmpfs is mounted, so all cold-plug jobs could be done
without udev being involved.
IMHO, fast boot doesn't mean getting all services ready in a short time,
but rather popping up a UI as soon as possible. Windows seems to do
hotplug jobs after the user logs in.
Mandriva uses so-called "speedboot" with sysvinit, where the GUI is
started as soon as possible. It is handcrafted so that only several
device classes are coldplugged, and then the DM is launched effectively
from rc.sysinit already.
We want this to become the default, actually. Currently the semantics of
rc-local still block us from doing that. But ideally gdm would pop up a
login dialog on every screen as soon as it shows up, with no delay of any
kind and without waiting for any other services.
Users did mention that boot under systemd actually feels slower than
using sysvinit.
We can now spawn the full set of userspace services (a reasonably complete
GNOME session) in less than 1s. I doubt anybody else has been capable of
doing anything like that so far.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-04-20 00:36:09 UTC
Post by f***@gmail.com
Hi all,
(Note: the tests were performed on a laptop with a 4-core MIPS CPU, AMD
RS780 chipset, 2 GB of memory, a rotating hard disk with an ext4 filesystem,
Debian squeeze, Linux 2.6.36 with fanotify enabled, and systemd v20, booting
only to the console.)
1. How does readahead affect boot time?
Sadly, we observed a negative effect -- boot time increases by at least 1s.
With bootchart, I see more I/O at boot compared with no readahead;
see the attachment noreadahead-vs-readahead.png.
Thoughts: Maybe we only need to monitor files opened with read access, not
all opened files? P.S. inotify seems sufficient for the job (with one more
step to open each file).
In general, if what you boot is minimal, the effect of readahead will be
minimal too, but you still pay the cost of spawning yet another service.
Post by f***@gmail.com
2. udev-settle.service serializes the boot process, see the attachment
udev-settle.png.
I tried to create a hotplug.target (activated after default.target) and
moved udev-settle into it, but this rendered the system unbootable:
systemd depends on udev early on.
udev-settle is unnecessary unless you use LVM or very few other broken
services. As soon as they are fixed we can remove it for good. I don't
use this service on my machine anymore.

Also see my more recent blog story about this:

http://0pointer.de/blog/projects/blame-game
Post by f***@gmail.com
BTW, bootchart seems not very intuitive (it shows no services, only
processes; also, some processes may be missed if they start and exit
between two successive "ps aux" calls of bootchart). Is it possible to add
a similar feature to systemd?
We have that now with systemd-analyze plot.
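
For example:

systemd-analyze plot > boot.svg

which draws the units themselves, not just the processes, together with
their activation times.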

Lennart
--
Lennart Poettering - Red Hat, Inc.