Friday, 17 June 2011

Understanding systemd part-I

This post is based on my understanding of the original post by Lennart Poettering on systemd:

At first glance, systemd looks like just a fancy init system.

Maybe at this point I should get more familiar with the init system, to understand systemd better. init has PID 1, so obviously it gets started by the kernel before all other processes, and it is the ancestor of every other process. Apart from this very important function, init performs the central task of bringing up and maintaining user-space during boot. systemd aims to do this much faster than the venerable sysvinit. For a faster boot, 2 main things are required.
(1). Start less ---> Starting less means starting fewer services, or deferring a service's start until it is actually needed. For example, a printing service may not be required immediately after the system comes up, whereas some services (syslog, the D-Bus system bus, etc.) we know will be required sooner or later. Many services need not be started at all until they are directly called by the user, or their API is required by some other service.
(2). Start more in parallel ---> Starting more in parallel means that whatever we do have to run should not be serialised at start-up (as sysvinit does it), but run all at the same time, so that the available CPU and disk IO bandwidth is maxed out and the overall start-up time is minimised (a toy sketch of this follows below).
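To illustrate the second point, here is a toy sketch of my own (not systemd code) that starts several services in parallel with fork()/exec() instead of one after another; the service paths are hypothetical placeholders:

/* Start a batch of services concurrently instead of serialising them
 * as sysvinit does. The binaries named here are placeholders. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *services[] = {
        "/usr/sbin/syslogd",        /* hypothetical service binaries */
        "/usr/bin/dbus-daemon",
        "/usr/sbin/avahi-daemon",
    };
    size_t n = sizeof(services) / sizeof(services[0]);

    for (size_t i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0) {                      /* child: become the service */
            execl(services[i], services[i], (char *)NULL);
            perror("execl");                 /* reached only if exec fails */
            _exit(127);
        }
    }
    while (wait(NULL) > 0)                   /* reap children as they exit */
        ;
    return 0;
}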

Let's see how services start, with an example:

D-Bus  ---+
          +--> Avahi ------------+
syslog ---+                      |
                                 +--> libvirtd
HAL -----------------------------+
syslog --------------------------+

(Avahi depends on D-Bus and syslog; libvirtd depends on HAL, syslog, and Avahi.)

So before libvirtd can start, it has to wait for HAL and syslog, and additionally for Avahi, which in turn waits for D-Bus and syslog. libvirtd therefore sits at the end of a serial chain of start-ups, and that chain is where the boot delay comes from.

Parallelizing Socket Services
In order to get rid of these synchronisation delays, we first have to understand what one process actually requires from another. Usually that is an AF_UNIX socket in the file-system, though it could be AF_INET[6] too. For example, clients of D-Bus wait for /var/run/dbus/system_bus_socket to become connectable, clients of syslog wait for /dev/log, clients of CUPS wait for /var/run/cups/cups.sock, and NFS mounts wait for /var/run/rpcbind.sock and the portmapper IP port, and so on.

Now, if we can make a daemon's socket appear before the daemon itself is fully running, we can start more processes in parallel and significantly reduce boot time: we create the listening sockets before we actually start the daemons, and then just pass each socket to its daemon during exec(). That way, the init system can create all sockets for all daemons in one step, and then in a second step run all the daemons at once. If a service needs another that is not fully started up yet, that's completely OK: the connection is queued by the providing service's socket, and the client will potentially block on that single request. But only that one client will block, and only on that one request. Dependencies between services also no longer have to be configured to allow proper parallelised start-up: if we create all sockets at once, a service that needs another can be sure that its socket is there to connect to.
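To make the idea concrete, here is a minimal sketch, loosely modelled on systemd's socket-activation convention (passed fds start at 3 and their count is announced in the LISTEN_FDS environment variable). It is not the real implementation, and the daemon and socket paths are made up:

/* Create a daemon's listening socket in the init process, then hand it
 * to the daemon across exec(). Simplified imitation of socket activation;
 * /run/mydaemon.sock and /usr/sbin/mydaemon are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    /* 1. Create and bind the listening socket before the daemon exists. */
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/run/mydaemon.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 64) < 0) {
        perror("socket/bind/listen");
        return 1;
    }

    /* 2. From this moment on, clients can connect(); the kernel queues them. */

    /* 3. Start the daemon, passing it the ready-made socket as fd 3. */
    pid_t pid = fork();
    if (pid == 0) {
        if (fd != 3) {
            dup2(fd, 3);
            close(fd);
        }
        setenv("LISTEN_FDS", "1", 1);   /* tell the daemon one fd was passed */
        execl("/usr/sbin/mydaemon", "mydaemon", (char *)NULL);
        perror("execl");
        _exit(127);
    }
    close(fd);    /* init keeps no copy; the daemon now owns the socket */
    return 0;
}

With this in place, init can run step 1 for every daemon first, and then step 3 for all of them at once.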

## not clearly understood this part ##

Basically, the kernel's socket buffers help us maximise parallelisation: the ordering and synchronisation are done by the kernel, without any further management from user-space. And if all the sockets are available before the daemons actually start up, dependency management also becomes redundant (or at least secondary): if a daemon needs another daemon, it will just connect to its socket. If the other daemon is already started, this will immediately succeed. If it isn't started but in the process of being started, the first daemon will not even have to wait for it, unless it issues a synchronous request. And even if the other daemon is not running at all, it can be auto-spawned. From the first daemon's perspective there is no difference between these cases, hence dependency management becomes mostly unnecessary, and we get optimal parallelisation, optionally with on-demand loading.

On top of this, the scheme is also more robust, because the sockets stay available regardless of whether the actual daemons might temporarily become unavailable (maybe due to crashing). In fact, you can easily write a daemon with this that can run, exit (or crash), and run again, over and over, without the clients noticing or losing any request. The sketch below demonstrates this queuing behaviour.
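The queuing behaviour is easy to demonstrate in isolation. In this toy sketch (everything in one process for brevity; nothing here is systemd code), the "client" connects and writes before anyone has called accept(); the kernel parks the connection in the listen backlog and the data in the socket buffer until the "daemon" side gets around to reading it:

/* Demonstrate that the kernel queues a connection and its data for a
 * listening socket even before the server calls accept(). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 16);

    /* "Client" connects and sends its request while nobody is accepting. */
    int cli = socket(AF_UNIX, SOCK_STREAM, 0);
    connect(cli, (struct sockaddr *)&addr, sizeof(addr)); /* queued in backlog */
    write(cli, "hello", 5);               /* parked in the kernel socket buffer */

    /* Much later, the "daemon" accepts and finds the request waiting. */
    int conn = accept(srv, NULL, NULL);
    char buf[16] = {0};
    read(conn, buf, sizeof(buf) - 1);
    printf("daemon received: %s\n", buf); /* prints: daemon received: hello */
    return 0;
}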

Parallelizing Bus Services
Modern daemons on Linux tend to provide services via D-Bus instead of plain AF_UNIX sockets. Now, the question is: for those services, can we apply the same parallelising boot logic as for traditional socket services? Yes, we can. D-Bus already has all the right hooks for it: using bus activation, a service can be started the first time it is accessed. Bus activation also gives us the minimal per-request synchronisation we need for starting up the providers and the consumers of D-Bus services at the same time. If we want to start Avahi at the same time as CUPS (side note: CUPS uses Avahi to browse for mDNS/DNS-SD printers), we can simply run them at the same time; if CUPS is quicker than Avahi, the bus-activation logic gets D-Bus to queue the request until Avahi manages to establish its service name.
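On the client side this needs no special code at all: any message sent to a well-known name can trigger activation, and a client can also request it explicitly. Here is a minimal libdbus sketch; the service name org.example.Printing is made up, and a real service would make itself activatable by shipping a .service file whose Name= and Exec= lines tell the bus daemon what to spawn:

/* Ask the bus to activate a (hypothetical) service by its well-known name. */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void) {
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
    if (conn == NULL) {
        fprintf(stderr, "connection failed: %s\n", err.message);
        return 1;
    }

    dbus_uint32_t reply = 0;
    if (!dbus_bus_start_service_by_name(conn, "org.example.Printing",
                                        0, &reply, &err)) {
        fprintf(stderr, "activation failed: %s\n", err.message);
        return 1;
    }
    printf("service %s\n", reply == DBUS_START_REPLY_ALREADY_RUNNING
                               ? "was already running"
                               : "has been started");
    return 0;
}

(Builds with: gcc demo.c $(pkg-config --cflags --libs dbus-1).)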

Apart from these services, filesystem jobs also have to be parallelised, but I am not going to read much into that, as the main focus of my project lies in the Bus and Socket services.
