Systemd takes a reliable, known, thoroughly debugged process (init, or its various refinements, including Ubuntu's upstart and Debian's insserv) and converts booting from a deterministic, predictable process into one that's inherently unpredictable.
And the stated objective? "To reduce boot times".
The best way to reduce boot times is to not boot. The reason I reboot systems is to return them to a known good state (or, very rarely, to perform a kernel upgrade).
On server hardware, I perform boots infrequently, and really, really, really want them to work right.
On end-user hardware, I perform boots infrequently, preferring to use suspend/restore to quiesce my systems (suspend to RAM, occasionally suspend to disk). That's a process I'd like to have very thoroughly debugged, one that doesn't give me unhappy surprises (say: crashing my video/interactive session, or losing track of drivers/hardware, especially wireless).
Systemd is the wrong answer to the wrong problem.
Much written better than I can: http://blog.mywarwithentropy.com/2010/10/upstart-better-init...
Systemd loses the huge transitivity of shell scripting, and puts you in the position of needing to acquire a novel skill at the one time you least need to be learning and most need to be applying: when your systems won't boot straight: https://lwn.net/Articles/494711/
I'm also not much surprised that Red Hat, who've had such a historic problem with consistency and reliable dependency management within their packaging system (as compared to Debian/Ubuntu), are proponents of this technology (hint: it's not the package format, it's the policy, or lack thereof). And now Arch.
As an OS X, and thus launchd user... I think you guys are crazy! :)
For launchd, the service description is a few declarative entries: on OS X, ssh.plist is 37 lines, but only because XML plists are really verbose; it could be half that in a saner format. On my Debian system, /etc/init.d/ssh is 167 lines of almost entirely boilerplate shell script that has to be maintained separately for each service (and that isn't even enough to make the script standalone; it invokes the 1400 line start-stop-daemon). The only thing simpler about SysV init is that it's the legacy everything is compatible with: the simplicity of shell scripts doesn't hold up when you need over 100 lines for a simple daemon.
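For a sense of scale, here's roughly the shape of a launchd job description -- abridged and from memory, so treat the keys as illustrative rather than a verbatim copy of ssh.plist:

    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0">
    <dict>
        <!-- Unique job label -->
        <key>Label</key>
        <string>com.openssh.sshd</string>
        <!-- launchd owns the socket; sshd runs inetd-style via -i -->
        <key>ProgramArguments</key>
        <array>
            <string>/usr/sbin/sshd</string>
            <string>-i</string>
        </array>
        <key>Sockets</key>
        <dict>
            <key>Listeners</key>
            <dict>
                <key>SockServiceName</key>
                <string>ssh</string>
            </dict>
        </dict>
        <key>inetdCompatibility</key>
        <dict>
            <key>Wait</key>
            <false/>
        </dict>
    </dict>
    </plist>

Everything above is declarative: no forking logic, no PID files, no case statement.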
launchd itself is many thousands of lines of code (too much?), but it provides cron and inetd-like services (i.e. generalized on-demand services - it is really nice to know that a daemon has zero effect on my system, no pages that had to be loaded from disk, when it's not being used, but still operates efficiently when it comes under load; this also makes the implementation for the daemon simpler in some cases), as well as automatic process termination/restarting. Its service-on-demand focused dependency model is nondeterministic in the same way that systemd is (?), but it's completely reliable, since it's standard by now so everything is designed to work with it.
Of course I usually use suspend and restore, but making rebooting really fast makes the system more fun to use.
And yes, I'm talking about launchd, not systemd, but from what I've heard systemd is pretty similar in design and goals.
I'm sorry, but shell scripts suck as a language for booting the system. You need to fork() and exec() for almost anything non-trivial, wasting precious CPU cycles in the process. It seems every Linux distro has its own way of managing boot scripts. And when they fail, you have no idea what happened.
More importantly, init only handles starting and stopping services. It doesn't manage them, e.g. restarting them when they crash. Systemd can do that. The socket activation stuff also allows one to potentially save resources by not starting services until they're really needed.
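To make that concrete, here's a minimal sketch of a systemd service plus its activation socket -- the unit names, port, and paths are made up for illustration:

    # foo.service -- hypothetical daemon; systemd restarts it if it crashes
    [Unit]
    Description=Foo daemon

    [Service]
    ExecStart=/usr/sbin/food --no-daemonize
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # foo.socket -- systemd listens on the port; foo.service only
    # starts when the first connection arrives
    [Socket]
    ListenStream=8042

    [Install]
    WantedBy=sockets.target

Enable the socket instead of the service and the daemon costs nothing until someone actually connects.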
The best way to reduce boot times is to not boot? Have you ever heard of "laptops" and "average users"? Even on my servers, a shorter boot time is welcome.
My wish list goes something like this: better wireless drivers, improved sleep/suspend/hibernate/resume, better power management, a better package manager, more up-to-date applications ...
At the very, very bottom of that list -- the very last item, so far at the bottom of the list that it's in danger of falling off entirely -- is "faster boot times".
Dredmorbius is spot on, at least for me and my daily usage and the couple dozen or so servers that I'm responsible for. If things are so pooched that I have to reboot it, then it doesn't really matter to me anymore whether it takes 30 seconds or a minute to start up. I would much prefer not having to reboot it in the first place.
Since Chakra was (I think) forked from Arch Linux, I'll have to check and see if they're gonna do this too.
I hope not.
(edit: none of this is intended as a criticism of Chakra's development team, who have been doing an amazing job of putting together a system that, despite its warts, I genuinely enjoy using every day.)
Note that a better way of putting it is that systemd deals with state changes a lot better. Booting is one big state change, but when you have a laptop you go through a heck of a lot of others: suspend, hibernate, resume, docking, connectivity changes (e.g. wifi coming and going), storage added and removed, etc. You may let other people use your system (more state changes).
systemd can also ensure that only services you actually use get started. For example, printing on Linux is done by a server (cups), so systemd can ensure it doesn't start until you need it. This reduces power consumption.
Because of the way systemd manages services, it can also do a better job of isolating them and dealing with unexpected issues. For example, if the print server crashes, or someone attacks it while you're in Starbucks, you'll be better off. (Its chrooting is easier to use, as is the way things are put into control groups.)
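To put a number of lines on "easier to use": in a unit file the isolation is a directive or two. A hedged sketch -- which directives are available varies by systemd version, and the jail path is illustrative:

    [Service]
    ExecStart=/usr/sbin/cupsd -f
    # Private /tmp, invisible to other services
    PrivateTmp=yes
    # Or chroot the daemon outright
    RootDirectory=/var/jail/cups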
All the things you list require developer time and attention. If systemd lets developers spend less time on startup scripts, then they will have more time to devote to the things on your list. (If you've ever had to write startup scripts you'll know how long it takes to develop and debug them.)
You can add state-change management to your system without mucking with really solid, stable, low-level, critical code like init.
There's already hotplug support, xinetd, ifupdown's pre/post up/down stanzas, and the like (though networkmanager's screwing that bit up wonderfully). Chroot jails too. I'm not saying that these are perfect (and some are a very pale shadow of perfect indeed), but they're independent of init.
Systemd mashes a whole bunch of crap in one place. Most of which I really don't want to have to worry about.
Now, if Arch and Fedora want to serve as test beds for this stuff -- and either perfect it or reject it as nonviable -- well, yeah, I suppose I can live with that. Though I'm definitely not a fan.
You see, there's a few things here.
For my own systems, I really like not having to fuck with useless shit. Currently I'm managing networking manually on my laptop as NetworkMangler has gone to crap again. So I run "ifconfig" and "route" from a root shell (yay for shell history and recursive reverse search).
For servers, part of my performance evaluation is based on how many nines I can deliver. Not having shit get fucked up does really nice things to my nines. Having shit change does crap things to my nines. I like my nines. I really hate change. It's an ops thing. Where I've got to have change, I like to have it compartmentalized, modularized, with loosely-linked parts and well-defined interfaces.
Startup scripts are very much a mostly-solved problem. Debian gives you a nice template in /etc/init.d/skeleton. Play with it. Yes, I've written startup scripts.
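For reference, the working core of such a script really is small -- a trimmed sketch in the spirit of /etc/init.d/skeleton (abridged; the real template carries more boilerplate and the LSB logging helpers):

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          foo
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    ### END INIT INFO
    # Illustrative daemon paths; substitute your own.
    DAEMON=/usr/sbin/food
    PIDFILE=/var/run/food.pid

    case "$1" in
      start)
        start-stop-daemon --start --quiet --pidfile "$PIDFILE" --exec "$DAEMON"
        ;;
      stop)
        start-stop-daemon --stop --quiet --pidfile "$PIDFILE"
        ;;
      restart)
        "$0" stop
        "$0" start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac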
No one is stopping you from making your own distro that meets your own needs. And servers generally don't have many state changes; for those they do have, it would generally be acceptable to just reboot.
You may enjoy micro-managing your networking etc - good for you. Some of us don't like doing that. To give one example of stuff that certainly doesn't just work, I was trying to run Squid on my (Ubuntu) laptop and it certainly can't handle state changes well, and neither can Ubuntu's ifup/down and init system. I often ended up having to manually do stuff that the system should have been able to handle well.
I'm personally delighted with systemd's functionality - the way it captures output from services would have saved me hours in the past, dealing with services that wouldn't start up cleanly and provided no useful information as to why.
(Separately: my kingdom for a simple caching web proxy server)
Systemd on openSUSE as well as Mageia is pretty damn reliable. It is not a 'test bed'. Various distributions have been using it.
As a result, there are far fewer differences between distributions now, which means configuration becomes easier.
In any case, if you really care about things not changing, then I assume you're using a distribution which doesn't change this suddenly. So I don't see why you're so awfully negative.
I'm a daily laptop linux user, and like you I know how to go a long time between reboots, so I can wait for my computer to start up.
But forget about us, we're already converts, we don't matter. My grandfather (93 years old) is also a daily laptop linux user. When he presses that power button, that laptop better be booted and ready /yesterday/. And when he pushes it again, it better be off before he closes the lid. Slow startup and shutdown times are simply not an acceptable user experience; they are literally the difference between enjoying and wanting to use the computer, and not wanting to bother with it.
And don't think for a minute he's going to learn about suspend, hibernate, power savings, battery life, or whatever. It's just not going to happen. His laptop lives in the closet, so it's going to be off (either by his doing, or the battery running out). When he sees something on tv and wants to read about it, he takes the laptop out, plugs it in, and turns it on. If it's not ready for him when he's ready for it (i.e. now) then he just won't use it.
However, since I've got that sucker booting from power button to firefox home page load complete in under 7 seconds, he uses it all the time. And it's amazing how it enriches his life. You simply can't get computer use to penetrate into lives like his without fast booting and an easy user experience.
The sane thing is to tie power management to the power button.
Light press: hybrid suspend - suspend to RAM that also saves state to disk. The system spins down quickly and, so long as it hasn't been suspended long enough to drain the battery, restores in a second or so. Longer, and it does a boot/restore from disk.
Long press: powerdown.
Many devices have separate "suspend" and "poweroff" hardware (or soft controls) as well.
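One hedged sketch of wiring this up without touching init, using acpid and pm-utils (the event pattern and paths are illustrative; a long press is usually a firmware-level hard poweroff anyway):

    # /etc/acpi/events/powerbtn -- acpid rule: power button -> hybrid suspend
    event=button/power.*
    action=/usr/sbin/pm-suspend-hybrid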
That's lovely, but you're not paying attention. It doesn't matter how it's set up. It matters how it performs.
To the non-enthusiast / casual user, closing the lid, pressing the power button, doing a system shutdown, inactivity sleep timeout, and the battery running out are all the same thing: the computer was "on", now it's "off". Asking someone like this to think about how the reason it came to be "off" affects how fast it will be ready for them later is a fool's errand. It needs to be fast in every circumstance.
Normal people just want to get something done. They judge their computer by how easy it is to use and how fast it responds to what they do. That includes cold boots, launching programs, and loading web pages. Even if they're doing something "the wrong way", they will still judge it by the same criteria and with the same harshness. I want my grandfather to use linux because I can quickly help him and fix things from afar, and because there are very few ways for him to mess it up. He uses it because he really thinks it's better than windows, and that's purely because it's fast and easy, every way he uses it.
For the record, I set it up so the power button does a shutdown, and everything else results in a hybrid sleep. What he understands is that he can shut it down if he wants; otherwise, no matter what happens (lid closed or not), everything will be the way he left it, even if he forgets about it for a few days or doesn't charge it.
That kind of simplicity is what allows people to think of linux as something they can use, not just some super complicated tool for "hackers" and "computer geniuses". I'm not saying it should be dumbed down or have options removed, but I am saying that making it enjoyable for everyone results in more people using it, and that benefits us all.
Your message seems to have the hidden assumption that development resources are now being redirected from wireless drivers/suspend/power management to improving boot times. This is false. Different components are handled by different people, and axing a project does not mean that the people responsible will automatically work in one of the other fields that you prioritize.
> My wish list goes something like this: better wireless drivers, improved sleep/suspend/hibernate/resume, better power management, a better package manager, more up-to-date applications ...
Use a different distro. None of these is a problem on a modern distro with reasonably modern hardware.
Exactly how precious are those CPU cycles? I mean, really. Can you put a dollar figure on them?
And then contrast that with the dollar figure for consultant / employee / remote hands time to figure out WTF went wrong?
There are numerous systems for managing services: monit is the best known, mon and several proprietary systems also exist. Nagios can tell you if the service is running or not (though it doesn't handle the start/stop logic).
These are small details and extensions on top of the existing SysV init foundation.
Ubuntu's boot time is already down to 8.6 seconds -- a restore from suspend is barely less than that (and restore from disk is considerably longer), though both restores preserve user state. You know, what applications / files you had open, and what was in them when you left off, positions of windows on your desktop. All that jazz.
http://www.jamesward.com/2010/09/08/ubuntu-10-10-boots-in-8-...
The socket management is kind of nifty, but doesn't add a whole lot that xinetd didn't already offer (systemd does allow multi-socket services and d-bus-initiated services). I'm not convinced these couldn't be hacked into xinetd while preserving the simplicity and stability of init.
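For comparison, socket activation in xinetd is a short stanza like this (illustrative; see xinetd.conf(5) for the full attribute set):

    service ssh
    {
        disable      = no
        socket_type  = stream
        protocol     = tcp
        wait         = no
        user         = root
        server       = /usr/sbin/sshd
        server_args  = -i
    }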
My desktop state (and its preservation) is worth a lot more than fast boot.
Yes. I've heard of inane gratuitous questions. As I said: if you're forcing average users to reboot with any frequency, you're Doing It Wrong.
No, monit doesn't manage services. Monit tries to follow clues you've given it about what's running, it polls them once in a while, and if something appears to be not running (as measured by the instructions you've given it), it runs the one-liner you've given it that should start the thing up again.
Monit does a thing that approximates managing a process, for certain values of "approximates", "managing", and "process". Supervisory process management is one of Linux's absolute weakest points. I cut my teeth on fault-tolerant HA minicomputers, and it pains me to think that 30 years later, we still don't have a way to say "make sure apache is always running. period."
As a great blog pointed out, there is exactly one process that KNOWS when a service has stopped running, and it doesn't need .pid files or polling or anything else to tell it: process 1.
I'm not a systemd advocate - I don't know enough about it, and we're using Ubuntu so I'll end up learning upstart anyway - but read this, it's way more eloquent than I can be:
Fair points. And thanks, by the way, for actually advancing the discussion.
Init can and does manage processes. Somewhat crudely, mostly via the 'respawn' directive. One thing it isn't particularly good at is telling if a process is doing something useful (say, serving out web pages successfully), but it will let you know that it's running. There was a semi-popular hack some years back to run sshd out of init (via respawn) to ensure you always had an SSH daemon on your box (Dustin mentions this). The downside is that while it will ensure sshd is running, it doesn't give you much flexibility over the process (you've got to edit inittab and 'init q' to make changes).
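The whole hack fits on one inittab line -- something like the following, with sshd told not to detach so init can watch it (runlevels illustrative):

    # /etc/inittab format: id:runlevels:action:command
    ss:2345:respawn:/usr/sbin/sshd -D

Edit, run 'init q' to reread inittab, done -- and that's about all the control you get.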
What monit and kin can do, above and beyond process-level monitoring, is check that the service attributes of a process are sane. That a webserver, say, kicks out a 200 OK response rather than a 4## or 5## error, and restart the service if this isn't the case. Checking for correct operation can be more useful than simply verifying a process is running (though going too far overboard in defining "correctness" can also cause problems).
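In monit's config, that service-level check reads roughly like this (pidfile and commands are illustrative):

    check process apache with pidfile /var/run/apache2.pid
      start program = "/etc/init.d/apache2 start"
      stop program  = "/etc/init.d/apache2 stop"
      # restart when the server stops answering HTTP sanely
      if failed host 127.0.0.1 port 80 protocol http then restart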
For realtime/HA tools, attacking things on the single-system level is probably the wrong way to roll. You want a load balancer in front of multiple hosts with response detection -- is host A still up or not? Whether or not this ties into mitigation (restart) or alerting (notifications to staff) is another matter.
There are also places other than init you can watch things from. /proc contains within it multitudes, including a lot of interesting/useful process state. Daemons can be written with control/monitoring sockets instrumented directly into themselves. Debuggers, strace, ltrace, dtrace, and systemtap all provide resolution inside a running process/thread. Creating something sane, effective, efficient, and sufficient out of all these tools ... interesting problem.
How long does it take your servers to finish POST? Shaving CPU cycles on boot is not something I ever worry about, because just getting to the boot loader takes minutes.
Also, shell scripts rock for an init system language. It's a language that almost everyone knows and can debug without being a CS major. The only reason you 'have no idea what happened' is because the scripts are written poorly, and code in any language would be hard to debug if it's written poorly.
Fork and exec, seriously? You're worried about functions that take microseconds to finish? Look again - the huge sleep cycles to wait for drivers to finish initializing takes up a lot more time.
I have written my own init systems three times in three languages, and examined countless distros' versions. Trust me, shell is the best compromise.
'ls -A | wc -l' will spare you having to account for the '.' and '..' lines. Omitting ls's '-l' (redundant for your case) also spares you the "total" line.
You're going to bash systemd but give a pass to upstart? Give me a break. Upstart is just as radical a departure from SysV init as systemd is, but the documentation (and IMHO, features) is much poorer.
As far as I can see your arguments are: 1) you boot your systems infrequently, so any work in that area isn't valuable; 2) socket-based activation is somehow not predictable; 3) you're familiar with shell scripting, so a change that replaces shell scripting with something else must be bad; 4) and then you throw in some unclear Red Hat FUD for no apparent reason. None of those sound convincing to me.
Upstart and systemd provide tons and tons of other features though: restarting of crashed processes, dependencies, etc. They also generally have much simpler config files instead of startup scripts. I don't know how many crappy startup scripts I've seen over the years, when in practice "set these environment variables, execute this program as this user with these arguments" is 95+% of what's needed.
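That common case collapses to a handful of declarative lines in a unit file -- a sketch with made-up names:

    [Service]
    User=foo
    Environment=FOO_HOME=/srv/foo
    ExecStart=/usr/bin/food --config /etc/food.conf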
Much, much more straightforward to have some specially formatted comments (?! hahaha, that's the UNIX spirit!) determine the boot priority, and then source some files to read some arbitrary variables and construct the command line you're interested in running, with complete abstraction.
The other functionality may be nice, but 1) it's got no place in init and 2) it really complicates a key piece of system infrastructure. Complexity and change are the twin enemies of stability. As an old-fart ops type, with scars on my hide and notches on my belt, I really hate both change and complexity. They mess with my nines.
Arch and Fedora are relatively wide of my usual ambit, but I've learned in my years to be wary of what others ask for -- you may get it and have to live with the consequences (see: GNOME).
My computer changes location at least twice a week as I commute between my place and my girlfriend's place. A Mac mini serves my needs very well because (including AC adapter) it weighs only 2.7 pounds, and I really appreciate having more ports than most laptops have and not having to pay for and carry around a bad keyboard and a laptop display. (I consider all laptop keyboards bad keyboards, and -- maybe because I am "far-sighted" -- much prefer my girlfriend's 32-inch TV to any laptop display.)
But since the Mac mini does not have a battery, S3 sleep mode does not survive unplugging the device. And since suspend-to-disk is not supported by the OS I run, shutting down is the only option.
P.S., I would have preferred something like a Mac mini, but with a small battery that powers S3 sleep mode. Sadly, I could not find anything like that on the market.
P.P.S., I run OS X on it. If I were to switch to Linux, would suspend-to-disk work reliably?
Well ... you're not running Linux, so systemd is moot (you've got launchd instead, which has certain similarities).
I'm a fan of small form-factor systems, though I suspect we'll start seeing these as G3 tablets (where the iPad was G1, and the current Android-and-others are G2). Which is to say, devices with integrated display and battery, to which other peripherals may be attached (physically or wirelessly, say, by Bluetooth). That said, we're not there yet.
And yes, small form-factor PCs (CPU, no battery, no display) are pretty slick. I'm something of a fan of the FitPC offerings: http://www.fit-pc.com/web/purchase/order-direct-fit-pc3/ (Googling "small form factor" will show you numerous other vendors).
I used a similar configuration under Linux for a time, and as of mid 2000s, found suspend-to-disk worked pretty reliably, though not perfectly. In the past 4-5 years on laptops and desktops, I've had very few problems, mostly traceable to display drivers.
Just because you don't reboot often does not mean booting has to be slow. In the open-source community, people choose their own projects. You can't really expect them -- much less force them -- to work on your favorite things.
Except that in practice, systemd works fantastically and you're all worried about nothing.
Systemd also does much more than that and handles stuff like daemonization and socket creation, so that these things don't need to be re-implemented in every program that requires them.
Bash scripts are overly verbose, repetitive, and awkward in comparison to unit files.
And you can always use sysvinit if you still aren't convinced; it's just that Arch will be optimised for systemd.
...except that those things do still need to be re-implemented in every program that requires them, since most POSIXy programs are portable to more than just systems using systemd.
This being my primary problem with the current upheaval in Linux system organization. Its instigators have mostly made it clear that they consider everything not Linux (or possibly preferably not their favorite flavor thereof) to be obsolete - throwing portability out the window.
Core, deep systems software has subtle bugs, or hidden bugs, or emergent bugs, or any of a whole host of things.
If arch and fedora want to ride this tiger, I guess they can.
Again: init is really, really stable stuff.
Add in hooks to journald, d-bus, and the equivalent of an xinetd replacement/upgrade. Too much change.
And a Really Bad Attitude from the developer. My experience (a few decades of beating around on various tech at various scales) says this doesn't bode well.
Your "much written better than I can" article is about Upstart, not Systemd. They're unrelated init daemons, and it seems like many of the complaints in the article relate specifically to upstart (event-triggered services with no dependencies, minimal scripting support, killing daemons, not all daemons managable by upstart) but do NOT apply to systemd. Systemd has service dependencies, support for old-style init scripts, and can stop daemons with arbitrary commands. Are you confusing the two init systems?
On the other hand, this would be nice: "There is no tool that will print out a dependency map." It's also pretty trivial to implement with a little shell script and graphviz.
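Something in this spirit would do it -- a sketch assuming LSB-style headers in /etc/init.d (for systemd units you'd walk Requires=/After= instead):

    #!/bin/sh
    # Emit a graphviz dependency map from LSB init-script headers.
    echo "digraph deps {"
    for f in /etc/init.d/*; do
        [ -f "$f" ] || continue
        svc=$(basename "$f")
        # Pull the Required-Start: line out of the LSB header block
        for dep in $(sed -n 's/^# Required-Start:[[:space:]]*//p' "$f"); do
            echo "    \"$svc\" -> \"$dep\";"
        done
    done
    echo "}"
    # Render with: ./depmap.sh | dot -Tpng -o deps.png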
I work on embedded linux boxes in vehicles and boot times are hugely important to us. The time from when someone turns on their car ignition to the time when our box is usable is critical.
I understand that this feature isn't important to dredmorbius, but to some of us this type of improvement is fantastic.
Any reason you can't just sleep/suspend your system at ignition-off? I can see that there might be times when the system does go down hard and you've got no option but to reboot, but still, that should be rare.
I work with a fair number of embedded systems myself. Most avoid full boots where possible.
Depending on the app, sleep may be possible. But for some apps, like automotive, it is not surprising to see requirements like current draw being less than one milliamp in the sleep state.
I haven't seen or heard of any embedded apps which hibernate to flash. That would not be terribly fast, and would wear out the flash quickly.
That could raise some interesting engineering/maintenance considerations.
A local capacitor might provide the latent power to support sleep state. Or you could provision flash with enough ECC and reserve capacity (a 16 GB microSD drive fits on my pinkie nail) to survive years. Might even make swapping the storage a regular maintenance item, say 5-year cycle. Figure a high-end duty-cycle of 10 starts/day, 365 days/year -- that's 3650 read/write cycles a year. Even if that's a 100x low estimate, we're talking 365,000 cycles/year (that's assuming 1000 starts/day). As of 2003, AMD were discussing 1,000,000 cycle lifetimes for flash storage: http://www.spansion.com/Support/Application%20Notes/AMD%20DL...
Actually, in five years, controller technology would likely advance enough that, provided your unit production count is high enough, you'd just swap the entire controller for a new component with enhanced capabilities.
> Systemd loses the huge transitivity of shell scripting, and puts you in the position of needing to acquire a novel skill at the one time you least need to be learning and most need to be applying: when your systems won't boot straight:
What about those that can't debug when a shell script breaks?
Your answer is going to be that they have no business administering a server where a shell script is an integral part of the system working.
Conversely, someone that can't debug when a system can't start that uses systemd has no business administering a server where systemd is an integral part of the system working. If your system uses systemd, then you're going to need to learn a new tool. Get used to it.
If they can't debug it, they can find someone who can, and that skillset is, I can guarantee you, going to be far more widely available than Systemd debugging fu.
On which point, specifically: when Debian breaks during initrd execution, the system is dumped to "dash", a POSIX-compliant shell. It doesn't have all the niceties of bash, but it's usable.
When a Red Hat system breaks during initrd execution, the system shell doesn't handle terminal IO. You literally can't even fucking talk to the damned thing. It's a scripting-only shell.
The kicker: the RHEL initrd shell is larger than dash.
Guess which of these two systems is easier to troubleshoot / debug / rescue in a pinch?
You haven't seen yet how this epic engineering artifact of wheel reinvention explodes in your face. See http://lwn.net/Articles/506842/ . Now try debugging it with rd.debug, and you'll have debug info printed for the debug-info-printing functions themselves.
> What about those that can't debug when a shell script breaks?
Are they more able to debug when a systemd setup breaks? If not, it seems like a moot point to bring up. They're hosed either way.
Although I do have to say, I like the systemd model. The use of sockets to do process activation and thus doing away with almost all of the need for dependency management is pretty cool. I haven't used it enough to pass judgement, but the concept has the potential to be a good deal simpler than the init hackery we have now.
How is it a moot point? If you are equally unable to debug it either way then dredmorbius' original argument about the failure point being a bad time to learn to debug is completely meaningless. The point is that either way you're going to have to learn how to fix problems before they happen but dredmorbius is complaining because ey just happens to already know how to debug one form.
While I understand why you don't want to boot all the time, some of us do.
Here are some reasons:
* Despite years of work on it, I find sleep on Linux laptops is still flaky. My Thinkpad T420s failed to wake up about once a week (on Ubuntu), so I tend to shut down.
* I like having a clean desktop when I start in the morning. If I keep sleeping my machine, open programs just tend to pile up. Of course, you could argue I should get more organized, but I don't really want to.
* One other problem is to do with Linux being used on both servers and desktops. I can see your problem. Personally, if my machine ever got into such a mess that it couldn't boot, I'd just reinstall, regardless of what had broken. I suspect most people are the same. However, I can understand if you want to be able to edit how your machine starts up, and fix it when it breaks.
1. File a bug report. As I said: if you want faster boots, boot less. We should be fixing the problems (like hibernate/restore flakiness) that cause people to reboot. Or the long-term power draw that forces embedded devices to power off. Or the flash read/write duty-cycle limitations that limit embedded devices' ability to save state and the rate at which they can save/restore data. Etc.
2. You can bounce your X session. No need to reboot the full box (me? I prefer saved state).
3. My servers may be anywhere from several feet from me (stuffed into a closet with limited access and a crap POS keyboard and monitor) to tens to thousands of miles away. With varying values of ILOM / remote hands / virtual media support. "Reinstall" isn't generally a highly tenable operation. Being able to handle issues without having to dedicate one or more staff days to travel and unavailability for other tasks really sucks productivity down.
Is any of this really an argument against Arch adopting it? Arch and Gentoo and all the other rolling distros are the cowboy distros (I run Arch on my laptop). Literally anything can break with an update on Arch. Is Arch not the best place to try new ideas and see if they work out, with a bunch of fairly technical users? I'm not about to switch all my servers to it, but I'm more than willing to play around with it on my laptop.
> Literally anything can break with an update on Arch.
I can confirm that. After the 3rd time such a thing happened to me, I switched to Ubuntu. Arch is a test bed for people who like screwing around with Linux. Nothing against that, but from time to time I'd like to be able to do actual work on my workstation :)
Shell scripts are awful for boot. They have no expression of a dependency graph and truly pathetic notions of state. Hell, starting a process in the background takes ACTUAL THOUGHT in a shell script. How insane is that? systemd may not be as thoroughly tested but at least it's designed thoughtfully and will eventually be more reliable. For now, let it be relegated to arch and let them test it. What are you even doing with arch on a server anyway?
insserv adds the dependency graph you're looking for.
Standard on wheezy. Allows for parallel launch of services.
I'll spend more time in hardware init (especially on servers) and fsck (even just journal replays) than in service startup, for the most part. Even my servers (minimal services starting) take a while to come up, mostly due to the actual workload stack coming up. Then caches get to warm up and all that jazz.
Dependency problems on Red Hat? News to me, and I admin various RHEL servers.
I do know that there were issues maybe 10+ years ago. Bringing things up that were solved 10+ years ago is a bit pointless.
Also, your systemd summary is inaccurate. You give the impression you just don't like Red Hat, because you don't really say anything concrete about systemd aside from some very generic remarks.
> I'm also not much surprised that Red Hat, who've had such a historic problem with consistency and reliable dependency management within their packaging system (as compared to Debian/Ubuntu)
I think "historic" is important here. RPMs were still hard to use (mainly dependency hell) for a while after .debs were easy. This hasn't been true for years though.
On server hardware, I perform boots infrequently, and really, really, really want them to work right.
My philosophy is different. Make a change to a server, reboot.
The goal is to eliminate surprises if the server is restarted unexpectedly. I'd rather have them during the maintenance window than at 03:30 after a power outage.
Anyway - to systemd.
I was appalled when we moved up to Solaris 10 and the SMF facility started to replace init scripts. It felt wrong.
I adapted. It's not wrong, it's just different. Better in some respects: you can still use bash scripts, but you have better control over them, a standardized way of managing things.
Now we're abandoning Solaris for Linux and ... I'm appalled that the default Linux method is still ... init scripts. And a hodge-podge of stuff like djb's daemontools, systemd, etc., all with competing fanboys and advocates.
I'm a big fan of knowing my systems will initialize properly as well.
AT&T ran into a little restart issue, as I recall, in 1990 when a software upgrade gone wrong crashed much of the phone network. Among the problems were that most of the switches had been upgraded in place, many over decades, and there had never been a cold-boot restart. There was some uncertainty as to whether the system would start up properly or not.
While long uptimes are nice, I generally prefer seeing a few reboots annually just to be sure things will come up right. There's a balance between "restart for every change" and "restart regularly enough to not be surprised at 3am".
I've been through phases where I've done frequent reboots. As noted above, hardware POSTs are usually the bulk of the cycle. There used to be other annoyances (Sendmail's 2-3 minute timeout on non-networked hosts was a real PITA).
Now it's other stuff. With chef and automated system management, repeated 'apt-get update' runs, which even with local caches and other tricks add about 120-150s per startup.