The Millennium Technology Prize, awarded every two years, is a Finnish award designed “to improve the quality of life and to promote sustainable development-oriented research, development and innovation.” Sir Tim Berners-Lee won the prize in 2004. The finalists this year are Dr. Shinya Yamanaka, who has been contributing to the field of stem cell research, and Linux creator Linus Torvalds. The 2012 Grand Prize winner will be announced on June 13 in Helsinki, Finland.
From the press release:
In recognition of his creation of a new open source operating system kernel for computers, leading to the widely used Linux operating system. The free availability of Linux on the Web swiftly caused a chain-reaction leading to further development and fine-tuning worth the equivalent of 73,000 man-years. Today millions of computers, smartphones and digital video recorders like TiVo run on Linux. Linus Torvalds’ achievements have had a great impact on shared software development, networking and the openness of the web, making it accessible for millions, if not billions.
I had the opportunity to ask Linus a few questions by email. Hopefully I didn’t simply create a nerd version of The Chris Farley Show.
Scott Merrill: You use a MacBook Air because you want a silent, quality computer. Why is it that Apple has the corner on this market? Have you considered using your fame or some portion of your fortune to try to remedy this?
Linus Torvalds: You *really* don’t want me to start designing hardware. Hey, I’m a good software engineer, but I’m not exactly known for my fashion sense. White socks and sandals don’t translate to “good design sense”.
That said, I have to admit being a bit baffled by how nobody else seems to have done what Apple did with the MacBook Air – even several years after the first release, the other notebook vendors continue to push those ugly and *clunky* things. Yes, there are vendors that have tried to emulate it, but usually pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and light.
Btw, even when it comes to Apple, it’s really just the Air that I think is special. The other Apple laptops may be good-looking, but they are still the same old clunky hardware, just in a pretty dress.
I’m personally just hoping that I’m ahead of the curve in my strict requirement for “small and silent”. It’s not just laptops, btw – Intel sometimes gives me pre-release hardware, and the people inside Intel I work with have learnt that being whisper-quiet is one of my primary requirements for desktops too. I am sometimes surprised at what leaf-blowers some people seem to put up with under their desks.
I want my office to be quiet. The loudest thing in the room – by far – should be the occasional purring of the cat. And when I travel, I want to travel light. A notebook that weighs more than a kilo is simply not a good thing (yeah, I’m using the smaller 11″ MacBook Air, and I think weight could still be improved on, but at least it’s very close to the magical 1kg limit).
SM: I wasn’t so much asking why you haven’t designed your own hardware — I fully understand people playing to their own strengths. It’s taken considerable time for hardware manufacturers to recognize Linux as a viable platform, and today more and more OEMs are actively including or working toward Linux compatibility. Surely there’s an opportunity there for the global Linux community to influence laptop design for the betterment of everyone? I know it’s not your passion, and I respect that. Do you have any suggestions or guidance on ways we can collectively influence these kinds of things?
LT: I think one of the things that made Apple able to do this was how focused they’ve been able to stay. They really have rather few SKUs compared to most big computer manufacturers, and I think that is what has allowed them to focus on those particular SKUs and make them better than the average machine out there.
Sure, they have *some* variation (different amounts of memory etc.), but compare the Apple offerings to the wild and crazy world of HP or Lenovo or Toshiba. Other hardware manufacturers tend not to put all their eggs in a single basket (or a few), and even then they tend to hedge their bets and go for fairly safe and boring on most offerings (and then they sometimes make the mistake of going way crazy on the “designer” models to overcompensate for their boring bread-and-butter).
That kind of focus is quite impressive. It’s also often potentially unstable – I think most people still remember Apple’s rocky path. Not *that* long ago I used to think that Apple would go bankrupt, and I’m sure I wasn’t the only one. And that kind of focus can be hard to maintain in the long run, which is probably why most other companies don’t act that way – the companies who consistently try to revolutionize the world also consistently eventually fail.
So that kind of focus takes guts. I’m not an Apple fan, because I think they’ve done some really bad things too, but I have to give them credit not just for having good designers, but for having the guts to go with them. Jobs clearly had a lot to do with that.
Anyway, I don’t think it’s worth worrying too much about laptops. The thing is, the MacBook Air was (and still to some degree is) ahead of its time. But I actually think that hardware is catching up to the point where building good laptops really isn’t going to be rocket science any more. Rotational media really is going away, and with it goes one of the last form-factor issues: people really do not need (or want) that big spindle for a hard disk, or the silly spindle for an optical drive.
Sure, optical drives will remain in some form factors for a while, and other form factors will stay bigger just because the manufacturer will want to continue to offer the capability of a rotational disk too – they’re still cheaper and have bigger capacities. But at the same time, *small* flash-based storage is really getting quite good, and while you still pay more for it, it’s not revolutionary any more. The mSATA/miniPCIe form factor is becoming a more and more realistic standard.
Together with CPUs often being “fast enough”, I would expect the MacBook Air kind of form factor to become way more of a norm than it used to be. Apple was ahead of the curve, and I absolutely have higher expectations of the hardware I use than the average user probably does, but at the same time I’m convinced that the notebook market will finally get where I think it should be. Sure, some people will still want to use the big clunkers, but making a good thin-and-light machine is simply not going to be the expensive technical challenge it used to be.
In other words, we’ll take the whole MacBook Air form factor for granted in a few years. It’s been done, it used to be pretty revolutionary, and it’s going to be pretty standard.
It *did* take a lot longer than I thought it would, admittedly. I’ve loved the thin-and-lights for much longer than the MacBook Air has existed. It’s not like Apple made up the concept – they just executed well on it.
What I in many ways think is more interesting are people who do new things. I love the whole Raspberry Pi concept, for example. That’s revolutionary in a whole different direction – maybe not the prettiest form factor, but it takes advantage of how technology gets cheaper to really push the price down to the point where it’s really cheap. Sure, it’s a bit limited, but it’s pretty incredible what you can do for $35. Think about that with a few more years under its belt.
The reason I think that is interesting is because I think we’re getting to the point where it is *so* cheap to put a traditional computer together, that you can really start using that as a platform for doing whole new things. Sure, it’s good for teaching people, but the *real* magic is if one of those people who get one of those things comes up with something really new and fun to do with it.
Fairly cheap home computing was what changed my life. I wouldn’t worry about how to incrementally improve laptop design: I think it’s interesting to see what might *totally* change when you have dirt-cheap, almost throwaway computing that you can use to put a real computer inside some random toy or embedded device. What does that do to the embedded development world when things like that are really widely available?
SM: You don’t pull any punches when communicating with kernel developers and patch submitters. Has this tactic helped or hindered your success as a father?
LT: I really don’t know. I think the kids have grown up really well, and I don’t think it hurt them that we had rules in the family that were fairly strictly enforced (usually with a five-minute timeout in the bathroom). We had a very strict “no whining” rule, for example, and I’ve seen kids that should definitely have been brought up with a couple of rules like that.
That said, maybe they’re just naturally good kids. I don’t remember the last time I sent them to the bathroom (but it’s still a joke in our family: “If you don’t behave, you’ll spend the rest of the day in the bathroom”).
And while I do work from home, I am *not* a “father” when I work. The kids always knew that if they came in and disturbed me while I was at the computer, they’d get shouted at. I know some people who say that they could never work from home because they’d be constantly distracted by their kids – that is just not the case in our family. So despite me working from home, we’re a very “traditional” family – Tove stayed at home and was really the homemaker and took care of the kids.
And don’t get me wrong: when I interact with kernel developers, there can be a lot of swearing involved. And while that may *occasionally* happen with the kids too, the kids get hugs and good-night kisses too. Kernel developers? Not so much.
Would some kernel people prefer getting tucked in at night instead of being cursed at? I’m sure that would be appreciated. I don’t think I have it in me, though.
SM: How does your family feel about what you do for a living? What questions did/do they ask?
LT: They’ve never seen anything else, so I doubt they even think about it. It’s just what dad does. None of my three daughters have so far shown any actual interest in computers (outside of being pure users – they game, they chat, they do the Facebook thing), and while they end up using Linux for all of that, they don’t seem to think it’s all that strange.
SM: Do you try to get involved with technology problem-solving in your everyday life, for example at your kids’ school? If so, how has that been received?
LT: Oh, the absolute *last* thing I want to do is be seen as a support person. No way.
Sure, I do maintain the computers in the house, and it obviously means that the kids’ laptops (which they use in school too) run Linux, but it turns out that the local school district has had some Linux use in their computer labs anyway, so that never even made them look all that different.
But I’m simply not really organized enough to be a good MIS person. And frankly, I lack the interest. I find the low-level details of how computers work really interesting, but if I had to care about user problems and people forgetting their passwords or messing up their backups, I don’t know what I’d do. I’d probably turn to drugs and alcohol to dull the pain.
Even in the kernel project, I’m really happy that I’m not a traditional manager. I don’t have to manage logistics and people, I can worry purely about the technical side. So while I don’t do all that much programming any more (I spend most of my day merging code others wrote), I also don’t think of myself as a “manager”, I tend to call myself a “technical lead person” instead.
SM: What do you want to tell people that no one has ever bothered to ask you?
LT: The thing is, I don’t have a “message” to people. I never really did. I did (and do) Linux because it’s fun and interesting, and I really also enjoy the social aspect of developing things in the open, but I really don’t have anything I want to tell people.
SM: I apologize for not making this question more clear. I’m not asking if you have a message or anthem or anything like that. As a celebrity, you’ve given lots of interviews. Many of them have been formulaic, and there are only so many times you can be asked the same questions before rolling your eyes in exasperation.
Is there any question you wish you’d been asked in an interview? Whether it’s because you’ve got the perfect / clever / whatever answer prepared, or just because you’d welcome the novelty of it? If so, what would your answer have been?
LT: Hmm. Some of the interviews I’ve enjoyed the most have been with somewhat antagonistic people who came from a non-computer background. I remember a Russian journalist (back when I lived in Helsinki) who was writing a piece for some Russian financial newspaper. He really was pretty aggressive, and being Russian in the years after the fall of the Soviet Union, he had an almost unhealthy admiration for Microsoft, for making lots of money, and for capitalism. I’m sure it was heightened by the whole admiration for Wall Street etc. that must run in the blood of most financial journalists to begin with.
That made for an interesting interview – because I like arguing. Explaining to a person like that why open source works, and in fact works better than the model he so clearly idolized, was interesting. I don’t think I necessarily convinced him, but it still made for a memorable interview.
But any particular question? No. That’s not what I tend to find interesting – I enjoy the process, the argument, and the flow of ideas of an interview. I don’t think there’s a “perfect question”, much less a “perfect answer that I wish somebody had asked me the question for”. So you’re asking for something that I don’t think I have.
But to expand on that, and to perhaps give you something of an answer anyway: this is very much true for me in software development too. I like the *process*. I like writing software. I like trying to make things work better. In many ways, the end result is unimportant – it’s really just the excuse for the whole experience. It’s why I started Linux to begin with – sure, I kind of needed an OS, but I needed a *project* to work on more than I needed the OS.
In fact, to get a bit “meta” on this issue, what’s even more interesting than improving a piece of software is improving the *way* we write and improve software. Changing the process of making software has sometimes been one of the most painful parts of software development (because we so easily get used to certain models), but it has also often been one of the most rewarding. It is, after all, why “git” came to be, for example. And open source in general is obviously just another “process model” change that I think has been very successful.
So my model is kind of a reverse “end result justifies the means”. Hell no, that’s the stupidest saying in the history of man, and I’m not even saying that because it has been used to make excuses for bad behavior. No, it’s the worst possible kind of saying because it totally misses the point of everything.
It’s simply not the end that matters at all. It’s the means – the journey. The end result is almost meaningless. If you do things the right way, the end result *will* be fine too, but the real enjoyment is in the doing, not in the result.
And I’m still really happy to be “doing” 20 years later, with not an end in sight.
SM: Looking back over the history of Linux, do you have any “Oh man, I can’t believe I did/said that” reactions? (Note: this is not strictly about code, but about engineering or policy decisions.)
LT: Engineering decisions usually aren’t a problem. Sure, I’ve made the wrong decision many times, but usually there was some good reason for it at the time – and the important part about engineering decisions is that you can fix them later when you realize they were wrong. So the “oh, that was spectacularly wrong” happens all the time, but the more spectacular it is, the quicker we notice, and that means that we fix it quickly too.
The one really memorable “Oh sh*t” moment was literally very early on in Linux development, when I realized that I had auto-dialed my main hard disk when I *meant* to auto-dial the university dial-in lines over the modem. In the process I wiped out my Minix setup of the time by writing AT commands to the disk, which understandably didn’t respond the way the auto-dialling script expected. (“AT commands” is just the traditional Hayes modem control instruction set.)
That’s the point where I ended up switching over to Linux entirely, so it was actually a big deal for Linux development. But that was back in 1991.
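(A side note for readers who missed the Hayes modem era: an auto-dial script simply writes plain-ASCII “AT” commands down whatever device node it is pointed at. The sketch below – purely illustrative, not Linus’s actual script, with made-up device paths and phone number – shows why aiming such a script at a disk instead of a serial port is catastrophic: the write succeeds anyway, straight over the first sectors of the disk.)

```c
/*
 * Illustrative sketch only: how an auto-dial routine wrecks a disk
 * when pointed at the wrong device node. Paths and number are made up.
 * Do NOT run this against a real disk.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Intended target: the serial port the modem hangs off.  */
    /* const char *dev = "/dev/ttyS1";                         */

    /* Actual target after the mix-up: the raw hard disk.     */
    const char *dev = "/dev/hda";

    int fd = open(dev, O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Hayes command set: "ATDT<number>\r" means "attention,
     * dial (tone)". A modem would answer "OK" or "CONNECT"; a
     * disk answers nothing, but the bytes still land at offset
     * zero, right on top of the boot sector and partition data. */
    const char *dial = "ATDT1234567\r";
    if (write(fd, dial, strlen(dial)) < 0)
        perror("write");

    close(fd);
    return 0;
}
```

A few dozen stray bytes at the start of a disk are enough to take out the boot block and filesystem metadata, which is presumably why the Minix install never came back.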
SM: If you could give an award to someone, who would be the recipient, and for what accomplishment?
LT: Hey, while I am a computer guy, my heroes are still “real scientists”. So if I can pick anybody, I think I’d pick Richard Dawkins for just being such an outspoken critic of muddled thinking and anti-scientific thought.
SM: The Millennium Technology Prize ceremony is on June 13, which happens to be my birthday. Any chance I can be your +1 to the party?
LT: Scott, I never knew you felt that way. I think my wife would not approve.
SM: Nor would mine, but you miss all the shots you don’t take!
SM: What are the major Linux distributions doing right, in general, and where are they falling short? Your recent Google+ rant about OpenSUSE’s security stance sheds some light on this, but I’d like to know more. Are formalized distributions a necessary evil? How much (if any) influence do you have with the distributions?
LT: So I absolutely *love* the distributions, because they do all the things that I’m not interested in, and even very early on they became a big source of support for the kernel, driving all the things that most technical people (very much including me) didn’t tend to be interested in: ease of use, internationalization, nice packaging, just making things a good “experience”.
So I think distributions have been very instrumental in making Linux successful, and that whole thing started happening very early on (some of the first distributions appeared in early ’92 – on floppy disks).
So they aren’t even a “necessary evil” – they are a “necessary good”. They’ve been very instrumental in making Linux what it is, both on the technical side, but *especially* on the ease-of-use and approachability side.
That said, exactly because they are so important, it does frustrate me when I hit things that I perceive to be steps backwards. The SuSE rant was about asking a non-technical user for a password that the non-technical user had absolutely no reason to even know, in a situation where it made no sense. That kind of senseless user hostility is something we’ve generally moved away from (and some kernel people tend to dismiss Ubuntu, but I really think that Ubuntu has generally had the right approach and been very user-centric).
The same thing is what frustrated me about many of the changes in GNOME 3. The whole “let’s make it clutter-free” idea was taken to the point where it was actually hard to get things done, and it wasn’t even obvious *how* to do things when you could do them. That kind of minimalist approach is not forward progress, it’s just UI people telling users “we know better”, even if it makes things harder to do. That kind of “things that used to be easy are suddenly hard or impossible” just drives me up the wall and frustrates me.
As to my own influence: it really goes the other way. The distributions have huge influences on the kernel, and not only in the form of employing a lot of the engineers. I actively look to the distributions to see which parts of the kernel get used, and often when people suggest new features, one of the things that really clinches it for me is if a manager for some distribution speaks up and says “we’re already using that, because we needed it for xyz”.
Sure, I end up influencing them through what I merge, and how it’s done, but at the same time I really do see the distributions as one of the first users of the kernel, and the whole way we do releases (based on time, not features) is partly because that way distributions can plan ahead sanely. They know the release schedule to within a week or two, and we try very hard to be reliable and not do crazy things.
We have a very strict “no regressions” rule, for example, and a large part of that rule is so that people – very much including the people involved in distributions – don’t need to fear upgrades. If it used to work a certain way, we try very hard to make sure it continues to work that way. Sure, bugs happen, and some change may not be noticed in time, but on the whole I think a big part of kernel development is to try to make it as painless as possible for people to upgrade smoothly.
Because if you make upgrades painful, it just means that people will stay back.
SM: You’ve been doing this for 20 years. What do you think of the newest crop of kernel contributors? Do you see any rising stars? Do you see any positive or worrisome trends with respect to the kind and caliber of contribution from younger developers?
LT: I’m very happy that we still have a very wide developer base, and we continue to see more than a thousand different people for each release (which is roughly every three months or so). A lot of those contributions come from people who make just tiny one-liner changes, and some of them are never heard from again once they get their one small fix done, but on the other hand, small one-liner changes are how many others get started.
That said, one of the things that *has* changed a lot in the 20 years is that we certainly have a lot more “process” in place. Most of those one-liners didn’t get to me directly – many of them came through multiple layers of submaintainers, etc. By the time I see most “rising stars”, they’ve already been doing smaller changes for a long time.
The one worrisome trend is pretty much inevitable: the kernel *is* getting big, and a lot of the core code is quite complex and sometimes hard to really wrap your head around. Core areas like the VM subsystem or the core VFS layer simply are not easy to get into for a new developer. That makes it a bit harder to get started if that’s what you are interested in – the bar has simply been raised from where it was ten or fifteen years ago.
At the same time, I do think it’s still fairly easy to get involved, you may just have to start in a less central place. Most kernel people start off worrying about one particular driver or platform and “grow” from there. We do seem to have quite a lot of developers, and I’ve talked to open source project maintainers who are very envious of just how many people we have involved in the kernel.
SM: You’ve said that it’s the technical challenge that keeps you involved and motivated. Surely there are plenty of technical challenges in the world. Why stick with the kernel?
LT: I think it’s partly because I’m the kind of person who doesn’t flit from one project to another. I keep on doing Linux, because once I get started, I’m kind of obstinate that way.
But part of it is simply the reason I started doing a kernel in the first place – if what you are interested in is low-level interactions with hardware, a kernel is where it is all at. Sure, there are tons of technical challenges out there, but very few of them are as interesting as an operating system kernel if you are into that kind of low-level interaction between software and hardware.
SM: As the number of systems and architectures supported by the Linux kernel continues to grow, you can’t possibly have development hardware for each of them. How do you verify the quality and functionality of all the change requests you get?
LT: Oh, that’s easy: I don’t.
The whole model is built on a network of trust among developers who have come to know each other over the years. There’s no way I can test all the platforms we support – the same way there is no way I can check every single commit that gets merged through me. And I wouldn’t really even *want* to check each piece of hardware or each change – the point of open source and distributed development is that you do things together. We have a few tens of “high-level” maintainers for various subsystems (e.g. networking, USB drivers, graphics, particular hardware architectures, etc.), and even those maintainers can’t test everything in their area, because they won’t have that particular hardware. I trust them, and they in turn trust the people they work with.
I think any big project is about finding people you can trust, and really then depending on that trust. I don’t *want* to micro-manage people, and I couldn’t afford to even if I did want to.
And the thing is, smart people (and people who have what I call “good taste”, which is often even more important) may be rare, but you do recognize them. I think one of my biggest successes is actually outside Linux: recognizing how good a developer Junio Hamano was on git, and trusting him enough to just ask if he would be willing to maintain the project. Being able to let go and trusting somebody else is *important*, because without that kind of trust you can’t get big projects done.
What will Linus do with the prize money, if he wins? “I guess I won’t have to worry about the kids’ education any more,” he says.
Thanks, Linus, for taking the time to chat with me. And good luck! We hope you win the Millennium Technology Prize!
Photo credit: Wikipedia