Interview with Linus Torvalds


We can’t start without this question: does Linux infringe Microsoft patents?

As far as we know, the answer is a resounding “no”, and it’s all just MS trying to counteract the fact that they have problems competing with Linux on the technical side by trying to spread FUD.

According to Mark Shuttleworth, the most important condition for Linux distributions is that they must remain free (free as in free beer). He said: “It will be a failure if the world moves from paying for shrink-wrapped Windows to paying for shrink-wrapped Linux”. What do you think about that?

Oh, absolutely. And I don’t see it happening. I do think that having companies work together (and that includes MS) is a good idea, but no, Linux itself is not about paying patent extortion money. In fact, the GPLv2 already requires that you can distribute it without any patent limitations.

And what do you think about GPL 3?

I think it’s just another open source license, along with about fifty other open source licenses out there (BSD, Mozilla, etc etc). It’s no longer a really bad one like some of the early drafts were, but in my opinion the GPLv2 is simply better.

Newer versions don’t necessarily mean “better”, especially when the new versions are more complex, and limit usage more.

Now you are involved with the Linux Foundation. James Zemlin, the Director of the Foundation, told the New York Times: “There are things that Microsoft does well in terms of promoting Windows, providing legal protection and standardizing Windows” and “the things that Microsoft does well are things we need to do well — to promote, protect and standardize Linux”. What else can be learned from Microsoft?

Well, historically, the most important lesson from Microsoft - and one they themselves seem to have forgotten - is simply “Give your customers what they want”.

I think the reason Microsoft was so successful was that they filled a niche with some very basic technology (and in this case, early on, that basic technology was literally the BASIC language - that’s how they largely got started), and they sold it cheap and made it “good enough”. They didn’t play games with the customer.

Of course, that seems to have changed. A lot about the last few years of Microsoft seems to very much be playing games with customers: their licensing and what, seven different “versions” of Vista, and all the DRM crap they are trying to push on their customers are not actually what anybody wants.

So Microsoft has always been good about marketing and selling, and their strong hold on the market has also caused them to become a standardized platform. That’s generally all good for customers. They’ve left some of that behind (now they are trying to splinter their market on purpose with Vista and pushing DirectX 10 only on the new platform, for example), but I think their historical successes are worth looking at.

What do you think about the agreement between Novell and Microsoft? What future developments will it produce? And what about Red Hat’s moves?

I really don’t care. You’re asking all these marketing and company questions, and the thing is, I’m not at all into it. I’m totally uninterested. What I’m into is the technology, and working together with people.

With Web 2.0 we are seeing wider adoption of open source development models — the Linux model: think of Adobe, and in part of Microsoft and Sun. What does this mean to you, and what, in your view, is open source today?

I think the real issue about adoption of open source is that nobody can really ever “design” a complex system. That’s simply not how things work: people aren’t that smart - nobody is. And what open source allows is to not actually “design” things, but let them evolve, through lots of different pressures in the market, and having the end result just continually improve.

And doing so in the open, and allowing all these different entities to cross-pollinate their ideas with each other, and not having arbitrary boundaries with NDA’s and “you cannot look at how we did this”, is just a better way.

I compare it with science and witchcraft (or alchemy). Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each other’s knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the body of knowledge never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.

And that’s exactly the same issue with open source vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

In contrast, open source works well in a complex environment. Maybe nobody at all understands the big picture, but evolution doesn’t require global understanding, it just requires small local improvements and an open market (“survival of the fittest”).

So I think a lot of companies are slowly starting to adopt more open source, simply because they see these things that work, and they realize that they would have a hard time duplicating it on their own. Do they really buy into my world view? Probably not. But they can see it working for individual projects.

Linux is a versatile system. It runs on PCs, huge servers, mobile phones, and dozens of other devices. From your privileged position, in which sector will Linux express its highest potential?

I think the real power of Linux is exactly that it is not about one niche. Everybody gets to play along, and different people and different companies have totally different motivations and beliefs in what is important for them. So I’m not even interested in any one particular sector.

That said, I personally tend to think most about the desktop, not because it’s in any way “the primary niche”, but simply because the desktop tends to have much more varied and complex behaviour than most other areas, so desktop usage shows issues that many other - more specific - usage areas simply won’t show.

In other words, you simply have to be more “well-rounded” on the desktop than you have to be in many other areas. In servers, the hardware and the software is often much more constrained (much smaller variation in both hardware and in types of loads put on the machine), and in various embedded areas the system really only needs to do one or two things really well.

In contrast, in the desktop space, different people do very different things, so you have to do a lot of different things, and you have to do so with wildly varying hardware.

People are still waiting for Linux’s grand entrance on the desktop. User-friendly distributions, like Ubuntu, and Dell’s decision to sell computers based on Linux are two big steps. But it seems something is still missing… what, in your view?

Oh, I think it just needs more time. We basically have all the pieces, but we can improve on them, and there’s simply a lot of inertia with most people and companies simply not being all that interested in changing their environment.

So I don’t worry about anything in particular, I just make sure we slowly improve, and time will take care of the rest.

Eben Moglen has asked Google for more cooperation with the open source world. What is your relationship with Google?

A lot of kernel developers are actually working for Google, so I wouldn’t worry about it. My right-hand man (Andrew Morton) is employed by them, for example, exactly to improve the kernel. And that’s really all that matters in the end: even big companies are nothing more than an accumulation of individuals, and what matters is not whether “Google” helps, but whether people like Andrew Morton are there - and by employing them, Google does end up helping them improve Linux.

That’s in no way Google-specific, of course. It’s true of any company that is involved with Linux, or even one that is not involved but allows its employees to do open source software on the side.

A technical question: any big news for the Linux kernel in the future? Maybe 2.8?

We’re not likely to have a new versioning system: we’ve been very successful at the new development model where we release a new 2.6.x kernel roughly every ten weeks or so (2-3 months), and we’ve been able to make even fairly radical changes without it being a big watershed event for users that requires some big new version number.
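The 2.6.x scheme described above can be illustrated with a minimal sketch. This is not anything from the kernel itself, just a hypothetical helper showing how such a release string (as reported by `uname -r`, e.g. `2.6.22` or `2.6.22.1-something`) breaks down into numeric components:

```python
def parse_kernel_release(release: str) -> tuple:
    """Split a kernel release string like '2.6.22.1-generic' into integer parts.

    Any local suffix after the first '-' (distro tags, build IDs) is dropped
    before splitting the dotted version numbers.
    """
    base = release.split("-")[0]
    return tuple(int(part) for part in base.split("."))

# In the 2.6 era, the third number advanced roughly every ten weeks,
# while the optional fourth number marked stable bugfix updates.
print(parse_kernel_release("2.6.22.1"))        # (2, 6, 22, 1)
print(parse_kernel_release("2.6.20-generic"))  # (2, 6, 20)
```

In practice one would feed this `platform.release()` or the output of `uname -r`; the point is simply that under this model the “big” first two numbers stay fixed while improvement happens continuously in the later components.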

And that’s really how things should be. Smooth and continued improvement. We’ve had lots of fairly big and painful re-organizations of the source code over the years, but as the kernel has matured (and as we have learnt better how to maintain it), there’s been less and less reason to see things as “huge new issues”, and more and more reason to see it as a continual improvement that most users don’t even need to be all that aware of.

From a user perspective, what you tend to want is not “oh, I need to upgrade to version X.Y because it does so-and-so”, but more something along the lines of “Hey, I can just upgrade, knowing that it will do the things it has always done, but maybe do some of them even better”.

And that may not sound very exciting, but it’s solid technology, and in the end, the excitement for kernel engineers is all in the kinds of details that most users would never even worry about (and shouldn’t worry about: after all, the whole point of an Operating System is to act as an abstraction layer between the system resources and the various applications you want to run on top of those resources).

Out of curiosity: what is your favourite distribution, and which one do you consider the most secure?

I don’t really tend to care much, I’ve changed distributions over the years, and to me the most important thing tends to be that they are easy to install and upgrade, and allow me to do the only part I really care about - the kernel.

So the only major distribution I’ve never used has actually been Debian, exactly because that has traditionally been harder to install. Which sounds kind of strange, since Debian is also considered to be the “hard-core technical” distribution, but that’s literally exactly what I personally do not want in a distro. I’ll take the nice ones with simple installers etc, because to me, that’s the whole and only point of using a distribution in the first place.

So I’ve used SuSE, Red Hat, Ubuntu, YDL (I ran my main setup on PowerPC-based machines for a while, and YDL - Yellow Dog Linux - ended up the easiest choice). Right now, most of my machines seem to have Fedora 7 on them, but that’s only a statement of fact, not meant to imply that I think it’s necessarily “better” than the other distros.

In a famous book you defined Linux’s development as an activity “just for fun”. Do you still enjoy it?

Yes. It’s still why I do it. The parts I do that end up being fun have been different over the years - it used to be purely about the coding, but these days I don’t write all that much code myself, and now it’s mostly about the organizational side: merging code, communicating with people, pointing people in the right direction, and then the occasional bugfixing myself.

This interview has also been translated into Italian.
