MS Linux Myths - the Truth Revealed


The following is a mostly complete reprint, with added commentary, of the recent Microsoft "Linux Myths" propaganda. I have attempted to answer only the points raised by Microsoft; I have not included other technical advantages or other Linux hype. Additionally, I have been honest in assessing the current shortcomings of the Linux kernel. I believe this document should serve as an accurate and telling rebuttal of Microsoft's blatant attempt to demote Linux back into the "almost-was" category. Enjoy.

With all the recent attention around Linux as an operating system it's important to step back from the hype and look at the reality.

Yes, let's.

First, it's worth noting that Linux is a UNIX-like operating system. Linux fundamentally relies on 30-year-old operating system technology and architecture.

True in some respects, false in others. UNIX, like all operating systems, has taken the 30 years it was given and evolved. MS is correct in that Linux is a UNIX-like operating system - Linux took the ideas of UNIX and re-coded them using modern algorithms and practices. So, like MS's own Windows NT (loosely based on VAX/VMS) or DOS (cloned from CP/M), it has evolved.

Linux was not designed from the ground-up to support symmetrical multiprocessing (SMP),

True, but I haven't seen exceptional performance out of SMP NT, either. In fact, only a few specialized systems push NT beyond its normal 4-processor limit. If NT were truly capable of scaling as MS promotes, it would be running regularly on 16-processor Alpha machines right now. Linux already handles user-process SMP quite well. As MS mentions, 2.2 was supposed to address SMP issues (I'll add "in the kernel") - it did, but the lock granularity is not as fine as some would wish (the sketch below shows what "granularity" means here). Linux 2.4 has had extensive analysis in this department, both as a result of benchmark critiques and from tools provided by SGI - expect it to be much more efficient at SMP.
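
For readers wondering what "granularity" actually means, here is a minimal user-space sketch in C (illustrative pthread code of my own devising - the kernel uses its own spinlocks, not pthreads). With one "big lock," every processor serializes on a single mutex, roughly how Linux 2.0 guarded the whole kernel; with one lock per bucket, threads working on different buckets no longer wait on each other, which is the direction 2.2 started and 2.4 continues:

    /* granularity.c - user-space illustration of coarse vs. fine locking.
     * Compile: cc granularity.c -o granularity -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NBUCKETS 4

    /* Coarse: one "big lock" covers every bucket, so all CPUs contend
     * on it (the Linux 2.0 approach - one lock around the kernel). */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static long coarse_count[NBUCKETS];

    static void coarse_add(int bucket)
    {
        pthread_mutex_lock(&big_lock);
        coarse_count[bucket]++;
        pthread_mutex_unlock(&big_lock);
    }

    /* Fine: one lock per bucket; threads touching different buckets
     * proceed in parallel (the 2.2/2.4 direction). */
    static pthread_mutex_t bucket_lock[NBUCKETS] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
    };
    static long fine_count[NBUCKETS];

    static void fine_add(int bucket)
    {
        pthread_mutex_lock(&bucket_lock[bucket]);
        fine_count[bucket]++;
        pthread_mutex_unlock(&bucket_lock[bucket]);
    }

    int main(void)
    {
        coarse_add(0);
        fine_add(1);
        printf("coarse[0]=%ld fine[1]=%ld\n", coarse_count[0], fine_count[1]);
        return 0;
    }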

graphical user interfaces (GUI),

Well, as much as DOS was ever designed to run a GUI (Windows 3.x - 98)... The Linux "movement" has spawned several ambitious and successful GUI projects; they can be configured to work "just like Windows", or they can extend the capabilities of the GUI to new levels. Now what remains is to refine the utilities and applications for maximum usability.

It is also arguable that having your GUI tied in to your OS is a bad thing.

asynchronous I/O,

Different models for different systems. Async I/O is on its way; in the meantime there are other ways - non-blocking descriptors multiplexed with select() or poll() serve the same purpose, as sketched below.
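
A minimal sketch of that traditional alternative: put a descriptor in non-blocking mode and let select() tell you when it is ready, leaving the process free to do other work in the meantime (here it just watches standard input).

    /* select_demo.c - multiplexed non-blocking I/O, the traditional
     * stand-in for kernel async I/O on UNIX systems.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/select.h>

    int main(void)
    {
        char buf[256];
        fd_set readable;
        struct timeval tv;

        /* Non-blocking mode: read() will never stall the process. */
        fcntl(STDIN_FILENO, F_SETFL,
              fcntl(STDIN_FILENO, F_GETFL, 0) | O_NONBLOCK);

        for (;;) {
            FD_ZERO(&readable);
            FD_SET(STDIN_FILENO, &readable);
            tv.tv_sec = 5;      /* wake up at least every 5 seconds */
            tv.tv_usec = 0;

            /* Sleep until input is ready (or the timeout fires),
             * instead of blocking inside read() itself. */
            if (select(STDIN_FILENO + 1, &readable, NULL, NULL, &tv) > 0
                && FD_ISSET(STDIN_FILENO, &readable)) {
                ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
                if (n <= 0)
                    break;      /* EOF or error */
                buf[n] = '\0';
                printf("got: %s", buf);
            } else {
                printf("nothing ready; doing other work...\n");
            }
        }
        return 0;
    }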

fine-grained security model,

MS has three things in mind when they pull out this term. First is the NTFS support for ACLs. As in NT, this is a filesystem issue, not a design problem - FAT doesn't support them, NTFS does. Linux does not have ACL support available in production yet, but tests are underway on an ACL filesystem, and patches also exist to create a "trustee" filesystem similar to Novell's approach. The second thing they mean is that security access can be subdivided under NT. Linux supports this via groups, via PAM, and via the 'sudo' command. Additionally, Linux has support for POSIX (draft) capability flags, though without filesystem support their functionality is limited at this time (a sketch of these capability calls follows this paragraph). The third item MS may mean by a "fine-grained security model" is delegation of authority for administering users, groups, and computers (available in Windows 2000). Linux supports this in the same manner as Windows 2000 - using LDAP.
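
To make those capability flags concrete, here is a hedged sketch using the libcap library (assuming libcap and a capability-aware kernel; the particular capability is just an example). A root daemon that only needs to bind a privileged port can shed everything else rather than remain all-powerful:

    /* drop_caps.c - sketch of POSIX (draft) capability flags via libcap.
     * Assumes libcap is installed; compile: cc drop_caps.c -o drop_caps -lcap
     */
    #include <stdio.h>
    #include <sys/capability.h>

    int main(void)
    {
        /* Start from an empty capability set... */
        cap_t caps = cap_init();
        cap_value_t keep[] = { CAP_NET_BIND_SERVICE };

        /* ...and raise only the single capability we actually need
         * (binding ports below 1024), dropping root's other powers. */
        cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET);
        cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET);

        if (cap_set_proc(caps) == -1)
            perror("cap_set_proc (needs root and kernel capability support)");
        else
            printf("now holding only CAP_NET_BIND_SERVICE\n");

        cap_free(caps);
        return 0;
    }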

and many other important characteristics of a modern operating system. These architectural limitations mean that as customers look for a platform to cost effectively deploy scalable, secure, and robust applications, Linux simply cannot deliver on the hype.

Sigh. No one can deliver on hype, not even MS - the master of hype and pre-announcements. If a customer is truly looking for scalable, secure, and robust, they should bypass both NT and Linux and head for an enterprise-class machine with hot-swap everything and 8+ processors. If they are looking at a workstation, a departmental server, or anything within the realm of Internet servers, then Linux has been proven to perform and scale quite well.

Myth: Linux performs better than Windows NT


Reality: Windows NT 4.0 Outperforms Linux On Common Customer Workloads

The Linux community claims to have improved performance and scalability in the latest versions of the Linux Kernel (2.2), however it's clear that Linux remains inferior to the Windows NT® 4.0 operating system.

[stuff about the Mindcraft/PC Week Labs tests deleted...]

An analysis of the data gleaned from these tests did indeed indicate a weakness in the TCP/IP stack of the Linux version used. A patch created during these tests greatly improved the SMP behavior of the TCP/IP stack. This will appear in the 2.4 kernel (and parts are already included in the 2.2 kernel series).

Also, some number-crunching indicates that a single-processor system running either OS would saturate something between a T1 and a T3 line. (For rough scale, a T1 carries about 1.5 Mbit/s and a T3 about 45 Mbit/s; a single late-1990s server pushing static web pages fills the former with ease.) Ask MS how many Internet servers they recommend for tasks of this size, then ask them why they don't recommend fewer - it's called [lack of] stability.

* Linux performance and scalability is architecturally limited in the 2.2 Kernel. Linux only supports 2 gigabytes (GB) of RAM on the x86 architecture,[1] compared to 4 GB for Windows NT 4.0. The largest file size Linux supports is 2 GB versus 16 terabytes (TB) for Windows NT 4.0. The Linux SWAP file is limited to 128 MB RAM. In addition, Linux does not support many of the modern operating system features that Windows NT 4.0 has pioneered such as asynchronous I/O, completion ports, and fine-grained kernel locks. These architecture constraints limit the ability of Linux to scale well past two processors.

Sigh - the 2GB limit has been overcome in semi-official patches; it may not be in the official kernel due to the limited number of customers actually deploying systems with this memory capacity. Few Intel systems accept even 1GB of RAM at this stage. SGI currently uses (and supports) these patches for their systems - they should be considered production level.

The filesystem is currently limited in its capacity. This is, in part, a POSIX definition; future filesystem work is likely to remove this limit. (The 2 GB figure is not arbitrary - it is simply what a signed 32-bit file offset can address, as the sketch below shows.) It should be noted that many databases prefer multiple smaller files to single large files - huge files generate inefficiencies.
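
A trivial sketch of that arithmetic, for the curious:

    /* off_t_limit.c - where the 2 GB ceiling comes from: a signed
     * 32-bit file offset cannot address past 2^31 - 1 bytes.
     */
    #include <stdio.h>

    int main(void)
    {
        long max_offset = 2147483647L;   /* 2^31 - 1, max signed 32-bit */

        printf("32-bit off_t ceiling: %ld bytes (= %ld MB)\n",
               max_offset, max_offset / (1024L * 1024L));
        /* Large-file support (a 64-bit off_t) is what lifts this limit;
         * it is part of the post-2.2 filesystem work mentioned above. */
        return 0;
    }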

The swap issue has been dealt with in the more recent kernels; additionally, the Linux swap partition is more efficient than an NT swap file.

Lastly, Windows NT 4.0 did not pioneer fine-grained kernel locks - the big UNIX vendors were doing SMP before MS even had an SMP-capable kernel. I imagine NT's "pioneering" of the other technologies mentioned is about as genuine.

* The Linux community continues to promise major SMP and performance improvements. They have been promising these since the development of the 2.0 Kernel in 1996. Delivering a scalable system is a complex task and it's not clear that the Linux community can solve these issues easily or quickly. As D. H. Brown Associates noted in a recent technical report,[2] the Linux 2.2 Kernel remains in the early stages of providing a tuned SMP kernel.

The 2.0 kernels had user-level SMP only (user processes ran on multiple CPUs, but a single lock serialized the entire kernel). The 2.2 kernels distributed locks throughout the kernel, but these locks were admitted (at the time of release) to be an incomplete solution. The Linux community has taken these critiques to heart and has already overcome the issues associated with them. In less than a year, Linux has gone from the "early stages of providing a tuned SMP kernel" to providing an efficient one.

Myth: Linux is more reliable than Windows NT


Reality: Linux Needs Real World Proof Points Rather than Anecdotal Stories

The Linux community likes to talk about Linux as a stable and reliable operating system, yet there is no real world data or metrics and very limited customer evidence to back up these claims.

* Microsoft Windows NT 4.0 has been proven in demanding customer environments to be a reliable operating system. Customers such as Barnes and Noble, The Boeing Company, Chicago Stock Exchange, Dell Computer, First Union Capital Markets, Nasdaq and many others run mission critical applications on Windows NT 4.0.

Hmm... Let's try US West, Cisco, the US Post Office, the digital effects rendering farm for "Titanic", and about 40% of the country's ISPs. I'd go on, but you can try the Linux Business Applications page - it's got a big list...

* Linux lacks a commercial quality Journaling File System. This means that in the event of a system failure, such as a power outage, data loss or corruption is possible. In any event, the system must check the integrity of the file system during system restart, a process that will likely consume an extended amount of time, especially on large volumes and may require manual intervention to reconstruct the file system.

NTFS is also not a true JFS, but rather a partial implementation. The major advantages of a JFS are "guaranteed" data preservation and speedy recovery from hard crashes (the toy sketch below shows the basic write-ahead idea). However, there is a reason why many NT admins will not build their NT servers with NTFS boot partitions. Ask MS what you should do if your NTFS partition does get corrupted beyond the ability of a journal replay... Oh, (I know you'll get tired of this), there are no fewer than three filesystem solutions in development or beta-test stages to add journaling: ext3 (an extension of the current Linux filesystem standard), reiserfs (a theoretical improvement over conventional journaling designs), and SGI's XFS (a proven workhorse).
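
To show why journal replay beats a full fsck, here is a toy write-ahead sketch (the file names and record format are invented for illustration; ext3, reiserfs, and XFS are vastly more sophisticated). The invariant is simple: the intent record is forced to disk before the data is touched, so a crash leaves either the old data or a replayable record - never garbage:

    /* journal_sketch.c - toy write-ahead journaling; file names and
     * record format invented for illustration only.
     */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char *update = "new contents\n";
        char record[128];
        int jfd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int dfd = open("data.db", O_WRONLY | O_CREAT, 0644);

        if (jfd < 0 || dfd < 0) {
            perror("open");
            return 1;
        }

        /* 1. Log the intended change and force it to disk first. */
        snprintf(record, sizeof record, "BEGIN len=%lu data=%s",
                 (unsigned long)strlen(update), update);
        write(jfd, record, strlen(record));
        fsync(jfd);

        /* 2. Only now modify the real data. */
        write(dfd, update, strlen(update));
        fsync(dfd);

        /* 3. Mark the transaction complete.  After a crash, recovery
         * simply replays any BEGIN without a matching END - seconds of
         * work, versus fsck walking an entire large volume. */
        write(jfd, "END\n", 4);
        fsync(jfd);

        close(jfd);
        close(dfd);
        return 0;
    }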

* There are no commercially proven clustering technologies to provide High Availability for Linux. The Linux community may point to numerous projects and small companies that are aiming to deliver HA functionality. D. H. Brown recently noted that these offerings remain immature and largely unproven in the demanding business world.

I wonder how they rated the MS clustering solution: it doesn't have much deployment... Again, if the customer truly wants HA, then they need to go to the big UNIX boys. But if they do want a cheaper PC solution, highly-rated third-party packages used for HA on Sun and other platforms are now becoming available on Linux (for their usual somewhat exorbitant price).

* There are no OEMs that provide uptime guarantees for Linux, unlike Windows NT where Compaq, Data General, Hewlett-Packard, IBM, and Unisys provide 99.9 percent system-level uptime guarantees for Windows NT-based servers.

These companies probably also recommend a scheduled reboot on a regular basis (NT works much better if it isn't running for very long). Also, these companies (and others) are just entering the commercial Linux market. As they become familiar with the system, these guarantees will appear. In the meantime, I would pit my poor under-powered home Linux system (which has run at 100% capacity 12 hours daily for months on end) against the commercial NT server solution at my workplace any day.

Myth: Linux is Free


Reality: Free Operating System Does Not Mean Low Total Cost of Ownership

The Linux community will talk about the free or low-cost nature of Linux. It's important to understand that licensing cost is only a small part of the overall decision-making process for customers.

That depends - for the number of systems at my current job, combined with the number of users, I believe licensing costs are pretty significant. (Let's see - 4000 users * (1 client license + 1 Exchange license + approx. 3 server licenses) = lots of bucks for MS.) As the number of clients increases, so does the significance of licensing costs versus Linux.

* The cost of the operating system is only a small percentage of the overall total cost of ownership (TCO). In general Windows NT has proven to have a lower cost of ownership than UNIX. Previous studies have shown that Windows NT has 37 percent lower TCO than UNIX. There is no reason to believe that Linux is significantly different than other versions of UNIX when it comes to TCO.

The quoted lower TCO is probably due to maintenance fees (always significant in the proprietary hardware/software world), combined with upgrade fees (also traditionally higher in the closed-source world of high-end UNIX) and a lack of complete admin tools. MS fails to consider the changes brought about by Linux: hardware maintenance done by the same people who handle the Windows machines, upgrade fees low to non-existent, and a rapidly developing GUI admin environment. Combined with the reliability of a Linux system, this equates to a lower TCO.

Linux is still catching up in the refinement of its GUI, but serious efforts have been launched to address jargon bloat and other usability issues that do affect TCO.

* The very definition of Linux as an Open Software effort means that commercial companies like Red Hat will make money by charging for services. Therefore, commercial support services for Linux will be fee-based and will likely be priced at a premium. These costs have to be factored into the total cost model.

MS obviously hasn't looked at its own fee schedule for support lately. The Linux support organizations seem to be pricing themselves in the same ballpark as MS, so TCO from this standpoint appears similar. Additionally, the Linux companies provide 30-day installation support free with the purchase of their product; this is the period with traditionally the highest call volume, and it should lower TCO.

* Linux is a UNIX-like operating system and is therefore complex to configure and manage. Existing UNIX users may find the transition to Linux easier but administrators for existing Windows®-based or Novell environments will find it more difficult to handle the complexity of Linux. This re-training will add significant costs to Linux deployments.

MS can't have it both ways here. Either UNIX is more complex (and therefore more advanced than NT), or it isn't. As for retraining, MS seems to find it hard to get college graduates who understand NT, while the colleges are cranking out Linux-literate grads at a rate never before seen; Linux is cheap - hence, available to students. And again, handling difficult issues under NT is at least as complex as under Linux. (Can you say "undocumented Registry setting"?)

* Linux is a higher risk option than Windows NT. For example how many certified engineers are there for Linux? How easy is it to find skilled development and support people for Linux? Who performs end-to-end testing for Linux-based solutions? These factors and more need to be taken into account when choosing a platform for your business.

In fact, MS points this out because Linux certification - like Linux's popularity - is new, but rapidly developing. Let's rephrase the question: how many certified UNIX admins and engineers are there? All of them are at least 80% qualified to be certified Linux admins. Linux's initial popularity sprang from its faithful emulation of, and expansion on, the UNIX base; all of these people can rapidly pick up on a Linux system where they left off on their old platform.

Our organization has recently seen an "outbreak" of Linux installations. In assessing support requirements, we found that about 70% of the UNIX admin team and perhaps 30% of the Windows support and admin teams are already running and administering Linux systems. I don't think we will have a problem supporting it, other than telling our team we don't need any more support people.

Myth: Linux is more secure than Windows NT


Reality: Linux Security Model Is Weak

All systems are vulnerable to security issues, however it's important to note that Linux uses the same security model as the original UNIX implementations - a model that was not designed from the ground up to be secure.

+ Linux only provides access controls for files and directories. In contrast, every object in Windows NT, from files to operating system data structures, has an access control list and its use can be regulated as appropriate.

Well, almost every object. Luckily for MS, I'm not going to pound on this too much - they corrected their omissions in Windows 2000. It seems rare in production environments that the user/group/other model does not fit the requirements, and simple solutions are sometimes the best and easiest to secure.

+ Linux security is all-or-nothing. Administrators cannot delegate administrative privileges: a user who needs any administrative capability must be made a full administrator, which compromises best security practices. In contrast, Windows NT allows an administrator to delegate privileges at an exceptionally fine-grained level.

False on both counts. Linux provides group-level security for several common administration functions, among other things (a sketch of group-gated administration follows this paragraph). The kernel is also capable of providing full POSIX (draft) capabilities (though, as pointed out, they are not used to their fullest potential due to filesystem lags). LDAP and NIS+ both provide ACL-style account administration control for networked machines. Lastly, the UNIX world has long had a convenient, audited method of securing administrator functionality - it's called 'sudo'. Windows NT does not allow quite the "fine-grained control" MS would have you believe. Yes, there are a decent number of controls, but the Administrator account (or its clone) is still the ruler of the NT pond, just as root is in control of the UNIX world. It can still do anything and, with the exception of passwords, the "owner" may never know what happened.
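
As a concrete example of group-level gating, here is a small C sketch (the group name "operators" is hypothetical) that refuses an administrative action unless the caller belongs to the designated group - the same idea that underlies setgid admin tools and sudo's group rules:

    /* group_gate.c - permit an admin action only to members of a
     * designated group.  The group name "operators" is hypothetical.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <grp.h>

    #define ADMIN_GROUP "operators"
    #define MAX_GROUPS  64

    int main(void)
    {
        struct group *gr = getgrnam(ADMIN_GROUP);
        gid_t groups[MAX_GROUPS];
        int i, n;

        if (gr == NULL) {
            fprintf(stderr, "no such group: %s\n", ADMIN_GROUP);
            return 1;
        }

        /* The caller's primary group counts... */
        if (getegid() == gr->gr_gid)
            goto permitted;

        /* ...as does any supplementary group the caller holds. */
        n = getgroups(MAX_GROUPS, groups);
        for (i = 0; i < n; i++)
            if (groups[i] == gr->gr_gid)
                goto permitted;

        fprintf(stderr, "not in %s: permission denied\n", ADMIN_GROUP);
        return 1;

    permitted:
        printf("member of %s: admin action would proceed here\n", ADMIN_GROUP);
        return 0;
    }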

+ Linux has not supported key security accreditation standards. Every member of the Windows NT family since Windows NT 3.5 has been evaluated at either a C2 level under the U.S. Government's evaluation process or at a C2-equivalent level under the British Government's ITSEC process. In contrast, no Linux products are listed on the U.S. Government's evaluated product list.

And the Linux "movement" has had the cash to do this when? Now that Linux companies are getting larger, look for Linux to enter the certified arena soon. In the meantime, understand that C2 is a minimal rating that means little to customers who really need security. Most modern operating systems can reach C2 with little problem; NT can't if it is connected to a network. A 'B'-level rating is needed for truly secure systems, and that software is only available from the big guns (Trusted Solaris, IBM OS products, etc.). Also understand that these ratings are tied to specific hardware models - your generic PC has not been so rated.

+ Linux system administrators must spend huge amounts of time understanding the latest Linux bugs and determining what to do about them. This is made complex due to the fact that there isn't a central location for security issues to be reported and fixed. In contrast Microsoft provides a single security repository for notification and fixes of security related issues.

FUD, plain and simple. I read security sites, as does any good administrator. (If NT administrators don't read NT Bugtraq, they're missing out on the real world.) However, I do not pore over bugs like a fiend possessed. I stroll over to my local distribution's update site and download the packaged update; it's usually available within a day of the bug's discovery. How long does it usually take MS to turn around security fixes? Try a week or more on average. And convenient packages containing all bug fixes are not available more than once or twice yearly for NT, despite what MS might have you believe (reference ZDNet's recent "crack this box" contest, where ZD claimed that installing Red Hat's 21 centrally-stored fixes was "too complicated" for the average administrator, while the NT box apparently had all the recent hot-fixes from MS).

Oh, BTW, did I mention that my distribution maintains a central update site? It provides a single security repository (blah, blah, blah...), including reporting. The strength of Linux, however, lies in having multiple avenues of information - the paranoid never trust one source all the time.

+ Configuring Linux security requires an administrator to be an expert in the intricacies of the operating system and how components interact. Misconfigure any part of the operating system and the system could be vulnerable to attack. Windows NT security is easy to set up and administer with tools such as the Security Configuration Editor.

Yadda, yadda, yadda. Funny - last I checked, it was MS that was leaving world-writable files by default on its IIS installations. To properly administer any operating system, you have to know what the operating system is capable of doing and what problems that could cause. UNIX systems have for years had security auditing tools that catch misconfigured software and filesystems, as well as weak passwords.

Myth: Linux can replace Windows on the desktop


Reality: Linux Makes No Sense at the Desktop

Linux as a desktop operating system makes no sense. A user would end up with a system that has fewer applications, is more complex to use and manage, and is less intuitive.

+ Linux does not provide support for the broad range of hardware in use today; Windows NT 4.0 currently supports over 39,000 systems and devices on the Hardware Compatibility List. Linux does not support important ease-of-use technologies such as Plug and Play, USB, and Power Management.

The current Linux releases support most new hardware (excepting cheap WinModems) as-is. It is misleading to state that PnP cards are not supported - the kernel does not yet fully support these devices, but a utility program exists to probe and set PnP card parameters to enable their use, and this functionality is built into the installation process on most distributions. USB has limited support in current kernels (keyboards, mice, and some peripherals). As for Power Management, the older APM features have been supported for some time; the newer style is under development. As MS is largely responsible for pushing (foisting?) these features on PC manufacturers, you would think they would have a head start on them, yet NT still lacks USB support.

Are you considering Linux? Most major manufacturers now sell Linux-certified systems. And chances are your old hardware will run just fine with Linux - how much would an upgrade to NT's minimum requirements cost?

+ The complexity of the Linux operating system and cumbersome nature of the existing GUIs would make retraining end-users a huge undertaking and would add significant cost.

MS either hasn't played with KDE or GNOME, or doesn't want users to know how easy they are to use (by default they feel kind of like Windows, or you can switch to Mac-style, or NeXT, or...). MS does not assume that most users will administer their NT systems - it should not assume users will administer their Linux systems, either. And, in fact, Linux systems need very little configuration out of the box.

+ Linux application support is very limited, meaning that customers end up having to build their own horizontal and vertical applications. A recent report from Forrester Research highlighted the fact that today 93 percent of enterprise ISVs develop applications for Windows NT, while only 13 percent develop for Linux.[3]

And that same survey a year ago would have shown only about 3 percent developing for Linux. A different recent study shows that over 70 percent are evaluating Linux for near-term use. This scares Microsoft - that's why they wrote this piece of FUD. There is probably at least one major application vendor in every field who currently has, or will soon have, an application available for Linux.

Summary

The Linux operating system is not suitable for mainstream usage by business or home users. Today with Windows NT 4.0, customers can be confident in delivering applications that are scalable, secure, and reliable--yet cost effective to deploy and manage. Linux clearly has a long way to go to be competitive with Windows NT 4.0.

Linux is a viable solution for many business and home users, provided the systems (like Windows systems) are bought with the proper configuration and software, or provided the proper expertise is available. It has become the phenomenon of the day largely because of its reliability and utility. Don't let the MicroSofties deceive you.

With the release of the Windows 2000 operating system, Microsoft extends the technical superiority of the platform even further ensuring that customers can deliver the next generation applications to solve their business challenges.

I had the chance to get some hands-on time with Win2K the other week, and it does add needed features. Features like disk quotas (UNIX has had them for years, as has Linux), mount points (UNIX has had them forever), LDAP-distributed user and system databases (Linux has them available already), and Kerberos authentication (UNIX has had add-on Kerberos authentication for years). It has also improved on one other feature - reliability (the lab of 14 computers actually didn't crash during the three days we used them, though the LDAP domain system was misbehaving). On the other hand, it adds some features I didn't need: installation time (over two hours on a P2/233 with 64MB of RAM - the absolute minimum for a server installation) and bloat (MS says 128MB is recommended, and 256MB preferred). My Linux install took about twenty minutes on a much slower system.

© 1999 Microsoft Corporation. All rights reserved.

Copyright (C) 1999 Leslie M. Barstow III. Permission is granted to re-distribute this document as-is or with clearly-denoted additional commentary, provided all Copyright notices remain intact.