Linux is a robot welded together from scrap metal from the nearest junkyard. The control panel has long been forgotten, and to control the robot you must touch bare wires and tug the correct cords by hand.
Linux is a set of principles found on the garbage heap of history.
It is a conglomeration of thousands of console utilities of unknown purpose and origin.
It is a huge kernel, solid and rigid; it is a hulking X server.
Hardware manufacturers have no interest in it, because they want to make money, not dig through this heap of code in order to write drivers for it.
It is crooked-armed programmers, incapable of providing the libraries their own software needs. It is endless slogans and promises.
This is what we ran away from, but came back to.
Who exactly is that novice user, and what does he or she do? A system administrator, a programmer, an artist, an architect, a designer? Perhaps a lawyer, an athlete, a musician, an accountant, a street cleaner, or just a bus driver? In which occupation do I need Linux, exactly? Some Linux users would say “a user who is just getting acquainted with computers”. So if 99% did not switch to Linux, does that really mean they are all still in the process of “getting acquainted with computers”?
Only in a bad dream is a designer or architect forced to use Linux. Tell me, do you live in a good house? Do you watch movies? Can you really say that architects and designers are not professionals? They are exactly the people who built your sweet home and created the scenery for your favorite movies. Would you like to live in a crooked house, or watch a mockery of a movie, only because someone, instead of doing his work, was stuck at the computer hunting for solutions to numerous issues and failures? Professionals need handy tools: be it a programmer who needs thoroughly structured, well-commented code and detailed documentation; a musician who needs good software with rich functionality for sound mixing and processing; or a writer who needs an easy-to-use but powerful word processor.
You will not find that in Linux. Even the widely believed myth that Linux is good for programmers is busted as soon as you face that much-praised “Linux architecture” mentioned further in this FAQ.
There is no software for Linux that could truly satisfy user needs once they exceed something like “surfing the web with Firefox”. And even in that case Linux itself deserves no credit, because it serves merely as a “launcher” for more successful software. Any other (open, if needed) operating system could be used for this instead of Linux.
Thus we end up with some kind of imaginary professional who is trained to solve Linux-specific issues and nothing more. We doubt anyone can make use of such a person, because no one needs a specialist tailored to solving far-fetched problems.
Such nonsense comes from Linux users who spend their time digging around in their computers instead of working with them. Linux does not teach thinking; it teaches solving far-fetched problems that should not exist at all. Why should I struggle to find the Linux flavor I need, or decide which hardware I can accept or refuse based only on whether it works with Linux? Why should I consult documentation for every little action I want to perform in my system? I definitely have more important things to do. I paid for my computer and I just want to start working with it! Is that so bad?
Let’s say I buy a toaster, bring it home, and it does not work. I take it back to the shop, but the salesman starts yelling at me: “You just couldn’t cope with it! There’s a five-volume guide inside, and a forum URL printed on the box!” What do you think my suggestions would be as to where this shop assistant should stick that toaster?
Regarding “users getting stupid”: this is usually caused by Linux itself. In fact, the only skills Linux gives users are those of configuring Linux and keeping Linux itself in good shape.
Linux and Open Source together are something halfway between a communist utopia and a religious sect. And we all know what that really means: smart guys earn money with Linux while the “dirty work” is done by the community and its young adepts, dreaming of hippie philosophy and “freedom”.
95% of WHAT kind of users are you talking about? Those who have never heard of Linux? Those who used it for only half a year? Or maybe those who work on a computer in general? I believe those are 95% of the 0.95% who actually use Linux.
For instance, take a look at Mac OS. People are willing to spend their money to buy that OS and use it legally.
Now take a look at Linux. People don’t want it even for free, despite all the advertising, all the support from big corporations, and the free-of-charge, “on first demand” postal delivery of distros. The “community” tries everything possible to attract new members. There’s only one thing left to try: going into every private apartment to persuade people to use Linux in person. But still, no one needs Linux.
Did they succeed? No. There is only a swarm of egghead freaks with an operating system that sucks because of its lack of usability and its technical imperfection.
We believe the “community” strongly wishes to remain a minority. People of that kind like doing something, like using Linux, just to annoy everyone else. So far there is no point in talking about sane competition between Linux and other OSes. The whole Linux affair is comparable to the foolish enterprise of a farmer trying to grow grain on sand.
Yes, it does… to some extent! In most cases the support isn’t complete. When you buy new hardware, if you use Linux you should be ready for reduced functionality, caused by both the drivers and the OS. Quite often you have to spend several evenings reading forums and manuals, fixing configuration files, and typing console commands. Now imagine an average Joe who has bought a new video card (or anything else), but it doesn’t work because the installed Linux kernel has no support for the device. Do you think our average Joe will actually try to resolve the problem? Doubtful. He will get rid of Linux, regardless of whether support appears in a newer version of the kernel. Average Joe does not care about reconfiguring the system and recompiling the kernel, so he simply won’t do it.
First of all, the growth in the number of Linux-based PCs depends on the growth in the total number of PCs. Thus the overall share of Linux PCs hasn’t changed much over the last 10 years, and as of September 2009 it is still 0.95%. Note that even this percentage is not “pure”: there are people using Linux just for fun, to see what new features a recent distro has. There are people running Linux in VMware or from Live CDs without a “real” installation. Moreover, a large part of the “community” keeps two or more distros installed just to inflate the Internet statistics. Note also that the 0.95% comprises all distros; so far there is no unity even among the communities of Linux users. Can you imagine a Mac user installing two or three versions of Mac OS to use them simultaneously? For a Linux user that is a “normal” situation. Another interesting statistic shows that the number of Linux users is almost equal to the number of mentally ill people.
Mac OS shows a real increase in its market share, while the increase in Linux’s market share still remains “within the margin of error”.
Do not forget embedded solutions and various kinds of servers, where Linux still has a certain popularity and which, apparently, make up a good part of that 0.95%. Thus the expectations of Linux users are far from reality.
Let me explain the “popularity” of Linux. As you probably know, there are youth subcultures: Tolkienists, goths, emos, and others. Members of these subcultures meet, talk to each other, and genuinely believe the whole world should consist of people like them, while the rest of the “stupid cattle” are a species on its way to extinction. In the real world the opposite is true, and the above notion exists only in the minds of a small group of freaks. If Linux runs on less than one percent of all PCs worldwide, what do you think: is Linux popular or not?
Linux users, like any other minority, try to raise a big noise around themselves in order to be heard by the whole world, proclaiming themselves a “real big deal”. Never mind that their numbers stay within 1% of PC users of any kind; the level of noise they make is as if they were the vast majority. Note also their statements, which strike you with unbearable effrontery and frank mendacity.
By the way, a big chunk of the equipment running GNU/Linux gets counted as PCs (personal computers), when in fact it is network equipment, hardware media players, smart NAS boxes, and so on…
What advantage can I get from studying Linux, apart from abstract “self-development”? Since when is memorizing Linux’s command list and the parameters of stupid configuration files considered “self-development”? Is studying an operating system that was outdated before its birth called “self-development”? If I abandon Linux after a year of use, does that mean I failed to manage it? And even if so, must I throw away as junk a year of my life, which I could have spent doing much more useful things?!
I believe that studying even 20% of MS Word’s capabilities would be more useful than studying the whole of Linux.
Yes, there are, but 90% of them are of no use to anybody. Yes, of course, it is possible to surf the Internet and check your e-mail using Linux, but you can do the same with your cellular phone! As soon as you start performing more serious tasks on your computer, doing so under Linux becomes really difficult!
First of all, think about how you are going to benefit from this. There is no sense in wasting your time to please a mere percent of people who are never satisfied. Moreover, if you develop a commercial application, Linux users won’t buy it, because they believe developers should not claim rights to their work (that is why they switched to Linux in the first place), and the only way for Linux developers to earn money is to collect donations. Besides, Linux has WINE, which means that if your application fails to run properly under Linux, the problem is supposedly not on your side but on the WINE developers’ side!
What kind of freedom is it when a user is bound to maintainers, repositories, the Internet, and specific hardware? Is that really freedom? Freedom not to pay? For 90% of what Linux offers, the developers should be paying the users to avoid being beaten by them.
Choice for the sake of choice is insanity. Linux is only a kernel, so “freedom” amounts to choosing between distros, each carrying the same software, differing only in quantity. So where is the choice? Where is the freedom?
The console is not an intuitively comprehensible interface. The deeper user tasks go and the more rarely they repeat, the less convenient the console interface becomes. Those are the facts. The GUIs of most applications are based on similar principles, which lets users get acquainted with them simply by exploring in an ad-hoc way. Here is an example of an «intuitive» command used to unpack an archive in the console:
bzip2 -cd foo.tar.bz2 | tar -xvf -
The problem with most Linux users is that they consider themselves “smarter” than others only because they are able to solve artificially created issues. For a regular user it is more natural to click an “Unpack” button than to memorize a whole pile of commands with their options and parameters. The user won’t become dumber; instead he gains more time for the tasks he actually cares about.
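To be fair, even the pipe in the command above is unnecessary on current systems: modern GNU and BSD tar implementations decompress bzip2 themselves. A sketch, with `foo.tar.bz2` and the `demo` directory invented purely for illustration:

```shell
# Build a small bzip2-compressed tarball so there is something to unpack.
mkdir -p demo
echo "hello" > demo/file.txt
tar -cjf foo.tar.bz2 demo
rm -rf demo

# A single flag (-j) replaces the whole bzip2 pipe on modern tar:
tar -xjf foo.tar.bz2
cat demo/file.txt
```

Which hardly makes the flag soup any more memorable than an “Unpack” button, but at least it fails less often than the two-command pipe.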
Users don’t like reading manuals; that is a known fact. An ideal interface is one that does not require users to refer to manuals at all. Do you read the manual after buying a new television? Imagine a new GNU/TV device operated by twenty assorted controls and switches, where changing the volume requires spending the whole day with a soldering iron. Who do you think should dig through 250 pages of a manual: you, or the device’s engineers and developers? And this is happening now, in the 21st century! Assembling a TV set at home was popular 50 years ago, just as using the console was 20 years ago.
The GUI was invented for our convenience, for ease of visual representation, and for speed of understanding and operation. Today the console is an example of an obsolete user interface, something from the 70s.
A GUI is created by developers and tested. Configuration files are edited by users and not tested. It is only natural for a user to make mistakes while editing text files.
Let’s look at a simple example. Comments in configuration files usually describe the data format that should be used. Suppose I make a mistake and enter 1796 instead of 1976, and the date is used at the end of a lengthy process that cannot complete without it: the result will be wrong. The date depends on the configuration, not on the application, which means it may be valid in one file and invalid in another. In a GUI I can check the validity of the entered data against any rules I like. A GUI contains an additional code layer that checks whether user input is valid and formats it if needed. There is no reason to edit configuration files directly as text instead of using a GUI that helps users avoid entering incorrect or improperly formatted data!
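The kind of check a GUI layer performs can be sketched in a few lines of shell; `app.conf` and its `year` field are made up here purely for illustration:

```shell
# A hypothetical config file containing the typo described above.
cat > app.conf <<'EOF'
# year must be a four-digit year between 1900 and 2100
year=1796
EOF

# The validation a GUI would run the moment the field is filled in:
year=$(sed -n 's/^year=//p' app.conf)
if [ "$year" -ge 1900 ] && [ "$year" -le 2100 ]; then
    echo "year ok: $year"
else
    echo "rejected: $year is out of range"
fi
```

A GUI rejects 1796 before it is ever saved; a plain text file happily stores it until the lengthy process fails at the very end.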
Linux users may say that “any parameters should be checked at application launch”. But such checks can differ a lot! The application itself should make sure it can process the data, not validate it; otherwise the application would require major changes whenever the data changes.
Can Linux users constrain the possible values in their text configuration files? Instead of rejecting invalid values at input time, they propose doing it when the application runs. And what about parameters that are incorrect only in cases that may never occur? Catching an exception once a week? Now that’s the Unix way!
A single mistake made while editing configuration files can kill the whole system. I have run into such situations many times myself. No viruses or other malware needed! To crash an application, or the whole system, all it takes is typing a dot instead of a comma as a decimal separator.
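The decimal-separator failure mode is easy to reproduce. The config value here is invented, but the silent truncation is exactly what C-locale number parsing does:

```shell
# A user typed "1,5" (comma) where the application expects "1.5".
value="1,5"

# awk, like most C-locale parsers, silently stops reading at the comma,
# so "1,5" becomes 1 and the result is 2 instead of 3:
echo "$value" | awk '{ print $1 * 2 }'
```

No crash and no warning: the process simply continues with a wrong number, which a GUI input mask would have rejected on the spot.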
GUI-based configurators offer a higher level of usability and functionality. Editing configuration text files is an attempt to shift the developer’s responsibility onto users.
However, Linux users have recently begun to admit that the idea of storing system configuration in text files has become obsolete.
That is why they adopted the approach that “users don’t need to edit configuration files anymore”, but should only “click buttons with their mice”. Before they understood this, anyone using a GUI for configuration was considered a lamer. Now the “standard” has changed, and “clicking buttons” is no longer so “lamer-ish”. But introducing GUI-based configurators gave birth to new difficulties, because instead of a deep redesign, the traditional UNIX-way approach was chosen, leading to awkward solutions for simple tasks.
As for the Windows registry: it is a single hierarchical database storing system and application configuration data. Users don’t need to be familiar with the registry; only developers do. I don’t understand those Linux users who complain that they don’t understand registry parameters and values. Why poke into something you don’t actually need, and then complain that “we don’t understand anything in there”?
There is no need to list the advantages of the registry. Suffice it to say that all data on servers is stored in databases, not in separate text files. We can only guess why the UNIX world is so “blind” to this.
Garbage appears in the registry only if a developer creates it on purpose. Linux users have two options here: either admit that Linux prevents this by restricting developers’ actions, or agree that Linux has the same problems.
Linux, as you know, is based on UNIX principles. But UNIX itself was the first system infected by a virus. In 1988, the Morris worm became the first computer virus in history to cause serious damage to companies using computers; the damage was estimated at about 100 million dollars! As always, a hole was found in one of UNIX’s services, which resulted in the spread of the infection.
Saying “there are no viruses for Linux” is the same as convincing yourself “I am immortal”. In both cases the operative word is “temporarily”.
At present, writing viruses for Linux is indeed not very popular. I doubt anyone would spend time attacking single computers.
It is also worth noting the initially higher level of computer literacy among Linux users. Linux’s poor access control system, with only two levels of access, may seem to help here. But in Windows, access can be controlled in a much more flexible way. The only difference is that Linux administrators have always strived to train users to work with no rights at all.
There is nothing free in this world. If you think someone will work for you for free, you are wrong. Pushing “free” products is just an artful strategy for earning money from people who want a free lunch. The first fix is always free of charge; getting you hooked is what matters. If with Windows you understand what you pay for, with Linux you don’t: people are fooled by “freedom” propaganda. Simply put, “Linux at no cost” means you will be implicitly used as a beta tester for commercial Linux distros.
Actually, this is no secret, but only a few people understand it, while the rest prefer to believe in tales about “freedom”.
If you still believe that someone will work “just for fun”, show me the place where they build houses and grow wheat for free! I want to see it!
Let’s imagine for a moment that company “XYZ” starts distributing its cars for free. Of course, the company’s products will be high-quality at first, but this idyll will last only until the company captures a considerable share of the market. Where would the company get its money from then? Simple! A planned decline in quality begins once the rivals have been pushed out. This drives users toward paid technical support! After all, if everything works fine, why would anyone call technical support?
We see the opposite situation with companies that sell their products and offer technical support as a supplementary service. Such a company is interested in shipping a debugged product.
With the appearance of Linux users came their desire to stand in opposition to everyone else. Before Linux emerged, everything was “calm and peaceful”. There were Mac users, Amiga users, Windows users, and users of other systems, and the key word here is “users”. As soon as Linux began its “victorious” march across the planet, it turned out that people who didn’t understand or accept the advantages of Linux were labeled second-class. Of course, disagreements like “which is better: Mac or Amiga?” had always existed. But the most important ingredient was missing: ideology. And that ideology arrived along with Linux. Linux lacked the competitive advantages other systems had. What did Linux have? No advantages at all. It was impossible to promote such a system, so it became vitally important to find something that would single it out, something attractive to the masses as a common good. The idea of “openness” and pseudo-freedom took the place of that good.
We see something similar in sects. At the core there is an idea of happiness that is close to absolute, and so on. Incidentally, sects are characterized by the same narrow-mindedness of thinking that sits at the center of the Linux users’ doctrine. Thus Linux became a system with a “double kernel”: on one side the kernel as the central component of the system, on the other the kernel of a sect-like following. When a group of people acquires self-consciousness and a feeling that sets them apart from everyone else, a group name appears; that was the moment the term “Linux user” was born. As soon as the idea had enough fans, a policy of total imposition began. Cunning speculation with definitions, “freedoms”, and values pursued a single goal: to persuade the masses of Linux’s boundless prospects and of the vital necessity of using it in everyday life. All who disagreed were declared “lamers”, “unable to understand and grasp”. Linux users invented the derogatory term “windozers” (Windows users), by which they mean ordinary users (!!!). As a result, Linux users created a divide between those “who agree” and those “who disagree”, and began to expand actively at the expense of those who didn’t want to be called losers for failing to cope with Linux. That was the starting point of the wide imposition of Linux on the masses.
Yes, we’ve heard plenty of stories about the righteousness of Linux users, who always use the pedestrian crossing and cross only on a green light. Am I right that you always pay for your music and movies too? Is it really good to be honest about one thing and so dishonest about another? Say you run a company and someone told you that “you are safer and can save money with Linux”. You were deceived and misled. You should gain a detailed understanding of what the “free software philosophy” really is before rushing to the conclusion that free Linux will be cheaper.
You should also avoid confusing “theft” with “illegal use of somebody’s intellectual property”. This confusion is common among both Linux users and copyright owners.
For ordinary users: in Windows you can use free software too, and its quality on that platform is much higher than on Linux. Who’s stopping you?
Writing a monolithic kernel in 1991 was “a giant step back into the 1970s”. The author of these words is Andrew S. Tanenbaum, a professor of computer science at the Vrije Universiteit in Amsterdam, the Netherlands. Linus Torvalds used Tanenbaum’s book (in a rather clumsy manner) when writing Linux. When Tanenbaum looked closely at the results of Torvalds’ work, he said: “I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design.” And indeed, the design of Linux incorporated many principles of Unix, an obsolete system. Unfortunately, no one tried to look at Linux without prejudice about its “modern design”. No initial planning, a wrong line of development, no expertise in OS design: all this resulted in Linux being a buggy, low-quality, slow system. Later, people began to see the hopelessness of Linux; for instance, the European Union gave Tanenbaum a grant of €2.5 million to aid microkernel development.
The microkernel concept itself is not new. Microkernel-based systems were on the market as early as the beginning of the 80s. Unfortunately, they never gained significant public attention, because it was Torvalds of whom the mass media created the image of a romantic, idealistic student.
Microkernel architecture is a sign of delicate, slim design. Pure microkernel designs are rarely used in real life. Such a kernel is only 100-200 KiB in size, and it is very stable, modular, and secure. For example, adding or removing a module in the Linux kernel is quite complicated, while doing the same with a microkernel is a routine task. That is why the microkernel design means stability and flexibility: we don’t have to keep unneeded modules in memory. Any error in a Linux driver can lead to a complete system crash, or allow unauthorized attackers to obtain full system access and destroy your data. This is not possible with a microkernel architecture, because a driver never gets unrestricted privileges and is immediately unloaded in case of a fault. No vulnerabilities were revealed in Tanenbaum’s MINIX microkernel for over 10 years.
Maintaining a monolithic kernel is extremely complicated. I think that a country-wide switch to Linux would be a serious loss for all those organizations in terms of support and further development. That is why all that open-source shouting about “easy updating” is a kind of sabotage.
And what about finding a bug in a monolithic kernel? Here is an interesting quotation on the subject. Andrew Morton, a leading Linux kernel developer and maintainer, complains about the development of the -mm kernel branch: “It took me over two solid days to get this lot compiling and booting on a few boxes. This required around 90 fixup patches and patch droppings. There are several bugs in here which I know of and presumably many more which I don’t know of. I have to say that this just isn’t working anymore”. The latest 2.6.23-rc6 kernel patch is almost 30 MiB in size, which is equivalent to about 30 thousand pages of source code (assuming a thousand characters per page).
This could be a very long discussion. For example, many people still believe that the lack of drivers is a corporate conspiracy against Linux. The answer is simpler: corporations are not interested in spending many thousands of dollars on writing drivers for Linux, given the complexity of the process. Even if a driver is written, it requires a significant amount of ongoing support and maintenance, because Linux does not have a stable ABI (Application Binary Interface). This means the driver has to be rebuilt with every new release of the Linux kernel.
It also needs to be said that not only the kernel, but also the graphics and audio subsystems in Linux are designed and implemented in a terrible way.
I am not suggesting that everybody switch to MINIX, because it is designed mainly for specific (academic) purposes: studying the theory of system design and the architecture of drivers and interfaces. A pure microkernel-based system is not really needed at home, but it is still a giant leap forward.
Windows, meanwhile, uses a hybrid kernel. Its difference from a microkernel is that it loads layers, not modules.
I want to see what an operating system looks like from the inside, and Linux gives me a good opportunity to do so.
You are sadly mistaken if you think Linux will let you “sit down and examine” an operating system. Look: the source of kernel 2.6.33 alone is more than 351 MB(!), which is millions of lines of source code! And the kernel grows every day! Are you sure you will just “sit down” and easily sort out SEVERAL MILLION lines of source code? I repeat: this is only the kernel! Beyond it there are plenty of other things you will “want” to get acquainted with. Earlier in this FAQ I quoted a developer who wrote with horror that even he, a man who devoted long years to programming for Linux, had difficulty understanding how it all works. Do you think it will be easier for you?
If you are really interested in what an operating system is and how it works, turn to other projects. There are dozens of compact operating systems (from tens of kilobytes to several megabytes) where everything is genuinely clear. There you can easily (given the necessary knowledge, of course) “sit and examine” what works and how. Everything is simple and understandable! They are compact and easy to grasp, and they will always be glad to see you, whatever your level of knowledge. Can’t program? You are welcome anyway: draw, translate documentation, help develop the site, and so on. The main thing is the invaluable experience and expertise you will gain working on such projects. Or try spending a couple of years writing an addition for Linux, only to see it rejected from the official source tree. Want to kill time? Linux gives you that opportunity!
I am really fed up with all those stories about people happily running Linux on an old Pentium III 500 MHz with 256 MB of RAM. Am I supposed to believe that such a computer satisfies all your needs? By the way, Windows XP runs quite well on that configuration. Or is “doing your work” just a kind of perversion whose only goal is boasting? As in: “They are all idiots who bought a PC or a Mac and run modern software on modern hardware, but look at me, I’m so cool because I installed Linux!”
Besides, we haven’t even talked about irons, juicers, and cameras yet… There’s a lot to do! In fact, there is nothing outstanding about such porting. The fact that you can eat soup, turn a screw, or pull out a nail with a fork doesn’t mean you should do those things with a fork! Specialized solutions will be far more effective. Unfortunately, many people let this imaginary universality of Linux push them around, and in a number of cases they choose to economize and forgo specialized solutions. The outcome is that the resulting products lack the precision and quality they could have had with a different system. There are dozens of systems for embedded solutions, and if you look beyond Linux, the solution can be much simpler and of higher quality than you expected!
Linux is popular on simple servers because the less functionality a system possesses, the higher its stability. A server system and a desktop one differ vastly. On servers we can sacrifice functionality and usability for the sake of stability. In fact, Linux on servers is just a launcher for PHP, SQL, and Apache software, and that’s all!
But even there, according to the latest IDC reports, Linux has already begun to forfeit the game!
“Linux should not go into schools, because most of its applications look as if schoolboys created them.” Sometimes it seems that Linux developers are simply humiliating their users, because this huge number of mistakes and faults cannot be reasonably explained any other way. To demonstrate this, I prepared a series of distro reviews. I can say with certainty that the number of defects has not decreased since 2002: Linux still fails, hangs, and crashes during trivial operations. Even Debian, a distro known for its robustness, turned out to be buggy and unstable.
Regarding specialists: do you know that nowadays not only programmers but also psychologists and physiologists participate actively in the software development process (Internet Explorer with its ugly, clunky interface evidently excepted)? All of this is taken into account when calculating TCO (total cost of ownership). Adoption is easy only on paper; in real life you need to hire specialists and spend a lot of time teaching users to complete the required tasks efficiently. That is why the specialists mentioned above are involved in product design and implementation: to minimize adoption and other costs. Naturally, a product with a well-designed interface and balanced functionality is easier to get familiar with.
All these specialists need to work and solve problems together as a well-coordinated team. Linux has neither such specialists nor such coordination. As a result, low-quality software is what we get.
Working as root is not the general practice in Linux. Firstly, nothing stops you from working as a regular user rather than an administrator in Windows. Secondly, on a home computer one person plays two roles at once: user and administrator. What’s more, such a user never invents complicated passwords, and all those root accounts lose their meaning when the password is “12345”. You can’t fight this; you will achieve nothing but the user’s irritation. UAC makes obtaining the necessary rights in Windows less irritating.
I have seen that opinion on many Linux forums. Statements like that only demonstrate the poor computer knowledge of Linux users.
Unix-like file systems don’t have such term as “fragmentation”. But they do have non-contiguous files. So they don’t have fragmentation, but discontinuity instead. System cannot guess which data and in which order will be written to drive, so the writing is not linear. That means one file may not be stored as a whole segment of blocks and sectors, but thrown into various parts of disk.
As it runs, the system creates a huge amount of fragmented data, which can dramatically decrease performance under frequent disk operations.
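The discontinuity described above can be sketched with a toy next-free-block allocator (a deliberate simplification I am adding for illustration; real file systems use far smarter allocation policies with extents and preallocation): when two files grow at the same time, each file's blocks end up interleaved rather than contiguous.

```python
# Toy next-free-block allocator: one block is appended to a file
# per write, always at the next free position on the "disk".
def allocate(writes):
    """writes: sequence of file names, one block appended per entry."""
    disk = []    # disk[i] = name of the file owning block i
    layout = {}  # file name -> list of block numbers it occupies
    for name in writes:
        layout.setdefault(name, []).append(len(disk))
        disk.append(name)
    return layout

# Two files grown alternately end up with non-adjacent blocks:
layout = allocate(["a", "b", "a", "b", "a"])
print(layout)  # {'a': [0, 2, 4], 'b': [1, 3]}
```

Neither file occupies a contiguous run of blocks, so reading either one back requires the disk head to skip over the other file's blocks.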
The problem with Linux is that its users always have to justify the lack of some capability in their system.
If someone releases a Linux defragmenter several years from now, Linux developers will have to admit they blatantly lied to their users. In the meantime, such claims make many teens on the net believe that Linux doesn't need such tools.
But defragmenters are only part of it. Linux doesn't even have data recovery tools. So what will they say: "we don't need recovery tools"? Pretty much the same logic applies.
It all gets worse considering the lack of one STANDARD file system. Put simply, where Windows needs one program, Linux needs ten programs for ten popular file systems. And one will be written in Qt, another in GTK, or something else. The result is horrendous. Curiously enough, NTFS has existed for over 15 years (since 1993), and there is still no apparent reason to dump it.
The whole host of Linux distros exists only to create an illusion of abundance. It makes it possible to claim that the cause of any issue with Linux lies in an incorrectly chosen distro. Regular users see the possibility of choosing among plenty of distros; professionals see the same kernel with a similar assortment of software. The differences between distros are mainly cosmetic. But let's see what is really behind all those distros.
Every piece of software has its own developer(s). Moreover, each distro has its own maintainer who repackages applications according to the distro's specifications. This amounts to fooling around: in Linux, one person writes code and others repackage it, doing the same work simultaneously. What a useless waste of time! If the distro version changes, everything must be repackaged. If a new version of an application comes out, the old one has to be backported. Linux developers' efforts are wasted on all those distros, ports, and forks, which means that most open source developers are doing utterly pointless work.
From the user’s point of view, this means depending not only on the developers, but also on dozens of other people who adapt the software for use in a particular distro.
Many Linux users mistakenly believe, and are even proud, that the first 3D effects appeared in XGL (now Compiz). If pressed, they will even cite the Vista release date and claim they had the same thing a year earlier.
A casual user will most likely believe this nonsense, but it is obvious manipulation of the facts. The first 3D effects were already present in Longhorn build 3718 (later Vista). XGL was officially released only in 2006. Do you see the difference? Or do you think it's unfair to compare a beta version of Aero with a release of Compiz? If so, take a look at the state of that Compiz release: it was version 0.2, and calling it a "ready-to-use product" is silly!
Despite the high visual attractiveness of Compiz effects, they are useless: in practical terms, the single Aero Shake effect surpasses all of them.
A virtual desktop can never replace a second monitor. To draw an analogy: instead of a second toilet in the house, Linux offers to install another bowl next to the first. "What? There are two of them, so a large family won't have any problems."
A repository in Linux is a forced measure. Even if all modern distros ship the same kernel version, software is not guaranteed to work across them. A program has to be tuned to a specific distro, and if there is no repository, the probability that a user will download a non-working version is high. Coders fix their software for a specific distro and put it in the repository… The great and powerful repository is not a feature but a kludge: nobody guarantees a working OS without it.
I’m trying to separate the idea of Linux from open source, but this is not always appropriate. Linux is, in most cases, the embodiment of open source ideology, and it is precisely Linux that most clearly exposes the systemic problems of open source in general.
The problem is that the current leaders have turned open source, generally a good idea, into an ideology. Moreover, they have created a real "religion"! This religion has clouded people's heads and made them raise the flag of war. Yet if you look closely, it turns out that the screaming about war and the death of Windows comes from those who contribute nothing, only consume!
Even a good idea like free software can be taken to the point of insanity. In recent years Richard Stallman, who is considered the founding father of the free software movement, has proved by his own example what level idiocy can reach if you try hard enough. The declared freedom turns out to be real slavery.
The deep and lingering crisis engulfing leading Linux vendors such as Novell and Mandriva has demonstrated the extremely low commercial efficiency of Linux-based solutions. Red Hat CEO Jim Whitehurst, for example, says he does not understand how one can do business with desktop Linux solutions: "First of all, I don’t know how to make money on it," Whitehurst said. "Very few people are running a desktop that’s mission-critical," so they do not want to pay the company for a desktop OS.
Canonical is also unprofitable and lives at the expense of Shuttleworth’s other projects and the money from Thawte, but it will be kept afloat as long as possible, because its public failure (the actual failure has already occurred) would be a hard blow even for militant open source extremists.
The main purpose of open software is not sales but the destruction of competitors' sales. Open software as a market phenomenon is a competitive tool that makes it possible to legally keep dumping on a competitor's territory. In an effort to undermine another company's business, large corporations invest big money in developing competing products distributed free of charge. As a result, the other developer's business crumbles because he can no longer sell his own product. Once the niche is occupied, commercialization of the free product begins, invisibly to end users: for example, the company starts selling paid add-ons, related solutions, or technical support. There are many options. The problem is that it is almost impossible to win such a niche back.
Dumping is the sale of goods on foreign and domestic markets at artificially lowered prices, below average retail prices and sometimes below cost (production plus distribution costs). Dumping is done to penetrate a market, conquer it, and push competitors out. It is practiced by states and companies that expect future compensation for today's losses, once dumping has won them a strong position in the market.
In real life, every mistake in a software product is extremely dangerous; a single vulnerability can lead to extremely serious consequences. A business could collapse overnight, and given the trend of global computerization, a failure can even lead to death. Would you like it if life-support equipment failed as the result of a hacker attack? That is still unlikely, but thirty years ago could you have imagined that a virus would destroy your photo album or your dissertation?
For commercial applications and systems, multi-level testing is an integral part of pre-release preparation, and it significantly reduces the likelihood of errors surfacing in a stable version of the product. Testing costs money. Open source developers, because of their ideology, cannot ensure it fully, so objectively the number of vulnerabilities in open software products is significantly higher. I do not think it will warm your soul that a fix for the discovered vulnerability was released a mere 48 hours after the data on your hard drive was destroyed.
Linux security is worse than you can imagine. Using the example of Firefox, which is 20 times more popular than Linux, I have already shown how vulnerable open products are. Now imagine a Linux distribution assembled mostly from such open products, and what comes out of it: 20 holes in the browser, 3 in the email client, 4 in the player, 17 in the Linux kernel, 5 in... The system becomes a real sieve, with a whole variety of ways to take it down. What's more, the architecture of the Linux kernel does not allow a number of attacks to be avoided, so sites dedicated to network security regularly run news like "a vulnerability has been found in the e1000 driver for Linux that allows crashing the kernel by sending a specially crafted Ethernet frame". Not bad, eh? In a system with a modern design, a faulty driver cannot bring down the entire OS. By modern design I mean everything that has managed to avoid the Unix-like stigma. Incidentally, many kernel developers are well aware of this defect of the Linux architecture, but rectifying the situation is simply impossible. Here is what Andrew Morton, a leading developer of the Linux kernel, writes about it: "Linux developers add new kernel bugs faster than they can be fixed. As a result, the kernel becomes less safe and stable."
Sure, humans are prone to mistakes, but a software product with almost no vulnerabilities at all would be optimal. And this is achieved not by reading source code (as Linux users assume), but by the active participation of specialists such as system architects, security experts, programmers, software test engineers, and designers in the development process.
For business, the priority is not the purchase price but the so-called total cost of ownership (TCO), which determines what different solutions will actually cost you. Do not think that because Linux is free, its implementation and operation will also cost you nothing. Practice shows that the total cost of ownership of a Linux-based solution can be much higher than that of a Microsoft or even an Apple one. So, when choosing Linux, do not forget that the free distributions are the ones aimed at luring users into testing raw products with a view to the subsequent sale of paid solutions, and that the license cost of commercial distributions is often higher than the Microsoft equivalent. To explain why, consider land covered with farms. Each small farmer buys fertilizer, equipment, and machinery, organizes storage and transportation, establishes contacts with product salesmen, and sells the product, all while providing for himself. A single large farmer's costs at each stage will be considerably lower: he can, for instance, buy fertilizer at wholesale prices, so his products will be much cheaper. The large farmer here is Microsoft.
The key distinction of Microsoft is coordination from the top, not the bottom. The consequence is that there is no need to resort to the unfair practice of testing raw products on users. The first step is the design phase, which involves a number of specialists; then the usefulness and feasibility of the solution are evaluated, and only after careful analysis does development begin. With Linux, the situation looks like this: "I have an idea; I want to try to implement it." Can you imagine a car factory where part of the staff suddenly starts producing non-standard parts that require reworking at the final assembly stage?
A successful business needs to use solutions that are already tested and polished. The spectrum of such solutions for Windows is much broader and more diverse.
Service personnel with knowledge of *nix systems will cost much more, because those systems are less popular and harder to master. And do not forget that Linux distributors spend a lot of strength fighting each other!
If, when choosing Linux, you are guided by the desire to save on buying, say, MS Office, believing that under Linux you will be able to use OpenOffice, think again: why do you need Linux for that? There is far more free software for Windows than for Linux!
Microsoft products have a very long life cycle and support period. Many Linux distributions are replaced by newer versions within six months, while the life period of some versions of Windows is about ten years. The analogy here is shoes: one pair faithfully serves you for a single season, while another cannot be worn out for many years.
There are many reasons why the use of Linux is not justified in real life.
It is really difficult to understand somebody else's logic, and finding a well-hidden mistake is even harder. Open source code guarantees nothing. Tell me, when did you last read the source code of OpenOffice looking for malicious code in it? Have you ever read the source before installing? I don't think so...
Any vulnerability in a software product is not much different from a deliberately planted backdoor. As I mentioned earlier, many vulnerabilities are found in open source products (sometimes even more than in closed-source ones!). Can you guarantee that those vulnerabilities were not created on purpose? And how many of them remain undiscovered?
To begin with, it is Linux users themselves who suffer from the blatant disregard of standards. It is Linux users who constantly demand that Microsoft play by rules that they flout themselves. Don't you see that the game is deliberately dishonest? I won't even count how many times the kernel ABI and Qt (see Qt3 versus Qt4) have been broken. Suffice it to recall the buzz around the ODF format. Under pressure from the EU, Microsoft included ODF support in Office, but it turned out that Microsoft Office and OpenOffice documents were once again not fully compatible. After the debriefing, the reason turned out to be banal: the recognized standard was ODF 1.1, which Microsoft followed, but the community had simply forgotten to include a specification for formulas in ODF! Naturally, things not described in the specification were implemented by Microsoft at its sole discretion. How else should they have acted? Why didn't the community bother to check its own specification before demanding its adoption? Why was the specification incomplete? Another consequence of the "freedom" to write, or not write, whatever you please.
Here is another example that shows how badly Microsoft supposedly wrongs Linux users. Microsoft refused to certify the Evolution mail client for work with Exchange Server because some of its settings depend on the window manager configuration. And once again there was a rampage about the evil empire hindering the development of Linux. But wait: you were merely asked to standardize the application! That's all! Why, when Linux users are required to comply with basic rules, do they start moaning about the violation of their freedoms?
The multiboot specification can be added to the same pile. Why is this specification respected in one Linux distribution but not in others? How can you demand compliance with standards from Microsoft if you ruin the Windows boot loader time after time, and it is Windows that ends up having to comply?
Windows source code is not closed; it is just not open to everyone.
GNU fanatics always demand source code. Why? Because they themselves cannot create anything valuable or worthy of attention. They want to get everything ready-made from others, so they can just add "Created by megahacker Vasya Pupkin" and distribute it, sometimes for money.
Humans share 97% of their genes with apes, so what? In theory, any system that meets the POSIX standard (Portable Operating System Interface) can be called Unix-like. Windows NT4 had POSIX compatibility out of the box; is Windows NT therefore Unix-like? Mac OS X is based on its own original kernel and has well-developed subsystems of its own, its own file system, and so on. So treating Mac OS X as just another Unix-like system is not right.
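To make the point concrete, here is a small sketch (my own illustration, not from the original argument) of how thin the POSIX layer is: the calls below are defined by the POSIX standard itself, not by any particular kernel, so a script using them behaves the same on Linux, the BSDs, or Mac OS X, regardless of what sits underneath.

```python
import os

# These calls are specified by POSIX, not by any particular kernel:
pid = os.getpid()  # POSIX getpid(): ID of the current process
cwd = os.getcwd()  # POSIX getcwd(): absolute path of the working directory

print("POSIX platform:", os.name == "posix")
print("pid is positive:", pid > 0)
print("cwd is absolute:", os.path.isabs(cwd))
```

Conforming to this interface says nothing about the kernel's lineage, which is exactly why "meets POSIX" and "is Unix" are different claims.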
I have always been tormented by the question: why does everyone show off those Unix-like systems? Is Unix an ideal OS? The apex of OS design? An unattainable ideal that must be matched? If it's that cool, why are 92% of computers running Windows, 6% Mac OS X, and 2% other systems? Where is this great Unix?
Yes, I have heard that you need only wait for the "new" version of the distribution, and there will be no problems. In fact, Linux has a number of complex, deep-rooted problems and shortcomings that will never allow it to become a major product. Waiting and believing that the developers will fix things is very naive. It took me eight years to become convinced of the complete futility of this expectation. How you manage your time is up to you; personally, I have given up on Linux.