- What is autopackage?
- Is autopackage meant to replace RPM?
- Why a new format? Why do the RPMs I find on the net today only work on one distro?
- How does it work?
- Does it do automatic dependency resolution like apt and emerge?
- Help! I've got a problem!
- How do I install autopackages offline?
- Why do packages install to /usr by default?
- What command line switches do .package files support?
- How do I uninstall packages, list packages, show what files a package installed ... ?
- How do I uninstall or upgrade autopackage itself?
- What happens if I install an autopackage without first removing the pre-existing package?
- Is there an auto-update facility for autopackages?
- I installed a package and it doesn't show up in the graphical manager program. What gives?
- What's wrong with centralized repositories, apt style?
- What's wrong with NeXT/MacOSX style appfolders?
- Why not statically link everything?
- If I install an autopackage, RPM won't know about it!
- It can't possibly integrate as well as a package made for my distribution!
- This type of easy installer never manages dependencies! Right?
- If a package needs a library that isn't autopackaged and is missing, it'll break!
- What's a desktop Linux platform? Why do we need one?
- What about security?
- You mean you're not going to do package signing?
- Won't the lack of signing let malware and spyware into Linux, like in Windows?
- So how can we fight spyware if not at the package management level?
- Can I use autopackage for my new distro?
- Does it support commercial software?
- How does the multiple front ends system work?
- Can autopackages be translated?
- What widget toolkit does the default graphical front end use?
- Is there a KDE/Qt frontend?
- Can I autopackage KDE/Qt apps?
- Is autopackage cross-platform?
- This approach feels way too complicated. Is there a simpler alternative?
General
# What is autopackage?

For users: it makes software installation on Linux easier. If a project provides an autopackage, you know it can work on your distribution, that it will integrate nicely with your desktop, and that it will be up to date, because it's provided by the software developers themselves. You don't have to choose which distro you run based on how many packages are available.

For developers: it's software that lets you create binary packages for Linux that will install on any distribution, can automatically resolve dependencies, and can be installed using multiple front ends, for instance from the command line or from a graphical interface. It lets you get your software to your users more quickly, easily and reliably, and it immediately increases your user base by allowing people with no native package to run your software within seconds.

# Is autopackage meant to replace RPM?

No. RPM is good at managing the core software of a distro: it's fast, well understood and supports features like prepatching of sources. What RPM is not good at is non-core packages, i.e. programs available from the net, from commercial vendors, from magazine coverdisks and so on. This is the area autopackage tackles. Although in theory it would be possible to build a distro around autopackage, in practice such a solution would be very suboptimal, because autopackage sacrifices speed for flexibility and distro neutrality. For instance, it can take several seconds to verify the presence of all required dependencies, something RPM can do far more quickly.

# Why a new format? Why do the RPMs I find on the net today only work on one distro?

There are a number of reasons, some obvious, some not so obvious. Let's take them one at a time:

- Dependency metadata: RPMs can have several types of dependencies, the most common being file deps and package deps. With a file dep, the package depends on some other package providing a given file. Depending on /bin/bash for a shell script is easy, as that file has the same name and location on all systems. Other dependencies are not so simple: there may be no file that reliably expresses the dependency, or the file could be in multiple locations, so package dependencies are sometimes preferred. Unfortunately, there is no standard for naming packages: distros give them different names and split them into different-sized pieces. Because of that, dependency information often has to be expressed in a distro-dependent way.
- RPM features: because RPM is, at the end of the day, a tool to help distro makers, they sometimes add new macros and features to it and then use them in their spec files. People want proper integration, of course, so they use Mandrake-specific macros or the like, and then that RPM won't work properly on other distros.
- Binary portability: This one affects all binary packaging systems. A more detailed explanation of the problems faced can be found in Chapter 7 of the developer guide.
- Bad interactions with source code: because current versions of RPM check only a database rather than the system itself, RPMs can be hard to install on distros like Gentoo even when they use only file deps. Similar problems arise if you compile things from source.
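The dependency-metadata problem above is visible in ordinary RPM spec files. A minimal fragment (package names are illustrative):

```
# File dependency: portable, because /bin/bash has the same
# path on every distribution.
Requires: /bin/bash

# Package dependency: distro-specific, because the same library
# is packaged as "gtk2" on some distros and "libgtk2.0-0" on others.
Requires: gtk2 >= 2.4.0
```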
Using autopackage
# Help! I've got a problem!

We have a dedicated tech support page; try looking there if this FAQ does not answer your question.

# How do I install autopackages offline?

It may be that your computer cannot access the internet for some reason, but you can still download packages some other way and move them across. To do that, you'll need to put at least two other files in the same directory as the .package you wish to install (the first time you do this). That way, autopackage won't try to download its runtime code; instead, it'll use the copies you already obtained. The two files you need are:

# Why do packages install to /usr by default?

We used to use /usr/local, but too many distros do not set up all the paths for it correctly, so various things broke badly. In particular, desktop integration such as menu items and file associations tended not to work. Installing to /usr fixed these issues and gives the user a much more reliable experience. If you dislike it, you can alter the default install prefix in the /etc/autopackage/config file, or use the --prefix switch.
Alternatively, report bugs against your distribution to get /usr/local (or /opt, or wherever) supported as a first-class citizen; once it is, we will whitelist the distribution upstream so autopackages automatically install to wherever your distribution prefers. However, we won't do this unless installing to the chosen prefix is as functional as installing to /usr.
# What command line switches do .package files support?
- -x: extracts the contents into a directory named after the package. Use this when you don't want to install the software but just want the contents.
- -p/--prefix: selects where packages install to.
- -t: forces the use of the text-mode front end (TTYFE). If you have a graphical environment running but would prefer a purely command-line install, use this switch.
- -d: puts the package into debug mode. The metadata and payload are extracted to a temporary working directory, and then, instead of the install scripts running, you are dropped into a shell where you can investigate or change the control scripts and payload. Type "exit" to quit the shell and clean up.
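Put together, typical invocations look like this (foo-1.0.package is a hypothetical package name):

```
./foo-1.0.package                      # normal install, front end chosen automatically
./foo-1.0.package -t                   # force the text-mode front end
./foo-1.0.package --prefix=$HOME/apps  # install under your home directory
./foo-1.0.package -x                   # just extract the contents
```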
# How do I uninstall packages, list packages, show what files a package installed ... ?

Use the package command, which is similar in style to rpm or dpkg. It lets you query the autopackage database along similar lines to how you would normally do so. It doesn't currently support "who owns this file" queries, because such features are better gained through native package manager integration, which has been on the horizon for a long time. But it supports the basics.
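For example (gaim is just an illustrative package name, and sub-commands other than remove may differ between autopackage versions):

```
package remove gaim      # uninstall the gaim autopackage
package list             # list installed autopackages
```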
Why is it called "package"? Because originally it was intended to be a simple abstraction over the system's native package manager, so you could install packages and so on without having to know which distribution or native package manager was in use. That feature never really got implemented. It would be nice to add, though, if you are looking for work.
There is also a graphical way to uninstall things: choose "Manage 3rd party software" from the System Tools menu.
# How do I uninstall or upgrade autopackage itself?
Use this command: package remove autopackage-gtk autopackage-qt autopackage. That will remove the autopackage runtime, but it will leave all existing autopackages and the database alone, so you can re-install autopackage later and carry on just as before. This is also how you upgrade: uninstall the version you currently have, then install any package and the latest runtime will be downloaded automatically.
# What happens if I install an autopackage without first removing the pre-existing package?
If you install a package, say Gaim, using autopackage and you already have it installed from a native package or from source, then any conflicting files will be backed up into the autopackage database and then replaced. When you uninstall the autopackage, the backed-up files are restored. This means installing the package upgrades you to the new version, but uninstalling it won't make the software disappear. Autopackage 1.2 can automatically remove conflicting native packages, fixing this rather unintuitive behaviour.
# Is there an auto-update facility for autopackages?

Nope, not yet. The autopackage code itself will always download the latest version on first run, though.

# I installed a package and it doesn't show up in the graphical manager program. What gives?

By default, only programs that install menu entries or file associations (.desktop files) show up in the graphical manager apps, so the view isn't cluttered with libraries and command-line programs. Newer versions fix this by letting you reveal all installed packages using a drop-down menu.

Why bother?
# What's wrong with centralized repositories, apt style?

The system of attempting to package everything the user of a distro might ever want is not scalable. By "not scalable" we mean the way packages are created and stored in a central location, usually by people other than those who wrote the software in the first place. There are several problems with this approach:

- Centralisation introduces lag between upstream releases and users actually being able to install them, sometimes measured in months or years.
- Packaging done separately from development tends to introduce obscure bugs, caused by packagers not always fully understanding what it is they're packaging. It makes no more sense than having UI design or artwork done by the distribution.
- Distro developers end up duplicating effort on a massive scale. 20 distros == the same software packaged 20 times == 20 times the chance a user will receive a buggy package. Broken packages are not rare: see Wine, which has a huge number of incorrect packages in circulation, or Mono, which suffers from undesired distro packaging.
- apt et al require extremely well-controlled repositories; otherwise they can get confused and ask users to provide solutions manually, which requires an understanding of the technology that we can't expect users to have.
- It is very hard to avoid the "shopping mall" type of user interface, at which point choice becomes unmanageably large: see Synaptic for a pathological example. Better UIs are possible, but people fundamentally don't expect a big central list of programs when they are used to a search-based interface like Google. Imagine if the same UI were used for locating websites!
- Pushes the "appliance" line of thinking, where a distro is not a platform on which third parties can build with a strong commitment to stability but merely an appliance: a collection of bits that happen to work together today but may not tomorrow: you can use what's on the CDs but extend or modify it and you void the warranty. Appliance distros have their place: live demo CDs, router distros, maybe even server distros, but not desktops. To compete with Windows for mindshare and acceptance we must be a platform.
# What's wrong with NeXT/MacOSX style appfolders?

- Dependencies. MacOS X largely avoids the dependency problem by having a large platform that isn't configurable: you can't opt out of installing parts of the base OS like you can with Linux, so Mac developers can say they need OS X 10.2 and get a large chunk of functionality. See the discussion of the Linux desktop platform below. Some packages still have dependencies, so several techniques are used:
- MacOS provides a simple installer technology. It's been improved in the "Tiger" release.
- Framework linking allows you to have a framework contained within an appfolder. If this is a newer version than the one on the system, other apps will automatically use this.
- The core frameworks support weak linkage, so programs can easily fall back when run on earlier versions of the OS. We have an equivalent to this on Linux now through relaytool.
- Design of desktop environments on Linux. The freedesktop.org standards are oriented around the concept of "drop a file in $XYZ_DIR, regenerate a cache". This doesn't work well when applications can be installed anywhere. To make things like file associations and URL handlers work on the Mac, the Finder registers appfolders with Launch Services as it finds them: on startup it scans the Applications directory and links any appfolders into the file type/component database, and it does the same as the user navigates to them in the Finder. In other words, appfolders aren't truly location-neutral: to integrate properly they need to live in the Applications folder. To correctly integrate appfolders with KDE/GNOME, you would have to rewrite those desktops to ignore the freedesktop.org specs (or hack around them). We can't do that: autopackage is designed to work on Linux systems deployed today. You would also have to remove the applications menu.
- There are at least 3 major CPU architectures in use on Linux today: x86, amd64 and PowerPC[64]. The NeXT approach to this was "fat binaries", which essentially means giving everybody binaries for every architecture. But this can massively increase the size of the package: whilst disk space may be cheap, bandwidth is still not cheap for many people.
- Appfolders don't take into account the possibility of optionally downloading/installing translations. For a popular program like the GIMP, the translations alone can easily reach over 12mb. You don't really want to download large quantities of translations for languages you don't speak.
- Apps written purely for deployment using appfolders often end up making bad assumptions, like being able to write to their directory.
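The freedesktop.org "drop a file, regenerate a cache" pattern mentioned above looks roughly like this; the entry is hypothetical, the paths are the conventional system-wide ones:

```
# /usr/share/applications/foo.desktop
[Desktop Entry]
Type=Application
Name=Foo
Exec=foo %U
MimeType=text/x-foo;

# Then the cache is regenerated with:
#   update-desktop-database /usr/share/applications
```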
# Why not statically link everything?

- Disk space may be cheap, but memory pressure is still a significant bottleneck on most desktop systems. Static linking prevents page-level sharing, which is easily the biggest win in terms of memory usage. Without extensive use of dynamic linking, desktop Linux would slow to a crawl.
- Large numbers of internet users (about 50% seems to be the best estimate) are still on dialup connections. For these people the difference between a 12mb download and a 3mb download can be the difference between trying an app and not trying it, or worse, between downloading a security update and not.
- Security updates are less effective because each app has a private copy of the same code.
I bet autopackage sucks because ...
# If I install an autopackage, RPM won't know about it!

This is usually not a problem for applications. While it would be nice to register with the RPM database, there is no requirement to do so, and it is common for Linux users to use software not registered in this way (e.g. source code installs, or commercial Linux products). For libraries, RPM will report dependency failures for libraries that are actually on the system, installed via autopackage. If this happens you are no worse off than you would otherwise have been; you'll just have to find an RPM for that library.

Currently RPM and other package managers such as dpkg or portage will silently overwrite files on your system if they're in the way. That's a data-loss bug, but it is fundamental to the design mentality of these systems, which assume you will never want to install something that isn't provided by the distribution. In future it's possible that autopackage will register with the package manager databases to prevent this, or alternatively a distribution could provide support for additional prefixes (e.g. via unionfs), one of which could be used by autopackage for its own files.

It's not entirely clear that RPM registration would even be useful. An RPM which depends on "widget >= 2.3" may not actually depend on upstream Widget at all, but rather on the distribution's patched or re-configured version of Widget. Having autopackage register an installed copy of upstream Widget would be wrong in that case, and this is a fundamental problem with today's package managers that is hard to correct.

# It can't possibly integrate as well as a package made for my distribution!

Desktop integration works well: autopackage understands various desktops and desktop versions for menus, file associations and icon themes. This is useful while most deployed desktops don't [correctly] support the standards!
Integration with other software on the system is a matter for upstream: it's part of the autopackage philosophy that integration between projects should happen with the co-operation of both parties rather than by applying custom distro-specific hacks. Likewise, software such as debconf should be unnecessary, especially for the desktop software autopackage is targeted at. If upstream doesn't provide good configuration tools then that should be fixed upstream. Most configuration can be done at first-run time instead of install time. For system-wide configuration, running the program as root and using its own config UI will typically work better than a generic system: it has higher usability and can integrate better with tools such as Sabayon. There are a few cases where autopackage does not integrate with the base distribution as well as it should; init scripts are one obvious example. This will be fixed by the addition of more APIs that abstract over the different ways distributions work.

# This type of easy installer never manages dependencies! Right?

Wrong. Autopackage supports dependency management. However, it differs from other systems in its approach: rather than maintaining a huge database of files, which inevitably gets out of date the moment you install from source or copy files from another computer, autopackage directly checks the system itself for what it needs. If a dependency is missing, it can be installed automatically from another autopackage (ideally one maintained by the dependency's own developers). In future it will integrate with the user's distribution so tools like apt can be used to resolve dependencies as well.

# If a package needs a library that isn't autopackaged and is missing, it'll break!

You could try to convince the library maintainers to produce their own autopackages, you could produce your own (make sure upstream knows you're doing this!), or you could statically link the library.
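The "check the system directly" idea described above, rather than consulting a package database, can be sketched in a few lines of shell. This is an illustrative snippet only, not the actual autopackage dependency-checking code (which inspects libraries, headers and more, not just binaries on the PATH):

```shell
# Probe the system itself for a dependency instead of asking a
# package database that may be out of date.
check_dep() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: missing"
    fi
}

check_dep sh                 # present on any Linux system
check_dep no-such-tool-xyz   # deliberately absent
```

A check like this succeeds regardless of whether the dependency arrived via RPM, dpkg, a source install, or another autopackage.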
Long term, the solution to this problem is the development of a desktop Linux platform, making it possible to predict with certainty what will be available on your users' systems.

Improving Linux
# What's a desktop Linux platform? Why do we need one?

Essentially, software is easy to install on Windows and MacOS not because of some fancy technology they have that we don't - arguably Linux is the most advanced OS ever seen with respect to package management technology - but because by depending on "Windows 2000 or above" developers get a huge chunk of functionality that is guaranteed to be present and guaranteed to be stable. In contrast, on Linux you cannot depend on anything apart from the kernel and glibc. Beyond that you must explicitly specify everything, which restricts what you can use in your app quite significantly. This is especially true because open source developers often depend on new versions of libraries that have been out for only a few months, putting Linux users on a constant upgrade treadmill. Worse, sometimes the only easy way of upgrading a particular component like GTK+ is to upgrade your entire distribution, which may be hard, especially for dialup users (the norm in third world countries). The problem is made worse by the fact that many distros don't install obsolete library versions by default, relegating them to compatibility packages the user must know about and request explicitly.

A desktop Linux platform would help, because instead of enumerating the dependencies of an app each time (dependencies that may be missing on the user's system), application authors could simply say that their program requires "Desktop Linux v1.0"; a distribution that provides platform v1.3 is guaranteed to satisfy that requirement. Support for a given platform version implies that a large number of libraries are installed. The platform would update every year, meaning that applications which were Desktop Linux compatible (or whatever branding was used) would face a worst-case upgrade lifespan of 12 months.

A platform is more than just providing a bunch of packages. It implies a commitment to stability - that's why it's called a platform.
Would you want to build your house on sand? No? Neither do application developers, and that is why a platform is so important. A fair amount of work is required to make this happen, and it's also quite political. You can read more thoughts on this in the NOTES file.

# What about security?

What about it?

# You mean you're not going to do package signing?

This is something we're still thinking about. In a decentralised environment like the one we're aiming for, it's difficult to guarantee that a package won't blow up your hard disk. As anybody can produce a .package file without permission from us (or anybody else), there is always a risk, no matter how slight, that you will download a package that will attempt to destroy data or root your box. This risk exists with any form of software distribution, including RPMs and source tarballs. So what can be done about it? One solution is to have a known trusted authority audit the code for trojans and digitally sign all packages. This introduces centralisation, however, and worse, if unsigned packages are no longer trusted, any holdup at the signing authority can cause serious problems. Another possibility is a simple web of trust: the root server trusts the gnome.org server, gnome.org trusts gstreamer.net, and gstreamer.net trusts the gst-player package, so you trust gst-player. That might be workable, but it's not something we're currently implementing, and it would require quite a bit of thought.

# Won't the lack of signing let malware and spyware into Linux, like in Windows?

No. This is a logical fallacy. Spyware is not caused by a lack of package signing on Windows: in fact, ActiveX controls are often signed, yet much spyware gets in by exploiting weaknesses in ActiveX (for instance, by using objects that aren't safe for scripting). Microsoft often ship signed EXE files as they wrap documents in them.
Spyware is most often used by proprietary software vendors as a way to fund the development of their software. By shipping a program with Gator or some similar piece of nastiness, they can derive income from "gratis" programs. Package signing in dpkg/rpm and the like does not solve this problem, because it primarily affects proprietary software, which isn't accepted into such repositories anyway. Certainly, once a user has decided they want some program, they will install it regardless of what form it comes in: if Linux gets in their way they will blame the operating system, not the application vendor. autopackage doesn't enable proprietary software vendors to ship spyware any more than Loki Setup does. As running proprietary software on Linux becomes more popular, therefore, spyware is likely to become a problem. It will be tackled, but that is a different project from this one. Long term, the true solution is for proprietary end-user software to be replaced by Free software. However, that's certainly not going to happen by tomorrow, so we must plan ahead for the transitional period (and some software, such as commercial games, will perhaps always be proprietary).

# So how can we fight spyware if not at the package management level?

Fundamentally, this is about filtering and trust. We know there are bad guys out there who want to flood our IT infrastructure with malicious programs, and we know we must do something to stop them. So what can we do? Fighting spyware is the same as fighting viruses and bots: it's about preventing a certain class of programs from running. We say "certain class" because the definition of malicious varies from person to person: there are people who install things like Gator deliberately, because they like the services it provides and do not care what it does in the background. One person's definition of "bad software" differs from another's, and any systematic attempt to exclude programs must take that into account.
Several problems make this task difficult. The first is that there are many ways for malicious code to get access to the system's resources. Most obviously, the user can download and run a trojan horse. The only thing we can do here is intercept the attempt to run a program and warn the user what they are really running. This could be implemented by a new Linux Security Module which maintains a database of cleared files and denies permission on first execution, while triggering a check against a blacklist and asking the user if they want to proceed. Unfortunately, such a technique is ineffective unless a whitelist of digital signatures is used, as otherwise polymorphic containers can be used to evade detection. In other words, if a program is not signed by a trusted key, a warning is produced telling the user not to run it. The point is not to run thorough checks on new vendors and binaries, but to make getting signed binaries very easy; then, if you cause trouble, the key can be revoked quickly. It's very important that gaining a trusted key is as easy as possible, because otherwise you get the problem Microsoft have with signed drivers. In an attempt to ensure driver quality, Microsoft started a signing program whereby drivers were submitted for analysis to the Windows Hardware Quality Labs (WHQL); drivers that passed the tests were signed, and for those that weren't, the user received a warning when installing the hardware. Predictably, many drivers are still not signed and users have learned to ignore the warnings. Any warning or advice the computer gives must be accurate and have minimal false positives, otherwise users will simply ignore it.
Therefore, for a software vendor you aren't sure is trustworthy, it's better to give them a signed key and the benefit of the doubt: if they turn out to violate the policies of the trust network, the key can be revoked in an online update before too much damage is done. The alternative is that users get used to accepting unsigned software, and at that point the defence is useless.

There are other defences possible. Exec-shield and ProPolice type technologies can help defend against viruses and bots that would invade a computer via flaws in its construction. SELinux can be used to lock down what mobile code is able to do, opening up the possibility of rich-client mobile software that is nonetheless secure: something ActiveX and Java tried to do, but failed at. Finally, no matter how good the defences we put in place, they will inevitably be breached. When they are, damage control will be necessary. Programs like Tripwire can monitor a system for unexpected changes, and rootkit scanners can detect break-ins. All these tools need to be adapted for the desktop and welded into one cohesive system deployed by default on desktop Linux distributions. Again, this work is necessary, but it is not part of the autopackage project.

For packagers
# Can I use autopackage for my new distro?

No, autopackage is not a tool for building distributions. It's a tool for third-party software developers to produce packages for their website that any Linux user can use.

# Does it support commercial software?

The licensing of autopackage (LGPL) is suitable for proprietary software vendors. It cannot display EULAs, because the intended UI vision implies zero interactivity. But that's OK: you shouldn't be displaying a EULA in the installer anyway, since on a multi-user system person A can install the program and person B can then use it without ever seeing the agreement. Instead, show it when the program is first started in a new user account. Remember that EULAs are probably non-binding in many parts of the world.

# How does the multiple front ends system work?

When the user runs a package, the scripts figure out which front end to use based on a series of heuristics. If you ran it from the command line, it will use the command-line interface. If you ran it from inside X, for instance from Konqueror or Nautilus, then a graphical front end will be selected based on your running desktop environment. The back end (the part that actually does the installation) communicates with the front end via a simple protocol based (currently) on named pipes.

# Can autopackages be translated?

Yes. autopackage itself is internationalised, and the specfile format (which is based on the INI format) supports having the same sections and keys in multiple languages. In future we hope to ship useful stock phrases that are pre-translated.

# What widget toolkit does the default graphical front end use?

It uses GTK2 and is written in C, because:

- We know C
- We know GTK
- C has a stable ABI, so we only need one binary
- GTK is the most 'neutral' toolkit, in that even people with really eclectic desktops probably have one or two gtk apps kicking around
- GTK is a good toolkit with lots of features. We like it.
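The front-end selection heuristic described under "How does the multiple front ends system work?" can be sketched roughly as follows. This is a simplified illustration, not autopackage's actual logic, and autopackage-gtkfe is a stand-in name for whatever graphical front end binary is installed:

```shell
# Prefer a graphical front end when an X display is available and a
# GUI front end binary exists; otherwise fall back to the text front end.
choose_frontend() {
    if [ -n "$DISPLAY" ] && command -v autopackage-gtkfe >/dev/null 2>&1; then
        echo "gtk"
    else
        echo "tty"
    fi
}

choose_frontend
```

The real system layers more checks on top (e.g. detecting the running desktop environment to pick between graphical front ends), but the shape is the same: inspect the environment, then hand the install over to the chosen front end via the back-end protocol.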