Autopackage - Easy Linux Software Installation

General

# What is autopackage?

For users: it makes software installation on Linux easier. If a project provides an autopackage, you know it can work on your distribution. You know it'll integrate nicely with your desktop and you know it'll be up to date, because it's provided by the software developers themselves. You don't have to choose which distro you run based on how many packages are available.

For developers: it's software that lets you create binary packages for Linux that will install on any distribution, can automatically resolve dependencies and can be installed using multiple front ends, for instance from the command line or from a graphical interface. It lets you get your software to your users more quickly, easily and reliably. It immediately increases your user base by allowing people with no native package to run your software within seconds.

# Is autopackage meant to replace RPM?

No. RPM is good at managing the core software of a distro. It's fast, well understood and supports features like prepatching of sources. What RPM is not good at is non-core packages, i.e. programs available from the net, from commercial vendors, from magazine coverdisks and so on. This is the area that autopackage tackles. Although in theory it'd be possible to build a distro based around it, in reality such a solution would be very suboptimal, as we sacrifice speed for flexibility and distro neutrality. For instance, it can take several seconds to verify the presence of all required dependencies, something that RPM can do far more quickly.

# Why a new format? Why do the RPMs I find on the net today only work on one distro?

There are a number of reasons, some obvious, some not so obvious. Let's take them one at a time:

The first is that in order to do dependency checking in a distribution-neutral fashion, a completely new dependency model was required. RPM expresses dependencies only in terms of other RPMs (and the same is true of all current package managers). An expressed dependency is usually given by a name or file location, but both of these things vary - usually pointlessly - between distributions. Worse, a dependency does not encode any information about where to find it or how to install it if it's missing: to get these features, databases of all possible package names must be provided externally. If a name is found in a package but not in a database, you get the familiar "E: Broken Packages" error from apt. It is said that the package universe lacks closure. But this is a natural and common occurrence that any robust installer should be able to deal with cleanly.

Another reason was that when autopackage was designed, distributions and desktops differed greatly in how they dealt with things like menus and file associations. It's been three years now, and this is still the case. While the situation is looking up, and we should soon have robust and well designed standards for these facilities, it will be years until they have been fully rolled out. There are also still many things that are not yet standardised, like init scripts, documentation/help systems and so on.

To deal with this, it was felt that an API-based approach rather than a table-based approach was the way forward. New APIs could easily be added to deal with complex cases such as installing a menu item. Quirks in the distributions could be dealt with easily because the full power of bash scripting is available at every stage, so decisions like "should this file be installed here or there?" can be answered on the fly.

Finally, there was a psychological reason. While RPMs can theoretically be multi-distro, in practice they never are: people do not know how to build multi-distro RPMs, so on virtually all open source project websites you find different RPMs for different versions of each distribution. Worse, much software is not designed with binary portability in mind. Features are enabled or disabled at compile time instead of at runtime, and paths are hard-coded into the binaries.

Creating a totally new format forces people to learn about it from us, and along the way we can teach them how to build robust and portable binaries using the tools we have produced, like apbuild which deals with the baroque glibc/gcc combination, relaytool which makes it trivial to adapt to missing libraries or library features at runtime, and binreloc which lets you easily make a typical autotools-based project binary relocatable (installable to any prefix) at runtime. By using a new format, we ram home the point that this is different from what has come before and requires modifications to the source, making autopackages even more reliable and useful.
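
For example, building an autotools project with apbuild is usually just a matter of substituting its compiler wrappers for the regular ones (a minimal sketch; the flags and prefix here are illustrative and your project may need more):

    # Build with the apbuild wrappers so the resulting binaries avoid
    # picking up unnecessary glibc/gcc version dependencies.
    ./configure CC=apgcc CXX=apg++ --prefix=/usr
    make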

# How does it work?

An autopackage (a .package file) contains all the files needed for the package in a distribution-neutral format with special control files inside, wrapped in a tarball with a stub script prepended. To install a .package file, you simply run it: the stub script checks your system for the autopackage tools and offers to download them if they're not present. It then boots the front end of your choice and begins doing the things that installers do - checking for dependencies, copying files and so on. Finally, you can uninstall or repair a package with the "package uninstall" or "package verify" commands.
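
As a concrete sketch (the package name here is hypothetical), the whole lifecycle looks like this:

    # Run the downloaded package; the stub script fetches the autopackage
    # runtime if needed and then launches an appropriate front end.
    chmod +x frozen-bubble-1.0.package
    ./frozen-bubble-1.0.package

    # Later, check or remove it with the "package" tool:
    package verify frozen-bubble
    package uninstall frozen-bubble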

# Does it do automatic dependency resolution like apt and emerge?

Yes, but in a different way. Currently, maintainers are expected to host a small XML file that describes what packages are available, what mirrors exist, and optionally what interface versions the packages fulfil. The skeleton files which encapsulate a dependency contain the URL to these XML files, so the "bootstrap" information needed to locate a dependency is built into the package. These XML files are part of the luau project.
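
As a purely illustrative sketch - not the actual luau schema, which is documented by that project - such a file might describe the available versions and where to fetch them:

    <!-- Hypothetical illustration only; see the luau project for the real format. -->
    <project name="libfoo">
      <mirror>http://example.org/packages/</mirror>
      <package version="1.2.1" interface="1.2">libfoo-1.2.1.package</package>
    </project>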

In the future we will be able to use apt and similar dependency resolvers directly when no autopackages are available.

Using autopackage

# Help! I've got a problem!

We have a dedicated tech support page, try looking there if this FAQ does not answer your question.

# How do I install autopackages offline?

It may be that your computer cannot access the internet for some reason, but you can still download packages some other way and copy them across. If you want to do that, you'll need to put at least two other files in the same directory as the .package you wish to install (the first time you do this). That way, autopackage won't try to download its runtime code; instead, it'll use the copies you already obtained. The two files you need are:

# Why do packages install to /usr by default?

We used to use /usr/local, but too many distros do not set up all the paths for it correctly, so various things broke badly. In particular, desktop integration like menu items and file associations tended not to work.

Installing to /usr fixes all these issues and gives the user a much more reliable experience. If you dislike it you can alter the default install prefix in the /etc/autopackage/config file, or use the --prefix switch.
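
For example (hypothetical package name, and the exact switch syntax may vary slightly between versions):

    # Install this one package under /opt instead of the default /usr:
    ./frozen-bubble-1.0.package --prefix=/opt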

Alternatively, report bugs against your distribution to get /usr/local (or /opt, or wherever) supported as a first-class citizen and we will whitelist the distribution upstream, so autopackages automatically install to wherever your distribution prefers. However, we won't do this unless installing to the chosen prefix is as functional as installing to /usr.

# What command line switches do .package files support?

These switches are stable and can be relied upon to exist in future.

# How do I uninstall packages, list packages, show what files a package installed ... ?

You can use the package command, which is similar in style to rpm or dpkg. This lets you query the autopackage database along the same lines as you normally would. It doesn't currently support "who owns this file?", because such features are better gained through native package manager integration, which has been on the horizon for a long time. But it supports the basics.
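
A couple of hedged examples (package name hypothetical; uninstall and verify are described elsewhere in this FAQ, and the exact names of the query subcommands may vary between versions):

    package uninstall frozen-bubble    # remove an installed autopackage
    package verify frozen-bubble       # check and repair its installed files
    package list                       # list installed packages (subcommand name assumed)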

Why is it called "package"? Because originally it was intended to be a simple abstraction over the system's native package manager, so you could sit down and do things like install packages without having to know which distribution and native package manager were in use. That feature never really got implemented. It'd be nice to add though, if you are looking for work.

There is also a graphical way to uninstall things: choose "Manage 3rd party software" from the System Tools menu.

# How do I uninstall or upgrade autopackage itself?

Use this command: package remove autopackage-gtk autopackage-qt autopackage. That will remove the autopackage runtime, but it'll leave all existing autopackages and the database alone. That means you can re-install autopackage later and carry on just as before. This is also how you upgrade: uninstall the version you have currently, then install any package and the latest runtime will be downloaded.

# What happens if I install an autopackage without first removing the pre-existing package?

If you install a package, say Gaim, using autopackage and you already have it installed from a native package or from source, then any files which conflict will be backed up into the autopackage database and then replaced. When you uninstall the autopackage, the backed-up files will be put back again. This means you will be upgraded to the new version, but uninstalling the package won't make the software disappear.

Autopackage 1.2 can automatically remove native packages if they conflict, fixing this rather unintuitive behaviour.

# Is there an auto-update facility for autopackages?

Nope, not yet. The autopackage code itself will always download the latest version on first-run though.

# I installed a package and it doesn't show up in the graphical manager program. What gives?

Only programs that install menu entries or file associations (.desktop files) show up in the graphical manager apps by default. This is so the view isn't cluttered with libraries and command line programs.

This has been fixed in newer versions, in which you can reveal all installed packages using a drop down menu.

Why bother?

# What's wrong with centralized repositories, apt style?

The system of attempting to package everything the user of a distro might ever want is not scalable. By "not scalable", we mean the way in which packages are created and stored in a central location, usually by people other than those who made the software in the first place. There are several problems with this approach:

# What's wrong with NeXT/MacOSX style appfolders?

It's quite tricky to implement appfolders on Linux using the same implementation as used on NeXT/RISC OS/MacOS X, for several reasons:

It's worth noting that Apple themselves don't always use appfolders anymore: in fact iTunes itself comes inside an installer.

It'd still be possible to do an appfolders based Linux, but you would probably end up creating a new distro and heavily patching the desktop environments. This could be interesting, but it doesn't solve anything for the millions of Linux users already out there and happy with the OS they have.

But ... (there's always a but) ... this does not mean we can't implement the appfolders style user interface in some other way. People often assume that the only way to have a drag/drop based user interface is to do things exactly like MacOS X does. Not so! The UI vision document talks about how we can implement a better form of the appfolders drag/drop "apps as first class user objects" user interface on top of a Linux style packaging system.

# Why not statically link everything?

A commonly proposed solution to things being hard to install on Linux is to eliminate the biggest problem - dependencies - by not having any dependencies at all. This solution is typically not viable:

That said, static linking is not necessarily evil. Dependencies do have a cost, and should be treated with care. For rare or very new libraries, static linking may be the best way to keep things simple for the user, especially if the code is small. For very unstable libraries, you basically have no choice, as the version you need will often not be available. The apbuild tool makes static linking much easier than it would otherwise be, for occasions when the benefit is deemed to outweigh the cost.

I bet autopackage sucks because ...

# If I install an autopackage, RPM won't know about it!

This is usually not a problem for applications. Whilst it would be nice to register with the RPM database, there is no requirement to do so, and it is common for Linux users to use software not registered in this way (e.g. source installs, or commercial Linux products).

For libraries, RPM will give dependency failures even when the library is actually on the system, installed via autopackage. If this happens, you are no worse off than you would otherwise have been; you'll just have to find an RPM for that library.

Currently RPM and other package managers like dpkg or portage will silently overwrite files on your system if they're in the way. That's a data loss bug, but it is fundamental to the design mentality of these systems, which assumes you will never want to install something that isn't provided by the distribution. In future it's possible that autopackage will register with the package manager databases to prevent this, or alternatively a distribution could provide support for additional prefixes (e.g. via unionfs), one of which could be used by autopackage for its own files.

It's not entirely clear that RPM registration would be useful. An RPM which depends on "widget >= 2.3" may actually not depend on upstream Widget at all, but rather on the distribution's patched or re-configured version of Widget. So having autopackage register an installed copy of upstream Widget would be wrong here, and this is a fundamental problem with today's package managers that is hard to correct.

# It can't possibly integrate as well as a package made for my distribution!

Desktop integration works well: autopackage understands various desktops and desktop versions for menus, file associations and icon themes. This is useful while most deployed desktops don't support the standards, or don't support them correctly!

Integration with other software on the system is a matter for upstream: it's a part of the autopackage philosophy that integration between projects should happen with the co-operation of both parties rather than by applying custom distro-specific hacks.

Likewise, software such as debconf should be unnecessary, especially for the desktop software that autopackage is targeted at. If upstream doesn't provide good configuration tools then that should be fixed there.

Most configuration can be done at first run instead of install time. For system-wide configuration, running the program as root and then using its own configuration UI will generally work better than a generic system: it will have higher usability and can integrate better with tools such as Sabayon.

There are a few cases where it does not integrate with the base distribution as well as it should. Init scripts are one obvious example. This will be fixed by the addition of more APIs to abstract the different ways in which distributions work.

# This type of easy installer never manages dependencies! Right?

Wrong. Autopackage supports dependency management. However, it differs from other systems in its approach: rather than maintaining a huge database of files which will inevitably get out of date the moment you install from source or copy files from another computer, autopackage directly checks the system itself for the things it needs.

If a dependency is missing, it can be installed automatically from another autopackage (which ideally is maintained by the dependency maintainers themselves).

In future it will integrate with the user's distribution so that tools like apt can be used to resolve dependencies as well.

# If a package needs a library that isn't autopackaged and is missing, it'll break!

Well, you could try to convince the library maintainers to produce their own autopackages, you could produce your own (make sure upstream knows you're doing this!), or you could statically link the library. Long term, the solution to this problem is the development of a desktop Linux platform, so it becomes possible to predict with certainty what will be available on your users' systems.

Improving Linux

# What's a desktop Linux platform? Why do we need one?

Essentially, software is easy to install on Windows and MacOS not because of some fancy technology they have that we don't - arguably Linux is the most advanced OS ever seen with respect to package management technology - but rather because by depending on "Windows 2000 or above" developers get a huge chunk of functionality guaranteed to be present, and it's guaranteed to be stable.

In contrast, on Linux you cannot depend on anything apart from the kernel and glibc. Beyond that you must explicitly specify everything, which restricts what you can use in your app quite significantly. This is especially true because open source developers often depend on new versions of libraries that have been out for perhaps only a few months, putting Linux users on a constant upgrade treadmill.

Worse, sometimes the only way of easily upgrading a particular component like GTK+ is to upgrade your entire distribution which may be hard, especially for dialup users (the norm in third world countries). The problem is especially big because many distros don't install obsolete library versions by default, relegating them to compatibility packages the user must know about and request explicitly.

A desktop Linux platform would help, because instead of enumerating the dependencies of an app each time (dependencies that may be missing on the user's system), application authors can simply say that their program requires "Desktop Linux v1.0" - and since later platform versions remain compatible with earlier ones, a distribution that provides platform v1.3 is guaranteed to satisfy that. Support for a given platform version implies that a large number of libraries are installed.

The platform would update every year, meaning that for applications that were Desktop Linux compatible (or whatever branding was used), there would be a worst-case upgrade lifespan of 12 months.

A platform is more than just providing a bunch of packages. It implies a commitment to stability - that's why it's called a platform. Would you want to build your house on sand? No? Neither do application developers, and this is why a platform is so important.

A fair amount of work is required to make this happen, and it's also quite political. You can read more thoughts on this in the NOTES file.

# What about security?

What about it?

# You mean you're not going to do package signing?

This is something we're still thinking about. In a decentralised environment like the one we're aiming for, it's difficult to provide guarantees that the package won't blow up your hard disk. As anybody can produce a .package file without permission from us (or anybody else), there is always a risk, no matter how slight, that you will download a package that will attempt to destroy data or root your box. This is a risk that exists with any form of software distribution, including RPMs and source tarballs.

So what can be done about it? Well, one solution is to have a known trusted authority digitally sign all packages, after auditing the code for trojans. This introduces centralisation, however, and worse, if you no longer trust packages which aren't signed, any holdups at the signing authority can cause serious problems.

Another possibility is a simplistic network of trust. The root server trusts the gnome.org server, and gnome.org trusts gstreamer.net, and gstreamer.net trusts the gst-player package. So you trust gst-player. That might be workable, but it's not something we're currently implementing, and it'd require quite a bit of thought.

# Won't the lack of signing let malware and spyware into Linux, like in Windows?

No. This is a logical fallacy. Spyware is not caused by a lack of package signing on Windows: in fact, ActiveX controls themselves are often signed yet much spyware gets in by exploiting weaknesses in ActiveX (for instance, by using objects that aren't safe for scripting). Microsoft often ship signed EXE files as they wrap documents in them.

Spyware is most often used by proprietary software vendors as a way to fund the development of their software. By shipping a program with Gator or some similar piece of nastiness, they can derive income from "gratis" programs.

Package signing in dpkg/rpm and the like does not solve this problem, because it primarily affects proprietary software which isn't accepted into such repositories anyway. Certainly, if a user has decided they want some program they will install it regardless of what form it comes in: if Linux gets in their way they will blame the operating system, not the application vendor. autopackage doesn't enable proprietary software vendors to ship spyware any more than Loki Setup does.

Therefore, as running proprietary software on Linux becomes more popular, spyware is likely to become a problem. It will be tackled, but that's a different project to this one.

Long term the true solution is for proprietary end-user software to be replaced with Free software. However, that's certainly not going to be done by tomorrow so we must plan ahead to deal with the transitional period (and some software will perhaps always be proprietary, like commercial games).

# So how can we fight spyware if not at the package management level?

Fundamentally, this is about filtering and trust. We know there are bad guys out there who want to flood our IT infrastructure with malicious programs, and we know we must do something to stop them. So what can we do?

Fighting spyware is the same as fighting viruses and bots. It's about preventing a certain class of programs from running. We say "certain class" because the definition of malicious varies from person to person: there are actually people out there who install things like Gator deliberately, because they like the services it provides and do not care what it does in the background. So one person's definition of "bad software" is different from another's, and any systematic attempt to exclude programs must take that into account.

There are several problems that make this task difficult. The first is that there are many ways for malicious code to get access to the system's resources. Most obviously, the user can download and run a trojan horse. The only thing we can do here is intercept the attempt to run a program and warn the user what they are really running. This could be implemented by a new Linux Security Module which maintains a database of cleared files and denies permission on initial execution, while triggering a check against a blacklist and asking the user if they want to proceed.

Unfortunately, you quickly realise that such a technique is ineffective unless a whitelist of digital signatures is used, as otherwise polymorphic containers can be used to evade detection. In other words, if a program is not signed by a trusted key, a warning is produced telling the user not to run it. The point here is not to run thorough checks on new vendors and binaries, rather to make getting signed binaries very easy, and then if you cause trouble the key can be revoked quickly.

It's very important that gaining a trusted key is as easy as possible, because otherwise you get the problem Microsoft have with signed drivers. In an attempt to ensure quality control of drivers, Microsoft started a signing program whereby your drivers would be submitted for analysis to the Windows Hardware Quality Labs (WHQL), and if they passed the tests your drivers would be signed. If they didn't, the user would receive a warning when installing the hardware.

Predictably, many drivers are still not signed and users have learned to ignore the warnings. Any warning or advice the computer gives must be accurate and have minimal false positives, otherwise users will simply ignore it. Therefore, for a software vendor you aren't sure is trustworthy, it's better to give them a signed key and the benefit of the doubt: if they turn out to violate the policies of the trust network, the key can be revoked in an online update before too much damage is done. The alternative is that users get used to accepting unsigned software, and at that point the defence is useless.

There are other defences possible. Exec-shield and ProPolice type technologies can help defend against viruses and bots that would invade a computer via flaws in its construction. SELinux can be used to lock down what mobile code is able to do, opening up the possibility of rich-client mobile software that is nonetheless secure: something ActiveX and Java tried to do, but failed.

Finally, no matter how good the defences we put in place, they will inevitably be breached. When they are, damage control will be necessary. Programs like Tripwire can monitor a system for unexpected changes, and rootkit scanners can be used to detect break-ins. All these programs need to be adapted for the desktop and welded into one cohesive system that can be deployed, by default, on desktop Linux distributions.

Again, doing this work is necessary but not a part of the autopackage project.

For packagers

# Can I use autopackage for my new distro?

No, autopackage is not a tool for building distributions. It's a tool for 3rd party software developers to produce packages for their website that any Linux user can use.

# Does it support commercial software?

The licensing of autopackage is suitable for proprietary software vendors to use (LGPL).

It cannot display EULAs, due to the intended UI vision, which implies zero interactivity. But that's OK - you shouldn't be displaying a EULA in the installer anyway as on a multi-user system it's possible for person A to install the program, and person B to then use it without ever seeing the agreement. So it should be done when you first start the program in a new user account.

Remember that EULAs are probably non-binding in many areas of the world.

# How does the multiple front ends system work?

When the user runs a package, the scripts figure out which front end to use based on a series of heuristics. If you ran it from the command line, it will use the command line interface. If you ran it from inside X, for instance from Konqueror or Nautilus, then a graphical front end will be selected based on your running desktop environment. The back end (the part that actually does the installation) communicates with the front end via a simple protocol based (currently) on named pipes.
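
In rough shell terms the selection amounts to something like this (a simplified sketch, not the real implementation; qt_frontend_installed is a hypothetical stand-in):

    # Simplified sketch of front end selection (not the real code).
    qt_frontend_installed() { false; }                 # hypothetical stub for "is the Qt front end present?"

    if [ -t 0 ]; then                                  # launched from a terminal
        frontend=cli
    elif [ -n "$KDE_FULL_SESSION" ] && qt_frontend_installed; then
        frontend=qt                                    # prefer the Qt front end under KDE when it is installed
    else
        frontend=gtk
    fi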

# Can autopackages be translated?

Yes, autopackage itself is internationalised and the specfile format used (which is based on the INI format) supports having the same sections and keys in multiple languages. In future we hope to ship with useful stock phrases that are pre-translated.
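
As a purely illustrative sketch (not the exact specfile syntax; the key names here are hypothetical), localised variants of a key might look like this:

    # Hypothetical illustration of localised keys in the INI-style specfile.
    [Meta]
    DisplayName: Frozen Bubble
    Summary: A colourful puzzle game
    Summary[fr]: Un jeu de réflexion coloré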

# What widget toolkit does the default graphical front end use?

It uses GTK2 and is written in C, because:

# Is there a KDE/Qt frontend?

Yes, a Qt frontend has been written and is available from the downloads page. If it's installed it will override the GTK frontend. You can see screenshots of it in the gallery.

# Can I autopackage KDE/Qt apps?

It's possible, but you must have both g++ 3.2/3.3 and 3.4/4.0 installed because of the unstable C++ ABI. Set the environment variable APBUILD_CXX1 to your 3.2/3.3 g++ binary, and CXX2 to your 3.4/4.0 binary. Autopackage will compile your application with both compilers and then install the proper binary for the system when your package is installed on a user's computer. GTKmm apps are OK because you can statically link GTKmm and not suffer too badly, but Qt/KDE cannot be statically linked in a sensible fashion.
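
For example, setting the two variables before building (the paths are illustrative, we're assuming the "CXX2" variable mentioned above is spelled APBUILD_CXX2, and the double build itself is driven by your autopackage build scripts):

    # Tell apbuild where to find both compilers; APBUILD_CXX2 is our
    # assumption for the "CXX2" variable mentioned above.
    export APBUILD_CXX1=/usr/bin/g++-3.3
    export APBUILD_CXX2=/usr/bin/g++-4.0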

# Is autopackage cross-platform?

No, right now it only works on x86 and x86-64. We may support non-x86 in the future, but right now we don't. The main reason is because nobody has offered to support non-x86 builds. If you are an experienced user of a non-x86 Linux and have some time to compile releases for us and support them if users report issues, we'd love to hear from you.

# This approach feels way too complicated. Is there a simpler alternative?

Sure. Check out Zero-Install. It's a network-based filing system, which means dependencies are resolved as part of the process of loading the application. All dependencies are available via the web, and are given unique locations within the filing system because of that. An interesting approach, definitely worth checking out.