“Snaps” will kill Ubuntu?

Sounds stupid. How can the base application distribution system of Ubuntu kill it?

It is simple: because of “jailing” or “sandboxing”.

I had the displeasure of installing two key applications on my system from “snap”: FreeCad and Firefox.

And you know what?

I couldn’t use them.

Firefox was unable to see anything except the “~/Download” folder, and I use it as my primary HTML renderer. So if it cannot see absolutely every file in the system, it is useless.

FreeCad, in turn, could only see the “~/tmp” and “~/” (home) folders. Normally you would say that is OK. However, I created a huge LVM volume which I mounted at a point accessible to all user accounts. The idea was that each user could create a symlink to a folder on the LVM volume and place it inside their “~/” home folder. It is easy to use and reasonable, more reasonable than creating a separate LVM volume for each user and mounting it inside their home folder.

Of course sandboxing destroyed this idea. Sorry, symlinks leading out of the allowed folders are out of the question.
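As far as I can tell, the confinement checks the resolved (real) path, not the symlink itself, so the trick cannot work. A minimal sketch in Java of what the sandbox effectively looks at (the “shared” link name is just an illustration):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical symlink placed inside the user's home folder,
        // pointing at a shared LVM mount outside of it.
        Path link = Path.of(System.getProperty("user.home"), "shared");

        System.out.println("Path as the user sees it: " + link);
        if (Files.isSymbolicLink(link)) {
            // toRealPath() follows the link; this resolved path is roughly
            // what the sandbox evaluates when deciding whether to allow access,
            // and it still points outside the allowed folders.
            System.out.println("Path the sandbox checks:  " + link.toRealPath());
        }
    }
}
```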

Why is it so stupid?

To protect us, the users, of course. Because we are dumb idiots and cannot tell a rogue application from a legitimate one.

True.

But snap and Flatpak both assume that the person who wrote the application can tell! We, the users, cannot say: “this snap can access that folder, and f*off, dear sandboxing”. No, the snap distributor is the one who can do it.

Where is the logic in that? Whoever creates a rogue snap will surely enable access to every folder he needs. So there is zero protection.

True protection comes only when we, the users, are able to decide what to protect and what not to protect.

Why is it not transparent?

Sandboxing, when done well and given a proper GUI for users to manage it, is a great tool. I would be happy to be able to right-click on any application and say: “this bastard cannot get in there!” I am very curious why it is not done this way, anyway.

A great source of confusion, and proof of idiocy, is the fact that the snap “jail” and the Flatpak “sandbox” actually hide some folders and inject others. And this happens without the user ever being told. You open the “Open” dialog and… hey, where are my folders?! Likewise, you may find folders there which do not exist on your disk at all.

I could accept sandboxing which pops up an “access denied” error or something like that, but not one which simply makes things vanish as if they had never been there. Injected “virtual” folders and files must also be clearly marked. Of course that last part will be tricky, as most file systems do not support such marking, but I suppose it is up to those who promote aggressive sandboxing to scratch their bald heads over it.

Summary

Having secure and safe applications is great. Sometimes, however, it comes close to making them so secure and safe that it is almost as if we did not have them at all. An HTML browser which can’t see files on disk and a CAD which can’t access folders other than “~/” (home) are both so cumbersome to use that users will simply throw them away.

And replace them with less secure solutions. Is this what you want, guys? To make users reject any security measure at all because you tightened them too much? Why not simply create more reasonable file system access rights than the old, useless “user/group/root” triad? Why not let access rights be set per application?

It is like the “change your password weekly” mania in many companies. Sure, it is good. But it is so problematic for users that sooner or later you will end up with two groups of them:

  • those who write passwords on sticky notes and leave them on their desks, or;
  • those who create “algorithmic” passwords, like “the first two letters of the month, followed by the week number, followed by my mum’s name and a silly character”;

Both are bad security practices, but what would you expect?!

Package dependencies, and why I am starting to hate Linux

I have been using Linux at home for more than 20 years now. I started with RedHat during the pre-internet Windows 3.11 era. At that time it was nothing more than a toy or a server system, since there were no applications which could be used to do any real work.

Then I moved to Debian and recently to Ubuntu.

Moving from Debian to Ubuntu was a desperate move. I simply thought that maybe moving to one of the most popular distributions would let me use it with the same flexibility I had at work with Windows XP.

I was wrong.

Sometimes ideas do not scale

DLL hell and the Linux approach to the problem

My first contact with Debian coincided with the rise of something called “DLL hell” on Windows. For those who do not know what it is: a DLL is a “Dynamically Linked Library”. This is exactly the same concept as “*.so” libraries on Linux.

DLLs were introduced by Microsoft together with Windows when multitasking came to the PC world. The concept behind them was: “Since many programs use the same library, why keep two copies of it? And is there any reason to load it twice into precious RAM?”

At that moment nobody thought about “versioning”. I guess they assumed that a new version of a library would always be able to substitute for an older one.

The “DLL hell” appeared when programmers discovered that even though they can ask the operating system to load a certain library for them, the system cannot do it when the library is not there. So each program had to carry its own set of DLLs. The installation media were bloated, and during installation programs dropped all their DLLs into one shared system folder.

As you may already guess, this created a lot of versioning problems. Should the installer overwrite an existing file? Should it delete the library on uninstall or keep it? Windows did not offer any kind of “use tracking” or “dependency tracking”.

So very quickly application creators decided to ship all the needed libraries except the system ones, and not to dump them into a shared folder. Instead, everything is kept in the application folder. This way programmers can be sure that the set of libraries the user is running is exactly the same one that was used for testing. And on uninstall they can just wipe them all out.

Linux was aware of the problem at that time and decided to solve it. It introduced the “package dependency system”. In RedHat it was RPM, in Debian it is DPKG or APT. This was a good idea.

A package maintainer specifies which packages, in which versions, his program needs, and the system keeps track of it. When a package is to be installed, the system checks which dependencies are needed and installs them too. When a package is uninstalled, the system may inform the user that some packages are no longer used by anything and ask whether it should remove them.
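In essence it is a recursive walk over a dependency graph. A minimal sketch of the idea in Java (the package names and the graph are made up; real tools also handle versions and conflicts):

```java
import java.util.*;

public class Resolver {
    // Hypothetical dependency graph: package -> packages it directly depends on.
    static final Map<String, List<String>> DEPS = Map.of(
            "inkscape", List.of("libgtk", "libpng"),
            "libgtk", List.of("libpng"),
            "libpng", List.of());

    // Collects, dependencies first, everything needed to install 'name'.
    static void resolve(String name, LinkedHashSet<String> plan, Set<String> visiting) {
        if (plan.contains(name)) return;                      // already planned
        if (!visiting.add(name))
            throw new IllegalStateException("dependency cycle at " + name);
        for (String dep : DEPS.getOrDefault(name, List.of()))
            resolve(dep, plan, visiting);                     // install dependencies first
        plan.add(name);                                       // then the package itself
    }

    public static void main(String[] args) {
        LinkedHashSet<String> plan = new LinkedHashSet<>();
        resolve("inkscape", plan, new HashSet<>());
        System.out.println("Install order: " + plan);         // [libpng, libgtk, inkscape]
    }
}
```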

It really saved a lot of disk space and RAM.

But it was flawed at its core.

Dependency hell

It did not take long for “dependency hell” to appear.

Any Linux user who does not chase the most up-to-date distribution but just wishes to cherry-pick packages has been hit by it. I was, in fact, three or four times put in such a deadlock that I could not install any package at all. A complete wipe was necessary, and that is never easy work.

At the beginning of my adventure with Debian it came on 8 distribution CDs with about 6000 packages on them. Six thousand packages which all depend on each other!

People always make mistakes. The person who writes the dependencies of a package can make a mistake. You may test every package install and uninstall operation within a certain distribution, but You can’t do it once people start cherry-picking applications from other distributions or sources. And they will. Linux is about freedom, right?

If You do so, sooner or later You will end up with a message saying that something cannot be installed because a dependency is broken. Usually it happens when a package maintainer has fixed the version of a library his package depends on, and instead of linking to “xxxx.2.4.so” he linked to “xxxx.so”, leaving the job to the system. He said: “My program is incompatible with different versions.” Or, closer to the truth: “I tested it with that version of the library, so it may not work with others.”

You may try to solve it by hand, but I never succeeded. Once I even ended up without a GUI because the X server was one of the conflicting packages. So sooner or later You will be forced to do a distribution upgrade. Upgrade everything You have. Yes, I know, I could upgrade it item by item, manually and carefully tracking the best dependencies, but honestly… it is work which may take hours or even days of my time.

And we are back again to resources: people and time.

Just testing whether 6000 packages install or not takes about 3 months of full-time work: assuming just 5 minutes for each of them (install, start, uninstall, check if everything was purged), that is 6000 × 5 minutes = 500 hours, or roughly 62 working days. And what about modern distros which count their packages in the tens of thousands?

There is simply no way to test their dependencies.

Stiffening it up

One thing must be said: You usually won’t have big problems if You stick with a stable distribution and do not install anything from outside of it.

Then Linux is great!

You just select a program from a nice-looking manager, click, and it runs.

But is it much greater than Windows?

No.

The main difference is that on Windows You need to search the internet for a program; the next difference is the download time and disk space used. These may differ by about an order of magnitude or even more, but that is the only difference.

From the user’s point of view it is absolutely nothing. If You have a decent job, there is no problem scraping up a bit of money for a second or third hard disk or an additional memory module. Upgrading the CPU, however, is quite expensive. So the size doesn’t matter, but the speed does.

The price You pay for working with Linux in a “distribution lock-in” model is tremendous.

You can’t install the application You need. You can’t install the version You need. And, what is the sole source of the problem: You can’t install two versions of it at the same time.

So You start cherry-picking from other versions of Your distribution or even from totally outside sources. Remember, the time needed for testing a distribution is huge, so distributions are almost always a year behind the newest stable version of the application You need.

Oh, by the way, in this model You can upgrade to the next distribution release. But then You have to upgrade everything, certainly without being sure Your fine-tuned tweaks will survive it.

Warranty

Computers are used for work. A user must feel safe that installing one application won’t break others. A user must be able to step back if something goes wrong.

If an application works in moderate isolation this can be easy. You install it in a non-standard location, try it, and remove it. As long as the application does not mess with some global data (the registry on Windows, /etc/ on Linux), there is no problem.

Second, if an application supplier needs to give You at least a vague warranty that his program works, he needs to test it against a certain set of libraries. Please remember, a bug fix is not always a fix. Sometimes, especially when the documentation for a library was poor, a bug is actually a “feature”: people checked how it worked and assumed it should be that way, and fixing it then breaks everything. Again, it is easiest to just include the set of libraries together with the application.

This workflow could have worked.

But in package based Linux it does not.

First, there is no way to tell where the program should be installed.

Second, it is in fact scattered all around. On my server at work I tried to get two totally separate copies of a program to run and it was a misery.

And third, You can have neither two different versions in the system nor two different copies of the application.

This could still be acceptable, if not for the dependencies. Installing application X from outside Your base distribution may pull in some libraries and bump up their versions. Your new application needs them and will be happy with them, because it was tested against them. But what about the other applications You have? Were they tested against this new library? Yes, their dependency rules allow it to be installed, but mostly because otherwise You would not be able to install anything from outside the base distribution at all. They were not tested against it!

Now imagine You have found that X is not Your sweet spot. You need to remove it and revert. So You remove it. What happened to the libraries? Did they also revert?

No.

Build it from source

Yeah, right. You want a house? Build it from blueprints!

Downloading Inkscape and compiling it took me, not counting the struggle with source dependencies, about four hours. Four hours. And to be able to use it with a dedicated set of libraries I would still have to tinker with linker settings and the library search path. I am a programmer. I could learn how to do it. Can other people do it? No, they can’t. They have their lives to live.

Diversity of solutions

Of course You will say “it is already solved”. We have Flatpak. We have Docker. We have LXD containers. We have LXC containers. We have Snap. We have…

Guess what?

I don’t care.

I would just like to go to the “something.org” web page, download a file in the version I like, and run it. Answer a few questions, or follow a few lines of instructions, and have it running. Without the need to download or set up anything else.

Then, when the program is no longer needed, I’ll just delete the folder in which it was installed and clean up the icons I made. That’s all. This is the usage model which is friendliest to almost anyone.

Sure, having some kind of “application manager” looks good. If it works. And if it allows You to select which version You need. If it allows You to keep two of them side by side. If it allows You to say where to install it.

Yes, I know containers do allow it. But don’t You think it is like shooting a cannon at a fly? As far as I can see, the idea behind them is to make an isolated, lightweight copy of the system in which the application was tested and let the application see just this copy. You don’t have to compile or tweak the application. You can just take it and put it in a container.

I do not think this is a good idea. I think this idea is even worse than the package system. It is technically good, but from the user’s point of view it is just annoying.

Is it reliable? How does it deal with a case where the container’s system is totally different from the host system? Will a container with Ubuntu inside work well on a 5-year-old RedHat? Will users not be confused when the file browser built into an application sees files which are not in their system? On Windows we, the non-English speakers, were already hit hard by this: try explaining to a user that there is no folder named “Moja muzyka” (Polish for “My Music”) and that, even though he sees it in Explorer, on the command line he has to use “My Music”.

All right, I wrote an application for Linux…

…and I would like to give it to users in a built, running form. What kind of package or container should I use?

I don’t know. Probably all of them? Gosh… It won’t be easy. I will have to learn a lot. A lot. A lot more. I will waste hours and hours which I could have spent on perfecting my application.

My guess is then: screw them all.

Make sure Your application has at most one or two easy-to-get external dependencies. All the others must be included with Your application. I code in Java, so I just tell users to get any Java and clearly state which version I tested it with. If I am sending them physical media I include some JRE, even if it is a small breach of the license terms. They may simply be offline if they asked for physical media, and it would be rude of me to leave them on their own.

Then make sure Your application does not need to be installed at all. You may add some fancy installer if You like, but simply copying it anywhere and clicking it should do the job.

You may ask the system for some global data, but You must not alter it. Keep everything, all settings and so on, in the application folder. Yes, I know it breaks almost all multi-user environment rules. But how many users actually share their computers?
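A minimal sketch of what I mean, in Java (the settings file name is just an illustration): locate the folder the application was launched from and keep the configuration there, not in some global, system-wide location.

```java
import java.io.File;
import java.net.URISyntaxException;

public class AppDir {
    // Finds the folder the application was started from, so settings can
    // live right next to it instead of in a global, system-wide location.
    // Note: getCodeSource() can be null in exotic class-loading setups.
    static File applicationFolder() throws URISyntaxException {
        File jar = new File(AppDir.class.getProtectionDomain()
                                        .getCodeSource()
                                        .getLocation()
                                        .toURI());
        return jar.isFile() ? jar.getParentFile() : jar;
    }

    public static void main(String[] args) throws URISyntaxException {
        // Hypothetical settings file kept inside the application folder.
        File settings = new File(applicationFolder(), "settings.properties");
        System.out.println("Settings will be kept at: " + settings);
    }
}
```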

Even if You do it like this and a second user would like to have his own copy of the application, let him do it. He will copy it to his own folder and start it. No problem at all. All user settings are still private.

But it does double the disk space!

So what? Let the file system do the job. Some file systems can detect duplicate files by themselves and link them together. Some can do it with the help of a companion scanner. If You use such a file system You may expect that after a few days the disk space usage will drop.
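Such a companion scanner is no rocket science either. A rough sketch of the detection part in Java (17+), assuming duplicates are simply files with the same SHA-256 content hash; it only reports candidates, the actual linking is left to the file system or to the administrator:

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;

public class DuplicateScanner {
    public static void main(String[] args) throws Exception {
        Path root = Path.of(args.length > 0 ? args[0] : ".");
        Map<String, List<Path>> byHash = new HashMap<>();

        // Walk the tree and group regular files by their content hash.
        try (var files = Files.walk(root)) {
            for (Path p : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                byHash.computeIfAbsent(hash(p), k -> new ArrayList<>()).add(p);
            }
        }

        // Groups with more than one entry are candidates for de-duplication.
        byHash.values().stream()
              .filter(group -> group.size() > 1)
              .forEach(group -> System.out.println("Duplicates: " + group));
    }

    // Reads the whole file into memory; fine for a sketch, not for huge files.
    static String hash(Path p) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
        return HexFormat.of().formatHex(digest);
    }
}
```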

Summary

If Linux wishes to survive at the consumer level, it must provide a technically simple and uniform way of managing applications that leaves full freedom to the users. The user must be able to decide which application, in what version, where, and in how many copies he would like to install.