A Secure Environment for Running Apps?

By Alan Griffiths

Overload, 28(155):18-19, February 2020


Getting apps from the app store is easy. Alan Griffiths considers this from a security perspective.

What does ‘app confinement’ mean?

When you run an application on a computer, you are giving it, and by extension its developers, access to your computer. Unless you take precautions, it gets access to everything you can access.

Historically, there has been a high cost of entry to application development and distribution, meaning that developers had to establish a reputation and earn trust. While some have suspicions about what, say, Microsoft Office or Chromium does, there’s no realistic fear that it will steal from you or hold the information on your computer for ransom.

But the barrier to entry has become low: writing an app and getting it into the app store has never been easier and, as a result, application development is no longer the preserve of a few well-known organizations. The basis for trust that used to exist has been eroded.

At the same time, computers are being trusted with more and more sensitive information. We carry pocket computers with us everywhere and trust them to hold personal information including access to bank accounts, credit cards and medical details.

When the computer has access to your bank accounts, running code from developers who are essentially unknown to you, beyond a picture of their app on the app store, is risky.

Taking precautions to mitigate the risk posed by untrusted code is where app confinement comes into play. By confining the app at the operating system level it is possible to restrict its access to your computer to only those things that are needed for it to work.

How does ‘app confinement’ work?

As developers, we all know that something that sounds simple in the user domain can involve some serious work in the solution domain. App confinement is no exception: we need to consider what the operating system needs to do to confine an app; how that can be controlled; how the user can review and configure the confinement; and how to write and package applications so they work with restricted access to the system.

The discussion that follows talks about some specific Linux technologies for app confinement. That’s for the convenience of having concrete examples that I’m familiar with, but the principles involved can be, and have been, applied with other technologies and on other operating systems.

Kernel and userspace

The code running on a computer can be divided into ‘kernel’ and ‘userspace’. The kernel is that part of the operating system that mediates all interaction with hardware and between processes. The userspace is everything that runs within a normal app. (I know this isn’t the whole story, but software development is about useful abstractions and this separation is useful for this article.)

If we write a “hello world” application, the code we write runs in userspace. So does the output function from the library we use (perhaps operator<<(), or printf(), or …), but at some point it writes to the console and at that point the kernel takes over and, eventually (there may well be further userspace and kernel code executed), some pixels are lit on the screen.

While code can run without invoking the kernel it cannot produce significant effects without doing so. It can’t access your files, it can’t access the internet, it can’t access your keyboard, mouse, touchpad, interact with other processes, etc.

That makes the interface between userspace and kernel a useful place to restrict the activities of a program.

AppArmor

The kernel enhancement I’m familiar with for implementing the confinement of apps is AppArmor. This intercepts calls to the kernel and checks to see if the app is permitted to make them. It does this based on an ‘AppArmor profile’ that has been applied to the app.

Like much of Linux configuration, these profiles are based on text files. These contain rules for matching resources on the system and specify the access that is permitted. For example:

  owner /run/user/[0-9]*/wayland-[0-9]* rw,

allows read and write access to any files matching the pattern that have the same owner (i.e. user) as the app’s process. The app cannot access files or resources unless they are allowed by a rule. (Not even if it is running as root.)
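To give a feel for how such rules combine, here is a deliberately tiny profile sketch. The program name and paths are hypothetical, and real profiles also pull in shared ‘abstractions’ that cover common needs:

```
# Hypothetical profile for an illustrative program /usr/bin/hello-confined
profile hello-confined /usr/bin/hello-confined {
  /usr/bin/hello-confined mr,    # map and read the binary itself
  /lib/** mr,                    # shared libraries
  owner /run/user/[0-9]*/wayland-[0-9]* rw,  # the rule from the text
  # anything not matched by a rule above is denied
}
```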

While AppArmor profiles are readable, they are not at a very convenient level of abstraction. Usually, one is concerned with, for example, enabling the playing of DVDs, not with listing the various logical devices that may be needed to do so. Profiles can easily run into hundreds of lines; to take an example I am working with:

  $ cat /var/lib/snapd/apparmor/profiles/snap.mir-kiosk-kodi.mir-kiosk-kodi | wc -l
  1199

Lists of rules that are this long for each and every application are neither easy to maintain nor to review.

Snaps, snapcraft and snapd

AppArmor is an implementation detail of ‘snap confinement’, which is a component of Canonical’s ‘Snap’ packaging format. Snaps make use of lists of AppArmor rules called ‘interfaces’, each of which covers identifiable capabilities. These interfaces are reviewed by the Snap developers and can be enabled (or disabled) by the end user.

Listing 1 is an example corresponding to the 1200-line AppArmor profile mentioned above:

$ snap connections mir-kiosk-kodi
Interface         Plug                             Slot               Notes
alsa              mir-kiosk-kodi:alsa              :alsa              manual
audio-playback    mir-kiosk-kodi:audio-playback    :audio-playback    -
avahi-observe     mir-kiosk-kodi:avahi-observe     :avahi-observe     manual
hardware-observe  mir-kiosk-kodi:hardware-observe  :hardware-observe  manual
locale-control    mir-kiosk-kodi:locale-control    :locale-control    manual
mount-observe     mir-kiosk-kodi:mount-observe     :mount-observe     manual
network-observe   mir-kiosk-kodi:network-observe   :network-observe   manual
opengl            mir-kiosk-kodi:opengl            :opengl            -
pulseaudio        mir-kiosk-kodi:pulseaudio        :pulseaudio        -
removable-media   mir-kiosk-kodi:removable-media   :removable-media   manual
shutdown          mir-kiosk-kodi:shutdown          :shutdown          manual
system-observe    mir-kiosk-kodi:system-observe    :system-observe    manual
wayland           mir-kiosk-kodi:wayland           :wayland           manual
			
Listing 1

The owner of the computer is in charge of the interfaces a snap connects to. Some carefully curated interfaces will ‘auto-connect’ on installation; most require the user to enable them explicitly. (There are both graphical and command-line ways to manage the connections.)
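Using the snap from Listing 1, a command-line session for inspecting, granting and revoking an interface looks like the following sketch (it assumes the snap is installed; `snap connect` and `snap disconnect` are the standard snapd commands):

```
$ snap connections mir-kiosk-kodi                    # inspect current state
$ sudo snap connect mir-kiosk-kodi:removable-media   # grant access
$ sudo snap disconnect mir-kiosk-kodi:removable-media # revoke it again
```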

This means that, provided an app is packaged and confined as a snap, you can install it and be sure that it isn’t accessing parts of your computer you do not choose to share. Instead of trusting each and every application, you just have to trust ‘snap confinement’. Trusting the well-known company that provides the operating system is less of a risk than trusting ‘Jo’ who uploaded some interesting looking game to the app store.

Writing apps for confined environments

In principle, there is nothing very special about writing apps for confined environments. Your app will need to ‘do stuff’ and that implies having the permissions needed to do that stuff. In the above example, Kodi, a media player, needs access to various sources of media and to the devices needed for audio and video playback.

A side-effect of Snap confinement is that some directories are not in the ‘expected’ place and applications must respect the environment variables that locate them. For example, each snap will have its own $HOME directory (something like /home/alan/snap/mir-kiosk-kodi/51) which it can use without restrictions. So long as the application uses $HOME (and not something like /home/$USER) it can ‘just work’.
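A sketch of the portable approach; ‘myapp’ is an illustrative name, not a real package:

```shell
# Respect $HOME rather than hard-coding /home/$USER: under Snap
# confinement $HOME is redirected to the snap's private directory,
# and unconfined it is the usual home directory, so this works in both.
config_dir="$HOME/.config/myapp"   # 'myapp' is hypothetical
mkdir -p "$config_dir"
echo "config lives in: $config_dir"
```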

Although it has its own $HOME, a confined app has no access to the user’s home directory unless the ‘home’ interface is connected. Even connecting this interface does not give unfettered access: it only allows access to ‘normal’ files and directories; it does not provide any access to hidden ones or to those associated with other snapped applications.

While the details of this treatment of $HOME are specific to Snaps, something similar is needed by any system of confinement to allow applications to work without changes. There are a few other environment variables that are adjusted for Snap confinement but (possibly with a bit of tweaking to the packaging ‘recipe’) most applications ‘just work’.
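For instance, snapd exports variables such as SNAP_USER_DATA when an app runs confined; a sketch of falling back cleanly when they are absent:

```shell
# SNAP_USER_DATA is set by snapd inside a snap; outside one it is
# unset, so fall back to $HOME and the same code runs either way.
data_dir="${SNAP_USER_DATA:-$HOME}"
echo "writing user data under: $data_dir"
```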

The main thing application developers need to do is avoid requiring unnecessary capabilities for the application to run, and to be aware that some capabilities may not be enabled, so that their absence can be handled gracefully.

The Kodi media centre is a good example of this: options that rely on access to resources that are unavailable (because the user hasn’t enabled those interfaces) do not appear on the menus. I don’t think this is intentional support for confinement by the Kodi developers, just a by-product of it being possible to install Kodi on devices with a wide range of capabilities.

Computers are everywhere

Computers are being used in an increasing number of internet-connected devices. As well as the familiar desktops, laptops, tablets and phones, there are all sorts of smart devices that are getting both an internet connection and the ability to install apps. Securing the operation of these is important for both users and developers.

As a user, it may seem cute to install a game for the kids on your car infotainment system, but you really want to be sure that it cannot misbehave and interfere with the satnav! Or, inside the home, adding apps for some local shops to the latest smart fridge could expose traffic on your home WiFi.

As developers, we have a responsibility to ensure the systems we deploy are properly protected against bad actors. Fulfilling that responsibility while opening the system to extension by, for example, installing third-party applications from a ‘store’ needs care.

I hope I’ve given a flavour of how, if the operating system is secure by design, this is possible.

Further reading

Alan Griffiths has delivered working software and development processes to a range of organizations, written for a number of magazines, spoken at several conferences, and made many friends.