June 28, 2017

This is a guest post by Ricardo Feliciano, Developer Evangelist at CircleCI. If you would like to contribute a guest post, please contact ubuntu-iot@canonical.com.

Snapcraft, the package management system fighting for its spot at the Linux table, re-imagines how you can deliver your software. A new set of cross-distro tools is available to help you build and publish “Snaps”. We’ll cover how to use CircleCI 2.0 to power this process and some potential gotchas along the way.

What are snap packages? And Snapcraft?

Snaps are software packages for Linux distributions. They’re designed with lessons learned from delivering software on mobile platforms such as Android, as well as Internet of Things devices. Snapcraft is the name that encompasses Snaps, the command-line tool that builds them, the website, and pretty much the entire ecosystem around the technologies that enable this.

Snap packages are designed to isolate and encapsulate an entire application. This concept enables Snapcraft’s goal of increasing the security, stability, and portability of software, allowing a single “snap” to be installed on not just multiple versions of Ubuntu, but also Debian, Fedora, Arch, and more. Snapcraft’s description per their website:

“Package any app for every Linux desktop, server, cloud or device, and deliver updates directly.”

Building a snap package on CircleCI 2.0

Building a snap on CircleCI is mostly the same as building one on your local machine, wrapped with CircleCI 2.0 syntax. We’ll go through a sample config file in this post. If you’re not familiar with CircleCI or would like to know more about getting started with 2.0 specifically, you can start here.

Base Config

version: 2
jobs:
  build:
    machine: true
    working_directory: ~/project
    steps:
      - checkout
      - run:
          command: |
            sudo apt update && sudo apt install -y snapd
            sudo snap install snapcraft --edge --classic
            /snap/bin/snapcraft

This example uses the machine executor to install snapd, the executable that allows you to manage snaps and enables the platform, as well as snapcraft, the tool for creating snaps.

The machine executor is used rather than the docker executor as we need a newer kernel for the build process. Linux 4.4 is available here, which is new enough for our purposes.

Userspace dependencies

The example above uses the machine executor, which currently is a VM with Ubuntu 14.04 (Trusty) and the Linux v4.4 kernel. This is fine if your project/snap requires build dependencies available in the Trusty repositories. What if you need dependencies available in a different version, perhaps Ubuntu 16.04 (Xenial)? We can still use Docker within the machine executor to build our snap.

version: 2
jobs:
  build:
    machine: true
    working_directory: ~/project
    steps:
      - checkout
      - run:
          command: |
            sudo apt update && sudo apt install -y snapd
            docker run -v $(pwd):$(pwd) -t ubuntu:xenial sh -c "apt update -qq && apt install snapcraft -y && cd $(pwd) && snapcraft"

In this example, we again install snapd in the machine executor’s VM, but we decide to install Snapcraft and build our snap within a Docker container built with the Ubuntu Xenial image. All apt packages available in Ubuntu 16.04 will be available to snapcraft during the build.

Testing

Unit testing your software’s code has been covered extensively in our blog, our docs, and around the Internet. Searching for your language/framework and unit testing or CI will turn up tons of information. Building a snap on CircleCI means we end up with a .snap file, which we can test in addition to the code that created it.

Workflows

Let’s say the snap we built was a webapp. We can build a testing suite to make sure this snap installs and runs correctly. We could try installing the snap. We could run Selenium to make sure the proper pages load, logins work, etc. Here’s the catch: snaps are designed to run on multiple Linux distros. That means we need to be able to run this test suite in Ubuntu 16.04, Fedora 25, Debian 9, etc. CircleCI 2.0’s Workflows can efficiently solve this.

A recent addition to the CircleCI 2.0 beta is Workflows. This allows us to run discrete jobs in CircleCI with a certain flow logic. In this case, after our snap is built, which would be a single job, we could then kick off snap distro testing jobs running in parallel, one for each distro we want to test. Each of these jobs would use a different Docker image for that distro (or, in the future, whichever additional executors become available).

Here’s a simple example of what this might look like:

workflows:
  version: 2
  build-test-and-deploy:
    jobs:
      - build
      - acceptance_test_xenial:
          requires:
            - build
      - acceptance_test_fedora_25:
          requires:
            - build
      - acceptance_test_arch:
          requires:
            - build
      - publish:
          requires:
            - acceptance_test_xenial
            - acceptance_test_fedora_25
            - acceptance_test_arch

This setup builds the snap, and then runs acceptance tests on it with three different distros. If and when all distro tests pass, we can run the publish job in order to finish up any remaining snap tasks before pushing it to the Snap Store.

Persisting the .snap package

To test our .snap package in the Workflows example, we need a way of persisting that file between builds. I’ll mention two ways here.

  1. artifacts – We could store the snap package as a CircleCI artifact during the build job and then retrieve it within the following jobs. CircleCI Workflows has its own way of handling shared artifacts, which can be found here.
  2. snap store channels – When publishing a snap to the Snap Store, there’s more than one channel to choose from. It’s becoming a common practice to publish the master branch of your snap to the edge channel for internal and/or user testing. This can be done in the build job, with the following jobs installing the snap from the edge channel (see the sketch just after this list).
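
A minimal sketch of the second approach, assuming a hypothetical snap named my-snap and a build job that has already authenticated with the store:

# in the build job: publish the freshly built snap to the edge channel
/snap/bin/snapcraft push *.snap --release=edge

# in the following acceptance test jobs: install it from the edge channel
sudo snap install my-snap --edge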

The first method is faster to complete and has the advantage of being able to run acceptance tests on your snap before it hits the Snap Store and touches any user, even testing users. The second method has the advantage that installing from the Snap Store is one of the tests run during CI.

Authenticating with the snap store

The script snapcraft-config-generator.py can generate the store credentials and save them to .snapcraft/snapcraft.cfg (note: always inspect public scripts before running them). You don’t want to store this file in plaintext in your repo (for security reasons). You can either base64 encode the file and store it as a private environment variable or you can encrypt the file and just store the key in a private environment variable.
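
For example, either of the following, run locally before committing, would do the job. This is only a sketch: it assumes the credentials file generated above sits at .snapcraft/snapcraft.cfg, and $KEY is the same private environment variable used in the deploy step below.

# option 1: base64 encode the file, then paste the output into a private environment variable
base64 -w0 .snapcraft/snapcraft.cfg

# option 2: encrypt the file, commit .snapcraft/snapcraft.encrypted, and keep only $KEY private
openssl aes-256-cbc -e -in .snapcraft/snapcraft.cfg -out .snapcraft/snapcraft.encrypted -k $KEY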

Here’s an example of having the store credentials in an encrypted file, and using the creds in a deploy step to publish to the Snap Store:

- deploy:
    name: Push to Snap Store
    command: |
      openssl aes-256-cbc -d -in .snapcraft/snapcraft.encrypted -out .snapcraft/snapcraft.cfg -k $KEY
      /snap/bin/snapcraft push *.snap

Instead of a deploy step, keeping with the Workflows examples from earlier, this could be a deploy job that only runs when and if the acceptance test jobs pass.

More information

 

Original post here

 

on June 28, 2017 10:18 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #511 for the weeks of June 12 – 25, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on June 28, 2017 05:59 AM

Are you from the US Northwest area? Have something cool to tell or show from the open source world? Then you should apply to give a talk at SeaGL this year!

SeaGL is a grassroots technical conference taking place in Seattle, WA, United States on October 6-7th. It’s dedicated to spreading knowledge about the GNU/Linux community and free/libre/open-source software/hardware. I went last year, and there are a lot of cool people with amazing stories on multiple open source topics. Now, we want to hear from you.

I’m sure there are a lot of you with cool projects that you want to share with the world. Go ahead! This fourth year there are 20-minute talks, where you can give a quick introduction to your piece of software/hardware, or 50-minute talks, where you can do a demo and go in-depth about your project. Is it going to be your first talk ever? SeaGL is a great place to get started! Have questions about your talk proposal? They have weekly office hours in the #seagl channel on freenode to give you a hand!

Conferences like SeaGL are powered by their own attendees, so it’d be great to see new faces around showing off amazing stuff. I hope to see many new names on the schedule, as well as some other familiar ones.  Propose a talk, and hopefully, I’ll see you in October! And hurry – CFP closes on August 6th, midnight PDT.


on June 28, 2017 12:02 AM

June 27, 2017

Introduction

This newsletter is to provide a status update from the Ubuntu Foundations Team.  There will also be highlights provided for any interesting subjects the team may be working on.

If you would like to reach the Foundations team, you can find us at the #ubuntu-devel channel on freenode.

 

Highlights

 

The State of the Archive

  • Python3-defaults has migrated to artful with support for python 3.6; see above for more information about the status of this transition
  • With the Debian unstable floodgates open now following the Debian stretch release, a ghc transition is in progress; with luck this will complete over the weekend
  • KDEPIM 16.12 packages are unblocked in artful-proposed, expected to land in artful soon

Upcoming Ubuntu Dates

16.10 EoL in July 2017

16.04.3 point release is scheduled for August 3, 2017

 

Weekly Meeting

on June 27, 2017 07:31 PM

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, they would die of hunger and the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common: but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchair. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their home.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grass roots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember and those may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships are in this category and you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the more well known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.

on June 27, 2017 07:29 PM

Announcements

  • Transition to Git in Launchpad
    The MAAS team is happy to announce that we have moved our code repositories away from Bazaar. We are now using Git in Launchpad.[1]

MAAS 2.3 (current development release)

This week, the team has worked on the following features and improvements:

  • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes for the upcoming transition to Git. The progress involved:
    • Updated Jenkins job configuration to run CI tests from Git instead of bzr.
    • Created new Jenkins jobs to test older releases via Git instead of bzr.
    • Updated the Jenkins job triggering mechanism from using Tarmac to using the Jenkins Git plugin.
    • Replaced the maas code lander (based on tarmac) with a Jenkins job to automatically land approved branches.
      • This also includes a mechanism to automatically set milestones and close Launchpad bugs.
    • Updated Snap building recipe to build from Git. 
  • Removal of ‘tgt’ as a dependency behind a feature flag – This week we have landed the ability to load ephemeral images via HTTP from the initrd, instead of doing it via iSCSI (served by ‘tgt’). While the use of ‘tgt’ is still the default, the ability to not use it is hidden behind a feature flag (http_boot). This is only available in trunk.
  • Django 1.11 transition – We are down to the latest items of the transition, and we are targeting it to be completed by the upcoming week. 
  • Network Beaconing & better network discovery – The team is continuing to make progress on beacons. Following a thorough review, the beaconing packet format has been optimized; beacon packets are now simpler and more compact. We are targeting rack registration improvements for next week, so that newly-registered rack controllers do not create new fabrics if an interface can be determined to be on an existing fabric.

Bug Fixes

The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1). The MAAS team is currently targeting a new 2.2.1 release for the upcoming week.

  • LP #1687305 – Fix virsh pods reporting wrong storage
  • LP #1699479 – A couple of unstable tests failing when using IPv6 in LXC containers

[1]: https://git.launchpad.net/maas

on June 27, 2017 01:10 PM

New address book

Colin Watson

I’ve had a kludgy mess of electronic address books for most of two decades, and have got rather fed up with it. My stack consisted of:

  • ~/.mutt/aliases, a flat text file consisting of mutt alias commands
  • lbdb configuration to query ~/.mutt/aliases, Debian’s LDAP database, and Canonical’s LDAP database, so that I can search by name with Ctrl-t in mutt when composing a new message
  • Google Contacts, which I used from Android and was completely separate from all of the above

The biggest practical problem with this was that I had the address book that was most convenient for me to add things to (Google Contacts) and the one I used when sending email, and no sensible way to merge them or move things between them. I also wasn’t especially comfortable with having all my contact information in a proprietary web service.

My goals for a replacement address book system were:

  • free software throughout
  • storage under my control
  • single common database
  • minimal manual transcription when consolidating existing databases
  • integration with Android such that I can continue using the same contacts, messaging, etc. apps
  • integration with mutt such that I can continue using the same query interface
  • not having to write my own software, because honestly

I think I have all this now!

New stack

The obvious basic technology to use is CardDAV: it’s fairly complex, admittedly, but lots of software supports it and one of my goals was not having to write my own thing. This meant I needed a CardDAV server, some way to sync the database to and from both Android and the system where I run mutt, and whatever query glue was necessary to get mutt to understand vCards.

There are lots of different alternatives here, and if anything the problem was an embarrassment of choice. In the end I just decided to go for things that looked roughly the right shape for me and tried not to spend too much time in analysis paralysis.

CardDAV server

I went with Xandikos for the server, largely because I know Jelmer and have generally had pretty good experiences with their software, but also because using Git for history of the backend storage seems like something my future self will thank me for.

It isn’t packaged in stretch, but it’s in Debian unstable, so I installed it from there.

Rather than the standalone mode suggested on the web page, I decided to set it up in what felt like a more robust way using WSGI. I installed uwsgi, uwsgi-plugin-python3, and libapache2-mod-proxy-uwsgi, and created the following file in /etc/uwsgi/apps-available/xandikos.ini which I then symlinked into /etc/uwsgi/apps-enabled/xandikos.ini:

[uwsgi]
socket = 127.0.0.1:8801
uid = xandikos
gid = xandikos
umask = 022
master = true
cheaper = 2
processes = 4
plugin = python3
module = xandikos.wsgi:app
env = XANDIKOSPATH=/srv/xandikos/collections

The port number was arbitrary, as was the path. You need to create the xandikos user and group first (adduser --system --group --no-create-home --disabled-login xandikos). I created /srv/xandikos owned by xandikos:xandikos and mode 0700, and I recommend setting a umask as shown above since uwsgi’s default umask is 000 (!). You should also run sudo -u xandikos xandikos -d /srv/xandikos/collections --autocreate and then Ctrl-c it after a short time (I think it would be nicer if there were a way to ask the WSGI wrapper to do this).
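
With the application configuration symlinked into apps-enabled, restarting uwsgi should start the Xandikos workers. A minimal sketch, assuming the Debian uwsgi packaging (which picks apps up from that directory):

sudo service uwsgi restart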

For Apache setup, I kept it reasonably simple: I ran a2enmod proxy_uwsgi, used htpasswd to create /etc/apache2/xandikos.passwd with a username and password for myself, added a virtual host in /etc/apache2/sites-available/xandikos.conf, and enabled it with a2ensite xandikos:

<VirtualHost *:443>
        ServerName xandikos.example.org
        ServerAdmin me@example.org

        ErrorLog /var/log/apache2/xandikos-error.log
        TransferLog /var/log/apache2/xandikos-access.log

        <Location />
                ProxyPass "uwsgi://127.0.0.1:8801/"
                AuthType Basic
                AuthName "Xandikos"
                AuthBasicProvider file
                AuthUserFile "/etc/apache2/xandikos.passwd"
                Require valid-user
        </Location>
</VirtualHost>

Then service apache2 reload, set the new virtual host up with Let’s Encrypt, reloaded again, and off we go.
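
For the Let’s Encrypt step, one possible approach (assuming certbot with its Apache plugin is installed; xandikos.example.org is the placeholder hostname from the virtual host above) is:

sudo certbot --apache -d xandikos.example.org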

Android integration

I installed DAVdroid from the Play Store: it cost a few pounds, but I was OK with that since it’s GPLv3 and I’m happy to help fund free software. I created two accounts, one for my existing Google Contacts database (and in fact calendaring as well, although I don’t intend to switch over to self-hosting that just yet), and one for the new Xandikos instance. The Google setup was a bit fiddly because I have two-step verification turned on so I had to create an app-specific password. The Xandikos setup was straightforward: base URL, username, password, and done.

Since I didn’t completely trust the new setup yet, I followed what seemed like the most robust option from the DAVdroid contacts syncing documentation, and used the stock contacts app to export my Google Contacts account to a .vcf file and then import that into the appropriate DAVdroid account (which showed up automatically). This seemed straightforward and everything got pushed to Xandikos. There are some weird delays in syncing contacts that I don’t entirely understand, but it all seems to get there in the end.

mutt integration

First off I needed to sync the contacts. (In fact I happen to run mutt on the same system where I run Xandikos at the moment, but I don’t want to rely on that, and going through the CardDAV server means that I don’t have to poke holes for myself using filesystem permissions.) I used vdirsyncer for this. In ~/.vdirsyncer/config:

[general]
status_path = "~/.vdirsyncer/status/"

[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from a", "from b"]

[storage contacts_local]
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage contacts_remote]
type = "carddav"
url = "<Xandikos base URL>"
username = "<my username>"
password = "<my password>"

Running vdirsyncer discover and vdirsyncer sync then synced everything into ~/.contacts/. I added an hourly crontab entry to run vdirsyncer -v WARNING sync.
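
The crontab entry might look something like this (the minute chosen is arbitrary):

17 * * * * vdirsyncer -v WARNING sync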

Next, I needed a command-line address book tool based on this. khard looked about right and is in stretch, so I installed that. In ~/.config/khard/khard.conf (this is mostly just the example configuration, but I preferred to sort by first name since not all my contacts have neat first/last names):

[addressbooks]
[[contacts]]
path = ~/.contacts/<UUID of my contacts collection>/

[general]
debug = no
default_action = list
editor = vim
merge_editor = vimdiff

[contact table]
# display names by first or last name: first_name / last_name
display = first_name
# group by address book: yes / no
group_by_addressbook = no
# reverse table ordering: yes / no
reverse = no
# append nicknames to name column: yes / no
show_nicknames = no
# show uid table column: yes / no
show_uids = yes
# sort by first or last name: first_name / last_name
sort = first_name

[vcard]
# extend contacts with your own private objects
# these objects are stored with a leading "X-" before the object name in the vcard files
# every object label may only contain letters, digits and the - character
# example:
#   private_objects = Jabber, Skype, Twitter
private_objects = Jabber, Skype, Twitter
# preferred vcard version: 3.0 / 4.0
preferred_version = 3.0
# Look into source vcf files to speed up search queries: yes / no
search_in_source_files = no
# skip unparsable vcard files: yes / no
skip_unparsable = no

Now khard list shows all my contacts. So far so good. Apparently there are some awkward vCard compatibility issues with creating or modifying contacts from the khard end. I’ve tried adding one address from ~/.mutt/aliases using khard and it seems to at least minimally work for me, but I haven’t explored this very much yet.

I had to install python3-vobject 0.9.4.1-1 from experimental to fix eventable/vobject#39 when saving certain vCard files.

Finally, mutt integration. I already had set query_command="lbdbq '%s'" in ~/.muttrc, and I wanted to keep that in place since I still wanted to use LDAP querying as well. I had to write a very small amount of code for this (perhaps I should contribute this to lbdb upstream?), in ~/.lbdb/modules/m_khard:

#! /bin/sh

m_khard_query () {
    khard email --parsable --remove-first-line --search-in-source-files "$1"
}

My full ~/.lbdb/rc now reads as follows (you probably won’t want the LDAP stuff, but I’ve included it here for completeness):

MODULES_PATH="$MODULES_PATH $HOME/.lbdb/modules"
METHODS='m_muttalias m_khard m_ldap'
LDAP_NICKS='debian canonical'
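
A quick way to check that the new module is wired in is to query lbdb directly from the command line (the search term here is just an example):

lbdbq watson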

Next steps

I’ve deleted one account from Google Contacts just to make sure that everything still works (e.g. I can still search for it when composing a new message), but I haven’t yet deleted everything. I won’t be adding anything new there though.

I need to push everything from ~/.mutt/aliases into the new system. This is only about 30 contacts so shouldn’t take too long.

Overall this feels like a big improvement! It wasn’t a trivial amount of setup for just me, but it means I have both better usability for myself and more independence from proprietary services, and I think I can add extra users with much less effort if I need to.

Postscript

A day later and I’ve consolidated all my accounts from Google Contacts and ~/.mutt/aliases into the new system, with the exception of one group that I had defined as a mutt alias and need to work out what to do with. This all went smoothly.

I’ve filed the new lbdb module as #866178, and the python3-vobject bug as #866181.

on June 27, 2017 11:57 AM
The forkstat mascot
Forkstat is a tiny utility I wrote a while ago to monitor process activity via the process events connector. Recently I was sent a patch by Philipp Gesang adding a new -l option to switch to line-buffered output to reduce the delay on output when redirecting stdout, which is a useful addition to the tool.   During some spare time I looked at the original code and noticed that I had overlooked some of the lesser-used process event types:
  • STAT_PTRC - ptrace attach/detach events
  • STAT_UID - UID (and GID) change events
  • STAT_SID - SID change events
...so I've now added support for these events too.
    I've also added some extra per-process information on each event. The new -x "extra info" option will now also display the UID of the process and where possible the TTY it is associated with.  This allows one to easily detect who is responsible for generating the process events.

    The following example shows forkstat being used to detect when a process is being traced using ptrace:

     sudo ./forkstat -x -e ptrce  
    Time Event PID UID TTY Info Duration Process
    11:42:31 ptrce 17376 0 pts/15 attach strace -p 17350
    11:42:31 ptrce 17350 1000 pts/13 attach top
    11:42:37 ptrce 17350 1000 pts/13 detach

    Process 17376 runs strace on process 17350 (top). We can see the ptrace attach event on the process and then, a few seconds later, the detach event.  We can see that the strace was being run from pts/15 by root.   Using forkstat we can now snoop on users who are snooping on other users' processes.

    I use forkstat mainly to capture busy process fork/exec/exit activity that tools such as ps and top cannot see because of the very short duration of some processes or threads. Sometimes processes are created so rapidly that one needs to run forkstat with a high priority to capture all the events, and so the new -r option will run forkstat with a high real time scheduling priority to try and capture all the events.
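
    For example, a hypothetical invocation combining these options to snoop on busy fork/exec/exit activity at a high real time priority might be:

     sudo ./forkstat -r -x -e fork,exec,exit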

    These new features landed in forkstat V0.02.00 for Ubuntu 17.10 Artful Aardvark.
    on June 27, 2017 10:59 AM

    Welcome to the third Ubuntu OpenStack development summary!

    This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

    If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

    OpenStack Distribution

    Stable Releases

    The next cadence of stable fixes is undergoing testing:

    Cinder: RBD calls block entire process (Kilo)
    https://bugs.launchpad.net/cinder/+bug/1401335

    Cinder: Upload to image does not copy os_type property (Kilo)
    https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1692446

    Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)
    https://bugs.launchpad.net/ubuntu/trusty/+source/swift/+bug/1683076

    Neutron: Router HA creation race (Mitaka, Newton)
    https://bugs.launchpad.net/neutron/+bug/1662804

    Mitaka Stable Point Releases
    https://bugs.launchpad.net/ubuntu/+bug/1696177

    Newton Stable Point Releases
    https://bugs.launchpad.net/ubuntu/+bug/1696133

    Ocata Stable Point Releases
    https://bugs.launchpad.net/ubuntu/+bug/1696139

    Nova-LXD storage pool compatibility
    https://bugs.launchpad.net/ubuntu/+source/nova-lxd/+bug/1692962

    You’ll notice some lag in the cadence flow at the moment; we’re working with the Ubuntu SRU team to see how that can be optimised better going forwards.

    Development Release

    Builds for all architectures for Ceph 12.0.3 can be found in:

    https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2779

    The first RC for Luminous was released last week, so expect that to appear soon in the same location; pending successful testing 12.1.0 will also be uploaded to Artful and the Ubuntu Cloud Archive for OpenStack Pike.

    You’ll also find updated GlusterFS 3.10.x packages in Ubuntu Artful and the Ubuntu Cloud Archive for OpenStack Pike.

    OpenStack Pike Milestone 2 is now in the Ubuntu Cloud Archive for OpenStack Pike which can be added to Ubuntu 16.04 LTS installations using:

    sudo add-apt-repository cloud-archive:pike
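
    After adding the archive, something along these lines (illustrative; the exact packages depend on your deployment) pulls in the Pike-level versions of any OpenStack packages already installed:

     sudo apt update
     sudo apt dist-upgrade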

    This milestone involved over 70 package updates and 5 new packages for new OpenStack dependencies!

    OpenStack Snaps

    After some review and testing, the snap sub-team decided to switch back to strict mode for all OpenStack snaps; classic mode was pushing complexity out of snapd and into every snap which was becoming hard to maintain, so moving back to strict mode snaps made sense.

    Alongside this work, we’ve been working on the first cut of ‘snapstack’, a tool to support testing of snaps as part of the gating process for development, and as part of the CI/CD process for migration of snaps across channels in the snap store.

    If you want to give the current snaps a spin to see what’s possible, check out snap-test.

    Nova LXD

    Work on Nova-LXD in the last few weeks has focussed on moving the Tempest DevStack OpenvSwitch experimental gate into the actual blocking gate; this work is now complete for Ocata and Pike releases; tests are not executed against the older Newton stable branch due to a number of races in the VIF plugging part of the driver. This is a significant step forward in assuring the quality of the driver going forwards.

    Work is also underway on refactoring the VIF plugging codebase to integrate better with os-vif and Neutron; this should improve the quality of the driver when used with the Linuxbridge mechanism driver in Neutron, and will make integration of other SDN choices easier in the future. This work will also resolve compatibility issues with the native Open vSwitch firewall driver and Nova-LXD.

    OpenStack Charms

    New Charms

    Specifications are up for review for the proposed Gnocchi and GlusterFS charms. Please feel free to read through and provide any feedback on the proposed specifications!

    Pike Updates

    A few minor updates to support the Pike development release are working through review; these should be landed soon (the team aims to maintain deployability of development milestones alongside OpenStack development).

    IRC (and meetings)

    As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

    EOM


    on June 27, 2017 09:50 AM

    I recently published a paper with Sayamindu Dasgupta that provides evidence in support of the idea that kids can learn to code more quickly when they are programming in their own language.

    Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

    The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

    Examples of a short piece of Scratch code shown in four different human languages: English, Italian, Norwegian Bokmål, and German.

    Although my research with Sayamindu Dasgupta focuses on learning, both Sayamindu and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

    After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

    We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

    To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.

    Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

    Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

    Summary of the results of our model for two prototypical individuals.

    Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

    Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.


    This blog post and the work that it describes is a collaborative project with Sayamindu Dasgupta. Sayamindu also published a very similar version of the blog post in several places. Our paper is open access and you can read it here. The paper was published in the proceedings of the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.

    on June 27, 2017 01:00 AM

    June 26, 2017

    In recent years innersource is a term that has cropped up more and more. As with all new things in technology, there has been a healthy mix of interest and suspicion around what exactly innersource is (and what it isn’t).

    As a consultant I work with a range of organizations, large and small, across various markets (e.g. financial services, technology etc) to help them bring innersource into their world. So, here is a quick guide to what innersource is, why you might care, and how to get started.

    What is Innersource?

    In a nutshell, ‘innersource’ refers to bringing the core principles of open source and community collaboration within the walls of an organization. This involves building an internal community, collaborative engineering workflow, and culture.

    This work happens entirely within the walls of the company. For all intents and purposes, the company develops an open source culture, but entirely focused on their own intellectual property, technology, and teams. This provides the benefits of open source collaboration, community, and digital transformation, but in a safe environment, particularly for highly regulated industries such as financial services.

    Innersource is not a product or service that you buy and install on your network. It is instead a term that refers to the overall workflow, methodology, community, and culture that optimizes an organization for open source style collaboration.

    Why do people Innersource?

    Many organizations are very command-and-control driven, often as a result of their industry (e.g. highly regulated industries), the size of the organization, or how long they have been around.

    Command-and-control driven organizations often hit a bottleneck in efficiency which results in some negative outcomes such as slower Time To Market, additional bureaucracy, staff frustration, reduced innovation, loss of a competitive edge, and additional costs (and waste) for operating the overall business.

    An unfortunate side effect of this is that teams get siloed, and this results in reduced collaboration between projects and teams, duplication of effort, poor communication of wider company strategic goals, territorial leadership setting in, and frankly…the organization becomes a less fun and inspiring place to work.

    Pictured: frustration.

    While the benefits of open source have been clearly felt in reducing costs for consuming and building software and services, there has also been substantive value for organizations and staff who work together using an open source methodology. People feel more engaged, are able to grow their technical skills, build more effective relationships, feel their work has more impact and meaning, and experience more engagement in their work.

    It is very important to note that innersource is not merely about optimizing how people write code. Sure, workflow is a key component, but innersource is fundamentally cultural in focus. You need both: if you build an environment that (a) has an open and efficient peer-review based workflow, and (b) has a culture that supports cross-departmental collaboration and internal community, the tangible output is, unsurprisingly, not just better code, but better teams and better products.

    What are the BENEFITS of innersource for an organization?

    There are a number of benefits for organizations that work in an innersource way:

    • Faster Time To Market (TTM) – innersource optimizes teams to work faster and more efficiently and this reduces the time it takes to build and release new products and services.
    • Better code – a collaborative peer-review process commonly results in better quality code as multiple engineers are reviewing the code for quality, optimization, and elegance.
    • Better security – with more eyeballs on code due to increased collaboration, all bugs (and security flaws) are shallow. This means that issues can be identified more quickly, and thus fixed.
    • Expanded innovation – you can’t successfully “tell” people to innovate. You have to build an environment that encourages employees to have and share ideas, experiment with prototypes, and collaborate together. Innersource optimizes an organization for this and the result is a permissive environment that facilitates greater innovation.
    • Easier hiring – young engineers are growing up in a world where they can cut their teeth on open source projects to build up their experience. Consequently, they don’t want to work in dusty siloed organizations, they want to work in an open source way. Innersource (as well as wider open source participation) not only makes your company more attractive, but it is increasingly a requirement to attract the best talent.
    • Improved skills development – with such a focus on collaboration with innersource, staff learn from each other, discover new approaches, and rectify bad habits due to peer review.
    • Easier performance/audit/root cause analysis – innersource workflow results in a digital record of your entire collaborative work. This can make tracking performance, audits, and root cause analysis easier. Your organization benefits from a record of how previous work was done which can inform and illustrate future decisions.
    • More efficient on-boarding for new staff – when new team members join the company, this record of work I outlined in the previous bullet helps them to see and learn from how previous decisions were made and how previous work was executed. This makes on-boarding, skills development, and learning the culture and personalities of an organization much easier.
    • Easier collaboration with the public open source world – while today you may have no public open source contributions to make, if in the future you decide to either contribute to or build a public open source project, innersource will already instill the necessary workflow, process, and skills to work with public open source projects well.

    What are the RISKS of innersource for an organization?

    While innersource has many benefits, it is not a silver bullet. As I mentioned earlier, innersource is fundamentally about building culture, and a workflow and methodology that provides practical execution and delivery.

    Building culture is hard. Here are some of the risks attached:

    • It takes time – putting innersource in place takes time. I always recommend organizations to start small and iterate. As such, various people in the organization (e.g. execs and key stakeholders) will need to ensure they have realistic expectations about the delivery of this work.
    • It can cause uncertainty – bringing in any new workflow and culture can cause people to feel anxious. It is always important to involve people in the formation and iteration of innersource, communicate extensively, reassure, and always be receptive to feedback.
    • Purely top-down directives are often not taken seriously – innersource requires both a top-down permissive component from senior staff and bottom-up tangible projects and workflow for people to engage with. If one or the other is missing, there is a risk of failure.
    • It varies from organization to organization – while the principles of innersource are often somewhat consistent, every organization’s starting point is different. As such, delivering this work will require a lot of nuance for the specifics of that organization, and you can’t merely replicate what others have done.

    How do I use Innersource at my company?

    In the interests of keeping this post concise, I am not going to explain how to build out an innersource program here, but instead share links to some other articles I have written on how to get started:

    One thing I would definitely recommend is hiring someone to help you with this work. While not critical, there is a lot of nuance attached to building the right mix of workflow, incentives, messaging, and building institutional knowledge. Obviously, this is something I provide as a consultant (more details), so if you want to discuss this further, just drop me a line.

    The post Innersource: A Guide to the What, Why, and How appeared first on Jono Bacon.

    on June 26, 2017 04:02 AM

    June 24, 2017

    We will discuss whether a rolling-release Ubuntu should exist, and the obsolescence caused by the abandonment of the 32-bit architecture.

    The podcast is available to listen to at:

    Ubuntu y otras hierbas S01E06

    Taking part in this episode: Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales.
    on June 24, 2017 09:43 PM

    June 23, 2017

    conjure-up dev summary for week 25


    With conjure-up 2.2.2 out the door we bring a whole host of improvements!

    sudo snap install conjure-up --classic  
    

    Improved Localhost

    We recently switched over to using a bundled LXD and with that change came a few hiccups in deployments. We've been monitoring the error reports coming in and have made several fixes to improve that journey. If you are one of the ones unable to deploy spells please give this release another go and get in touch with us if you still run into problems.

    Juju

    Our biggest underlying technology that we utilize for deployments is Juju. Version 2.2.1 was just released and contains a number of highly anticipated performance improvements:

    • frequent database writes (for logging and agent pings) are batched to significantly reduce database I/O
    • cleanup of log noise to make observing true errors much easier
    • status history is now pruned whereas before a bug prevented that from happening leading to unbounded growth
    • update-status interval configurable (this value must be set when bootstrapping or performing add-model via the --config option; any changes after that are not noticed until a Juju restart)
    • debug-log include/exclude arguments are now more user friendly (as with commands like juju ssh, you now specify machine/unit names instead of tags; "rabbitmq-server/0" instead of "unit-rabbitmq-server-0" – see the example below this list)
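
    For example, following the logs of a single unit by its friendly name (the unit name here is illustrative) now looks like:

     juju debug-log --include rabbitmq-server/0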

    Capturing Errors

    In the past, we’ve tracked errors the same way we track other general usage metrics for conjure-up. This has given us some insight into what issues people run into, but it doesn’t give us much to go on to fix those errors. With this release, we’ve begun using the open source Sentry service (https://sentry.io/) to report some more details about failures, and it has already greatly improved our ability to proactively fix those bugs.

    Sentry collects information such as the conjure-up release, the lxd and juju version, the type of cloud (aws, azure, gce, lxd, maas, etc), the spell being deployed, the exact source file and line in conjure-up where the error occurred, as well as some error specific context information, such as the reason why a controller failed to bootstrap.

    As with the analytics tracking, you can easily opt out of reporting via the command line. In addition to the existing --notrack option, there is now also a --noreport option. You can also set these options in a ~/.config/conjure-up.conf file. An example of that file would be:

    [REPORTING]
    notrack = true  
    noreport = true  
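
    The equivalent from the command line would look something like this (the spell name is illustrative):

     conjure-up --notrack --noreport kubernetes-core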
    

    Future

    Our next article is going to cover the major features planned for conjure-up 2.3! Be sure to check back soon!

    on June 23, 2017 05:36 PM

    This week Mark goes camping, we interview Michael Hall from Endless Computers, bring you another command line love and go over all your feedback.

    It’s Season Ten Episode Sixteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

    In this week’s show:

    • We discuss what we’ve been upto recently:
    • We interview Michael Hall about Endless Computers.

    • We share a Command Line Lurve:

      • nmon – nmon is short for Nigel’s performance Monitor
    • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • This week’s cover image is taken from Wikimedia.

    That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    on June 23, 2017 02:00 PM

    June 22, 2017

    ISO Image Writer

    Jonathan Riddell

    ISO Image Writer is a tool I’m working on which writes .iso files onto a USB disk ready for installing your lovely new operating system.  Surprisingly, many distros don’t have very slick recommendations for how to do this, but they’re all welcome to try this.

    It’s based on ROSA Image Writer which has served KDE neon and other projects well for some time.  This adds ISO verification to automatically check the digital signatures or checksums; currently supported are KDE neon, Kubuntu and Netrunner.  It also uses KAuth so it doesn’t run the UI as root, only a simple helper binary to do the writing.  And it uses KDE Frameworks goodness so the UI feels nice.

    First alpha 0.1 is out now.

    Download from https://download.kde.org/unstable/isoimagewriter/

    Signed by release manager Jonathan Riddell with 0xEC94D18F7F05997E. Git tags are also signed by the same key.

    It’s in KDE Git at kde:isoimagewriter and in bugs.kde.org, so please do try it out and report any issues.  If you’d like a distro added to the verification please let me know and/or submit a patch. (The code to do this is a bit verbose currently and needs tidying up.)

    I’d like to work out how to make AppImages, Windows and Mac installs for this but for now it’s in KDE neon developer editions and available as source.

     

    on June 22, 2017 07:14 PM
    The stress-ng logo
    The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that cycles around performing high precision sleeps and measures the (extra overhead) latency taken to perform the sleep compared to the expected time.  This loop runs with either the Round-Robin (rr) or First-In-First-Out (fifo) real time scheduling policies.

    The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

    The first 10,000 latency measurements are used to compute various latency statistics:
    • mean latency (aka the 'average')
    • modal latency (the most 'popular' latency)
    • minimum latency
    • maximum latency
    • standard deviation
    • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
    • latency distribution (enabled with the --cyclic-dist option)
    The latency percentiles indicate the latency below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency at or below which 9,900 samples fall.

    The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.

    For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 -t 5
    stress-ng: info: [27594] dispatching hogs: 1 cyclic
    stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
    stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
    stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
    stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
    stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 us
    stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 us
    stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 us
    stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 us
    stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 us
    stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 us intervals):
    stress-ng: info: [27595] stress-ng-cyclic: latency (us) frequency
    stress-ng: info: [27595] stress-ng-cyclic: 0 0
    stress-ng: info: [27595] stress-ng-cyclic: 1000 0
    stress-ng: info: [27595] stress-ng-cyclic: 2000 0
    stress-ng: info: [27595] stress-ng-cyclic: 3000 82
    stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
    stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
    stress-ng: info: [27595] stress-ng-cyclic: 6000 197
    stress-ng: info: [27595] stress-ng-cyclic: 7000 209
    stress-ng: info: [27595] stress-ng-cyclic: 8000 100
    stress-ng: info: [27595] stress-ng-cyclic: 9000 50
    stress-ng: info: [27595] stress-ng-cyclic: 10000 10
    stress-ng: info: [27595] stress-ng-cyclic: 11000 9
    stress-ng: info: [27595] stress-ng-cyclic: 12000 2
    stress-ng: info: [27595] stress-ng-cyclic: 13000 2
    stress-ng: info: [27595] stress-ng-cyclic: 14000 1
    stress-ng: info: [27595] stress-ng-cyclic: 15000 9
    stress-ng: info: [27595] stress-ng-cyclic: 16000 1
    stress-ng: info: [27595] stress-ng-cyclic: 17000 1
    stress-ng: info: [27595] stress-ng-cyclic: 18000 0
    stress-ng: info: [27595] stress-ng-cyclic: 19000 0
    stress-ng: info: [27595] stress-ng-cyclic: 20000 0
    stress-ng: info: [27595] stress-ng-cyclic: 21000 1
    stress-ng: info: [27595] stress-ng-cyclic: 22000 1
    stress-ng: info: [27595] stress-ng-cyclic: 23000 0
    stress-ng: info: [27595] stress-ng-cyclic: 24000 1
    stress-ng: info: [27595] stress-ng-cyclic: 25000 2
    stress-ng: info: [27595] stress-ng-cyclic: 26000 0
    stress-ng: info: [27595] stress-ng-cyclic: 27000 1
    stress-ng: info: [27595] stress-ng-cyclic: 28000 1
    stress-ng: info: [27595] stress-ng-cyclic: 29000 2
    stress-ng: info: [27595] stress-ng-cyclic: 30000 0
    stress-ng: info: [27595] stress-ng-cyclic: 31000 0
    stress-ng: info: [27595] stress-ng-cyclic: 32000 0
    stress-ng: info: [27595] stress-ng-cyclic: 33000 0
    stress-ng: info: [27595] stress-ng-cyclic: 34000 0
    stress-ng: info: [27595] stress-ng-cyclic: 35000 0
    stress-ng: info: [27595] stress-ng-cyclic: 36000 1
    stress-ng: info: [27595] stress-ng-cyclic: 37000 0
    stress-ng: info: [27595] stress-ng-cyclic: 38000 0
    stress-ng: info: [27595] stress-ng-cyclic: 39000 0
    stress-ng: info: [27595] stress-ng-cyclic: 40000 0
    stress-ng: info: [27595] stress-ng-cyclic: 41000 0
    stress-ng: info: [27595] stress-ng-cyclic: 42000 0
    stress-ng: info: [27595] stress-ng-cyclic: 43000 0
    stress-ng: info: [27595] stress-ng-cyclic: 44000 1
    stress-ng: info: [27594] successful run completed in 5.00s


    Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

    The above example uses the following options:

    • --cyclic 1
      • starts one instance of the cyclic measurements (1 is always recommended)
    • --cyclic-policy fifo 
      • use the real time First-In-First-Out scheduling for the cyclic measurements
    • --cyclic-prio 100 
      • use the maximum scheduling priority  
    • --cyclic-method clock_ns
      • use the clock_nanosleep(2) system call to perform the high precision duration sleep
    • --cyclic-sleep 20000 
      • sleep for 20000 nanoseconds per cyclic iteration
    • --cyclic-dist 1000 
      • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
    • -t 5
      • run for just 5 seconds
    From the run above, we can see that 99.5% of latencies were less than 9821 nanoseconds and most clustered around the 4880 nanosecond modal point. The distribution data shows that there is some clustering around the 5000 nanosecond point and the samples tail off with a bit of a long tail.

    Now for the interesting part. Since stress-ng is packed with many different stressors we can run these while performing the cyclic measurements, for example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 \
    --class vm --all 1 -t 60s

    The above invokes all the stressors in the vm class at the same time (with just one instance of each stressor) for 60 seconds.

    The --cyclic-method option specifies the delay method used on each of the 10,000 cyclic iterations. The default (and recommended) method is clock_ns, using the high precision sleep.  The available cyclic delay methods are:
    • clock_ns (use the clock_nanosleep() sleep)
    • posix_ns (use the POSIX nanosleep() sleep)
    • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
    • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
    All the delay mechanisms use the CLOCK_REALTIME system clock for timing.
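
    As a further illustration of these options, here is a hedged sketch of a run that uses the round-robin policy and the itimer delay method while loading a couple of CPU stressors at the same time (the priority, sleep time and stressor choice are just examples):

     sudo stress-ng --cyclic 1 --cyclic-policy rr \
    --cyclic-prio 90 --cyclic-method itimer \
    --cyclic-sleep 20000 --cpu 2 -t 30

    Comparing the resulting percentiles with the idle-machine run above gives a feel for how much latency a given workload adds.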

    I hope this is plenty of cyclic measurement functionality to get some useful latency benchmarks against various kernel components when using some or a mix of the stress-ng stressors.  Let me know if I am missing some other cyclic measurement options and I can see if I can add them in.

    Keep stressing and measuring those systems!

    on June 22, 2017 06:45 PM
    Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

    I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


    If you're interested in learning more, check out:
    It was a great audience, with plenty of good questions, pizza, and networking!

    I'm pleased to share my slide deck here.

    Cheers,
    Dustin
    on June 22, 2017 03:20 PM

    The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Pike b2 milestone in Ubuntu 17.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

    Ubuntu 16.04 LTS

    You can enable the Ubuntu Cloud Archive for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

    sudo add-apt-repository cloud-archive:pike
    sudo apt update
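
    Once the cloud archive is enabled, Pike packages can be installed with apt as usual; for example (the package chosen here is purely illustrative):

    sudo apt install nova-compute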

    The Ubuntu Cloud Archive for Pike includes updates for Barbican, Ceilometer, Cinder, Congress, Designate, Glance, Heat, Horizon, Ironic, Keystone, Manila, Murano, Neutron, Neutron FWaaS, Neutron LBaaS, Neutron VPNaaS, Neutron Dynamic Routing, Networking OVN, Networking ODL, Networking BGPVPN, Networking Bagpipe, Networking SFC, Nova, Sahara, Senlin, Trove, Swift, Mistral, Zaqar, Watcher, Rally and Tempest.

    We’ve also now included GlusterFS 3.10.3 in the Ubuntu Cloud Archive in order to provide new stable releases back to Ubuntu 16.04 LTS users in the context of OpenStack.

    You can see the full list of packages and versions here.

    Ubuntu 17.10

    No extra steps required; just start installing OpenStack!

    Branch Package Builds

    If you want to try out the latest master branch updates, or updates to stable branches, we are maintaining continuously integrated packages in the following PPAs:

    sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
    sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
    sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

    Bear in mind these are built per commit (we check for new commits roughly every 30 minutes at the moment), so your mileage may vary from time to time.

    Reporting bugs

    Any issues please report bugs using the ‘ubuntu-bug’ tool:

    sudo ubuntu-bug nova-conductor

    This will ensure that bugs get logged in the right place in Launchpad.

    Still to come…

    In terms of general expectation for the OpenStack Pike release in August we’ll be aiming to include Ceph Luminous (the next stable Ceph release) and Open vSwitch 2.8.0 so long as the release schedule timing between projects works out OK.

    And finally – if you’re interested in the general stats – Pike b2 involved 77 package uploads, including 4 new packages for new Python module dependencies!

    Thanks and have fun!

    James


    on June 22, 2017 10:00 AM

    Input Method Editors, or IMEs for short, are ways for a user to input text in another, more complex character set using a standard keyboard, commonly used for Chinese, Japanese, and Korean languages (CJK for short). So in order to type anything in Chinese, Japanese, or Korean, you must have a working IME for that language.

    Quite obviously, especially considering the massive userbase for these languages, it’s crucial for IMEs to be quick and easy to set up, and to work in any program you decide to use.

    The reality is quite far from this. While there are many problems that exist with IMEs under Linux, the largest one I believe is the fact that there’s no (good) standard for communicating with programs.

    IMEs all have to implement a number of different interfaces, the 3 most common being XIM, GTK (2 and 3), and Qt (3, 4, and 5).

    XIM is the closest we have to a standard interface, but it’s not very powerful: the pre-editing string doesn’t always work properly, it isn’t extensible to more advanced features, it doesn’t work well under many window systems (in those I’ve tested, the candidate window will always appear at the bottom of the window instead of beside the text), and it has a number of other shortcomings that I have heard exist but am not personally aware of (not being someone who uses IMEs very often).

    GTK and Qt interfaces are much more powerful, and work properly, but, as might be obvious, they only work with GTK and Qt. Any program using another widget toolkit (such as FLTK, or custom widget toolkits, which are especially prevalent in games) needs to fall back to the lesser XIM interface. Working around this is theoretically possible, but very difficult in practice, and requires GTK or Qt to be installed anyway.

    IMEs also need to provide libraries for every version of GTK and Qt. If an IME is not updated to support the latest version, you won’t be able to use the IME in applications using the latest version of GTK or Qt.

    This, of course, adds quite a large amount of work to IME developers, and causes quite a problem with IME users, where a user will no longer be able to use an IME they prefer, simply because it has not been updated to support programs using a newer version of the toolkit.

    I believe these issues make it very difficult for the Linux ecosystem to advance as a truly internationalized environment. First, they limit application developers who truly wish to honor international users to two GUI toolkits, GTK and Qt. Secondly, they force IME developers to constantly update their IMEs to support newer versions of GTK and Qt, requiring a large amount of effort and duplicated code, which can result in many bugs (and abandonment).

     

    I believe fixing this issue would require a unified API that is toolkit agnostic. There are two obvious approaches that come to mind.

    1. A library that an IME would provide that every GUI application would include
    2. A client/server model, where the IME is a server, and the clients are the applications

    Option #1 would be the easiest and least painful to implement for IME developers, and I believe is actually the way GTK and Qt IMEs work. But there are also problems with this approach: if the IME crashes, the entire host application crashes with it, and only one IME could be installed at a time (since every IME would need to provide the same library). The latter is not necessarily a big issue for most users, but on multi-user desktops it can be a big issue.

    Option #2 would require more work from the IME developers, juggling client connections and the like (although this could be abstracted with a library, similar to Wayland’s architecture). However, it would also mean a separate address space (therefore, if the IME crashes, nothing else would crash as a direct result), the possibility of more than one IME being installed and used at once, and even the possibility of hotswapping IMEs at runtime.

    The problem with both of these options is the lack of standardization. While they can adhere to a standard for communicating with programs, configuration, dealing with certain common problems, etc. are all left to the IME developers. This is the exact problem we see with Wayland compositors.

    However, there’s also a third option: combining the best of both worlds from the options provided above. This would mean having a standard server that loads a library providing the IME-related functions. If there are ever any major protocol changes, common issues, or anything of the like, the server can be updated while the IMEs are left intact. The library that it loads would be, of course, entirely configurable by the user, and the server could also host a number of common options for IMEs (and maybe also a format for configuring IME-specific options), so if a user decides to switch IMEs, they wouldn’t need to completely redo their configuration.

    Of course, the server would also be able to provide clients for XIM and GTK/Qt-based frontends, for programs that don’t use the protocol directly.

    Since I’m not very familiar with IMEs, I haven’t yet started a project implementing this idea, since there may be challenges about a method like this that might have already been discussed, but that I’m not aware of.

    This is why I’m writing this post, to hopefully bring up a discussion about how we can improve the state of IMEs under Linux :) I would be very willing to work with people to properly design and implement a better solution for the problem at hand.


    on June 22, 2017 07:08 AM

    June 21, 2017

    The other day some of my fellow Ubuntu developers and I were looking at bug 1692981 and trying to figure out what was going on. While we don’t have an answer yet, we did use some helpful tools (at least one of which somebody hadn’t heard of) to gather more information about the bug.

    One such tool is lp-bug-dupe-properties from the lptools package in Ubuntu. With this it is possible to quickly find out information about all the duplicates, 36 in this case, of a bug report. For example, if we wanted to know which releases are affected we can use:

    lp-bug-dupe-properties -D DistroRelease -b 1692981

    LP: #1692981 has 36 duplicates
    Ubuntu 16.04: 1583463 1657243 1696799 1696827 1696863 1696930 1696940
    1697011 1697016 1697068 1697099 1697121 1697280 1697290 1697313 1697335
    1697356 1697597 1697801 1697838 1697911 1698097 1698100 1698104 1698113
    1698150 1698171 1698244 1698292 1698303 1698324 1698670 1699329
    Ubuntu 16.10: 1697072 1698098 1699356

    While lp-bug-dupe-properties is useful, in this case it’d be helpful to search the bug’s attachments for more information. Luckily there is a tool, lp-grab-attachments (also part of lptools), which will download all the attachments of a bug report, and of its duplicates if you want. Having done that you can then use grep to search those files.

    lp-grab-attachments -dD 1692981

    The ‘-d’ switch indicates I want to get the attachments from duplicate bug reports and the ‘-D’ switch indicates that I want to have the bug description saved as Description.txt. While saving the description provides some of the same capability as lp-bug-dupe-properties it ends up being quicker. Now with the attachments saved I can do something like:

    for desc in $(find . -name Description.txt); do grep -E "dpkg 1\.18\.(4|10)" "$desc";
    done

    ...
    dpkg 1.18.4ubuntu1.2
    dpkg 1.18.10ubuntu2
    dpkg 1.18.10ubuntu1.1
    dpkg 1.18.4ubuntu1.2
    ...

    and find out that a variety of dpkg versions are in use when this is encountered.
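
    The same approach extends beyond the bug descriptions; since lp-grab-attachments downloads every attachment, a quick recursive grep (the pattern and file extension here are illustrative) can surface other details across all of the duplicates:

    grep -r "dpkg 1\.18\." . --include='*.txt'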

    I hope you find these tools useful and I’d be interested to hear how you use them!

    on June 21, 2017 05:40 PM
    I first started using Ubuntu just a few weeks after Lucid Lynx was released and have used Ubuntu, Kubuntu, Xubuntu, Lubuntu and Ubuntu GNOME since then. Towards the end of 2016 I took early retirement and decided to curtail some of my Ubuntu related activities in favour of some long abandoned interests which went back to the 1960s. Although I had no intention of spending every day sat in front of a computer screen I still wished to contribute to Ubuntu but at a reduced level. However, recent problems relating to my broadband connection, which I am hoping are now over, prompted me to look closely at how I could continue to contribute to Ubuntu if I lost my "always on" internet.

    Problems

    Thanks to my broadband provider, whose high profile front man sports a beard and woolly jumpers, my connection changed from being one that was "always on" to one that was "usually off". There's a limit to how many times I'm prepared to reboot my cable modem on the advice of the support desk, be sent unnecessary replacement modems because the one I'm using must be faulty, to allow engineers into my home to measure signal levels, and be told the next course of action will definitely get my connection working only to find that I'm still off-line the next day and the day after. I kept asking myself: "Just how many engineers will they need to send before someone successfully diagnoses the problem and fixes it?"

    Mobile broadband

    Much of my recent web browsing, on-line banking, and updating of my Xubuntu installations has been done with the aid of two iPhones acting as access points while connected to the 3 and EE mobile networks. It was far from being an ideal situation: connection speeds were often very low by today's standards, but "it worked" and the connections were far more reliable than I thought they would be. A recent test during the night showed the download speed on a 4G connection to be comparable to that offered by many other broadband providers. But downloading large Ubuntu updates took a long time, especially during the evening. As updating the pre-installed apps on a smartphone can quickly use up one's monthly data allowance, I made myself aware of where I could find local Wi-Fi hotspots to apply some of the important or large phone updates and save some valuable bandwidth for Ubuntu. Interestingly, with the right monthly plan and using more appropriate hardware than a mobile phone, I could actually save some money by switching from cable to mobile broadband, although I would definitely miss the 100Mb/s download speed that is most welcome when downloading ISO images or large Ubuntu updates.

    ISO testing

    Unfortunately these problems, which lasted for over three weeks, meant that I had to cease ISO testing due to the amount of data I would need to download several times each week. I had originally intended to get a little more involved with testing of the development release of Xubuntu during the Artful cycle but those plans were put on hold while I waited for my broadband connection to be restored and deemed to have been fixed permanently. During this outage I still managed to submit a couple of bug reports and comment on a few others but my "always on" high speed connection was very much missed.

    Connection restored!

    How I continue with Ubuntu long-term will now depend on the reliability of my broadband connection which does seem to have now been restored to full working order. I'm finalising this post a week after receiving yet another visit from an engineer who restored my connection in just a matter of minutes. Cables had been replaced and signal levels had been measured and brought to within the required limits. Apparently the blame for the failure of the most recent "fix" was put solely on one of his colleagues who I am told failed to correctly join two cables together. In other words, I wasn't actually connected to their network at all. It must have been so very obvious to my modem/router which sat quietly in the corner of the room forever looking to connect to something that it just could not find and yet was unable to actually tell me so. If only such devices could actually speak....

    on June 21, 2017 09:58 AM
    Friday, I uploaded an updated nplan package (version 0.24) to change its Priority: field to important, as well as an update of ubuntu-meta (following a seeds update), to replace ifupdown with nplan in the minimal seed.

    What this means concretely is that nplan should now be installed by default on all images as part of ubuntu-minimal, with ifupdown dropped at the same time.

    For the time being, ifupdown is still installed by default due to the way debootstrap generates the very minimal images used as a base for other images -- its base set of packages depends only on the Priority: field of packages. Thus, nplan was added, but ifupdown's priority still needs to be changed (which I will do shortly) for it to disappear from all images.

    The intent is that nplan would now be the standard way of configuring networks. I've also sent an email about this to ubuntu-devel-announce@.

    I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).

    MaaS, cloud-init and others have already started to support writing netplan configuration.

    The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available here.

    While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest to use documentation. It should always be up to date with the current version of netplan available on your release (since we backported the last version to Xenial, Yakkety, and Zesty), and accessible via:

    man 5 netplan
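
    While that write-up is pending, here is a minimal sketch of what a netplan YAML file looks like (the file name, interface name and renderer are illustrative), bringing a single NIC up via DHCP:

    # /etc/netplan/01-netcfg.yaml
    network:
      version: 2
      renderer: networkd
      ethernets:
        eth0:
          dhcp4: true

    After editing the file, sudo netplan apply (or simply rebooting) should get the chosen renderer to pick up the new configuration.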

    To make things "easy" however, you can also check out the netplan documentation directly from the source tree here:

    https://git.launchpad.net/netplan/tree/doc/netplan.md

    There's also a wiki page I started to get ready that links to the most useful things, such as an overview of the design of netplan, some discussion on the renderers we support and some of the commands that can be used.

    We even have an IRC channel on Freenode: #netplan

    I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:

    on June 21, 2017 02:10 AM

    June 20, 2017

    Now that Ubuntu phones and tablets are gone, I would like to offer my thoughts on why I personally think the project failed and what one may learn from it.
    on June 20, 2017 03:00 PM

    June 19, 2017

    Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

    I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

    The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

    Here are a few visual highlights of this release.

    The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

    Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

    A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

    Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

    For details of these and other changes, see the commit log or the NEWS file.

    GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

    on June 19, 2017 11:15 PM

    This wasn't a joke! As previously announced, a few days ago I attended the GNOME Fractional Scaling Hackfest that Jonas Ådahl of Red Hat and I organized at the Canonical office in Taipei 101.
    Although the location was chosen mostly because it was the one closest to Jonas and near enough to my temporary place, it turned out to be the best we could have used, given the huge amount of hardware available there, including some 4k monitors and HiDPI laptops.
    Being there also allowed another local Canonical employee (Shih-Yuan Lee) to join our efforts!

    That being said, I have to thank my employer for allowing me to do this and for sponsoring the event to help make GNOME a better desktop for Ubuntu (and not only Ubuntu).

    Going deeper into the event (for which we tracked the more technical items in a WIP journal), it has been a very tough week, working hard until late while looking for the various edge cases and discovering bugs that the new “logically sized” framebuffers and actors were causing.

    In fact, as I’ve already quickly explained, the whole idea is to paint all the screen actors at the maximum scale value across the displays they intersect and then use scaled framebuffers when painting, so that we can redefine the screen coordinates in logical pixels rather than pixel units. However, since we want to be able to use elements of any size scaled by (potentially) any fractional value, we might run into problems when we eventually go back to the pixel level, where everything is integer-indexed.

    We started by defining the work items for the week and setting up some other HiDPI laptops (Dell XPS 15 and XPS 13 mostly) we got from the office with jhbuild; then, as you can see, we defined a list of things to care about:

    • Supporting multiple scaling values: allowing the interface to be scaled up and down (< 1.0), not only to well-known values, but across a wider range of supported floats
    • Non-perfect scaling: covering the cases in which the actor (or the whole monitor), when scaled up/down to a fractional level, no longer has a pixel-friendly size, and thus there are input and output issues to handle due to rounding.
    • GNOME Shell UI: the shell StWidgets need to be drawn at the proper resource scaling value, so that when they’re painted they won’t look blurred.
    • Toolkit support: there are some Gtk issues when scaling to more than 2x, while Qt already supports fractional scaling.
    • Wayland protocol improvements: related to the point above, we might define a way to tell toolkits the actual fractional scaling value, so that they could be scaled at the real value, instead of asking them to scale up to the next integer scaling level. Also, when it comes to games and video players, they should not be scaled up/down at all.
    • X11 clients: supporting XWayland clients

    What we did

    As you can see, the list of things we meant to work on or plan was quite juicy, more than enough for one week, but even if we didn’t finish all the tasks (despite the Super-Jonas powers :-)), we were able to start or address the work for most of them, so we know what to work on over the next weeks.

    Scaling at 1.25x

    As a start, we had to ensure mutter supported various scaling values (including ones < 1.0); we decided (this might change, but given the Unity experience it proved to work well) to support 8 intermediate values per integer, from 0.5 to 4.0. This, as said, leads to trouble with many resolutions (as you see in the board picture, 1280×720 is an example of a case that doesn’t work well when scaled at 1.5, for instance). So we decided to make mutter expose a list of supported scaling values for each mode, and we defined an algorithm to compute the closest “good” scaling level that yields a properly integer-sized logical screen.
    This caused a configuration API change, and we updated gnome-settings-daemon and gnome-control-center accordingly, also adding some UI changes to reflect and control this new feature.
    On top of that, the ability to have such fractional values caused various glitches in mutter, mostly related to the damage algorithm, which Jonas refactored. Other issues with screenshots and gnome-shell fullscreen animations have also been found and fixed.

    Speaking of the Gnome Shell toolkit, we started some work to fix the drawing of cairo-based areas, while I already had something done for labels that still needs to be tuned. Shih-Yuan fixed a scaling problem with the workspace thumbnails.

    On toolkit support, we didn’t do much (apart from Gnome Shell), as the Gtk problem is not something that affects us much in normal scenarios yet, but we still debugged the issue; supporting fractional-friendly toolkits through an improved Wayland protocol is probably a future optimization. It is, however, quite important to define such a protocol for apps that shouldn’t be scaled at all, such as games, but in order to do that we need feedback from game developers too, so that we can define it in the best way.

    Not much has been done in the XWayland world either (right now everything is scaled to the required value by mutter, but the toolkit will also use scale 1, which leads to somewhat blurred results), but we agreed that we’d probably need to define an X11 protocol for this.

    We finally spent some time defining an algorithm for picking the preferred scaling per mode. This is quite a controversial aspect, and anyone might have their own ideas on this (especially OEMs). So far we defined some DPI limits that we’ll use to evaluate whether a fractional scaling level should be applied or not: outside these limits (which change depending on whether we’re handling a laptop panel or an external monitor [potentially in a docked setup]) we use integer scaling; in between them we instead use proportional (fractional) values.
    One idea I had was to look at the problem the other way around and define instead the minimum physical size (in tenths of a mm) we want a pixel to be, and then scale to ensure we reach those thresholds, instead of defining DPIs (again, that physical size should be weighted differently for external monitors, though). Also, hardware vendors might want to be able to tune these defaults, so another idea was to provide a way for them to define defaults per panel serial.
    In any case, the final and most important goal, to me, is to provide defaults that guarantee usable and readable HiDPI environments, so that people are able to use gnome-control-center and adjust these values if needed.
    I also think it could be quite useful to add to the gnome-shell intro wizard an option to choose the scaling level if a high DPI monitor is detected.
    For this reason, we also filled in this wiki page with technical display info for all the hardware we had around, and we encourage you to add your info too (if you don’t have write access to the Wiki, just send it to us).

    What to do

    As you can see in our technical journal TODO, we have plenty of things to do, but the main one currently is fixing the Shell toolkit widgets, while going through various bugs and improving the XWayland clients situation. Then there are multiple optimizations to do at the mutter level too.

    When we ship

    Our target is to get this landed by GNOME 3.26, even if it might be behind an experimental gsettings key, as right now the main blocker is X11 client support.

    How to help

    The easiest thing you can do is help test the code (using jhbuild to build gnome-shell with a config based on this should be enough); filling in the scale factor tests wiki page also helps. If you want to get involved with the code, these are the git branches to look at.
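
    For the testing route, a rough sketch of what that looks like with a working jhbuild setup (the exact invocation may vary with your environment, so treat it as a starting point only):

    jhbuild build gnome-shell
    jhbuild run dbus-run-session -- gnome-shell --nested --wayland

    Running the shell nested lets you poke at the scaling work without replacing your running session.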

    Read More

    You can read a more schematic report that Jonas wrote for this event on the gnome-shell mailing list.

    Conclusions

    It has been a great event; we did and discussed many things, but first of all I was able to become more closely familiar with the GNOME code alongside those who wrote most of it, which really helped.
    We still have lots of things to do, but we’re approaching a state that will allow everyone to use differently scaled monitors at various fractional values, with no issues.

    Our final board

    Check some other pictures in my flickr gallery

    Finally, I have to say thanks a lot to Jonas, who initially proposed the event and, apart from being a terrific engineer, has been a great mate to work and hang out with, helping me discover (and survive) Taipei and its food!

    on June 19, 2017 09:03 PM

    The second release of the GTK+ 3 powered Xfce Settings is now ready for testing (and possibly general use).  Check it out!

    What’s New?

    This release now requires xfconf 4.13+.

    New Features

    • Appearance Settings: New configuration option for default monospace font
    • Display Settings: Improved support for embedded DisplayPort connectors

    Bug Fixes

    • Display Settings: Fixed drawing of displays; it was hit and miss before, now it’s guaranteed
    • Display Settings: Fixed drag-and-drop functionality; the grab area previously occupied the space below the drawn displays
    • Display Settings (Minimal): The mini dialog now runs as a single instance, which should help with some display drivers (Xfce #11169)
    • Fixed linking to dbus-glib with xfconf 4.13+ (Xfce #13633)

    Deprecations

    • Resolved gtk_menu_popup and gdk_error_trap_pop deprecations
    • Ignoring GdkScreen and GdkCairo deprecations for now. Xfce shares this code with GNOME and Mate, and they have not found a resolution yet.

    Code Quality

    • Several indentation fixes
    • Dropped duplicate drawing code, eliminating another deprecation in the process

    Translation Updates

    Arabic, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, French, Galician, German, Greek, Hebrew, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Swedish, Thai, Ukrainian

    Downloads

    The latest version of Xfce Settings can always be downloaded from the Xfce archives. Grab version 4.13.1 from the below link.

    http://archive.xfce.org/src/xfce/xfce4-settings/4.13/xfce4-settings-4.13.1.tar.bz2

    • SHA-256: 01b9e9df6801564b28f3609afee1628228cc24c0939555f60399e9675d183f7e
    • SHA-1: 9ffdf3b7f6fad24f4efd1993781933a2a18a6922
    • MD5: 300d317dd2bcbb0deece1e1943cac368
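
    To check a download against the hashes above, something along these lines should do (shown for the SHA-256 value; the other hashes work the same way with sha1sum or md5sum):

    wget http://archive.xfce.org/src/xfce/xfce4-settings/4.13/xfce4-settings-4.13.1.tar.bz2
    sha256sum xfce4-settings-4.13.1.tar.bz2

    The printed hash should match the SHA-256 listed above.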
    on June 19, 2017 09:40 AM

    The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bug fixes in released MAAS versions.

    MAAS Sprint

    The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

    MAAS 2.3 (current development release)

    The team has been working on the following features and improvements:

    • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature.
    • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes to the upcoming transition to Git. The progress so far is:
      • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
      • Started working on creating new processes for PR’s auto-testing and landing.
    • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
    • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
    • Started the removal of ‘tgt’ as a dependency – This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
    • UI Improvements
      • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
      • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
      • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.

    Bug Fixes

    The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

    • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
    • LP: #1652298 – Improve loading of elements in the device discovery page
    on June 19, 2017 09:15 AM

    Mission Reports

    Stephen Michael Kellat

    Well, taking just over 60 days to write again is not generally a good sign. Things have been incredibly busy at the day job. Finding out that a Reduction In Force is expected to happen in late September/early October also sharpens the mind as to the state of the economy. Our CEO at work is somewhat odd, to say the least. Certain acts by the CEO remain incredibly confusing if not utterly baffling.

    In UK-slang, I guess I could probably be considered a "God-botherer". I've been doing work as an evangelist lately. The only product though has been the Lord's Kingdom. One of the elders at church wound up with their wife in a local nursing home due to advanced age as well as deteriorating health so I got tasked with conducting full Sunday services at the nursing home. Compared to my day job, the work has been far more worthwhile serving people in an extended care setting. Sadly it cannot displace my job that I am apparently about to lose in about 90 days or so anyhow thanks to pending actions of the board and CEO.

    One other thing I have had running in the background has been the external review of Outernet. A short research note was drawn up in LaTeX and was submitted somewhere but bounced. Thanks to the magic of Pandoc, I was able to convert it to HTML to tack on to this blog post.

    The Outernet rig in the garage

    The Outernet rig is based in my garage to simulate a field deployment. The goal of their project is to get these boards into the wild in places like the African continent. Those aren't "clean room" testing environments. If anything, temperature controls go out the window. My only indulgence is that I added an uninterruptible power supply due to known failures in the local grid.

    The somewhat disconnected Raspberry Pi B+ known as ASTROCONTROL to connect to the Outernet board to retrieve materials

    Inside the house a Raspberry Pi running Raspbian is connected via Ethernet to a Wi-Fi extender to reach out to the Outernet board. I have to manually set the time every time that ASTROCONTROL is used. Nothing in the mix is connected to the general Internet. The connection I have through Spectrum is not really all that great here in Ashtabula County.

    As seen through ConnectBot, difficulties logging in

    The board hit a race condition at one point recently where nothing could log in. A good old-fashioned IT Crowd-style power-cycling resolved the issue.

    Pulling files on the Outernet board itself as seen in a screenshot via Cathode on an iPad

    Sometimes I have used the Busybox version of tar on the board to gather files to review off the board.

    The Outernet UI as seen on a smartphone

    The interface gets a little cramped on a smartphone like the one I have.

    And now for the text of the paper that didn't make the cut...

    Introduction

    A current endeavor is to review the Outernet content distribution system. Outernet is a means to provide access to Internet content in impaired areas.1 This is not the only effort to do so, though. At the 33rd Chaos Communications Congress there was a review of the signals being transmitted with a view to reverse engineering it.2 The selection of content as well as the innards of the mainboard shipped in the do-it-yourself kit wind up being areas of review that continue.

    In terms of concern, how is the content selected for distribution over the satellite platform? There is no known content selection policy. Content reception was observed to try to discern any patterns.

    As to the software involved, how was the board put together? Although the signals were focused on at the Chaos Communications Congress, it is appropriate to learn what is happening on the board itself. As designed, the system intends for access to be had through a web browser. There is no documented method of bulk access for data. A little sleuthing shows that that is possible, though.

    Low-Level Software

    The software powering the mainboard, a C.H.I.P. device, was put together in an image using the Buildroot cross-compilation system. Beyond the expected web-based interface, a probe using Nmap found that ports were open for SSH as well as traditional FTP. The default directory for the FTP login is a mount point where all payloads received from the satellite platform are stored. The SSH session is provided by Dropbear and deposits you in a Busybox session.

    The mainboard currently in use has been found to have problems with power interruption. After having to vigorously re-flash the board due to filesystem corruption caused by a minor power disruption, an uninterruptible power system was purchased to keep it running. Over thirty days of running, as measured by the Busybox-exposed command uptime, was gained through putting the rig on an uninterruptible power supply. The system does not adapt well to the heat observed during summer in northeast Ohio; we have had to power-cycle it during high temperature periods when remote access became unavailable.

    Currently the Outernet mainboard is being operated air-gapped from other available broadband to observe how it would operate in an Internet-impaired environment. The software operates a Wi-Fi access point on the board with the board addressable at 10.0.0.1. Maintaining a constant connection through a dedicated Raspberry Pi and associated monitor plus keyboard has not proved simple so far.
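
    For reference, the Nmap probe mentioned earlier amounts to something like the following sketch, with the flags and port list chosen purely for illustration based on the services described (web interface, FTP, and SSH), run from a client associated with the board's access point:

    nmap -sV -p 21,22,80 10.0.0.1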

    Content Selection

    Presently a few categories of data are routinely transmitted. Weather data is sent for viewing in a dedicated applet. News ripped from the RSS feeds of selected news outlets such as the Voice of America, Deutsche Welle, and WTOP is sent routinely but is not checked for consistency. For example, one feed routinely pushes a daily page saying that the entire feed is broken. Pages from Wikipedia are sent but there is no pattern discernible yet as to how the pages are picked.

    Currently there is a need to review how Wikipedia may make pages available in an automated fashion. It is an open question as to how these pages are being scraped. Is there a feed? Is there manual intervention at the point of uplink? The pages sent are not the exact web-based versions or PDF exports but rather the printer-friendly versions. For now investigation needs to occur relative to how Wikipedia releases articles to see if there is anything that correlates with what is being released.

    There are still open questions that require review. The opacity of the content selection policies and procedures limits the platform's utility. That opacity prevents a user from having a reasonable expectation of what exactly is coming through on the downlink.

    Conclusion

    A technical platform is only a means. With the computers involved at each end, older ideas for content distribution are reborn for access-impaired areas. Content remains key, though.


    1. Alyssa Danigelis, "'Outernet' Project Seeks Free Internet Access For Earth?: Discovery News," DNews, February 25, 2014, http://news.discovery.com/tech/gear-and-gadgets/outernet-project-seeks-free-internet-access-for-earth-140225.htm.

    2. Reverse Engineering Outernet (Hamburg, Germany, 2016), https://media.ccc.de/v/33c3-8399-reverse_engineering_outernet.

    on June 19, 2017 01:41 AM

    June 18, 2017

    Xfce 4.14 development has been picking up steam in the past few months.  With the release of Exo 0.11.3, things are only going to get steamier.  

    What is Exo?

    Exo is an Xfce library for application development. It was introduced years ago to aid the development of Xfce applications.  It’s not used quite as heavily these days, but you’ll still find Exo components in Thunar (the file manager) and Xfce Settings Manager.

    Exo provides custom widgets and APIs that extend the functionality of GLib and GTK+ (both 2 and 3).  It also provides the mechanisms for defining preferred applications in Xfce.

    What’s New in Exo 0.11.3?

    New Features

    • exo-csource: Added a new --output flag to write the generated output to a file (Xfce #12901); a usage sketch follows this list
    • exo-helper: Added a new --query flag to determine the preferred application (Xfce #8579)
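
    As a rough illustration of the new exo-csource flag (the file names, and the --static and --name options shown here, are assumptions based on typical exo-csource usage rather than taken from the release notes, so check the manpage for your version):

    exo-csource --static --name=example_ui --output=example-ui.h example.ui

    Previously the generated C source went to standard output and had to be redirected by hand; the new flag writes it straight to the named file.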

    Build Changes

    • Build requirements were updated.  Exo now requires GTK+ 2.24, GTK 3.20, GLib 2.42, and libxfce4ui 4.12
    • Building GTK+ 3 libraries is no longer optional
    • Default debug setting is now “yes” instead of “full”. This means that builds will not fail if there are deprecated GTK+ symbols (and there are plenty).

    Bug Fixes

    • Discard preferred application selection if dialog is canceled (Xfce #8802)
    • Do not ship generic category icons; these are standard (Xfce #9992)
    • Do not abort builds due to deprecated declarations (Xfce #11556)
    • Fix crash in Thunar on selection change after directory change (Xfce #13238)
    • Fix crash in exo-helper-1 from GTK 3 migration (Xfce #13374)
    • Fix ExoIconView being unable to decrease its size (Xfce #13402)

    Documentation Updates

    Available here

    • Add missing per-release API indices
    • Resolve undocumented symbols (100% symbol coverage)
    • Updated project documentation (HACKING, README, THANKS)

    Translation Updates

    Amharic, Asturian, Catalan, Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, Galician, Greek, Indonesian, Kazakh,  Korean, Lithuanian, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese (Brazil), Russian, Serbian, Slovenian, Spanish, Thai

    Downloads

    The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.11.3 from the below link.

    http://archive.xfce.org/src/xfce/exo/0.11/exo-0.11.3.tar.bz2

    • SHA-256: 448d7f2b88074455d54a4c44aed08d977b482dc6063175f62a1abfcf0204420a
    • SHA-1: 758ced83d97650e0428563b42877aecfc9fc3c81
    • MD5: c1801052163cbd79490113f80431674a
    on June 18, 2017 05:30 PM

    Kubuntu 17.04 – Zesty Zapus

    The latest 5.10.2 bugfix update for the Plasma 5.10 desktop is now available in our backports PPA for Zesty Zapus 17.04.

    Included with the update is KDE Frameworks 5.35

    Kdevelop has also been updated to the latest version 5.1.1

    Our backports for Xenial Xerus 16.04 also receive updated Plasma and Frameworks, plus some requested KDE applications.

    Kubuntu 16.04 – Xenial Xerus

    • Plasma Desktop 5.8.7 LTS bugfix update
    • KDE Frameworks 5.35
    • Digikam 5.5.0
    • Kdevelop 5.1.1
    • Krita 3.1.4
    • Konversation 1.7.2
    • Krusader 2.6

    To update, use the Software Repository Guide to add the following repository to your software sources list:

    ppa:kubuntu-ppa/backports

    or if it is already added, the updates should become available via your preferred update method.

    The PPA can be added manually in the Konsole terminal with the command:

    sudo add-apt-repository ppa:kubuntu-ppa/backports

    and packages then updated with

    sudo apt update
    sudo apt full-upgrade

     

    Upgrade notes:

    ~ The Kubuntu backports PPA already contains significant version upgrades of Plasma, applications, Frameworks (and Qt for 16.04), so please be aware that enabling the backports PPA for the 1st time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to the versions in this announcement.  The PPA will also continue to receive bugfix and other stable updates when they become available.

    ~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], file a bug against our PPA packages [3], or optionally contact us via social media.

    1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
    2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
    3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

    on June 18, 2017 01:08 PM

    I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

    Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

    Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

    In the case of each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we’ve created. Of course, even if we do a wonderful job of keeping these websites maintained over time, eventually our research group will cease to exist. When that happens, the data will eventually disappear as well.

    The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

    Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page but it’s online on the site. Moving forward, we’ll be populating this with new datasets we create as well as replication datasets for our future empirical papers. We’re currently preparing several more.

    The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way, and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance so that if, and when, those websites go down, our data will still be accessible.


    This post was also published on the Community Data Science Collective blog.

    on June 18, 2017 02:35 AM

    June 17, 2017

    I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

    I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

    To begin, we need to install 3 packages:
    sudo apt-get install msmtp msmtp-mta ca-certificates
    Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

    # Set defaults.
    defaults
    # Enable or disable TLS/SSL encryption.
    tls on
    tls_starttls on
    tls_trust_file /etc/ssl/certs/ca-certificates.crt
    # Setup WP account's settings.
    account <MSMTP_ACCOUNT_NAME>
    host smtp.gmail.com
    port 587
    auth login
    user <EMAIL_USERNAME>
    password <PASSWORD>
    from <FROM_ADDRESS>
    logfile /var/log/msmtp/msmtp.log
    
    account default : <MSMTP_ACCOUNT_NAME>
    

    Any of the uppercase items (e.g. <PASSWORD>) are placeholders that need replacing with values specific to your configuration. The exception is the log file, which can of course be placed wherever you wish msmtp to write its activity, warnings, and errors.

    Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

    sudo mkdir /var/log/msmtp
    sudo chown -R www-data:adm /var/log/msmtp
    sudo chmod 0600 /etc/msmtprc
    

    Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large and to keep the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp with the following contents. Note that this step is optional: you may choose not to do it, or you may configure the logs differently.

    /var/log/msmtp/*.log {
    rotate 12
    monthly
    compress
    missingok
    notifempty
    }
    

    Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
    sendmail_path =
    to
    sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
    Here I did run into an issue where, even though I specified the account name, it wasn't sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that php.ini has been saved, run sudo service apache2 restart, then run php -a and execute the following:

    mail ('personal@email.com', 'Test Subject', 'Test body text');
    exit();
    

    Any errors that occur at this point will be displayed in the output, which should make diagnosing problems relatively easy. If all is successful, you should now be able to use PHP's sendmail (which WordPress, among others, uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
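
    If the PHP test doesn't behave, it can also help to take PHP out of the loop and drive msmtp directly from the shell. This is just a minimal sketch (the recipient address is a placeholder and <MSMTP_ACCOUNT_NAME> is whatever you used above); run it via sudo so that the 0600 configuration file can be read:

    printf "To: personal@email.com\nSubject: msmtp test\n\nTest body text\n" | sudo msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t
    sudo tail /var/log/msmtp/msmtp.log

    If this works but the PHP test doesn't, the problem is more likely in php.ini or file permissions than in the msmtp configuration itself.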

    I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

    on June 17, 2017 08:32 PM

    June 16, 2017

    I am going to be honest with you, I am writing this post out of one part frustration and one part guidance to people who I think may be inadvertently making a mistake. I wanted to write this up as a blog post so I can send it to people when I see this happening.

    It goes like this: when I follow someone on Twitter, I often get an automated Direct Message which looks something along these lines:

    These messages invariably are either trying to (a) get me to look at a product they have created, (b) trying to get me to go to their website, or (c) trying to get me to follow them somewhere else such as LinkedIn.

    Unfortunately, there are two similar approaches which I think are also problematic.

    Firstly, some people will have an automated tweet go out (publicly) that “thanks” me for following them (as best an automated bot who doesn’t know me can thank me).

    Secondly, some people will even go so far as to record a little video that personally welcomes me to their Twitter account. This is usually less than a minute long and again is published as an integrated video in a public tweet.

    Why you shouldn’t do this

    There are a few reasons why you might want to reconsider this:

    Firstly, automated Direct Messages come across as spammy. Sure, I chose to follow you, but if my first interaction with you is advertising, it doesn’t leave a great taste in my mouth. If you are going to DM me, send me a personal message from you, not a bot (or not at all). Definitely don’t try to make that bot seem like a human: much like someone trying to suppress a yawn, we can all see it, and it looks weird.

    Pictured: Not hiding a yawn.

    Secondly, don’t send out the automated thank-you tweets to your public Twitter feed. This is just noise that everyone other than the people you tagged won’t care about. If you generate too much noise, people will stop following you.

    Thirdly, in terms of the personal video messages (and in a similar way to the automated public thank-you messages), in addition to the noise it all seems a little…well, desperate. People can sniff desperation a mile off: if someone follows you, be confident in your value to them. Wow them with great content and interesting ideas, not fabricated personal thank-you messages delivered by a bot.

    What underlies all of this is that most people want authentic human engagement. While it is perfectly fine to pre-schedule content for publication (e.g. lots of people use Buffer to have a regular drip-feed of content), automating human engagement just doesn’t hit the mark with authenticity. There is an uncanny valley that people can almost always sniff out when you try to make an automated message seem like a personal interaction.

    Of course, many of the folks who do these things are perfectly well intentioned and are just trying to optimize their social media presence. Instead of doing the above things, see my 10 recommendations for social media as a starting point, and explore some other ways to engage your audience well and build growth.

    The post Don’t Use Bots to Engage With People on Social Media appeared first on Jono Bacon.

    on June 16, 2017 11:46 PM

    This week Alan and Martin go flashing. We discuss Firefox multi-process, Minecraft now has cross platform multiplayer, the GPL is being tested in court and binary blobs in hardware are probably a bad thing.

    It’s Season Ten Episode Fifteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    on June 16, 2017 09:33 PM

    June 15, 2017

    Akademy 2017

    Yes, I fear I have let my blog go a bit defunct. I have been very busy with a bit of a life re-invented after separation from my 18-year marriage, but all is now well in the land of Scarlett Gately Clark. I have now settled into my new life in beautiful Payson, AZ. I landed my dream job with Blue Systems and recently moved to team Neon, where I will be back at what I am good at: Debian-style packaging! I will also be working on Plasma Mobile! Exciting times. I will be attending Akademy, though out of my own pocket, as I was unable to procure funding. (I did not ask KDE e.V. due to my failure to assist with KDE CI.) I don't know what happened with CI; I turned around and it was all done. At least it got done. Thanks, Ben. I do plan to assist with CI tickets and the like in the future, as soon as the documentation is done!
    Harald and I will be hosting a Snappy BoF at Akademy; hope to see you there!

    If you find any of my work useful, please consider a donation or becoming a patron! I have 500 USD a month in student loans that are killing me. I also need funding for sprints and Akademy. Thank you for any assistance you can provide!
    Patreon for Scarlett Clark (me)

    on June 15, 2017 08:16 PM

    GNOME Web (Epiphany) in Debian 9 "Stretch"

    Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

    Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.

    Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

    As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

    If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

    on June 15, 2017 04:02 PM

    LXD logo

    Introduction

    As you may know, LXD uses unprivileged containers by default.
    The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

    The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

    The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

    From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with just as much privileges on the host as a nobody user.
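
    If you're curious, you can see this mapping from inside a running container by reading /proc/self/uid_map. Here is a quick sketch, assuming a container named "test" that uses the default map described above (the output is illustrative):

    lxc exec test -- cat /proc/self/uid_map
             0     100000      65536

    The three columns are the first uid inside the container, the first host uid it maps to, and the length of the mapped range.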

    LXD does offer a number of options related to unprivileged configuration:

    • Increasing the size of the default uid/gid map
    • Setting up per-container maps
    • Punching holes into the map to expose host users and groups

    Increasing the size of the default map

    As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

    In most cases you won’t have to change that. There are however a few cases where you may have to:

    • You need access to uid/gid higher than 65535.
      This is most common when using network authentication inside of your containers.
    • You want to use per-container maps.
      In which case you’ll need 65536 available uid/gid per container.
    • You want to punch some holes in your container’s map and need access to host uids/gids.

    The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

    On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

    The common case, however, is a system with a recent version of shadow.
    An example of what the configuration may look like is:

    stgraber@castiana:~$ cat /etc/subuid
    lxd:100000:65536
    root:100000:65536
    
    stgraber@castiana:~$ cat /etc/subgid
    lxd:100000:65536
    root:100000:65536

    The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

    Now, if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

    stgraber@castiana:~$ cat /etc/subuid
    lxd:100000:1000000000
    root:100000:1000000000
    
    stgraber@castiana:~$ cat /etc/subgid
    lxd:100000:1000000000
    root:100000:1000000000

    After altering those files, you need to restart LXD to have it detect the new map:

    root@vorash:~# systemctl restart lxd
    root@vorash:~# cat /var/log/lxd/lxd.log
    lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
    lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
    lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
    lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
    lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
    lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
    lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
    root@vorash:~#

    As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

    You’ll then need to restart your containers to have them start using your newly expanded map.
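
    If you have more than a couple of containers, a small loop can save some typing. This is only a sketch, and it assumes that all listed containers are running and that your lxc client is recent enough to support csv output:

    for c in $(lxc list -c n --format csv); do
        lxc restart "$c"
    done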

    Per container maps

    Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

    This can be useful for two reasons:

    1. You are running software which alters kernel resource ulimits.
      Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
    2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

    The main downsides to using this feature are:

    • It’s somewhat wasteful, using 65536 uids and gids per container.
      That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
    • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

    To have a container use its own distinct map, simply run:

    stgraber@castiana:~$ lxc config set test security.idmap.isolated true
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
    [{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

    The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
    Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

    As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

    If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

    stgraber@castiana:~$ lxc config set test security.idmap.size 200000
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
    [{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

    If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

    stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
    error: Not enough uid/gid available for the container.

    Direct user/group mapping

    The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

    Now, what if you want to share your user’s home directory with a container?

    The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

    stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
    Device home added to test

    So that was pretty easy, but did it work?

    stgraber@castiana:~$ lxc exec test -- bash
    root@test:~# ls -lh /home/
    total 529K
    drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

    No. The mount is clearly there, but it’s completely inaccessible to the container.
    To fix that, we need to take a few extra steps:

    • Allow LXD’s use of our user uid and gid
    • Restart LXD to have it load the new map
    • Set a custom map for our container
    • Restart the container to have the new map apply
    stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
    lxd:201105:1
    root:201105:1
    
    stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
    lxd:200512:1
    root:200512:1
    
    stgraber@castiana:~$ sudo systemctl restart lxd
    
    stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -
    
    stgraber@castiana:~$ lxc restart test

    At which point, things should be working in the container:

    stgraber@castiana:~$ lxc exec test -- su ubuntu -l
    ubuntu@test:~$ ls -lh
    total 119K
    drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
    drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
    drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
    drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
    drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
    ubuntu@test:~$ 
    
    

    Conclusion

    User namespaces, the kernel feature that makes those uid/gid mappings possible, are a very powerful tool which finally made containers on Linux safe by design. They are, however, not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

    In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

    Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level. This would let us decouple the on-disk user/group map from the one used for processes, making it possible to share data between differently mapped containers and to alter the various maps without needing to remap the entire filesystem.

    Extra information

    The main LXD website is at: https://linuxcontainers.org/lxd
    Development happens on Github at: https://github.com/lxc/lxd
    Discussion forum: https://discuss.linuxcontainers.org
    Mailing-list support happens on: https://lists.linuxcontainers.org
    IRC support happens in: #lxcontainers on irc.freenode.net
    Try LXD online: https://linuxcontainers.org/lxd/try-it

    on June 15, 2017 01:30 PM

    Apollo 440

    Rhonda D'Vine

    It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

    But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

    So, without further ado, here are their songs:

    • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
    • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
    • Krupa: Also a very uplifting song!

    As always, enjoy!


    on June 15, 2017 10:27 AM

    I've been working on making the Inkscape CI performant on Gitlab, because if you aren't paying developers you want to make developing fun. I started by implementing ccache, which got us a 4x build-time improvement. The next piece of low-hanging fruit seemed to be the installation of dependencies, which rarely change but were getting installed on every build and test run. The Gitlab CI runners use Docker, so I set out to turn those dependencies into a Docker layer.
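
    As an aside, wiring ccache into a Gitlab CI job mostly comes down to keeping ccache's cache directory somewhere Gitlab will preserve between runs. This is a generic sketch, not the actual Inkscape configuration:

    variables:
      # keep the ccache directory inside the project so Gitlab can cache it
      CCACHE_BASEDIR: ${CI_PROJECT_DIR}
      CCACHE_DIR: ${CI_PROJECT_DIR}/ccache

    cache:
      key: "${CI_JOB_NAME}"
      paths:
        - ccache/

    On Debian/Ubuntu based images, exporting PATH=/usr/lib/ccache:$PATH in the job script is then usually enough to route compiler calls through ccache.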

    The well worn path for doing a Docker layer is to create a branch on Github and then add an automated build on Docker Hub. That leaves you with a Docker Repository that has your Docker layer in it. I did this for the Inkscape dependencies with this fairly simple Dockerfile:

    FROM ubuntu:16.04
    RUN apt-get update -yqq 
    RUN apt-get install -y -qq <long package list>
    

    For Inkscape, though, we'd really rather not set up another service, with its own accounts and permissions. That led me to Gitlab's Container Registry feature. I took the same Git branch and added a fairly generic .gitlab-ci.yml file that looks like this:

    variables:
      IMAGE_TAG: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_SLUG}:latest
    
    build:
      image: docker:latest
      services:
        - docker:dind
      stage: build
      script:
        - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
        - docker build --pull -t ${IMAGE_TAG} .
        - docker push ${IMAGE_TAG}
    

    That tells the Gitlab CI system to build a Docker layer with the same name as the Git branch and put it in the project's container registry. For Inkscape you can see the results here:

    We then just need to change our CI configuration for the Inkscape CI builds so that it uses our new image:

    image: registry.gitlab.com/inkscape/inkscape-ci-docker/master
    

    Overall, the change saves approximately one to two minutes per build. Not the drastic result I was hoping for, but this is likely because the builders are more I/O-constrained than CPU-constrained, so uncompressing the layer costs roughly the same as installing the packages. It still results in a 10% saving in total pipeline time. The bigger unexpected benefit is that it has cleaned up the CI build logs, so the first page now starts the actual Inkscape build instead of requiring you to scroll through pages of dependency installation (old vs. new).

    on June 15, 2017 05:00 AM

    June 14, 2017

    In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

    The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

    Thanks to our supporters

    The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

    Sponsorship funds help with travel expenses and refreshments.

    Food is always in the news

    In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter, and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other measures, in response to the Saudi Arabian embargo.

    The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

    on June 14, 2017 07:53 PM