Petter Reinholdtsen

Entries tagged "english".

Plain text accounting file from your bitcoin transactions
7th March 2024

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. Over the last few days I have spent some time making it handle more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
#  -*- coding: utf-8 -*-
#  Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    'amount' : 'Assets:BTC:main',
}

# Map bitcoin addresses to accounting accounts.  The empty strings
# below are placeholders; fill in your own addresses.
addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit = 100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print(txs)
    for tx in sorted(txs, key=lambda a: a['time']):
        #print(tx['category'])
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee'in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s   %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list a fee shared by several transactions only once.
            if tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                pass
            else:
                fee = tx['fee']
                print("  %-20s   %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s   %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                # Some transactions lack an address; use a placeholder then.
                addr = tx.get('address', 'missing')
                if amount < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{addr}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{addr}")

        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

if __name__ == '__main__':
    main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map the addresses sent to or received from to accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill in their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTC present in the wallet at any given time in the past. For other and more valuable analyses, an account plan needs to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
    Assets:BTC:main                           -0.1 BTC
    Assets:BTC:main                       -0.00001 BTC
    Expences:BTC-fee                       0.00001 BTC
    Expences:donations                         0.1 BTC

It needs a running Bitcoin Core daemon, as it connects to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the wallet.
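
To illustrate the workflow, here is a minimal sketch, assuming the script above is saved as an executable file named btc2ledger (the file name is my invention):

./btc2ledger > bitcoin.ledger
ledger -f bitcoin.ledger reg BTC:main

The first command writes the generated journal to a file, and the second shows the running BTC balance over time.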

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
RAID status from LSI Megaraid controllers using free software
3rd March 2024

The last few days I have revisited RAID setup using the LSI Megaraid controller. These controllers are sold under the name PERC by Dell, are present in several old PowerEdge servers, and I recently got my hands on one of these. I had forgotten how to handle this RAID controller in Debian, so I had to take a peek in the Debian wiki page "Linux and Hardware RAID: an administrator's summary" to remember what kind of software is available to configure and monitor the disks and controller. I prefer Free Software alternatives to proprietary tools, as the latter tend to fall into disarray once the manufacturer loses interest, and often do not work with newer Linux distributions. Sadly there is no free software tool to configure the RAID setup, only to monitor it. RAID can provide improved reliability and resilience in a storage solution, but only if it is regularly checked and any broken disks are replaced in time. I thus want to ensure some automatic monitoring is available.

In the discovery process, I came across an old free software tool to monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my surprise is not present in Debian. To help change that I created a request for packaging of the megactl package, and tried to track down a usable version. The original project site is on Sourceforge, but as far as I can tell that project has been dead for more than 15 years. I managed to find a more recent fork on github from user hmage, but it is unclear to me if this is still being maintained. It has not seen much improvement since 2016. A more up to date edition is a git fork from the original github fork by user namiltd, and this newer fork seems a lot more promising. The owner of this github repository has replied to change proposals within hours, and had already added some improvements and support for more hardware. Sadly he is reluctant to commit to maintaining the tool, and stated in my first pull request that he thinks a new release should be made based on the git repository owned by hmage. I perfectly understand this reluctance, as I feel the same about maintaining yet another package in Debian when I barely have time to take care of the ones I already maintain, but I do not really have high hopes that hmage will have time to spend on it, and hope namiltd will change his mind.

In any case, I created a draft package based on the namiltd edition and put it under the Debian group on salsa.debian.org. If you own a Dell PowerEdge server with one of the PERC controllers, or any other RAID controller using the megaraid or megaraid_sas Linux kernel modules, you might want to check it out. If enough people are interested, perhaps the package will make it into the Debian archive.

There are two tools provided, megactl for the megaraid Linux kernel module, and megasasctl for the megaraid_sas Linux kernel module. The simple output from the command on one of my machines looks like this (yes, I know some of the disks have problems :).

# megasasctl 
a0       PERC H730 Mini           encl:1 ldrv:2  batt:good
a0d0       558GiB RAID 1   1x2  optimal
a0d1      3067GiB RAID 0   1x11 optimal
a0e32s0     558GiB  a0d0  online   errs: media:0  other:19
a0e32s1     279GiB  a0d1  online  
a0e32s2     279GiB  a0d1  online  
a0e32s3     279GiB  a0d1  online  
a0e32s4     279GiB  a0d1  online  
a0e32s5     279GiB  a0d1  online  
a0e32s6     279GiB  a0d1  online  
a0e32s8     558GiB  a0d0  online   errs: media:0  other:17
a0e32s9     279GiB  a0d1  online  
a0e32s10    279GiB  a0d1  online  
a0e32s11    279GiB  a0d1  online  
a0e32s12    279GiB  a0d1  online  
a0e32s13    279GiB  a0d1  online  

#

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you too find it useful?
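
Since the whole point of the exercise is automatic monitoring, a crude cron job could mail root whenever a line in the report deviates from the healthy patterns visible in the output above. This is only a sketch building on that output format, and the path to megasasctl is an assumption:

# /etc/cron.d/raidstatus (illustrative): cron mails root any output,
# and healthy lines all contain one of the filtered words
0 7 * * * root /usr/sbin/megasasctl | grep -v -e optimal -e online -e batt:good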

In the packaging process I provided some patches upstream to improve installation and ensure an AppStream metainfo file is provided to list all supported hardware, to allow isenkram to propose the package on all servers with a relevant PCI card.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, isenkram, raid.
Welcome out of prison, Mickey, hope you find some freedom!
1st January 2024

Today, the animated figure Mickey Mouse finally was released from the corporate copyright prison, as the 1928 movie Steamboat Willie entered the public domain in the USA. This movie was the first public appearance of Mickey Mouse. Sadly the figure is still on probation, thanks to trademark laws and the Disney corporation's powerful pack of lawyers, as described in the 2017 article "How Mickey Mouse Evades the Public Domain" from Priceonomics. On the positive side, the primary driver for repeated extensions of the duration of copyright has been Disney, thanks to Mickey Mouse and the 1928 movie, and now that it is in the public domain I hope it will cause less urge to extend the already unreasonably long copyright duration.

The first book I published, the 2004 book "Free Culture" by Lawrence Lessig, published in 2015 in English, French and Norwegian Bokmål, touches on the story of how Disney pushed for extending the copyright duration in the USA. It is a great book explaining the problems with the current copyright regime and why we need the Creative Commons movement, and I strongly recommend everyone read it.

This movie (with IMDB ID tt0019422) is now available from the Internet Archive. Two copies have been uploaded so far, one uploaded 2015-11-04 (torrent) and the other 2023-01-01 (torrent) - see VLC bittorrent plugin for streaming the video using the torrent link. I am very happy to see the number of public domain movies increasing. I look forward to when those are the majority. Perhaps it will reduce the urge of the copyright industry to control its customers.

A more comprehensive list of works entering the public domain in 2024 is available from the Public Domain Review.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, opphavsrett, verkidetfri.
VLC bittorrent plugin still going strong, new upload 2.14-4
31st December 2023

The other day I uploaded a new version of the VLC bittorrent plugin to Debian, version 2.14-4, to fix a few packaging issues. This plugin extends VLC, allowing it to stream videos directly from a bittorrent source using both torrent files and magnet links, as easily as using an HTTP or local file source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the 11 million movies in the Internet Archive. Bittorrent is one of the most efficient content distribution protocols on the Internet, without centralised control, and should be used more.
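
Using it is as simple as handing VLC a torrent file or magnet link; here is a sketch with a placeholder URL:

# Stream directly from a torrent file published by the Internet Archive
# (the URL is an illustrative placeholder, substitute a real one)
vlc https://archive.org/download/SOMEITEM/SOMEITEM_archive.torrent

A local .torrent file or a magnet link should work the same way.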

The new version is now both in Debian Unstable and Testing, as well as in Ubuntu. While looking after the package, I decided to ask the VLC upstream community if there was any hope of getting Bittorrent support into the official VLC program, and was very happy to learn that someone is already working on it. I hope we can see some fruits of that labour next year, but I am not holding my breath. In the mean time we can use the plugin, which is already installed by 0.23 percent of the Debian population according to popularity-contest. It could use a new upstream release, and I hope the upstream developer soon finds time to polish it even more.

It is worth noting that the plugin stores the downloaded files in ~/Downloads/vlc-bittorrent/, which can quickly fill up the user's home directory during use. Users of the plugin should keep an eye on disk usage when streaming a bittorrent source.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, verkidetfri, video.
New and improved sqlcipher in Debian for accessing Signal database
12th November 2023

For a while now I have wanted direct access to the Signal database of messages and channels in my desktop edition of Signal. I prefer the enforced end to end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as raising the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week, and set out to update the package. In the process I orphaned it to make it more obvious for the next person looking at it that the package needs proper maintenance.

The version in Debian was around five years old, and quite a lot of upstream changes needed importing into the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning, as I saw library symbols being both added and removed with minor version number changes to the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition.

Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:

/usr/bin/jq -r '."key"' ~/.config/Signal/config.json

Assume the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database. One can then connect to the database and inject the encryption key to get SQL access and fetch information from the database. Here is an example dumping the database structure:

% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,

      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,

    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,

    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,

    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,

        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,

        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,

        PRIMARY KEY (payloadId, recipientServiceId, deviceId),

        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,

        messageId STRING NOT NULL,

        PRIMARY KEY (payloadId, messageId),

        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,

        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,

        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,

        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct" | "Group"
        type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group"
        direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing
        -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted"
        -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>

Finally I have the tool I need to inspect and process Signal messages, without using the vendor provided client. Now on to transforming it to a more useful format.
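
As a first taste of what is now possible, here is a sketch of a query listing the ten busiest conversations, using only columns visible in the schema above:

sqlite> SELECT conversationId, count(*) AS messages
   ...>   FROM messages GROUP BY conversationId
   ...>   ORDER BY messages DESC LIMIT 10;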

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, sikkerhet, surveillance.
New chrpath release 0.17
10th November 2023

The chrpath package provides a simple command line tool to remove or modify the rpath or runpath of compiled ELF programs. It is almost 10 years since I updated the code base, but I stumbled over the tool today, and decided it was time to move the code base from Subversion to git and find a new home for it, as the previous one (Debian Alioth) has been shut down. I decided to go with Codeberg this time, as it is my git service of choice these days, did a quick and dirty migration to git and updated the code with a few patches I found in the Debian bug tracker. These are the release notes:

New in 0.17 released 2023-11-10:

The latest edition is tagged and available from https://codeberg.org/pere/chrpath.
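
If you have never used the tool, typical invocations look like this (./myprog is a hypothetical binary; note that chrpath edits the ELF file in place, so a replacement path can not be longer than the one already there):

chrpath -l ./myprog           # print the current rpath/runpath
chrpath -r /opt/lib ./myprog  # replace it with /opt/lib
chrpath -d ./myprog           # remove it entirely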

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: chrpath, debian, english.
Test framework for DocBook processors / formatters
5th November 2023

All the books I have published so far have been using DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it was an intermediate format used as the stepping stone to be able to present the same manuscript in several formats, on paper, as an ebook in ePub format, as a HTML page and as a PDF file either for paper production or for Internet consumption. This is made possible with a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop.

Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books has triggered the need to ensure the DocBook tools handle the relevant languages correctly. For every new language I have published, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language and country specific issues in the frameworks themselves. Typically this has been missing keywords like 'figure' or sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, and I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files, a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 testing sidefloat and finally one v4 testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl and epub using dbtoepub. The result is a set of files visualizing footnotes, indexes, table of content lists, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code value and run 'make', as sketched below.
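
For example, to see how Northern Sami comes out, something along these lines should do (the file name book-v4.xml is an illustrative stand-in for one of the test files in the repository):

sed -i 's/lang="[a-z]*"/lang="se"/' book-v4.xml
make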

The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, got support for non-ascii title IDs added to pandoc, got Norwegian index sorting fixed in xindy and got initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are more often than not sorted improperly in the book index.

The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time brushing up the build rules. My goal is for the test framework to exercise all DocBook features to make it easier to see which features work with different processors, and hopefully get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and test it with your favorite DocBook processor. Please visit the two URLs mentioned above to learn more.

If you want to learn more on DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site and my earlier blog post on how the Skolelinux project processes and translates documentation, as well as a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, docbook, english.
Invidious add-on for Kodi 20
10th August 2023

I still enjoy Kodi and LibreELEC as my multimedia center at home. Sadly two of the services I really would like to use from within Kodi are not easily available. The most wanted add-on would be one making The Internet Archive available, and it has not been working for many years. The second most wanted add-on is one using the Invidious privacy enhanced Youtube frontend. A plugin for this has been partly working, but has not been kept up to date in the Kodi add-on repository, and its upstream seems to have given up on it in April this year, when the git repository was closed. A few days ago I got tired of this sad state of affairs and decided to have a go at improving the Invidious add-on. Google has already attacked the Invidious concept, so it needs all the support it can get. My small contribution here is to improve the service status on Kodi.

I added support to the Invidious add-on for automatically picking a working Invidious instance, instead of requiring the user to specify the URL to a specific instance after installation. I also had a look at the set of patches floating around in the various forks on github, and decided to clean up at least some of the features I liked and integrate them into my new release branch. Now the plugin can handle channel and short video items in search results. Earlier it could only handle single video instances in the search response. I also brushed up the set of metadata displayed a bit, but hope I can figure out how to get more relevant metadata displayed.

Because I only use Kodi 20 myself, I only test on version 20 and am only motivated to ensure version 20 is working. Because of API changes between version 19 and 20, I suspect it will fail with earlier Kodi versions.

I already asked to have the add-on added to the official Kodi 20 repository, and am waiting to hear back from the repo maintainers.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, kodi, multimedia, video.
What did I learn from OpenSnitch this summer?
11th June 2023

With yesterday's release of Debian 12 Bookworm, I am happy to know that the interactive application firewall OpenSnitch is available for a wider audience. I have been running it for a few weeks now, and have been surprised by some of the programs connecting to the Internet. Some programs are obviously calling out from my machine, like the NTP network based clock adjusting system and Tor to reach other Tor clients, but others were more dubious. For example, the KDE window manager tries to look up the host name in DNS, for no apparent reason, but if this lookup is blocked the KDE desktop gets periodically stuck when I use it. Another surprise was how much Firefox calls home directly to mozilla.com, mozilla.net and googleapis.com, to mention a few, when I visit other web pages. This direct connection happens even if I told Firefox to always use a proxy, and the proxy setting is ignored for this traffic. Other surprising connections come from audacity and dirmngr (I do not use Gnome). It took some trial and error to get a good default set of permissions. Without it, I would get popups asking for permissions at any time, even at the most inconvenient moments, such as in the middle of a time sensitive gaming session.

I suspect some application developers should rethink when they need to use network connections or DNS lookups, and recommend testing OpenSnitch (only an apt install opensnitch away in Debian Bookworm) to locate and report any surprising Internet connections on your desktop machine.

At the moment the upstream developer and Debian package maintainer is working on making the system more reliable in Debian, by enabling the eBPF kernel module to track processes and connections instead of depending on content in /proc/. This should enter unstable fairly soon.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2023-06-12: I got a tip about a list of privacy issues in Free Software and the #debian-privacy IRC channel discussing these topics.

Tags: debian, english, opensnitch.
wmbusmeters, parse data from your utility meter - nice free software
19th May 2023

There is a European standard for reading utility meters like water, gas, electricity or heat distribution meters. The Meter-Bus standard (EN 13757-2, EN 13757-3 and EN 13757-4) provides a cross vendor way to talk to and collect meter data. I ran into this standard when I wanted to monitor some heat distribution meters, and managed to find free software that could do the job. The meters in question broadcast encrypted messages with meter information via radio, and the hardest part was to track down the encryption keys from the vendor. With this in place I could set up a MQTT gateway to submit the meter data for graphing, roughly as sketched below.
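
To show how the pieces fit together, a wmbusmeters meter definition might look something like this (the meter name, driver, id and key are made-up example values, and the MQTT relay line follows the pattern from the wmbusmeters documentation):

# /etc/wmbusmeters.d/heat (one file per meter, illustrative values)
name=heat
driver=multical302
id=12345678
key=00112233445566778899AABBCCDDEEFF

# In /etc/wmbusmeters.conf, relay each decoded telegram to MQTT:
shell=/usr/bin/mosquitto_pub -h localhost -t wmbusmeters/$METER_ID -m "$METER_JSON"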

The free software systems in question, rtl-wmbus to read the messages from a software defined radio, and wmbusmeters to decrypt and decode the content of the messages, are working very well and allow me to get frequent updates from my meters. I got in touch with upstream last year to see if there was any interest in publishing the packages via Debian. I was very happy to learn that Fredrik Öhrström volunteered to maintain the packages, and I have since assisted him in getting Debian package build rules in place as well as sponsoring the packages into the Debian archive. Sadly we completed it too late for them to become part of the next stable Debian release (Bookworm). The wmbusmeters package just cleared the NEW queue. It will need some work to fix a build problem, but I expect Fredrik will find a solution soon.

If you have an infrastructure meter supporting the Meter-Bus standard, I strongly recommend having a look at these nice packages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
The 2023 LinuxCNC Norwegian developer gathering
14th May 2023

The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project github pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

The Norwegian developer gathering takes place the weekend of June 16th to 18th this year, and is open for everyone interested in contributing to LinuxCNC. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc.
OpenSnitch in Debian ready for prime time
13th May 2023

A bit delayed, the interactive application firewall OpenSnitch package in Debian now has the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.

Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend testing apt install opensnitch in Bookworm and seeing what you think.

The package is still not able to build its eBPF module within Debian. Not sure how much work it would be to get it working, but suspect some kernel related packages need to be extended with more header files to get it working.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
Speech to text, she APTly whispered, how hard can it be?
23rd April 2023

While visiting a convention during Easter, it occurred to me that it would be great if I could have a digital Dictaphone with transcribing capabilities, providing me with texts to cut-n-paste into stuff I need to write. The background is that long drives often bring up the urge to work on texts I am writing, which of course is out of the question while driving. With the release of OpenAI Whisper, this seems to be within reach with Free Software, so I decided to give it a go. OpenAI Whisper is a Linux based neural network system that reads in audio files and provides a text representation of the speech in that audio recording. It handles multiple languages and, according to its creators, can even translate into a different language than the spoken one. I have not tested the latter feature. It can either use the CPU or a GPU with CUDA support. As far as I can tell, CUDA in practice limits that feature to NVidia graphics cards. I have few of those, as they do not work great with free software drivers, and have not tested the GPU option. While looking into the matter, I did discover some work to provide CUDA support on non-NVidia GPUs, and some work with the library used by Whisper to port it to other GPUs, but have not spent much time looking into GPU support yet. I've so far used an old X220 laptop as my test machine, and only transcribed using its CPU.

As it is, from a privacy standpoint, unthinkable to use computers under the control of someone else (aka a "cloud" service) to transcribe one's thoughts and personal notes, I want to run the transcribing system locally on my own computers. The only sensible approach to me is to make the effort I put into this available to any Linux user, and to upload the needed packages into Debian. Looking at Debian Bookworm, I discovered that only three packages were missing: tiktoken, triton, and openai-whisper. For a while I also believed ffmpeg-python was needed, but as its upstream seems to have vanished, I found it safer to rewrite whisper to stop depending on it than to introduce ffmpeg-python into Debian. I decided to place these packages under the umbrella of the Debian Deep Learning Team, which seems like the best team to look after such packages. Discussing the topic within the group also made me aware that the triton package was already a future dependency of newer versions of the torch package being planned, and would be needed after Bookworm is released.

All the required code packages have now been waiting in the Debian NEW queue since Wednesday, heading for Debian Experimental until Bookworm is released. An unsolved issue is how to handle the neural network models used by Whisper. The default behaviour of Whisper is to require Internet connectivity and download the requested model to ~/.cache/whisper/ on first invocation. This obviously would fail the deserted island test of free software, as the Debian packages would be unusable for someone stranded with only the Debian archive and a solar powered computer on a deserted island.

Because of this, I would love to include the models in the Debian mirror system. This is problematic, as the models are very large files, which would put a heavy strain on the Debian mirror infrastructure around the globe. The strain would be even higher if the models changed often, which luckily, as far as I can tell, they do not. The small model, which according to its creator is most useful for English, and in my experience is not doing a great job there either, is 462 MiB (deb is 414 MiB). The medium model, which to me seems to handle English speech fairly well, is 1.5 GiB (deb is 1.3 GiB) and the large model is 2.9 GiB (deb is 2.6 GiB). I would assume everyone with enough resources would prefer to use the large model for highest quality. I believe the models themselves would have to go into the non-free part of the Debian archive, as they do not really include any useful source code for updating the models. The "source", aka the model training set, according to the creators consists of "680,000 hours of multilingual and multitask supervised data collected from the web", which to me reads as material with unknown copyright terms, unavailable to the general public. In other words, the source is not available according to the Debian Free Software Guidelines and the models should be considered non-free.

I asked the Debian FTP masters for advice regarding uploading a model package on their IRC channel, and based on the feedback there it is still unclear to me if such a package would be accepted into the archive. In any case, I wrote build rules for an OpenAI Whisper model package and modified the Whisper code base to prefer shared files under /usr/ and /var/ over user specific files in ~/.cache/whisper/, to be able to use these model packages and prepare for such a possibility. One solution might be to include only one of the models (small or medium, I guess) in the Debian archive, and ask people to download the others from the Internet. Not quite sure what to do here, and advice is most welcome (use the debian-ai mailing list).

To make it easier to test the new packages while I wait for them to clear the NEW queue, I created an APT source targeting Bookworm. The reason I selected Bookworm instead of Bullseye, even though I know the latter would reach more users, is that some of the required dependencies are missing from Bullseye, and during this phase of testing I did not want to backport a lot of packages just to get up and running.

Here is a recipe to run as user root if you want to test OpenAI Whisper using Debian packages on your Debian Bookworm installation, first adding the APT repository GPG key to the list of trusted keys, then setting up the APT repository and finally installing the packages and one of the models:

curl https://geekbay.nuug.no/~pere/openai-whisper/D78F5C4796F353D211B119E28200D9B589641240.asc \
  -o /etc/apt/trusted.gpg.d/pere-whisper.asc
mkdir -p /etc/apt/sources.list.d
cat > /etc/apt/sources.list.d/pere-whisper.list <<EOF
deb https://geekbay.nuug.no/~pere/openai-whisper/ bookworm main
deb-src https://geekbay.nuug.no/~pere/openai-whisper/ bookworm main
EOF
apt update
apt install openai-whisper

The packages work for me, but have not yet been tested on any other computer than my own. With them, I have been able to (badly) transcribe a 2 minute 40 second Norwegian audio clip to test using the small model. This took 11 minutes and around 2.2 GiB of RAM. Transcribing the same file with the medium model gave an accurate text in 77 minutes using around 5.2 GiB of RAM. My test machine had too little memory to test the large model, which I believe requires 11 GiB of RAM. In short, this now works for me using Debian packages, and I hope it will for you and everyone else once the packages enter Debian.
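
Once installed, a transcription run looks something like this (the file name is illustrative; --model and --language are standard Whisper options, and the transcripts end up as text files in the current directory):

whisper --model medium --language no opptak.mp3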

Now I can start on the audio recording part of this project.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, multimedia, video.
rtlsdr-scanner, software defined radio frequency scanner for Linux - nice free software
7th April 2023

Today I finally found time to track down a useful radio frequency scanner for my software defined radio. Just for fun I tried to locate the radios used in the area, and a good start would be to scan all the frequencies to see what is in use. I have tried to find a useful program earlier, but ran out of time before I managed to find a useful tool. This time I was more successful, and after a few false leads I found a description of rtlsdr-scanner over at the Kali site, and was able to track down the Kali package git repository to build a deb package for the scanner. Sadly the package is missing from the Debian project itself, at least in Debian Bullseye. Two runtime dependencies, python-visvis and python-rtlsdr, had to be built and installed separately. Luckily 'gbp buildpackage' handled them just fine and no further packages had to be manually built. The end result worked out of the box after installation.

My initial scans for FM channels worked just fine, so I knew the scanner was functioning. But when I tried to scan every frequency from 100 to 1000 MHz, the program stopped unexpectedly near completion. After some debugging I discovered that the USB software defined radio I used rejected frequencies above 948 MHz, triggering an unreported exception breaking the scan. Changing the scan to end at 957 worked better. I similarly found the lower limit to be around 15, and ended up doing a full scan from around 15 to 957 MHz.

Saving the scan did not work, but exporting it as a CSV file worked just fine. I ended up with around 477k CSV lines with the dB level for each frequency.

The save failure seems to be a missing UTF-8 encoding issue in the python code. I will see if I can find time to send a patch upstream later to fix this exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/main_window.py", line 485, in __on_save
    save_plot(fullName, self.scanInfo, self.spectrum, self.locations)
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/file.py", line 408, in save_plot
    handle.write(json.dumps(data, indent=4))
TypeError: a bytes-like object is required, not 'str'
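
Judging from the traceback, the file handle is opened in binary mode while json.dumps() returns a string, so the fix is presumably a one-liner along these lines (a sketch, not verified against upstream):

# in rtlsdr_scanner/file.py, save_plot(): encode the JSON string
# before writing it to the binary file handle
handle.write(json.dumps(data, indent=4).encode('utf-8'))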

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
OpenSnitch available in Debian Sid and Bookworm
25th February 2023

Thanks to the efforts of the OpenSnitch lead developer Gustavo Iñiguez Goya, who allowed me to sponsor the upload, the interactive application firewall OpenSnitch is now available in Debian Testing, soon to become the next stable release of Debian.

This is a package which sets up a network firewall on one or more machines, controlled by a graphical user interface that will ask the user if a program should be allowed to connect to the local network or the Internet. If some background daemon is trying to dial home, it can be blocked from doing so with a simple mouse click, or by default simply by not doing anything when the GUI question dialog pops up. A list of all programs discovered using the network is provided in the GUI, giving the user an overview of how the programs on the machine(s) use the network.

OpenSnitch was uploaded for NEW processing about a month ago, and I had little hope of it getting accepted and shaping up in time for the package freeze, but the Debian ftpmasters proved to be amazingly quick at checking out the package and it was accepted into the archive about a week after the first upload. It is now team maintained under the Go language team umbrella. A few fixes to the default setup are only in Sid, and should migrate to Testing/Bookworm in a week.

During testing I ran into an issue with Minecraft server broadcasts disappearing, which was quickly resolved by the developer with a patch and a proposed configuration change. I've been told this was caused by the Debian package's default use of /proc/ information to track down kernel status, instead of the newer eBPF module that can be used. The reason is simply that upstream and I have failed to find a way to build the eBPF modules for OpenSnitch without a completely configured Linux kernel source tree, which as far as we can tell is unavailable as a build dependency in Debian. We have tried unsuccessfully so far to use the kernel-headers package. It would be great if someone could provide some clues on how to build eBPF modules on build daemons in Debian, possibly without the full kernel source.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
Is the desktop recommending your program for opening its files?
29th January 2023

Linux desktop systems have standardized how programs present themselves to the desktop system. If a package includes a .desktop file in /usr/share/applications/, Gnome, KDE, LXDE, Xfce and the other desktop environments will pick up the file and use its content to generate the menu of available programs in the system. A lesser known fact is that a package can also explain to the desktop system how to recognize the files created by the program in question, and have it used to open these files on request, for example via a GUI file browser.

A while back I ran into a package that did not tell the desktop system how to recognize its files, so the program was not offered when trying to open its files in the file browser, and I fixed it. In the process I wrote a simple debian/tests/ script to ensure the setup keeps working. It might be useful for other packages too, to ensure any future version of the package keeps handling its own files.

For this to work, the file format needs a useful MIME type that can be used to identify the format. If the file format does not yet have a MIME type, one should be defined and preferably also registered with IANA to ensure the MIME type string is reserved.
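
The desktop learns about such a MIME type from the shared MIME database. Here is a hedged sketch of how the type used in the openmotor example below might be declared and registered using xdg-mime; the file name, comment and glob pattern are my guesses for illustration only.

# declare the MIME type in a shared-mime-info XML file and register it
cat > openmotor-mime.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="application/vnd.openmotor+yaml">
    <comment>openMotor motor definition</comment>
    <glob pattern="*.ric"/>
  </mime-type>
</mime-info>
EOF
xdg-mime install --mode user openmotor-mime.xml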

The script uses the xdg-mime program from xdg-utils to query the database of standardized package information and ensure it returns sensible values. It also needs the location of an example file for xdg-mime to guess the format of.

#!/bin/sh
#
# Author: Petter Reinholdtsen
# License: GPL v2 or later at your choice.
#
# Validate the MIME setup, making sure motor files have
# application/vnd.openmotor+yaml associated with them and that the type
# is connected to the openmotor desktop file.

retval=0

mimetype="application/vnd.openmotor+yaml"
testfile="test/data/real/o3100/motor.ric"
mydesktopfile="openmotor.desktop"

filemime="$(xdg-mime query filetype "$testfile")"

if [ "$mimetype" != "$filemime" ] ; then
    retval=1
    echo "error: xdg-mime claim motor file MIME type is $filemine, not $mimetype"
else
    echo "success: xdg-mime report correct mime type $mimetype for motor file"
fi

desktop=$(xdg-mime query default "$mimetype")

if [ "$mydesktopfile" != "$desktop" ]; then
    retval=1
    echo "error: xdg-mime claim motor file should be handled by $desktop, not $mydesktopfile"
else
    echo "success: xdg-mime agree motor file should be handled by $mydesktopfile"
fi

exit $retval

It is a simple way to ensure your users are not unpleasantly surprised when they try to open one of your file formats in their file browser.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
Opensnitch, the application level interactive firewall, heading into the Debian archive
22nd January 2023

While reading a blog post claiming MacOS X recently started scanning local files and reporting information about them to Apple, even on a machine where all such callback features had been disabled, I came across a description of the Little Snitch application for MacOS X. It seemed like a very nice tool to have in the toolbox, and I decided to see if something similar was available for Linux.

It did not take long to find the OpenSnitch package, which has been in development since 2017 and is now at version 1.5.0. It has had a request for Debian packaging since 2018, but no one had completed the job so far. Just for fun, I decided to see if I could help, and I was very happy to discover that upstream wants a Debian package too.

After struggling a bit with getting the program to run and figuring out how to build Go programs (and a little failed detour to look at eBPF builds too - help needed), I am very happy to report that I am sponsoring upstream to maintain the package in Debian, and since this morning it has been waiting in NEW for the ftpmasters to have a look. Perhaps it can get into the archive in time for the Bookworm release?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
LinuxCNC MQTT publisher component
8th January 2023

I watched a 2015 video from Andreas Schiffler the other day, where he set up LinuxCNC to send status information to the MQTT broker IBM Bluemix. As I also use MQTT for graphing, it occurred to me that a generic MQTT LinuxCNC component would be useful, and I set out to implement it. Today I got the first draft limping along and submitted it as a patch to the LinuxCNC project.

The simple part was setting up the MQTT publishing code in Python. I had already set up other parts submitting data to my Mosquitto MQTT broker, so I could reuse that code. Writing a LinuxCNC component in Python was new to me, but using existing examples in the code repository and the extensive documentation, this was fairly straightforward. The hardest part was creating an automated test for the component to ensure it was working. Testing it in a simulated LinuxCNC machine proved very useful, as I discovered features I needed that I had not thought of yet, and adjusted the code quite a bit to make it easier to test without an operational MQTT broker available.
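
When experimenting with something like this, the command line clients from the mosquitto-clients package are handy for watching what a component publishes. A hedged example, assuming a broker on localhost; the topic pattern is made up, as the component's actual topic names are not shown here:

# subscribe to everything under a made-up topic prefix, printing topic + payload
mosquitto_sub -h localhost -t 'linuxcnc/#' -v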

The draft is ready and working, but I am unsure which LinuxCNC HAL pins I should collect and publish by default (in other words, the default set of information pieces published), and how to get the machine name from the LinuxCNC INI file. The latter is a minor detail, but I expect it would be useful in a setup with several machines available. I am hoping for feedback from the experienced LinuxCNC developers and users, to make the component even better before it can go into the mainline LinuxCNC code base.

Since I started on the MQTT component, I came across another video from Kent VanderVelden, where he combines LinuxCNC with a set of screen glasses controlled by a Raspberry Pi, and it occurred to me that it would be useful for such use cases if LinuxCNC also provided a REST API for querying its status. I hope to start on such a component once the MQTT component is working well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc, robot.
ONVIF IP camera management tool finally in Debian
24th December 2022

Merry Christmas to you all. Here is a small gift to all those with IP cameras following the ONVIF specification. There is finally a nice command line and GUI tool in Debian to manage ONVIF IP cameras. After working with upstream for a few months and sponsoring the upload, I am very happy to report that the libonvif package entered Debian Sid last night.

The package provides a C library to communicate with such cameras, a command line tool to locate the cameras and update their settings (like the password), and a GUI tool to configure and control the units as well as preview the video from the camera. Libonvif is available on both Linux and Windows, and the GUI tool uses the Qt library. The main competitors are non-free software, while libonvif is GNU GPL licensed. I am very glad Debian users can from now on control their cameras using a free software system provided by Debian. But the ONVIF world is full of slightly broken firmware, where the cameras pretend to follow the ONVIF specification but fail to set some configuration values or refuse to provide video to more than one recipient at a time, and the libonvif project is quite young, so it might take a while before it works completely with your camera. Upstream seems eager to improve the library, so handling any broken camera might be just a bug report away.

The package just cleared NEW, and needs a new source-only upload before it can enter testing. This will happen in the next few days.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, multimedia, standard, surveillance.
Managing and using ONVIF IP cameras with Linux
19th October 2022

Recently I have been looking at how to control and collect data from a handful of IP cameras using Linux. I wanted both to change their settings and to make their imagery available via a free software service under my control. Here is a summary of the tools I found.

First I had to identify the cameras and their protocols. As far as I could tell, they were using some SOAP-looking protocol, and their internal web server seemed to only work with Microsoft Internet Explorer with some proprietary binary plugin, which these days is of course a security disaster and also made it impossible for me to use the camera web interface. Luckily I discovered that the SOAP-looking protocol actually follows the ONVIF specification, which seems to be supported by a lot of IP cameras these days.

Once the protocol was identified, I was able to find what appears to be the most popular way to configure ONVIF cameras, the free software Windows tool named ONVIF Device Manager. Lacking any other options at the time, I tried unsuccessfully to get it running using Wine, but it was missing a dotnet 40 library and I found no way around that to run it on Linux.

The next tool I found to configure the cameras was a non-free Linux Qt client, ONVIF Device Tool. I did not like its terms of use, so I did not spend much time on it.

To collect the video and make it available in a web interface, I found the Zoneminder tool in Debian. A recent version was able to automatically detect and configure ONVIF devices, so I could use it to set up motion detection in, and collection of, the camera output. I had initial problems getting the ONVIF autodetection to work, as both Firefox and Chromium refused the inter-tab communication used by the Zoneminder web pages, but I managed to get Konqueror to work. Apparently the "Enhanced Tracking Protection" in Firefox caused the problem. I ended up upgrading to the Bookworm edition of Zoneminder in the process to try to fix the issue, and believe the problem might be solved now.

In the process I came across the nice Linux GUI tool ONVIF Viewer, allowing me to preview the camera output and validate the login passwords required. Sadly its author has grown tired of maintaining the software, so it might not see any future updates. This is sad, as the viewer is slightly unstable and the picture tends to lock up. Note, this lockup might be due to limitations in the cameras and not the viewer implementation. I suspect the camera is only able to provide pictures to one client at a time, and the Zoneminder feed might interfere with the GUI viewer. I have asked for the tool to be included in Debian.

Finally, I found what appears to be a very nice Linux free software replacement for the Windows tool, named libonvif. It provides a C library to talk to ONVIF devices as well as a command line and GUI tool using the library. Using the GUI tool I was able to change the admin passwords and update other settings of the cameras. I have asked for the package to be included in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2022-10-20: Since my initial publication of this text, I have received several suggestions for more free software Linux tools. There is an ONVIF Python library (already requested into Debian) and a Python 3 fork using a different SOAP dependency. There is also support for ONVIF in Home Assistant, and there is an alternative to Zoneminder called Shinobi. The latter two are not included in Debian either. I have not tested any of these so far.

Tags: debian, english, multimedia, standard, surveillance.
Time to translate the Bullseye edition of the Debian Administrator's Handbook
12th September 2022

(The picture is of the previous edition.)

Almost two years after the previous Norwegian Bokmål translation of "The Debian Administrator's Handbook" was published, a new edition is finally being prepared. The English text is updated, and it is time to start working on the translations. Around 37 percent of the strings have been updated one way or another, and the translations starting from a complete Debian Buster edition now need to bring their translation up from 63% to 100%. The complete book is licensed using a Creative Commons license, and has been published in several languages over the years. The translations are done by volunteers to bring Linux to readers in their native tongue. The last time I checked, the complete text was available in English, Norwegian Bokmål, German, Indonesian, Brazilian Portuguese and Spanish. In addition, work has been started for Arabic (Morocco), Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, French, Greek, Italian, Japanese, Korean, Persian, Polish, Romanian, Russian, Swedish, Turkish and Vietnamese.

The translation is conducted on the hosted Weblate project page. Prospective translators are recommended to subscribe to the translators mailing list and should also check out the instructions for contributors.

I am one of the Norwegian Bokmål translators of this book, and we have just started. Your contribution is most welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, debian-handbook, english.
Automatic LinuxCNC servo PID tuning?
16th July 2022

While working on a CNC with servo motors controlled by the LinuxCNC PID controller, I recently had to learn how to tune the collection of values that control the mathematical machinery that a PID controller is. It proved to be a lot harder than I hoped, and I still have not succeeded in getting the Z PID controller to successfully defy gravity, nor X and Y to move accurately and reliably. But while climbing up this rather steep learning curve, I discovered that some motor control systems are able to tune their PID controllers automatically. I got the impression from the documentation that LinuxCNC was not among them. This proved not to be true.

The LinuxCNC pid component is the recommended PID controller to use. It uses eight constants Pgain, Igain, Dgain, bias, FF0, FF1, FF2 and FF3 to calculate the output value based on the current and the wanted state, and all of these need to have a sensible value for the controller to behave properly. Note, there are even more values involved; these are just the most important ones. In my case I need the X, Y and Z axes to follow the requested path with little error. This has proved quite a challenge for someone who has never tuned a PID controller before, but there is at least some help to be found.
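
For reference, my hedged understanding from the pid component documentation of how these constants combine is roughly the following, where e is the difference between the commanded and the actual position and u is the commanded position; consult the component manual page for the authoritative formula:

\mathrm{output} = \mathrm{bias} + \mathrm{Pgain}\,e + \mathrm{Igain}\int e\,dt
  + \mathrm{Dgain}\,\frac{de}{dt}
  + \mathrm{FF0}\,u + \mathrm{FF1}\,\frac{du}{dt}
  + \mathrm{FF2}\,\frac{d^2u}{dt^2} + \mathrm{FF3}\,\frac{d^3u}{dt^3}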

I discovered that included in LinuxCNC was an old PID component, at_pid, claiming to have auto tuning capabilities. Sadly it had been neglected since 2011, and could not be used as a plug-in replacement for the default pid component. One would have to rewrite the LinuxCNC HAL setup to test at_pid. This was rather sad, when I wanted to quickly test auto tuning to see if it did a better job than me at figuring out good P, I and D values to use.

I decided to have a look at whether the situation could be improved. This involved trying to understand the code and history of the pid and at_pid components. Apparently they had a common ancestor, as code structure, comments and variable names were quite close to each other. Sadly this was not reflected in the git history, making it hard to figure out what really happened. My guess is that the author of at_pid.c took a version of pid.c, rewrote it to follow the structure he wished pid.c to have, then added support for auto tuning and finally got it included into the LinuxCNC repository. The restructuring and lack of early history made it harder to figure out which parts of the code were relevant to the auto tuning, and which parts needed to be updated to work the same way as the current pid.c implementation. I started by trying to isolate relevant changes in pid.c, and applying them to at_pid.c. My aim was to make sure the at_pid component could replace the pid component with a simple change in the HAL setup loadrt line, without having to "rewire" the rest of the HAL configuration. After a few hours following this approach, I had learned quite a lot about the code structure of both components, while concluding I was heading down the wrong rabbit hole, and should get back to the surface and find a different path.

For the second attempt, I decided to throw away all the PID control related parts of the original at_pid.c, and instead isolate and lift the auto tuning part of the code and inject it into a copy of pid.c. This ensured compatibility with the current pid component, while adding auto tuning as a run time option. To make it easier to identify the relevant parts in the future, I wrapped all the auto tuning code with '#ifdef AUTO_TUNER'. The end result behaves just like the current pid component by default, as that part of the code is identical. The end result entered the LinuxCNC master branch a few days ago.

To enable auto tuning, one needs to set a few HAL pins in the PID component. The most important ones are tune-effort, tune-mode and tune-start. But let's take a step back and see what the auto tuning code will do. I do not know the mathematical foundation of the at_pid algorithm, but from observation I can tell that the algorithm will, when enabled, produce a square wave pattern centered around the bias value on the output pin of the PID controller. This can be seen using the HAL Scope provided by LinuxCNC. In my case, this is translated into voltage (+-10V) sent to the motor controller, which in turn is translated into motor speed. So at_pid will ask the motor to move the axis back and forth. The number of cycles in the pattern is controlled by the tune-cycles pin, and the extremes of the wave pattern are controlled by the tune-effort pin. Of course, trying to change the direction of a physical object instantly (as in going directly from a positive voltage to the equivalent negative voltage) does not change the velocity instantly, and it takes some time for the object to slow down and move in the opposite direction. This results in a smoother movement waveform as the axis in question vibrates back and forth. When the axis reaches the target speed in the opposing direction, the auto tuner changes direction again. After several of these changes, the average time delay between the 'peaks' and 'valleys' of this movement graph is used to calculate proposed values for Pgain, Igain and Dgain, which are inserted into the HAL model for use by the pid controller. The auto tuned settings are not great, but they work a lot better than the values I had been able to cook up on my own, at least for the horizontal X and Y axes. But I had to use very small tune-effort values, as my motor controllers error out if the voltage changes too quickly. I've been less lucky with the Z axis, which is moving a heavy object up and down, and seems to confuse the algorithm. The Z axis movement became a lot better when I introduced a bias value to counter the gravitational drag, but I will have to work a lot more on the Z axis PID values.

Armed with this knowledge, it is time to look at how to do the tuning. Let's say the HAL configuration in question loads the PID component for X, Y and Z like this:

loadrt pid names=pid.x,pid.y,pid.z

Armed with the new and improved at_pid component, the new line will look like this:

loadrt at_pid names=pid.x,pid.y,pid.z

The rest of the HAL setup can stay the same. This works because the components are referenced by name. If the component had been loaded with count=3 instead, all uses of pid.# would have to be changed to at_pid.#.

To start tuning the X axis, move the axis to the middle of its range, to make sure it does not hit anything when it starts moving back and forth. Next, set tune-effort to a low number in the output range. I used 0.1 as my initial value. Next, assign 1 to the tune-mode pin. Note, this will disable the pid controlling part and feed 0 to the output pin, which in my case initially caused a lot of drift. In my case it proved to be a good idea with X and Y to tune the motor driver to make sure 0 voltage stopped the motor rotation. On the other hand, for the Z axis this proved to be a bad idea, so it will depend on your setup. It might help to set the bias value to an output value that reduces or eliminates the axis drift. Finally, after setting tune-mode, set tune-start to 1 to activate the auto tuning. If all goes well, your axis will vibrate for a few seconds, and when it is done, new values for Pgain, Igain and Dgain will be active. To test them, change tune-mode back to 0. Note that this might cause the machine to suddenly jerk as it brings the axis back to its commanded position, which it might have drifted away from during tuning. To summarize with some halcmd lines:

setp pid.x.tune-effort 0.1
setp pid.x.tune-mode 1
setp pid.x.tune-start 1
# wait for the tuning to complete
setp pid.x.tune-mode 0

After doing this task quite a few times while trying to figure out how to properly tune the PID controllers on the machine, I decided to find out if this process could be automated, and wrote a script to do the entire tuning process from power on. The end result will ensure the machine is powered on and ready to run, home all axes if that is not already done, check that the extra tuning pins are available, move the axis to its mid point, run the auto tuning and re-enable the pid controller when it is done. It can be run several times. Check out the run-auto-pid-tuner script on github if you want to learn how it is done.

My hope is that this little adventure can inspire someone who knows more about motor PID controller tuning to implement even better algorithms for automatic PID tuning in LinuxCNC, making life easier for both me and all the others who want to use LinuxCNC but lack the in-depth knowledge needed to tune PID controllers well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th June 2022

I guess it is time to shed some light on the various free software and open culture activities and projects I have worked on or been involved in over the last year and a half.

First, let's mention the book releases I managed to publish. The Cory Doctorow book "Hvordan knuse overvåkningskapitalismen" (How to Destroy Surveillance Capitalism) argues that it is not the magic machine learning of the big technology companies that causes surveillance capitalism to thrive, but the lack of trust busting to enforce existing anti-monopoly laws. I also published a family of dictionaries for machinists, one sorted on the English words, one sorted on the Norwegian and the last sorted on the North Sámi words. A bit on the back burner but not forgotten is the Debian Administrator's Handbook, where a new edition is being worked on. I have not spent as much time as I want to help bring it to completion, but hope I will get more spare time to look at it before the end of the year.

With my Debian hat on I have spent time on several projects, both updating existing packages, helping to bring in new packages and working with upstream projects to try to get them ready to go into Debian. The list is rather long, and I will only mention my own isenkram, openmotor, the VLC bittorrent plugin, xprintidle, the Norwegian letter style for LaTeX, bs1770gain, and recordmydesktop. In addition to these I have sponsored several packages into Debian, like audmes.

The last year I have looked at several infrastructure projects for collecting meter data and video surveillance recordings. This includes several ONVIF related tools like onvifviewer and zoneminder, as well as rtl-433, wmbusmeters and rtl-wmbus.

In parallel with this I have looked at fabrication related free software solutions like pycam and LinuxCNC. The latter recently gained improved translation support using po4a and weblate, which was a harder nut to crack than I had anticipated when I started.

Several hours have been spent translating free software to Norwegian Bokmål on the Weblate hosted service. I do not have a complete list, but you will find my contributions in at least gnucash, minetest and po4a.

I also spent quite some time on the Norwegian archiving specification Noark 5, and its companion project Nikita implementing the API specification for Noark 5.

Recently I have been looking into free software tools to do company accounting here in Norway, which presents an interesting mix of law, rules, regulations, format specifications and API interfaces.

I guess I should also mention the Norwegian community driven government interfacing projects Mimes Brønn and Fiksgatami, which have ended up in a kind of limbo while the future of the projects is being worked out.

These are just a few of the projects I have been involved in and would like to give more visibility. I'll stop here to avoid delaying this post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
3rd June 2022

Back in October last year, when I started looking at the LinuxCNC system, I proposed to change the documentation build system to make life easier for translators. The original system consisted of independently written documentation files for each language, with no automated way to track changes done in other translations and no help for the translators to know how much was left to translate. By using the po4a system to generate POT and PO files from the English documentation, this can be improved. A small team of LinuxCNC contributors got together, and today our labour finally paid off. Since a few hours ago, it is now possible to translate the LinuxCNC documentation on Weblate, alongside the program itself.
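
For those unfamiliar with po4a, the rough idea is that it extracts the translatable text from the English master documents into a POT file, and later weaves translated PO strings back into complete documents. A hedged sketch of the workflow with made-up file names (the LinuxCNC documentation is written in AsciiDoc):

# extract translatable strings from the English master document
po4a-gettextize -f asciidoc -m getting-started.adoc -p getting-started.pot
# ... translate getting-started.nb.po, for example on Weblate ...
# generate the translated document once enough strings are translated
po4a-translate -f asciidoc -m getting-started.adoc -p getting-started.nb.po \
  -l getting-started.nb.adoc -k 80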

The effort to migrate the documentation to use po4a has been both slow and frustrating. I am very happy we finally made it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th April 2022

Recently I wanted to upgrade the firmware of my Thinkpad, and located the firmware download page from Lenovo (which annoyingly does not allow access via Tor, forcing me to hand them more personal information than I would like). The download from Lenovo is a bootable ISO image, which is a bit of a problem when all I have available is a USB memory stick. I tried booting the ISO as a USB stick, but this did not work. But genisoimage came to the rescue.

The geteltorito program in the genisoimage binary package is able to convert the bootable ISO image to a bootable USB stick image using a simple command line recipe, which I can then write to the most recently inserted USB stick:

geteltorito -o usbstick.img lenovo-firmware.iso
sudo dd bs=10M if=usbstick.img of=$(ls -tr /dev/sd?|tail -1)

This USB stick booted the firmware upgrader just fine, and in a few minutes my machine had the latest and greatest BIOS firmware in place.

Tags: debian, english.
16th April 2022

Inspired by the recent news of AV1 hardware encoding support from Intel, I decided to look into the state of AV1 on Linux today. AV1 is a free and open standard, as defined by Digistan, without any royalty payment requirement, unlike its much used competitor encoding H.264. While looking, I came across a 5 year old question on askubuntu.com, which in turn inspired me to check out how things are in Debian Stable regarding AV1. The test file listed in the question (askubuntu_test_aom.mp4) did not exist any more, so I tracked down a different set of test files on av1.webmfiles.org to test with the various video tools I had installed on my machine. I was happy to discover that AV1 decoding and playback worked with almost every tool I tested:

mediainfo ok
dragonplayer ok
ffmpeg / ffplay ok
gnome-mplayer fail
mplayer ok
mpv ok
parole ok
vlc ok
firefox ok
chromium ok

AV1 encoding is available in Debian Stable from the aom-tools package, version 1.0.0.errata1-3, using the aomenc tool. Encoding with the package in Debian Stable is quite slow, with the frame rate for my 10 second test video at around 0.25 fps. My 10 second video test took 16 minutes and 11 seconds on my test machine.

I tested by first running ffmpeg and then aomenc, using the recipe provided in the askubuntu question mentioned above. I had to remove the '--row-mt=1' option, as it was not supported in my 1.0.0 version. The encoding only used a single thread, according to top.

ffmpeg -i some-old-video.ogv -t 10 -pix_fmt yuv420p video.y4m
aomenc --fps=24/1 -u 0 --codec=av1 --target-bitrate=1000 \
  --lag-in-frames=25 --auto-alt-ref=1 -t 24 --cpu-used=8 \
  --tile-columns=2 --tile-rows=2 -o output.webm video.y4m

As version 1.0.0 currently has several unsolved security issues in Debian Stable, and to see if the recent backport provided in Debian is any quicker, I ran apt -t bullseye-backports install aom-tools to fetch the backported version and re-encoded the video using the latest version. This time the '--row-mt=1' option worked, and the encoding was done in 46 seconds with a frame rate of around 5.22 fps. This time it seemed to be using all my four cores to encode. Encoding speed is still too low for streaming and real time use, which would require frame rates above 25 fps, but might be good enough for offline encoding.
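
In case backports is not already enabled on the machine, the recipe is roughly the following; the mirror URL here is the common default and may differ on your system:

# add the backports archive to apt (mirror URL may differ on your system)
echo 'deb http://deb.debian.org/debian bullseye-backports main' | \
  sudo tee /etc/apt/sources.list.d/bullseye-backports.list
sudo apt update
sudo apt -t bullseye-backports install aom-tools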

I am very happy to see AV1 playback working so well with the default tools in Debian Stable. I hope the encoding situation improves too, allowing even a slow old computer like my 10 year old laptop to be used for encoding.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12th March 2022

Recently I had a look at a Hargassner wood chip boiler, and what kind of free software can be used to monitor and control it. The boiler can be connected to some cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the information gathered on to the cloud. I discovered the boiler controller gets an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides returning an error code.

If I am to believe the various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler types and software versions on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of the numbers, but there is a lot left to understand. I've located several temperature measurements and hours-running values, as well as oxygen measurements and counters.
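
To have a look at the raw status line yourself, something as simple as netcat will do. A hedged example with a made-up boiler host name:

# read one raw status line from the boiler controller (host name is made up)
nc -w 5 boiler.lan 23 | head -n 1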

I decided to write a simple parser in Python for the values I have figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on gitlab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not really expect many free software developers to get their hands on such a unit to experiment with, but it would be fun if others too find this project useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
2nd March 2022

After many months of hard work by the good people involved in LinuxCNC, the system was accepted into Debian on Sunday. Once it was available from Debian, I was surprised to discover from its popularity-contest numbers that people have been reporting its use since 2012. Its project site might be a good place to check out, but sadly it does not work when visited via Tor.

But what is LinuxCNC, you are probably wondering? Perhaps a quote from Wikipedia is in order?

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

It can even control 3D printers. And even though the Wikipedia page indicates that it can only work with hard real time kernel features, it can also work with the user space soft real time features provided by the Debian kernel. The source code is available from Github. The last few months I've been involved in the translation setup for the program and its documentation. Translators are most welcome to join the effort using Weblate.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th February 2022

I am very happy to report that a new version of the VLC bittorrent plugin was just uploaded into Debian. The changes since last time are mostly cleanups in the download code. The package is currently in Debian unstable, but should be available in Debian testing soon. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It can also use magnet links and local .torrent files like the ones provided by the Internet Archive. Another example is the Love Nest Buster Keaton movie, where one can click on the 'Torrent' link to get going.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

3rd December 2021

A few days ago, a productive translator started working on a new translation of the Made with Creative Commons book, into Brazilian Portuguese. The translation takes place on the Weblate web based translation system. Once the translation is complete and proofread, we can publish it on paper as well as in PDF, ePub and HTML formats. The translation is already 16% complete, and if more people get involved I am convinced it can very quickly reach 100%. If you are interested in helping out with this or other translations of the Made with Creative Commons book, start translating on Weblate. There are partial translations available in Azerbaijani, Bengali, Brazilian Portuguese, Dutch, French, German, Greek, Polish, Simplified Chinese, Swedish, Thai and Ukrainian.

The git repository for the book contains all the source files needed to build the book yourself. HTML editions to help with proofreading are also available.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th October 2021

The Debian Lego team has seen a lot of activity the last few weeks. All the packages under the team umbrella have been updated to fix packaging and lintian issues and handle BTS reports. In addition, a new and inspiring team member appeared on both the debian-lego-team mailing list and the IRC channel #debian-lego. If you are interested in Lego CAD design and LEGO Mindstorms programming, check out the team wiki page to see what Debian can offer the Lego enthusiast.

Patches have been sent upstream, resulting in new upstream releases, one of them even the first in more than ten years, and packages of old upstream versions have been updated to the new ones. There is still a lot of work left, and the team welcomes more members to help us make sure Debian is the Linux distribution of choice for Lego builders. If you want to contribute, join us in the IRC channel and become part of the team on Salsa.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, lego, robot.
4th August 2021

Almost thirty years ago, some forward looking teachers at Samisk videregående skole og reindriftsskole, teaching metal work and Northern Sámi, decided to create a list of words used in Northern Sámi metal work. After almost ten years this resulted in a dictionary database, published as the book "Mekanihkkársánit : Mekanikerord = Mekaanisen alan sanasto = Mechanic's words" in 1999. The story of this work is available from the pen of Svein Lund, one of the leading actors behind the effort. They even got the dictionary approved by the Sámi Language Council as the recommended metal work words to use.

Fast forward twenty years. I came across this work when I recently became interested in metal work, and started watching educational and funny videos on the topic, like the ones from mrpete222 and This Old Tony. But they all speak English, and I wanted to know what the tools and techniques they used were called in Norwegian. Trying to track down a good dictionary from English to Norwegian, after much searching, I came across the database of words created almost thirty years ago, with translations into English, Norwegian, Northern Sámi, Swedish and Finnish. This gave me a lot of the Norwegian phrases I had been looking for. To make it easier for the next person trying to track down a good Norwegian dictionary for the metal worker, and because I knew the person behind the database from my Skolelinux / Debian Edu days, I decided to ask if the database could be released to the public without any usage limitations, in other words as a Creative Commons licensed data set. And happily, after consulting with the Sámi Parliament of Norway, the database is now available with the Creative Commons Attribution 4.0 International license from my gitlab repository.

The dictionary entries look slightly different, depending on the language in focus. This is the same entry in the different editions.

English

lathe

dreiebenk (nb) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)

Norwegian

dreiebenk

lathe (en) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)

(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke

Northern Sámi

várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka

dreiebenk (nb) lathe (en) svarv (sv) sorvi (fi)

(se): mašiidna mainna čuohppá vuolahasaid jorri bargoávdnasis

(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke

The database included term descriptions in both Norwegian and Northern Sámi, but not English. Because of this, the Northern Sámi edition includes both descriptions, the Norwegian edition includes the Norwegian description, and the English edition lacks a description.

Once the database was available without any usage restrictions, and armed with my experience in publishing books, I decided to publish a Norwegian/English dictionary as a book using the database, to make the data set available also on paper and as an ebook. Further into the project, it occurred to me that I could just as easily make an English dictionary, and after talking to Svein and concluding that it was within reach, I decided to make a Northern Sámi dictionary too.

Thus I suddenly find myself publishing a Northern Sámi dictionary, even though I do not understand the language myself. I hope it will be well received, and can help revive the impressive work done almost thirty years ago to document the vocabulary of metal workers. If I get some help, I might even extend it with some of the words I find missing, like collet, rotary broach, carbide, knurler, arbor press and others. But the first edition is built from a lightly edited version of the original database, with no new entries added. If you would like to check it out, visit my list of published books and consider buying a paper or ebook copy from lulu.com. The paper edition is only available in hardcover to increase its durability in the workshop.

I am very happy to report that in the process, and thanks to help from both Svein Lund and Børre Gaup who understand the language, the docbook tools I use to create books, dblatex and docbook-xsl, now include support for Northern Sámi. Before I started, these lacked the needed locale settings for this language, but now the patches are included upstream.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: docbook, english.
5th July 2021

I am happy to observe that The Debian Administrator's Handbook is now available in six languages. I am not sure which of these have been completely proofread, but the complete book is available in these languages:

  • English
  • Norwegian Bokmål
  • German
  • Indonesian
  • Brazil Portuguese
  • Spanish

This is the list of languages more than 70% complete, in other words with not too much left to do:

  • Chinese (Simplified) - 90%
  • French - 79%
  • Italian - 79%
  • Japanese - 77%
  • Arabic (Morocco) - 75%
  • Persian - 71%

I wonder how long it will take to bring these to 100%.

Then there is the list of languages about halfway done:

  • Russian - 63%
  • Swedish - 53%
  • Chinese (Traditional) - 46%
  • Catalan - 45%

Several are off to a good start:

  • Dutch - 26%
  • Vietnamese - 25%
  • Polish - 23%
  • Czech - 22%
  • Turkish - 18%

Finally, there are the ones just getting started:

  • Korean - 4%
  • Croatian - 2%
  • Greek - 2%
  • Danish - 1%
  • Romanian - 1%

If you want to help provide a Debian instruction book in your own language, visit Weblate to contribute to the translations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10th June 2021

I am very pleased to be able to share with you the announcement of a new version of the archiving system Nikita, published by its lead developer Thomas Sødring:

It is with great pleasure that we can announce a new release of nikita. Version 0.6 (https://gitlab.com/OsloMet-ABI/nikita-noark5-core). This release makes new record keeping functionality available. This really is a maturity release. Both in terms of functionality but also code. Considerable effort has gone into refactoring the codebase and simplifying the code. Notable changes for this release include:

  • Significantly improved OData parsing
  • Support for business specific metadata and national identifiers
  • Continued implementation of domain model and endpoints
  • Improved testing
  • Ability to export and import from arkivstruktur.xml

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata and we hope that we can share this with you soon. This is an interesting project as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData means the scope and use of the archive is significantly increased and will showcase both the flexibility and power of Noark.

I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access-control and full text indexing of documents.

My sincere thanks to everyone who has contributed to this release!

- Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

  • Refactor metadata entity search
  • Remove redundant security configuration
  • Make OpenAPI documentation work
  • Change database structure / inheritance model to a more sensible approach
  • Make it possible to move entities around the fonds structure
  • Implemented a number of missing endpoints
  • Make sure yml files are in sync
  • Implemented/finalised storing and use of
    • Business Specific Metadata
    • Norwegian National Identifiers
    • Cross Reference
    • Keyword
    • StorageLocation
    • Author
    • Screening for relevant objects
    • ChangeLog
    • EventLog
  • Make generation of updated docker image part of successful CI pipeline
  • Implement pagination for all list requests
    • Refactor code to support lists
    • Refactor code for readability
    • Standardise the controller/service code
  • Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
  • Improved Continuous Integration (CI) approach via gitlab
  • Changed conversion approach to generate tagged PDF documents
  • Updated dependencies
    • For security reasons
    • Brought codebase to spring-boot version 2.5.0
    • Remove import of necessary dependencies
    • Remove non-used metrics classes
  • Added new analysis to CI including
  • Implemented storing of Keyword
  • Implemented storing of Screening and ScreeningMetadata
  • Improved OData support
    • Better support for inheritance in queries where applicable
    • Brought in more OData tests
    • Improved OData/hibernate understanding of queries
    • Implement $count, $orderby
    • Finalise $top and $skip
    • Make sure & is used between query parameters
  • Improved Testing in codebase
    • A new approach for integration tests to make test more readable
    • Introduce tests in parallel with code development for TDD approach
    • Remove test that required particular access to storage
  • Implement case-handling process from received email to case-handler
    • Develop required GUI elements (digital postroom from email)
    • Introduced leader, quality control and postroom roles
  • Make PUT requests return 200 OK not 201 CREATED
  • Make DELETE requests return 204 NO CONTENT not 200 OK
  • Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
  • Upgrade Gitlab CI to use python > 3 for CI scripts
  • Bug fixes
    • Fix missing ALLOW
    • Fix reading of objects from jar file during start-up
    • Reduce the number of warnings in the codebase
    • Fix delete problems
    • Make better use of cascade for "leaf" objects
    • Add missing annotations where relevant
    • Remove the use of ETAG for delete
    • Fix missing/wrong/broken rels discovered by runtest
    • Drop unofficial convertFil (konverterFil) end point
    • Fix regex problem for dateTime
    • Fix multiple static analysis issues discovered by coverity
    • Fix proxy problem when looking for object class names
    • Add many missing translated Norwegian to English (internal) attribute/entity names
    • Change UUID generation approach to allow code also set a value
    • Fix problem with Part/PartParson
    • Fix problem with empty OData search results
    • Fix metadata entity domain problem
  • General Improvements
    • Makes future refactoring easier as coupling is reduced
    • Allow some constant variables to be set from property file
    • Refactor code to make reflection work better across codebase
    • Reduce the number of @Service layer classes used in @Controller classes
    • Be more consistent on naming of similar variable types
    • Start printing rels/href if they are applicable
    • Cleaner / standardised approach to deleting objects
    • Avoid concatenation when using StringBuilder
    • Consolidate code to avoid duplication
    • Tidy formatting for a more consistent reading style across similar class files
    • Make throw a log.error message not an log.info message
    • Make throw print the log value rather than printing in multiple places
    • Add some missing pronom codes
    • Fix time formatting issue in Gitlab CI
    • Remove stale / unused code
    • Use only UUID datatype rather than combination String/UUID for systemID
    • Mark variables final and @NotNull where relevant to indicate intention
  • Change Date values to DateTime to maintain compliance with Noark 5 standard
  • Domain model improvements using Hypersistence Optimizer
    • Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
    • Fix OneToOne performance issues
    • Fix ManyToMany performance issues
    • Add missing bidirectional synchronization support
    • Fix ManyToMany performance issue
  • Make List<> and Set<> use final-keyword to avoid potential problems during update operations
  • Changed internal URLs, replaced "hateoas-api" with "api".
  • Implemented storing of Precedence.
  • Corrected handling of screening.
  • Corrected _links collection returned for list of mixed entity types to match the specific entity.
  • Improved several internal structures.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.oftc.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st May 2021

Yesterday morning I got a warning call from the Debian quality control system that the VLC bittorrent plugin was due to be removed because of a release critical bug in one of its dependencies. As you might remember, this plugin makes VLC able to stream videos directly from a bittorrent source using both torrent files and magnet links, similar to using an HTTP source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the almost 7 million movies in the Internet Archive.

The dependency was the unmaintained libtorrent-rasterbar package, and the bug in question blocked its Python library from working properly. As I did not want Bullseye to release without bittorrent support in VLC, I set out to check the status and track down a fix for the problem. Luckily the issue had already been identified and fixed upstream, providing everything needed. All I needed to do was to fetch the Debian git repository, extract and trim the patch from upstream, and apply it to the Debian package for upload.
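
The mechanics of such an update are roughly the following sketch; the repository URL and the patch file name are made up for illustration, and your quilt setup may differ:

# fetch the Debian packaging repository (URL is illustrative)
git clone https://salsa.debian.org/debian/libtorrent-rasterbar.git
cd libtorrent-rasterbar
# import the trimmed upstream fix as a quilt patch under debian/patches
QUILT_PATCHES=debian/patches quilt import ../upstream-python-fix.patch
# record the change in the changelog and build the package for upload
dch -i "Apply upstream fix for the broken Python bindings."
dpkg-buildpackage -us -uc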

The fixed library was uploaded yesterday evening. But that is not enough to get it into Bullseye, as Debian is currently in package freeze to prepare for the next stable release. Only non-critical packages with an autopkgtest setup included, in other words able to validate automatically that the package is working, are allowed to migrate automatically into the next release at this stage. And the unmaintained libtorrent-rasterbar lacks such testing, and thus needed a manual override. I am happy to report that such a manual override was approved a few minutes ago, significantly increasing the chance of VLC bittorrent streaming being available out of the box also for Debian Bullseye users. A bit too close a shave for my liking, as the Bullseye release is most likely just a few days away, and this did feel like the package was saved by the bell. I am so glad the warning email showed up in time for me to handle the issue, and a big thanks go to the Debian Release team for the quick feedback on #debian-release and their swift unblocking.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

27th February 2021

I have neglected the Valutakrambod library for a while, but decided this weekend to give it a face lift. I fixed a few minor glitches in several of the service drivers, where the APIs had changed since I last looked at the code. I also added support for fetching the order book from the newcomer Norwegian Bitcoin Exchange.

I also decided to migrate the project from github to gitlab in the process. If you want a Python library for talking to various currency exchanges, check out the code for valutakrambod.

This is what the output from 'bin/btc-rates-curses -c' looked like a few minutes ago:

           Name Pair           Bid         Ask Spread Ftcd    Age   Freq
       Bitfinex BTCEUR  39229.0000  39246.0000   0.0%   44     44    nan
        Bitmynt BTCEUR  39071.0000  41048.9000   4.8%   43     74    nan
         Bitpay BTCEUR  39326.7000         nan   nan%   39    nan    nan
       Bitstamp BTCEUR  39398.7900  39417.3200   0.0%    0      0      1
           Bl3p BTCEUR  39158.7800  39581.9000   1.1%    0    nan      3
       Coinbase BTCEUR  39197.3100  39621.9300   1.1%   38    nan    nan
         Kraken+BTCEUR  39432.9000  39433.0000   0.0%    0      0      0
        Paymium BTCEUR  39437.2100  39499.9300   0.2%    0   2264    nan
        Bitmynt BTCNOK 409750.9600 420516.8500   2.6%   43     74    nan
         Bitpay BTCNOK 410332.4000         nan   nan%   39    nan    nan
       Coinbase BTCNOK 408675.7300 412813.7900   1.0%   38    nan    nan
        MiraiEx BTCNOK 412174.1800 418396.1500   1.5%   34    nan    nan
            NBX BTCNOK 405835.9000 408921.4300   0.8%   33    nan    nan
       Bitfinex BTCUSD  47341.0000  47355.0000   0.0%   44     53    nan
         Bitpay BTCUSD  47388.5100         nan   nan%   39    nan    nan
       Coinbase BTCUSD  47153.6500  47651.3700   1.0%   37    nan    nan
         Gemini BTCUSD  47416.0900  47439.0500   0.0%   36    336    nan
         Hitbtc BTCUSD  47429.9900  47386.7400  -0.1%    0      0      0
         Kraken+BTCUSD  47401.7000  47401.8000   0.0%    0      0      0
  Exchangerates EURNOK     10.4012     10.4012   0.0%   38  76236    nan
     Norgesbank EURNOK     10.4012     10.4012   0.0%   31  76236    nan
       Bitstamp EURUSD      1.2030      1.2045   0.1%    2      2      1
  Exchangerates EURUSD      1.2121      1.2121   0.0%   38  76236    nan
     Norgesbank USDNOK      8.5811      8.5811   0.0%   31  76236    nan

Yes, I notice the negative spread on Hitbtc. Either I fail to understand their Websocket API or they are sending bogus data. I've seen the same with Kraken, and suspect there is something wrong with the data they send.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
12th January 2021

After a lot of hard work by its maintainer Alexandre Viau and others, the decentralized communication platform Jami (earlier known as Ring) managed to get its latest version into Debian Testing. Several of its dependencies have caused build and propagation problems, which all seem to be solved now.

In addition to the fact that Jami is decentralized, similar to how bittorrent is decentralized, I first of all like how it is not connected to external IDs like phone numbers. This allows me to set up computers to send me notifications using Jami without having to get a phone number for each computer. Automatic notification via Jami is also made trivial thanks to the provided client side API (as a DBus service). Here is my Bourne shell script demonstrating how to let any system send a message to any Jami address. It will create a new identity before sending the message, if no Jami identity exists already:

#!/bin/sh
#
# Usage: $0 <jami-address> <message>
#
# Send <message> to <jami-address>, create local jami account if
# missing.
#
# License: GPL v2 or later at your choice
# Author: Petter Reinholdtsen


if [ -z "$HOME" ] ; then
    echo "error: missing \$HOME, required for dbus to work"
    exit 1
fi

# First, get dbus running if not already running
DBUSLAUNCH=/usr/bin/dbus-launch
PIDFILE=/run/asterisk/dbus-session.pid
if [ -e $PIDFILE ] ; then
    . $PIDFILE
    if ! kill -0 $DBUS_SESSION_BUS_PID 2>/dev/null ; then
        unset DBUS_SESSION_BUS_ADDRESS
    fi
fi
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "$DBUSLAUNCH" ]; then
    DBUS_SESSION_BUS_ADDRESS="unix:path=$HOME/.dbus"
    dbus-daemon --session --address="$DBUS_SESSION_BUS_ADDRESS" --nofork --nopidfile --syslog-only < /dev/null > /dev/null 2>&1 3>&1 &
    DBUS_SESSION_BUS_PID=$!
    (
        echo DBUS_SESSION_BUS_PID=$DBUS_SESSION_BUS_PID
        echo DBUS_SESSION_BUS_ADDRESS=\""$DBUS_SESSION_BUS_ADDRESS"\"
        echo export DBUS_SESSION_BUS_ADDRESS
    ) > $PIDFILE
    . $PIDFILE
fi

dringop() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

dringopreply() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session --print-reply \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

firstaccount() {
    dringopreply ConfigurationManager getAccountList | \
      grep string | awk -F'"' '{print $2}' | head -n 1
}

account=$(firstaccount)

if [ -z "$account" ] ; then
    echo "Missing local account, trying to create it"
    dringop ConfigurationManager addAccount \
      dict:string:string:"Account.type","RING","Account.videoEnabled","false"
    account=$(firstaccount)
    if [ -z "$account" ] ; then
        echo "unable to create local account"
        exit 1
    fi
fi

# Not using dringopreply to ensure $2 can contain spaces
dbus-send --print-reply --session \
  --dest=cx.ring.Ring \
  /cx/ring/Ring/ConfigurationManager \
  cx.ring.Ring.ConfigurationManager.sendTextMessage \
  string:"$account" string:"$1" \
  dict:string:string:"text/plain","$2" 
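
If the script is saved as, say, jami-send, a notification from another machine could then be sent like this (the 40 character Jami address here is made up):

./jami-send 1234567890abcdef1234567890abcdef12345678 "backup completed on $(hostname)"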

If you want to check it out yourself, visit the Jami system project page to learn more, and install the latest Jami client from Debian Unstable or Testing.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th October 2020

I am happy to report that we finally made it! Norwegian Bokmål became the first translation of the new Buster based edition of "The Debian Administrator's Handbook" to be published on paper. The print proof reading copy arrived some days ago, and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as well.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge on Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th September 2020

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost complete. After many months of proof reading, I consider the text ready enough to move to the next step, and have asked for the print version to be prepared and sent off to the print on demand service lulu.com. It is still not too late to fix any incorrect translations you find via the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th July 2020

Three years ago, the first Norwegian Bokmål edition of "The Debian Administrator's Handbook" was published, based on Debian Jessie. Now a new and updated version based on Buster is getting ready. Work on the updated Norwegian Bokmål edition has been going on for a few months, and yesterday we reached the first milestone, with 100% of the text translated. A lot of proof reading remains, of course, but a major step towards a new edition has been taken.

The book is translated by volunteers, and we would love to get some help with the proof reading. The translation uses the hosted Weblate service, and we welcome everyone to have a look and submit improvements and suggestions. There is also a proof reader's PDF available on request; get in touch if you want to help out that way.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

6th June 2020

As a member of the Norwegian Unix User Group, I have the pleasure of receiving the USENIX magazine ;login: several times a year. I rarely have time to read all the articles, but try to at least skim through them all as there is a lot of nice knowledge passed on there. I even carry the latest issue with me most of the time to try to get through all the articles when I have a few spare minutes.

The other day I came across a nice article titled "The Secure Socket API: TLS as an Operating System Service" with a marvellous idea I hope can make it all the way into the POSIX standard. The idea is as simple as it is powerful. By introducing a new socket() option IPPROTO_TLS to use TLS, and a system wide service to handle setting up TLS connections, one both makes it trivial to add TLS support to any program currently using the POSIX socket API, and gains system wide control over certificates, TLS versions and the encryption systems used. Instead of doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

the program code would be doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);

According to the ;login: article, converting a C program to use TLS would normally mean modifying only 5-10 lines of code, which is amazing compared to using for example the OpenSSL API.

The project has set up the https://securesocketapi.org/ web site to spread the idea, and the code for a kernel module and the associated system daemon is available from two github repositories: ssa and ssa-daemon. Unfortunately there is no explicit license information with the code, so its copyright status is unclear. A request to clarify this has been unanswered since 2018-08-17.

I love the idea of extending socket() to gain TLS support, and understand why it is an advantage to implement this as a kernel module and a system wide service daemon, but I cannot help thinking that it would be a lot easier to get projects to move to this way of setting up TLS if it were done with a user space approach, where programs wanting to use this API could simply link with a wrapper library.

I recommend you check out this simple and powerful approach to more secure network connections. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th May 2020

I am very happy to report that a more reliable VLC bittorrent plugin was just uploaded to Debian. This fixes a couple of crash bugs in the plugin, hopefully making the VLC experience even better when streaming directly from a bittorrent source. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It also supports magnet links and local .torrent files.
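
For example, a magnet link (with a made up info hash) or a local file should work the same way:

vlc 'magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567'
vlc ./Glass_201703_archive.torrent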

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12th May 2020

It has been way too long since my last interview, but as the Debian Edu / Skolelinux community is still active, and new people keep showing up on the IRC channel #debian-edu and the debian-edu mailing list, I decided to give it another go. I was hoping someone else might pick up the idea and run with it, but this has not happened as far as I can tell, so here we are… This time the announcement of a new free software tool to create a school year book triggered my interest, and I decided to learn more about its author.

Who are you, and how do you spend your days?

My name is Yvan MASSON, I live in France. I have my own one person business in computer services. The work consists of visiting my customers (private homes, local authorities, small businesses) to give advice, install computers and software, fix issues, and provide training in computer usage. I spend the rest of my time enjoying my family and promoting free software.

What is your approach for promoting free software?

When I think that free software could be suitable for someone, I explain what it is, with simple words, give a few known examples, and explain that while there is no fee, it is a viable alternative in many situations. Most people are receptive when you explain how it is better (I simplify the arguments here, I know it is not so simple): Linux works on older hardware, there are no viruses, and the software can be audited to ensure the user is not spied upon. I think the most important thing is to keep a clear but moderate speech: when you try too hard to convince, people feel attacked and stop listening.

How did you get in contact with the Skolelinux / Debian Edu project?

I can not remember how I first heard of Skolelinux / Debian Edu, but probably on planet.debian.org. As I have been working for a school, I have an interest in this type of project.

The school I am involved in is a school for "children" between 14 and 18 years old. The French government has recommended free software since 2012, but they do not always use free software themselves. The school computers are still using the Windows operating system, but all of them have the classic set of free software: Firefox ESR, LibreOffice (with the excellent extension Grammalecte that indicates French grammatical errors), SumatraPDF, Audacity, 7zip, KeePass2, VLC, GIMP, Inkscape…

What do you see as the advantages of Skolelinux / Debian Edu?

It is free software! Being built on Debian, I am sure that users are not spied upon, and that it can run on low end hardware. This last point is very important, because we really need to improve "green IT". I do not know enough about Skolelinux / Debian Edu to tell how it is better than other free software solutions, but what I like is the "all in one" solution: everything has been thought of and prepared to ease installation and usage.

I like Free Software because I hate using something that I can not understand. I do not say that I can understand everything, nor that I want to understand everything, but knowing that someone or some company intentionally prevents me from understanding how things work is really unacceptable to me.

Secondly, and more importantly, free software is a requirement to prevent abuses regarding human rights and environmental care. Humanity can not rely on tools that are in the hands of a small group of people.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Again, I do not know this project well enough. Maybe a dedicated website? The Debian wiki works well for documentation, but is not very appealing to someone discovering the project. Also, as Skolelinux / Debian Edu uses OpenLDAP, it probably means that Windows workstations cannot use centralized authentication. Maybe the project could use Samba as an Active Directory domain controller instead, allowing Windows desktop usage when necessary.

(Editor's note: in fact Windows workstations can use the centralized authentication in a Debian Edu setup, at least for some versions of Windows, but the fact that this is not well known can be seen as an indication of the need for better documentation and marketing. :)

Which free software do you use daily?

Nothing original: Debian testing/sid with Gnome desktop, Firefox, Thunderbird, LibreOffice…

Which strategy do you believe is the right one to use to get schools to use free software?

Every effort to spread free software into schools is important, whatever it is. But I think, at least where I live, that the IT professionals maintaining school networks are still very "Microsoft centric". Schools will use any working solution, but they need people to install and maintain it. How to make these professionals aware of free software and train them with solutions like Debian Edu / Skolelinux is a really good question :-)

8th May 2020

Half a year ago, I wrote about the Jami communication client, capable of peer-to-peer encrypted communication. It handles messages, audio and video. It uses distributed hash tables instead of central infrastructure to connect its users to each other, which in my book is a plus. I mentioned briefly that it could also work as a SIP client, which came in handy when the higher educational sector in Norway started to promote Zoom as its video conferencing solution. I am reluctant to use the official Zoom client software, due to their copyright license clauses prohibiting users from reverse engineering (for example to check the security) and benchmarking it, and thus prefer to connect to Zoom meetings with free software clients.

Jami worked OK as a SIP client to Zoom as long as there was no password set on the room. The Jami daemon leaks memory like crazy (approximately 1 GiB a minute) when I am connected to the video conference, so I had to restart the client every 7-10 minutes, which is not great. I tried to get other SIP Linux clients to work without success, so I decided I would have to live with this wart until someone managed to fix the leak in the dring code base. But another problem showed up once the rooms were password protected. I could not get my dial tone signaling through from Jami to Zoom, and dial tone signaling is used to enter the password when connecting to Zoom. I tried a lot of different permutations with my Jami and Asterisk setup to try to figure out why the signaling did not get through, only to finally discover that the fundamental problem seems to be that Zoom is simply not able to receive dial tone signaling when connecting via SIP. There seems to be nothing wrong with the Jami and Asterisk end; it is simply broken on the Zoom end. I got help from a very skilled VoIP engineer figuring out this last part. And being a very skilled engineer, he was also able to locate a solution for me. Or to be exact, a workaround that solves my initial problem of connecting to password protected Zoom rooms using Jami.

So, how do you do this, I am sure you are wondering by now. The trick is already documented by Zoom: modify the SIP address to include the room password. What is most surprising about this is that the automatically generated email from Zoom with instructions on how to connect via SIP does not mention it. The SIP address to use normally consists of the room ID (a number), an @ character and the IP address of the Zoom SIP gateway. But Zoom understands a lot more than just the room ID in front of the at sign. The format is "[Meeting ID].[Password].[Layout].[Host Key]", and here you can see how you can both enter the password, control the layout (full screen, active presence and gallery) and specify the host key to start the meeting. The full SIP address entered into Jami to provide the password will then look like this (all using made up numbers):

sip:657837644.522827@192.168.169.170

Now if only Jami would reduce its memory usage, I could even recommend this setup to others. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29th April 2020

Curiosity got the better of me when Slashdot reported that New Jersey was desperately looking for COBOL programmers, and a few days later it was reported that IBM was trying to locate COBOL programmers.

I thus decided to have a look at free software alternatives for learning COBOL, and had the pleasure of finding that GnuCOBOL was already in Debian. It used to be called OpenCOBOL, and is a "compiler" transforming COBOL code to C or C++ before handing it to GCC or Visual Studio to build binaries.

I managed to get in touch with upstream and was impressed with the quick response, and I was also happy to see a new Debian maintainer take over when the original one recently asked to be replaced. A new Debian upload was done as recently as yesterday.

Using the Debian package, I was able to follow a simple COBOL introduction and write and run simple COBOL programs. It was fun to learn a new programming language. If you want to test it for yourself, the GnuCOBOL Wikipedia page has a few simple examples to get you started.
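
To give an idea of what this looks like, here is a COBOL hello world, with the commands I would expect to build and run it using the Debian package (the file and program names are mine):

cat > hello.cob <<'EOF'
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
           DISPLAY "Hello, world!".
           STOP RUN.
EOF
cobc -x -o hello hello.cob
./hello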

As I do not have much experience with COBOL, I do not know how standards compliant it is, but it claims to pass most tests from the COBOL test suite, which sounds good to me. It is nice to know it is possible to learn COBOL using software without any usage restrictions, and I am very happy that such a nice free software project is available. If you, like me, are curious about COBOL, check it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd March 2020

Today, after many months of development, a new release of the Nikita Noark 5 core project was finally announced on the project mailing list. The Nikita free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.5 since version 0.4; see the email link above for links to a demo site:

  • Updated to the Noark 5 version 5.0 API specification.
    • Changed formatting of _links from [] to {} to match IETF draft on JSON HAL.
    • Merged Registrering and Basisregistrering from version 4 into a combined Registrering.
    • DokumentObjekt is now subtype of ArkivEnhet.
    • Introducing new entity Arkivnotat.
    • Changed all relation keys to use /v5/ instead of /v4/.
    • Corrected to use new official relation keys when possible.
    • Renamed Sakspart to Part and connect it to Mappe, Registrering and Dokumentbeskrivelse instead of only Saksmappe.
    • Moved Korrespondansepart connection from Journalpost to Registrering.
    • Moved Part and Korrespondansepart from package sakarkiv to arkivstruktur.
    • Renamed presedensstatus to presedensStatus.
    • Use new JSON content-type "application/vnd.noark5+json".
    • Updated prepopulated format list to use PRONOM codes.
    • Implemented endpoint for system information.
    • Implemented national identifiers for both file and record.
    • Implemented comments.
    • Implemented sign off.
    • Implemented conversion.
  • Improved/implemented OData search and paging support for more entities (see the sketch after this list).
  • No longer exposes attribute Dokumentobjekt.referanseDokumentfil, one should use the relation in _links instead.
  • Corrected relation keys under https://rel.arkivverket.no/noark5/v5/api/administrasjon/, replacing 'administrasjon' with 'admin'.
  • Fixed several security and stability issues discovered by Coverity.
  • Corrected handling ETag errors, now return code 409.
  • Improved handling of Kryssreferanse.
  • Changed internal database model to use UUID/SystemID as primary keys in tables.
  • Changed internal database table names to use package prefix.
  • Changed time zone handling for date and datetime attributes, to be more in line with the new definition in the API specification.
  • Change revoke-token to only drop token on POST requests, not GET.
  • Updated to newer Spring version.
  • Changed primary key and URL component for metadata code lists to use the 'kode' value instead of a SystemID.
  • Corrected implementation of Part and Sakspart.
  • Changed instance lists with subtypes (like .../registrering/ and .../mappe/) to include the attributes and _links entries for the subtype in the supertype lists.
  • Adjusted _links relations to make it possible to figure out the entity of an instance using the self->href->relation key lookup method.
  • Fixed several end points to make sure GET, PUT, POST and DELETE match each other.
  • Updated DELETE endpoints to work with UUID based entity identifiers.
  • Restructured code to use more common URL related constants in entry point values and replace @RequestMapping with method specific annotations.
  • Added first unit test code.
  • Updated web GUI to work with the updated API.
  • Changed integer fields, enforce them as numeric.
  • Rewrote and simplified metadata handling to use common service and controller code instead of duplicating it for each type.
  • Implemented the remaining metadata types.
  • Changed Country list source from Wikipedia to Debian iso-codes and updated the list of Countries.
  • Many many corrections and improvements.
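
To give a flavour of the OData support mentioned above, a search for folders with a given word in the title could look something like this. The host, port and token handling are made up for the example, while mappe and tittel are names from the Noark 5 model:

curl -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8092/noark5v5/api/arkivstruktur/mappe?\$filter=contains(tittel,'eksempel')"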

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

27th February 2020

On Tuesday, two scientific articles we have been working on for a while were finally accepted for publication in Records Management Journal. We are still waiting for the assigned DOI URLs to start working, but you can have a look at the LaTeX originals here.

The first article is "A record-keeping approach to managing IoT-data for government agencies" (DOI 10.1108/RMJ-09-2019-0050) by Thomas Sødring, Petter Reinholdtsen and David Massey, and sketches some approaches for storing measurement data (aka Internet of Things sensor data) in an archive, thus providing a well defined mechanism for screening and deletion of the information.

The second article is "Publishing and using record-keeping structural information in a blockchain" (DOI 10.1108/RMJ-09-2019-0056) by Thomas Sødring, Petter Reinholdtsen and Svein Ølnes, where we describe a way for third parties to validate the authenticity of, and thus improve trust in, the records kept in an archive.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2020-04-26: Initially managed to swap the DOI numbers. Fixed it.

Tags: english, noark5.
7th December 2019

When asked to accept terms of use and privacy policies that remove rights I otherwise had, or that undermine my privacy with unreasonable terms, I opt out of the service. I simply do not have the conscience to accept terms I have no intention of upholding. But how are the system and service providers to know how many people they scared away? Normally I just quietly walk away. But today, I tried a new approach. I sent the following email (with the specifics removed, as I am not out to single out the specific service in question) to the service provider I decided not to use, to at least give them one data point on how many users are unhappy with their terms:

From: Petter Reinholdtsen
Subject: When terms of use turn users away
To: [contact@some.site]
Date: Sat, 07 Dec 2019 16:30:56 +0100

Dear [Site Owner],

I was eager to test the system, as it seemed like a fun and interesting application of [some] technology, but after reading the terms of use and privacy policy on <URL: https://www.[some.site]/terms-of-use > and <URL: https://www.[some.site]/privacy-policy > I want you to know that I decided to turn away. There were several provisions in the terms and policy turning me off, but the final term that convinced me was being asked to sign away my right to reverse engineer.

--
Happy hacking
Petter Reinholdtsen

I do not expect much to come out of it, but I am sharing it here in case others want to give something similar a try too. If companies discover their terms scare away enough people, perhaps they will improve them...

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th November 2019

Four years ago, I did a back of the envelope calculation on how much it would cost to store audio recordings of all the phone calls in Norway, and came up with NOK 2.1 million / EUR 250 000 for the year 2013. It is time to repeat the calculation using updated numbers. The calculation is based on how much data storage is needed for each minute of audio and how many minutes all the calls in Norway sum up to, multiplied by the cost of data storage.

The number of phone call minutes for 2018 was fetched from the NKOM statistics site: land line calls are listed as 434 238 000 minutes, while mobile phone calls are listed with 7 542 006 000 minutes. The total number of minutes is thus 7 976 244 000. For simplicity, I decided to ignore any advantages in audio compression gained over the last four years, and continue to assume 60 Kbytes/min as last time.

Storage prices still vary a lot, but as last time, I decided to pick a reasonably big and cheap hard drive, and double its price to take the surrounding costs into account. A 10 TB disk costs less than 4500 NOK / 450 EUR these days, and doubling it gives 9000 NOK per 10 TB.

So, with the parameters in place, let's update the old table estimating the cost for calls in a given year:

Year  Call minutes    Size     Price in NOK / EUR
2005  24 000 000 000  1.3 PiB  1 170 000 / 117 000
2012  18 000 000 000  1.0 PiB    900 000 /  90 000
2013  17 000 000 000  950 TiB    855 000 /  85 500
2018   7 976 244 000  445 TiB    401 100 /  40 110

Both the cost of storage and the number of phone call minutes have dropped since last time, bringing the cost down to a level where I guess even small organizations can afford to store the audio recordings of every phone call made in Norway in a year. Of course, this is just the cost of buying the storage equipment. Maintenance needs to be included as well, but the volume of a single year is about a single rack of hard drives, which is not much more than I could fit in my own home. I wonder how much the electricity bill would rise if I had that kind of storage? I doubt it would be more than a few tens of thousands of NOK per year.
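
For those who want to check the arithmetic, this is roughly the calculation behind the 2018 row, assuming 60 KiB per minute and a storage price of 900 NOK per TiB (9000 NOK per 10 TB disk, rounding 10 TB to 10 TiB):

echo '7976244000 * 60 / 1024^3' | bc -l   # minutes to TiB, approximately 445
echo '445.7 * 900' | bc -l                # approximately 401 100 NOK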

1st September 2019

While working on identifying and counting movies that can be legally shared on the Internet, I also looked at the Norwegian movies listed in IMDb. So far I have identified 54 candidates published before 1940 that might no longer be protected by Norwegian copyright law. Of these, only 29 are available, at least in part, from the Norwegian National Library. It can be assumed that the remaining 25 movies are lost. It seems most useful to identify the copyright status of the movies that are not lost. To verify that a movie really is no longer protected, one needs to verify the list of copyright holders and figure out if and when they died. I have been able to identify some of them, but for some it is hard to figure out when they died.

This is the list of 29 movies that are both available from the library and possibly no longer protected by copyright law. The year range (1909-1979 on the first line) is the year of publication and the last year with copyright protection.

1909-1979 ( 70 year) NSB Bergensbanen 1909 - http://www.imdb.com/title/tt0347601/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons likfærd - http://www.imdb.com/title/tt9299304/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons begravelse - http://www.imdb.com/title/tt9299300/
1912-1998 ( 86 year) Roald Amundsens Sydpolsferd (1910-1912) - http://www.imdb.com/title/tt9237500/
1913-2006 ( 93 year) Roald Amundsen på sydpolen - http://www.imdb.com/title/tt0347886/
1917-1987 ( 70 year) Fanden i nøtten - http://www.imdb.com/title/tt0346964/
1919-2018 ( 99 year) Historien om en gut - http://www.imdb.com/title/tt0010259/
1920-1990 ( 70 year) Kaksen på Øverland - http://www.imdb.com/title/tt0011361/
1923-1993 ( 70 year) Norge - en skildring i 6 akter - http://www.imdb.com/title/tt0014319/
1925-1997 ( 72 year) Roald Amundsen - Ellsworths flyveekspedition 1925 - http://www.imdb.com/title/tt0016295/
1925-1995 ( 70 year) En verdensreise, eller Da knold og tott vaskede negrene hvite med 13 sæpen - http://www.imdb.com/title/tt1018948/
1926-1996 ( 70 year) Luftskibet 'Norge's flugt over polhavet - http://www.imdb.com/title/tt0017090/
1926-1996 ( 70 year) Med 'Maud' over Polhavet - http://www.imdb.com/title/tt0017129/
1927-1997 ( 70 year) Den store sultan - http://www.imdb.com/title/tt1017997/
1928-1998 ( 70 year) Noahs ark - http://www.imdb.com/title/tt1018917/
1928-1998 ( 70 year) Skjæbnen - http://www.imdb.com/title/tt1002652/
1928-1998 ( 70 year) Chefens cigarett - http://www.imdb.com/title/tt1019896/
1929-1999 ( 70 year) Se Norge - http://www.imdb.com/title/tt0020378/
1929-1999 ( 70 year) Fra Chr. Michelsen til Kronprins Olav og Prinsesse Martha - http://www.imdb.com/title/tt0019899/
1930-2000 ( 70 year) Mot ukjent land - http://www.imdb.com/title/tt0021158/
1930-2000 ( 70 year) Det er natt - http://www.imdb.com/title/tt1017904/
1930-2000 ( 70 year) Over Besseggen på motorcykel - http://www.imdb.com/title/tt0347721/
1931-2001 ( 70 year) Glimt fra New York og den Norske koloni - http://www.imdb.com/title/tt0021913/
1932-2007 ( 75 year) En glad gutt - http://www.imdb.com/title/tt0022946/
1934-2004 ( 70 year) Den lystige radio-trio - http://www.imdb.com/title/tt1002628/
1935-2005 ( 70 year) Kronprinsparets reise i Nord Norge - http://www.imdb.com/title/tt0268411/
1935-2005 ( 70 year) Stormangrep - http://www.imdb.com/title/tt1017998/
1936-2006 ( 70 year) En fargesymfoni i blått - http://www.imdb.com/title/tt1002762/
1939-2009 ( 70 year) Til Vesterheimen - http://www.imdb.com/title/tt0032036/

To be sure which of these can be legally shared on the Internet, in addition to verifying that the rights holder list is complete, one needs to verify the death year of these persons:
Bjørnstjerne Bjørnson (dead 1910) - http://www.imdb.com/name/nm0085085/
Gustav Adolf Olsen (missing death year) - http://www.imdb.com/name/nm0647652/
Gustav Lund (missing death year) - http://www.imdb.com/name/nm0526168/
John W. Brunius (dead 1937) - http://www.imdb.com/name/nm0116307/
Ola Cornelius (missing death year) - http://www.imdb.com/name/nm1227236/
Oskar Omdal (dead 1927) - http://www.imdb.com/name/nm3116241/
Paul Berge (missing death year) - http://www.imdb.com/name/nm0074006/
Peter Lykke-Seest (dead 1948) - http://www.imdb.com/name/nm0528064/
Roald Amundsen (dead 1928) - https://www.imdb.com/name/nm0025468/
Sverre Halvorsen (dead 1936) - http://www.imdb.com/name/nm1299757/
Thomas W. Schwartz (missing death year) - http://www.imdb.com/name/nm2616250/

Perhaps you can help me figure out the death years of those missing one, or the rights holders missing from IMDb? It would be nice to have a definitive list of Norwegian movies that are legal to share on the Internet.

This is the list of 25 movies not available from the library and possibly no longer protected by copyright law:

1907-2009 (102 year) Fiskerlivets farer - http://www.imdb.com/title/tt0121288/
1912-2018 (106 year) Historien om en moder - http://www.imdb.com/title/tt0382852/
1912-2002 ( 90 year) Anny - en gatepiges roman - http://www.imdb.com/title/tt0002026/
1916-1986 ( 70 year) The Mother Who Paid - http://www.imdb.com/title/tt3619226/
1917-2018 (101 year) En vinternat - http://www.imdb.com/title/tt0008740/
1917-2018 (101 year) Unge hjerter - http://www.imdb.com/title/tt0008719/
1917-2018 (101 year) De forældreløse - http://www.imdb.com/title/tt0007972/
1918-2018 (100 year) Vor tids helte - http://www.imdb.com/title/tt0009769/
1918-2018 (100 year) Lodsens datter - http://www.imdb.com/title/tt0009314/
1919-2018 ( 99 year) Æresgjesten - http://www.imdb.com/title/tt0010939/
1921-2006 ( 85 year) Det nye år? - http://www.imdb.com/title/tt0347686/
1921-1991 ( 70 year) Under Polarkredsens himmel - http://www.imdb.com/title/tt0012789/
1923-1993 ( 70 year) Nordenfor polarcirkelen - http://www.imdb.com/title/tt0014318/
1925-1995 ( 70 year) Med 'Stavangerfjord' til Nordkap - http://www.imdb.com/title/tt0016098/
1926-1996 ( 70 year) Over Atlanterhavet og gjennem Amerika - http://www.imdb.com/title/tt0017241/
1926-1996 ( 70 year) Hallo! Amerika! - http://www.imdb.com/title/tt0016945/
1926-1996 ( 70 year) Tigeren Teodors triumf - http://www.imdb.com/title/tt1008052/
1927-1997 ( 70 year) Rød sultan - http://www.imdb.com/title/tt1017979/
1927-1997 ( 70 year) Søndagsfiskeren Flag - http://www.imdb.com/title/tt1018002/
1930-2000 ( 70 year) Ro-ro til fiskeskjær - http://www.imdb.com/title/tt1017973/
1933-2003 ( 70 year) I kongens klær - http://www.imdb.com/title/tt0024164/
1934-2004 ( 70 year) Eventyret om de tre bukkene bruse - http://www.imdb.com/title/tt1007963/
1934-2004 ( 70 year) Pål sine høner - http://www.imdb.com/title/tt1017966/
1937-2007 ( 70 year) Et mesterverk - http://www.imdb.com/title/tt1019937/
1938-2008 ( 70 year) En Harmony - http://www.imdb.com/title/tt1007975/

Several of these movies completely lack rights holder information in IMDb and elsewhere. Without access to a copy of the movie, it is often impossible to get the list of people involved in making it, making it impossible to figure out the correct copyright status.

Not listed here are the movies still protected by copyright law. Their copyright terms vary from 79 to 144 years, according to the information I have available so far. One of the non-lost movies might change status next year: Mustads Mono from 1920. The next might be Hvor isbjørnen ferdes from 1935, in 2024.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10th August 2019

The recent announcement from the New York Public Library of its results in identifying books published in the USA that are now in the public domain inspired me to update the scripts I use to track down movies that are in the public domain. This involved updating the script used to extract lists of movies believed to be in the public domain to work with the latest version of the source web sites. In particular, the new edition of the Retro Film Vault web site now seems to list all the films available from that distributor, bringing the films identified there to more than 12,000 movies, and I was able to connect 46% of these to IMDB titles.

The new total is 16307 IMDB IDs (aka films) in the public domain or Creative Commons licensed, plus 31460 movies (possibly duplicates of the 16307) with unknown status.

The complete data set is available from a public git repository, including the scripts used to create it.

Anyway, this is the summary of the 28 collected data sources so far:

 2361 entries (   50 unique) with and 22472 without IMDB title ID in free-movies-archive-org-search.json
 2363 entries (  146 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  299 entries (   32 unique) with and    93 without IMDB title ID in free-movies-cinemovies.json
   88 entries (   52 unique) with and    36 without IMDB title ID in free-movies-creative-commons.json
 3190 entries ( 1532 unique) with and    13 without IMDB title ID in free-movies-fesfilm-xls.json
  620 entries (   24 unique) with and   283 without IMDB title ID in free-movies-fesfilm.json
 1080 entries (  165 unique) with and   651 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   13 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 7410 entries ( 7101 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
 1205 entries (   41 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  163 entries (   22 unique) with and    88 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  103 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (   71 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  248 entries (   85 unique) with and     0 without IMDB title ID in free-movies-manual.json
  158 entries (    4 unique) with and    64 without IMDB title ID in free-movies-mubi.json
   85 entries (    1 unique) with and    23 without IMDB title ID in free-movies-openflix.json
  520 entries (   22 unique) with and   244 without IMDB title ID in free-movies-profilms-pd.json
  343 entries (   14 unique) with and    10 without IMDB title ID in free-movies-publicdomainmovies-info.json
  701 entries (   16 unique) with and   560 without IMDB title ID in free-movies-publicdomainmovies-net.json
   74 entries (   13 unique) with and    60 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   16 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 5506 entries ( 2941 unique) with and  6585 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
  110 entries (    2 unique) with and    29 without IMDB title ID in free-movies-two-movies-net.json
   73 entries (   20 unique) with and   131 without IMDB title ID in free-movies-vodo.json
16307 unique IMDB title IDs in total, 12509 only in one list, 31460 without IMDB title ID

New this time is a list of all the identified IMDB titles, with title, year and running time, provided in free-complete.json. This file also indicates which source was used to conclude the video is free to distribute.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th July 2019

Children need to learn how to guard their privacy too. To help them, European Digital Rights (EDRi) created a colorful booklet providing information on several privacy related topics, and tips on how to protect one's privacy in the digital age.

The 24 page booklet titled Digital Defenders is available in several languages. Thanks to the valuable contributions from members of Electronic Frontier Norway (EFN) and others, it is also available in Norwegian Bokmål. If you would like to have it available in your language too, contribute via Weblate and get in touch.

But a funny, well written and good looking PDF does not have much impact unless it is read by the right audience. To increase the chance of kids reading it, I am currently assisting EFN in getting copies printed on paper to distribute on the street and in classrooms. Printing the booklet was made possible thanks to a small set of great sponsors. Thank you very much to each and every one of them! I hope to have the printed booklet ready to hand out on Tuesday, when the Norwegian Unix User Group is organizing its yearly barbecue for geeks and free software zealots in the Oslo area. If you are nearby, feel free to come by and check out the party and the booklet.

If the booklet proves to be a success, it would be great to get more sponsorship and distribute it to every kid in the country. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

19th June 2019

Some years ago, in 2016, I wrote for the first time about the Ring peer to peer messaging system. It would provide messaging without any central server coordinating the system and without requiring all users to register a phone number or own a mobile phone. Back then, I could not get it to work, and put it aside until it had seen more development. A few days ago I decided to give it another try, and am happy to report that this time I am able to not only send and receive messages, but also place audio and video calls. But only if UDP is not blocked on your network.

The Ring system changed its name to Jami earlier this year. I tried doing a web search for 'ring' when I discovered it for the first time, and can only applaud this change, as it is impossible to find something called Ring among the noise of other uses of that word. Now you can search for 'jami', and the client and the Jami system are the first hits, at least on DuckDuckGo.

Jami will by default encrypt messages as well as audio and video calls, and try to send them directly between the communicating parties if possible. If this proves impossible (for example if both ends are behind NAT), it will use a central SIP TURN server maintained by the Jami project. Jami can also be a normal SIP client. If the SIP server is unencrypted, the audio and video calls will also be unencrypted. This is, as far as I know, the only case where Jami will do anything without encryption.

Jami is available for several platforms: Linux, Windows, MacOSX, Android, iOS, and Android TV. It is included in Debian already. Jami also works for those using F-Droid without any Google connections, while Signal does not. The protocol is described in the Ring project wiki. The system uses a distributed hash table (DHT, similar to BitTorrent) running over UDP. On one of the networks I use, I discovered Jami failed to work. I tracked this down to the fact that incoming UDP packets going to ports 1-49999 were blocked, and the DHT would pick a random port and end up in the low range most of the time. After talking to the developers, I solved this by enabling the dhtproxy in the settings, thus using TCP to talk to a central DHT proxy instead of peering directly with others. I have been told the developers are working on allowing the DHT to use TCP to avoid this problem. I also ran into a problem when trying to talk to the version of Ring included in Debian Stable (Stretch). Apparently the protocol changed between beta2 and the current version, making these clients incompatible. Hopefully the protocol will not be made incompatible again in the future.

It is worth noting that while looking at Jami and its features, I came across another communication platform I have not tested yet: the Tox protocol and its family of Tox clients. It might become the topic of a future blog post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th June 2019

The first book I published, Free Culture by Lawrence Lessig, is still selling a few copies. Not a lot, but enough to have contributed slightly over $500 to the Creative Commons Corporation so far. All the profit is sent there. Most books are still sold via Amazon (83 copies), with Ingram second (49) and Lulu (12) and Machette (7) as minor channels. Buying directly from Lulu brings the largest cut to Creative Commons. The English edition has sold 80 copies so far, the French 59 copies, and the Norwegian only 8 copies. Nothing impressive, but nice to see the work we put in is still being appreciated. The ebook edition is available for free from Github.

Title / language         Quantity
                         2016     2016     2017     2017     2018     2018     2019
                         jan-jun  jul-dec  jan-jun  jul-dec  jan-jun  jul-dec  jan-may
Culture Libre / French         3        6       19       11        7        6        7
Fri kultur / Norwegian         7        1        0        0        0        0        0
Free Culture / English        14       27       16        9        3        7        3
Total                         24       34       35       20       10       13       10

It is fun to see the French edition being more popular than the English one.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

4th June 2019

Just 15 days ago, I mentioned my submission to IANA to register an official MIME type for the SOSI vector map format. This morning, just an hour ago, I was notified that the MIME type "text/vnd.sosi" is registered for this format. In addition to this registration, my file(1) patch for a pattern matching rule for SOSI files has been accepted into the official source of that program (pending a new release), and I've been told by the team behind PRONOM that the SOSI format will be included in the next release of PRONOM, which they plan to release this summer around July.

I am very happy to see all of this fall into place, for use by the Noark 5 Tjenestegrensesnitt implementations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd June 2019

A while back, a colleague and friend from Debian and the Skolelinux / Debian Edu project approached me, asking if I knew someone who might be interested in helping out with a technology project he was running as a teacher at L'école franco-danoise, the Danish-French school and kindergarten. The kids were building robots, rovers. The story behind it is to build a rover for use on the dark side of the moon and control it remotely. As travel costs to the final destination were a bit high, and they wanted to test the concept first, he was looking for volunteers to host a rover for the kids to control in a foreign country. I ended up volunteering as a host, and last week the rover arrived. It took a while to arrive after it was built and shipped, because of customs confusion. Luckily we were able to fix it quickly with help from my colleagues at work.

This is what it looked like when the rover arrived. Note the cute eyes looking up at me from the wrapping.

Once the robot arrived, we needed to track down batteries and figure out how to build custom firmware for it with the appropriate wifi settings. I asked a friend if I could get two 18650 batteries from his pile of Tesla batteries (he had them from the wreck of a crashed Tesla), so now the rover is running on Tesla batteries.

Building the rover firmware proved a bit harder, as the code did not work out of the box with the Arduino IDE package in Debian Buster. I suspect this is due to an unsolved license problem with Arduino blocking Debian from upgrading to the latest version. In the end we gave up debugging why the IDE failed to find the required libraries, and ended up using the Arduino Makefile from the arduino-mk Debian package instead, as sketched below. Unfortunately the camera library is missing from the Arduino environment in Debian, so we disabled the camera support in the first firmware build to get something up and running. With this reduced firmware, the robot could be controlled via the controller server, driving around and measuring distance using its internal acoustic sensor.
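
For reference, arduino-mk builds are driven by a small Makefile along these lines. The board tag and library list are made up here and must match the actual hardware:

BOARD_TAG    = uno
ARDUINO_LIBS =
include /usr/share/arduino/Arduino.mk

With this in place, 'make' compiles the firmware and 'make upload' flashes it.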

Next, with some help from my friend in Denmark, who checked the camera library into the gitlab repository for me to use, we were able to build a new and more complete version of the firmware, and the robot is now up and running. This is what the "commander" web page looks like after taking a measurement and a snapshot:

If you want to learn more about this project, you can check out the Dark Side Challenge Hackaday web pages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, robot.
22nd May 2019

This morning, a new release of the Nikita Noark 5 core project was announced on the project mailing list. The Nikita free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.4 since version 0.3; see the email link above for links to a demo site:

  • Roll out OData handling to all endpoints where applicable
  • Changed the relation key for "ny-journalpost" to the official one.
  • Better link generation on outgoing links.
  • Tidy up code and make code and approaches more consistent throughout the codebase
  • Update rels to be in compliance with updated version in the interface standard
  • Avoid printing links on empty objects as they can't have links
  • Small bug fixes and improvements
  • Start moving generation of outgoing links to @Service layer so access control can be used when generating links
  • Log exception that was being swallowed so it's traceable
  • Fix name mapping problem
  • Update templated printing so templated should only be printed if it is set true. Requires more work to roll out across entire application.
  • Remove Record->DocumentObject as per domain model of n5v4
  • Add ability to delete lists filtered with OData
  • Return NO_CONTENT (204) on delete as per interface standard
  • Introduce support for ConstraintViolationException exception
  • Make Service classes extend NoarkService
  • Make code base respect X-Forwarded-Host, X-Forwarded-Proto and X-Forwarded-Port
  • Update CorrespondencePart* code to be more in line with Single Responsibility Principle
  • Make package name follow directory structure
  • Make sure Document number starts at 1, not 0
  • Fix issues discovered by FindBugs
  • Update from Date to ZonedDateTime
  • Fix wrong tablename
  • Introduce Service layer tests
  • Improvements to CorrespondencePart
  • Continued work on Class / Classificationsystem
  • Fix feature where authors were stored as storageLocations
  • Update HQL builder for OData
  • Update OData search capability from webpage

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th May 2019

As part of my involvement in the work to standardise a REST based API for Noark 5, the Norwegian archiving standard, I have spent some time over the last few months trying to register a MIME type and a PRONOM code for the SOSI file format. The background is that there is a set of formats approved for long term storage and archiving in Norway, and among these, SOSI is the only format missing a MIME type and PRONOM code.

What is SOSI, you might ask? To quote Wikipedia: SOSI is short for Samordnet Opplegg for Stedfestet Informasjon (literally "Coordinated Approach for Spatial Information", but more commonly expanded in English to Systematic Organization of Spatial Information). It is a text based file format for geo-spatial vector information used in Norway. Information about the SOSI format can be found in English on Wikipedia. The specification is available in Norwegian from the Norwegian mapping authority. The SOSI standard, which originated in the beginning of the nineteen eighties, was the inspiration for and formed the basis of the XML based Geography Markup Language.

I have so far written a pattern matching rule for the file(1) unix tool to recognize SOSI files, submitted a request to the PRONOM project to have a PRONOM ID assigned to the format (reference TNA1555078202S60), and today sent a request to IANA to register the "text/vnd.sosi" MIME type for this format (reference IANA #1143144). If all goes well, in a few months anyone implementing the Noark 5 Tjenestegrensesnitt API specification should be able to use an official MIME type and PRONOM code for SOSI files. In addition, anyone using SOSI files on Linux should be able to automatically recognise the format, and web sites handing out SOSI files can begin providing a more specific MIME type. So far, SOSI files have been handed out from web sites using the "application/octet-stream" MIME type, which is just a nice way of stating "I do not know". Soon, we will know. :)
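
For illustration, a minimal magic(5) rule could look like the following, assuming SOSI files start with the '.HODE' header group. The pattern actually accepted upstream may well be more elaborate:

0	string	.HODE	SOSI vector map data
!:mime	text/vnd.sosi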

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th March 2019

As part of my involvement with the Nikita Noark 5 core project, I have been proposing improvements to the API specification created by the National Archives of Norway, and helped migrate the text from a version control unfriendly binary format (docx) to Markdown in git. Combined with the migration to a public git repository (on github), this has made it possible for anyone to suggest improvements to the text.

The specification is filled with UML diagrams. I believe the original diagrams were modelled using Sparx Systems Enterprise Architect, and exported as EMF files for import into docx. This approach makes it very hard to track changes using a version control system. To improve the situation, I have been looking for a good text based UML format with associated command line free software tools on Linux and Windows, to allow anyone to send in corrections to the UML diagrams in the specification. The tool must be text based to work with git, and command line based so the diagram images can be generated automatically. Finally, it must be free software, to allow anyone, even those who can not accept a non-free software license, to contribute.

I did not know much about free software UML modelling tools when I started. I have used dia and inkscape for simple modelling in the past, but neither is available on Windows, as far as I could tell. I came across a nice list of text mode UML tools, and tested a few of the tools listed there. The PlantUML tool seemed most promising. After verifying that the package is available in Debian and finding its Java source under a GPL license on github, I set out to test if it could represent the diagrams we needed, i.e. the ones currently in the Noark 5 Tjenestegrensesnitt specification. I am happy to report that it could represent them, even though it has a few warts here and there.

After a few days of modelling, I completed the task this weekend. A temporary link to the complete set of diagrams (original and from PlantUML) is available in the github issue discussing the need for a text based UML format, but please note that I lack a sensible tool to convert EMF files to PNGs, so the "original" rendering is not as good as it was in the published PDF.

Here is an example UML diagram, showing the core classes for keeping metadata about archived documents:

@startuml
skinparam classAttributeIconSize 0

!include media/uml-class-arkivskaper.iuml
!include media/uml-class-arkiv.iuml
!include media/uml-class-klassifikasjonssystem.iuml
!include media/uml-class-klasse.iuml
!include media/uml-class-arkivdel.iuml
!include media/uml-class-mappe.iuml
!include media/uml-class-merknad.iuml
!include media/uml-class-registrering.iuml
!include media/uml-class-basisregistrering.iuml
!include media/uml-class-dokumentbeskrivelse.iuml
!include media/uml-class-dokumentobjekt.iuml
!include media/uml-class-konvertering.iuml
!include media/uml-datatype-elektronisksignatur.iuml

Arkivstruktur.Arkivskaper "+arkivskaper 1..*" <-o "+arkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv o--> "+underarkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv "+arkiv 1" o--> "+arkivdel 0..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" <--o "+arkivdel 1..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" o--> "+klasse 0..*" Arkivstruktur.Klasse
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Klasse "+klasse 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Klasse "+klasse 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Mappe --> "+undermappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Mappe "+mappe 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Mappe
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Basisregistrering -|> Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Basisregistrering
Arkivstruktur.Registrering "+registrering 1..*" o--> "+dokumentbeskrivelse 0..*" Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Dokumentbeskrivelse "+dokumentbeskrivelse 1" o-> "+dokumentobjekt 0..*" Arkivstruktur.Dokumentobjekt
Arkivstruktur.Dokumentobjekt *-> "+konvertering 0..*" Arkivstruktur.Konvertering
Arkivstruktur.ElektroniskSignatur -[hidden]-> Arkivstruktur.Dokumentobjekt
@enduml

The format is quite compact, with little redundant information. The text expresses entities and relations, and there is little layout related fluff. One can reuse content by using include files, allowing for consistent naming across several diagrams. The include files can be standalone PlantUML files too. Here is the content of media/uml-class-arkivskaper.iuml:

@startuml
class Arkivstruktur.Arkivskaper  {
  +arkivskaperID : string
  +arkivskaperNavn : string
  +beskrivelse : string [0..1]
}
@enduml
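
To render a diagram from the command line using the Debian package, an invocation like this should be all that is needed (the file name is made up):

plantuml -tpng diagram.puml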

This is what the complete diagram for the PlantUML notation above looks like:

A cool feature of PlantUML is that the generated PNG files include the entire original source diagram as text. The source (with include statements expanded) can be extracted using for example exiftool. Another cool feature is that parts of the entities can be hidden after inclusion. This allows using include files with all attributes listed, even for UML diagrams that should not list any attributes.
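
As a minimal example, dumping all the metadata, including the embedded diagram source, can be done like this (again with a made up file name):

exiftool diagram.png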

The diagram also shows some of the warts. Sometimes the layout engine places text labels on top of each other, and sometimes it places the class boxes too close to each other, not leaving room for the labels on the relationship arrows. The former can be worked around by inserting extra newlines in the labels (ie "\n"). I did not do that here, to be able to demonstrate the issue. I have not found a good way around the latter, so I normally try to reduce the problem by changing from vertical to horizontal links to improve the layout.

All in all, I am quite happy with PlantUML, and very impressed with how quickly its lead developer responds to questions. So far I have received answers to my questions within a few hours of sending an email. I definitely recommend looking at PlantUML if you need to make UML diagrams. Note, PlantUML can draw a lot more than class relations. Check out the documentation for a complete list. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th March 2019

Yesterday, a new release of Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.3 since version 0.2.1 (from NEWS.md):

  • Improved ClassificationSystem and Class behaviour.
  • Tidied up known inconsistencies between domain model and hateoas links.
  • Added experimental code for blockchain integration.
  • Make token expiry time configurable at startup from the properties file.
  • Continued work on OData search syntax.
  • Started work on pagination for entities, partly implemented for Saksmappe.
  • Finalise ClassifiedCode Metadata entity.
  • Implement mechanism to check if authentication token is still valid. This allows the GUI to return a more sensible message to the user if the token is expired.
  • Reintroduce browse.html page to allow user to browse JSON API using hateoas links.
  • Fix bug in handling file/mappe sequence number. Year change was not properly handled.
  • Update application yml files to be in sync with current development.
  • Stop 'converting' everything to PDF using libreoffice. Only convert the file formats doc, ppt, xls, docx, pptx, xlsx, odt, odp and ods.
  • Continued code style fixing, making code more readable.
  • Minor bug fixes.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st February 2019

Yesterday, the Kraken virtual currency exchange announced its Websocket service, providing a stream of exchange updates to its clients. Getting updated rates quickly is a good idea, so I used their API documentation and added Websocket support to the Kraken service in Valutakrambod today. The python library can now get updates from Kraken several times per second, instead of only when the information is polled from the REST API.
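
To give an idea of what such a client involves, here is a minimal standalone sketch using tornado directly (the same library valutakrambod builds on). The endpoint and subscription message follow Kraken's public websocket API documentation, where bitcoin is named XBT; treat the details as an illustration rather than a reference:

#!/usr/bin/python3
# Subscribe to the Kraken websocket ticker stream for the XBT/EUR
# pair and print the first few raw updates received.
import json

import tornado.ioloop
import tornado.websocket

async def watch():
    conn = await tornado.websocket.websocket_connect('wss://ws.kraken.com/')
    await conn.write_message(json.dumps({
        'event': 'subscribe',
        'pair': ['XBT/EUR'],
        'subscription': {'name': 'ticker'},
    }))
    for _ in range(10):
        msg = await conn.read_message()
        if msg is None:  # connection closed by the server
            break
        print(msg)

tornado.ioloop.IOLoop.current().run_sync(watch)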

If this sounds interesting to you, the code for valutakrambod is available from github. Here is example output from the example client displaying rates in a curses view:

           Name Pair   Bid         Ask         Spr    Ftcd    Age
 BitcoinsNorway BTCEUR   2959.2800   3021.0500   2.0%   36    nan    nan
       Bitfinex BTCEUR   3087.9000   3088.0000   0.0%   36     37    nan
        Bitmynt BTCEUR   3001.8700   3135.4600   4.3%   36     52    nan
         Bitpay BTCEUR   3003.8659         nan   nan%   35    nan    nan
       Bitstamp BTCEUR   3008.0000   3010.2300   0.1%    0      1      1
           Bl3p BTCEUR   3000.6700   3010.9300   0.3%    1    nan    nan
       Coinbase BTCEUR   2992.1800   3023.2500   1.0%   34    nan    nan
         Kraken+BTCEUR   3005.7000   3006.6000   0.0%    0      1      0
        Paymium BTCEUR   2940.0100   2993.4400   1.8%    0   2688    nan
 BitcoinsNorway BTCNOK  29000.0000  29360.7400   1.2%   36    nan    nan
        Bitmynt BTCNOK  29115.6400  29720.7500   2.0%   36     52    nan
         Bitpay BTCNOK  29029.2512         nan   nan%   36    nan    nan
       Coinbase BTCNOK  28927.6000  29218.5900   1.0%   35    nan    nan
        MiraiEx BTCNOK  29097.7000  29741.4200   2.2%   36    nan    nan
 BitcoinsNorway BTCUSD   3385.4200   3456.0900   2.0%   36    nan    nan
       Bitfinex BTCUSD   3538.5000   3538.6000   0.0%   36     45    nan
         Bitpay BTCUSD   3443.4600         nan   nan%   34    nan    nan
       Bitstamp BTCUSD   3443.0100   3445.0500   0.1%    0      2      1
       Coinbase BTCUSD   3428.1600   3462.6300   1.0%   33    nan    nan
         Gemini BTCUSD   3445.8800   3445.8900   0.0%   36    326    nan
         Hitbtc BTCUSD   3473.4700   3473.0700  -0.0%    0      0      0
         Kraken+BTCUSD   3444.4000   3445.6000   0.0%    0      1      0
  Exchangerates EURNOK      9.6685      9.6685   0.0%   36  22226    nan
     Norgesbank EURNOK      9.6685      9.6685   0.0%   36  22226    nan
       Bitstamp EURUSD      1.1440      1.1462   0.2%    0      1      2
  Exchangerates EURUSD      1.1471      1.1471   0.0%   36  22226    nan
 BitcoinsNorway LTCEUR      1.0009     22.6538  95.6%   35    nan    nan
 BitcoinsNorway LTCNOK    259.0900    264.9300   2.2%   35    nan    nan
 BitcoinsNorway LTCUSD      0.0000     29.0000 100.0%   35    nan    nan
     Norgesbank USDNOK      8.4286      8.4286   0.0%   36  22226    nan

Yes, I notice the strange negative spread on Hitbtc. I've seen the same on Kraken. Another strange observation is that Kraken sometimes announces trade orders a fraction of a second in the future. I really wonder what is going on there.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
22nd January 2019

I am amazed and very pleased to discover that since a few days ago, everything you need to program the BBC micro:bit is available from the Debian archive. All this is thanks to the hard work of Nick Morrott and the Debian python packaging team. The micro:bit project recommends the mu-editor to program the microcomputer, as this editor will take care of all the machinery required to inject/flash micropython alongside the program into the micro:bit, as long as the pieces are available.

There are three main pieces involved. The first to enter Debian was python-uflash, which was accepted into the archive 2019-01-12. The next one was mu-editor, which showed up 2019-01-13. The final and hardest part to get into the archive was firmware-microbit-micropython, which needed to get its build system and dependencies into Debian before it was accepted 2019-01-20. The last one is already in Debian Unstable and should enter Debian Testing / Buster in three days. This allows any user of the micro:bit to get going by simply running 'apt install mu-editor' when using Testing or Unstable, and once Buster is released as stable, all the users of Debian stable will be catered for.
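
For those curious how the flashing works, the mu-editor relies on the uflash module to embed the program into the MicroPython firmware hex and copy it to the mounted micro:bit. Here is a minimal sketch of doing the same directly from Python, assuming a micro:bit is connected and that I read the uflash API correctly:

#!/usr/bin/python3
# Flash a trivial MicroPython program onto a connected micro:bit
# using the uflash module packaged in Debian.
import uflash

# Write a small program to a file first.
with open('hello.py', 'w') as f:
    f.write("from microbit import display\ndisplay.scroll('Hello')\n")

# Locate the mounted micro:bit and flash the program together with
# the bundled MicroPython runtime.
uflash.flash(path_to_python='hello.py')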

As a minor final touch, I added rules to the isenkram package for recognizing the micro:bit and recommending the mu-editor package. This makes sure any user of the isenkram desktop daemon will get a popup suggesting to install mu-editor when the USB cable from the micro:bit is inserted for the first time.

This should make it easier to have fun.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, robot.
15th January 2019

The layered video playout server created by Sveriges Television, CasparCG Server, entered Debian today. This completes many months of work to get the source ready to go into Debian. The first upload to the Debian NEW queue happened a month ago, but the work upstream to prepare it for Debian started more than two and a half months ago. So far the casparcg-server package is only available for amd64, but I hope this can be improved. The package is in contrib because it depends on the non-free fdk-aac library. The Debian package lacks support for streaming web pages because Debian is missing CEF, the Chromium Embedded Framework. CEF is wanted by several packages in Debian, but because the Chromium source is not available as a build dependency, it is not yet possible to upload CEF to Debian. I hope this will change in the future.

The reason I got involved is that the Norwegian open channel Frikanalen is starting to use CasparCG for our HD playout, and I would like to have all the free software tools we use to run the TV channel available as packages from the Debian project. The last remaining piece in the puzzle is Open Broadcast Encoder, but it depends on quite a lot of patched libraries which would have to be included in Debian first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

15th December 2018

A fun way to learn how to program Python is to follow the instructions in the book "Learn to program with Minecraft", which introduces programming in Python to people who like to play with Minecraft. The book uses a Python library to talk to a TCP/IP socket with an API accepting build instructions and providing information about the current players in a Minecraft world. The TCP/IP API was first created for the Minecraft implementation for Raspberry Pi, and has since been ported to some server versions of Minecraft. The book contains recipes for those using Windows, MacOSX and Raspbian. But a little known fact is that you can follow the same recipes using the free software construction game Minetest.

There is a Minetest module implementing the same API, making it possible to use the Python programs coded to talk to Minecraft with Minetest too. I uploaded this module to Debian two weeks ago, and as soon as it clears the FTP masters' NEW queue, learning to program Python with Minetest on Debian will be a simple 'apt install' away. The Debian package is maintained as part of the Debian Games team, and the packaging rules are currently located under 'unfinished' on Salsa.
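
To give an idea of what the book's recipes look like, here is a minimal sketch using the mcpi Python library, which should work against both a Minecraft Pi style server and the Minetest module, assuming the server is listening on the default port on localhost:

#!/usr/bin/python3
# Connect to a Minecraft/Minetest server exposing the Raspberry Pi
# TCP/IP API, greet the player and place a stone block next to it.
from mcpi import block
from mcpi.minecraft import Minecraft

mc = Minecraft.create('localhost', 4711)  # default API host and port
mc.postToChat('Hello from Python!')
pos = mc.player.getTilePos()              # player position as x, y, z
mc.setBlock(pos.x + 1, pos.y, pos.z, block.STONE.id)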

You will most likely need to install several of the Minetest modules in Debian for the examples included with the library to work well, as several blocks used by the example scripts are provided via modules in Minetest. Without the required blocks, a simple stone block is used instead. My initial testing with an analog clock did not get gold arms as instructed in the python library, but instead used stone arms.

I tried to find a way to add the API to the desktop version of Minecraft, but was unable to find any working recipes. The recipes I found only work with a standalone Minecraft server setup. Are there any options to use with the normal desktop version?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
12th December 2018

A few hours ago, a new and improved version (2.4) of the VLC bittorrent plugin was uploaded to Debian. This new version includes a complete rewrite of the bittorrent related code, which seems to make the plugin non-blocking. This means you can actually exit VLC even when the plugin seems to be unable to get the bittorrent streaming started. The new version also includes support for filtering the playlist by file extension using command line options, if you want to avoid processing audio, video or images. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

9th December 2018

Yesterday, I had the pleasure of watching on Frikanalen the OWASP talk by Scott Helme titled "What We’ve Learned From Billions of Security Reports". I had not heard of the Content Security Policy standard nor its ability to "call home" when a browser detects a policy breach (I do not follow web page design development much these days), and found the talk very illuminating.

The mechanism allows a web site owner to use HTTP headers to tell visitors' web browsers which sources (internal and external) are allowed to be used on the web site. Thus it becomes possible to enforce an "only local content" policy despite web designers' urge to fetch programs from random sites on the Internet, like the one enabling the attack reported by Scott Helme earlier this year.

Using CSP seems like an obvious thing for a site admin to implement to take some control over the information leaks that occur when external sources are used to render web pages, so it is a mystery why more sites are not using it. It is being standardized under W3C these days, and is supported by most web browsers.

I managed to find a Django middleware for implementing CSP and was happy to discover it was already in Debian. I plan to use it to add CSP support to the Frikanalen web site soon.
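
For the record, here is a rough sketch of what I expect the setup to look like in the Django settings.py, based on the django-csp documentation (the middleware path and CSP_* setting names are from that project, so double check against the version in Debian):

# Excerpt from a Django settings.py enabling a strict CSP using the
# django-csp middleware.
MIDDLEWARE = [
    # ... the usual Django middleware ...
    'csp.middleware.CSPMiddleware',
]

# Only allow content from the site itself, and ask browsers to
# report any policy breach to the given URI.
CSP_DEFAULT_SRC = ("'self'",)
CSP_REPORT_URI = '/csp-report/'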

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard, web.
8th November 2018

If you read my blog regularly, you probably know I am involved in running and developing the Norwegian TV channel Frikanalen. It is an open channel, allowing everyone in Norway to publish videos on a TV channel with national coverage. You can think of it as Youtube for national television. In addition to distribution on RiksTV and Uninett, Frikanalen is also available as a Kodi addon. The last few days I have updated the code to add more features. A new and improved version 0.0.3 of the Frikanalen addon was just made available via the Kodi repositories. This new version includes an option to browse videos by category, as well as free text search in the video archive. It will now also show the video duration in the video lists, which was missing earlier. A new and experimental link to the HD video stream currently being worked on is provided, for those that want to see what the CasparCG output looks like. The alternative is the SD video stream, generated using MLT. CasparCG is controlled by our mltplayout server, which instead of talking to MLT gives PLAY instructions to the CasparCG server when it is time to start a new program.

By now, you are probably wondering what kind of content is being played on the channel. These days, it is filled with technical presentations like those from NUUG, Debconf, Makercon, and TED, but there are also some periods with EMPT TV and P7.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st November 2018

As part of my involvement in the Nikita archive API project, I've been importing a fairly large lump of emails into a test instance of the archive to see how well this would go. I picked a subset of my notmuch email database, all public emails sent to me via @lists.debian.org, giving me a set of around 216 000 emails to import. In the process, I had a look at the various attachments included in these emails, to figure out what to do with attachments, and noticed that one of the most common attachment formats does not have an official MIME type registered with IANA/IETF. The output from diff, ie the input for patch, is on the top 10 list of formats included in these emails. At the moment people seem to use either text/x-patch or text/x-diff, but neither is officially registered. It would be better if one official MIME type were registered and used everywhere.
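
A quick way to check what your own system believes is to ask Python's mimetypes module, which on Debian also consults /etc/mime.types. On my system both extensions come out as text/x-diff, but the result will vary with the local configuration:

#!/usr/bin/python3
# Print the MIME type the system guesses for common patch file names.
import mimetypes

for name in ('fix.patch', 'fix.diff'):
    print(name, mimetypes.guess_type(name)[0])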

To try to get one official MIME type for these files, I've brought up the topic on the media-types mailing list. If you are interested in discussing which MIME type to use as the official one for patch files, or are involved in making software using a MIME type for patches, perhaps you would like to join the discussion?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22nd October 2018

My current home stereo is a patchwork of various pieces I got on flea markets over the years. It is amazing what kind of equipment shows up there. I've been wondering for a while if it was possible to measure how well this equipment is working together, and decided to see how far I could get using free software. After trawling the web I came across an article from DIY Audio and Video on Speaker Testing and Analysis describing how to test speakers, listing several software options, among them AUDio MEasurement System (AUDMES). It is the only free software system I could find focusing on measuring speakers and audio frequency response. In the process I also found an interesting article from NOVO on Understanding Speaker Specifications and Frequency Response and an article from ecoustics on Understanding Speaker Frequency Response, with a lot of information on what to look for and how to interpret the graphs. Armed with this knowledge, I set out to measure the state of my speakers.

The first hurdle was that AUDMES hadn't seen a commit for 10 years and did not build with current compilers and libraries. I got in touch with its author, who no longer was spending time on the program but gave me write access to the subversion repository on Sourceforge. The end result is that now the code builds on Linux and is capable of saving and loading the collected frequency response data in CSV format. The application is quite nice and flexible, and I was able to select the input and output audio interfaces independently. This made it possible to use a USB mixer as the input source, while sending output via my laptop headphone connection. I lacked the hardware and cabling to figure out a different way to get independent cabling to speakers and microphone.
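
Having the measurements as CSV also makes it easy to plot them outside the application. Here is a minimal Python sketch using matplotlib, assuming a file response.csv with frequency and level columns (the exact column layout of the AUDMES output may differ, so adjust accordingly):

#!/usr/bin/python3
# Plot a frequency response curve from a CSV file with two columns,
# frequency in Hz and level in dB.
import csv

import matplotlib.pyplot as plt

freqs, levels = [], []
with open('response.csv') as f:
    for row in csv.reader(f):
        freqs.append(float(row[0]))
        levels.append(float(row[1]))

plt.semilogx(freqs, levels)  # frequency is best viewed on a log axis
plt.xlabel('Frequency (Hz)')
plt.ylabel('Level (dB)')
plt.title('Speaker frequency response')
plt.show()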

Using this setup I could see how a large range of high frequencies apparently was not making it out of my speakers. The picture shows the frequency response measurement of one of the speakers. Note the frequency lines seem to be slightly misaligned compared to the CSV output from the program. According to measurements from Free Hearing Test Software, a freeware system to measure your hearing (I am still looking for a free software alternative), I can not hear several of these high frequencies myself, so I do not know if they are coming out of the speakers. I thus do not quite know how to figure out if the missing frequencies are a problem with the microphone, the amplifier or the speakers, but I managed to rule out the audio card in my PC by measuring my Bose noise canceling headset using its own microphone. This setup was able to see the high frequency tones, so the problem with my stereo had to be in the amplifier or speakers.

Anyway, to try to rule out one factor I ended up picking up a new set of speakers at a flea market, and these work a lot better than the old speakers, so I guess the microphone and amplifier are OK. If you need to measure your own speakers, check out AUDMES. If more people get involved, perhaps the project could become good enough to include in Debian? And if you know of some other free software to measure speaker and amplifier performance, please let me know. I am aware of the freeware option REW, but I want something that can be developed even after the vendor loses interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

21st October 2018

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK, via Linux distributors like Debian and Ubuntu, to of course the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

Since the plugin was made available for the first time in Debian, several improvements have been made to it. In version 2.2-4, now available in both testing and unstable, a desktop file is provided to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least the browsers Firefox and Chromium will suggest starting VLC when selecting a magnet URI on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
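
The relevant part of such a desktop file might look like this sketch (the exact entry shipped with the package may differ):

[Desktop Entry]
Type=Application
Name=VLC media player
Exec=vlc %U
MimeType=application/x-bittorrent;x-scheme-handler/magnet;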

Note, there are still some misfeatures in the plugin. One is the fact that it will hang and block VLC from exiting until the torrent streaming starts. Another is the fact that it will pick and play a random file in a multi-file torrent, which is not always the video file you want. Combined with the first, it can be a bit hard to get the video streaming going. But when it works, it seems to do a good job.

For the Debian packaging, I would love to find a good way to test if the plugin works with VLC using autopkgtest. I tried, but do not know enough of the inner workings of VLC to get it working. For now the autopkgtest script only checks if the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18th October 2018

This morning, the new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.2 since version 0.1.1 (from NEWS.md):

  • Fix typos in REL names
  • Tidy up error message reporting
  • Fix issue where we used Integer.valueOf(), not Integer.getInteger()
  • Change some String handling to StringBuffer
  • Fix error reporting
  • Code tidy-up
  • Fix issue using static non-synchronized SimpleDateFormat to avoid race conditions
  • Fix problem where deserialisers were treating integers as strings
  • Update methods to make them null-safe
  • Fix many issues reported by coverity
  • Improve equals(), compareTo() and hash() in domain model
  • Improvements to the domain model for metadata classes
  • Fix CORS issues when downloading document
  • Implementation of case-handling with registryEntry and document upload
  • Better support in Javascript for OPTIONS
  • Adding concept description of mail integration
  • Improve setting of default values for GET on ny-journalpost
  • Better handling of required values during deserialisation
  • Changed tilknyttetDato (M620) from date to dateTime
  • Corrected some opprettetDato (M600) (de)serialisation errors.
  • Improve parse error reporting.
  • Started on OData search and filtering.
  • Added Contributor Covenant Code of Conduct to project.
  • Moved repository and project from Github to Gitlab.
  • Restructured repository, moved code into src/ and web/.
  • Updated code to use Spring Boot version 2.
  • Added support for OAuth2 authentication.
  • Fixed several bugs discovered by Coverity.
  • Corrected handling of date/datetime fields.
  • Improved error reporting when rejecting during deserialization.
  • Adjusted default values provided for ny-arkivdel, ny-mappe, ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.
  • Several fixes for korrespondansepart*.
  • Updated web GUI:
    • Now handle both file upload and download.
    • Uses new OAuth2 authentication for login.
    • Forms now fetches default values from API using GET.
    • Added RFC 822 (email), TIFF and JPEG to list of possible file formats.

The changes and improvements are extensive. Running diffstat on the changes between git tag 0.1.1 and 0.2 shows 1098 files changed, 108666 insertions(+), 54066 deletions(-).

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

8th October 2018

I have earlier covered the basics of trusted timestamping using the 'openssl ts' client. See the blog posts from 2014, 2016 and 2017 for those stories. But sometimes I want to integrate the timestamping in other code, and recently I needed to integrate it into Python. After searching a bit, I found the rfc3161 library, which seemed like a good fit, but I soon discovered it only worked with python version 2, and I needed something that works with python version 3. Luckily I next came across the rfc3161ng library, a fork of the original rfc3161 library. Not only does it work with python 3, it has also fixed a few of the bugs in the original library, and it has an active maintainer. I decided to wrap it up and make it available in Debian, and a few days ago it entered Debian unstable and testing.

Using the library is fairly straightforward. The only slightly problematic step is to fetch the required certificates to verify the timestamp. For some services it is straightforward, while for others I have not yet figured out how to do it. Here is a small standalone code example based on one of the integration tests in the library code:

#!/usr/bin/python3

"""

Python 3 script demonstrating how to use the rfc3161ng module to
get trusted timestamps.

The license of this code is the same as the license of the rfc3161ng
library, ie MIT/BSD.

"""

import os
import pyasn1.codec.der.encoder
import rfc3161ng
import subprocess
import tempfile
import urllib.request

def store(f, data):
    f.write(data)
    f.flush()
    f.seek(0)

def fetch(url, f=None):
    response = urllib.request.urlopen(url)
    data = response.read()
    if f:
        store(f, data)
    return data

def main():
    with tempfile.NamedTemporaryFile() as cert_f,\
         tempfile.NamedTemporaryFile() as ca_f,\
         tempfile.NamedTemporaryFile() as msg_f,\
         tempfile.NamedTemporaryFile() as tsr_f:

        # First fetch certificates used by service
        certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
        ca_data = fetch('https://freetsa.org/files/cacert.pem', ca_f)

        # Then timestamp the message
        timestamper = \
            rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
                                        certificate=certificate_data)
        data = b"Python forever!\n"
        tsr = timestamper(data=data, return_tsr=True)

        # Finally, convert message and response to something 'openssl ts' can verify
        store(msg_f, data)
        store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
        args = ["openssl", "ts", "-verify",
                "-data", msg_f.name,
	        "-in", tsr_f.name,
		"-CAfile", ca_f.name,
                "-untrusted", cert_f.name]
        subprocess.check_call(args)

if '__main__' == __name__:
    main()

The code fetches the required certificates, stores them as temporary files, timestamps a simple message, stores the message and timestamp to disk and asks 'openssl ts' to verify the timestamp. A timestamp is around 1.5 kiB in size, and should be fairly easy to store for future use.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th October 2018

A few days ago, I rescued a Windows victim over to Debian. To try to rescue the remains, I helped set up automatic sync with Google Drive. I did not find any sensible Debian package handling this automatically, so I rebuilt the grive2 source from the Ubuntu UPD8 PPA to do the task, and added an autostart desktop entry and a small shell script to run in the background while the user is logged in to do the sync. Here is a sketch of the setup for future reference.

I first created ~/googledrive, entered the directory and ran 'grive -a' to authenticate the machine/user. Next, I created an autostart hook in ~/.config/autostart/grive.desktop to start the sync when the user logs in:

[Desktop Entry]
Name=Google drive autosync
Type=Application
Exec=/home/user/bin/grive-sync

Finally, I wrote the ~/bin/grive-sync script to sync ~/googledrive/ with the files in Google Drive.

#!/bin/sh
set -e
cd ~/
cleanup() {
    if [ "$syncpid" ] ; then
        kill $syncpid
    fi
}
trap cleanup EXIT INT QUIT
/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
syncpid=$!
while true; do
    if ! xhost >/dev/null 2>&1 ; then
        echo "no DISPLAY, exiting as the user probably logged out"
        exit 1
    fi
    if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
        /usr/lib/grive/grive-sync.sh sync googledrive
    fi
    sleep 300
done 2>&1 | sed "s%^%$0:%"

Feel free to use the setup if you want. It can be assumed to be GNU GPL v2 licensed (or any later version, at your leisure), but I doubt it is possible to claim copyright on this code.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
29th September 2018

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason why, a few months ago, I started writing a python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated to small currency shop.

The library uses the tornado python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod
class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
        pass
    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )
    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)
    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
            pass
        for stream in self.streams:
            stream.close()

The library client loops over all known "public" services, initialises each of them, subscribes to any updates from the service, checks for and activates websocket streaming if the service provides it, and if no streaming is supported, fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:

Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.560 6593.690
Hitbtc          BTC-USD: 6594.560 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.570 6593.690
Bitstamp        EUR-USD:    1.159    1.154
Hitbtc          BTC-USD: 6594.570 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Paymium         BTC-EUR: 5680.000 5620.240

The exchange order book is tracked in addition to the best buy/sell price, for those that need to know the details.

The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and is activated by using the '-c' argument. Without the argument the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

           Name Pair   Bid         Ask         Spr    Ftcd    Age
 BitcoinsNorway BTCEUR   5591.8400   5711.0800   2.1%   16    nan     60
       Bitfinex BTCEUR   5671.0000   5671.2000   0.0%   16     22     59
        Bitmynt BTCEUR   5580.8000   5807.5200   3.9%   16     41     60
         Bitpay BTCEUR   5663.2700         nan   nan%   15    nan     60
       Bitstamp BTCEUR   5664.8400   5676.5300   0.2%    0      1      1
           Bl3p BTCEUR   5653.6900   5684.9400   0.5%    0    nan     19
       Coinbase BTCEUR   5600.8200   5714.9000   2.0%   15    nan    nan
         Kraken BTCEUR   5670.1000   5670.2000   0.0%   14     17     60
        Paymium BTCEUR   5620.0600   5680.0000   1.1%    1   7515    nan
 BitcoinsNorway BTCNOK  52898.9700  54034.6100   2.1%   16    nan     60
        Bitmynt BTCNOK  52960.3200  54031.1900   2.0%   16     41     60
         Bitpay BTCNOK  53477.7833         nan   nan%   16    nan     60
       Coinbase BTCNOK  52990.3500  54063.0600   2.0%   15    nan    nan
        MiraiEx BTCNOK  52856.5300  54100.6000   2.3%   16    nan    nan
 BitcoinsNorway BTCUSD   6495.5300   6631.5400   2.1%   16    nan     60
       Bitfinex BTCUSD   6590.6000   6590.7000   0.0%   16     23     57
         Bitpay BTCUSD   6564.1300         nan   nan%   15    nan     60
       Bitstamp BTCUSD   6561.1400   6565.6200   0.1%    0      2      1
       Coinbase BTCUSD   6504.0600   6635.9700   2.0%   14    nan    117
         Gemini BTCUSD   6567.1300   6573.0700   0.1%   16     89    nan
         Hitbtc+BTCUSD   6592.6200   6594.2100   0.0%    0      0      0
         Kraken BTCUSD   6565.2000   6570.9000   0.1%   15     17     58
  Exchangerates EURNOK      9.4665      9.4665   0.0%   16 107789    nan
     Norgesbank EURNOK      9.4665      9.4665   0.0%   16 107789    nan
       Bitstamp EURUSD      1.1537      1.1593   0.5%    4      5      1
  Exchangerates EURUSD      1.1576      1.1576   0.0%   16 107789    nan
 BitcoinsNorway LTCEUR      1.0000     49.0000  98.0%   16    nan    nan
 BitcoinsNorway LTCNOK    492.4800    503.7500   2.2%   16    nan     60
 BitcoinsNorway LTCUSD      1.0221     49.0000  97.9%   15    nan    nan
     Norgesbank USDNOK      8.1777      8.1777   0.0%   16 107789    nan

The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can tell you is how the last three numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.

If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
24th September 2018

Back in February, I got curious to see if VLC now supported Bittorrent streaming. It did not, despite the fact that the idea and code to handle such streaming had been floating around for years. I did however find a standalone plugin for VLC to do it, and half a year later I decided to wrap up the plugin and get it into Debian. I uploaded it to NEW a few days ago, and am very happy to report that it entered Debian a few hours ago, and should be available in Debian/Unstable tomorrow, and Debian/Testing in a few days.

With the vlc-plugin-bittorrent package installed you should be able to stream videos using a simple call to

vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent

It can handle magnet links too. Now if only native vlc had bittorrent support. Then a lot more people would be helping each other to share public domain and creative commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin is not removing downloaded files when vlc is stopped, so it can fill up your disk if you are not careful. Have fun. :)

I would love to get help maintaining this package. Get in touch if you are interested.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd September 2018

I continue to explore my Kodi installation, and today I wanted to tell it to play a youtube URL I received in a chat, without having to insert search terms using the on-screen keyboard. After searching the web for API access to the Youtube plugin and testing a bit, I managed to find a recipe that worked. If you have a Kodi instance with its API available from http://kodihost/jsonrpc, you can try the following to check out a nice cover band.

curl --silent --header 'Content-Type: application/json' \
  --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
  "params": {"item": { "file":
  "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
  http://projector.local/jsonrpc

I've extended the kodi-stream program to take a video source as its first argument. It can now handle direct video links, youtube links and 'desktop' to stream my desktop to Kodi. It is almost like a Chromecast. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
30th August 2018

It might seem obvious that software created using tax money should be available for everyone to use and improve. Free Software Foundation Europe recently started a campaign to help get more people to understand this, and I just signed the petition on Public Money, Public Code to help them. I hope you too will do the same.

13th August 2018

A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you. One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provides cloudless collection and storing of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units are reportedly encrypting the collected information with their own public key, making sure only the vendor cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using the Amazfit Bip and the Xiaomi Band 3.

I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.

A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.

While I was working on tracking down links, I came across an inspiring TED talk by Dave deBronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
7th August 2018

Dear lazyweb,

I wonder, is there a fitness tracker / health monitor available for sale today that respects the user's privacy? With this I mean a watch/bracelet capable of measuring pulse rate and other fitness/health related values (and by all means, also the correct time and location if possible), where the measurements are only provided for me to extract/read from the unit with a computer, without a radio beacon and Internet connection. In other words, it should not depend on a cell phone app, nor make the measurements available via other people's computers (aka "the cloud"). The collected data should be available using only free software. I'm not interested in depending on some non-free software that will leave me high and dry some time in the future. I've been unable to find any such unit. I would like to buy it. The ones I have seen for sale here in Norway are proud to report that they share my health data with strangers (aka "cloud enabled"). Is there an alternative? I'm not interested in giving money to people requiring me to accept "privacy terms" to allow myself to measure my own health.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
31st July 2018

For a while now, I have looked for a sensible way to share images with my family using a self hosted solution, as it is unacceptable to place images from my personal life under the control of strangers working for data hoarders like Google or Dropbox. The last few days I have drafted an approach that might work out, and I would like to share it with you. I would like to publish images on a server under my control, and point some Internet connected display units, using some free and open standard, to the images I published. As my primary language is not limited to ASCII, I need to store metadata using UTF-8. Many years ago, I hoped to find a digital photo frame capable of reading an RSS feed with image references (aka using the <enclosure> RSS tag), but was unable to find a current supplier of such frames. In the end I gave up that approach.

Some months ago, I discovered that XScreensaver is able to read images from an RSS feed, and used it to set up a screen saver on my home info screen, showing images from the Daily images feed from NASA. This proved to work well. More recently I discovered that Kodi (both using OpenELEC and LibreELEC) provides the Feedreader screen saver capable of reading an RSS feed with images and news. For fun, I used it this summer to test Kodi on my parents' TV by hooking up a Raspberry Pi unit with LibreELEC, and wanted to provide them with a screen saver showing selected pictures from my collection.

Armed with motivation and a test photo frame, I set out to generate an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created /var/www/html/privatepictures/, and wrote a small Perl script to extract title and description metadata from the photo files and generate the RSS file. I ended up using Perl instead of python, as the libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP tags I ended up using, while python3-exif did not. The relevant EXIF tags only support ASCII, so I had to find better alternatives. XMP seems to have the support I need.

I am a bit unsure which EXIF/XMP tags to use, as I would like to use tags that can be easily added/updated using normal free software photo managing software. I ended up using the tags set using this exiftool command, as these tags can also be set using digiKam:

exiftool -headline='The RSS image title' \
  -description='The RSS image description.' \
  -subject+=for-family photo.jpeg

I initially tried the "-title" and "keyword" tags, but they were invisible in digiKam, so I changed to "-headline" and "-subject". I use the keyword/subject 'for-family' to flag that the photo should be shared with my family. Images with this keyword set are located and copied into my Freedombox for the RSS generating script to find.
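
The Perl script is not much more than a loop over the image files. A rough Python equivalent, to show the idea, could use the JSON output mode of exiftool to read the XMP tags and emit one RSS item with an <enclosure> tag per image. The base URL and file locations below are placeholders:

#!/usr/bin/python3
# Sketch of an RSS generator for a photo frame feed.  Reads the XMP
# headline and description from each JPEG using exiftool and writes
# an RSS item with an <enclosure> tag per image.
import glob
import json
import os
import subprocess

baseurl = 'https://example.com/privatepictures'

items = []
for photo in sorted(glob.glob('*.jpeg')):
    meta = json.loads(subprocess.check_output(
        ['exiftool', '-j', '-Headline', '-Description', photo]))[0]
    items.append("""<item>
<title>%s</title>
<description>%s</description>
<enclosure url="%s/%s" type="image/jpeg" length="%d"/>
</item>""" % (meta.get('Headline', 'untitled'),
              meta.get('Description', ''),
              baseurl, photo, os.path.getsize(photo)))

print("""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel>
<title>Family pictures</title>
<link>%s</link>
<description>Selected family pictures</description>
%s
</channel></rss>""" % (baseurl, '\n'.join(items)))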

Are there better ways to do this? Get in touch if you have better suggestions.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
12th July 2018

Last night, I wrote a recipe to stream a Linux desktop using VLC to an instance of Kodi. During the day I received valuable feedback, and thanks to the suggestions I have been able to rewrite the recipe into a much simpler approach requiring no setup at all. It is a single script that takes care of it all.

This new script uses GStreamer instead of VLC to capture the desktop and stream it to Kodi. This fixed the video quality issue I saw initially. It further removes the need to add an m3u file on the Kodi machine, as it instead connects to the JSON-RPC API in Kodi and simply asks Kodi to play from the stream created using GStreamer. Streaming the desktop to Kodi is now trivial. Copy the script below, run it with the DNS name or IP address of the kodi server to stream to as the only argument, and watch your screen show up on the Kodi screen. Note, it depends on multicast on the local network, so if you need to stream outside the local network, the script must be modified. Also note, I have no idea if audio works, as I only care about the picture part.

#!/bin/sh
#
# Stream the Linux desktop view to Kodi.  See
# https://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
# for background information.

# Make sure the stream is stopped in Kodi and the gstreamer process is
# killed if something goes wrong (for example if curl is unable to find the
# kodi server).  Do the same when interrupting this script.
kodicmd() {
    host="$1"
    cmd="$2"
    params="$3"
    curl --silent --header 'Content-Type: application/json' \
	 --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
	 "http://$host/jsonrpc"
}
cleanup() {
    if [ -n "$kodihost" ] ; then
	# Stop the playing when we end
	playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
			    jq .result[].playerid)
	kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
    fi
    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
	kill "$gstpid"
    fi
}
trap cleanup EXIT INT

if [ -n "$1" ]; then
    kodihost=$1
    shift
else
    kodihost=kodi.local
fi

mcast=239.255.0.1
mcastport=1234
mcastttl=1

pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
  cut -d" " -f2|head -1)
gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
  > /dev/null 2>&1 &
gstpid=$!

# Give stream a second to get going
sleep 1

# Ask kodi to start streaming using its JSON-RPC API
kodicmd "$kodihost" Player.Open \
	"{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null

# wait for gst to end
wait "$gstpid"

I hope you find the approach useful. I know I do.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
12th July 2018

PS: See the followup post for an even better approach.

A while back, I was asked by a friend how to stream the desktop to my projector connected to Kodi. I sadly had to admit that I had no idea, as it was a task I had never tried. Since then, I have been looking for a way to do so, preferably without much extra software to install on either side. Today I found a way that seems to kind of work. Not great, but it is a start.

I had a look at several approaches, for example using uPnP DLNA as described in 2011, but it required a uPnP server, fuse and enough local storage to store the stream locally. This is not going to work well for me, lacking enough free space, and it would be impossible for my friend to get working.

Next, it occurred to me that perhaps I could use VLC to create a video stream that Kodi could play. Preferably using broadcast/multicast, to avoid having to change any setup on the Kodi side when starting such a stream. Unfortunately, the only recipe I could find using multicast used the rtp protocol, and this protocol seems not to be supported by Kodi.

On the other hand, the rtsp protocol is working! Unfortunately I have to specify the IP address of the streaming machine in both the sending command and the file on the Kodi server. But it is showing my desktop, and thus allows us to have a shared look on the big screen at the programs I work on.

I did not spend much time investigating codecs. I combined the rtp and rtsp recipes from the VLC Streaming HowTo/Command Line Examples, and was able to get this working on the desktop/streaming end.

vlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{dst=projector.local,port=1234,sdp=rtsp://192.168.11.4:8080/test.sdp}'

I ssh-ed into my Kodi box and created a file like this with the same IP address:

echo rtsp://192.168.11.4:8080/test.sdp \
  > /storage/videos/screenstream.m3u

Note the 192.168.11.4 IP address is my desktop's IP address. As far as I can tell the IP must be hardcoded for this to work. In other words, if someone else's machine is going to do the streaming, you have to update screenstream.m3u on the Kodi machine and adjust the vlc recipe. To get started, locate the file in Kodi and select the m3u file while the VLC stream is running. The desktop then shows up on my big screen. :)

When using the same technique to stream a video file with audio, the audio quality is really bad. No idea if the problem is packet loss or bad parameters for the transcode. I do not know VLC nor Kodi well enough to tell.

Update 2018-07-12: Johannes Schauer sent me a few suggestions and reminded me about an important step. The "screen:" input source is only available once the vlc-plugin-access-extra package is installed on Debian. Without it, you will see this error message: "VLC is unable to open the MRL 'screen://'. Check the log for details." He further found that it is possible to drop some parts of the VLC command line to reduce the amount of hardcoded information. It is also useful to consider using cvlc to avoid having the VLC window in the desktop view. In sum, this gives us this command line on the source end

cvlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{sdp=rtsp://:8080/}'

and this on the Kodi end

echo rtsp://192.168.11.4:8080/ \
  > /storage/videos/screenstream.m3u

Still bad image quality, though. But I did discover that streaming a DVD using dvdsimple:///dev/dvd as the source had excellent video and audio quality, so I guess the issue is in the input or transcoding parts, not the rtsp part. I've tried to change the vb and ab parameters to use more bandwidth, but it did not make a difference.

I further received a suggestion from Einar Haraldseid to try using gstreamer instead of VLC, and this proved to work great! He also provided me with the trick to get Kodi to use a multicast stream as its source. By using this monstrous oneliner, I can stream my desktop with good video quality in reasonable framerate to the 239.255.0.1 multicast address on port 1234:

gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=239.255.0.1 port=1234 ttl-mc=1 auto-multicast=1 sync=0 \
  pulsesrc device=$(pactl list | grep -A2 'Source #' | \
    grep 'Name: .*\.monitor$' |  cut -d" " -f2|head -1) ! \
  audioconvert ! queue2 ! avenc_aac ! queue2 ! mux.

and this on the Kodi end

echo udp://@239.255.0.1:1234 \
  > /storage/videos/screenstream.m3u

Note the trick to pick a valid pulseaudio source. It might not pick the one you need. This approach will of course lead to trouble if more than one source uses the same multicast port and address. Note the ttl-mc=1 setting, which limits the multicast packets to the local network. If the value is increased, your screen will be broadcast further, one network "hop" for each increase (read up on multicast to learn more). :)

Having cracked how to get Kodi to receive multicast streams, I could use this VLC command to stream to the same multicast address. The image quality is way better than with the rtsp approach, but gstreamer seems to be doing a better job.

cvlc screen:// --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{mux=ts,dst=239.255.0.1,port=1234,sdp=sap}'

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
9th July 2018

Five years ago, I measured what the most supported MIME type in Debian was, by analysing the desktop files in all packages in the archive. Since then, the DEP-11 AppStream system has been put into production, making the task a lot easier. This made me want to repeat the measurement, to see how much things changed. Here are the new numbers, for unstable only this time:

Debian Unstable:

  count MIME type
  ----- -----------------------
     56 image/jpeg
     55 image/png
     49 image/tiff
     48 image/gif
     39 image/bmp
     38 text/plain
     37 audio/mpeg
     34 application/ogg
     33 audio/x-flac
     32 audio/x-mp3
     30 audio/x-wav
     30 audio/x-vorbis+ogg
     29 image/x-portable-pixmap
     27 inode/directory
     27 image/x-portable-bitmap
     27 audio/x-mpeg
     26 application/x-ogg
     25 audio/x-mpegurl
     25 audio/ogg
     24 text/html

The list was created like this using a sid chroot: "cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz| zcat | awk '/^ - \S+\/\S+$/ {print $2 }' | sort | uniq -c | sort -nr | head -20"

It is interesting to see how image formats have passed text/plain as the most announced supported MIME type. These days, thanks to the AppStream system, if you run into a file format you do not know and want to figure out which packages support the format, you can find the MIME type of the file using "file --mime <filename>", and then look up all packages announcing support for this format in their AppStream metadata (XML or .desktop file) using "appstreamcli what-provides mimetype <mime-type>". For example if you, like me, want to know which packages support inode/directory, you can get a list like this:

% appstreamcli what-provides mimetype inode/directory | grep Package: | sort
Package: anjuta
Package: audacious
Package: baobab
Package: cervisia
Package: chirp
Package: dolphin
Package: doublecmd-common
Package: easytag
Package: enlightenment
Package: ephoto
Package: filelight
Package: gwenview
Package: k4dirstat
Package: kaffeine
Package: kdesvn
Package: kid3
Package: kid3-qt
Package: nautilus
Package: nemo
Package: pcmanfm
Package: pcmanfm-qt
Package: qweborf
Package: ranger
Package: sirikali
Package: spacefm
Package: spacefm
Package: vifm
%

Using the same method, I can quickly discover that the Sketchup file format is not yet supported by any package in Debian:

% appstreamcli what-provides mimetype  application/vnd.sketchup.skp
Could not find component providing 'mimetype::application/vnd.sketchup.skp'.
%

Yesterday I used it to figure out which packages support the STL 3D format:

% appstreamcli what-provides mimetype  application/sla|grep Package
Package: cura
Package: meshlab
Package: printrun
%

PS: A new version of Cura was uploaded to Debian yesterday.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

8th July 2018

Quite regularly, I let my Debian Sid/Unstable chroot stay untouched for a while, and when I need to update it there is not enough free space on the disk for apt to do a normal 'apt upgrade'. I normally resolve the issue by doing 'apt install <somepackages>' to upgrade only some of the packages in one batch, until the number of packages to download falls below the amount of free space available. Today, I had about 500 packages to upgrade, and after a while I got tired of trying to install chunks of packages manually. I concluded that I did not have the spare hours required to complete the task, and decided to see if I could automate it. I came up with this small script, which I call 'apt-in-chunks':

#!/bin/sh
#
# Upgrade packages when the disk is too full to upgrade every
# upgradable package in one lump.  Fetch the packages to upgrade using
# apt, and then install them using dpkg, to avoid changing the package
# flag for manual/automatic.

set -e

# Filter out packages matching the given pattern, if any was given.
ignore() {
    if [ "$1" ]; then
	grep -v "$1"
    else
	cat
    fi
}

for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do
    echo "Upgrading $p"
    apt clean
    apt install --download-only -y "$p"
    for f in /var/cache/apt/archives/*.deb; do
	if [ -e "$f" ]; then
	    dpkg -i /var/cache/apt/archives/*.deb
	    break
	fi
    done
done

The script extracts the list of packages to upgrade, tries to download the packages needed to upgrade one package at a time, and installs the downloaded packages using dpkg. The idea is to upgrade packages without changing the APT mark for the package (ie the one recording whether the package was manually requested or pulled in as a dependency). To use it, simply run it as root from the command line. If it fails, try 'apt install -f' to clean up the mess and run the script again. This might happen if the new packages conflict with one of the old packages that dpkg is unable to remove, while apt can.

It takes one option, a package to ignore in the list of packages to upgrade. The option is there to be able to skip packages that are simply too large to unpack. Today this was 'ghc', but I have run into other large packages causing similar problems earlier (like TeX).

Update 2018-07-08: Thanks to Paul Wise, I am aware of two alternative ways to handle this. The "unattended-upgrades --minimal-upgrade-steps" option will try to calculate upgrade sets for each package to upgrade, and then upgrade them in order, smallest set first. It might be a better option than the script above. Also, "aptitude upgrade" can upgrade single packages, thus avoiding the need for "dpkg -i" in the script above.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
30th June 2018

So far, at least hydro-electric power, coal power, wind power, solar power, and wood power are well known. Until a few days ago, I had never heard of stone power. Then I learned about a quarry in a mountain in Bremanger in Norway, where the Bremanger Quarry company is extracting stone and dumping it into a shaft leading down to its shipping harbour. The downward movement in this shaft is used to produce electricity. In short, it is using falling rocks instead of falling water to produce electricity, and according to its own statements it is producing more power than it is using, selling the surplus electricity to the Norwegian power grid. I find the concept truly amazing. Is this the world's only stone power plant?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
26th June 2018

My movie playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an Infocus IN76 video projector. My projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor of my projector, InFocus, had been sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector is controlled by Kodi, for example such that when the screen saver goes on, the projector is turned off, and when the screen saver exits, the projector is turned on again.

A few days ago, with very good help from parts of my family, I managed to find a Kodi add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an add-on with support for several projectors. To my pleasure, he was positive to the idea, and we set out to add InFocus support to his add-on and make it suitable for the official Kodi add-on repository.

The add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the github repository is embedding the pyserial module in the add-on. The long term solution is to make a "script" type pyserial module for Kodi that can be pulled in as a dependency. But until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts, off when Kodi stops, as well as to turn the projector off when the screensaver starts and back on when the screensaver stops. It can also be told to set the projector source when turning on the projector.

If this sounds interesting to you, check out the project github repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation from any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the brightness level of the projector from within Kodi. We also need to figure out how to handle the cooling period of the projector. My projector refuses to turn on for 60 seconds after it was turned off. This is not handled well by the add-on at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22nd March 2018

The leaders of the world have started to congratulate the re-elected Russian head of state, and this causes some criticism. I am, though, a little fascinated by a comment from US senator John McCain, cited by The Hill and others:

"An American president does not lead the Free World by congratulating dictators on winning sham elections."

While I totally agree with the senator here, the way the quote is phrased makes me suspect that he is unaware of the simple fact that the USA has not led the Free World since at least before its government kidnapped a completely innocent Canadian citizen in transit on his way home to Canada via John F. Kennedy International Airport in September 2002 and sent him to be tortured in Syria for a year.

The USA might be running ahead, but the path it is taking is not the one taken by any Free World.

Tags: english.
21st March 2018

So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices handing information about the visitors of their web pages over to Facebook? No-one who has looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out their users' privacy, but a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, to take back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend watching the video recording of the one hour talk Paul-Olivier Dehaye gave to NUUG last April about data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

13th March 2018

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and stories from people around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate the book into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof readings remain. We will also need to translate the 14 figures and create a book cover. Once that is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for the translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
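
For those curious about the mechanics, the chain can be sketched roughly like this in Python. The file names and the po4a steps are my assumptions for illustration; the actual conversion scripts in the book project are more involved:

#!/usr/bin/python3
# Rough sketch of the manuscript conversion chain described above.
# The file names and the po4a based POT/PO handling are assumptions
# for illustration; the real build uses custom scripts between the
# steps.
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

run('pandoc', '-f', 'odt', '-t', 'markdown', '-o', 'book.md', 'book.odt')
# (custom script modifying the Markdown goes here)
run('pandoc', '-f', 'markdown', '-t', 'docbook', '-s', '-o', 'book.xml', 'book.md')
# (custom script modifying the DocBook goes here)
run('po4a-gettextize', '-f', 'docbook', '-m', 'book.xml', '-p', 'book.pot')
run('po4a-translate', '-f', 'docbook', '-m', 'book.xml', '-p', 'nb.po',
    '-l', 'book.nb.xml')
run('dblatex', 'book.nb.xml')  # produces book.nb.pdf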

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd March 2018

Today I was pleasantly surprised to discover that my operating system of choice, Debian, is used on the info screens at the subway stations. While passing Nydalen subway station in Oslo, Norway, I discovered an info screen booting with some text scrolling by. I was not quick enough with my camera to record a video of the scrolling boot screen, but I did get a photo of the point where the boot got stuck with a corrupt file system:

[photo of subway info screen]

While I am happy to see Debian used in more places, some details of the content on the screen worry me.

The image shows the version booting is 'Debian GNU/Linux lenny/sid', indicating that this is based on code taken from Debian Unstable/Sid after Debian Etch (version 4) was released 2007-04-08 and before Debian Lenny (version 5) was released 2009-02-14. Since Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy) 2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15, according to the Debian version history on Wikipedia. This means the system is running around 10 year old code, with no security fixes from the vendor for many years.

This is not the first time I have discovered the Oslo subway company, Ruter, running outdated software. In 2012, I discovered the ticket vending machines were running Windows 2000, and this was still the case in 2016. Given the response from the responsible people in 2016, I would assume the machines are still running unpatched Windows 2000. Thus, an unpatched Debian setup comes as no surprise.

The photo is made available under the license terms Creative Commons 4.0 Attribution International (CC BY 4.0).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, ruter.
18th February 2018

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and have mostly retired from it, but found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th February 2018

A few days ago, a new major version of VLC was announced, and I decided to check out if it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. The release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line instead. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13th February 2018

A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the last update will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th February 2018

We write 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at restaurants. Recently I visited a Peppes Pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.

I would say this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
7th January 2018

I've continued to track down the list of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from the USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page of Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.

The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies they listed.

Anyway, this is the summary of the 28 collected data sources so far:

 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (   87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (    2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (   33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (    7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID

I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of increasing the risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
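
To illustrate the title-and-year matching idea, here is a minimal Python sketch using the IMDbPY library. It is not the code used by the mklist-* scripts, and the example title is arbitrary, but it shows the "accept only a single hit with matching year" rule:

#!/usr/bin/python3
# Minimal sketch of title-and-year matching against IMDB, using the
# IMDbPY library.  This illustrates the approach; it is not the
# actual code in the mklist-* scripts, and the example title is
# arbitrary.
from imdb import IMDb

def find_title_id(title, year):
    ia = IMDb()
    hits = [m for m in ia.search_movie(title) if m.get('year') == year]
    # Only trust the result if exactly one search hit has the
    # expected year.
    if 1 == len(hits):
        return 'tt%s' % hits[0].movieID
    return None

print(find_title_id('Night of the Living Dead', 1968))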

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

17th December 2017

After several months of work and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian now has three 3D printer slicers at your service: Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3, and then update the entire set of packages to version 3.1.0, which showed up in the last few days.

13th December 2017

While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals is so small per year that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with the movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range of 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry looks like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as the registration and renewal dates are easy enough to locate with a program (split on the first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
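
A minimal Python sketch of such a parser, using the entry above as test data, could look like this. The assumption that the first date is the registration and the second the renewal is mine:

#!/usr/bin/python3
# Sketch of a parser for the copyright renewal entries.  The
# assumption that the first date is the registration and the second
# the renewal is mine, based on the entry format described above.
import re

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

def parse_renewal(text):
    title = text.split(',', 1)[0]
    # Dates use the DDmmmYY format, like 17Aug27.
    dates = re.findall(r'\b\d{1,2}[A-Z][a-z]{2}\d{2}\b', text)
    return {
        'title': title,
        'registered': dates[0] if dates else None,
        'renewed': dates[1] if len(dates) > 1 else None,
    }

print(parse_renewal(entry))
# {'title': 'ADAM AND EVIL', 'registered': '17Aug27', 'renewed': '10Jun55'}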

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.

I suspect the best way to do this would be to make a specialised web service to make it easy for contributors to transcribe entries and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search in IMDB conducted for the user to pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two persons transcribe the same entries, to be able to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowd sourcing the task?

Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only have copies of renewals for written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their year of death.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

5th December 2017

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and presumably registered its copyright that year. At that time, right holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet someone else claims the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in the Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28th November 2017

It would be easier to locate the movie you want to watch in the Internet Archive if the metadata about each movie were more complete and accurate. In the archiving community, a well known saying states that good metadata is a love letter to the future. The metadata in the Internet Archive could use a face lift for the future to love us back. Here is a proposal for a small improvement that would make the metadata more useful today. I've been unable to find any document describing the various standard fields available when uploading videos to the archive, so this proposal is based on my best guess and searching through several of the existing movies.

I have a few use cases in mind. First of all, I would like to be able to count the number of distinct movies in the Internet Archive, without duplicates. I would further like to identify the IMDB title IDs of the movies in the Internet Archive, to be able to look up an IMDB title ID and know if I can fetch the video from there and share it with my friends.

Second, I would like the Butter data provider for The Internet Archive (available from github) to list as many of the good movies as possible. The plugin currently does a search in the archive with the following parameters:

collection:moviesandfilms
AND NOT collection:movie_trailers
AND -mediatype:collection
AND format:"Archive BitTorrent"
AND year

Most of the cool movies that fail to show up in Butter do so because the 'year' field is missing. The 'year' field is populated from the year part of the 'date' field, and should be when the movie was released (date or year). Two such examples are Ben Hur from 1905 and Caminandes 2: Gran Dillama from 2013, where the year metadata field is missing.
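
One way to find the movies dropped for this reason is to run the same search without the year clause and report the entries lacking the field. Here is a minimal sketch using the internetarchive Python library; note this is my illustration, not code from the Butter plugin:

#!/usr/bin/python3
# Run the Butter search without the 'year' clause and report entries
# missing the year field.  A sketch using the internetarchive Python
# library; this is my illustration, not code from the Butter plugin.
from internetarchive import search_items

query = ('collection:moviesandfilms AND NOT collection:movie_trailers '
         'AND -mediatype:collection AND format:"Archive BitTorrent"')

for result in search_items(query, fields=['identifier', 'year']):
    if 'year' not in result:
        print("missing year: %s" % result['identifier'])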

So, my proposal is simply, for every movie in The Internet Archive where an IMDB title ID exists, please fill in these metadata fields (note, they can be updated long after the video was uploaded, but as far as I can tell, only by the uploader):
mediatype
Should be 'movie' for movies.
collection
Should contain 'moviesandfilms'.
title
The title of the movie, without the publication year.
date
The date or year the movie was released. This makes the movie show up in Butter, as well as making it possible to know the age of the movie, which is useful for figuring out copyright status.
director
The director of the movie. This makes it easier to know if the correct movie is found in movie databases.
publisher
The production company making the movie. Also useful for identifying the correct movie.
links
Add a link to the IMDB title page, for example like this: <a href="http://www.imdb.com/title/tt0028496/">Movie in IMDB</a>. This makes it easier to find duplicates and allows for counting the number of unique movies in the Archive. Other external references, like to TMDB, could be added like this too.

I did consider proposing a custom field for the IMDB title ID (for example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect it will be easier to simply place it in the links free text field.

I created a list of IMDB title IDs for several thousand movies in the Internet Archive, but I also got a list of several thousand movies without such an IMDB title ID (and quite a few duplicates). It would be great if this data set could be integrated into the Internet Archive metadata to be available for everyone in the future, but with the current policy of leaving metadata editing to the uploaders, it will take a while before this happens. If you have uploaded movies to the Internet Archive, you can help. Please consider following my proposal above for your movies, to ensure that each movie is properly counted. :)

The list is mostly generated using wikidata, which based on Wikipedia articles makes it possible to link between IMDB and movies in the Internet Archive. But there are lots of movies without a Wikipedia article, and some movies where only a collection page exists (like the Caminandes example above, where there are three movies but only one Wikidata entry).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18th November 2017

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from the various data sources is available in a git repository, currently hosted on github.

So far I have identified 3186 unique IMDB title IDs. To gain a better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
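
The histogram itself is simple to reproduce. This Python sketch assumes each free-movies-*.json file contains a list of entries with a 'year' field, which might not match the actual format used in the git repository:

#!/usr/bin/python3
# Build a release year histogram from the collected JSON files.
# Assumes each file contains a list of entries with a 'year' field;
# the actual format in the git repository may differ.
import collections
import glob
import json

years = collections.Counter()
for path in glob.glob('free-movies-*.json'):
    with open(path) as f:
        for entry in json.load(f):
            if entry.get('year'):
                years[int(entry['year'])] += 1

for year in sorted(years):
    print("%d %s" % (year, '*' * years[year]))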

I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total

The entries without an IMDB title ID are candidates for increasing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for movies listed in IMDB that are legal to distribute on the Internet is between 3186 and 4713.

It would be great for improving the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review to try to convince them to add more metadata to their movie entries?

Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as wikidata is updated.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st November 2017

If you care about how fault tolerant your storage is, you might find these articles and papers interesting. They have formed how I think when designing a storage system.

Several of these research papers are based on data collected from hundreds of thousands or millions of disks, and their findings are eye opening. The short story is: simply do not implicitly trust RAID or redundant storage systems. Details matter. And unfortunately there are few options on Linux addressing all the identified issues. Both ZFS and Btrfs are doing a fairly good job, but have legal and practical issues of their own. I wonder how cluster file systems like Ceph do in this regard. After all, there is an old saying: you know you have a distributed system when the crash of a computer you have never heard of stops you from getting any work done. The same holds true if fault tolerance does not work.

Just remember, in the end, it does not matter how redundant or how fault tolerant your storage is if you do not continuously monitor its status to detect and replace failed disks.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, raid, sysadmin.
31st October 2017

I was surprised today to learn that a friend in academia did not know there are easily available web services for writing LaTeX documents as a team. I thought it was common knowledge, but to make sure at least my readers are aware of them, I would like to mention these useful services for writing LaTeX documents. Some of them even provide a WYSIWYG editor to ease writing even further.

There are two commercial services available, ShareLaTeX and Overleaf. They are very easy to use. Just start a new document, select which publisher to write for (ie which LaTeX style to use), and start writing. Note, these two have announced their intention to join forces, so soon there will only be one joint service. I've used both for different documents, and they work just fine. ShareLaTeX is free software, while Overleaf is not. According to an announcement from Overleaf, they plan to keep the ShareLaTeX code base maintained as free software.

But these two are not the only alternatives. Fidus Writer is another free software solution, with the source available on github. I have not used it myself. Several others can be found on the nice alternativeTo web service.

If you like Google Docs or Etherpad, but would like to write documents in LaTeX, you should check out these services. You can even host your own, if you want to. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
25th October 2017

Recently, I needed to automatically check the copyright status of a set of The Internet Movie Database (IMDB) entries, to figure out which of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB for sure lists movies without any copyright protection, where the copyright protection has expired or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB.

First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident a movie is "out of copyright" with a few hours of work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply can not work around the clock for about 6 years to check this data set.

I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for the IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all the metadata files in bulk to locate those, so I put that approach aside.

In the process I noticed that several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikipedia RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:

SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}

If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, and when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including both correct IMDB and The Internet Archive IDs in the Wikipedia articles for the movies. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because The Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically.

I wrote a small python script to extract the data set from Wikidata and check if the XML metadata for the movie is available from The Internet Archive, and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (ie 2338-2097-171) are duplicates.
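
The core idea of the script can be sketched like this, using the public Wikidata SPARQL endpoint and the Internet Archive metadata service. The code below is a simplified reconstruction, not the original script:

#!/usr/bin/python3
# Sketch of the Wikidata to Internet Archive check described above.
# The endpoints are real, but the code is a simplified
# reconstruction, not the original script.  The query is expected in
# a file movies.sparql, containing the SPARQL listed earlier.
import json
import urllib.parse
import urllib.request

with open('movies.sparql') as f:
    query = f.read()

url = 'https://query.wikidata.org/sparql?' + urllib.parse.urlencode(
    {'query': query, 'format': 'json'})
with urllib.request.urlopen(url) as f:
    rows = json.load(f)['results']['bindings']

for row in rows:
    imdb = row['imdb']['value']
    ia = row['ia']['value']
    # The metadata service returns an empty JSON object for unknown
    # items, making it easy to test if the item exists.
    with urllib.request.urlopen('https://archive.org/metadata/' + ia) as f:
        if json.load(f):
            print(imdb, ia)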

This is not too bad, given that The Internet Archive reports containing 5331 feature films at the moment, but it also means more than 3000 movies are missing from Wikipedia or are missing the pair of references on Wikipedia.

I was curious about the distribution by release year, and made a little graph to show how the number of free movies is spread over the years:

I expect the relative distribution of the remaining 3000 movies to be similar.

If you want to help, and want to ensure Wikipedia can be used to cross reference The Internet Archive and The Internet Movie Database, please make sure entries like this are listed under the "External links" heading on the Wikipedia article for the movie:

* {{Internet Archive film|id=FightingLady}}
* {{IMDb title|id=0036823|title=The Fighting Lady}}

Please verify the links on the final page, to make sure you did not introduce a typo.

Here is the complete list, if you want to correct the 171 identified Wikipedia entries with broken links to The Internet Archive: Q1140317, Q458656, Q458656, Q470560, Q743340, Q822580, Q480696, Q128761, Q1307059, Q1335091, Q1537166, Q1438334, Q1479751, Q1497200, Q1498122, Q865973, Q834269, Q841781, Q841781, Q1548193, Q499031, Q1564769, Q1585239, Q1585569, Q1624236, Q4796595, Q4853469, Q4873046, Q915016, Q4660396, Q4677708, Q4738449, Q4756096, Q4766785, Q880357, Q882066, Q882066, Q204191, Q204191, Q1194170, Q940014, Q946863, Q172837, Q573077, Q1219005, Q1219599, Q1643798, Q1656352, Q1659549, Q1660007, Q1698154, Q1737980, Q1877284, Q1199354, Q1199354, Q1199451, Q1211871, Q1212179, Q1238382, Q4906454, Q320219, Q1148649, Q645094, Q5050350, Q5166548, Q2677926, Q2698139, Q2707305, Q2740725, Q2024780, Q2117418, Q2138984, Q1127992, Q1058087, Q1070484, Q1080080, Q1090813, Q1251918, Q1254110, Q1257070, Q1257079, Q1197410, Q1198423, Q706951, Q723239, Q2079261, Q1171364, Q617858, Q5166611, Q5166611, Q324513, Q374172, Q7533269, Q970386, Q976849, Q7458614, Q5347416, Q5460005, Q5463392, Q3038555, Q5288458, Q2346516, Q5183645, Q5185497, Q5216127, Q5223127, Q5261159, Q1300759, Q5521241, Q7733434, Q7736264, Q7737032, Q7882671, Q7719427, Q7719444, Q7722575, Q2629763, Q2640346, Q2649671, Q7703851, Q7747041, Q6544949, Q6672759, Q2445896, Q12124891, Q3127044, Q2511262, Q2517672, Q2543165, Q426628, Q426628, Q12126890, Q13359969, Q13359969, Q2294295, Q2294295, Q2559509, Q2559912, Q7760469, Q6703974, Q4744, Q7766962, Q7768516, Q7769205, Q7769988, Q2946945, Q3212086, Q3212086, Q18218448, Q18218448, Q18218448, Q6909175, Q7405709, Q7416149, Q7239952, Q7317332, Q7783674, Q7783704, Q7857590, Q3372526, Q3372642, Q3372816, Q3372909, Q7959649, Q7977485, Q7992684, Q3817966, Q3821852, Q3420907, Q3429733, Q774474

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th October 2017

I find it fascinating how many of the people being locked inside the proposed border wall between the USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist from the East German government, which called the Berlin Wall the “Antifascist Bulwark” after erecting it, claiming that the wall was there to keep enemies from creeping into East Germany, while it was obvious to the people locked inside that it was erected to keep them from escaping.

Do the people in the USA supporting this wall really believe it is a one way wall, only keeping people on the outside from getting in, while not keeping people on the inside from getting out?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
9th October 2017

At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux, while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request for adding it to Debian from 2013, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required by a working Cura in Debian are uploaded into Debian and waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors of Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29th September 2017

Every mobile phone announces its existence over radio to the nearby mobile cell towers. And this radio chatter is available for anyone with a radio receiver capable of receiving it. Details about the mobile phones, with very good accuracy, are of course collected by the phone companies, but that is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone who cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at the Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation shows the nearby mobile phones (or rather their IMSIs) as dots in a web browser graph, with lines to the dots representing the mobile base stations they are talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine using Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed, as the grgsm_livemon_headless processes from gr-gsm converting the radio signal to data packets are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but their coordinates seem to be hardcoded to some random location in Germany. The code should be replaced with code to look up the location in a text file, a sqlite database or one of the online databases mentioned in the github issue for the topic.

If this sound interesting, visit the stand at the festival!

24th September 2017

A little more than a month ago I wrote about how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked on making the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two python scripts:

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install required packages.
  3. Fetch the code decoding GSM packets using 'git clone https://github.com/Oros42/IMSI-catcher.git'.
  4. Insert USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.

Note, due to a bug somewhere, the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL2832 and other similar USB radio receivers you can get very cheaply (for example from ebay), so for now the solution is to scan using the RTL radio and only use the HackRF for fetching GSM data.

As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's in the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3 based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().

9th August 2017

On Friday, I came across an interesting article in the Norwegian web based ICT news magazine digi.no on how to collect the IMSI numbers of nearby cell phones using cheap DVB-T software defined radios. The article referred to instructions and a recipe by Keld Norman on Youtube on how to make a simple $7 IMSI catcher, and I decided to test them out.

The instructions said to use Ubuntu, install pip using apt (to bypass apt), use pip to install pybombs (to bypass both apt and pip), and then ask pybombs to fetch and build everything you need from scratch. I wanted to see if I could do the same with the most recent Debian packages, but this did not work, because pybombs tried to build stuff that no longer builds with the most recent openssl library, or some other version skew problem. While trying to get this recipe working, I learned that the apt->pip->pybombs route was a long detour, and the only piece of the software dependency chain missing in Debian was the gr-gsm package. I also found out that the lead upstream developer of the gr-gsm (the name stands for GNU Radio GSM) project already had a set of Debian packages provided in an Ubuntu PPA repository. All I needed to do was to dget the Debian source package and build it.

The IMSI collector is a Python script listening for packets on the loopback network device and printing to the terminal some specific GSM packets with IMSI numbers in them. The code is fairly short and easy to understand. The reason this works is that gr-gsm includes a tool to read GSM data from a software defined radio like a DVB-T USB stick, decode it and inject it into a network device on your Linux machine (using the loopback device by default). This proved to work just fine, and I've been testing the collector for a few days now.
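
To illustrate the mechanism, here is a bare-bones sketch using the scapy library from the recipe above: it only shows that the GSMTAP frames injected by grgsm arrive on the loopback device (GSMTAP is carried in UDP packets on port 4729). The real simple_IMSI-catcher.py script parses identity fields out of the payload; this sketch stops short of that and is an illustration, not a replacement.

#!/usr/bin/python3
# Sketch: show GSMTAP frames injected on the loopback device by
# grgsm_livemon.  Needs root to sniff.  GSMTAP uses UDP port 4729.
from scapy.all import UDP, sniff

def show(pkt):
    payload = bytes(pkt[UDP].payload)
    print("GSMTAP frame, %d bytes" % len(payload))

sniff(iface='lo', filter='udp port 4729', prn=show)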

The updated and simpler recipe is thus to

  1. start with a Debian machine running Stretch or newer,
  2. build and install the gr-gsm package available from http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/,
  3. clone the git repository from https://github.com/Oros42/IMSI-catcher,
  4. run grgsm_livemon and adjust the frequency until the terminal where it was started is filled with a stream of text (meaning you found a GSM station), and
  5. go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.

To make it even easier in the future to get this sniffer up and running, I decided to package the gr-gsm project for Debian (WNPP #871055), and the package was uploaded into the NEW queue today. Luckily the gnuradio maintainer has promised to help me, as I do not know much about gnuradio stuff yet.

I doubt this "IMSI cacher" is anywhere near as powerfull as commercial tools like The Spy Phone Portable IMSI / IMEI Catcher or the Harris Stingray, but I hope the existance of cheap alternatives can make more people realise how their whereabouts when carrying a cell phone is easily tracked. Seeing the data flow on the screen, realizing that I live close to a police station and knowing that the police is also wearing cell phones, I wonder how hard it would be for criminals to track the position of the police officers to discover when there are police near by, or for foreign military forces to track the location of the Norwegian military forces, or for anyone to track the location of government officials...

It is worth noting that the data reported by the IMSI-catcher script mentioned above is only a fraction of the data broadcast on the GSM network. It will only collect from one frequency at a time, while a typical phone will be using several frequencies, and not all phones will be using the frequencies tracked by the grgsm_livemon program. Also, there is a lot of radio chatter being ignored by the simple_IMSI-catcher script, which could be collected by extending the parser code. I wonder if gr-gsm can be set up to listen to more than one frequency?

25th July 2017

I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is available from lulu.com. If you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.

This is the second book I have published (the first was the book "Free Culture" by Lawrence Lessig in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.

12th June 2017

It is pleasing to see that the work we put into publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper edition today. Not too impressive, but I am happy to see some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from GitHub.

Title / language          Quantity
                          2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French               3              6             15
Fri kultur / Norwegian               7              1              0
Free Culture / English              14             27             16
Total                               24             34             31

It is a bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

10th June 2017

I am very happy to report that the Nikita Noark 5 core project tagged its second release today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0 (from NEWS.md):

  • Continued work on the angularjs GUI, including document upload.
  • Implemented correspondencepartPerson, correspondencepartUnit and correspondencepartInternal.
  • Applied for Coverity coverage and started submitting code on a regular basis.
  • Started fixing bugs reported by Coverity.
  • Corrected and completed HATEOAS links to make sure the entire API is available via URLs in _links.
  • Corrected all relation URLs to use trailing slash.
  • Add initial support for storing data in ElasticSearch.
  • Now able to receive and store uploaded files in the archive.
  • Changed JSON output for object lists to have relations in _links.
  • Improve JSON output for empty object lists.
  • Now uses correct MIME type application/vnd.noark5-v4+json.
  • Added support for docker container images.
  • Added simple API browser implemented in JavaScript/Angular.
  • Started on archive client implemented in JavaScript/Angular.
  • Started on prototype to show the public mail journal.
  • Improved performance by disabling Spring FileWatcher.
  • Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.
  • Added support for some metadata codelists.
  • Added support for Cross-origin resource sharing (CORS).
  • Changed login method from Basic Auth to JSON Web Token (RFC 7519) style.
  • Added support for GET-ing ny-* URLs.
  • Added support for modifying entities using PUT and eTag.
  • Added support for returning XML output on request.
  • Removed support for English field and class names, limiting ourselves to the official names.
  • ...

If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

7th June 2017

This is a copy of an email I posted to the nikita-noark mailing list. Please follow up there if you would like to discuss this topic. The background is that we are making a free software archive system based on the Norwegian Noark 5 standard for government archives.

I've been wondering a bit lately how trusted timestamps could be stored in Noark 5. Trusted timestamps can be used to verify that some information (document/file/checksum/metadata) has not been changed since a specific time in the past. This is useful to verify the integrity of the documents in the archive.

Then it occurred to me, perhaps the trusted timestamps could be stored as dokument variants (ie dokumentobjekt referred to from dokumentbeskrivelse) with the filename set to the hash it is stamping?

Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", a new dokumentobjekt is associated with "dokumentbeskrivelse" with the same attributes as the stamped dokumentobjekt except these attributes:

  • format -> "RFC3161"
  • mimeType -> "application/timestamp-reply"
  • formatDetaljer -> "<source URL for timestamp service>"
  • filenavn -> "<sjekksum>.tsr"

This assumes a service following IETF RFC 3161 is used, which specifies the given MIME type for replies and the .tsr file ending for the content of such trusted timestamps. As far as I can tell from the Noark 5 specifications, it is OK to have several variants/renderings of a dokument attached to a given dokumentbeskrivelse objekt. It might be stretching it a bit to make some of these variants represent crypto-signatures useful for verifying the document integrity instead of representing the dokument itself.
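
Expressed as JSON, the proposed variant could look something like this (a hypothetical rendering based on the attribute list above, not a verified example from the specification):

{
  "format" : "RFC3161",
  "mimeType" : "application/timestamp-reply",
  "formatDetaljer" : "http://zeitstempel.dfn.de",
  "filenavn" : "<sjekksum>.tsr"
}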

Using the source of the service in formatDetaljer allows several timestamping services to be used. This is useful to spread the risk of key compromise over several organisations. It would only be a problem to trust the timestamps if all of the organisations were compromised.

The following oneliner on Linux can be used to generate the tsr file. $inputfile is the path to the file to checksum, and $sha256 is the SHA-256 checksum of the file (ie the <sjekksum> part of the "<sjekksum>.tsr" file name mentioned above).

openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
      --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr

To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:

wget -O ca-cert.txt \
  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make them easier to find for those trying to verify documents 100 or 1000 years from now. :)

The verification itself is a simple openssl command:

openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text
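
For convenience, the query and verify steps can also be wrapped in a bit of Python. This is a minimal sketch assuming the same DFN service and the file naming convention proposed above; it simply shells out to the openssl and curl commands already shown:

#!/usr/bin/python3
# Sketch wrapping the openssl/curl calls above.  Assumes openssl and
# curl are installed, and that ca-cert.txt holds the service chain.
import hashlib
import subprocess

def make_timestamp(inputfile, url='http://zeitstempel.dfn.de'):
    sha256 = hashlib.sha256(open(inputfile, 'rb').read()).hexdigest()
    query = subprocess.run(
        ['openssl', 'ts', '-query', '-data', inputfile,
         '-cert', '-sha256', '-no_nonce'],
        capture_output=True, check=True).stdout
    reply = subprocess.run(
        ['curl', '-s', '-H', 'Content-Type: application/timestamp-query',
         '--data-binary', '@-', url],
        input=query, capture_output=True, check=True).stdout
    tsrfile = sha256 + '.tsr'
    with open(tsrfile, 'wb') as f:
        f.write(reply)
    return tsrfile

def verify_timestamp(inputfile, tsrfile, cafile='ca-cert.txt'):
    subprocess.run(['openssl', 'ts', '-verify', '-data', inputfile,
                    '-in', tsrfile, '-CAfile', cafile, '-text'],
                   check=True)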

Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?

19th March 2017

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, personally my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those who want to test the current code base. The API tester is implemented in Python.

9th March 2017

Over the years, administrating thousands of NFS mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking the detection process getting stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

nfs: server nfsserver not responding, still trying
nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.

While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (ie with a different file system view), but that does not worry me.

The content for an NFS mount point looks similar to this:

[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
        opts:   rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
        age:    7863311
        caps:   caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
        sec:    flavor=1,pseudoflavor=1
        events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0 
        bytes:  166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809 
        RPC iostats version: 1.0  p/v: 100003/3 (nfs)
        xprt:   tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
        per-op statistics
                NULL: 0 0 0 0 0 0 0 0
             GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
             SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
              LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
              ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
            READLINK: 125 125 0 20472 18620 0 1112 1118
                READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
               WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
              CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
               MKDIR: 3680 3680 0 773980 993920 26 23990 24245
             SYMLINK: 903 903 0 233428 245488 6 5865 5917
               MKNOD: 80 80 0 20148 21760 0 299 304
              REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
               RMDIR: 3367 3367 0 645112 484848 22 5782 6002
              RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
                LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
             READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
         READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
              FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
              FSINFO: 2 2 0 232 328 0 1 1
            PATHCONF: 1 1 0 116 140 0 0 0
              COMMIT: 0 0 0 0 0 0 0 0

device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation, here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
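
To detect a hang automatically, one can sample the timeout counters twice and look for growth. Here is a minimal sketch of the idea in Python, based on the per-op line format shown above (operation name followed by counters, with the timeout count third); treat it as an illustration, not production monitoring code:

#!/usr/bin/python3
# Sketch: sample the NFS per-operation timeout counters twice and
# report the operations where the count increased.
import time

def nfs_timeouts(path='/proc/self/mountstats'):
    """Map (device, operation) to the per-op timeout count."""
    timeouts = {}
    device = None
    in_per_op = False
    for line in open(path):
        words = line.split()
        if line.startswith('device '):
            device = words[1]
            in_per_op = False
        elif 'per-op statistics' in line:
            in_per_op = True
        elif in_per_op and len(words) >= 4 and words[0].endswith(':'):
            timeouts[(device, words[0].rstrip(':'))] = int(words[3])
    return timeouts

before = nfs_timeouts()
time.sleep(10)
for device, op in [key for key, count in nfs_timeouts().items()
                   if count > before.get(key, 0)]:
    print('possible NFS hang: %s %s timeouts increasing' % (device, op))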

The only way I have found to get the timeout count on Debian and Red Hat Enterprise Linux is to peek in /proc/. But according to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values. But this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.

Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.

8th March 2017

So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asked the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Update 2017-03-13: Looks like The Intercept reports that US Senator Rand Paul confirms what I state above.

3rd March 2017

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless efforts of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition of the book in A4 format (the final book will have smaller pages), created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct the error. The state of the translation including figures is a useful source for those providing Norwegian Bokmål screenshots and figures.

1st March 2017

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy, created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, to get it to work out of the box, you need Linux kernel version 4.1 or later. I tested it on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small oneliner to test it. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
300
0+1 records in
0+1 records out
28 bytes copied, 0.000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case any application requiring random bits (like an HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
1079
0+1 records in
0+1 records out
104 bytes copied, 0.000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Update: The dongle was presented at DebConf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schematic drawing available from the ChaosKey web site linked at the start of this blog post.

Tags: debian, english.
21st February 2017

I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect if an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but I do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any such tool.

Tags: english, nuug, standard.
13th February 2017

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.

3rd February 2017

On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on its own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining searching a BitTorrent directory available on the Internet with downloading/distributing content via BitTorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using BitTorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks' time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiig lawyer office, and three others I am not quite sure about. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.

If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.

9th January 2017

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
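
As an example of the scapy variant, its built-in traceroute() function (TCP based by default) can be called from a few lines of Python, run as root:

#!/usr/bin/python3
# Run a TCP traceroute using scapy and print the result table.
from scapy.all import traceroute

res, unanswered = traceroute('www.stortinget.no', maxttl=30)
res.show()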

This time around, I wanted to know the geographic location of the different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today will often ask the browser to fetch the parts required to display the content (for example HTML, JSON, fonts, JavaScript, CSS, video) from many servers. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format, using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
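
The core idea of such a wrapper can be sketched in a few lines of Python. This is a simplification of what kmltraceroute does, assuming the legacy GeoIP Python bindings and the GeoLite City data file path used on Debian; see the git repository for the real code:

#!/usr/bin/python3
# Sketch: run traceroute, look up each hop in the GeoLite City
# database, and emit a KML placemark per located hop.
import re
import subprocess
import GeoIP

gi = GeoIP.open('/usr/share/GeoIP/GeoLiteCity.dat', GeoIP.GEOIP_STANDARD)

out = subprocess.run(['traceroute', '-n', 'www.stortinget.no'],
                     capture_output=True, text=True).stdout
print('<?xml version="1.0" encoding="UTF-8"?>')
print('<kml xmlns="http://www.opengis.net/kml/2.2"><Document>')
for ip in re.findall(r'^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)', out, re.MULTILINE):
    record = gi.record_by_addr(ip)
    if record:  # hops missing from the database are skipped
        print('<Placemark><name>%s</name><Point><coordinates>%f,%f'
              '</coordinates></Point></Placemark>'
              % (ip, record['longitude'], record['latitude']))
print('</Document></kml>')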

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. This graph makes it possible to see that the information is made available to at least UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names and various location databases, and finally using latency times to rule out candidate locations, seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of the traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden confirmations verified the traffic is read by various actors without your best interests as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP numbers instead of DNS names). That might be the next step in this project.
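
If a text editor feels too manual, the merge can also be scripted. Here is a small sketch collecting all Placemark elements from several KML files into one document:

#!/usr/bin/python3
# Merge the Placemark elements of the KML files given as arguments
# into a single KML document written to stdout.
import sys
import xml.etree.ElementTree as ET

ns = 'http://www.opengis.net/kml/2.2'
ET.register_namespace('', ns)
root = ET.Element('{%s}kml' % ns)
doc = ET.SubElement(root, '{%s}Document' % ns)
for filename in sys.argv[1:]:
    for placemark in ET.parse(filename).iter('{%s}Placemark' % ns):
        doc.append(placemark)
ET.ElementTree(root).write(sys.stdout.buffer, encoding='utf-8',
                           xml_declaration=True)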

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure Swedish Signal Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th January 2017

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning an event set on and off becomes slower and slower the more entries are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on GitHub. The system is written in Python and depends on the vobject Python module.

To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.
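
The heart of the tool is a loop like the following, shown here as a simplified sketch of the vevent handling only (it assumes every vevent has a DTSTART and ignores the two year cutoff); the real ical-archiver script handles vtodo and vjournal as well:

#!/usr/bin/python3
# Sketch: bucket the vevent components of an iCalendar file by the
# year they start, and write one new file per year.
import sys
from collections import defaultdict
import vobject

filename = sys.argv[1]
calendar = vobject.readOne(open(filename))
per_year = defaultdict(list)
for event in calendar.contents.get('vevent', []):
    per_year[event.dtstart.value.year].append(event)
for year, events in sorted(per_year.items()):
    subset = vobject.iCalendar()
    for event in events:
        subset.add(event)
    with open('%s-subset-vevent-%d.ics' % (filename, year), 'w') as out:
        out.write(subset.serialize())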

This is what a test run can look like:

% ical-archiver t/2004-2016.ics 
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard.
23rd December 2016

I received a very nice Christmas present today. As my regular readers probably know, I have been working on the Isenkram system for many years. The goal of the Isenkram system is to make it easier for users to figure out what to install to get a given piece of hardware to work in Debian, and a key part of this system is a way to map hardware to packages. Isenkram has its own mapping database, and also uses data provided by each package using the AppStream metadata format. And today, AppStream in Debian learned to look up hardware the same way Isenkram is doing it, ie using fnmatch():

% appstreamcli what-provides modalias \
  usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
Identifier: pymissile [generic]
Name: pymissile
Summary: Control original Striker USB Missile Launcher
Package: pymissile
% appstreamcli what-provides modalias usb:v0694p0002d0000
Identifier: libnxt [generic]
Name: libnxt
Summary: utility library for talking to the LEGO Mindstorms NXT brick
Package: libnxt
---
Identifier: t2n [generic]
Name: t2n
Summary: Simple command-line tool for Lego NXT
Package: t2n
---
Identifier: python-nxt [generic]
Name: python-nxt
Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
Package: python-nxt
---
Identifier: nbc [generic]
Name: nbc
Summary: C compiler for LEGO Mindstorms NXT bricks
Package: nbc
%

A similar query can be done using the combined AppStream and Isenkram databases using the isenkram-lookup tool:

% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
pymissile
% isenkram-lookup usb:v0694p0002d0000
libnxt
nbc
python-nxt
t2n
%

You can find modalias values relevant for your machine using cat $(find /sys/devices/ -name modalias).
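
The matching itself is plain shell style globbing. A small illustration of the principle (the pattern is made up for the example, not copied from the pymissile metadata):

#!/usr/bin/python3
# Illustrate modalias matching: hardware mapping entries are glob
# patterns compared against the modalias values with fnmatch().
from fnmatch import fnmatch

pattern = 'usb:v1130p0202d*'
modalias = 'usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00'
if fnmatch(modalias, pattern):
    print('match, propose the mapped package')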

If you want to make this system a success and help Debian users make the most of the hardware they have, please help add AppStream metadata for your package following the guidelines documented in the wiki. So far only 11 packages provide such information, among the several hundred hardware specific packages in Debian. The Isenkram database on the other hand contains 101 packages, mostly related to USB dongles. Most of the packages with hardware mapping in AppStream are LEGO Mindstorms related, because I have, as part of my involvement in the Debian LEGO team, given priority to making sure LEGO users get proposed the complete set of packages in Debian for that particular hardware. The team also got a nice Christmas present today. The nxt-firmware package made it into Debian. With this package in place, it is now possible to use the LEGO Mindstorms NXT unit with only free software, as the nxt-firmware package contains the source and firmware binaries for the NXT brick.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th December 2016

The Isenkram system I wrote two years ago to make it easier in Debian to find and install packages to get your hardware dongles to work is still going strong. It is a system to look up the hardware present on or connected to the current system, and map the hardware to Debian packages. This can be done either using the tools in isenkram-cli or using the user space daemon in the isenkram package. The latter will notify you, when inserting new hardware, about what packages to install to get the dongle working. It will even provide a button to click on to ask PackageKit to install the packages.

Here is a command line example from my Thinkpad laptop:

% isenkram-lookup  
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware packages providing the firmware requested by the loaded kernel modules, which in my case is an empty list because I have all the firmware my machine needs:

% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules.  exiting
%

The last few days I had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that there are now 97 packages mapped to hardware by Isenkram. 11 of these packages provide hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use:

air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram specific hardware mapping and depend exclusively on AppStream.

Note, the AppStream metadata for broadcom-sta-dkms matches too much hardware, and suggests the package for any Ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the meantime I provide an override in isenkram.

11th December 2016

In my early years, I played the epic game Elite on my PC. I spent many months trading and fighting in space, and reached the 'elite' fighting status before I moved on. The original Elite game was available on the Commodore 64, and the IBM PC edition I played had a 64 KB executable. I am still impressed today that the authors managed to squeeze both a 3D engine and details about more than 2000 planet systems across 7 galaxies into a binary so small.

I have known about the free software game Oolite, inspired by Elite, for a while, but did not really have time to test it properly until a few days ago. It was great to discover that my old knowledge about trading routes was still valid. But my fighting and flying abilities were gone, so I had to retrain to be able to dock at a space station. And I am still not able to put up much resistance when I am attacked by pirates, so I bought and mounted the most powerful laser in the rear to be able to put up at least some resistance while fleeing for my life. :)

When playing Elite in the late eighties, I had to discover everything on my own, and I kept long lists of prices seen on different planets to be able to decide where to trade what. This time I had the advantage of the Elite wiki, where information about each planet is easily available with common price ranges and suggested trading routes. This improved my ability to earn money, and I have been able to earn enough to buy a lot of useful equipment in a few days. I believe I originally played for months before I could get a docking computer, while now I could get it after less than a week.

If you like science fiction and dreamed of a life as a vagabond in space, you should try out Oolite. It is available for Linux, Mac OS X and Windows, and has been included in Debian and derivatives since 2011.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th November 2016

Two years ago, I did some experiments with eatmydata and the Debian installation system, observing how using eatmydata could speed up the installation quite a bit. My testing measured a speedup of around 20-40 percent for Debian Edu, where we install around 1000 packages from within the installer. The eatmydata package provides a way to disable/delay file system flushing. This is a bit risky in the general case, as files that should be stored on disk will stay in memory a bit longer than expected, causing problems if a machine crashes at an inconvenient time. But for an installation, if the machine crashes during installation the process is normally restarted, and avoiding disk operations as much as possible to speed up the process makes perfect sense.

I added code in the Debian Edu specific installation code to enable eatmydata, but did not have time to push it any further. But a few months ago I picked it up again and worked with the libeatmydata package maintainer Mattia Rizzolo to make it easier for everyone to get this installation speedup in Debian. Thanks to our cooperation, there is now an eatmydata-udeb package in Debian testing and unstable, and simply enabling/installing it in debian-installer (d-i) is enough to get the quicker installations. It can be enabled using preseeding. The following untested kernel argument should do the trick:

preseed/early_command="anna-install eatmydata-udeb"

This should ask d-i to install the package inside the d-i environment early in the installation sequence. Having it installed in d-i will in turn make sure the relevant scripts are called just after debootstrap has filled /target/ with the freshly installed Debian system, to configure apt to run dpkg with eatmydata. This is enough to speed up the installation process. There is a proposal to extend the idea a bit further by using /etc/ld.so.preload instead of apt.conf, but I have not tested its impact.
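
To illustrate the apt.conf idea, a fragment like the following would point apt at a wrapper running dpkg under eatmydata (an illustration of the mechanism only; the file name and wrapper path are made up, and the actual hook scripts in eatmydata-udeb may do it differently):

# /target/etc/apt/apt.conf.d/90eatmydata (hypothetical example)
# Make apt call dpkg through a wrapper that preloads libeatmydata,
# so package unpacking skips the expensive fsync() calls.
Dir::Bin::dpkg "/usr/bin/eatmydata-dpkg";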

13th November 2016

The Coz profiler, a nice profiler able to run benchmarking experiments on the instrumented multi-threaded program, finally made it into Debian unstable yesterday. Lluís Vilanova and I have spent many months since I blogged about the coz tool in August working with upstream to make it suitable for Debian. There are still issues with clang compatibility, inline assembly only working on x86, and minimized JavaScript libraries.

To test it, install 'coz-profiler' using apt and run it like this:

coz run --- /path/to/binary-with-debug-info

This will produce a profile.coz file in the current working directory with the profiling information. This is then given to a JavaScript application provided in the package and available from a project web page. To start the local copy, invoke it in a browser like this:

sensible-browser /usr/share/coz-profiler/viewer/index.htm

See the project home page and the USENIX ;login: article on Coz for more information on how it works.

Tags: debian, english.
7th November 2016

A few days ago I ran a very biased and informal survey to get an idea about what options are being used to communicate using end to end encryption with friends and family. I explicitly asked people not to list options only used in a work setting. The background is the uneasy feeling I get when using Signal, a feeling shared by others, as seen in a blog post from Sander Venema about why he does not recommend Signal anymore (with feedback from the Signal author available from ycombinator). I wanted an overview of the options being used, and hope to include those options in a less biased survey later on. So far I have not taken the time to look into the individual proposed systems. They range from text sharing web pages, via file sharing and email, to instant messaging, VoIP and video conferencing. For those considering which system to use, it is also useful to have a look at the EFF Secure messaging scorecard, which is slightly out of date but still provides valuable information.

So, on to the list. There were some used by many, some used by a few, some rarely used ones and a few mentioned but without anyone claiming to use them. Notice the grouping is in reality quite random, given the biased, self-selected set of participants. First the ones used by many:

Then the ones used by a few.

Then the ones used by even fewer people.

And finally the ones mentioned but not marked as used by anyone. This might be a mistake; perhaps the person adding the entry forgot to flag it as used?

Given the network effect, it seems obvious to me that we as a society have been divided and conquered by those interested in keeping encrypted and secure communication away from the masses. The finishing remarks from Aral Balkan in his talk "Free is a lie", about the usability of free software, really come into effect when you want to communicate in private with your friends and family. We cannot expect them to let the usability of a communication tool block their ability to talk to their loved ones.

Note for example the option IRC w/OTR. Most IRC clients do not have OTR support, so in most cases OTR would not be an option, even if you wanted it. In my personal experience, about 1 in 20 of those I talk to have an IRC client with OTR. For private communication to really be available, most of the people you talk to must have the option in the client they already use. I can not simply ask my family to install an IRC client; I would need to guide them through a technical multi-step process of adding extensions to the client to get them going. This is a non-starter for most.

I would like to be able to do video phone calls, audio phone calls, exchange instant messages and share files with my loved ones, without being forced to share with people I do not know. I do not want to share the content of the conversations, and I do not want to share who I communicate with or the fact that I communicate with someone. Without all these factors in place, my private life is being more or less invaded.

Update 2019-10-08: Børge Dvergsdal, who told me he is Customer Relationship Manager @ Whereby (formerly appear.in), asked if I could mention that appear.in has been renamed and is now found at https://whereby.com/. And sure, why not. Apparently they changed the name because they were unable to trademark appear.in somewhere... While I am at it, I can mention that Ring changed its name to Jami, now available from https://jami.net/. Luckily they were able to have a direct redirect from ring.cx to jami.net, so the user experience is almost the same.

4th November 2016

A while back I received a gyro sensor for the NXT Mindstorms controller as a birthday present. It had been on my wishlist for a while, because I wanted to build a Segway-like balancing LEGO robot. I had already built a simple balancing robot with the kids, using the light/color sensor included in the NXT kit as the balance sensor, but it was not working very well. It could balance for a while, but was very sensitive to the light conditions in the room and the reflective properties of the surface, and would fall over after a short while. I wanted something more robust, and had the gyro sensor from HiTechnic, which I believed would solve it, on my wishlist for some years before it suddenly showed up as a gift from my loved ones. :)

Unfortunately I have not had time to sit down and play with it since then. But that changed some days ago, when I was searching for LEGO Segway information and came across a recipe from HiTechnic for building the HTWay, a Segway-like balancing robot. Build instructions and source code were included, so it was just a question of putting it all together. And thanks to the great work of many Debian developers, the compiler needed to build the source for the NXT is already included in Debian, so I was ready to go in less than an hour. The resulting robot does not look very impressive in its simplicity:

Because I lack the infrared sensor used to control the robot in the design from HiTechnic, I had to comment out the last task (taskControl). I simply placed /* and */ around it to get the program working without that sensor present. Now it balances just fine until the battery runs low:

Now we would like to teach it how to follow a line and take remote control instructions using the included Bluetooth receiver in the NXT.

If you, like me, love LEGO and want to make sure we have the tools needed to work with it in Debian and all our derivative distributions like Ubuntu, check out the LEGO designers project page and join the Debian LEGO team. Personally I own an RCX and an NXT controller (no EV3), and would like to make sure the Debian tools needed to program the systems I own work as they should.

Tags: debian, english, lego, robot.
10th October 2016

In July I wrote about how to get the Signal Chrome/Chromium app working without the ability to receive SMS messages (aka without a cell phone). It is time to share some experiences and provide an updated setup.

The Signal app has worked fine for several months now, and I use it regularly to chat with my loved ones. I had a major snag at the end of my summer vacation, when the app completely forgot my setup, identity and keys. The reason behind this major mess was running out of disk space. To avoid that ever happening again, I have started storing everything in userdata/ in git, to be able to roll back to an earlier version if the files are wiped by mistake. I have had to use it once since introducing the git backup. When rolling back to an earlier version, one needs to use the 'reset session' option in Signal to get going, and notify the people you talk with about the problem. I assume there is some sequence number tracking in the protocol to detect rollback attacks. The git repository is rather big (674 MiB so far), but I have not tried to figure out if some of the content can be added to a .gitignore file, due to lack of spare time.
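
Sketched as commands, such a rollback is roughly as follows (the commit ID is of course a placeholder for whichever commit you pick from the log):

cd userdata
git log --oneline           # find the last known-good commit
git reset --hard <commitid> # restore the files from that commit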

I've also hit the 90 day timeout blocking, and noticed that this makes it impossible to send messages using Signal. I could still receive them, but had to patch the code with a new timestamp to be able to send. I believe the timeout is added by the developers to force people to upgrade to the latest version of the app, even when there are no protocol changes, to reduce the version skew among the user base and thus try to keep the number of support requests down.

Since my original recipe, the Signal source code has changed slightly, making the old patch fail to apply cleanly. Below is an updated patch, including the shell wrapper I use to start Signal. The original version required a new user to locate the JavaScript console and call a function from there. I got help from a friend with more JavaScript knowledge than me to modify the code to provide a GUI button instead. This means that to get started, you just need to run the wrapper and click 'Register without mobile phone'. I've also modified the timeout code to always set it to 90 days in the future, to avoid having to patch the code regularly.

So, the updated recipe for Debian Jessie:

  1. First, install the required packages to get the source code and the browser you need. Signal only works with Chrome/Chromium, as far as I know, so you need to install it.
    apt install git tor chromium
    git clone https://github.com/WhisperSystems/Signal-Desktop.git
    
  2. Modify the source code using the commands listed in the patch block below.
  3. Start Signal using the run-signal-app wrapper (for example using `pwd`/run-signal-app).
  4. Click on 'Register without mobile phone', fill in a phone number you can receive calls on within the next minute, receive the verification code and enter it into the form field, and press 'Register'. Note, the phone number you use will be your Signal username, i.e. the way others can find you on Signal.
  5. You can now use Signal to contact others. Note, new contacts do not show up in the contact list until you restart Signal, and there is no way to assign names to contacts. There is also no way to create or update chat groups. I suspect this is because the web app does not have an associated contact database.

I am still a bit uneasy about using Signal, because of the way its main author moxie0 rejects federation and accepts dependencies on major corporations like Google (part of the code is fetched from Google) and Amazon (the central coordination point is hosted by Amazon). See for example the LibreSignal issue tracker for a thread documenting the author's view on these issues. But the network effect is strong in this case, and several of the people I want to communicate with already use Signal. Perhaps we can all move to Ring once it works on my laptop? It already works on Windows and Android, and is included in Debian and Ubuntu, but is not working on Debian Stable.

Anyway, this is the patch I apply to the Signal code to get it working. It switches to the production servers, disables the timeout, makes registration easier and adds the shell wrapper:

cd Signal-Desktop; cat <<EOF | patch -p1
diff --git a/js/background.js b/js/background.js
index 24b4c1d..579345f 100644
--- a/js/background.js
+++ b/js/background.js
@@ -33,9 +33,9 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
     var SERVER_PORTS = [80, 4433, 8443];
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff --git a/js/expire.js b/js/expire.js
index 639aeae..beb91c3 100644
--- a/js/expire.js
+++ b/js/expire.js
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
 
     window.extension = window.extension || {};
 
diff --git a/js/views/install_view.js b/js/views/install_view.js
index 7816f4f..1d6233b 100644
--- a/js/views/install_view.js
+++ b/js/views/install_view.js
@@ -38,7 +38,8 @@
             return {
                 'click .step1': this.selectStep.bind(this, 1),
                 'click .step2': this.selectStep.bind(this, 2),
-                'click .step3': this.selectStep.bind(this, 3)
+                'click .step3': this.selectStep.bind(this, 3),
+                'click .callreg': function() { extension.install('standalone') },
             };
         },
         clearQR: function() {
diff --git a/options.html b/options.html
index dc0f28e..8d709f6 100644
--- a/options.html
+++ b/options.html
@@ -14,7 +14,10 @@
         <div class='nav'>
           <h1>{{ installWelcome }}</h1>
           <p>{{ installTagline }}</p>
-          <div> <a class='button step2'>{{ installGetStartedButton }}</a> </div>
+          <div> <a class='button step2'>{{ installGetStartedButton }}</a>
+	    <br> <a class="button callreg">Register without mobile phone</a>
+
+	  </div>
           <span class='dot step1 selected'></span>
           <span class='dot step2'></span>
           <span class='dot step3'></span>
--- /dev/null   2016-10-07 09:55:13.730181472 +0200
+++ b/run-signal-app   2016-10-10 08:54:09.434172391 +0200
@@ -0,0 +1,12 @@
+#!/bin/sh
+set -e
+cd $(dirname $0)
+mkdir -p userdata
+userdata="`pwd`/userdata"
+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
+    (cd $userdata && git init)
+fi
+(cd $userdata && git add . && git commit -m "Current status." || true)
+exec chromium \
+  --proxy-server="socks://localhost:9050" \
+  --user-data-dir=$userdata --load-and-launch-app=`pwd`
EOF
chmod a+rx run-signal-app

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

7th October 2016

The Isenkram system provides a practical and easy way to figure out which packages support the hardware in a given machine. The command line tool isenkram-lookup and the tasksel options provide a convenient way to list and install packages relevant for the current hardware during system installation, both user space packages and firmware packages. The GUI background daemon, on the other hand, provides a pop-up proposing to install packages when a new dongle is inserted while using the computer. For example, if you plug in a smart card reader, the system will ask if you want to install pcscd if that package isn't already installed, and if you plug in a USB video camera the system will ask if you want to install cheese if cheese is currently missing. This already works just fine.

But Isenkram depends on a database mapping hardware IDs to package names. When I started, no such database existed in Debian, so I made my own data set, included it with the isenkram package and made isenkram fetch the latest version of this database from git using http. This way the isenkram users get updated package proposals as soon as I learn more about hardware related packages.

The hardware is identified using modalias strings. The modalias design is from the Linux kernel, where most hardware descriptors are made available as strings that can be matched using filename style globbing. It handles USB, PCI, DMI and a lot of other hardware related identifiers.
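
To illustrate, here is how one can inspect the modalias string of a USB device and match it with shell style globbing. The device path below is an example and varies between machines, and the output shown is a made-up example using the LEGO USB vendor ID 0694:

# Show the modalias of a USB device (the path is an example)
cat /sys/bus/usb/devices/1-1/modalias
# Example output: usb:v0694p0002d0400dc00dsc00dp00ic01isc01ip00in00

# Match it using the shell's own filename style globbing, here
# against any device from USB vendor 0694 (the LEGO Group):
case "$(cat /sys/bus/usb/devices/1-1/modalias)" in
    usb:v0694p*) echo "LEGO USB device found" ;;
esac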

The downside to the Isenkram specific database is that there is no information about the relevant distribution / Debian version, making isenkram propose obsolete packages too. But along came AppStream, a cross distribution mechanism to store and collect metadata about software packages. When I heard about the proposal, I contacted the people involved and suggested adding a hardware matching rule using modalias strings to the specification, to be able to use AppStream for mapping hardware to packages. This idea was accepted, and AppStream is now a great way for a package to announce the hardware it supports in a distribution neutral way. I wrote a recipe on how to add such meta-information in a blog post last December. If you have a hardware related package in Debian, please announce the relevant hardware IDs using AppStream.

In Debian, almost all packages that can talk to a LEGO Mindstorms RCX or NXT unit announce this support using AppStream. The effect is that when you insert such a LEGO robot controller into your Debian machine, Isenkram will propose to install the packages needed to get it working. The intention is that this should allow the local user to start programming the robot controller right away, without having to guess what packages to use or which permissions to fix.

But when I sat down with my son the other day to program our NXT unit using his Debian Stretch computer, I discovered something annoying. The local console user (i.e. my son) did not get access to the USB device for programming the unit. This used to work, but no longer in Jessie and Stretch. After some investigation and asking around on #debian-devel, I discovered that this was because udev had changed the mechanism used to grant access to local devices. The ConsoleKit mechanism from /lib/udev/rules.d/70-udev-acl.rules no longer applied, because LDAP users were no longer added to the plugdev group during login. Michael Biebl told me that this method was obsolete and that the new method uses ACLs instead. This was good news, as the plugdev mechanism is a mess when using a remote user directory like LDAP. Using ACLs makes sure a user loses device access when she logs out, even if she left behind a background process which would have retained the plugdev membership with the ConsoleKit setup. Armed with this knowledge I moved on to fix the access problem for the LEGO Mindstorms related packages.

The new system uses a udev tag, 'uaccess'. It can either be applied directly for a device, or be applied in /lib/udev/rules.d/70-uaccess.rules for classes of devices. As the LEGO Mindstorms udev rules did not have a class, I decided to add the tag directly in the udev rules files included in the packages. Here is one example. For the nqc C compiler for the RCX, the /lib/udev/rules.d/60-nqc.rules file now looks like this:

SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
    SYMLINK+="rcx-%k", TAG+="uaccess"

The key part is the 'TAG+="uaccess"' at the end. I suspect all packages using plugdev in their /lib/udev/rules.d/ files should be changed to use this tag (either directly or indirectly via 70-uaccess.rules). Perhaps a lintian check should be created to detect this?
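
To check that the tag has the intended effect, one can look at the ACL udev attaches to the device node while a user is logged in at the console. Roughly like this, assuming the acl package is installed and that the bus and device numbers below are replaced with the ones lsusb reports for your device:

lsusb | grep -i lego
# Show the ACL on the device node found above; the console user
# should be listed with rw access when the uaccess tag works:
getfacl /dev/bus/usb/001/004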

I've been unable to find good documentation on the uaccess feature. It is unclear to me if the uaccess tag is an internal implementation detail, like the udev-acl tag used by /lib/udev/rules.d/70-udev-acl.rules. If it is, I guess the indirect method is the preferred way. Michael asked the systemd project for more documentation, and I hope that will make this clearer. For now I use the generic classes when they exist and are already handled by 70-uaccess.rules, and add the tag directly if no such class exists.

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

To help out making life easier for LEGO constructors in Debian, please join us on our IRC channel #debian-lego and join the Debian LEGO team in the Alioth project we created yesterday. A mailing list has not been created yet, but we are working on it. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

30th August 2016

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing using the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as in electronic form.

11th August 2016

This summer, I read a great article, "coz: This Is the Profiler You're Looking For", in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running, and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code, by measuring the complete program runtime and running the program several times instead.

The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to fetch dependencies, and to include the JavaScript web page used to visualize the collected profiling information in the source package. But I expect that should work out fairly soon.

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

coz run --- program-to-run

This will create a text file, profile.coz, with the instrumentation information. To show which parts of the code affect the performance most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out this web site to have a look at several example profiling runs and get an idea what the end result from the profile runs looks like. To make the profiling more useful, you include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the code, rebuild and run the profiler. This allows coz to do more targeted experiments.

A video published by ACM presenting the Coz profiler is available from Youtube. There is also a paper from the 25th Symposium on Operating Systems Principles available titled Coz: finding code that counts with causal profiling.

The source code for Coz is available from github. It will only build with clang, because it uses a C++ feature missing in GCC, but I've submitted a patch to solve it and hope it will be included in the upstream source soon.

Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.

5th August 2016

As my regular readers probably remember, last year I published French and Norwegian translations of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations, using docbook and po4a, I also recreated the English original. And because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project; I just earn the warm fuzzy feeling that the text is available for a wider audience and more people can learn why the Creative Commons is needed.

Today, just for fun, I had a look at the sales numbers over at Lulu.com, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and Norwegian editions, despite the fact that the English text has been freely available since it was first published. In total, 24 paper books were sold for USD $19.99 each between 2016-01-01 and 2016-07-31:

Title / language          Quantity
Culture Libre / French           3
Fri kultur / Norwegian           7
Free Culture / English          14

The books are available both from Lulu.com and from large book stores like Amazon and Barnes&Noble. Most revenue, around $10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu.com. The other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells me that the revenue sent so far this year is USD $101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10 year old book or not. But it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.

The ebook edition is available for free from Github.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

1st August 2016

Did you know there is a TV channel broadcasting talks from DebConf 16 across an entire country? Or that there is a TV channel broadcasting talks by or about Linus Torvalds, Tor, OpenID, Common Lisp, Civic Tech, EFF founder John Barlow, how to make 3D printer electronics and many more fascinating topics? It works using only free software (all of it available from Github), and is administrated using a web browser and a web API.

The TV channel is the Norwegian open channel Frikanalen, and I am involved via the NUUG member association in running and developing the software for the channel. The channel is organised as a member organisation where its members can upload and broadcast what they want (think of it as Youtube for national broadcast television). Individuals can broadcast too. The time slots are handled on a first come, first served basis. Because the channel has almost no viewers and very few active members, we can experiment with TV technology without too much flak when we make mistakes. And thanks to the few active members, most of the slots on the schedule are free. I see this as an opportunity to spread knowledge about technology and free software, and have a script I run regularly to fill up all the open slots for the next few days with technology related videos. The end result is a channel I like to describe as Techno TV - filled with interesting talks and presentations.

It is available on channel 50 on the Norwegian national digital TV network (RiksTV). It is also available as a multicast stream on Uninett. And finally, it is available as a WebM unicast stream from Frikanalen and NUUG. Check it out. :)

7th July 2016

Yesterday, I tried to unlock an HTC Desire HD phone, and it proved to be a slight challenge. Here is the recipe, in case I ever need to do it again. It all started with me wanting to try the recipe for setting up a hardened Android installation from the Tor project blog on a device I had access to. It is an old mobile phone with a broken microphone. The initial idea had been to just install CyanogenMod on it, but I did not quite find time to start on it until a few days ago.

The unlock process is supposed to be simple: (1) Boot into the boot loader (press volume down and power at the same time), (2) select 'fastboot' before (3) connecting the device via USB to a Linux machine, (4) request the device identifier token by running 'fastboot oem get_identifier_token', (5) request the device unlocking key using the HTC developer web site and unlock the phone using the key file emailed to you.

Unfortunately, this only works if you have hboot version 2.00.0029 or newer, and the device I was working on had 2.00.0027. This can apparently be easily fixed by downloading a Windows program and running it on a Windows machine, if you accept the terms Microsoft requires you to accept to use Windows - which I do not. So I had to come up with a different approach. I got a lot of help from AndyCap on #nuug, and would not have been able to get this working without him.

First I needed to extract the hboot firmware from the Windows binary for the HTC Desire HD, downloaded as 'the RUU' from HTC. For this there is a github project named unruu, using libunshield. The unshield tool did not recognise the file format, but unruu worked and extracted rom.zip, containing the new hboot firmware and a text file describing which devices it would work for.

Next, I needed to get the new firmware into the device. For this I followed some instructions available from HTC1Guru.com, and ran these commands as root on a Linux machine with Debian testing:

adb reboot-bootloader
fastboot oem rebootRUU
fastboot flash zip rom.zip
fastboot flash zip rom.zip
fastboot reboot

The flash command apparently needs to be run twice to take effect, as the first run is just preparation and the second one does the flashing. The adb command is just to get to the boot loader menu, so turning the device on while holding volume down and the power button should work too.

With the new hboot version in place I could start following the instructions on the HTC developer web site. I got the device token like this:

fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'

And once I got the unlock code via email, I could use it like this:

fastboot flash unlocktoken Unlock_code.bin

And with that final step in place, the phone was unlocked and I could start stuffing software of my own choosing into the device. So far I have only inserted a replacement recovery image to wipe the phone before I start. We will see what happens next. Perhaps I should install Debian on it. :)

3rd July 2016

For a while now, I have wanted to test the Signal app, as it is said to provide end-to-end encrypted communication and several of my friends and family already use it. As I by choice do not own a mobile phone, this proved to be harder than expected. I also wanted to have the source of the client, and to know that it was the code actually used on my machine. But yesterday I managed to get it working. I used the Github source, compared it to the source in the Signal Chrome app available from the Chrome web store, applied patches to use the production Signal servers, started the app and asked for the hidden "register without a smart phone" form. Here is the recipe for how I did it.

First, I fetched the Signal desktop source from Github, using

git clone https://github.com/WhisperSystems/Signal-Desktop.git

Next, I patched the source to use the production servers, to be able to talk to other Signal users:

cat <<EOF | patch -p0
diff -ur ./js/background.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js
--- ./js/background.js  2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js    2016-06-29 14:06:29.530300934 +0200
@@ -47,8 +47,8 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org:4433';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff -ur ./js/expire.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js
--- ./js/expire.js      2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js2016-06-29 14:06:29.530300934 +0200
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = 1474492690000;
 
     window.extension = window.extension || {};
 
EOF

The first part changes the servers, and the second updates an expiration timestamp. This timestamp needs to be updated regularly. It is set 90 days into the future by the build process (Gruntfile.js). The value is milliseconds since 1970, i.e. seconds since the epoch times 1000, as far as I can tell.
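
For reference, a fresh value 90 days into the future can be calculated from the command line like this:

echo $(( ($(date +%s) + 90 * 24 * 60 * 60) * 1000 ))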

Based on a tip and good help from the #nuug IRC channel, I wrote a script to launch Signal in Chromium.

#!/bin/sh
cd $(dirname $0)
mkdir -p userdata
exec chromium \
  --proxy-server="socks://localhost:9050" \
  --user-data-dir=`pwd`/userdata --load-and-launch-app=`pwd`

The script starts the app and configures Chromium to use the Tor SOCKS5 proxy, to make sure those controlling the Signal servers (today Amazon and Whisper Systems), as well as those listening on the lines, will have a harder time locating my laptop based on the Signal connections, if they rely on source IP addresses.

When the script starts, one needs to follow the instructions under "Standalone Registration" in the CONTRIBUTING.md file in the git repository. I right-clicked on the Signal window to bring up the Chromium debugging tool, visited the 'Console' tab and wrote 'extension.install("standalone")' at the console prompt to get the registration form. Then I entered my land line phone number and pressed 'Call'. Five seconds later the phone rang and a robot voice repeated the verification code three times. After entering the number into the verification code field in the form, I could start using Signal from my laptop.

As far as I can tell, the Signal app will leak who is talking to whom, and thus who knows whom, to those controlling the central server, but such leakage is hard to avoid with a centrally controlled server setup. It is something to keep in mind when using Signal - the content of your chats is harder to intercept, but the metadata exposing your contact network is available to people you do not know. So better than many options, but not great. And sadly the usage is connected to my land line, thus allowing those controlling the server to associate it with my home and person. I would prefer it if only those I knew could tell who I was on Signal. There are options avoiding such information leakage, but most of my friends are not using them, so I am stuck with Signal for now.

Update 2017-01-10: There is an updated blog post on this topic in Experience and updated recipe for using the Signal app without a mobile phone.

6th June 2016

When I set out a few weeks ago to figure out which multimedia player in Debian claimed to support the most file formats / MIME types, I was a bit surprised by how varied the sets of MIME types the various players claimed to support were. The range was from 55 to 130 MIME types. I suspect most media formats are supported by all players, but this is not really reflected in the MimeTypes values in their desktop files. There are probably also some bogus MIME types listed, but it is hard to identify which ones they are.

Anyway, in the meantime I got in touch with upstream for some of the players, suggesting they add more MIME types to their desktop files, and decided to spend some time myself improving the situation for my favorite media player, VLC. The fixes for VLC entered Debian unstable yesterday. The complete list of MIME types can be seen on the Multimedia player MIME type support status Debian wiki page.

The new "best" multimedia player in Debian? It is VLC, followed by totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and kmplayer. I am sure some of the other players' desktop files support several of the formats currently listed as working only with vlc, totem and parole.

A sad observation is that only 14 MIME types are listed as supported by all the tested multimedia players in Debian in their desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl, audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg, video/quicktime, video/vnd.rn-realvideo, video/x-matroska, video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find it sad that video/ogg and video/webm are not supported by all the media players in Debian. As far as I can tell, all of them can handle both formats.

5th June 2016

Many years ago, when koffice was fresh and had few users, I decided to test its presentation tool when making the slides for a talk I was giving for NUUG on Japhar, a free Java virtual machine. I wrote the first draft of the slides, saved the result and went to bed the day before I would give the talk. The next day I took a plane to the location where the meeting was to take place, and on the plane I started up koffice again to polish the talk a bit, only to discover that kpresenter refused to load its own data file. I cursed a bit and started making the slides again from memory, to have something to present when I arrived. I tested that the saved files could be loaded, and the day seemed to be rescued. I continued to polish the slides until I suddenly discovered that the saved file could no longer be loaded into kpresenter. In the end I had to rewrite the slides three times, condensing the content until the talk became shorter and shorter. After the talk I was able to pinpoint the problem – kpresenter wrote inline images in a way it could not itself understand. Eventually that bug was fixed and kpresenter ended up being a great program for making slides. The point I'm trying to make is that we expect a program to be able to load its own data files, and it is embarrassing to its developers if it can't.

Did you ever experience a program failing to load its own data files from the desktop file browser? It is not an uncommon problem. A while back I discovered that the screencast recorder gtk-recordmydesktop would save an Ogg Theora video file the KDE file browser would refuse to open. No video player claimed to understand such files. I tracked the cause down to file --mime-type returning the application/ogg MIME type, which no video player I had installed listed as a MIME type they would understand. I asked for file to change its behaviour and use the MIME type video/ogg instead. I also asked several video players to add video/ogg to their desktop files, to give the file browser an idea what to do about Ogg Theora files. After a while, the desktop file browsers in Debian started to handle the output from gtk-recordmydesktop properly.

But history repeats itself. A few days ago I tested the music system Rosegarden again, and discovered that the KDE and xfce file browsers did not know what to do with the Rosegarden project files (*.rg). I've reported the rosegarden problem to the BTS, and a fix is committed to git and will be included in the next upload. To increase the chance of me remembering how to fix the problem the next time some program fails to load its files from the file browser, here are some notes on how to fix it.

The file browsers in Debian generally operate on MIME types. There are two sources for the MIME type of a given file: the output from file --mime-type mentioned above, and the content of the shared MIME type registry (under /usr/share/mime/). The file MIME type is mapped to programs supporting the MIME type, and this information is collected from the desktop files available in /usr/share/applications/. If there is one desktop file claiming support for the MIME type of the file, it is activated when asking to open a given file. If there are more, one can normally select which one to use by right-clicking on the file and selecting the wanted one using 'Open with' or similar. In general this works well. But it depends on each program picking a good MIME type (preferably a MIME type registered with IANA), on file and/or the shared MIME registry recognizing the file, and on the desktop file listing the MIME type among its supported MIME types.

The /usr/share/mime/packages/rosegarden.xml entry for the Shared MIME database looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="audio/x-rosegarden">
    <sub-class-of type="application/x-gzip"/>
    <comment>Rosegarden project file</comment>
    <glob pattern="*.rg"/>
  </mime-type>
</mime-info>

This states that audio/x-rosegarden is a kind of application/x-gzip (it is a gzipped XML file). Note, it is much better to use an official MIME type registered with IANA than to make up one's own unofficial types, like the x-rosegarden type used by rosegarden.

The desktop file of the rosegarden program failed to list audio/x-rosegarden in its list of supported MIME types, causing the file browsers to have no idea what to do with *.rg files:

% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%

The fix was to add "audio/x-rosegarden;" at the end of the MimeType= line.

If you run into a file which fails to open in the correct program when selected from the file browser, please check the output from file --mime-type for the file, ensure the file ending and MIME type are registered somewhere under /usr/share/mime/, and check that some desktop file under /usr/share/applications/ claims support for this MIME type. If not, please report a bug to have it fixed. :)
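
Condensed into commands, with the rosegarden case as the example (example.rg is a placeholder file name), the three checks look something like this:

# 1: Which MIME type is detected for the file?
file --mime-type example.rg
# 2: Is the file ending registered in the shared MIME registry?
grep -rF '*.rg' /usr/share/mime/packages/
# 3: Does some desktop file claim support for the MIME type?
grep -l 'audio/x-rosegarden' /usr/share/applications/*.desktop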

Tags: debian, english.
28th May 2016

A little more than 11 years ago, one of the creators of Tor, and the current President of the Tor project, Roger Dingledine, gave a talk for the members of the Norwegian Unix User group (NUUG). A video of the talk was recorded, and today, thanks to great help from David Noble, I was finally able to publish the video of the talk on Frikanalen, the Norwegian open channel TV station where NUUG currently publishes its talks. You can watch the live stream using a web browser with WebM support, or check out the recording on the video on demand page for the talk "Tor: Anonymous communication for the US Department of Defence...and you.".

Here is the video included for those of you using browsers with HTML video and Ogg Theora support:

I guess the gist of the talk can be summarised quite simply: If you want to help the military in the USA (and everyone else), use Tor. :)

25th May 2016

The isenkram system is a user-focused solution in Debian for handling hardware related packages. The idea is to have a database of mappings between hardware and packages, and pop up a dialog suggesting that the user install the packages needed to use a given hardware dongle. Some use cases are when you insert a Yubikey, it proposes to install the software needed to control it; when you insert a braille reader, it proposes to install the packages needed to send text to the reader; and when you insert a ColorHug screen calibrator, it suggests to install the driver for it. The system works well, and even has a few command line tools to install firmware packages and packages for the hardware already in the machine (as opposed to hotpluggable hardware).

The system was initially written using aptdaemon, because I found good documentation and example code on how to use it. But aptdaemon is going away and is generally being replaced by PackageKit, so Isenkram needed a rewrite. And today, thanks to a great patch from my colleague Sunil Mohan Adapa in the FreedomBox project, the rewrite finally took place. I've just uploaded a new version of Isenkram into Debian Unstable with the patch included, and the default for the background daemon is now to use PackageKit. To check it out, install the isenkram package, insert some hardware dongle and see if it is recognised.

If you want to know what kind of packages isenkram would propose for the machine it is running on, you can check out the isenkram-lookup program. This is what it looks like on a Thinkpad X230:

% isenkram-lookup 
bluez
cheese
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tleds
tp-smapi-dkms
tp-smapi-source
tpb
%

The hardware mappings come from several places. The preferred way is for packages to announce their hardware support using the cross distribution appstream system. See previous blog posts about isenkram to learn how to do that.

23rd May 2016

Yesterday I updated the battery-stats package in Debian with a few patches sent to me by skilled and enterprising users. There were some nice user-visible changes. First of all, both desktop menu entries now work. A design flaw in one of the scripts made the history graph fail to show up (its PNG was dumped in ~/.xsession-errors) if no controlling TTY was available. The script worked when called from the command line, but not when called from the desktop menu. I changed this to look for a DISPLAY variable or a TTY before deciding where to draw the graph, and now the graph window pops up as expected.

The next new feature is a discharge rate estimator in one of the graphs (the one showing the last few hours). Also new is the use of colours, showing charging in blue and discharging in red. The percentages in this graph are relative to the last full charge, not battery design capacity.

The other graph shows the entire history of the collected battery statistics, comparing it to the design capacity of the battery to visualise how the battery lifetime gets shorter over time. The red line in this graph is what the previous graph considers 100 percent:

In this graph you can see that I only charge the battery to 80 percent of the last full capacity, and how the capacity of the battery is shrinking. :(

The last new feature is in the collector, which now handles more hardware models. On some hardware, Linux power supply information is stored in /sys/class/power_supply/ACAD/, while the collector previously only looked in /sys/class/power_supply/AC/. Now both are checked, to figure out whether power is connected to the machine.
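
A quick way to see what your own hardware calls its power supply entries, and whether power is currently connected (1 means online), is something like this:

ls /sys/class/power_supply/
grep . /sys/class/power_supply/*/online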

If you are interested in how your laptop battery is doing, please check out the battery-stats in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. Patches are very welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
21st May 2016

A few weeks ago the French paperback edition of Lawrence Lessig's 2004 book, Culture Libre, was published. Today I noticed that the book is now available from book stores. You can now buy it from Amazon ($19.99), Barnes & Noble ($?) and as always from Lulu.com ($19.99). The revenue is donated to the Creative Commons project. If you buy from Lulu.com, they currently get $10.59, while if you buy from one of the book stores, most of the revenue goes to the book store and the Creative Commons project gets much less (I am not sure how much less).

I was a bit surprised to discover that there is a Kindle edition sold by Amazon Digital Services LLC on Amazon. I am not quite sure how that edition was created, but if you want to download an electronic edition (PDF, EPUB, Mobi) generated from the same files used to create the paperback edition, they are available from github.

19th May 2016

I just donated to the NUUG defence fund to support the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone who agrees with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout, and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized (as in forced NORID to change the IP address it points to, to one controlled by the police) the DNS domain popcorn-time.no, without any supervision from the courts. I did not know about the web site back then, assumed the courts had been involved, and was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the rights holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and are now rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfers from those who want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

12th May 2016

Today, after many years of hard work from many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.

Tags: debian, english.
8th May 2016

Where I set out to figure out which multimedia player in Debian claims support for most file formats.

A few years ago, I had a look at the media support for browser plugins in Debian, to get an idea which plugins to include in Debian Edu. I created a script to extract the set of supported MIME types for each plugin, and used this to find out which multimedia browser plugin supported most file formats / media types. The result can still be seen on the Debian wiki, even though it has not been updated for a while. But browser plugins are less relevant these days, so I thought it was time to look at standalone players.

A few days ago I got tired of VLC not being listed as a viable player when I wanted to play videos from the Norwegian National Broadcasting Company, and decided to investigate why. The cause is a missing MIME type in the VLC desktop file. In the process I wrote a script to compare the set of MIME types announced in the desktop file with those in the browser plugin, only to discover that there is quite a large difference between the two for VLC. This discovery made me dig up the script I used to compare browser plugins, and adjust it to compare desktop files instead, to try to figure out which multimedia player in Debian supports the most file formats.
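
The core of the comparison is simple enough to show here. This pipeline lists the MIME types announced in a desktop file, one per line, with VLC as the example:

grep '^MimeType=' /usr/share/applications/vlc.desktop \
    | sed 's/^MimeType=//' | tr ';' '\n' | grep . | sort -u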

The result can be seen on the Debian Wiki, as a table listing all MIME types supported by one of the packages included in the table, with the package supporting most MIME types being listed first in the table.

The best multimedia player in Debian? It is totem, followed by parole, kplayer, mpv, vlc, smplayer, mplayer-gui, gnome-mpv and kmplayer. Time for the other players to update their announced MIME support?

4th May 2016
A friend of mine made me aware of The Pyra, a handheld computer which will be delivered with Debian preinstalled. I would love to get one of those for my birthday. :)

The machine is a complete ARM-based PC with micro HDMI, SATA, USB plugs and many other connectors, and includes a full keyboard and a 5" LCD touch screen. The 6000mAh battery is claimed to provide a whole day of battery life, but I have not seen any independent tests confirming this. The vendor is still collecting preorders, and the last I heard last night was that 22 more orders were needed before production starts.

As far as I know, this is the first handheld preinstalled with Debian. Please let me know if you know of any others. Is it the first computer being sold with Debian preinstalled?

Tags: debian, english.
18th April 2016

It is days like today I am really happy to be a member of the Norwegian Unix User group, a member association for those of us believing in free software, open standards and unix-like operating systems. NUUG announced today that it will try to get the seizure of the DNS domain popcorn-time.no declared unlawful, to stand up for the principle that writing about a controversial topic is not copyright infringement, and that censoring web pages by hijacking DNS domains should be decided by the courts, not the police. The DNS domain was seized by the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime a month ago. I hope this brings more paying members to NUUG, to give the association the financial muscle needed to take this case as far as it must go to stop this kind of DNS hijacking.

13th April 2016

I first got to know I.F. Stone when I came across an article by Jon Schwarz on The Intercept about his extraordinary contribution to investigative journalism in the USA. The article is about a new documentary in two parts (part one is 12 minutes and part two is 30 minutes), and I found both truly fascinating. It is amazing what he was able to find by digging through public sources and government papers. He documented lots of government abuse and cover-ups, and I find his weekly newsletters inspiring to read even today.

All governments are run by liars and nothing they say should be believed.
- I. F. Stone

His starting point was that reporters should not assume governments and corporations are telling the truth, but verify all their claims as much as possible. I wonder how many Norwegian reporters can be said to follow the principles of I. F. Stone. They are definitely in short supply. If you, like me half a year ago, have never heard of him, check him out.

12th April 2016

I'm happy to report that the French paperback edition of my project to translate the Free Culture book by Lawrence Lessig is now available for sale on Lulu.com. Once I have formally verified my proofreading copy, which should be in the mail, the paperback edition should become available in book stores like Amazon and Barnes & Noble too.

This French edition, Culture Libre, is the work of the dblatex developer Benoît Guillon, who created the PO file from the initial translation available from the Wikilivres wiki pages, completed and corrected the translation to match the original docbook edition my project is using, and coordinated the proofreading of the final result. I believe the end result looks great, but I am biased and do not read French. In addition to the paperback edition, the book is available in PDF, EPUB and Mobi formats from the github project page linked to above.

When enabling book store distribution on Lulu.com, I had to nearly triple the price to allow the book stores some profit. I also had to accept that I will receive some revenue when a book is sold via Lulu.com. But because of the non-commercial clause in the book license (CC-BY-NC), this might be a problem. To bypass the problem, I discussed how to handle the revenue with the author, and we agreed that the revenue for these editions goes to the Creative Commons non-profit Corporation, which handles donations to the Creative Commons project. So far they have earned around USD 70 on sales of the English and Norwegian Bokmål editions, according to Lulu.com. They will get the revenue for the French edition too. Their revenue is higher if you buy the book directly from Lulu.com instead of via a book store, so I recommend you buy directly from Lulu.com.

Perhaps you would like to get the book published in your language? The translation is done using a web based translator service, so the technical bar to enter is fairly low. Get in touch if you would like to make this happen.

10th April 2016

During this weekend's bug squashing party and developer gathering, we decided to do our part to make sure there are good books about Debian available in Norwegian Bokmål, and got in touch with the people behind the Debian Administrator's Handbook project to get started. If you want to help out, please start contributing using the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors.

The book is already available on paper in English, French and Japanese, and our goal is to get it available on paper in Norwegian Bokmål too. In addition to the paper edition, there are also EPUB and Mobi versions available. And there are incomplete translations available for many more languages.

7th April 2016

Just for fun I had a look at the popcon numbers for ZFS related packages in Debian, and was quite surprised by what I found. I use ZFS myself at home, but did not really expect many others to do so. But I might be wrong.

According to the popcon results for spl-linux, there are 1019 Debian installations, or 0.53% of the population, with the package installed. As far as I know, the only use of the spl-linux package is as a support library for ZFS on Linux, so I use it here as a proxy for measuring the number of ZFS on Linux installations in Debian. In the kFreeBSD variant of Debian, the ZFS feature is already available, and there the popcon results for zfsutils show 1625 Debian installations, or 0.84% of the population. So I guess I am not alone in using ZFS on Debian.

But even though the Debian project leader Lucas Nussbaum announced in April 2015 that the legal obstacles blocking ZFS on Debian were cleared, the package is still not in Debian. The package is again in the NEW queue. Several uploads have been rejected so far because the debian/copyright file was incomplete or wrong, but there is no reason to give up. The current status can be seen on the team status page, and the source code is available on Alioth.

As I want ZFS to be included in the next version of Debian, to make sure my home server can function in the future using only official Debian packages, and the current blocker is to get the debian/copyright file accepted by the FTP masters in Debian, I decided a while back to try to help out the team. This was the background for my blog post about creating, updating and checking debian/copyright semi-automatically, and I used the techniques I explored there to try to find any errors in the copyright file. It is not very easy to check every one of the around 2000 files in the source package, but I hope we got it right this time. If you want to help out, check out the git source and try to find missing entries in the debian/copyright file.

Tags: debian, english.
2nd April 2016

Two years ago, I had a look at the trusted timestamping options available, and among other things noted a still open bug in the tsget script included in openssl that made it harder than necessary to use openssl as a trusted timestamping client. A few days ago I was told the Norwegian government office DIFI is close to releasing its own trusted timestamp service, and in the process I was happy to learn about a replacement for the tsget script using only curl:

openssl ts -query -data "/etc/shells" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
         --data-binary "@-" http://zeitstempel.dfn.de > etc-shells.tsr
openssl ts -reply -text -in etc-shells.tsr

This produces a binary timestamp file (etc-shells.tsr) which can be used to verify that the content of the file /etc/shells, with the calculated sha256 hash, existed at the point in time when the request was made. The last command extracts the content of etc-shells.tsr in human readable form. The idea behind such a timestamp is to be able to prove, using cryptography, that the content of a file has not changed since the file was stamped.

To verify that the file on disk matches the public key signature in the timestamp file, run the following commands. They make sure you have the required certificate for the trusted timestamp service available and use it to compare the file content with the timestamp. In production, one should of course use a better method to verify the service certificate.

wget -O ca-cert.txt https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
openssl ts -verify -data /etc/shells -in etc-shells.tsr -CAfile ca-cert.txt -text

Wikipedia has a lot more information about trusted timestamping and linked timestamping, and there are several trusted timestamping services around, both as commercial services and as free and public services. Among the latter are the zeitstempel.dfn.de service mentioned above and the freetsa.org service linked to from the wikipedia web site. I believe the DIFI service should show up on https://tsa.difi.no, but it is not available to the public at the moment. I hope this will change when it goes into production. The RFC 3161 trusted timestamping protocol standard is even implemented in LibreOffice, Microsoft Office and Adobe Acrobat, making it possible to verify when a document was created.

I would find it useful to be able to use such a trusted timestamp service to make it possible to verify that my stored syslog files have not been tampered with. This is not a new idea. I found one example implemented on the Endian network appliances, where the configuration of such a feature was described in 2012.

But I could not find any free implementation of such a feature when I searched, so I decided to build a prototype named syslog-trusted-timestamp. My idea is to generate a timestamp of the old log files after they are rotated, and store the timestamp in the new log file just after rotation. This will form a chain that makes it possible to see if any old log files have been tampered with. But syslog is bad at handling kilobytes of binary data, so I decided to base64 encode the timestamp and add an ID and line sequence numbers to the base64 data, to make it possible to reassemble the timestamp file again. To use it, simply run it like this:

syslog-trusted-timestamp /path/to/list-of-log-files

This will send a timestamp from one or more timestamp services (not yet decided nor implemented) for each listed file to the syslog using logger(1). To verify a timestamp, the same program is used with the --verify option:

syslog-trusted-timestamp --verify /path/to/log-file /path/to/log-with-timestamp

The verification step is not yet well designed. The current implementation depends on the file path being unique and unchanging, and this is not a solid assumption. It also uses the process number as the timestamp ID, and this is bound to create ID collisions. I hope to have time to come up with a better way to handle timestamp IDs and verification later.
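
To illustrate the chunking idea, here is a minimal sketch of how a timestamp file can be base64 encoded and fed to syslog in numbered pieces. This is just the concept, not the actual implementation; the file name is a placeholder:

# Base64 encode the timestamp, wrap it into short lines and log
# each line with an ID (the process ID, as in the prototype) and
# a sequence number, so the file can be reassembled later.
id=$$
seq=0
base64 < old-log.tsr | fold -w 60 | while read line; do
    seq=$((seq + 1))
    logger -t syslog-trusted-timestamp "$id $seq $line"
done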

Please check out the prototype for syslog-trusted-timestamp on github and send suggestions and improvements, or let me know if a similar system for timestamping logs already exists, to allow me to join forces with others with the same interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, sikkerhet.
23rd March 2016

Since this morning, the battery-stats package in Debian includes an extended collector that will collect the complete battery history for later processing and graphing. The original collector stores the battery level as a percentage of the last full level, while the new collector also records battery vendor, model, serial number, design full level, last full level and current battery level. This makes it possible to predict the lifetime of the battery as well as visualise the energy flow when the battery is charging or discharging.

The new tools are available in /usr/share/battery-stats/ in the version 0.5.1 package in unstable. Get the new battery level graph and lifetime prediction by running:

/usr/share/battery-stats/battery-stats-graph /var/log/battery-stats.csv

Or select the 'Battery Level Graph' from your application menu.

The flow in/out of the battery can be seen by running (no menu entry yet):

/usr/share/battery-stats/battery-stats-graph-flow

I'm not quite happy with the way the data is visualised, at least when there are few data points. The graphs look a bit better with a few years of data.

A while back, one important feature I use in the battery stats collector broke in Debian. The scripts in /usr/lib/pm-utils/power.d/ were no longer executed. I suspect it happened when Jessie started using systemd, but I do not know. The issue is reported as bug #818649 against pm-utils. I managed to work around it by adding a udev rule to call the collector script every time the power connector is connected and disconnected. With this fix in place it was finally time to make a new release of the package, and get it into Debian.
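
For reference, here is a minimal sketch of such a udev rule based on the description above; the rule file name and the collector path are assumptions, not copied from the package:

cat > /etc/udev/rules.d/99-battery-stats.rules <<'EOF'
# Run the collector when the AC power supply changes state.
ACTION=="change", SUBSYSTEM=="power_supply", ATTR{type}=="Mains", RUN+="/usr/sbin/battery-stats-collector"
EOF
udevadm control --reload-rules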

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. As always, patches are very welcome.

Tags: debian, english.
19th March 2016

Back in 2013 I proposed a way to make paper and PDF invoices easier to process electronically by adding a QR code with the key information about the invoice. I suggested using the vCard field definitions to get a standard format for name and address, but any format would work. I did not do anything more about the proposal, but hoped someone would one day make something like it. It would make it possible to efficiently send machine readable invoices directly between seller and buyer.

This was the background when I came across a proposal and specification called UsingQR from the web based accounting and invoicing supplier Visma in Sweden. Their PDF invoices contain a QR code with the key information of the invoice in JSON format. This is the typical content of a QR code following the UsingQR specification (based on a real world example, with some numbers replaced to make it a more bogus entry). I've reformatted the JSON to make it easier to read; normally it is all on one long line:

{
 "vh":500.00,
 "vm":0,
 "vl":0,
 "uqr":1,
 "tp":1,
 "nme":"Din Leverandør",
 "cc":"NO",
 "cid":"997912345 MVA",
 "iref":"12300001",
 "idt":"20151022",
 "ddt":"20151105",
 "due":2500.0000,
 "cur":"NOK",
 "pt":"BBAN",
 "acc":"17202612345",
 "bc":"BIENNOK1",
 "adr":"0313 OSLO"
}

The interpretation of the fields can be found in the format specification (revision 2 from June 2014). The format seems to have most of the information needed to handle accounting and payment of invoices, at least the fields I have needed so far here in Norway.
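
As an illustration of how easy such codes are to process, here is a minimal sketch reading the QR code from an invoice image on the command line. It assumes the zbar-tools and jq packages are installed, and is not part of the specification:

zbarimg --quiet --raw invoice.png > invoice.json
jq -r '"Pay \(.due) \(.cur) to account \(.acc) by \(.ddt), ref \(.iref)"' invoice.json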

Unfortunately, the site and document do not mention anything about the patent, trademark and copyright status of the format and the specification. Because of this, I asked the people behind it back in November to clarify. Ann-Christine Savlid (ann-christine.savlid (at) visma.com) replied that Visma had not applied for patent or trademark protection for this format, and that there were no copyright based usage limitations for the format. I urged her to make sure this was explicitly written on the web pages and in the specification, but unfortunately this has not happened yet. So I guess if there are submarine patents, hidden trademarks or a will to sue for copyright infringement, those starting to use the UsingQR format might be at risk, but if this happens there is some legal defence in the fact that the people behind the format claimed it was safe to use. At least with patents, there is always a chance of getting sued...

I also asked if they planned to maintain the format in an independent standard organization, to give others more confidence that they would participate in the standardization process on equal terms with Visma, but they had no immediate plans for this. Their plan was to work with banks to try to get more users of the format, and evaluate the way forward if the format proved to be popular. I hope they conclude that an open standard organisation like the IETF is the correct place to maintain such a specification.

Update 2016-03-20: Via Twitter I became aware of some comments about this blog post with several useful links and references to similar systems. In the Czech Republic, the Czech Banking Association standard #26, with the short name SPAYD, uses QR codes with payment information. More information is available from the Wikipedia page on Short Payment Descriptor. And in Germany, there is a system named BezahlCode (specification v1.8 2013-12-05 available as PDF), which uses QR codes with URL-like formatting using "bank:" as the URI schema/protocol to provide the payment information. There is also the ZUGFeRD file format that perhaps could be transferred using QR codes, but I am not sure if this is done already. Last, in Bolivia there are reports that tax information since November 2014 needs to be printed in QR format on invoices. I have not been able to track down a specification for this format, due to my limited language skills.

Tags: english, standard.
15th March 2016

Back in September, I blogged about the system I wrote to collect statistics about my laptop battery, and how it showed the decay and death of this battery (now replaced). I created a simple deb package to handle the collection and graphing, but did not want to upload it to Debian, as there was already a battery-stats package in Debian that should do the same thing, and I did not see the point of uploading a competing package when battery-stats could be fixed instead. I reported a few bugs about its non-function, and hoped someone would step in and fix it. But no-one did.

I got tired of waiting a few days ago, and took matters into my own hands. The end result is that I am now the new upstream developer of battery-stats (available from github) and part of the team maintaining battery-stats in Debian, and the package in Debian unstable is finally able to collect battery status using the /sys/class/power_supply/ information provided by the Linux kernel. If you install the battery-stats package from unstable now, you will be able to get a graph of the current battery fill level, to get some idea about the status of the battery. The source package builds and works just fine in Debian testing and stable (and probably oldstable too, but I have not tested). The default graph you get for that system looks like this:

My plan for the future is to merge my old scripts into the battery-stats package, as they collected a lot more details about the battery. The scripts are merged into the upstream battery-stats git repository already, but I am not convinced they work yet, as I changed a lot of paths along the way. I will have to test a bit more before I make a new release.

I will also consider changing the file format slightly, as I suspect the way I combine several values into one field might make it impossible to know the type of the value when using it for processing and graphing.

If you would like to keep a close eye on your laptop battery, check out the battery-stats package in Debian and on github. I would love some help to improve the system further.

Tags: debian, english.
19th February 2016

Making packages for Debian requires quite a lot of attention to details. And one of the details is the content of the debian/copyright file, which should list all relevant licenses used by the code in the package in question, preferably in machine readable DEP5 format.

For large packages with lots of contributors it is hard to write and update this file manually, and if you get some detail wrong, the package is normally rejected by the ftpmasters. So getting it right the first time around gets the package into Debian faster, and saves both you and the ftpmasters some work. Today, while trying to figure out what was wrong with the zfsonlinux copyright file, I decided to spend some time on figuring out the options for doing this job automatically, or at least semi-automatically.

Luckily, there are at least two tools available for generating the file based on the code in the source package, debmake and cme. I'm not sure which one of them came first, but both seem to be able to create a sensible draft file. As far as I can tell, neither of them can be trusted to get the result just right, so the content needs to be polished a bit before the file is ready to upload. I found the debmake option in a blog post from 2014.

To generate using debmake, use the -cc option:

debmake -cc > debian/copyright

Note there are some problems with python and non-ASCII names, so this might not be the best option.

The cme option is based on a config parsing library, and I found this approach in a blog post from 2015. To generate using cme, use the 'update dpkg-copyright' option:

cme update dpkg-copyright

This will create or update debian/copyright. The cme tool seems to handle UTF-8 names better than debmake.

When the copyright file is created, I would also like some help to check if the file is correct. For this I found two good options, debmake -k and license-reconcile. The former seems to focus on license types and file matching, and is able to detect ineffective blocks in the copyright file. The latter reports missing copyright holders and years, but was confused by inconsistent license names (like CDDL vs. CDDL-1.0). I suspect it is good to use both, and to fix all issues reported by them before uploading. But I do not know if the tools and the ftpmasters agree on what is important to fix in a copyright file, so the package might still be rejected.
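
Both checkers are run from the top of the unpacked source directory; a minimal sketch of the checking step could look like this (the directory name is just a placeholder):

cd mypackage-1.0
debmake -k          # check license types against debian/copyright
license-reconcile   # report missing copyright holders and years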

The devscripts tool licensecheck deserves mentioning too. It will read through the source and try to find all copyright statements. It does not compare the result to the content of debian/copyright, but it can be useful when verifying the content of the copyright file.

Are you aware of better tools in Debian to create and update the debian/copyright file? Please let me know, or blog about it on planet.debian.org.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2016-02-20: I got a tip from Mike Gabriel on how to use licensecheck and cdbs to create a draft copyright file:

licensecheck --copyright -r `find * -type f` | \
  /usr/lib/cdbs/licensecheck2dep5 > debian/copyright.auto

He mentioned that he normally checks the generated file into the version control system to make it easier to discover license and copyright changes in the upstream source. I will try to do the same with my packages in the future.

Update 2016-02-21: The cme author recommended against using -quiet for new users, so I removed it from the proposed command line.

Tags: debian, english.
4th February 2016

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package providing a given firmware file. In this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during module loading. To find the package providing the example file, do this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

The same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, then look up the packages providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%
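
The two steps can be combined into one small shell pipeline. A minimal sketch, where the file name is just a placeholder:

f=mystery-file
appstreamcli what-provides mimetype "$(file --brief --mime-type "$f")" | \
  awk '/Package:/ {print $2}'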

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.

Tags: debian, english.
24th January 2016

Most people seem not to realise that every time they walk around with the computerised radio beacon known as a mobile phone, their position is tracked by the phone company and often stored for a long time (like every time an SMS is received or sent). And if their computerised radio beacon is capable of running programs (often called mobile apps) downloaded from the Internet, these programs are often also capable of tracking their location (if the app requested access during installation). And when these programs send out information to central collection points, the location is often included, unless extra care is taken not to send it. The provided information is used by several entities, for good and bad (what is good and bad depends on your point of view). What is certain is that the private sphere and the right to free movement are challenged, and perhaps even eradicated, for those announcing their location this way, sharing their whereabouts with private and public entities.

The phone company logs provide a register of locations to check out when one wants to figure out what the tracked person was doing. It is unavailable to most of us, but provided to selected government officials, company staff, those illegally buying information from unfaithful servants, and crackers stealing the information. But the public information can be collected and analysed, and a free software tool to do so is called Creepy or Cree.py. I discovered it when I read an article about Creepy in the Norwegian newspaper Aftenposten in November 2014, and decided to check if it was available in Debian. The Python program was in Debian, but the version there was completely broken and practically unmaintained. I uploaded a new version which did not work quite right, but did not have time to fix it then. This Christmas I decided to finally try to get Creepy operational in Debian. Now a fixed version is available in Debian unstable and testing, and almost all Debian specific patches are now included upstream.

The Creepy program visualises geolocation information fetched from Twitter, Instagram, Flickr and Google+, and allows one to get a complete picture of every social media message posted recently in a given area, or to track the movement of a given individual across all these services. Earlier it was possible to use the search APIs of at least some of these services without identifying oneself, but these days it is impossible. This means that to use Creepy, you need to configure it to log in as yourself on these services, and thus provide information to them about your search interests. This should be taken into account when using Creepy, as it will also share information about yourself with the services.

The picture above shows the Twitter messages sent from (or at least geotagged with a position from) the city centre of Oslo, the capital of Norway. One useful way to use Creepy is to first look at information tagged with an area of interest, and next look at all the information provided by one or more individuals who were in the area. I tested it by checking which celebrities provide their location in Twitter messages: I looked at who sent Twitter messages near a Norwegian TV station, and could then track their positions over time, making it possible to locate their homes and work places, among other things. A similar technique has been used to locate Russian soldiers in Ukraine, and it is both a powerful tool to expose lying governments, and a useful tool to help people understand the value of the private information they provide to the public.

The package is not trivial to backport to Debian stable/Jessie, as it depends on several Python modules currently missing in Jessie (at least python-instagram, python-flickrapi and python-requests-toolbelt).

(I have uploaded the image to screenshots.debian.net and licensed it under the same terms as the Creepy program in Debian.)

15th January 2016

During his DebConf15 keynote, Jacob Appelbaum observed that those listening on the Internet lines would have good reason to believe a computer has a given security hole if they see it download a security fix from a Debian mirror. This is a good reason to always use encrypted connections to the Debian mirror, to make sure those listening do not know which IP address to attack. In August, Richard Hartmann observed that encryption was not enough, as one can still infer what was downloaded from the size of the download, or from the fact that a download took place shortly after a security fix was released, and proposed to always use Tor to download packages from the Debian mirror. He was not the first to propose this, as the apt-transport-tor package by Tim Retout already existed to make it easy to convince apt to use Tor, but I was not aware of that package when I read the blog post from Richard.

Richard discussed the idea with Peter Palfrader, one of the Debian sysadmins, and he set up a Tor hidden service on one of the central Debian mirrors using the address vwakviie2ienjx6t.onion, thus making it possible to download packages directly between two Tor nodes, ensuring the network traffic is always encrypted.

Here is a short recipe for enabling this on your machine: install apt-transport-tor, replace http and https URLs with tor+http and tor+https, and use the hidden service instead of the official Debian mirror site. I recommend installing etckeeper before you start, to have a history of the changes done in /etc/.

apt install apt-transport-tor
sed -i 's% http://ftp.debian.org/% tor+http://vwakviie2ienjx6t.onion/%' /etc/apt/sources.list
sed -i 's% http% tor+http%' /etc/apt/sources.list

If you have more sources listed in /etc/apt/sources.list.d/, run the sed commands for these too, as sketched below. The sed commands assume you are using the ftp.debian.org Debian mirror. Adjust them (or just edit the files manually) to match your mirror.
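
A minimal sketch applying the same rewrite to the extra files, assuming they use the common .list suffix:

for f in /etc/apt/sources.list.d/*.list; do
    [ -e "$f" ] || continue
    sed -i 's% http://ftp.debian.org/% tor+http://vwakviie2ienjx6t.onion/%' "$f"
    sed -i 's% http% tor+http%' "$f"
done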

This works in Debian Jessie and later. Note that tools like apt-file only recently started using the apt transport system, and do not work with these tor+http URLs. For apt-file you need the version currently in experimental, which needs a recent apt version currently only in unstable. So if you need a working apt-file, this is not for you.

Another advantage of this change is that your machine will start using Tor regularly and at fairly random intervals (every time you update the package lists or upgrade or install a new package), thus masking other Tor traffic done from the same machine. Using Tor will become normal for the machine in question.

On Freedombox, APT is set up by default to use apt-transport-tor when Tor is enabled. It would be great if it were the default on any Debian system.

23rd December 2015

When I was a kid, we used to collect "car numbers", as we called the car license plate numbers in those days. I would write the numbers down in my little book and compare notes with the other kids to see how many region codes we had seen, and if we had seen some exotic or special region codes and numbers. It was a fun game to pass the time, as we kids had plenty of it.

A few days ago I came across the OpenALPR project, a free software project to automatically discover and report license plates in images and video streams, providing the "car numbers" in a machine readable format. I've been looking for such a system for a while now, because I believe it is a bad idea that automatic number plate recognition tools are only available in the hands of the powerful, and want them to be available also for the powerless, to even the score when it comes to surveillance and sousveillance. I discovered that the developer wanted to get the tool into Debian, and as I too wanted it to be in Debian, I volunteered to help him get it into shape to get the package uploaded into the Debian archive.

Today we finally managed to get the package into shape and uploaded it into Debian, where it currently waits in the NEW queue for review by the Debian ftpmasters.

I guess you are wondering why on earth such a tool would be useful for the common folks, i.e. those not running a large government surveillance system? Well, I plan to put it in a computer on my bike and in my car, tracking the cars nearby and allowing me to be notified when number plates on my watch list are discovered. Another use case was suggested by a friend of mine, who wanted to set it up at his home to open the car port automatically when it discovered the plate on his car. When I mentioned that it was perhaps a bit foolhardy to let anyone capable of putting his license plate number on a piece of cardboard open his car port, he replied that it was always unlocked anyway. I guess for such a use case it makes sense. I am sure there are other use cases too, for those with imagination and a vision.

If you want to build your own version of the Debian package, check out the upstream git source and symlink ./distros/debian to ./debian/ before running "debuild" to build the source, as sketched below. Or wait a bit until the package shows up in unstable.
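
A minimal sketch of those build steps; the git URL is my assumption, so check the project page for the authoritative location:

git clone https://github.com/openalpr/openalpr.git
cd openalpr
ln -s distros/debian debian
debuild -us -uc   # build without signing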

20th December 2015

Around three years ago, I created the isenkram system to get a more practical solution in Debian for handling hardware related packages. A GUI system in the isenkram package will present a pop-up dialog when a hardware dongle supported by relevant packages in Debian is inserted into the machine. The same lookup mechanism to detect packages is available as command line tools in the isenkram-cli package. In addition to mapping hardware, it will also map kernel firmware files to packages and make it easy to install needed firmware packages automatically. The key for this system to work is a good way to map hardware to packages, in other words, a way to allow packages to announce what hardware they will work with.

I started by providing data files in the isenkram source, and adding code to download the latest version of these data files at run time, to ensure every user had the most up to date mapping available. I also added support for storing the mapping in the Packages file in the apt repositories, but did not push this approach, because while I was trying to figure out how to best store hardware/package mappings, the appstream system was announced. I got in touch and suggested adding the hardware mapping to that data set, to be able to use appstream as a data source, and this was accepted, at least for the Debian version of appstream.

A few days ago using appstream in Debian for this became possible, and today I uploaded a new version 0.20 of isenkram adding support for appstream as a data source for mapping hardware to packages. The only package so far using appstream to announce its hardware support is my pymissile package. I got help from Matthias Klumpp with figuring out how to add the required metadata in pymissile. I added a file debian/pymissile.metainfo.xml with this content:

<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>pymissile</id>
  <metadata_license>MIT</metadata_license>
  <name>pymissile</name>
  <summary>Control original Striker USB Missile Launcher</summary>
  <description>
    <p>
      Pymissile provides a curses interface to control an original
      Marks and Spencer / Striker USB Missile Launcher, as well as a
      motion control script to allow a webcamera to control the
      launcher.
    </p>
  </description>
  <provides>
    <modalias>usb:v1130p0202d*</modalias>
  </provides>
</component>

The key for isenkram is the component/provides/modalias value, which is a glob style match rule for hardware specific strings (modalias strings) provided by the Linux kernel. In this case, it will map to all USB devices with vendor code 1130 and product code 0202.
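
To see the glob matching in action, here is a minimal sketch using plain shell pattern matching; the modalias string is made up for the illustration:

modalias="usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00"
case "$modalias" in
    usb:v1130p0202d*) echo "matches the pymissile hardware rule" ;;
esac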

Note, it is important that the licenses of all the metadata files are compatible, to have permission to aggregate them into archive-wide appstream files. Matthias suggested using the MIT or BSD licenses for these files. A challenge is figuring out a good id for the data, as it is supposed to be globally unique and shared across distributions (in other words, it is best to coordinate with upstream what to use). But it can be changed later, so we went with the package name, as upstream for this project is dormant.

To get the metadata file installed in the correct location for the mirror update scripts to pick it up and include its content in the appstream data source, the file must be installed in the binary package under /usr/share/appdata/. I did this by adding the following line to debian/pymissile.install:

debian/pymissile.metainfo.xml usr/share/appdata

With that in place, the command line tool isenkram-lookup will automatically list all packages useful on the current computer, and the GUI pop-up handler will propose to install any packages not already installed when a hardware dongle is inserted into the machine in question.

Details of the modalias field in appstream are available from the DEP-11 proposal.

To locate the modalias values of all hardware present in a machine, try running this command on the command line:

cat $(find /sys/devices/|grep modalias)

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

30th November 2015

A blog post from my fellow Debian developer Paul Wise titled "The GPL is not magic pixie dust" explains the importance of making sure the GPL is enforced. I quote the blog post from Paul in full here with his permission:

Become a Software Freedom Conservancy Supporter!

The GPL is not magic pixie dust. It does not work by itself.
The first step is to choose a copyleft license for your code.
The next step is, when someone fails to follow that copyleft license, it must be enforced
and its a simple fact of our modern society that such type of work
is incredibly expensive to do and incredibly difficult to do.

-- Bradley Kuhn, in FaiF episode 0x57

As the Debian Website used to imply, public domain and permissively licensed software can lead to the production of more proprietary software as people discover useful software, extend it and or incorporate it into their hardware or software products. Copyleft licenses such as the GNU GPL were created to close off this avenue to the production of proprietary software but such licenses are not enough. With the ongoing adoption of Free Software by individuals and groups, inevitably the community's expectations of license compliance are violated, usually out of ignorance of the way Free Software works, but not always. As Karen and Bradley explained in FaiF episode 0x57, copyleft is nothing if no-one is willing and able to stand up in court to protect it. The reality of today's world is that legal representation is expensive, difficult and time consuming. With gpl-violations.org in hiatus until some time in 2016, the Software Freedom Conservancy (a tax-exempt charity) is the major defender of the Linux project, Debian and other groups against GPL violations. In March the SFC supported a lawsuit by Christoph Hellwig against VMware for refusing to comply with the GPL in relation to their use of parts of the Linux kernel. Since then two of their sponsors pulled corporate funding and conferences blocked or cancelled their talks. As a result they have decided to rely less on corporate funding and more on the broad community of individuals who support Free Software and copyleft. So the SFC has launched a campaign to create a community of folks who stand up for copyleft and the GPL by supporting their work on promoting and supporting copyleft and Free Software.

If you support Free Software, like what the SFC do, agree with their compliance principles, are happy about their successes in 2015, work on a project that is an SFC member and or just want to stand up for copyleft, please join Christopher Allan Webber, Carol Smith, Jono Bacon, myself and others in becoming a supporter. For the next week your donation will be matched by an anonymous donor. Please also consider asking your employer to match your donation or become a sponsor of SFC. Don't forget to spread the word about your support for SFC via email, your blog and or social media accounts.

I agree with Paul on this topic and just signed up as a Supporter of Software Freedom Conservancy myself. Perhaps you should be a supporter too?

17th November 2015

I've needed a new OpenPGP key for a while, but have not had time to set it up properly. I wanted to generate it offline and have it available on an OpenPGP smart card for daily use, and learning how to do it and finding time to sit down with an offline machine almost took forever. But finally I've been able to complete the process, and have now moved from my old GPG key to a new GPG key. See the full transition statement, signed with both my old and new key, for the details. This is my new key:

pub   3936R/111D6B29EE4E02F9 2015-11-03 [expires: 2019-11-14]
      Key fingerprint = 3AC7 B2E3 ACA5 DF87 78F1  D827 111D 6B29 EE4E 02F9
uid                  Petter Reinholdtsen <pere@hungry.com>
uid                  Petter Reinholdtsen <pere@debian.org>
sub   4096R/87BAFB0E 2015-11-03 [expires: 2019-11-02]
sub   4096R/F91E6DE9 2015-11-03 [expires: 2019-11-02]
sub   4096R/A0439BAB 2015-11-03 [expires: 2019-11-02]

The key can be downloaded from the OpenPGP key servers, signed by my old key.

If you signed my old key (DB4CCC4B2A30D729), I'd very much appreciate a signature on my new key; details and instructions are in the transition statement. I'm happy to reciprocate if you have a similarly signed transition statement to present.

3rd November 2015

In Norway, all government offices are required by law to keep a list of every document or letter arriving at and leaving their offices. Internal notes should also be documented. The document list (called a mail journal - "postjournal" in Norwegian) is public information, and thanks to the Norwegian Freedom of Information Act (Offentleglova) the mail journal is available to everyone. Most offices even publish the mail journal on their web pages, as PDFs or as tables in web pages. The state-level offices even have a shared web based search service (called Offentlig Elektronisk Postjournal - OEP) to make it possible to search the entries in the list. Not all journal entries show up on OEP, and the search service is hard to use, but OEP does make it easier to find at least some interesting journal entries.

In 2012 I came across a document in the mail journal of the Norwegian Ministry of Transport and Communications on OEP that piqued my interest. The title of the document was "Internet Governance and how it affects national security" (Norwegian: "Internet Governance og påvirkning på nasjonal sikkerhet"). The document date was 2012-05-22, and it was said to be sent from the "Permanent Mission of Norway to the United Nations". I asked for a copy, but my request was rejected with a reference to a legal clause said to authorize the rejection (offentleglova § 20, letter c) and an explanation that the document was exempt because of foreign policy interests, as it contained information related to the Norwegian negotiating position, negotiating strategies or similar. I was told the information in the document related to the ongoing negotiation in the International Telecommunications Union (ITU). The explanation made sense to me in early January 2013, as an ITU conference in Dubai discussing Internet Governance (World Conference on International Telecommunications - WCIT-12) had just ended, reportedly in chaos when the USA walked out of the negotiations and 25 countries including Norway refused to sign the new treaty. It seemed reasonable to believe talks were still going on a few weeks later. Norway was represented at the ITU meeting by two authorities, the Norwegian Communications Authority and the Ministry of Transport and Communications. This might be the reason the letter was sent to the ministry. As I was unable to find the document in the mail journal of any Norwegian UN mission, I asked the ministry who had sent the document to it, and was told that it was the Deputy Permanent Representative with the Permanent Mission of Norway in Geneva.

Three years later, I was still curious about the content of that document, and again asked for a copy, believing the negotiation was over by now. This time I asked both the Ministry of Transport and Communications as the receiver and the Permanent Mission of Norway in Geneva as the sender for a copy, to see if they both agreed that it should be withheld from the public. The ministry upheld its rejection, quoting the same law reference as before, while the permanent mission rejected it quoting a different clause (offentleglova § 20 letter b), claiming that they were required to keep the content of the document from the public because it contained information given to Norway with the expressed or implied expectation that the information should not be made public. I asked the permanent mission for an explanation, and was told that the document contained an account from a meeting held in the Pentagon for a limited group of NATO nations, where the organiser of the meeting did not intend the content of the meeting to be publicly known. They explained that giving me a copy might cause Norway to not get access to similar information in the future, and thus hurt the future foreign interests of Norway. They also explained that the Permanent Mission of Norway in Geneva was not the author of the document; they only got a copy of it, and because of this had not listed it in their mail journal.

Armed with this knowledge I asked the Ministry to reconsider and asked who was the author of the document, now realising that the author was not the same as the "sender" according to the Ministry of Transport and Communications. The ministry upheld its rejection but told me the name of the author. According to a government report the author was with the Permanent Mission of Norway in New York a bit more than a year later (2014-09-22), so I guessed that might be the office responsible for writing and sending the report initially and asked them for a copy, but I was obviously wrong, as I was told that the document was unknown to them and that the author did not work there when the document was written. Next, I asked the Permanent Mission of Norway in Geneva and the Foreign Ministry to reconsider and at least tell me who sent the document to the Deputy Permanent Representative with the Permanent Mission of Norway in Geneva. The Foreign Ministry also upheld its rejection, but told me that the person sending the document to the Permanent Mission of Norway in Geneva was the defence attaché with the Norwegian Embassy in Washington. I do not know if this is the same person as the author of the document.

If I understand the situation correctly, someone capable of inviting selected NATO nations to a meeting in the Pentagon organised a meeting attended by someone representing the Norwegian defence attaché in Washington, and the account from this meeting is interpreted by the Ministry of Transport and Communications to expose Norway's negotiating position, negotiating strategies or similar regarding the ITU negotiations on Internet Governance. It is truly amazing what can be derived from mere meta-data.

I wonder which NATO countries besides Norway attended this meeting? And what exactly was said and done at the meeting? Anyone know?

31st October 2015

People keep asking me where to get the various forms of the book I published last week, the Norwegian Bokmål edition of Lawrence Lessig's book Free Culture. It was published on paper via lulu.com, and is also available in PDF, ePub and MOBI formats. I currently sell the paper edition at cost from lulu.com, but might extend the distribution to book stores like Amazon and Barnes & Noble later. This will double the price and force me to make a profit from selling the book. Anyway, here are links to get the book in different formats:

Note that the MOBI version has problems with the table of contents, at least with the viewers I have been able to test. And the ePub file has several problems according to epubcheck, but seems to display fine in the viewers I have tested. All the files needed to create the book in various forms are available from the github project page.

The project got press coverage from the Norwegian IT news site digi.no. Check out the article "Vil åpne politikernes øyne for Creative Commons".

I've blogged about the project as it moved along. The blogs document the translation progress and insights I had along the way.

23rd October 2015

Click here to buy the book.

In 2004, as the Creative Commons movement gained momentum, its creator Lawrence Lessig wrote the book Free Culture to explain the problems with increasing copyright regulation and suggest some solutions. I read the book back then and was very moved by it. Reading the book inspired me and changed the way I look at copyright law, and I would love it if more people would read it too.

Because of this, I decided in the summer of 2012 to translate it to Norwegian Bokmål and publish it for those of my friends and family who prefer to read books in Norwegian. I translated the book using docbook and a gettext PO file, and a byproduct of this process is a new edition of the English original. I've been in touch with the author during my work, and he said it was fine with him if I also published an English version. So I decided to do so. Today, I made this edition available for sale on Lulu.com, for those interested in a paper book. This is the cover:

The Norwegian Bokmål version will be available for purchase in a few days. I also plan to publish a French version in a few weeks or months, depending on how many people with knowledge of French join the translation project. So far there is only one active person, and while the French book is almost completely translated, it needs some proof reading.

The book is also available in PDF, ePub and MOBI formats from my github project page. Note the ePub and MOBI versions have some formatting problems I believe are due to bugs in the docbook tool dbtoepub (Debian BTS issues #795842 and #796871), but I have not taken the time to investigate. I recommend the PDF and ePub versions for now, as they seem to show up fine in the viewers I have available.

After the translation to Norwegian Bokmål was complete, I was able to secure some sponsorship from the NUUG Foundation to print the book. This is the reason their logo is located on the back cover. I am very grateful for their contribution, and will use it to give a copy of the Norwegian edition to members of the Norwegian Parliament and other decision makers here in Norway.

19th October 2015

Last year, US presidential candidate in the Democratic Party Lawrence Lessig interviewed Edward Snowden. The one hour interview was published by Harvard Law School 2014-10-23 on Youtube, and the meeting took place 2014-10-20.

The questions are very good, there is lots of useful information to be learned, and very interesting issues to think about are raised. Please check it out.

I find it especially interesting to hear again that Snowden did try to bring up his reservations through the official channels without any luck. It is in sharp contrast to the answers given 2013-11-06 by the Norwegian prime minister Erna Solberg to the Norwegian Parliament, claiming Snowden is no whistle-blower because he should have raised his concerns internally using official channels. It makes me sad that this is the political leadership we have here in Norway.

8th October 2015

The movie "The Internet's Own Boy: The Story of Aaron Swartz" is both inspiring and depressing at the same time. The work of Aaron Swartz has inspired me in my work, and I am grateful for all the improvements he was able to initiate or complete. I hope I am able to do as much good in my life as he did in his. Every minute of this 1 hour 45 minute long movie is inspiring in documenting how much impact a single person can have on improving society and this world. And it is depressing in documenting how law enforcement in the USA (and other countries) is corrupted to the point where they can push a bright kid to his death for downloading too many scientific articles. Aaron is dead. Let us all weep.

The movie is also available on Youtube. I wish there were Norwegian subtitles available, so I could show it to my parents.

1st October 2015

As I wrap up the Norwegian version of the Free Culture book by Lawrence Lessig (still waiting for my final proof reading copy to arrive in the mail), my great dblatex helper and developer of the dblatex docbook processor, Benoît Guillon, decided to try to create a French version of the book. He started with the French translation available from the Wikilivres wiki pages, and wrote a program to convert it into a PO file, allowing the translation to be integrated into the po4a based framework I use to create the Norwegian translation from the English edition. We meet on the #dblatex IRC channel to discuss the work. If you want to help create a French edition, check out his git repository and join us on IRC. If the French edition looks good, we might publish it as a paper book on lulu.com. A French version of the drawings and the cover needs to be provided for this to happen.

24th September 2015

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail.

First I tried to find a sensible Debian package to record the battery status, assuming this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, which is not available in Debian.

I started my collector 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of its original capacity. My collector shell script is quite simple and looks like this:

#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/
logfile=/var/log/hjemmenett-battery-status.log

files="manufacturer model_name technology serial_number \
    energy_full energy_full_design energy_now cycle_count status"

if [ ! -e "$logfile" ] ; then
    (
	printf "timestamp,"
	for f in $files; do
	    printf "%s," $f
	done
	echo
    ) > "$logfile"
fi

log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
	for f in $files; do \
	    printf "%s," $(cat $f); \
	done)
    echo "$msg"
}

cd /sys/class/power_supply

for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done

The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging or charging, and how the maximum charge changes over time. The code for the Debian package is now available on github.
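
The 10 minute collection can be done with a simple cron job. A minimal sketch, where the collector path is a guess and not taken from the package:

echo '*/10 * * * * root /usr/sbin/battery-stats-collector' \
    > /etc/cron.d/battery-stats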

The collected log file looks like this:

timestamp,manufacturer,model_name,technology,serial_number,energy_full,energy_full_design,energy_now,cycle_count,status,
1376591133,LGC,45N1025,Li-ion,974,62800000,62160000,39050000,0,Discharging,
[...]
1443090528,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
1443090601,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,

I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery.
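
My graphing script is in the package, but as a minimal sketch of the idea, a gnuplot run like this (column numbers matching the CSV header above) would plot the relative capacity over time:

gnuplot <<'EOF'
set datafile separator ","
set xdata time
set timefmt "%s"
set format x "%Y-%m"
set ylabel "last full level / design full level"
plot "/var/log/hjemmenett-battery-status.log" every ::1 \
     using 1:($6/$7) with lines title "relative capacity"
EOF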

But why is this happening? Why are my laptop batteries always dying in a year or two, while the batteries of space probes and satellites keep working year after year? If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like a Tesla where the right to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too.

Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on Thinkpad to 80%, but could not get it to work (kernel module refused to load).

I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" increases some times, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speeds change over time, or if they stay the same. I have not yet tried to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those.

Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp packages (unfortunately missing in Debian stable) instead of the tp-smapi-dkms package I had tried to use initially, and to use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad specific.

Tags: debian, english.
3rd September 2015

Creating a good looking book cover proved harder than I expected. I wanted to create a cover looking similar to the original cover of the Free Culture book we are translating to Norwegian, and I wanted it in vector format for high resolution printing. But my inkscape knowledge was not nearly good enough to pull that off.

But thanks to the great inkscape community, I was able to wrap up the cover yesterday evening. I asked on the #inkscape IRC channel on Freenode for help and clues, and Marc Jeanmougin (Mc-) volunteered to try to recreate it based on the PDF of the cover from the HTML version. Not only did he create an SVG document with the original and his vector version side by side, he even provided an instruction video explaining how he did it. But the instruction video is not easy to follow for an untrained inkscape user. The video is a recording of how he did it, and he is obviously very experienced, as the menu selections are very quick, and he mentioned on IRC that he used some keyboard shortcuts that can't be seen in the video. But it gives a good idea of the inkscape operations to use to create the stripes with the embossed copyright sign in the center.

I took his SVG file, copied the vector image and re-sized it to fit the cover I was drawing. I am happy with the end result, and the current English version looks like this:

I am not quite sure about the text on the back, but guess it will do. I picked three quotes from the official site for the book, and hope they will trigger the interest of potential readers. The Norwegian cover will look the same, but with the texts and bar code replaced with the Norwegian versions.

The book is very close to being ready for publication, and I expect to upload the final draft to Lulu in the next few days and order a final proof reading copy to verify that everything looks like it should before allowing everyone to order their own copy of Free Culture, in English or Norwegian Bokmål. I'm waiting to give the productive proof readers a chance to complete their work.

19th August 2015

Today, finally, my first printed draft edition of the Norwegian translation of Free Culture I have been working on for the last few years arrived in the mail. I had to fake a cover to get the interior printed, and the exterior of the book looks awful, but that is irrelevant at this point. I asked for a printed pocket book version to get an idea about the font sizes and paper format, as well as how good the figures and images look in print, but also to test what the pocket book version would look like. After receiving the 500 page pocket book, it became obvious to me that the pocket book size is too small for this book. I believe the book is too thick, and several tables and figures do not look good at the size they get with such a small page size. I believe I will go with the 5.5x8.5 inch size instead. A surprise discovery from the paper version was how bad the URLs look in print. They are very hard to read on the colophon page. The URLs are red in the PDF, but light gray on paper. I need to change the color of links somehow to make them look better. But there is a printed book in my hand, and it feels great. :)

Now I only need to fix the cover, wrap up the postscript with the story behind the book, and collect the last corrections from the proof readers before the book is ready for proper printing. Cover artists willing to work for free and create a Creative Commons licensed vector file looking similar to the original are most welcome, as my skills as a graphics designer are mostly missing.

9th August 2015

Typesetting a book is harder than I hoped. As the translation is mostly done, and a volunteer proof reader was going to check the text on paper, it was time this summer to focus on formatting my translated docbook based version of the Free Culture book by Lawrence Lessig. I've been trying to get both docbook-xsl+fop and dblatex to give me a good looking PDF, but in the end I went with dblatex, because its Debian maintainer and upstream developer were responsive and very helpful in solving my formatting challenges.

Last night, I finally managed to create a PDF that no longer made Lulu.com complain after uploading, and I ordered a text version of the book on paper. It is lacking a proper book cover and is not tagged with the correct ISBN number, but should give me an idea of what the finished book will look like.

Instead of using Lulu, I did consider printing the book using CreateSpace, but ended up using Lulu because it had smaller book size options (CreateSpace seems to lack a pocket book option with extended distribution). I looked for a similar service in Norway, but have not found anything so far. Please let me know if I am missing out on something here.

But I still struggle to decide the book size. Should I go for pocket book (4.25x6.875 inches / 10.8x17.5 cm) with 556 pages, Digest (5.5x8.5 inches / 14x21.6 cm) with 323 pages, or US Trade (6x9 inches / 15.3x22.9 cm) with 280 pages? Fewer pages give a cheaper book, and a smaller book is easier to carry around. The test book I ordered was pocket book sized, to give me an idea how well that fits in my hand, but I suspect I will end up using a digest sized book in the end, to bring the price down further.

My biggest challenge at the moment is making nice cover art. My inkscape skills are not yet up to the task of replicating the original cover in SVG format. I also need to figure out what to write about the book on the back (I will most likely use the same text as the description on web based book stores). I would love help with this, if you are willing to license the art source and final version using the same CC license as the book. My artistic skills are not really up to the task.

I plan to publish the book in both English and Norwegian on paper, as well as in PDF, EPUB and MOBI formats. The current status can as usual be found on github in the archive/ directory. So far I have spent all my time on making the PDF version look good. Someone should probably do the same with the dbtoepub generated e-book. Help is definitely needed here, as I expect to run out of steam before I find time to improve the epub formatting.

Please let me know via github if you find typos in the book or discover translations that should be improved. The final proof reading is being done right now, and I expect to publish the finished result in a few months.

16th July 2015

I'm still working on the Norwegian version of the Free Culture book by Lawrence Lessig, and am now working on the final typesetting and layout. One of the features I need to make the structure similar to the original book is to typeset the footnotes as endnotes in the notes chapter. Based on the feedback from the Debian maintainer and the dblatex developer, I came up with this recipe I would like to share with you. The proposal was to create a new LaTeX class file and add the LaTeX code there, but this is not always practical when I want to be able to replace the class using a make file variable. So my proposal misuses the latex.begindocument XSL parameter value to get a small fragment into the correct location in the generated LaTeX file.

First, decide where in the DocBook document to place the endnotes, and add this text there:

<?latex \theendnotes ?>

Next, create an xsl stylesheet file dblatex-endnotes.xsl to add the code needed to insert the endnote instructions in the preamble of the generated LaTeX document, with content like this:

<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>
  <xsl:param name="latex.begindocument">
    <xsl:text>
\usepackage{endnotes}
\let\footnote=\endnote
\def\enoteheading{\mbox{}\par\vskip-\baselineskip }
\begin{document}
    </xsl:text>
  </xsl:param>
</xsl:stylesheet>

Finally, load this xsl file when running dblatex, for example like this:

dblatex --xsl-user=dblatex-endnotes.xsl freeculture.nb.xml

The end result can be seen on github, where my book project is located.

7th July 2015

After asking the Norwegian Broadcasting Company (NRK) why they can broadcast and stream H.264 video without an agreement with the MPEG LA, I was wiser, but still confused. So I asked MPEG LA if their understanding matched that of NRK. As far as I can tell, it does not.

I started by asking for more information about the various licensing classes and what exactly is covered by the "Internet Broadcast AVC Video" class that NRK pointed me at to explain why NRK did not need a license for streaming H.264 video:

According to a MPEG LA press release dated 2010-02-02, there is no charge when using MPEG AVC/H.264 according to the terms of "Internet Broadcast AVC Video". I am trying to understand exactly what the terms of "Internet Broadcast AVC Video" is, and wondered if you could help me. What exactly is covered by these terms, and what is not?

The only source of more information I have been able to find is a PDF named AVC Patent Portfolio License Briefing, which states this about the fees:

  • Where End User pays for AVC Video
    • Subscription (not limited by title) – 100,000 or fewer subscribers/yr = no royalty; > 100,000 to 250,000 subscribers/yr = $25,000; >250,000 to 500,000 subscribers/yr = $50,000; >500,000 to 1M subscribers/yr = $75,000; >1M subscribers/yr = $100,000
    • Title-by-Title - 12 minutes or less = no royalty; >12 minutes in length = lower of (a) 2% or (b) $0.02 per title
  • Where remuneration is from other sources
    • Free Television - (a) one-time $2,500 per transmission encoder or (b) annual fee starting at $2,500 for > 100,000 HH rising to maximum $10,000 for >1,000,000 HH
    • Internet Broadcast AVC Video (not title-by-title, not subscription) – no royalty for life of the AVC Patent Portfolio License

Am I correct in assuming that the four categories listed are the categories used when selecting licensing terms, and that "Internet Broadcast AVC Video" is the category for things that do not fall into one of the other three categories? Can you point me to a good source explaining what is meant by "title-by-title" and "Free Television" in the license terms for AVC/H.264?

Will a web service providing H.264 encoded video content in a "video on demand" fashion similar to Youtube and Vimeo, where no subscription is required and no payment is required from end users to get access to the videos, fall under the terms of the "Internet Broadcast AVC Video", i.e. no royalty for life of the AVC Patent Portfolio license? Does it matter if some users are subscribed to get access to personalized services?

Note, this request and all answers will be published on the Internet.

The answer came quickly from Benjamin J. Myers, Licensing Associate with the MPEG LA:

Thank you for your message and for your interest in MPEG LA. We appreciate hearing from you and I will be happy to assist you.

As you are aware, MPEG LA offers our AVC Patent Portfolio License which provides coverage under patents that are essential for use of the AVC/H.264 Standard (MPEG-4 Part 10). Specifically, coverage is provided for end products and video content that make use of AVC/H.264 technology. Accordingly, the party offering such end products and video to End Users concludes the AVC License and is responsible for paying the applicable royalties.

Regarding Internet Broadcast AVC Video, the AVC License generally defines such content to be video that is distributed to End Users over the Internet free-of-charge. Therefore, if a party offers a service which allows users to upload AVC/H.264 video to its website, and such AVC Video is delivered to End Users for free, then such video would receive coverage under the sublicense for Internet Broadcast AVC Video, which is not subject to any royalties for the life of the AVC License. This would also apply in the scenario where a user creates a free online account in order to receive a customized offering of free AVC Video content. In other words, as long as the End User is given access to or views AVC Video content at no cost to the End User, then no royalties would be payable under our AVC License.

On the other hand, if End Users pay for access to AVC Video for a specific period of time (e.g., one month, one year, etc.), then such video would constitute Subscription AVC Video. In cases where AVC Video is delivered to End Users on a pay-per-view basis, then such content would constitute Title-by-Title AVC Video. If a party offers Subscription or Title-by-Title AVC Video to End Users, then they would be responsible for paying the applicable royalties you noted below.

Finally, in the case where AVC Video is distributed for free through an "over-the-air, satellite and/or cable transmission", then such content would constitute Free Television AVC Video and would be subject to the applicable royalties.

For your reference, I have attached a .pdf copy of the AVC License. You will find the relevant sublicense information regarding AVC Video in Sections 2.2 through 2.5, and the corresponding royalties in Section 3.1.2 through 3.1.4. You will also find the definitions of Title-by-Title AVC Video, Subscription AVC Video, Free Television AVC Video, and Internet Broadcast AVC Video in Section 1 of the License. Please note that the electronic copy is provided for informational purposes only and cannot be used for execution.

I hope the above information is helpful. If you have additional questions or need further assistance with the AVC License, please feel free to contact me directly.

Having a fresh copy of the license text was useful, and knowing that the definition of Title-by-Title required payment per title made me aware that my earlier understanding of that phrase had been wrong. But I still had a few questions:

I have a small followup question. Would it be possible for me to get a license with MPEG LA even if there are no royalties to be paid? The reason I ask is that some video related products have a copyright clause limiting their use without a license with MPEG LA. The clauses typically look similar to this:

This product is licensed under the AVC patent portfolio license for the personal and non-commercial use of a consumer to (a) encode video in compliance with the AVC standard ("AVC video") and/or (b) decode AVC video that was encoded by a consumer engaged in a personal and non-commercial activity and/or AVC video that was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. additional information may be obtained from MPEG LA L.L.C.

It is unclear to me if this clause means that I need to enter into an agreement with MPEG LA to use the product in question, even if there are no royalties to be paid to MPEG LA. I suspect it will differ depending on the jurisdiction, and mine is Norway. What is MPEG LA's view on this?

According to the answer, MPEG LA believes those using such tools for non-personal or commercial use need a license with them:

With regard to the Notice to Customers, I would like to begin by clarifying that the Notice from Section 7.1 of the AVC License reads:

THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (ii) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM

The Notice to Customers is intended to inform End Users of the personal usage rights (for example, to watch video content) included with the product they purchased, and to encourage any party using the product for commercial purposes to contact MPEG LA in order to become licensed for such use (for example, when they use an AVC Product to deliver Title-by-Title, Subscription, Free Television or Internet Broadcast AVC Video to End Users, or to re-Sell a third party's AVC Product as their own branded AVC Product).

Therefore, if a party is to be licensed for its use of an AVC Product to Sell AVC Video on a Title-by-Title, Subscription, Free Television or Internet Broadcast basis, that party would need to conclude the AVC License, even in the case where no royalties were payable under the License. On the other hand, if that party (either a Consumer or business customer) simply uses an AVC Product for their own internal purposes and not for the commercial purposes referenced above, then such use would be included in the royalty paid for the AVC Products by the licensed supplier.

Finally, I note that our AVC License provides worldwide coverage in countries that have AVC Patent Portfolio Patents, including Norway.

I hope this clarification is helpful. If I may be of any further assistance, just let me know.

The mention of Norwegian patents confused me a bit, so I asked for more information:

But one minor question at the end. If I understand you correctly, you state in the quote above that there are patents in the AVC Patent Portfolio that are valid in Norway. This makes me believe I read the list available from <URL: http://www.mpegla.com/main/programs/AVC/Pages/PatentList.aspx > incorrectly, as I believed the "NO" prefix in front of patents marked Norwegian patents, and the only one I could find under Mitsubishi Electric Corporation expired in 2012. Which patents are you referring to that are relevant for Norway?

Again, the quick answer explained how to read that list of patents:

Your understanding is correct that the last AVC Patent Portfolio Patent in Norway expired on 21 October 2012. Therefore, where AVC Video is both made and Sold in Norway after that date, then no royalties would be payable for such AVC Video under the AVC License. With that said, our AVC License provides historic coverage for AVC Products and AVC Video that may have been manufactured or Sold before the last Norwegian AVC patent expired. I would also like to clarify that coverage is provided for the country of manufacture and the country of Sale that has active AVC Patent Portfolio Patents.

Therefore, if a party offers AVC Products or AVC Video for Sale in a country with active AVC Patent Portfolio Patents (for example, Sweden, Denmark, Finland, etc.), then that party would still need coverage under the AVC License even if such products or video are initially made in a country without active AVC Patent Portfolio Patents (for example, Norway). Similarly, a party would need to conclude the AVC License if they make AVC Products or AVC Video in a country with active AVC Patent Portfolio Patents, but eventually Sell such AVC Products or AVC Video in a country without active AVC Patent Portfolio Patents.

As far as I understand it, MPEG LA believes anyone using Adobe Premiere and other video related software with a H.264 distribution license needs a license agreement with MPEG LA to use such tools for anything non-personal or commercial, while it is OK to set up a Youtube-like service as long as no one pays to get access to the content. I still have no clear idea how this applies to Norway, where none of the patents MPEG LA is licensing are valid. Will the copyright terms take precedence, or can those terms be ignored because the patents are not valid in Norway?

5th July 2015

Several people contacted me after my previous blog post about my need for a new laptop, and provided very useful feedback. I wish to thank all of them. Several pointed me to the possibility of fixing my X230, and I am already in the process of getting Lenovo to do so, thanks to the on-site, next-day support contract covering the machine. But the battery is almost useless (I expect to replace it with an unofficial battery) and I do not expect the machine to live for many more years, so it is time to plan its replacement. For those without a support contract, it was suggested to find replacement parts using FrancEcrans, but that might present a language barrier, as I do not understand French.

One tip I got was to use the Skinflint web service to compare laptop models. It seems to have more models available than prisjakt.no. Another tip, from someone I know has similar keyboard preferences, was that the HP EliteBook 840 keyboard is not very good, and this matches my experience with earlier EliteBook keyboards I have tested. Because of this, I will not consider it any further.

When I wrote my blog post, I was not aware of the Thinkpad X250, the newest Thinkpad X model. Its keyboard reintroduces the mouse buttons (which are missing from the X240), and the machine works fairly well with Debian Sid/Unstable according to Corsac.net. The reports I got on the keyboard quality are not consistent: some say the keyboard is good, others say it is OK, while others say it is not very good. Those with experience from the X41 and X60 agree that the X250 keyboard is not as good as on those trusty old laptops, and suggest I keep and fix my X230 instead of upgrading, or get a used X230 to replace it. I am also told that the X250 lacks LEDs for caps lock, disk activity and battery status, which are very convenient on my X230, and that the CPU fan runs very often, making it a bit noisy. In any case, the X250 does not work out of the box with Debian Stable/Jessie, one of my requirements.

I have also gotten a few vendor proposals, one for Pro-Star and another for Libreboot. The latter looks very attractive to me.

Again, thank you all for the very useful feedback. It helps a lot as I keep looking for a replacement.

Update 2015-07-06: I was recommended to check out the lapstore.de web shop for used laptops. They have several different old Thinkpad X models, and provide a one year warranty.

Tags: debian, english.
3rd July 2015

My primary workhorse laptop is failing and will need a replacement soon. The left 5 cm of the screen on my Thinkpad X230 started flickering yesterday, and I suspect the cause is a broken cable, as changing the angle of the screen sometimes gets rid of the flickering.

My requirements have not really changed since I bought it, and are still as I described them in 2013. The last time I bought a laptop, I had good help from prisjakt.no, where I could select at least a few of the requirements (mouse pin, wifi, weight) and go through the rest manually. A three-button mouse and a good keyboard are not available as search options, and all three laptop models proposed today (Thinkpad X240, HP EliteBook 820 G1 and G2) lack three mouse buttons. It is also unclear to me how good the keyboards on the HP EliteBooks are. I hope Lenovo has not messed up the keyboard, even if the quality and robustness of the X series have deteriorated since the X41.

I wonder how I can find a sensible laptop when none of the options seem sensible to me. Are there better services around to search the set of available laptops by features? Please send me an email if you have suggestions.

Update 2015-07-23: I got a suggestion to check out the FSF list of endorsed hardware, which is useful background information.

Tags: debian, english.
2nd July 2015

Last October I was involved, on behalf of NUUG, in recording the talks at MakerCon Nordic, a conference for the Maker movement. Since then the plan has been to publish the recordings on Frikanalen, which finally happened the last few days. A few talks are missing because the speakers asked the organizers not to publish them, but most of the talks are available. The talks are broadcast on RiksTV channel 50 and using multicast on Uninett, as well as being available from the Frikanalen web site. The unedited recordings are available on Youtube too.

This is the list of talks available at the moment. Visit the Frikanalen video pages to view them.

  • Evolutionary algorithms as a design tool - from art to robotics (Kyrre Glette)
  • Make and break (Hans Gerhard Meier)
  • Making a one year school course for young makers (Olav Helland)
  • Innovation Inspiration - IPR Databases as a Source of Inspiration (Hege Langlo)
  • Making a toy for makers (Erik Torstensson)
  • How to make 3D printer electronics (Elias Bakken)
  • Hovering Clouds: Looking at online tool offerings for Product Design and 3D Printing (William Kempton)
  • Travelling maker stories (Øyvind Nydal Dahl)
  • Making the first Maker Faire in Sweden (Nils Olander)
  • Breaking the mold: Printing 1000’s of parts (Espen Sivertsen)
  • Ultimaker — and open source 3D printing (Erik de Bruijn)
  • Autodesk’s 3D Printing Platform: Sparking innovation (Hilde Sevens)
  • How Making is Changing the World – and How You Can Too! (Jennifer Turliuk)
  • Open-Source Adventuring: OpenROV, OpenExplorer and the Future of Connected Exploration (David Lang)
  • Making in Norway (Haakon Karlsen Jr., Graham Hayward and Jens Dyvik)
  • The Impact of the Maker Movement (Mike Senese)

Part of the reason this took so long was that the scripts NUUG had for preparing a recording for publication were five years old and no longer worked with the current video processing tools (the command line arguments had changed). In addition, we needed better audio normalization, which sent me on a detour to package bs1770gain for Debian. Now this is in place, and it has become a lot easier to publish NUUG videos on Frikanalen.

15th June 2015

It is a bit of work to figure out the ownership structure of companies in Norway. The information is publicly available, but one needs to recursively look up ownership for all owners to figure out the complete ownership graph of a given set of companies. To save me the work in the future, I wrote a script to do this automatically, outputting the ownership structure in the Graphviz/dotty format. The data source is web scraping from Proff, because I failed to find a useful source directly from the official keeper of the ownership data, Brønnøysundsregistrene.
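
The core of the script is a simple depth-first walk of the ownership relation, emitting one Graphviz edge per owner. Below is a minimal sketch of that idea; the lookup_owners() function and its sample data are stand-ins of my own for the actual Proff web scraping, and the real script also emits a label line per node with the company name:

#!/usr/bin/python3
# Sketch: walk the ownership relation and emit a Graphviz dot graph.
# SAMPLE_OWNERS is a hypothetical stand-in for the Proff web scraping,
# mapping an organisation number to (owner, ownership percentage) pairs.
SAMPLE_OWNERS = {
    '958033540': [('998689015', 99), ('974530600', 1)],
    '998689015': [('910119877', 100)],
    '910119877': [('Aller Holding A/s', 100)],
}

def lookup_owners(orgnr):
    return SAMPLE_OWNERS.get(orgnr, [])

def graph(orgnrs):
    print('digraph ownership {')
    print('rankdir = LR;')
    seen = set()
    todo = list(orgnrs)
    while todo:
        orgnr = todo.pop()
        if orgnr in seen:
            continue
        seen.add(orgnr)
        for owner, percent in lookup_owners(orgnr):
            print('"%s" -> "%s" [label="%d%%"]' % (owner, orgnr, percent))
            todo.append(owner)  # continue up the ownership chain
    print('}')

graph(['958033540'])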

To get an ownership graph for a set of companies, fetch the code from git and run it with the organisation number as argument. I'm using the Norwegian newspaper Dagbladet as an example here, as its ownership structure is very simple:

% time ./bin/eierskap-dotty 958033540 > dagbladet.dot

real    0m2.841s
user    0m0.184s
sys     0m0.036s
%

The script accepts several organisation numbers on the command line, allowing a cluster of companies to be graphed in the same image. The resulting dot file for the example above looks like this. The edges are labeled with the ownership percentage, and the nodes use the organisation number as their name and the company name as the label:

digraph ownership {
rankdir = LR;
"Aller Holding A/s" -> "910119877" [label="100%"]
"910119877" -> "998689015" [label="100%"]
"998689015" -> "958033540" [label="99%"]
"974530600" -> "958033540" [label="1%"]
"958033540" [label="AS DAGBLADET"]
"998689015" [label="Berner Media Holding AS"]
"974530600" [label="Dagbladets Stiftelse"]
"910119877" [label="Aller Media AS"]
}

To view the ownership graph, run "dotty dagbladet.dot" or convert it to a PNG using "dot -T png dagbladet.dot > dagbladet.png". The result can be seen below:

Note that I suspect the "Aller Holding A/S" entry is incorrect data in the official ownership register, as that name is not registered in the official company register for Norway. The ownership register is sensitive to typos, and there seems to be no strict checking of the ownership links.

Let me know if you improve the script or find better data sources. The code is licensed under GPL 2 or newer.

Update 2015-06-15: Since the initial post I've been told that "Aller Holding A/S" is a Danish company, which explains why it did not have a Norwegian organisation number. I've also been told that there is a web services API available from Brønnøysundsregistrene, for those willing to accept the terms or pay the price.

11th June 2015

Television loudness is a source of frustration for viewers everywhere. Some channels are very loud, others are less loud, and ads tend to be very loud to get the attention of the viewers, which the viewers do not like. This fact is well known to the TV channels. See for example the BBC white paper "Terminology for loudness and level dBTP, LU, and all that" from 2011 for a summary of the problem domain. To better address the need for even loudness, the TV channels got together several years ago to agree on a new way to measure loudness in digital files, as one step in standardizing loudness. From this came the ITU-R standard BS.1770, "Algorithms to measure audio programme loudness and true-peak audio level".

The ITU-R BS.1770 specification describes an algorithm to measure loudness in LUFS (Loudness Units, referenced to Full Scale). But having a way to measure is not enough. To get the same loudness across TV channels, one also needs to decide which value to standardize on. For European TV channels, this was done in the EBU Recommendation R128, "Loudness normalisation and permitted maximum level of audio signals", which specifies a recommended level of -23 LUFS. In Norway, I have been told that NRK, TV2, MTG and SBS have decided among themselves to follow the R128 recommendation for playout from 2016-03-01.

There is free software available to measure and adjust the loudness level using LUFS. In Debian, I am aware of a library named libebur128 able to measure the loudness, and since yesterday morning a new binary named bs1770gain, capable of both measuring and adjusting, was uploaded and is waiting for NEW processing. I plan to maintain the latter in Debian under the Debian multimedia umbrella.

The free software based TV channel I am involved in, Frikanalen, plans to follow the R128 recommendation as soon as we can adjust the software to do so, and the bs1770gain tool seems like a good fit for the part of the puzzle that measures the loudness of new video uploaded to Frikanalen. Personally, I plan to use bs1770gain to adjust the loudness of videos I upload to Frikanalen on behalf of the NUUG member organisation. The program seems to be able to measure the LUFS value of any media file handled by ffmpeg, but I have only successfully adjusted the LUFS value of WAV files. I suspect it should be able to adjust all the formats handled by ffmpeg.
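
For reference, a minimal invocation could look like the line below. The flags reflect my reading of the bs1770gain documentation, with --ebu selecting the EBU R128 target of -23 LUFS and -o naming the output directory, so treat this as a sketch and check bs1770gain --help for the authoritative options:

bs1770gain --ebu -o normalized/ video.wav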

10th May 2015

5 days ago, the Norwegian Parliament unanimously decided that all citizens of Norway, whether suspected of anything criminal or not, are required to give fingerprints to the police (vote details from Holder de ord). The law makes it sound like it will be optional, but in a few years there will be no option any more. The ID will be required to vote, to get a bank account, a bank card, to change address at the post office, to receive an electronic ID or to get a driver's license, and for many other tasks required to function in Norway. The banks plan to stop providing their own ID on the bank cards when this new national ID is introduced, and the national road authorities plan to change the driver's license to no longer be usable as an identity card. In effect, to function as a citizen in Norway a national ID card will be required, and to get it one needs to provide fingerprints to the police.

In addition to handing the fingerprints to the police (who promised not to make a copy of the fingerprint image at that point in time, but say nothing about doing it later), a picture of the fingerprint will be stored on the RFID chip, along with a picture of the face and other information about the person. Some of the information will be encrypted, but the encryption will be the same system as currently used in the passports. The codes to decrypt will be available to a lot of government offices and their suppliers around the globe, but for those that do not know anyone in those circles it is good to know that the encryption is already broken. And the chips can be read from 70 meters away. This can be mitigated a bit by keeping the card in a Faraday cage (a metal box or metal wire container), but one will be required to take it out often enough to expose one's private and personal information to a lot of people who have no business getting access to that information.

The new Norwegian national IDs are a vehicle for identity theft, and I feel sorry for us all, having politicians accepting such an invasion of privacy without any objections. So are the Norwegian passports, but it has been possible to function in Norway without those so far. That option is going away with the passing of the new law. In this, I envy the Germans, because for them it is optional how much biometric information is stored in their national ID.

And if forced collection of fingerprints were not bad enough, the information collected in the national ID card register can be handed over to foreign intelligence services and police authorities, "when extradition is not considered disproportionate".

Update 2015-05-12: For those unable to believe that the Parliament really could make such a decision, I wrote a summary of the sources I have for concluding the way I do (Norwegian only, as the sources are all in Norwegian).

1st May 2015

Many years ago, a friend of mine calculated how much it would cost to store the sound of all phone calls in Norway, and came up with a cost of around 20 million NOK (2.4 million EUR) for all the calls in a year. I got curious and wondered what the same calculation would look like today. To do so, one needs an idea of how much data storage is needed for each minute of sound, how many minutes all the calls in Norway sum up to, and the cost of data storage.

The 2005 numbers are from digi.no, the 2012 numbers are from a NKOM report, and I got the 2013 numbers after asking NKOM via email. I was told the numbers for 2014 will be presented May 20th, and decided not to wait for those, as I doubt they will be very different from the numbers from 2013.

The amount of data storage per minute of sound depends on the wanted quality, and for phone calls it is generally believed that 8 Kbit/s is enough. See for example a summary on voice quality from Cisco for some alternatives. 8 Kbit/s is 60 Kbytes/min, and this can be multiplied with the number of call minutes to get the storage requirement.

Storage prices vary a lot, depending on speed, backup strategies, availability requirements, etc. But a simple way to calculate is to use the price of a 1 TiB disk (around 1000 NOK / 120 EUR) and double it to take space, power and redundancy into account. It could be much higher with high speed and strong redundancy requirements.
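
To make the arithmetic explicit, here is a small Python sketch reproducing the estimates under exactly these assumptions, 8 Kbit/s of sound (60 KiB of storage per minute) and a doubled disk price of 2000 NOK per TiB; the table below rounds the results a bit more generously:

#!/usr/bin/python3
# Estimate the cost of storing the sound of all phone calls in Norway.
# Assumes 8 Kbit/s voice quality, i.e. 60 KiB of storage per minute,
# and a disk price of 1000 NOK per TiB doubled for space, power and
# redundancy.
bytes_per_minute = 60 * 1024
nok_per_tib = 2 * 1000

call_minutes = {
    2005: 24000000000,
    2012: 18000000000,
    2013: 17000000000,
}

for year in sorted(call_minutes):
    tib = call_minutes[year] * bytes_per_minute / 2**40
    print("%d: %.0f TiB, about %.1f million NOK" %
          (year, tib, tib * nok_per_tib / 1000000))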

But back to the question: what would it cost to store all phone calls in Norway? Not much. Here is a small table showing the estimated cost, which is within the budget constraints of most medium and large organisations:

Year   Call minutes      Size      Price in NOK / EUR
2005   24 000 000 000    1.3 PiB   3 mill / 358 000
2012   18 000 000 000    1.0 PiB   2.2 mill / 262 000
2013   17 000 000 000    950 TiB   2.1 mill / 250 000

This is the cost of buying the storage. Maintenance needs to be taken into account too, but calculating that is left as an exercise for the reader. It is obvious to me from these numbers that recording the sound of all phone calls in Norway is not going to be stopped because it is too expensive. I wonder if someone is already collecting the data?

26th April 2015

I am happy to report that the Debian Edu team sent out this announcement today:

the Debian Edu / Skolelinux project is pleased to announce the first
*beta* release of Debian Edu "Jessie" 8.0+edu0~b1, which for the first
time is composed entirely of packages from the current Debian stable
release, Debian 8 "Jessie".

(As most reading this will know, Debian "Jessie" hasn't actually been
released yet. The release is still in progress but should finish
later today ;)

We expect to make a final release of Debian Edu "Jessie" in the coming
weeks, timed with the first point release of Debian Jessie. Upgrades
from this beta release of Debian Edu Jessie to the final release will
be possible and encouraged!

Please report feedback to debian-edu@lists.debian.org and/or submit
bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

Debian Edu - sometimes also known as "Skolelinux" - is a complete
operating system for schools, universities and other
organisations. Through its pre-prepared installation profiles
administrators can install servers, workstations and laptops which
will work in harmony on the school network.  With Debian Edu, the
teachers themselves or their technical support staff can roll out a
complete multi-user, multi-machine study environment within hours or
days.

Debian Edu is already in use at several hundred schools all over the
world, particularly in Germany, Spain and Norway. Installations come
with hundreds of applications pre-installed, plus the whole Debian
archive of thousands of compatible packages within easy reach.

For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual explaining the first steps, such as setting
up a network or adding users.  Please note that the password for the
user you're prompted for during installation must have a length of at
least 5 characters!

== Where to download ==

A multi-architecture CD / usbstick image (649 MiB) for network booting
can be downloaded at the following locations:

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso . 

The SHA1SUM of this image is: 54a524d16246cddd8d2cfd6ea52f2dd78c47ee0a

Alternatively an extended DVD / usbstick image (4.9 GiB) is also
available, with more software included (saving additional download
time):

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso 

The SHA1SUM of this image is: fb1f1504a490c077a48653898f9d6a461cb3c636

Sources are available from the Debian archive, see
http://ftp.debian.org/debian-cd/8.0.0/source/ for some download
options.

== Debian Edu Jessie manual in seven languages ==

Please see https://wiki.debian.org/DebianEdu/Documentation/Jessie/ for
the English version of the Debian Edu jessie manual.

This manual has been fully translated to German, French, Italian,
Danish, Dutch and Norwegian Bokmål. A partly translated version exists
for Spanish.  See http://maintainer.skolelinux.org/debian-edu-doc/ for
online version of the translated manual.

More information about Debian 8 "Jessie" itself is provided in the
release notes and the installation manual:
- http://www.debian.org/releases/jessie/releasenotes
- http://www.debian.org/releases/jessie/installmanual


== Errata / known problems ==

    It takes up to 15 minutes for a changed hostname to be updated via
    DHCP (#780461).

    The hostname script fails to update LTSP server hostname (#783087). 

Workaround: run update-hostname-from-ip on the client to update the
hostname immediately.

Check https://wiki.debian.org/DebianEdu/Status/Jessie for a possibly
more current and complete list.

== Some more details about Debian Edu 8.0+edu0~b1 Codename Jessie released 2015-04-25 ==

=== Software updates ===

Everything which is new in Debian 8 Jessie, e.g.:

 * Linux kernel 3.16.7-ckt9; for the i386 architecture, support for
   i486 processors has been dropped; oldest supported ones: i586 (like
   Intel Pentium and AMD K5).

 * Desktop environments KDE Plasma Workspaces 4.11.13, GNOME 3.14,
   Xfce 4.12, LXDE 0.5.6
   * new optional desktop environment: MATE 1.8
   * KDE Plasma Workspaces is installed by default; to choose one of
     the others see the manual.
 * the browsers Iceweasel 31 ESR and Chromium 41
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.12
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.1
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 43000 packages available for installation.
 * More information about Debian 8 Jessie is provided in its release
   notes and the installation manual, see the link above.

=== Installation changes ===

    Installations done via PXE now also install firmware automatically
    for the hardware present.

=== Fixed bugs ===

A number of bugs have been fixed in this release; the most noticeable
from a user perspective:

 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (710362)

 * shutdown-at-night now shuts the system down if gdm3 is used (775608). 

=== Sugar desktop removed ===

As the Sugar desktop was removed from Debian Jessie, it is also not
available in Debian Edu jessie.


== About Debian Edu / Skolelinux ==

Debian Edu, also known as Skolelinux, is a Linux distribution based on
Debian providing an out-of-the box environment of a completely
configured school network. Directly after installation a school server
running all services needed for a school network is set up just
waiting for users and machines being added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network. The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages and more are available from the Debian archive, and schools
can choose between KDE, GNOME, LXDE, Xfce and MATE desktop
environment.

== About Debian ==

The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.

== Thanks ==

Thanks to everyone making Debian and Debian Edu / Skolelinux happen!
You rock.
15th April 2015

It was a surprise to me to learn that the project to create a complete computer system for schools I've been involved in, Debian Edu / Skolelinux, was being used in India. But apparently it is, and I managed to get an interview with one of the friends of the project there, Shirish Agarwal.

Who are you, and how do you spend your days?

My name is Shirish Agarwal. Based out of the educational and historical city of Pune, from the western state of Maharashtra, India. My bread comes from giving training, giving policy tips, installations on free software to mom and pop shops in different fields from Desktop publishing to retail shops as well as work with few software start-ups as well.

How did you get in contact with the Skolelinux / Debian Edu project?

It started innocently enough. I have been using Debian for a few years and in one local minidebconf / debutsav I was asked if there was anything for schools or education. I had worked / played with free educational softwares such as Gcompris and Stellarium for my many nieces and nephews so researched and found Debian Edu or Skolelinux as it was known then. Since then I have started using the various education meta-packages provided by the project.

What do you see as the advantages of Skolelinux / Debian Edu?

It's closest I have seen where a package full of educational software are packed, which are free and open (both literally and figuratively). Even if I take the simplest software which is gcompris, the number of activities therein are amazing. Another one of the softwares that I have liked for a long time is stellarium. Even pysycache is cool except for couple of issues I encountered #781841 and #781842.

I prefer software installed on the system over web based solutions, as a web site can disappear any time but the software on disk has the possibility of a larger life span. Of course with both it's more a question if it has enough users who make it fun or sustainable or both for the developer per-se.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I do see that the Debian Edu team seems to be short-handed and I think more efforts should be made to make it popular and ask and take help from people and the larger community wherever possible.

I don't see any disadvantage to use Skolelinux apart from the fact that most apps. are generic which is good or bad how you see it. However, saying that I do acknowledge the fact that the canvas is pretty big and there are lot of interesting ideas that could be done but for reasons not known not done or if done I don't know about them. Let me share some of the ideas (these are more upstream based but still) I have had for a long time :

1. Classical maths question of two trains in opposing directions each running @x kmph/mph at y distance, when they will meet and how far would each travel and similar questions like these.

The computer is a fantastic system where questions like these can be drawn, animated and the methodology and answers teased out in interactive manner. While sites such as the Ask Dr. Math FAQ on The Two Trains problem (as an example or point of inspiration) can be used there is lot more that can be done. I dunno if there is a free software which does something like this. The idea being a blend of objects + animation + interaction which does this. The whole interaction could be gamified with points or sounds or colourful celebration whenever the user gets even part of the question or/and methodology right. That would help reinforce good behaviour. This understanding could be used to share/showcase everything from how the first wheel came to be, to evolution to how astronomy started, psychics and everything in-between.

One specific idea in the train part was having the Linux mascot on one train and the BSD or GNU mascot on the other train and they meeting somewhere in-between. Characters from blender movies could also be used.

2. Loads of crossword-puzzles with reference to subjects: We have enormous data sets in Wikipedia and Wikitionary. I don't think it should be a big job to design crossword puzzles. Using categories and sub-categories it should be doable to have Q&A single word answers from the existing data-sets. What would make it easy or hard could be the length of the word + existence of many or few vowels depending on the user's input.

3. Jigsaw puzzles - We already have a great software called palapeli with number of slicers making it pretty interesting. What needs to be done is to download large number of public domain and copyleft images, tease and use IPTC tags to categorise them into nature, history etc. and let it loose. This could turn to be really huge collection of images. One source could be taken from commons.wikimedia.org, others could be huge collection of royalty-free stock photos. Potential is immense.

Apart from this, free software suffers in two directions, we lag both in development (of using new features per-se) and maintenance a lot. This is more so in educational software as these applications need to be timely and the opportunity cost of missing deadlines is immense. If we are able to solve issues of funding for development and maintenance of such software I don't see any big difficulties. I know of few start-ups in and around India who would love to develop and maintain such software if funding issues could be solved.

Which free software do you use daily?

That would be a huge list. Some of the softwares are obviously apt, aptitude, debdelta, leafpad, the shell of course (zsh nowadays), quassel for IRC. In games I use shisen-sho while card-games are evenly between kpat and Aisleriot. In desktops it's a tie between gnome-flashback and mate.

Which strategy do you believe is the right one to use to get schools to use free software?

I think it should first start with using specific FOSS apps. in whatever environment they are. If it's MS-Windows or Mac so be it. Once they are habitual with the apps. and there is buy-in from the school management then it could be installed anywhere. Most of the people now understand the concept of a repository because of the various online stores so it isn't hard to convince on that front.

What is harder is having enough people with technical skills and passion to service them. If you get buy-in from one or two teachers then ideas like above could also be asked to be done as a project as well.

I think where we fall short more than anything is in marketing. For instance, Debian has this whole range of fonts in its archive, but there isn't even a page where all those different fonts could be tried out in the Lorem Ipsum format for newcomers.

One of the issues faced constantly in installations is with updates and upgrades. People have this myth that each update and upgrade means the user interface will / has to change. I have seen this innumerable times. That perhaps is one of the reasons which browsers like Iceweasel / Firefox change user interfaces so much, not because it might be needed or be functional but because people believe that changed user interfaces are better. This, can easily be pointed with the user interfaces changed with almost every MS-Windows and Mac OS releases.

The problems with Debian Edu for deployment are many. The biggest is the huge gap between what is taught in schools and what Debian Edu is aimed at.

Me and my friends did teach on week-ends in a government school for around 2 years, and gathered some experience there. Some of the things we learnt/discovered there was :

  1. Most of the teachers are very territorial about their subjects and they do not want you to teach anything out of the portion/syllabus given.
  2. They want any activity on the system in accordance to whatever is in the syllabus.
  3. There are huge barriers both with the English language and at times with objects or whatever. An example, let's say in gcompris you have objects falling down and you have to name them and let's say the falling object is a hat or a fedora hat, this would not be as recognizable as say a Puneri Pagdi so there is need to inject local objects, words wherever possible. Especially for word-games there are so many hindi words which have become part of english vocabulary (for instance in parley), those could be made into a hinglish collection or something but that is something for upstream to do.
7th April 2015

I am happy to let you all know that I'm going to the Open Source Developers' Conference Nordic 2015!

It takes place Friday 8th to Sunday 10th of May in Oslo, next to where I work, and I finally got around to submitting a talk proposal for it (a dead link for most people until the talk is accepted). As part of my involvement with the Norwegian Unix User Group member association, I have been slightly involved in the planning of this conference for a while now, with a focus on organising a Civic Hacking Hackathon with our friends over at mySociety and Holder de ord. This part is named the 'My Society' track in the program. There is still space for more talks and participants. I hope to see you there.

Check out the talks submitted and accepted so far.

4th April 2015

During Easter I had some time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig. At the moment I am proofreading the finished text, looking for typos, inconsistent wording and sentences that do not flow as they should. I'm more than two thirds done with the text, and welcome others to check the text up to chapter 13. The current status is available on the github project pages. You can also check out the PDF, EPUB and HTML versions available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

9th March 2015

The Norwegian Unix User Group, where I am a member and where people interested in free software, open standards and UNIX-like operating systems like Linux and the BSDs come together, records our monthly technical presentations on video. The purpose is to document the talks and spread them to a wider audience. For this, the Norwegian nationwide open channel Frikanalen is a useful venue. Since a few days ago, when I figured out the REST API for programming the channel's time schedule, the channel has been filled with NUUG talks, related recordings and some Creative Commons licensed TED talks (from archive.org). I fill all "leftover bits" on the channel with content from NUUG, which at the moment amounts to almost 17 of the 24 hours every day.

The list of NUUG videos uploaded so far includes things like a one hour talk by John Perry Barlow when he visited Oslo, a presentation of Haiku, the BeOS re-implementation, the history of FiksGataMi, the Norwegian version of FixMyStreet, the good old Warriors of the Net video and many others.

We have a large backlog of NUUG talks not yet uploaded to Frikanalen, and plan to upload every useful bit to the channel to spread the word there. I also hope to find useful recordings from the Chaos Computer Club and Debian conferences and spread them on the channel as well. But this requires locating the videos and their meta information (title, description, license, etc.), and preparing the recordings for broadcast, and I have not yet had the spare time to focus on this. Perhaps you want to help? Please join us on IRC, #nuug on irc.freenode.net, if you want to help make this happen.

But as I said, the channel is already almost exclusively filled with technical topics, and if you want to learn something new today, check out the Ogg Theora web stream or use one of the other ways to get access to the channel. Unfortunately the Ogg Theora recoding for distribution still does not properly sync the video and sound. It is generated by recoding an internal MPEG transport stream with MPEG4 coded video (i.e. H.264) to Ogg Theora / Vorbis, and we have not been able to find a way that produces acceptable quality. Help is needed; please get in touch if you know how to fix it using free software.

28th February 2015

Today I was happy to learn that the documentary Citizenfour by Laura Poitras will finally show up in Norway. According to the magazine Montages, a deal has finally been made for cinema distribution in Norway, and the movie will have its premiere soon. This is great news. As part of my involvement with the Norwegian Unix User Group, a friend and I tried to get the movie to Norway ourselves, but obviously we were too late, and Tor Fosse beat us to it. I am happy he did, as the movie will make its way to the public and we do not have to make it happen ourselves. The trailer can be seen on Youtube, if you are curious what kind of film this is.

The whistleblower Edward Snowden really deserves political asylum here in Norway, but I am afraid he would not be safe.

25th February 2015

The Norwegian nationwide open channel Frikanalen is still going strong. It allows everyone to send the video they want on national television. It is a TV station administrated completely using a web browser, running only free software, providing a REST API for administrators and members, and with distribution on the national DVB-T distribution network RiksTV, but only between 12:00 and 17:30 Norwegian time. This has finally changed, after many years with limited distribution. A few weeks ago, we set up an Ogg Theora stream via Icecast to allow everyone with Internet access to check out the channel the rest of the day. This is presented on the Frikanalen web site now. And since a few days ago, the channel is also available via multicast on UNINETT, for those using IPTV TVs and set-top boxes in the Norwegian National Research and Education network.

If you want to see what is on the channel, point your media player to one of these sources. The first should work with most players and browsers, while, as far as I know, the multicast UDP stream only works with VLC.

The Ogg Theora / Icecast stream is not working well, as the video and audio are slightly out of sync. We have not been able to figure out how to fix it. It is generated by recoding an internal MPEG transport stream with MPEG4 coded video (i.e. H.264) to Ogg Theora / Vorbis, and the result is less than stellar. If you have ideas how to fix it, please let us know at frikanalen (at) nuug.no. We currently use this with ffmpeg2theora 0.29:

./ffmpeg2theora.linux <OBE_gemini_URL.ts> -F 25 -x 720 -y 405 \
 --deinterlace --inputfps 25 -c 1 -H 48000 --keyint 8 --buf-delay 100 \
 --nosync -V 700 -o - | oggfwd video.nuug.no 8000 <pw> /frikanalen.ogv

If you get the multicast UDP stream working, please let me know, as I am curious how far the multicast stream reaches. It does not make it to my home network, nor to any other commercially available network in Norway that I am aware of.

10th February 2015

Aftenposten, one of the largest newspapers in Norway, today reports that three of the nude body scanners are now in use at Gardermoen, the main airport in Norway. This way travelers can have their bodies photographed without clothes when visiting Norway. Of course this horrible news is presented with a positive spin, stating that "now travelers can move past the security check point faster and more efficiently", but it fails to mention that the machines in question take pictures of their nude bodies and store them internally in the computer, while only presenting a sketch figure of the body to the public. The article is written in a way that leaves the impression that the new machines do not take these nude pictures and only create the sketch figures. In reality the same nude pictures are still taken, but not presented to everyone. They are still available to the owners of the system and the people doing maintenance of the scanners, as long as they are taken and stored.

Wikipedia has more on full body scanners, including example images and a summary of the controversy about these scanners.

Personally, I will decline to use these machines, as I believe strip searches of my body are a very intrusive attack on my privacy, and not something everyone should have to accept in order to travel.

8th February 2015

When running a TV station with both broadcast and web stream distribution, it is useful to know that the stream is working. As I am involved in the Norwegian open channel Frikanalen through my activity in the NUUG member organisation, I wrote a script that uses mplayer to connect to a video stream, pick two images 35 seconds apart and compare them. If the images are missing or identical, something is probably wrong with the stream and an alarm should be triggered. The script is written as a Nagios plugin, allowing us to use Nagios to run the check regularly and sound the alarm when something is wrong. It is able to detect both a hanging and a broken video stream.
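
The whole check fits in a few lines of Python. The sketch below is my reconstruction of the approach, not the actual plugin: it grabs one frame, waits 35 seconds, grabs another, and maps the outcome to the Nagios exit codes (0 for OK, 2 for CRITICAL). The mplayer options used (-ao null, -vo jpeg to dump a frame as JPEG, -frames 1 to stop after one frame) are standard mplayer options, while the stream URL is a placeholder:

#!/usr/bin/python3
# Minimal Nagios-style stream check: fetch two frames 35 seconds
# apart and alarm if a frame is missing or the two are identical.
import filecmp
import os
import subprocess
import sys
import tempfile
import time

STREAM = 'http://example.org/stream.ogv'  # hypothetical stream URL

def grab_frame(outdir):
    # Ask mplayer to decode a single frame to a JPEG file in outdir.
    subprocess.call(['mplayer', '-really-quiet', '-ao', 'null',
                     '-vo', 'jpeg:outdir=%s' % outdir,
                     '-frames', '1', STREAM])
    files = sorted(os.listdir(outdir))
    return os.path.join(outdir, files[0]) if files else None

with tempfile.TemporaryDirectory() as d1, \
     tempfile.TemporaryDirectory() as d2:
    first = grab_frame(d1)
    time.sleep(35)
    second = grab_frame(d2)
    if first is None or second is None:
        print('CRITICAL: unable to fetch frame from stream')
        sys.exit(2)
    if filecmp.cmp(first, second, shallow=False):
        print('CRITICAL: stream appears to hang (identical frames)')
        sys.exit(2)
    print('OK: stream is alive')
    sys.exit(0)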

I just uploaded the code for the script to the Frikanalen git repository on github. If you run a TV station with web streaming, perhaps you will find it useful too.

Last year, the Frikanalen public TV station switched to using only Linux based free software to administrate, schedule and distribute the TV content. The source code for the entire TV station is available from the Github project page. Everyone can use it to send their content on national TV, and we provide both a web GUI and a web API to add and schedule content. And thanks to last week's developer gathering and the following activity, we now have the schedule available as XMLTV too. There is still a lot of work left to do, especially with the process to add videos and with the scheduling, so your contribution is most welcome. Perhaps you want to set up your own TV station?

Update 2015-02-25: Got a tip from Uninett about their qstream monitoring system, which gathers connection time, jitter, packet loss and burst bandwidth usage. It looks useful for checking whether UDP streams are working as they should.

12th January 2015

A few days ago, the Free Software Foundation announced a new video explaining free software in simple terms. The video, named User Liberation, is 3 minutes long, and I recommend showing it to everyone you know as a way to explain what free software is all about. Unfortunately several of the people I know do not understand English or Spanish, so it did not make sense to show it to them.

But today I was told that English subtitles were available, and I set out to provide Norwegian Bokmål subtitles based on them. The result has been sent to FSF and made available in a git repository on Github. Please let me know if you find errors or have improvements to the subtitles.

Update 2015-02-03: Since I published this post, FSF created a Libreplanet project to track subtitles for the video.

Tags: english, video.
30th December 2014

I am very happy that we in the Norwegian Unix User Group (NUUG), spearheaded by Marius Halden from NUUG and Matthew Somerville from mySociety, finally managed to upgrade the code base for the Norwegian version of FixMyStreet. This was the first major update since 2011. The refurbished FiksGataMi is already live, and seems to hold up under the pressure. The press release and announcement went out this morning.

FixMyStreet is a web platform allowing citizens to easily report problems with public infrastructure to the responsible authorities. Think of it as a shared mail client with map support, allowing everyone to see what has already been reported and comment on the reports in public.

19th December 2014

So, Sony caved in (according to Rob Lowe) and demonstrated that America lost its first cyberwar (according to Newt Gingrich). It should not surprise anyone, after the whistleblower Edward Snowden documented that the government of the USA and its allies for many years have done their best to make sure the technology used by their citizens is filled with security holes allowing the secret services to spy on their own populations. No one in their right mind could believe that the ability to snoop on people all over the globe could only be used by the personnel authorized to do so by the president of the United States of America. If the capabilities are there, they will be used by friend and foe alike, and now they are being used to bring Sony to its knees.

I doubt it will be a lesson learned, and I expect the USA to lose its next cyberwar too, given how eager the western intelligence communities (and probably the non-western ones too, but that is less in the news) seem to be to continue their current dragnet surveillance practice.

There is a reason why China and others are trying to move away from Windows to Linux and other alternatives, and it is not to avoid sending their hard earned dollars to the Cayman Islands (or whatever tax haven Microsoft is using these days to collect the majority of its income). :)

22nd November 2014

By now, it is well known that Debian Jessie will not be using sysvinit as its boot system by default. But how can one keep using sysvinit in Jessie? It is fairly easy, and here are a few recipes, courtesy of Erich Schubert and Simon McVittie.

If you are already using Wheezy and want to upgrade to Jessie while keeping sysvinit as your boot system, create a file /etc/apt/preferences.d/use-sysvinit with this content before you upgrade:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

This file tells apt and aptitude not to consider installing systemd-sysv as part of any installation or upgrade solution when resolving dependencies, and thus to avoid systemd as the default boot system. The end result should be that the upgraded system keeps using sysvinit.

If you are installing Jessie for the first time, there is no way to get sysvinit installed by default (debootstrap, used by debian-installer, has no option for this), but one can tell the installer to switch to sysvinit before the first boot, either by using a kernel argument to the installer, or by adding a line to the preseed file used. First, the kernel command line argument:

preseed/late_command="in-target apt-get install --purge -y sysvinit-core"

Next, the line to use in a preseed file:

d-i preseed/late_command string in-target apt-get install -y sysvinit-core

One can of course also do this after the first boot by installing the sysvinit-core package, as shown below.
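
On a running system, that amounts to a single standard apt-get command; add --purge if you also want the systemd configuration files removed:

apt-get install sysvinit-core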

I recommend only using sysvinit if you really need it, as the sysvinit boot sequence in Debian has several hardware specific bugs on Linux, caused by the fact that it is unpredictable when hardware devices show up during boot. But on the other hand, the new default boot system still has a few rough edges I hope will be fixed before Jessie is released.

Update 2014-11-26: Inspired by a blog post by Torsten Glaser, added --purge to the preseed line.

10th November 2014

The right to communicate with your friends and family in private, without anyone snooping, is a right every citizen has in a liberal democracy. But this right is under serious attack these days.

A while back it occurred to me that one way to make the dragnet surveillance conducted by NSA, GCHQ, FRA and others (and confirmed by the whistleblower Snowden) more expensive for Internet email is to deliver all email using SMTP via Tor. Such an SMTP option would be a nice addition to the FreedomBox project, if we could send email between FreedomBox machines without leaking metadata about the emails to the people peeking on the wire. I proposed this on the FreedomBox project mailing list in October and got a lot of useful feedback and suggestions. It also became obvious to me that this was not a novel idea, as the same idea was tested and documented by Johannes Berg as early as 2006, and both the Mailpile and the Cables systems propose a similar method / protocol to pass emails between users.

To implement such a system, one needs to set up a Tor hidden service providing the SMTP protocol on port 25, and use email addresses looking like username@hidden-service-name.onion. With such addresses, connections to port 25 on hidden-service-name.onion using Tor will go to the correct SMTP server. To do this, one needs to configure the Tor daemon to provide the hidden service, and the mail server to accept emails for this .onion domain. To learn more about Exim configuration in Debian and test the design provided by Johannes Berg in his FAQ, I set out yesterday to create a Debian package making it trivial to set up such an SMTP-over-Tor service on Debian. Getting it to work was fairly easy, and the source code for the Debian package is available from github. I plan to move it into Debian if further testing proves this to be a useful approach.
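
The Tor side of this is an ordinary hidden service forwarding port 25. A minimal torrc fragment could look like the two lines below, where the directory path is only an example and the mail server is assumed to listen on port 25 on localhost:

HiddenServiceDir /var/lib/tor/smtorp/
HiddenServicePort 25 127.0.0.1:25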

If you want to test this, set up a blank Debian machine without any mail system installed (or run apt-get purge exim4-config to get rid of exim4). Install tor, clone the git repository mentioned above, build the deb and install it on the machine. Next, run /usr/lib/exim4-smtorp/setup-exim-hidden-service and follow the instructions to get the service up and running. Restart tor and exim when it is done, and test mail delivery using swaks like this:

torsocks swaks --server dutlqrrmjhtfa3vp.onion \
  --to fbx@dutlqrrmjhtfa3vp.onion

This will test the SMTP delivery using tor. Replace the email address with your own address to test your server. :)

The setup procedure is still too complex, and I hope it can be made easier and more automatic. Especially the tor setup needs more work. Also, the package includes a tor-smtp tool written in C, but its task should probably be reimplemented in some scripting language to make the deb architecture independent. It would probably also make the code easier to review. The tor-smtp tool currently needs to listen on a socket for exim to talk to it, and is started using xinetd. It would be better if no daemon and no socket were needed. I suspect it is possible to get exim to run a command line tool for delivery instead of talking to a socket, and hope to figure out how in a future version of this system.

Until I wipe my test machine, I can be reached using the fbx@dutlqrrmjhtfa3vp.onion mail address, deliverable over SMTorP. :)

27th October 2014

I am happy to report that I, on behalf of the Debian Edu team, just sent out this announcement:

The Debian Edu Team is pleased to announce the release of Debian Edu
Jessie 8.0+edu0~alpha0

Debian Edu is a complete operating system for schools. Through its
various installation profiles you can install servers, workstations
and laptops which will work together on the school network. With
Debian Edu, the teachers themselves or their technical support can
roll out a complete multi-user multi-machine study environment within
hours or a few days. Debian Edu comes with hundreds of applications
pre-installed, but you can always add more packages from Debian.

For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual[1] explaining the first steps, such as
setting up a network or adding users. Please note that the password
for the user you are prompted for during installation must have a length
of at least 5 characters!

 [1] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie >

Would you like to give your school's computers a longer life? Are you
tired of sneaker administration, running from computer to computer
reinstalling the operating system? Would you like to administrate all
the computers in your school using only a couple of hours every week?
Check out Debian Edu Jessie!

Skolelinux is used by at least two hundred schools all over the world,
mostly in Germany and Norway.

About Debian Edu and Skolelinux
===============================

Debian Edu, also known as Skolelinux[2], is a Linux distribution based
on Debian providing an out-of-the box environment of a completely
configured school network. Immediately after installation a school
server running all services needed for a school network is set up,
just waiting for users and machines to be added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network.  The school server
provides an LDAP database and Kerberos authentication service,
centralized home directories, a DHCP server, a web proxy and many
other services.  The desktop contains more than 60 educational
software packages[3] and more are available from the Debian archive,
and schools can choose between the KDE, Gnome, LXDE, Xfce and MATE
desktop environments.

 [2] <URL: http://www.skolelinux.org/ >
 [3] <URL: https://people.skolelinux.org/pere/blog/Educational_applications_included_in_Debian_Edu___Skolelinux__the_screenshot_collection____.html >

Full release notes and manual
=============================

Below the download URLs there is a list of some of the new features
and bugfixes of Debian Edu 8.0+edu0~alpha0 Codename Jessie. The full
list is part of the manual. (See the feature list in the manual[4] for
the English version.) For some languages manual translations are
available, see the manual translation overview[5].

 [4] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie/Features >
 [5] <URL: http://maintainer.skolelinux.org/debian-edu-doc/ >

Where to get it
---------------

To download the multiarch netinstall CD release (624 MiB) you can use

 * ftp://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso .

The SHA1SUM of this image is: 361188818e036ce67280a572f757de82ebfeb095

New features for Debian Edu 8.0+edu0~alpha0 Codename Jessie released 2014-10-27
===============================================================================


Installation changes
--------------------

 * PXE installation now installs firmware automatically for the hardware present.

Software updates
----------------

Everything which is new in Debian Jessie 8.0, e.g.:

 * Linux kernel 3.16.x
 * Desktop environments KDE "Plasma" 4.11.12, GNOME 3.14, Xfce 4.10,
   LXDE 0.5.6 and MATE 1.8 (KDE "Plasma" is installed by default; to
   choose one of the others see manual.)
 * the browsers Iceweasel 31 ESR and Chromium 38 
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.07 
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.0
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 42000 packages available for
   installation.
 * More information about Debian Jessie 8.0 is provided in the release
   notes[6] and the installation manual[7].

 [6] <URL: http://www.debian.org/releases/jessie/releasenotes >
 [7] <URL: http://www.debian.org/releases/jessie/installmanual >

Fixed bugs
----------

 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (Debian bug #710362)
 * and many others.

Documentation and translation updates
------------------------------------- 

 * The Debian Edu Jessie Manual is fully translated to German, French,
   Italian, Danish and Dutch. Partly translated versions exist for
   Norwegian Bokmal and Spanish.

Other changes
-------------

 * Due to new Squid settings, powering off or rebooting the main
   server takes more time.
 * To manage printers, localhost:631 has to be used; currently
   www:631 does not work.

Regressions / known problems
----------------------------

 * Installing the LTSP chroot fails with an eatmydata-related bug where
   exim4-config fails to run its postinst (see Debian bug #765694
   and Debian bug #762103).
 * Munin collection is not properly configured on clients (Debian bug
   #764594).  The fix is available in a newer version of munin-node.
 * PXE setup for Main Server and Thin Client Server does not work
   when installing on a machine without direct Internet access.
   Will be fixed when Debian bug #766960 is fixed in Jessie.

See the status page[8] for the complete list.

 [8] <URL: https://wiki.debian.org/DebianEdu/Status/Jessie >

How to report bugs
------------------

<URL: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs >

About Debian
============

The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.

Contact Information
For further information, please visit the Debian web pages[9] or send
mail to press@debian.org.

 [9] <URL: http://www.debian.org/ >

23rd October 2014

I spent last weekend at Makercon Nordic, a great conference and workshop for makers in Norway and the surrounding countries. I had volunteered on behalf of the Norwegian Unix Users Group (NUUG) to video record the talks, and we had a great and exhausting time recording the entire day, two days in a row. There were only two of us, Hans-Petter and me, and we used the regular video equipment for NUUG, with a dvswitch, a camera and a VGA to DV converter box, and mixed video and slides live.

Hans-Petter did the post-processing, which consisted of uploading the around 180 GiB of raw video to Youtube, and the result is now becoming public on the MakerConNordic account. The videos have the license NUUG always uses on our recordings, Creative Commons Navngivelse-Del på samme vilkår 3.0 Norge (Norwegian Attribution-ShareAlike). There are many great talks available. Check it out! :)

Tags: english, nuug, video.
22nd October 2014

If you ever had to moderate a mailman list, like the ones on alioth.debian.org, you know the web interface is fairly slow to operate. First you visit one web page, enter the moderation password, and then get a new page with a list of all the messages to moderate and various options for each email address. This takes a while for every list you moderate, and you need to do it regularly to do a good job as a list moderator. But there is a quick alternative: the listadmin program. It allows you to check lists for new messages to moderate in a fraction of a second. Here is a test run on two lists I recently took over:

% time listadmin xiph
fetching data for pkg-xiph-commits@lists.alioth.debian.org ... nothing in queue
fetching data for pkg-xiph-maint@lists.alioth.debian.org ... nothing in queue

real    0m1.709s
user    0m0.232s
sys     0m0.012s
%

In 1.7 seconds I had checked two mailing lists and confirmed that there were no messages in the moderation queue. Every morning I currently moderate 68 mailman lists, and it normally takes around two minutes. When I took over the two pkg-xiph lists above a few days ago, there were 400 emails waiting in the moderation queue. It took me less than 15 minutes to process them all using the listadmin program.

If you install the listadmin package from Debian and create a file ~/.listadmin.ini with content like this, the moderation task is a breeze:

username username@example.org
spamlevel 23
default discard
discard_if_reason "Posting restricted to members only. Remove us from your mail list."

password secret
adminurl https://{domain}/mailman/admindb/{list}
mailman-list@lists.example.com

password hidden
other-list@otherserver.example.org

There are other options to set as well. Check the manual page to learn the details.

If you are forced to moderate lists on a mailman installation where the SSL certificate is self-signed or not signed by a generally accepted certificate authority, you can set an environment variable when calling listadmin to disable SSL verification:

PERL_LWP_SSL_VERIFY_HOSTNAME=0 listadmin

If you want to moderate a subset of the lists you take care of, you can provide an argument to the listadmin script, like I do in the initial screen dump (the xiph argument). With an argument, only lists matching the argument string will be processed. This makes it quick to accept messages if you notice the moderation request in your email.

Without the listadmin program, I would never be the moderator of 68 mailing lists, as I simply would not have time to spend on that if the process were any slower. The listadmin program has saved me hours of time I could spend elsewhere over the years. It truly is nice free software.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2014-10-27: Added missing 'username' statement in the configuration example. Also, I've been told that the PERL_LWP_SSL_VERIFY_HOSTNAME=0 setting does not work for everyone. Not sure why.

17th October 2014

When PXE installing laptops with Debian, I often run into the problem that the WiFi card requires some firmware to work properly. And it has been a pain to fix this using preseeding in Debian. Normally something more is needed. But thanks to my isenkram package and its recent tasksel extension, it has now become easy to do this using simple preseeding.

The isenkram-cli package provides tasksel tasks which will install firmware for the hardware found in the machine (actually, requested by the kernel modules for the hardware). (It can also install user space programs supporting the hardware detected, but that is not the focus of this story.)

To get this working in the default installation, two preseeding values are needed. First, the isenkram-cli package must be installed into the target chroot (aka the hard drive) before tasksel is executed in the pkgsel step of the debian-installer system. This is done by preseeding the base-installer/includes debconf value to include the isenkram-cli package. The package name is then passed to debootstrap for installation. With the isenkram-cli package in place, tasksel will automatically use the isenkram tasks to detect hardware specific packages for the machine being installed and install them, because isenkram-cli contains tasksel tasks.

Second, one needs to enable the non-free APT repository, because most firmware unfortunately is non-free. This is done by preseeding the apt-mirror-setup step. This is unfortunate, but for a lot of hardware it is the only option in Debian.

The end result is two lines needed in your preseeding file to get firmware installed automatically by the installer:

base-installer base-installer/includes string isenkram-cli
apt-mirror-setup apt-setup/non-free boolean true

The current version of isenkram-cli in testing/jessie will install both firmware and user space packages when using this method. It also does not work properly, so use version 0.15 or later. Installing both firmware and user space packages might give you a bit more than you want, so I decided to split the tasksel task in two, one for firmware and one for user space programs. The firmware task is enabled by default, while the one for user space programs is not. This split is implemented in the package currently in unstable.

If you decide to give this a go, please let me know (via email) how this recipe works for you. :)

So, I bet you are wondering how this can work. First and foremost, it works because tasksel is modular, driven by whatever files it finds in /usr/lib/tasksel/ and /usr/share/tasksel/. So the isenkram-cli package places two files for tasksel to find. First there is the task description file (/usr/share/tasksel/descs/isenkram.desc):

Task: isenkram-packages
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: show show
Relevance: 8
Packages: for-current-hardware

Task: isenkram-firmware
Section: hardware
Description: Hardware specific firmware packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific firmware
 packages are proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware-firmware

The key parts are the Test-new-install line, which indicates how the task should be handled, and the Packages line, referencing a script in /usr/lib/tasksel/packages/. The scripts use other scripts to get a list of packages to install. The for-current-hardware-firmware script looks like this to list relevant firmware for the machine:

#!/bin/sh
# List firmware packages relevant for the current hardware.
PATH=/usr/sbin:$PATH
export PATH
isenkram-autoinstall-firmware -l

With those two pieces in place, the firmware is installed by tasksel during the normal d-i run. :)

If you want to test what tasksel will install when isenkram-cli is installed, run DEBIAN_PRIORITY=critical tasksel --test --new-install to get the list of packages that tasksel would install.

Debian Edu will be a pilot in testing this feature, as isenkram is now used there to install firmware, replacing the earlier scripts.

4th October 2014

Today I came across an unexpected Ubuntu boot screen. Above the bread shelf in the ICA shop at Storo in Oslo, the grub menu of Ubuntu with Linux kernel 3.2.0-23 (i.e. probably version 12.04 LTS) was stuck on a screen normally showing the bread types and prices:

If it had booted as it was supposed to, I would never have known about this hidden Linux installation. It is interesting what errors can reveal.

Tags: debian, english.
4th October 2014

The lsdvd project got a new set of developers a few weeks ago, after the original developer decided to step down and pass the project on to fresh blood. The project is now maintained by Petter Reinholdtsen and Steve Dibb.

I just wrapped up a new lsdvd release, available in git or from the download page. This is the changelog dated 2014-10-03 for version 0.17.

  • Ignore 'phantom' audio, subtitle tracks
  • Check for garbage in the program chains, which indicates that a track is non-existent, to work around additional copy protection
  • Fix displaying content type for audio tracks, subtitles
  • Fix palette display of first entry
  • Fix include orders
  • Ignore read errors in titles that would not be displayed anyway
  • Fix the chapter count
  • Make sure the array size and the array limit used when initialising the palette size are the same.
  • Fix array printing.
  • Correct subsecond calculations.
  • Add sector information to the output format.
  • Clean up code to be closer to ANSI C and compile without warnings with more GCC compiler warnings.

This change brings together patches for lsdvd in use in various Linux and Unix distributions, as well as patches submitted to the project over the last nine years. Please check it out. :)

26th September 2014

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, a user database, user home directories, central login and PXE boot of both diskless clients and the installer to install Debian Edu on machines with a disk (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems break the installer and block testing, but it is possible to work around them to get testing done anyway. Here is a recipe for how to get the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build was broken on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2) and run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.
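
After the edit, the top of the script should look something like this (the rest of the original content is left in place, but is never reached):

#!/bin/sh
exit 0
# ... the rest of the original script follows here ...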

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out the correct value for your setup; all my test machines need /dev/sda, so I have no advice if that does not fit your machine).

If you installed a profile including a graphical desktop, log in as root after the initial boot from the hard drive, and install the education-desktop-XXX metapackage, where XXX is one of kde, gnome, lxde, xfce or mate (see the example below). If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.
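
A minimal example, installing the MATE desktop as root:

apt-get update
apt-get install education-desktop-mate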

I believe the ISO build will start working again in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie based installation ready before the distribution freeze in a month.

25th September 2014

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc., in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and had a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer, who are interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

16th September 2014

The Debian installer could be a lot quicker. When we install more than 2000 packages in Skolelinux / Debian Edu using tasksel in the installer, unpacking the binary packages takes forever. Part of the slow I/O issue was discussed in bug #613428, about too much file system sync-ing done by dpkg, which is the package responsible for unpacking the binary packages. Other parts (like code executed by postinst scripts) might also sync to disk during installation. All this sync-ing to disk does not really make sense to me. If the machine crashes half-way through, I start over; I do not try to salvage the half-installed system. So the failure that sync-ing is supposed to protect against, a hardware or system crash, is not really relevant while the installer is running.

A few days ago, I thought of a way to get rid of all the file system sync()-ing in a fairly non-intrusive way, without the need to change the code in several packages. The idea is not new, but I have not heard anyone propose the approach using dpkg-divert before. It depends on the small and clever package eatmydata, which uses LD_PRELOAD to replace the system functions for syncing data to disk with functions doing nothing, thus allowing programs to live dangerously while speeding up disk I/O significantly. Instead of modifying the implementation of dpkg, apt and tasksel (which are the packages responsible for selecting, fetching and installing packages), it occurred to me that we could just divert the programs away and replace them with simple shell wrappers calling "eatmydata $program $@", to get the same effect. Two days ago I decided to test the idea, and wrapped up a simple implementation for the Debian Edu udeb.

The effect was stunning. In my first test it reduced the running time of the pkgsel step (installing tasks) from 64 to less than 44 minutes (20 minutes shaved off the installation) on an old Dell Latitude D505 machine. I am not quite sure what the optimised time would have been, as I messed up the testing a bit, causing the debconf priority to get low enough for two questions to pop up during installation. As soon as I saw the questions I moved the installation along, but I do not know how long the questions were holding up the installation. I did some more measurements using Debian Edu Jessie, and got these results. The time measured is the time stamp in /var/log/syslog between the "pkgsel: starting tasksel" and the "pkgsel: finishing up" lines, if you want to do the same measurement yourself. In Debian Edu, the tasksel dialog does not show up, so the timing does not depend on how quickly the user handles the tasksel dialog.

Machine/setup                  Original tasksel       Optimised tasksel      Reduction
Latitude D505 Main+LTSP LXDE   64 min (07:46-08:50)   <44 min (11:27-12:11)  >20 min (18%)
Latitude D505 Roaming LXDE     57 min (08:48-09:45)   34 min (07:43-08:17)   23 min (40%)
Latitude D505 Minimal          22 min (10:37-10:59)   11 min (11:16-11:27)   11 min (50%)
Thinkpad X200 Minimal           6 min (08:19-08:25)    4 min (08:04-08:08)    2 min (33%)
Thinkpad X200 Roaming KDE      19 min (09:21-09:40)   15 min (10:25-10:40)    4 min (21%)

The tests were done using a netinst ISO on a USB stick, so some of the time was spent downloading packages. The connection to the Internet was 100Mbit/s during testing, so downloading should not be a significant factor in the measurements. Downloads typically took a few seconds to a few minutes, depending on the number of packages being installed.

The speedup is implemented using two hooks in debian-installer: the pre-pkgsel.d hook to set up the diverts, and the finish-install.d hook to remove them at the end of the installation. I picked the pre-pkgsel.d hook instead of the post-base-installer.d hook because I test using an ISO without the eatmydata package included, and the post-base-installer.d hook in Debian Edu can only operate on packages included in the ISO. The negative effect of this is that I am unable to activate this optimization for the kernel installation step in d-i. If the code were moved to the post-base-installer.d hook, the speedup would be larger for the entire installation.

I've implemented this in the debian-edu-install git repository, and plan to provide the optimization as part of the Debian Edu installation. If you want to test this yourself, you can create two files in the installer (or in an udeb). One shell script needs to go into /usr/lib/pre-pkgsel.d/, with content like this:

#!/bin/sh
set -e
. /usr/share/debconf/confmodule
info() {
    logger -t my-pkgsel "info: $*"
}
error() {
    logger -t my-pkgsel "error: $*"
}
override_install() {
    apt-install eatmydata || true
    if [ -x /target/usr/bin/eatmydata ] ; then
        for bin in dpkg apt-get aptitude tasksel ; do
            file=/usr/bin/$bin
            # Test that the file exists and has not been diverted already.
            if [ -f /target$file ] ; then
                info "diverting $file using eatmydata"
                printf "#!/bin/sh\neatmydata $bin.distrib \"\$@\"\n" \
                    > /target$file.edu
                chmod 755 /target$file.edu
                in-target dpkg-divert --package debian-edu-config \
                    --rename --quiet --add $file
                ln -sf ./$bin.edu /target$file
            else
                error "unable to divert $file, as it is missing."
            fi
        done
    else
        error "unable to find /usr/bin/eatmydata after installing the eatmydata pacage"
    fi
}

override_install

To clean up, another shell script should go into /usr/lib/finish-install.d/ with code like this:

#! /bin/sh -e
. /usr/share/debconf/confmodule
error() {
    logger -t my-finish-install "error: $@"
}
remove_install_override() {
    for bin in dpkg apt-get aptitude tasksel ; do
        file=/usr/bin/$bin
        if [ -x /target$file.edu ] ; then
            rm /target$file
            in-target dpkg-divert --package debian-edu-config \
                --rename --quiet --remove $file
            rm /target$file.edu
        else
            error "Missing divert for $file."
        fi
    done
    sync # Flush file buffers before continuing
}

remove_install_override

In Debian Edu, I placed both code fragments in a separate script edu-eatmydata-install and call it from the pre-pkgsel.d and finish-install.d scripts.

By now you might ask if this change should get into the normal Debian installer too. I suspect it should, but I am not sure the current debian-installer coordinators would find it useful enough. It also depends on the side effects of the change. I'm not aware of any, but I guess we will see if the change is safe after some more testing. Perhaps there is some package in Debian depending on sync() and fsync() having an effect? Perhaps it should go into its own udeb, to allow those of us wanting to enable it to do so without affecting everyone else.

Update 2014-09-24: Since a few days ago, enabling this optimization will break installation of all programs using gnutls because of bug #702711. An updated eatmydata package in Debian will solve it.

Update 2014-10-17: The bug mentioned above is fixed in testing and the optimization work again. And I have discovered that the dpkg-divert trick is not really needed and implemented a slightly simpler approach as part of the debian-edu-install package. See tools/edu-eatmydata-install in the source package.

Update 2014-11-11: Unfortunately, a new bug #765738 in eatmydata only triggering on i386 made it into testing, and broke this installation optimization again. If unblock request 768893 is accepted, it should be working again.

10th September 2014

Yesterday, I had the pleasure of attending a talk with the Norwegian Unix User Group about the OpenPGP keyserver pool sks-keyservers.net, and was very happy to learn that there is a large set of publicly available key servers to use when looking for people's public keys. So far I have used subkeys.pgp.net, and sometimes wwwkeys.nl.pgp.net when the former was misbehaving, but those days are over. The servers I have used up until yesterday have been slow and sometimes unavailable. I hope those problems are gone now.

Behind the round robin DNS entry of the sks-keyservers.net service there is a pool of more than 100 keyservers which are checked every day to ensure they are well connected and up to date. It must be better than what I have used so far. :)

Yesterday's speaker told me that the service is the default keyserver in the default configuration of GnuPG, but this does not seem to be used in Debian. Perhaps it should?

Anyway, I've updated my ~/.gnupg/options file to now include this line:

keyserver pool.sks-keyservers.net

With GnuPG version 2 one can also locate the keyserver using SRV entries in DNS. Just for fun, I did just that at work, so now every user of GnuPG at the University of Oslo should find an OpenPGP keyserver automatically, should they need it:

% host -t srv _pgpkey-http._tcp.uio.no
_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net.
%
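
If you want to publish a similar SRV record for your own domain, the zone file entry matching the lookup above would look something like this (BIND syntax):

_pgpkey-http._tcp  IN SRV 0 100 11371 pool.sks-keyservers.net.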

Now if only the HKP lookup protocol supported finding signature paths, I would be very happy. It can look up a given key or search for a user ID, but I normally do not want that; I want to find a trust path from my key to another key. Given a user ID or key ID, I would like to find (and download) the keys representing a signature path from my key to the key in question, to be able to get a trust path between the two keys. This is, as far as I can tell, not possible today. Perhaps something for a future version of the protocol?

25th August 2014

Two years later, I am still not sure if it is legal here in Norway to use or publish a video in H.264 or MPEG4 format edited by the commercially licensed video editors, without limiting the use to create "personal" or "non-commercial" videos or getting a license agreement with MPEG LA. If one wants to publish and broadcast video in a non-personal or commercial setting, it might be that those tools cannot be used, or that the video format cannot be used, without breaking their copyright license. I am not sure. Back then, I found that the copyright license terms for Adobe Premiere and Apple Final Cut Pro both specified that one could not use the programs to produce anything else without a patent license from MPEG LA. The issue is not limited to those two products, though. Other much-used products, like those from Avid and Sorenson Media, have terms of use similar to those from Adobe and Apple. The complicating factor making me unsure whether those terms have effect in Norway or not is that the patents in question are not valid in Norway, but copyright licenses are.

These are the terms for Avid Artist Suite, according to their published end user license text (converted to lower case text for easier reading):

18.2. MPEG-4. MPEG-4 technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com. This product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use, such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

18.3. H.264/AVC. H.264/AVC technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the AVC patent portfolio license for the personal use of a consumer or other uses in which it does not receive remuneration to (i) encode video in compliance with the AVC standard (“AVC video”) and/or (ii) decode AVC video that was encoded by a consumer engaged in a personal activity and/or was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. Additional information may be obtained from MPEG LA, L.L.C. See http://www.mpegla.com.

Note the requirement that the videos created can only be used for personal or non-commercial purposes.

The Sorenson Media software has similar terms:

With respect to a license from Sorenson pertaining to MPEG-4 Video Decoders and/or Encoders: Any such product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com.

With respect to a license from Sorenson pertaining to MPEG-4 Consumer Recorded Data Encoder, MPEG-4 Systems Internet Data Encoder, MPEG-4 Mobile Data Encoder, and/or MPEG-4 Unique Use Encoder: Any such product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use. Such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

Some free software, like Handbrake and FFmpeg, is licensed under the GPL/LGPL and does not include any such terms, so for those there is no requirement to limit the use to personal and non-commercial purposes.

31st July 2014

The complete and free "out of the box" software solution for schools, Debian Edu / Skolelinux, is used quite a lot in Germany, and one of the people involved is Bernd Zeitzen, who shows up on the project mailing lists from time to time with interesting questions and tips on how to adjust the setup. I managed to interview him this summer.

Who are you, and how do you spend your days?

My name is Bernd Zeitzen and I'm married to Hedda, a self-employed physiotherapist. My former profession is tool maker, but I haven't worked in that job for 30 years. 30 years ago I started to support my wife, becoming her office worker and a few years later the administrator of a small computer network, today based on Ubuntu Server (Samba, OpenVPN). For her daily work she has to use Windows desktops, because the software she needs to organize her business only works with Windows. :-(

In 1988 we started with one PC and DOS, then I learned to use Windows 98, 2000, XP, …, 8, Ubuntu, MacOSX. Today we are running a Linux server with 6 Windows clients and 10 people (teachers of children with special needs, a speech therapist, an occupational therapist, a psychologist and office workers) using our Samba shares via OpenVPN to work with the documentation of our patients.

How did you get in contact with the Skolelinux / Debian Edu project?

Two years ago a friend of mine asked me if I wanted a job at his school (Gymnasium Harsewinkel). They had started with Skolelinux / Debian Edu and were looking for people to support the teachers using the software and the network, and to teach the pupils in optional lessons to improve their computer skills. I'm spending 4-6 hours a week on this job.

What do you see as the advantages of Skolelinux / Debian Edu?

The independence.

First: Every person is allowed to use, share and develop the software. Even if you are poor, you are allowed to use the software included in Skolelinux/Debian Edu and all the other Free Software.

Second: The software runs on old machines, and this gives us the possibility to recycle computers weeded out of offices. The servers and desktops have been running for more than two years and are working reliably.

We have two servers (one tjener and one terminal server), 45 workstations in three classrooms and seven laptops as a mobile solution for all classrooms. These machines all boot from the terminal server. At the moment we are installing 30 laptops as mobile workstations. Then the pupils will have the possibility to work with these machines in their classrooms. Internet access is realized by a WLAN router connected to the school's network. This is all done without a dedicated system administrator or a computer science teacher.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Teachers and pupils are Windows users. <Irony on> And Linux isn't cool. It's software for freaks using the command line. <Irony off> They don't realize the stability of the system.

Which free software do you use daily?

Firefox, Thunderbird, LibreOffice, Ubuntu Server 12.04 (Samba, Apache, MySQL, Joomla!, … and Skolelinux / Debian Edu)

Which strategy do you believe is the right one to use to get schools to use free software?

In Germany we have this situation: every school is free to decide which software it wants to use. This decision is influenced by teachers who learned to use Windows and MS Office. They buy a PC with Windows preinstalled and an additional trial version of MS Office. They don't know about the possibility to use Free Software instead. Another problem is the school book publishers. They develop the software bundled with their school books for Windows.

23rd July 2014

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally completed translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proof reading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML versions available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

17th June 2014

The Debian Edu / Skolelinux project provides an instruction manual for teachers, system administrators and other users that contains useful tips for setting up and maintaining a Debian Edu installation. This text is about how the text processing of this manual is handled in the project.

One goal of the project is to provide information in the native language of its users, and for this we need to handle translations. But we also want to make sure each language contains the same information, so we need a good way to keep the translations in sync. And we want it to be easy for our users to improve the documentation, avoiding the need to learn special formats or tools to contribute, and the obvious way to do this is to make it possible to edit the documentation using a web browser. We also want it to be easy for translators to keep the translations up to date, and to help them figure out what needs to be translated. Here is the list of tools and the process we have found trying to reach all these goals.

We maintain the authoritative source of our manual in the Debian wiki, as several wiki pages written in English. It consists of one front page with references to the different chapters, several pages for each chapter, and finally one "collection page" gluing all the chapters together into one large web page (aka the AllInOne page). The AllInOne page is the one used for further processing and translations. Thanks to the fact that the MoinMoin installation on wiki.debian.org supports exporting pages in the Docbook format, we can fetch the list of pages to export using the raw version of the AllInOne page and loop over each of them to generate a Docbook XML version of the manual. This process also downloads images and transforms image references to use the locally downloaded images. The generated Docbook XML files are slightly broken, so some post-processing is done using the documentation/scripts/get_manual program, and the result is a nice Docbook XML file (debian-edu-wheezy-manual.xml) and a handful of images. The XML file can now be used to generate PDF, HTML and epub versions of the English manual. This is the basic step of our process, making the PDF (using dblatex), HTML (using xsltproc) and epub (using dbtoepub) versions from Docbook XML, and the resulting files are placed in the debian-edu-doc-en binary package.
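
The last step can be sketched with a few shell commands. This is only an illustration of the idea, not the real build, which uses the scripts in the debian-edu-doc source package; the stylesheet path shown is the one provided by the docbook-xsl package in Debian:

# PDF version using dblatex
dblatex debian-edu-wheezy-manual.xml
# HTML version using xsltproc and the Docbook XSL stylesheets
xsltproc -o debian-edu-wheezy-manual.html \
  /usr/share/xml/docbook/stylesheet/docbook-xsl/xhtml/docbook.xsl \
  debian-edu-wheezy-manual.xml
# epub version using dbtoepub
dbtoepub debian-edu-wheezy-manual.xml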

But English documentation is not enough for us. We want translated documentation too, and we want to make it easy for translators to track the English original. For this we use the poxml package, which allows us to transform the English Docbook XML file into a translation template (a .pot file), usable with the normal gettext based translation tools used by those translating free software. The pot file is used to create and maintain translation files (several .po files), which the translators update with the native language translations of all titles, paragraphs and blocks of text in the original. The next step is combining the original English Docbook XML and the translation file (say debian-edu-wheezy-manual.nb.po) to create a translated Docbook XML file (in this case debian-edu-wheezy-manual.nb.xml). This translated (or partly translated, if the translation is not complete) Docbook XML file can then be used like the original to create PDF, HTML and epub versions of the documentation.
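
Sketched using the command line tools from the poxml and gettext packages, the round trip looks something like this (file names follow the examples above):

# Create or refresh the translation template from the English original
xml2pot debian-edu-wheezy-manual.xml > debian-edu-wheezy-manual.pot
# Merge new and changed strings into an existing translation
msgmerge --update debian-edu-wheezy-manual.nb.po debian-edu-wheezy-manual.pot
# Combine the original XML and the translation into translated XML
po2xml debian-edu-wheezy-manual.xml debian-edu-wheezy-manual.nb.po \
  > debian-edu-wheezy-manual.nb.xml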

The translators use different tools to edit the .po files. We recommend using lokalize, while some use emacs and vi, and others use web based editors like Pootle or Transifex. All we care about is where the .po file ends up: in our git repository. Updated translations can either be committed directly to git, or submitted as bug reports against the debian-edu-doc package.

One challenge is images, which both might need to be translated (if they show translated user applications), and are needed in different formats when creating PDF and HTML versions (epub is an HTML version in this regard). For this we transform the original PNG images to the needed density and format during build, and have a way to provide translated images by storing translated versions in images/$LANGUAGECODE/. I am a bit unsure about the details here. The package maintainers know more.

If you wonder what the result looks like, we provide the content of the documentation packages on the web. See for example the Italian PDF version or the German HTML version. We do not yet build the epub version by default, but perhaps it will be done in the future.

To learn more, check out the debian-edu-doc package, the manual on the wiki and the translation instructions in the manual.

29th May 2014

Dear lazyweb. I'm planning to set up a small Raspberry Pi computer in my car, connected to a small screen next to the rear-view mirror. I plan to hook it up with a GPS and a USB wifi card too. The idea is to get my own "carputer". But I wonder if someone has already created a good free software solution for such a car computer.

This is my current wish list for such system:

  • Work on Raspberry Pi.
  • Show the current speed limit based on location, and warn if going too fast (for example using the color codes yellow and red on the screen, or making a sound). This could be done using either data from OpenStreetMap or OCR info gathered from a dashboard camera.
  • Track automatic toll road passes and their cost, show the total spent and make it possible to calculate toll costs for a planned route.
  • Collect GPX tracks for use with OpenStreetMap.
  • Automatically detect and use any wireless connection to connect to the home server. Try IP over DNS (iodine) or ICMP (Hans) if a direct connection does not work.
  • Set up a mesh network to talk to other cars with the same system, or use some standard car mesh protocol.
  • Warn when approaching speed cameras and speed camera ranges (speed calculated between two cameras).
  • Support a dashboard/front facing camera to discover speed limits and run OCR to track the registration numbers of passing cars.

If you know of any free software car computer system supporting some or all of these features, please let me know.

Tags: english.
29th April 2014

I've been following the Gnash project for quite a while now. It is a free software implementation of Adobe Flash, both a standalone player and a browser plugin. Gnash implements support for the AVM1 format (and not the newer AVM2 format - see Lightspark for that one), allowing several flash based sites to work. Thanks to the friendly developers at Youtube, it also works with Youtube videos, because the Javascript code at Youtube detects Gnash and serves an AVM1 player to those users. :) It would be great if someone found time to implement AVM2 support, but it has not happened yet. If you install both Lightspark and Gnash, Lightspark will invoke Gnash if it finds an AVM1 flash file, so you can get both handled as free software. Unfortunately, Lightspark so far only implements a small subset of AVM2, and many sites do not work yet.

A few months ago, I started looking at Coverity, the static source checker used to find heaps and heaps of bugs in free software (thanks to the donation of a scanning service to free software projects by the company developing this non-free code checker), and Gnash was one of the projects I decided to check out. Coverity is able to find locking errors, memory errors, dead code and more. A few days ago they even extended it to be able to find the Heartbleed bug in OpenSSL. There are heaps of checks being done on the instrumented code, and the number of bogus warnings is quite low compared to the other static code checkers I have tested over the years.

Since a few weeks ago, I've been working with the other Gnash developers on squashing bugs discovered by Coverity. I was quite happy today when I checked the current status and saw that of the 777 issues detected so far, 374 are marked as fixed. This makes me confident that the next Gnash release will be more stable and more dependable than the previous one. Most of the reported issues were and are in the test suite, but it also found a few in the rest of the code.

If you want to help out, you will find us on the gnash-dev mailing list and in the #gnash channel on the irc.freenode.net IRC server.

23rd April 2014

It would be nice if it were easier in Debian to get all the hardware related packages relevant for the computer installed automatically. So I implemented a way to do just that, using my Isenkram package. To use it, install the tasksel and isenkram packages and run tasksel as user root. You should be presented with a new option, "Hardware specific packages (autodetected by isenkram)". When you select it, tasksel will install the packages isenkram claims are fit for the current hardware, hot pluggable or not.
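
In other words, something like this, run as root:

apt-get install tasksel isenkram
tasksel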

The implementation is in two files: one is the tasksel menu entry description, and the other is the script used to extract the list of packages to install. The first part is in /usr/share/tasksel/descs/isenkram.desc and looks like this:

Task: isenkram
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware

The second part is in /usr/lib/tasksel/packages/for-current-hardware and looks like this:

#!/bin/sh
# Combine the hardware package lookup and the firmware lookup,
# removing duplicates.
(
    isenkram-lookup
    isenkram-autoinstall-firmware -l
) | sort -u

All in all, a very short and simple implementation making it trivial to install the hardware dependent packages we all may want to have installed on our machines. I've not been able to find a way to get tasksel to tell you exactly which packages it plans to install before doing the installation. So if you are curious or careful, check the output from the isenkram-* command line tools first.

The information about which packages handle which hardware is fetched either from the isenkram package itself in /usr/share/isenkram/, from git.debian.org or from the APT package database (using the Modaliases header). The APT package database parsing has caused a nasty resource leak in the isenkram daemon (bugs #719837 and #730704). The cause is in the python-apt code (bug #745487), but using a workaround I was able to get rid of the file descriptor leak and reduce the memory leak from ~30 MiB per hardware detection down to around 2 MiB per hardware detection. It should make the desktop daemon a lot more useful. The fix is in version 0.7, uploaded to unstable today.

I believe the current way of mapping hardware to packages in Isenkram is a good draft, but in the future I expect isenkram to use the AppStream data source for this. A proposal for getting proper AppStream support into Debian is floating around as DEP-11, and a GSoC project will take place this summer to improve the situation. I look forward to seeing the result, and welcome patches for isenkram to start using the information when it is ready.

If you want your package to map to some specific hardware, either add an "Xb-Modaliases" header to your control file, like I did in the pymissile package, or submit a bug report with the details to the isenkram package. See also all my blog posts tagged isenkram for details on the notation. I expect the information will be migrated to AppStream eventually, but for the moment I have no better place to store it.
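
As an illustration, such a stanza in debian/control could look something like this (the modalias pattern is only an example; the real pattern must match the hardware in question):

Package: pymissile
Xb-Modaliases: usb:v1130p0202d*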

15th April 2014

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It is still going strong, and today a major milestone was reached.

Today, the last of the packages currently used by the project to create the system images was accepted into Debian Unstable. It was the freedombox-setup package, which is used to configure the images during build and on the first boot. Now all one needs to get going is the build code from the freedom-maker git repository and packages from Debian. And once the freedombox-setup package enters testing, we can build everything directly from Debian. :)

Some key packages used by Freedombox are freedombox-setup, plinth, pagekite, tor, privoxy, owncloud and dnsmasq. There are plans to integrate more packages into the setup. User documentation is maintained on the Debian wiki. Please check out the manual and help us improve it.

To test for yourself and create boot images with the FreedomBox setup, run this on a Debian machine, as a user with sudo rights to become root:

sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README in the freedom-maker git repo for more details on the build. If you do not want all three images, trim the make line, as shown in the example below. Note that the virtualbox-image target is not really virtualbox specific. It creates an x86 image usable in kvm, qemu, vmware and any other x86 virtual machine environment. You might need the version of vmdebootstrap in Jessie to get the build working, as it includes fixes for a race condition with kpartx.
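
For example, if you only want the VirtualBox image, trim the make line like this:

make -C freedom-maker virtualbox-image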

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat

I have not tested it myself the last few weeks, so I do not know if it still works.

If you wonder how to help, one task you could look at is using systemd as the boot system. It will become the default for Linux in Jessie, so we need to make sure it is usable on the Freedombox. I did a simple test a few weeks ago, and noticed that dnsmasq failed to start during boot when using systemd. I suspect there are other problems too. :) To detect problems, there is a test suite included, which can be run from the plinth web interface.

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

9th April 2014

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage: write a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to get lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have looked at a system called S3QL, a locally mounted network backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, namely any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and to look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a house fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password

I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs, entering the API details and password to create the file system:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# 

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as it will not flush the cache to the cloud storage, but instead run the umount.s3ql command like this:

# umount.s3ql /s3ql
# 
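
As for the rsync run itself, I will not claim my setup is the best, but a simple variant could look something like this sketch (the paths are made up for the example). The first run uploads everything, while later runs only transfer changed files, which is where the de-duplication and local cache really pay off:

# rsync -aHAX --delete /home/ /s3ql/backup-home/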

There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
# 

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.
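
If you want to experiment, the cache size can be set when mounting. A sketch, assuming I read the mount.s3ql manual page correctly (the --cachesize value is given in KiB, so this should give a 1 GiB cache):

# mount.s3ql --cachesize 1048576 --cachedir /var/lib/s3ql-cache \
  --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql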

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
# 
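
The metadata upload interval can also be changed at mount time. If I read the mount.s3ql manual page correctly, the --metadata-upload-interval option takes a value in seconds, so something like this sketch should upload the metadata every hour instead of every 24 hours:

# mount.s3ql --metadata-upload-interval 3600 --cachedir /var/lib/s3ql-cache \
  --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql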

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different and you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which tells me that the file system is getting critical review from the science community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack’s Swift Object Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provided POSIX semantics when it comes to locking, umask handling etc). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.
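
My test code is not included here, but the kind of properties it checks can be approximated using standard tools. A rough sketch using flock(1) from util-linux for locking and the shell umask builtin for permission handling (the file name is made up):

# ( umask 027; touch /s3ql/posix-test; ls -l /s3ql/posix-test )
# flock /s3ql/posix-test -c 'echo got the lock'
# rm /s3ql/posix-test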

If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st April 2014

Microsoft has announced that Windows XP reaches its end of life 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both when it comes to money and when it comes to the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications, and run them on a free software operating system that is Windows XP compatible.

ReactOS is a free software operating system (GNU GPL licensed) aiming to be binary compatible with Windows, able to run Windows programs directly and to use Windows drivers for hardware directly. The project goal is for Windows users to keep their existing machines, drivers and software, while gaining the advantages of an operating system without usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, and thus quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are implemented already. There is also a software manager like the one we are used to on Linux, allowing the user to install free software applications with a simple click directly from the Internet. Check out the screen shots on the project web site for an idea what it looks like (it looks just like Windows before Metro).

I do not use ReactOS myself, preferring Linux and Unix like operating systems. I've tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad etc. are working fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else will figure out and fix the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

30th March 2014

Debian Edu / Skolelinux keeps gaining new users. Some weeks ago, a person showed up on IRC, #debian-edu, with a wish to contribute, and I managed to get an interview with this great contributor, Roger Marsal, to learn more about his background.

Who are you, and how do you spend your days?

My name is Roger Marsal, I'm 27 years old (1986 generation) and I live in Barcelona, Spain. I've got a strong business background and I work as a patrimony manager and as a real estate agent. Additionally, I've co-founded a British based tech company that is nowadays on the last development phase of a new social networking concept.

I'm a Linux enthusiast who started his journey with Ubuntu four years ago and recently switched to Debian, seeking rock solid stability and as a necessary step to gain expertise.

In a nutshell, I spend my days working and learning as much as I can to handle both my job and my entrepreneurial project, and to feed my Linux hunger.

How did you get in contact with the Skolelinux / Debian Edu project?

I discovered the LTSP advantages with the "Ubuntu 12.04 alternate install" and after a year of use I started looking for an alternative. Even though I highly value and respect the Ubuntu project, I thought it was necessary for me to change to a more robust and stable alternative. As I was already using Debian on my personal laptop, I thought it would be fine to install Debian and configure an LTSP server myself. To my surprise, I discovered that the Debian project also supported a kind of Edubuntu equivalent, and after some pain I got a Debian Edu network up and running. I just loved it.

What do you see as the advantages of Skolelinux / Debian Edu?

I found a main advantage in that, once you know "the tips and tricks", a new installation just works out of the box. It's the most complete alternative I've found to create an LTSP network. All the other distributions seem to be made of plastic; Debian Edu seems to be made of steel.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I found two main disadvantages.

I'm not an expert, but I've got notions, and I had to spend a considerable amount of time trying to bring up a standard network topology. I'm quite stubborn and I just worked until I did, but I'm sure many people with few resources (not big schools, but academies for example) would have switched or dropped out.

It's amazing how such a complex system like Debian Edu has achieved this out-of-the-box state. On the other hand, tweaking without breaking gets more difficult, as more factors have to be considered. This can discourage many people too.

Which free software do you use daily?

I use Debian, Firefox, Okular, Inkscape, LibreOffice and Virtualbox.

Which strategy do you believe is the right one to use to get schools to use free software?

I don't think there is a need for a particular strategy. The free attribute, in both the "freedom" and "no price" meanings, is what will really bring free software to schools. In my experience I can think of the "R" statistical language; a few years ago it was an extremely nerdy tool for university people. Today it is increasingly used to teach statistics at many different levels of study. I believe free and open software will increasingly gain popularity, and I'm sure schools will be one of the first scenarios where this will happen.

25th March 2014

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and needed a way to demonstrate without reasonable doubt that the files had not been changed since they were created? Or did you ever need to document that a given document was received at some point in time, like some archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and their digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such services could work a few years ago in RFC 3161. The mechanism is simple: create a hash of the file in question, send it to a trusted third party which adds a time stamp to the hash and signs the result with its private key, and get back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.

But as far as I can tell, there are very few public trusted timestamp services available for everyone. I've been looking for one for a while now. But yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

The OpenSSL library contains both a server and tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to extract a signed timestamp for any file on the disk in a Debian environment:

#!/bin/sh
set -e
url="http://zeitstempel.dfn.de"
caurl="https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt"
reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
cafile=chain.txt
if [ ! -f "$cafile" ] ; then
    wget -O "$cafile" "$caurl"
fi
openssl ts -query -data "$1" -cert | tee "$reqfile" \
    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
openssl ts -reply -in "$resfile" -text 1>&2
openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
base64 < "$resfile"
rm "$reqfile" "$resfile"

The argument to the script is the file to timestamp, and the output is a base64 encoded version of the signature to STDOUT and details about the signature to STDERR. Note that due to a bug in the tsget script, you might need to modify the included script and remove the last line. Or just write your own HTTP uploader using curl. :) Now you too can prove and verify that files have not been changed.
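
For those wanting to try the curl approach, an untested sketch of a tsget replacement could look like this, using the MIME type specified in RFC 3161:

#!/bin/sh
# Hypothetical tsget replacement using curl.  $1 is the query file
# created with 'openssl ts -query', $2 is where to store the reply.
curl --silent \
    --header "Content-Type: application/timestamp-query" \
    --data-binary "@$1" \
    --output "$2" \
    "http://zeitstempel.dfn.de"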

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett or my work place, the University of Oslo, to set up?

Tags: english, sikkerhet.
21st March 2014

Keeping your DVD collection safe from scratches and curious children's fingers while still having it available when you want to see a movie is not straightforward. My preferred method at the moment is to store a full copy of the ISO on a hard drive, and use VLC, Popcorn Hour or other useful players to view the resulting file. This way the subtitles and bonus material are still available, and using the ISO is just like inserting the original DVD in the DVD player.

Earlier I used dd for taking security copies, but it does not handle DVDs with read errors (and there are quite a few of those). I've also tried using dvdbackup and genisoimage, but these days I use the marvellous Python library and program python-dvdvideo written by Bastian Blank. It is in Debian already, and the binary package name is python3-dvdvideo. Instead of trying to read every block from the DVD, it parses the file structure and figures out which blocks on the DVD are actually in use, and only reads those blocks from the DVD. This works surprisingly well, and I have been able to back up almost my entire DVD collection using this method.

So far, python-dvdvideo has failed on between 10 and 20 DVDs, which is a small fraction of my collection. The most common problem is DVDs using UTF-16 instead of UTF-8 characters, which according to Bastian is against the DVD specification (and seems to cause some players to fail too). A rarer problem is what seems to be inconsistent DVD structures, where the Python library claims there is an overlap between objects. An equally rare problem claims some value is out of range. No idea what is going on there. I wish I knew enough about the DVD format to fix these, to ensure my movie collection will stay with me in the future.

So, if you need to keep your DVDs safe, back them up using python-dvdvideo. :)

14th March 2014

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to communicate with their friends and family encrypted and away from prying eyes. It has been going on for a while, and is slowly progressing towards a new test release (0.2).

And what day could be better than Pi day to announce that the new version will provide "hard drive" / SD card / USB stick images for Dreamplug, Raspberry Pi and VirtualBox (or any other virtualization system), and can also be installed using a Debian installer preseed file. The Debian based Freedombox is now based on Debian Jessie, where most of the needed packages are already present. Only one, the freedombox-setup package, is missing. To build your own boot image to test the current status, fetch the freedom-maker scripts and build using vmdebootstrap with a user with sudo access to become root:

git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README for more details on the build. If you do not want all three images, trim the make line. But note that thanks to a race condition in vmdebootstrap, the build might fail without the patch to the kpartx call.

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat

But note that due to a recently introduced bug in apt in Jessie, the installer will currently hang while setting up APT sources. Killing the 'apt-cdrom ident' process when it hangs a few times during the installation will get the installation going. This affects all installations in Jessie, and I expect it will be fixed soon.

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

12th March 2014

On larger sites, it is useful to use a dedicated storage server for storing user home directories and data. The design for handling this in Debian Edu / Skolelinux is to update the automount rules in LDAP and let the automount daemon on the clients take care of the rest. I was reminded about the need to document this better when one of the customers of Skolelinux Drift AS, where I am on the board of directors, asked about how to do this. The steps to get this working are the following:

  1. Add the new storage server in DNS. I use nas-server.intern as the example host here.
  2. Add automount information about this server in LDAP, to allow all clients to automatically mount it on request.
  3. Add the relevant entries in tjener.intern:/etc/fstab, because tjener.intern does not use automount, to avoid mounting loops.

DNS entries are added in GOsa², and not described here. Follow the instructions in the manual (Machine Management with GOsa² in section Getting started).

Ensure that the NFS export points on the server are exported to the relevant subnets or machines:

root@tjener:~# showmount -e nas-server
Export list for nas-server:
/storage         10.0.0.0/8
root@tjener:~#

Here everything on the backbone network is granted access to the /storage export. With NFSv3 it is slightly better to limit access to netgroup membership or single IP addresses, to have some limits on the NFS access.
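
For reference, a more restrictive export in /etc/exports on the storage server could look something like this sketch, using a netgroup or a single subnet instead of the entire backbone network (the netgroup name is made up):

# limit the export to a netgroup
/storage @nas-clients(rw,sync,no_subtree_check)
# or to a single subnet
#/storage 10.0.2.0/24(rw,sync,no_subtree_check)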

The next step is to update LDAP. This can not be done using GOsa², because it lacks a module for automount. Instead, use ldapvi and add the required LDAP objects using an editor.

ldapvi --ldap-conf -ZD '(cn=admin)' -b ou=automount,dc=skole,dc=skolelinux,dc=no

When the editor shows up, add the following LDAP objects at the bottom of the document. The "/&" part in the last LDAP object is a wild card matching everything the nas-server exports, removing the need to list individual mount points in LDAP.

add cn=nas-server,ou=auto.skole,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: nas-server
automountInformation: -fstype=autofs --timeout=60 ldap:ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no

add ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: top
objectClass: automountMap
ou: auto.nas-server

add cn=/,ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: /
automountInformation: -fstype=nfs,tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime nas-server.intern:/&

The last step to remember is to mount the relevant mount points on tjener.intern by adding them to /etc/fstab, creating the mount directories using mkdir and running "mount -a" to mount them.
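
With the example names used above, the commands on tjener.intern could look something like this sketch (adjust the mount point and options to local needs):

mkdir -p /tjener/nas-server/storage
echo 'nas-server.intern:/storage /tjener/nas-server/storage nfs rw,intr,hard,nodev,nosuid,noatime 0 0' \
  >> /etc/fstab
mount -a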

When this is done, your users should be able to access the files on the storage server directly by just visiting the /tjener/nas-server/storage/ directory using any application on any workstation, LTSP client or LTSP server.

22nd February 2014

Many years ago, I wrote a GPL licensed version of the netgroup and innetgr tools, because I needed them in Skolelinux. I called the project ng-utils, and it has served me well. I placed the project under the Hungry Programmer umbrella, and it was maintained in our CVS repository. But many years ago, the CVS repository was dropped (lost, not migrated to new hardware, I am not sure), and the project has lacked a proper home since then.

Last summer, I had a look at the package and made a new release fixing an irritating crash bug, but was unable to store the changes in a proper source control system. I applied for a project on Alioth, but did not have time to follow up on it. Until today. :)

After many hours of cleaning and migration, the ng-utils project now has a new home, and a git repository with the highlights of the history of the project. I published all release tarballs and imported them into the git repository. As the project is really stable and not expected to gain new features any time soon, I decided to make a new release and call it 1.0. Visit the new project home on https://alioth.debian.org/projects/ng-utils/ if you want to check it out. The new version is also uploaded into Debian Unstable.

Tags: debian, english.
3rd February 2014

A few days ago I decided to try to help the Hurd people to get their changes into sysvinit, to allow them to use the normal sysvinit boot system instead of their old one. This follows up the great Google Summer of Code work done last summer by Justus Winter to get Debian on Hurd working more like Debian on Linux. To get started, I downloaded a prebuilt hard disk image from http://ftp.debian-ports.org/debian-cd/hurd-i386/current/debian-hurd.img.tar.gz, and started it using virt-manager.

The first thing I had to do after logging in (root without any password) was to get the network operational. I followed the instructions on the Debian GNU/Hurd ports page and ran these commands as root to get the machine to accept an IP address from the KVM internal DHCP server:

settrans -fgap /dev/netdde /hurd/netdde
kill $(ps -ef|awk '/[p]finet/ { print $2}')
kill $(ps -ef|awk '/[d]evnode/ { print $2}')
dhclient /dev/eth0

After this, the machine had internet connectivity, and I could upgrade it and install the sysvinit packages from experimental and enable it as the default boot system in Hurd.

But before I did that, I set a password on the root user, as ssh is running on the machine and a password needs to be set for ssh login to work. Also, note that a bug somewhere in openssh on Hurd blocks compression from working. Remember to turn that off on the client side.

Run these commands as root to upgrade and test the new sysvinit stuff:

cat > /etc/apt/sources.list.d/experimental.list <<EOF
deb http://http.debian.net/debian/ experimental main
EOF
apt-get update
apt-get dist-upgrade
apt-get install -t experimental initscripts sysv-rc sysvinit \
    sysvinit-core sysvinit-utils
update-alternatives --config runsystem

To reboot after switching boot system, you have to use reboot-hurd instead of just reboot, as there is not yet a sysvinit process able to receive the signals from the normal 'reboot' command. After switching to sysvinit as the boot system, upgrading every package and rebooting, the network comes up with DHCP after boot as it should, and the settrans/kill hack mentioned at the start is no longer needed. But for some strange reason, there is no longer any login prompt on the virtual console, so I logged in using ssh instead.

Note that there are some race conditions in Hurd making the boot fail some times. No idea what the cause is, but I hope the Hurd porters figure it out. At least Justus said on IRC (#debian-hurd on irc.debian.org) that they are aware of the problem. A way to reduce the impact is to upgrade to the Hurd packages built by Justus by adding this repository to the machine:

cat > /etc/apt/sources.list.d/hurd-ci.list <<EOF
deb http://darnassus.sceen.net/~teythoon/hurd-ci/ sid main
EOF

At the moment the prebuilt virtual machine gets some packages from http://ftp.debian-ports.org/debian, because some of the packages in unstable do not yet include the required patches that are lingering in the BTS. This is the complete list of "unofficial" packages installed:

# aptitude search '?narrow(?version(CURRENT),?origin(Debian Ports))'
i   emacs                   - GNU Emacs editor (metapackage)
i   gdb                     - GNU Debugger
i   hurd-recommended        - Miscellaneous translators
i   isc-dhcp-client         - ISC DHCP client
i   isc-dhcp-common         - common files used by all the isc-dhcp* packages
i   libc-bin                - Embedded GNU C Library: Binaries
i   libc-dev-bin            - Embedded GNU C Library: Development binaries
i   libc0.3                 - Embedded GNU C Library: Shared libraries
i A libc0.3-dbg             - Embedded GNU C Library: detached debugging symbols
i   libc0.3-dev             - Embedded GNU C Library: Development Libraries and Hea
i   multiarch-support       - Transitional package to ensure multiarch compatibilit
i A x11-common              - X Window System (X.Org) infrastructure
i   xorg                    - X.Org X Window System
i A xserver-xorg            - X.Org X server
i A xserver-xorg-input-all  - X.Org X server -- input driver metapackage
#

All in all, testing Hurd has been an interesting experience. :) X.org did not work out of the box and I never took the time to follow the porters' instructions to fix it. This time I was interested in the command line stuff.

29th January 2014

Bitcoin is an incredible use of peer to peer communication and encryption, allowing direct and immediate money transfer without any central control. It is sometimes claimed to be ideal for illegal activity, which I believe is quite a long way from the truth. At least I would not conduct illegal money transfers using a system where the details of every transaction are kept forever. This point is investigated in USENIX ;login: from December 2013, in the article "A Fistful of Bitcoins - Characterizing Payments Among Men with No Names" by Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. They analyse the transaction log in the Bitcoin system, using it to find addresses belonging to individuals and organisations, and follow the flow of money from both Bitcoin theft and trades on Silk Road to where the money ends up. This is how they wrap up their article:

"To demonstrate the usefulness of this type of analysis, we turned our attention to criminal activity. In the Bitcoin economy, criminal activity can appear in a number of forms, such as dealing drugs on Silk Road or simply stealing someone else’s bitcoins. We followed the flow of bitcoins out of Silk Road (in particular, from one notorious address) and from a number of highly publicized thefts to see whether we could track the bitcoins to known services. Although some of the thieves attempted to use sophisticated mixing techniques (or possibly mix services) to obscure the flow of bitcoins, for the most part tracking the bitcoins was quite straightforward, and we ultimately saw large quantities of bitcoins flow to a variety of exchanges directly from the point of theft (or the withdrawal from Silk Road).

As acknowledged above, following stolen bitcoins to the point at which they are deposited into an exchange does not in itself identify the thief; however, it does enable further de-anonymization in the case in which certain agencies can determine (through, for example, subpoena power) the real-world owner of the account into which the stolen bitcoins were deposited. Because such exchanges seem to serve as chokepoints into and out of the Bitcoin economy (i.e., there are few alternative ways to cash out), we conclude that using Bitcoin for money laundering or other illicit purposes does not (at least at present) seem to be particularly attractive."

These researchers are not the first to analyse the Bitcoin transaction log. The 2011 paper "An Analysis of Anonymity in the Bitcoin System" by Fergal Reid and Martin Harrigan is summarized like this:

"Anonymity in Bitcoin, a peer-to-peer electronic currency system, is a complicated issue. Within the system, users are identified by public-keys only. An attacker wishing to de-anonymize its users will attempt to construct the one-to-many mapping between users and public-keys and associate information external to the system with the users. Bitcoin tries to prevent this attack by storing the mapping of a user to his or her public-keys on that user's node only and by allowing each user to generate as many public-keys as required. In this chapter we consider the topological structure of two networks derived from Bitcoin's public transaction history. We show that the two networks have a non-trivial topological structure, provide complementary views of the Bitcoin system and have implications for anonymity. We combine these structures with external information and techniques such as context discovery and flow analysis to investigate an alleged theft of Bitcoins, which, at the time of the theft, had a market value of approximately half a million U.S. dollars."

I hope these references can help kill the urban myth that Bitcoin is anonymous. It isn't really a good fit for illegal activities. Use cash if you need to stay anonymous, at least until regular DNA sampling of notes and coins becomes the norm. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th January 2014

Coverity is a nice tool for finding problems in C, C++ and Java code using static source code analysis. It can detect a lot of different problems, and is very useful for finding memory and locking bugs in the error handling parts of the source. The company behind it provides checking of free software projects as a community service, and many hundred free software projects are already checked. A few days ago I decided to have a closer look at the Coverity system, and discovered that the gnash and ipmitool projects I am involved with were already registered. But these are fairly big, and I also wanted a small and easy project to check, so I requested checking of the chrpath project. It was added to the checker, and seven potential defects were discovered. Six of these were real, mostly resource "leaks" when the program detected an error. Nothing serious, as the resources would be released a fraction of a second later when the program exited because of the error, but it is nice to do it right in case the source of the program some time in the future ends up in a library. Having fixed all defects and added a mailing list for the chrpath developers, I decided it was time to publish a new release. These are the release notes:

New in 0.16 released 2014-01-14:

  • Fixed all minor bugs discovered by Coverity.
  • Updated config.sub and config.guess from the GNU project.
  • Mention new project mailing list in the documentation.

You can download the new version 0.16 from alioth. Please let us know via the Alioth project if something is wrong with the new release. The test suite did not discover any old errors, so if you find a new one, please also include a test suite check.

25th December 2013

The Debian Edu / Skolelinux project consists of both newcomers and old timers, and this time I was able to get an interview with a newcomer in the project, who showed up on the IRC channel a few weeks ago to let us know about his successful installation of Debian Edu Wheezy at his school. Say hello to Dominik George.

Who are you, and how do you spend your days?

I am a 23 year-old student from Germany who has spent half of his life with open source. In "real life", I am, as already mentioned, a student in the fields of Computer Science, Electrical Engineering, Information Technologies and Anglistics. Due to my (only partially voluntary) huge engagement in the open source world, these things are a bit vacant right now however.

I have also been working as a project teacher at a Gymnasium (public school) for several years now. I took up that work some time around 2005, when still attending that school myself, and have continued it until today. I also used to run the (kind of very advanced) network of that school together with a team of very interested and talented students aged 11 to 15 years, who took the chance to learn a lot about open source and networking before I left the school to help build another school's informational education concept from scratch.

That said, one might see me as a kind of "glue" between school kids and the elderly of teachers as well as between the open source ecosystem and the (even more complex) educational ecosystem.

When I am not busy with open source or education, I like Geocaching and cycling.

How did you get in contact with the Skolelinux / Debian Edu project?

I think that happened some time around 2009 when I first attended FrOSCon and visited the project booth. I think I wasn't too interested back then because I used to have an attitude of disliking software that does too much stuff on its own. Maybe I was too inexperienced to realise the upsides of an "out-of-the-box" solution ;).

The first time I actively talked to Skolelinux people was at OpenRheinRuhr 2011, when the BiscuIT project, a home-grown software used by my school for various really cool things from timetables and class contact lists to lunch ordering, student ID card printing and project elections, first got to a stage where it could have been published. I asked the Skolelinux guys running the booth if the project were interested in it and gave a small demonstration, but there wasn't any real feedback and the guys seemed rather uninterested.

After I left the school where I developed the software, it got mostly lost, but I am now reimplementing it for my new school. I have reusability and compatibility in mind, and I hope there will be a new basis for contributing it to the Skolelinux project ;)!

What do you see as the advantages of Skolelinux / Debian Edu?

The most important advantage seems to be that it "just works". After overcoming some minor (but still very annoying) glitches in the installer, I got a fully functional, working school network, without the month-long hassle I experienced when setting all that up from scratch in earlier years. And above that, it rocked - I didn't have any real hardware at hand, because the school was just founded and has no money whatsoever, so I installed a combined server (main server, terminal services and workstation) in a VM on my personal notebook, bridging the LTSP network interface to the ethernet port, and then PXE-booted the Windows notebooks that were lying around from it. I could use 8 clients without any performance issues, by using a tiny little VM on a tiny little notebook. I think that's enough to say that it rocks!

Secondly, there are marketing reasons. Life's bad, and so no politician will ever permit a setup described as "Debian, a universal operating system, with some really cool educational tools", while they will be just fine with "Skolelinux, a single-purpose solution for your school network", even if both turn out to be the very same thing (yes, this is unfair towards the Skolelinux project, and must not be taken too seriously - you get the idea, anyway).

What do you see as the disadvantages of Skolelinux / Debian Edu?

I have not been involved with Skolelinux long enough to really answer this question in a fair way. Thus, please allow me to put it in other words: "What do you expect from Skolelinux to keep liking it?" I can list a few points about that:

  • always strive to get all things integrated into Debian upstream
  • be open to discussion about changes and the like, even with newcomers
  • be helpful at being helpful ;)

I'm really sorry I cannot say much more about that :(!

Which free software do you use daily?

First of all, all software I use is free and open. I have abandoned all non-free software (except for firmware on my darned phone) this year.

I run Debian GNU/Linux on all PC systems I use. On that, I mostly run text tools. I use mksh as shell, jupp as very advanced text editor (I even got the developer to help me write a script/macro based full-featured student management software with the two), mcabber for XMPP and irssi for IRC. For that overly coloured world called the WWW, I use Iceweasel (Firefox). Oh, and mutt for e-mail.

However, while I am personally aware of the fact that text tools are more efficient and powerful than anything else, I also use (or at least operate) some tools that are suitable to bring open source to kids. One of these things is Jappix, which I already introduced to some kids even before they got aware of Facebook, making them see for themselves that they do not need Facebook now ;).

Which strategy do you believe is the right one to use to get schools to use free software?

Well, that's a two-sided thing. One side is what I believe, and one side is what I have experienced.

I believe that the right strategy is showing them the benefits. But that won't work out until the acceptance of free alternatives grows globally. What I mean is that if all the kids are almost forced to use Windows, Facebook, Skype, you name it at home, they will not see why they would want to use alternatives at school. I have seen students take a seat in front of a fully-functional, modern Debian desktop that could do anything their Windows at home could do, and they just refused to use it because "Linux sucks". It is something that makes the council of our city spend around 600000 € to buy software - not including hardware, mind you - for operating school networks, and for installing a system that, as has been proved, does not work. For those of you readers who are good at maths, have you already found out how many lives could have been saved with that money if we had instead used it to bring education to parts of the world that need it? I have, and found it to be nothing less dramatic than plain criminal.

That said, the only feasible way appears to be the bottom up method. We have to bring free software to kids and parents. I have founded an association named Teckids here in Germany that does just that. We organise several events for kids and adolescents in the area of free and open source software, for example the FrogLabs, which share staff with Teckids and are the youth programme of the Free and Open Source Software Conference (FrOSCon). We do a lot more than most other conferences - this year, we first offered the FrogLabs as a holiday camp for kids aged 10 to 16. It was a huge success, with approx. 30 kids taking part and learning with and about free software through a whole weekend. All of us had a lot of fun, and the results were really exciting.

Apart from that, we are preparing a campaign that is supposed to bring the message of free alternatives to stuff kids use every day to them and their parents, e.g. the use of Jabber / Jappix instead of Facebook and Skype. To make that possible, we are planning to get together a team of clever kids who understand very well what their peers need and can bring it across to them. So we will have a peer-driven network of adolescents who teach each other and collect feedback from the community of minors. We then take that feedback and our own experience to work closely with open source projects, such as Skolelinux or Jappix, at improving their software in a way that makes it more and more attractive for the target group. At least I hope that we will have good cooperation with Skolelinux in the future ;)!

So in conclusion, what I believe is that, if it weren't for the world being so bad, it should be very clear to the political decision makers that the only way to go nowadays is free software for various reasons, but I have learnt that the only way that seems to work is bottom up.

6th December 2013

It has been a while since I managed to publish the last interview, but the Debian Edu / Skolelinux community is still going strong, and yesterday we even had a new school administrator show up on #debian-edu to share his success story with installing Debian Edu at their school. This time I have been able to get some helpful comments from the creator of Knoppix, Klaus Knopper, who was involved in a Skolelinux project in Germany a few years ago.

Who are you, and how do you spend your days?

I am Klaus Knopper. I have a master degree in electrical engineering, and am currently a professor in information management at the University of Applied Sciences Kaiserslautern in Germany, as well as a freelance open source software developer and consultant.

That is pretty much the work I spend my days with. Apart from teaching, I'm also conducting some more or less experimental projects like the Knoppix GNU/Linux live system (Debian-based like Skolelinux), ADRIANE (a blind-friendly talking desktop system) and LINBO (Linux-based network boot console, a fast remote install and repair system supporting various operating systems).

How did you get in contact with the Skolelinux / Debian Edu project?

The credit for this has to go to Kurt Gramlich, who is the German coordinator for Skolelinux. We were looking for an all-in-one open source community-supported distribution for schools, and Kurt introduced us to Skolelinux for this purpose.

What do you see as the advantages of Skolelinux / Debian Edu?

  • Quick installation,
  • works (almost) out of the box,
  • contains many useful software packages for teaching and learning,
  • is a purely community-based distro and not controlled by a single company,
  • has a large number of supporters and teachers who share their experience and problem solutions.

What do you see as the disadvantages of Skolelinux / Debian Edu?

  • Skolelinux is - as we had to learn - not easily upgradable to the next version. As opposed to its genuine Debian base, upgrading to a new version means a full new installation from scratch to get it working again reliably.
  • Skolelinux is based on Debian/stable, and therefore always a little outdated in terms of program versions compared to Edubuntu or similar educational Linux distros, which rather use Debian/testing as their base.
  • Skolelinux has some very self-opinionated and stubborn default configuration which in my opinion adds unnecessary complexity and is not always suitable for a school's needs. The preset network configuration is actually a core defining feature of Skolelinux and not easy to change, so schools sometimes have to change their network configuration to make it "Skolelinux-compatible".
  • Some proposed extensions, which were made available as contributions, like a secure examination mode and lecture material distribution and collection, were not accepted into the mainline Skolelinux development and are now not easy to maintain in the future because of Skolelinux's somewhat nondeterministic update schemes.
  • Skolelinux has only a very tiny number of base developers compared to Debian.

For these reasons and experience from our project, I would now rather consider using plain Debian for schools next time, until Skolelinux is more closely integrated into Debian and becomes upgradeable without reinstallation.

Which free software do you use daily?

GNU/Linux with LXDE desktop, bash for interactive dialog and programming, texlive for documentation and correspondence, occasionally LibreOffice for document format conversion. Various programming languages for teaching.

Which strategy do you believe is the right one to use to get schools to use free software?

Strong arguments are

  • Knowledge is free, and so should be methods and tools for teaching and learning.
  • Students can learn with and use the same software at school, at home, and at their working place without running into license or conversion problems.
  • Closed source or proprietary software hides knowledge rather than exposing it, and proprietary software vendors try to bind customers to certain products. But teachers need to teach science, not products.
  • If you have everything you need for daily work as open source, what would you need proprietary software for?

30th November 2013

If you want the ability to electronically communicate directly with your neighbors and friends using a network controlled by your peers instead of centrally controlled by a few corporations, or would like to experiment with interesting network technology, the Dugnadsnett for alle i Oslo might be the project for you. 39 mesh nodes are currently being planned in the freshly started initiative from NUUG and Hackeriet to create a wireless community network. The work is inspired by Freifunk, Athens Wireless Metropolitan Network, Roofnet and other successful mesh networks around the globe. Two days ago we held a workshop to try to get people started on setting up their own mesh node, and there we decided to create a new mailing list dugnadsnett (at) nuug.no and IRC channel #dugnadsnett.no to coordinate the work. See also the NUUG blog post announcing the mailing list and IRC channel.

24th November 2013

After many years' break from the package, and a vain hope that development would be continued by someone else, I finally pulled my act together this morning and wrapped up a new release of chrpath, the command line tool to modify the rpath and runpath of already compiled ELF programs. The update was triggered by the persistence of Isha Vishnoi at IBM, who needed a new config.guess file to get support for the ppc64le architecture (powerpc 64-bit Little Endian) he is working on. I checked the Debian, Ubuntu and Fedora packages for interesting patches (I failed to find the source of the OpenSUSE and Mandriva packages), and found quite a few nice fixes. These are the release notes:

New in 0.15 released 2013-11-24:

  • Updated config.sub and config.guess from the GNU project to work with newer architectures. Thanks to isha vishnoi for the heads up.
  • Updated README with current URLs.
  • Added byteswap fix found in Ubuntu, credited Jeremy Kerr and Matthias Klose.
  • Added missing help for -k|--keepgoing option, using patch by Petr Machata found in Fedora.
  • Rewrite removal of RPATH/RUNPATH to make sure the entry in .dynamic is a NULL terminated string. Based on patch found in Fedora credited Axel Thimm and Christian Krause.

You can download the new version 0.15 from alioth. Please let us know via the Alioth project if something is wrong with the new release. The test suite did not discover any old errors, so if you find a new one, please also include a testsuite check.
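
In case you never used chrpath before, the basic operations look like this (the binary name is just an example):

chrpath -l ./myprog            # list the current rpath/runpath
chrpath -r /opt/lib ./myprog   # replace the rpath with /opt/lib
chrpath -d ./myprog            # delete the rpath/runpath entirely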

21st November 2013

Drones, flying robots, are getting more and more popular. The best known ones are the killer drones used by some governments to murder people they do not like without giving them the chance of a fair trial, but the technology has many good uses too, from mapping and forest maintenance to photography and search and rescue. I am sure it is just a question of time before "bad drones" are in the hands of private enterprises and not only state criminals, but petty criminals too. The drone technology is very useful and very dangerous. To have some control over the use of drones, I agree with Daniel Suarez in his TED talk "The kill decision shouldn't belong to a robot", where he suggested this little gem to keep the good while limiting the bad use of drones:

Each robot and drone should have a cryptographically signed I.D. burned in at the factory that can be used to track its movement through public spaces. We have license plates on cars, tail numbers on aircraft. This is no different. And every citizen should be able to download an app that shows the population of drones and autonomous vehicles moving through public spaces around them, both right now and historically. And civic leaders should deploy sensors and civic drones to detect rogue drones, and instead of sending killer drones of their own up to shoot them down, they should notify humans to their presence. And in certain very high-security areas, perhaps civic drones would snare them and drag them off to a bomb disposal facility.

But notice, this is more an immune system than a weapons system. It would allow us to avail ourselves of the use of autonomous vehicles and drones while still preserving our open, civil society.

The key is that every citizen should be able to read the radio beacons sent from the drones in the area, to be able to check both the government's and others' use of drones. For such control to be effective, everyone must be able to do it. What should such a beacon contain? At least the formal owner, the purpose, contact information and GPS location. Probably also the origin and target position of the current flight. And perhaps some registration number, to be able to look up the drone in a central database tracking their movement. Robots should not have privacy. It is people who need privacy.

13th November 2013

Today NUUG and Hackeriet announced our plans to join forces and create a wireless community network in Oslo. The workshop to help people get started will take place Thursday 2013-11-28, but we are already collecting the geolocations of people joining forces to make this happen. We have 9 locations plotted on the map, but we will need more before we have a connected mesh spread across Oslo. If this sounds interesting to you, please join us at the workshop. If you are too impatient to wait 15 days, please join us on the IRC channel #nuug on irc.freenode.net right away. :)

10th November 2013

Continuing my research into mesh networking, I was recommended to use TP-Link 3040 and 3600 access points as mesh nodes, and the pair I bought arrived on Friday. Here are my notes on how to set up the MR3040 as a mesh node using OpenWrt.

I started by following the instructions on the OpenWrt wiki for the TL-MR3040, downloaded the recommended firmware image (openwrt-ar71xx-generic-tl-mr3040-v2-squashfs-factory.bin) and uploaded it using the original web interface. The flashing went fine, and the machine was available via telnet on the ethernet port. After logging in and setting the root password, ssh was available and I could start to set it up as a batman-adv mesh node.

I started off by reading the instructions from Wireless Africa, which had quite a lot of useful information, but eventually I followed the recipe from the Open Mesh wiki for using batman-adv on OpenWrt. A small snag was the fact that the opkg install kmod-batman-adv command did not work as it should. The batman-adv kernel module would fail to load because its dependency crc16 was not already loaded. I reported the bug to the OpenWrt project and hope it will be fixed soon. But the problem only seems to affect initial testing of batman-adv, as the configuration seems to work when booting from scratch.
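
A possible workaround while testing, which I have not verified myself, is to load the dependency by hand before loading the batman-adv module:

insmod crc16
insmod batman-adv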

The setup is done using files in /etc/config/. I did not bridge the Ethernet and mesh interfaces this time, to be able to hook up the box on my local network and log into it for configuration updates. The following files were changed and look like this after modifying them:

/etc/config/network


config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'fdbf:4c12:3fed::/48'

config interface 'lan'
        option ifname 'eth0'
        option type 'bridge'
        option proto 'dhcp'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
        option hostname 'tl-mr3040'
        option ip6assign '60'

config interface 'mesh'
        option ifname 'adhoc0'
        option mtu '1528'
        option proto 'batadv'
        option mesh 'bat0'

/etc/config/wireless


config wifi-device 'radio0'
        option type 'mac80211'
        option channel '11'
        option hwmode '11ng'
        option path 'platform/ar933x_wmac'
        option htmode 'HT20'
        list ht_capab 'SHORT-GI-20'
        list ht_capab 'SHORT-GI-40'
        list ht_capab 'RX-STBC1'
        list ht_capab 'DSSS_CCK-40'
        option disabled '0'

config wifi-iface 'wmesh'
        option device 'radio0'
        option ifname 'adhoc0'
        option network 'mesh'
        option encryption 'none'
        option mode 'adhoc'
        option bssid '02:BA:00:00:00:01'
        option ssid 'meshfx@hackeriet'

/etc/config/batman-adv


config 'mesh' 'bat0'
        option interfaces 'adhoc0'
        option 'aggregated_ogms'
        option 'ap_isolation'
        option 'bonding'
        option 'fragmentation'
        option 'gw_bandwidth'
        option 'gw_mode'
        option 'gw_sel_class'
        option 'log_level'
        option 'orig_interval'
        option 'vis_mode'
        option 'bridge_loop_avoidance'
        option 'distributed_arp_table'
        option 'network_coding'
        option 'hop_penalty'

# yet another batX instance
# config 'mesh' 'bat5'
#       option 'interfaces' 'second_mesh'

The mesh node is now operational. I have yet to test its range, but I hope it is good. I have not yet tested the TP-Link 3600 box, which is still wrapped up in plastic.
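
To check that a node has actually joined the mesh, the batctl tool can be used on the node. This is my usual quick check, assuming the batctl package is installed, and is not part of the recipe above:

batctl if          # list interfaces attached to the bat0 mesh device
batctl o           # list originators, i.e. mesh nodes seen from here
batctl ping <MAC>  # layer 2 ping of another mesh node by MAC address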

2nd November 2013

If one of the points of switching to a new init system in Debian is to get rid of huge init.d scripts, I doubt we need to switch away from sysvinit and init.d scripts at all. Here is an example init.d script, i.e. a rewrite of /etc/init.d/rsyslog:

#!/lib/init/init-d-script
### BEGIN INIT INFO
# Provides:          rsyslog
# Required-Start:    $remote_fs $time
# Required-Stop:     umountnfs $time
# X-Stop-After:      sendsigs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: enhanced syslogd
# Description:       Rsyslog is an enhanced multi-threaded syslogd.
#                    It is quite compatible to stock sysklogd and can be 
#                    used as a drop-in replacement.
### END INIT INFO
DESC="enhanced syslogd"
DAEMON=/usr/sbin/rsyslogd

Pretty minimalistic, if you ask me... For the record, the original sysv-rc script was 137 lines, and the above is just 15 lines, most of them meta info/comments.

How to do this, you ask? Well, one creates a new script /lib/init/init-d-script looking something like this:

#!/bin/sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service

#
do_start()
{
	# Return
	#   0 if daemon has been started
	#   1 if daemon was already running
	#   2 if daemon could not be started
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
		|| return 1
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
		$DAEMON_ARGS \
		|| return 2
	# Add code here, if necessary, that waits for the process to be ready
	# to handle requests from services started subsequently which depend
	# on this one.  As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
	# Return
	#   0 if daemon has been stopped
	#   1 if daemon was already stopped
	#   2 if daemon could not be stopped
	#   other if a failure occurred
	start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
	RETVAL="$?"
	[ "$RETVAL" = 2 ] && return 2
	# Wait for children to finish too if this is a daemon that forks
	# and if the daemon is only ever run from this initscript.
	# If the above conditions are not satisfied then add some other code
	# that waits for the process to drop all resources that could be
	# needed by services started subsequently.  A last resort is to
	# sleep for some time.
	start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
	[ "$?" = 2 ] && return 2
	# Many daemons don't delete their pidfiles when they exit.
	rm -f $PIDFILE
	return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
	#
	# If the daemon can reload its configuration without
	# restarting (for example, when it is sent a SIGHUP),
	# then implement that here.
	#
	start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
	return 0
}

SCRIPTNAME=$1
scriptbasename="$(basename $1)"
echo "SN: $scriptbasename"
if [ "$scriptbasename" != "init-d-library" ] ; then
    script="$1"
    shift
    . $script
else
    exit 0
fi

NAME=$(basename $DAEMON)
PIDFILE=/var/run/$NAME.pid

# Exit if the package is not installed
#[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

case "$1" in
  start)
	[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
	do_start
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  stop)
	[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
	do_stop
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  status)
	status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
	;;
  #reload|force-reload)
	#
	# If do_reload() is not implemented then leave this commented out
	# and leave 'force-reload' as an alias for 'restart'.
	#
	#log_daemon_msg "Reloading $DESC" "$NAME"
	#do_reload
	#log_end_msg $?
	#;;
  restart|force-reload)
	#
	# If the "reload" option is implemented then remove the
	# 'force-reload' alias
	#
	log_daemon_msg "Restarting $DESC" "$NAME"
	do_stop
	case "$?" in
	  0|1)
		do_start
		case "$?" in
			0) log_end_msg 0 ;;
			1) log_end_msg 1 ;; # Old process is still running
			*) log_end_msg 1 ;; # Failed to start
		esac
		;;
	  *)
		# Failed to stop
		log_end_msg 1
		;;
	esac
	;;
  *)
	echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
	exit 3
	;;
esac

:

It is based on /etc/init.d/skeleton, and could be improved quite a lot. I did not really polish the approach or try hard to optimize it or make it more robust, so it might not always work out of the box, but you get the idea.

A better argument for switching init system in Debian than reducing the size of init scripts (which is a good thing to do anyway) is to get a boot system that is able to handle kernel events sensibly and robustly, and that does not depend on the boot running sequentially. The boot and the kernel have not behaved sequentially in years.

1st November 2013

The SPICE protocol for remote display access is the preferred solution with oVirt and RedHat Enterprise Virtualization, and I was sad to discover the other day that the browser plugin needed to use these systems seamlessly was missing in Debian. The request for a package was from 2012-04-10 with no progress since 2013-04-01, so I decided to wrap up a package based on the great work from Cajus Pollmeier and put it in a collab-maint maintained git repository to get a package I could use. I would very much like others to help me maintain the package (or just take over, I do not mind), but as no-one had volunteered so far, I just uploaded it to NEW. I hope it will be available in Debian in a few days.

The source is now available from http://anonscm.debian.org/gitweb/?p=collab-maint/spice-xpi.git;a=summary.

Tags: debian, english.
27th October 2013

The vmdebootstrap program is a very nice system to create virtual machine images. It creates an image file, adds a partition table, mounts it and runs debootstrap in the mounted directory to create a Debian system on a stick. Yesterday, I decided to try to teach it how to make images for the Raspberry Pi, as part of a plan to simplify the build system for the FreedomBox project. The FreedomBox project already uses vmdebootstrap for the virtualbox images, but its current build system uses a multistrap based system for the Dreamplug images, and it lacks support for the Raspberry Pi.

Armed with the knowledge on how to build "foreign" (aka non-native architecture) chroots for the Raspberry Pi, I dived into the vmdebootstrap code and adjusted it to be able to build armel images on my amd64 Debian laptop. I ended up giving vmdebootstrap five new options, allowing me to replicate the image creation process I use to make Debian Jessie based mesh node images for the Raspberry Pi. First, the --foreign /path/to/binfmt_handler option tells vmdebootstrap to call debootstrap with --foreign and to copy the handler into the generated chroot before running the second stage. This allows vmdebootstrap to create armel images on an amd64 host. Next I added two new options --bootsize size and --boottype fstype to teach it to create a separate /boot/ partition with the given file system type, allowing me to create an image with a vfat partition for the /boot/ stuff. I also added a --variant variant option to allow me to create smaller images without the Debian base system packages installed. Finally, I added an option --no-extlinux to tell vmdebootstrap not to install extlinux as a boot loader. It is not needed on the Raspberry Pi and probably most other non-x86 architectures. The changes were accepted by the upstream author of vmdebootstrap yesterday and today, and are now available from the upstream project page.

To use it to build a Raspberry Pi image using Debian Jessie, first create a small script (the customize script) to add the non-free binary blob needed to boot the Raspberry Pi and the APT source list:

#!/bin/sh
set -e # Exit on first error
rootdir="$1"
cd "$rootdir"
cat <<EOF > etc/apt/sources.list
deb http://http.debian.net/debian/ jessie main contrib non-free
EOF
# Install non-free binary blob needed to boot Raspberry Pi.  This
# installs a kernel somewhere too.
wget https://raw.github.com/Hexxeh/rpi-update/master/rpi-update \
    -O $rootdir/usr/bin/rpi-update
chmod a+x $rootdir/usr/bin/rpi-update
mkdir -p $rootdir/lib/modules
touch $rootdir/boot/start.elf
chroot $rootdir rpi-update

Next, fetch the latest vmdebootstrap script and call it like this to build the image:

sudo ./vmdebootstrap \
    --variant minbase \
    --arch armel \
    --distribution jessie \
    --mirror http://http.debian.net/debian \
    --image test.img \
    --size 600M \
    --bootsize 64M \
    --boottype vfat \
    --log-level debug \
    --verbose \
    --no-kernel \
    --no-extlinux \
    --root-password raspberry \
    --hostname raspberrypi \
    --foreign /usr/bin/qemu-arm-static \
    --customize `pwd`/customize \
    --package netbase \
    --package git-core \
    --package binutils \
    --package ca-certificates \
    --package wget \
    --package kmod

The packages being installed are the ones needed by rpi-update to make the image bootable on the Raspberry Pi, with the exception of netbase, which is needed by debootstrap to find /etc/hosts with the minbase variant. I really wish there was a way to set up a Raspberry Pi using only packages in the Debian archive, but that is not possible as far as I know, because it boots from the GPU using a non-free binary blob.

The build host needs debootstrap, kpartx and qemu-user-static and probably a few other packages installed. I have not checked the complete build dependency list.
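
Something like this should pull in the tools just mentioned, though as noted the list may be incomplete:

sudo apt-get install debootstrap kpartx qemu-user-static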

The resulting image will not use the hardware floating point unit on the Raspberry Pi, because the armel architecture in Debian is not optimized for that use. So the images created will be a bit slower than Raspbian based images.

21st October 2013

The last few days I have been experimenting with the batman-adv mesh technology. I want to gain some experience to see if it will fit the Freedombox project, and together with my neighbors try to build a mesh network around the park where I live. Batman-adv is a layer 2 mesh system ("ethernet" in other words), where the mesh network appear as if all the mesh clients are connected to the same switch.

My hardware of choice was the Linksys WRT54GL routers I had lying around, but I've been unable to get them working with batman-adv. So instead, I started playing with a Raspberry Pi, and tried to get it working as a mesh node. My idea is to use it to create a mesh node which functions as a switch port, where everything connected to the Raspberry Pi ethernet plug is connected (bridged) to the mesh network. This allows me to hook a wifi base station like the Linksys WRT54GL to the mesh by plugging it into a Raspberry Pi, allowing non-mesh clients to hook up to the mesh. This in turn is useful for Android phones using the Serval Project voip client, allowing everyone around the playground to phone and message each other for free. The reason is that Android phones do not see ad-hoc wifi networks (they are filtered away from the GUI view), and can not join the mesh without being rooted. But if they are connected using a normal wifi base station, they can talk to every client on the local network.
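
The core of such a switch-port node is bridging the ethernet interface with the bat0 mesh interface. Here is a minimal sketch of the idea, assuming wlan0 has already joined the ad-hoc mesh network and the bridge-utils package is installed; this is my own illustration, not code from the packages mentioned below:

# Attach the wifi interface to batman-adv, then bridge mesh and ethernet.
batctl if add wlan0
brctl addbr br-mesh
brctl addif br-mesh eth0
brctl addif br-mesh bat0
ip link set dev bat0 up
ip link set dev br-mesh up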

To get this working, I've created a debian package meshfx-node and a script build-rpi-mesh-node to create the Raspberry Pi boot image. I'm using Debian Jessie (and not Raspbian), to get more control over the packages available. Unfortunately a huge binary blob needs to be inserted into the boot image to get it booting, but I'll ignore that for now. Also, as Debian lacks support for the CPU features available in the Raspberry Pi, the system does not use the hardware floating point unit. I hope the routing performance isn't affected by the lack of hardware FPU support.

To create an image, run the following with a sudo enabled user after inserting the target SD card into the build machine:

% wget -O build-rpi-mesh-node \
    https://raw.github.com/petterreinholdtsen/meshfx-node/master/build-rpi-mesh-node
% sudo bash -x ./build-rpi-mesh-node > build.log 2>&1
% dd if=/root/rpi/rpi_basic_jessie_$(date +%Y%m%d).img of=/dev/mmcblk0 bs=1M
%

Booting with the resulting SD card on a Raspberry Pi with a USB wifi card inserted should give you a mesh node. At least it does for me with the wifi card I am using. The default mesh settings are the ones used by the Oslo mesh project at Hackeriet, as I mentioned in an earlier blog post about this mesh testing.

The mesh node was not horribly expensive either. I bought everything over the counter in shops nearby. If I had ordered online from the lowest bidder, the price would have been significantly lower:

Supplier          Model                     NOK
Teknikkmagasinet  Raspberry Pi model B      349.90
Teknikkmagasinet  Raspberry Pi type B case   99.90
Lefdal            Jensen Air:Link 25150     295.-
Clas Ohlson       Kingston 16 GB SD card    199.-
Total cost                                  943.80

Now my mesh network at home consists of one laptop in the basement connected to my production network, one Raspberry Pi node on the first floor that can be seen by my neighbor across the park, and one play-node I use to develop the image building script. And sometimes I hook up my workhorse laptop to the mesh to test it. I look forward to figuring out what kind of latency the batman-adv setup will give, and how much packet loss we will experience around the park. :)

19th October 2013

Back in 2010, I created a Perl library to talk to the Spykee robot (with two belts, wifi, USB and Linux) and made it available from my web page. Today I concluded that it should move to a site that makes it easier to cooperate with others, and moved it to github. If you have a Spykee robot, you might want to check out the libspykee-perl github repository.

Tags: english, nuug, robot.
15th October 2013

The last few days I came across a few good causes that should get wider attention. I recommend signing and donating to each one of these. :)

Via Debian Project News for 2013-10-14 I came across the Outreach Program for Women, which is a Google Summer of Code like initiative to get more women involved in free software. One Debian sponsor has offered to match any donation to Debian earmarked for this initiative. I donated a few minutes ago, and hope you will too. :)

And the Electronic Frontier Foundation just announced plans to create video documentaries about the excessive spying on every Internet user that takes place these days, and their need to fund the work. I've already donated. Are you next?

For my Norwegian audience, the organisation Studentenes og Akademikernes Internasjonale Hjelpefond is collecting signatures for a statement under the heading Bloggers United for Open Access for those of us asking for more focus on open access in the Norwegian government. So far 499 signatures. I hope you will sign it too.

11th October 2013

Wireless mesh networks are self organising and self healing networks that can be used to connect computers across small and large areas, depending on the radio technology used. Normal wifi equipment can be used to create home made radio networks, and there are several successful examples like Freifunk and Athens Wireless Metropolitan Network (see wikipedia for a large list) around the globe. To give you an idea how it works, check out the nice overview of the Kiel Freifunk community, which can be seen from their dynamically updated node graph and map, where one can see how the mesh nodes automatically handle routing and recover from nodes disappearing. There is also a small community mesh network group in Oslo, Norway, and that is the main topic of this blog post.

I've wanted to check out mesh networks for a while now, and hoped to do it as part of my involvement with the NUUG member organisation community, and my recent involvement in the Freedombox project finally led me to give mesh networks some priority, as I suspect a Freedombox should use mesh networks to connect neighbours and family when possible, given that most communication between people is between those nearby (as shown for example by research on Facebook communication patterns). It also allows people to communicate without any central hub to tap into for those that want to listen in on the private communication of citizens, which has become more and more important over the years.

So far I have only been able to find one group of people in Oslo working on community mesh networks, over at the hack space Hackeriet at Husmania. They seem to have started with a Freifunk based effort using OLSR, called the Oslo Freifunk project, but that effort is now dead and the people behind it have moved on to a batman-adv based system called meshfx. Unfortunately the wiki for the Oslo Freifunk project can no longer be updated, so the old project page can't be changed to point to the new project. A while back, the people at Hackeriet invited people from the Freifunk community to Oslo to talk about mesh networks. I came across this video where Hans Jørgen Lysglimt interviews the speakers about this talk (from youtube):

I mentioned OLSR and batman-adv, which are mesh routing protocols. There are heaps of different protocols, and I am still struggling to figure out which one would be "best" for some definitions of best, but given that the community mesh group in Oslo is so small, I believe it is best to hook up with the existing one instead of trying to create a completely different setup, and thus I have decided to focus on batman-adv for now. It sure helps me to know that the very cool Serval project in Australia is using batman-adv as their meshing technology to create a self organizing and self healing telephony system for disaster areas and less industrialized communities. Check out this cool video presenting that project (from youtube):

According to the wikipedia page on Wireless mesh network there are around 70 competing schemes for routing packets across mesh networks, and OLSR, B.A.T.M.A.N. and B.A.T.M.A.N. advanced are protocols used by several free software based community mesh networks.

The batman-adv protocol is a bit special, as it provides layer 2 (as in ethernet) routing, allowing IPv4 and IPv6 to work on the same network. One way to think about it is that it provides a mesh based VLAN you can bridge to or handle like any other VLAN connected to your computer. The required drivers are already in the Linux kernel, at least since Debian Wheezy, and it is fairly easy to set up. A good introduction is available from the Open Mesh project. These are the key settings needed to join the Oslo meshfx network:

Setting                    Value
Protocol / kernel module   batman-adv
ESSID                      meshfx@hackeriet
Channel / Frequency        11 / 2462
Cell ID                    02:BA:00:00:00:01

The reason for setting the ad-hoc wifi Cell ID is to work around bugs in the firmware used in wifi cards and wifi drivers. (See a nice post from VillageTelco, "Information about cell-id splitting, stuck beacons, and failed IBSS merges!", for details.) When these settings are activated and you have some other mesh node nearby, your computer will be connected to the mesh network and can communicate with any mesh node that is connected to any of the nodes in your network of nodes. :)
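
On a plain Linux laptop, joining the mesh with these settings boils down to something like the following sketch, assuming the wifi interface is called wlan0 and the iw and batctl tools are installed; your interface name and driver support may differ:

modprobe batman-adv
ip link set dev wlan0 down
iw dev wlan0 set type ibss
ip link set dev wlan0 mtu 1528 up
iw dev wlan0 ibss join meshfx@hackeriet 2462 fixed-freq 02:BA:00:00:00:01
batctl if add wlan0
ip link set dev bat0 up
# bat0 now behaves like any other network interface on the mesh "switch".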

My initial plan was to reuse my old Linksys WRT54GL as a mesh node, but that seems to be very hard, as I have not been able to locate a firmware supporting batman-adv. If anyone knows how to use that old wifi access point with batman-adv these days, please let me know.

If you find this project interesting and want to join, please join us on IRC, either channel #oslohackerspace or #nuug on irc.freenode.net.

While investigating mesh networks in Oslo, I came across an old research paper from the University of Stavanger and Telenor Research and Innovation called The reliability of wireless backhaul mesh networks, and elsewhere learned that Telenor has been experimenting with mesh networks at Grünerløkka in Oslo. So mesh networks are also interesting for commercial companies, even though Telenor discovered that it was hard to figure out a good business plan for mesh networking and, as far as I know, has closed down the experiment. Perhaps Telenor or others would be interested in a cooperation?

Update 2013-10-12: I was just told by the Serval project developers that they no longer use batman-adv (but are compatible with it), and instead use their own crypto based mesh system.

8th October 2013

The other day I was pleased and surprised to discover that Marcelo Salvador had published a video on Youtube showing how to install the standalone Debian Edu / Skolelinux profile. This is the profile intended for use at home or on laptops that should not be integrated into the provided network services (no central home directory, no Kerberos / LDAP directory etc, in other words a single user machine). The result is 11 minutes long, and shows some user applications (seemingly rather randomly picked). He missed a few of my favorites like celestia, planets and chromium showing the Zygote Body 3D model of the human body, but I guess he did not know about those or found other programs more interesting. :) And the video does not show what I believe is one of the most valuable features in Debian Edu: its central school server, making it possible to run hundreds of computers without hard drives by installing one central LTSP server.

Anyway, check out the video, embedded below and linked to above:

Are there other nice videos demonstrating Skolelinux? Please let me know. :)

29th September 2013

A few hours ago, the announcement for the first stable release of Debian Edu Wheezy went out from the Debian publicity team. The complete announcement text can be found at the Debian News section, translated to several languages. Please check it out.

There is one minor known problem that we will fix very soon. One can not install an amd64 Thin Client Server using PXE, as the /var/ partition is too small. A workaround is to extend the partition (use lvresize + resize2fs in tty 2 while installing).
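
A rough sketch of that workaround, run from tty 2 during installation; the volume group and logical volume names below are assumptions, so check the output of lvs first:

lvs                            # find the volume group and the LV for /var
lvresize -L +2G /dev/vg_system/var
resize2fs /dev/vg_system/var   # grow the file system to match the LV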

27th September 2013

The Freedombox project has been going on for a while, and has presented the vision, ideas and solution in several places. Here is a little collection of videos of talks and presentations of the project.

A larger list is available from the Freedombox Wiki.

In other news, I am happy to report that Freedombox based on Debian Jessie is coming along quite well, and soon both Owncloud and using Tor should be available for testers of the Freedombox solution. :) In a few weeks I hope everything needed to test it is included in Debian. The withsqlite package is already in Debian, and the plinth package is pending in NEW. The third and vital part of that puzzle is the metapackage/setup framework, which is still pending an upload. Join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

16th September 2013

The third wheezy based beta release of Debian Edu was wrapped up today. This is the release announcement from Holger Levsen:

Hi,

it is my pleasure to announce the third beta release (beta 2 for short) of Debian Edu / Skolelinux based on Debian Wheezy!

Please test these images extensively; if no new problems are found we plan to do the final Debian Edu Wheezy release this coming weekend. We are not aware of any major problems or blockers in beta2; if you find something, please notify us immediately!

(More about the remaining steps for the Edu Wheezy release in another mail to the edu list tonight or tomorrow...)

Noteworthy changes and software updates for Debian Edu 7.1+edu0~b2 compared to beta1:

  • The KDE proxy setup has been adjusted to use the provided wpad.dat. This also gets Chromium to use this proxy.
  • Install kdepim-groupware with KDE desktops to make sure korganizer understand ical/dav sources.
  • Increased default maximum size of /var/spool/squid and /skole/backup on the main server.
  • A source DVD image containing all source packages is now available as well.
  • Updates for chromium (29.0.1547.57-1~deb7u1), imagemagick (6.7.7.10-5+deb7u2), php5 (5.4.4-14+deb7u4), libmodplug (0.8.8.4-3+deb7u1+git20130828), tiff (4.0.2-6+deb7u2), linux-image (3.2.0-4-486_3.2.46-1+deb7u1).

Where to get it:

To download the multiarch netinstall CD release you can use

The SHA1SUM of this image is: 3a1c89f4666df80eebcd46c5bf5fedb866f9472f

To download the multiarch USB stick ISO release you can use

The SHA1SUM of this image is: 702d1718548f401c74bfa6df9f032cc3ee16597e

The Source DVD image has the filename debian-edu-7.1+edu0~b2-source-DVD.iso and the SHA1SUM 089eed8b3f962db47aae1f6a9685e9bb2fa30ca5 and is available the same way as the other isos.

How to report bugs

For information how to report bugs please see
http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up just waiting for users and machines being added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD or USB stick all other machines can be installed via the network. The provided school server provides LDAP database and Kerberos authentication service, centralized home directories, DHCP server, web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between KDE, Gnome, LXDE and Xfce desktop environment.

This is the seventh test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

Notes for upgrades from Alpha Prereleases

Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release. Both alpha and beta0 based installations should reinstall or deal with gosa.conf manually; there are two options: (1) Keep gosa.conf and edit this file as outlined on the mailing list. (2) Accept the new version of gosa.conf and replace both contained admin password placeholders with the password hashes found in the old one (backup copy!). In both cases all users need to change their password to make sure a password is set for CIFS access to their home directory.

cheers,
Holger

10th September 2013

I was introduced to the Freedombox project in 2010, when Eben Moglen presented his vision about serving the need of non-technical people to keep their personal information private and within the legal protection of their own homes. The idea is to give people back the power over their network and machines, and return Internet back to its intended peer-to-peer architecture. Instead of depending on a central service, the Freedombox will give everyone control over their own basic infrastructure.

I've intended to join the effort since then, but other tasks have taken priority. But this summer's nasty news about the misuse of trust and privilege exercised by the "western" intelligence gathering communities increased my eagerness to contribute, to the point where I actually started working on the project a while back.

The initial Debian initiative based on the vision from Eben Moglen is to create a simple and cheap Debian based appliance that anyone can hook up in their home and get access to secure and private services and communication. The initial deployment platform has been the Dreamplug, which is a piece of hardware I do not own. So to be able to test what the current Freedombox setup looks like, I had to come up with a way to install it on some hardware I do have access to. I have rewritten the freedom-maker image build framework to use .deb packages instead of only copying setup into the boot images, and thanks to this rewrite I am able to set up any machine supported by Debian Wheezy as a Freedombox, using the previously mentioned deb (and a few support debs for packages missing in Debian).

The current Freedombox setup consists of a set of bootstrapping scripts (freedombox-setup) and an administrative web interface (plinth + exmachina + withsqlite), as well as a privacy enhancing proxy based on privoxy (freedombox-privoxy). There is also a web/javascript based XMPP client (jwchat) trying (unsuccessfully so far) to talk to the XMPP server (ejabberd). The web interface is pluggable, and the goal is to use it to enable OpenID services, mesh network connectivity, use of TOR, etc. Not much of this is really working yet; see the project TODO for links to GIT repositories. Most of the code is on github at the moment. The HTTP proxy is operational out of the box, and the admin web interface can be used to add/remove plinth users. I've not been able to do anything else with it so far, but know there are several branches spread around github and other places with lots of half baked features.

Anyway, if you want to have a look at the current state, the following recipes should work to give you a test machine to poke at.

Debian Wheezy amd64

  1. Fetch normal Debian Wheezy installation ISO.
  2. Boot from it, either as CD or USB stick.
  3. Press [tab] on the boot prompt and add this as a boot argument to the Debian installer:

    url=http://www.reinholdtsen.name/freedombox/preseed-wheezy.dat
  4. Answer the few language/region/password questions and pick disk to install on.
  5. When the installation is finished and the machine has rebooted a few times, your Freedombox is ready for testing.

Raspberry Pi Raspbian

  1. Fetch a Raspbian SD card image, create SD card.
  2. Boot from SD card, extend file system to fill the card completely.
  3. Log in and add this to /etc/apt/sources.list:

    deb http://www.reinholdtsen.name/freedombox wheezy main
    
  4. Run this as root:

    wget -O - http://www.reinholdtsen.name/freedombox/BE1A583D.asc | \
       apt-key add -
    apt-get update
    apt-get install freedombox-setup
    /usr/lib/freedombox/setup
    
  5. Reboot into your freshly created Freedombox.

You can test it on other architectures too, but because the freedombox-privoxy package is binary, it will only work as intended on the architectures where I have had time to build the binary and put it in my APT repository. But do not let this stop you. It is only a short "apt-get source -b freedombox-privoxy" away. :)

Note that by default Freedombox is a DHCP server on the 192.168.1.0/24 subnet, so if this is your subnet be careful and turn off the DHCP server by running "update-rc.d isc-dhcp-server disable" as root.

Please let me know if this works for you, or if you have any problems. We gather on the IRC channel #freedombox on irc.debian.org and the project mailing list.

Once you get your freedombox operational, you can visit http://your-host-name:8001/ to see the state of the plinth welcome screen (a dead end - do not be surprised if you are unable to get past it), and next visit http://your-host-name:8001/help/ to look at the rest of plinth. The default user is 'admin' and the default password is 'secret'.

22nd August 2013

The second wheezy based beta release of Debian Edu was wrapped up today, slightly delayed because of some bugs in the initial Windows integration fixes. This is the release announcement:

New features for Debian Edu 7.1+edu0~b1 released 2013-08-22

These are the release notes for Debian Edu / Skolelinux 7.1+edu0~b1, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up just waiting for users and machines being added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD or USB stick all other machines can be installed via the network. The provided school server provides LDAP database and Kerberos authentication service, centralized home directories, DHCP server, web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between KDE, Gnome, LXDE and Xfce desktop environment.

This is the sixth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

ALERT: Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release. Both alpha and beta0 based installations should reinstall or deal with gosa.conf manually; there are two options: (1) Keep gosa.conf and edit this file as outlined on the mailing list. (2) Accept the new version of gosa.conf and replace both contained admin password placeholders with the password hashes found in the old one (backup copy!). In both cases every user needs to change their password to make sure a password is set for CIFS access to their home directory.

Software updates

  • Added ssh askpass packages to the default installation, to ensure ssh works also without an attached tty.
  • Add the command-not-found package to the default installation to make it easier to figure out where to find missing command line tools. Please note, that the command 'update-command-not-found' has to be run as root to actually make it useful (internet access required).

Other changes

  • Adjusted the USB stick ISO image build to include every tool needed for desktop=xfce installations.
  • Adjust thin-client-server task to work when installing from USB stick ISO image.
  • Made new grub artwork (changed png from indexed to RGB format).
  • Minor cleanup in the CUPS setup.
  • Make sure that bootstrapping of the Samba domain really happens during installation of the main server and adjust SID handling to cope with this.
  • Make Samba passwords changeable (again) via GOsa².
  • Fix generation of LM and NT password hashes via GOsa² to avoid empty password hashes.
  • Adapted Samba machine domain joining to latest change in the smbldap-tools Perl package, fixing bugs blocking Windows machines from joining the Samba domain.

Known issues

  • KDE fails to understand the wpad.dat file provided, causing it to not use the http proxy as it should.
  • Chromium also fails to use the proxy when using the KDE desktop (using the KDE configuration).

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 1e357f80b55e703523f2254adde6d78b
The SHA1SUM of this image is: 7157f9be5fd27c7694d713c6ecfed61c3edda3b2

To download the multiarch USB stick ISO release you can use

The MD5SUM of this image is: 7a8408ead59cf7e3cef25afb6e91590b
The SHA1SUM of this image is: f1817c031f02790d5edb3bfa0dcf8451088ad119

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

18th August 2013

Earlier, I reported about my problems using an Intel SSD 520 Series 180 GB disk. Friday I was told by IBM that the original disk should be thrown away. As bricking the firmware thus was no longer a problem, I decided today to try to install Intel firmware to replace the Lenovo firmware currently on the disk.

I searched the Intel site for firmware, and found issdfut_2.0.4.iso (aka Intel SATA Solid-State Drive Firmware Update Tool), which according to the site should contain the latest firmware for SSD disks. I inserted the broken disk in one of my spare laptops and booted the ISO from a USB stick. The disk was recognized, but the program claimed the newest firmware was already installed and refused to install any Intel firmware. So no change, and the disk is still unable to handle write load. :( I guess the only way to get them working would be if Lenovo releases new firmware. No idea how likely that is. Anyway, just blogging about this test for completeness. I got a working Samsung disk, and see no point in spending more time on the broken disks.

Tags: debian, english.
2nd August 2013

It has been a while since my last update. Since last summer, I have worked on a Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with the copyright law. Yesterday, I finally broke the 90% mark, when counting the number of strings to translate. Due to real life constraints, I have not had time to work on it since March, but when summer arrived, I found time to work on it again. Still lots of work left, but the first draft is nearing completion. I created a graph to show the progress of the translation:

When the first draft is done, the translated text needs to be proofread, and the remaining formatting problems with images and SVG drawings need to be fixed. There are probably also some index entries missing that need to be added. This can be done by comparing the index entries listed in the SiSU version of the book, or comparing the English docbook version with the paper version. Last, the colophon page with ISBN numbers etc needs to be wrapped up before the release is done. I should also figure out how to get correct Norwegian sorting of the index pages. All docbook tools I have tried so far (xmlto, docbook-xsl, dblatex) get the order of symbols and the special Norwegian letters ÆØÅ wrong.

There is still a need for translators and people with docbook knowledge, both to get a good looking book (I still struggle with dblatex, xmlto and docbook-xsl) and to do the draft translation and proof reading. And I would like the figures to be redrawn as SVGs to make it easy to translate them. Any SVG master around? There are also some legal terms that are unfamiliar to me. If you want to help, please get in touch with me, and check out the project files currently available from github.

If you are curious what the translated book currently looks like, the updated PDF and EPUB are published on github. The HTML version is published as well, but github hands it out with MIME type text/plain, confusing browsers, so I saw no point in linking to that version.

27th July 2013

The first wheezy based beta release of Debian Edu was wrapped up today. This is the release announcement:

New features for Debian Edu 7.1+edu0~b0 released 2013-07-27

These are the release notes for Debian Edu / Skolelinux 7.1+edu0~b0, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up just waiting for users and machines being added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network. The provided school server provides LDAP database and Kerberos authentication service, centralized home directories, DHCP server, web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between KDE, Gnome, LXDE and Xfce desktop environment.

This is the fifth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

ALERT: Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release.

Software updates

  • Switched roaming workstation profiles from wicd to network-manager for network configuration, as wicd didn't work any more.
  • Changed version numbers of patched gosa and libpam-mklocaluser packages to make sure our locally patched versions will be replaced by the official packages when they are released from Debian. Those installing alpha version need to reinstall or manually downgrade gosa and libpam-mklocaluser.
  • Added bluetooth tools to the default desktop (bluedevil, blueman).
  • Added tools for sharing the desktop on KDE (krdc, krfb).
  • Added valgrind to the default installation for easier debugging of crash bugs.

Other changes

  • Fixed artwork package to work with gnome, no longer break desktop=gnome installations.
  • Adjusted installer to now work when forced to use a proxy with the netinst CD.
  • Fixed code detecting and setting/loading hardware specific setup/firmware to work more robustly out of the box.
  • Adjusted Kerberos setup to detect realm and server settings at install time instead of dynamically at run time. This avoids a crash with krb5-auth-dialog on diskless workstations without a DNS name.
  • Worked around misfeature in network-manager not calling the dhclient exit hooks, causing automatic proxy configuration and automatic host name setting at run time to work again.
  • Fixed feature setting the default Iceweasel start page from URL fetched from LDAP, to allow schools to set the global default by updating the dc=skole,dc=skolelinux,dc=no LDAP object.
  • Changed default host name on all networked machines to be unique (generated from MAC or reverse DNS) after boot.
  • Adjusted partition sizes to make sure they are big enough.

Known issues

  • Grub is missing the new artwork.
  • KDE fails to understand the wpad.dat file provided, causing it to not use the http proxy as it should.
  • Chromium also fails to use the proxy.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 55d5de9765b6dccd5d9ec33cf1a07109
The SHA1SUM of this image is: 996a1d9517740e4d627d100de2d12b23dd545a3f

To download the multiarch USB stick ISO release you can use

The MD5SUM of this image is: d8f0818c51a78d357de794066f289f69
The SHA1SUM of this image is: 49185ca354e8d0543240423746924f76a6cee733

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

17th July 2013

Today I switched to my new laptop. I've previously written about the problems I had with my new Thinkpad X230, which was delivered with a 180 GB Intel SSD disk with Lenovo firmware that did not handle sustained writes. My hardware supplier has been very forthcoming in trying to find a solution, and after first trying with another identical 180 GB disk, they decided to send me a 256 GB Samsung SSD disk instead to fix it once and for all. The Samsung disk survived the installation of Debian with encrypted disks (filling the disk with random data during installation killed the first two), and I thus decided to trust it with my data. I have installed it as a Debian Edu Wheezy roaming workstation hooked up with my Debian Edu Squeeze main server at home using Kerberos and LDAP, and will use it as my work station from now on.

As this is a solid state disk with no moving parts, I believe the Debian Wheezy default installation needs to be tuned a bit to increase performance and the life time of the disk. The Linux kernel and user space applications do not yet adjust automatically to such an environment. To make it easier for myself, I created a draft Debian package ssd-setup to handle this tuning. The source for the ssd-setup package is available from collab-maint, and it is set up to adjust the setup of the machine by just installing the package. If there is any non-SSD disk in the machine, the package will refuse to install, as I did not try to write any logic to sort file systems into SSD and non-SSD file systems.
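
A hedged sketch of how such a check could look; this is not necessarily what ssd-setup actually does. The kernel exposes a rotational flag per block device, so refusing to install when any rotational disk is present can be as simple as:

# Refuse to continue if any block device reports itself as rotational.
for f in /sys/block/sd*/queue/rotational; do
    [ -e "$f" ] || continue
    if [ "$(cat "$f")" = 1 ]; then
        echo "rotational (non-SSD) disk found, refusing to install" >&2
        exit 1
    fi
done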

I consider the package a draft, as I am a bit unsure how to best set up Debian Wheezy with an SSD. It is adjusted to my use case, where I set up the machine with one large encrypted partition (in addition to /boot), put LVM on top of this and set up partitions on top of this again. See the README file in the package source for the references I used to pick the settings. At the moment these parameters are tuned:

  • Set up cryptsetup to pass TRIM commands to the physical disk (adding discard to /etc/crypttab)
  • Set up LVM to pass on TRIM commands to the underlying device (in this case a cryptsetup partition) by changing issue_discards from 0 to 1 in /etc/lvm/lvm.conf.
  • Set relatime as a file system option for ext3 and ext4 file systems.
  • Tell swap to use TRIM commands by adding 'discard' to /etc/fstab.
  • Change I/O scheduler from cfq to deadline using a udev rule.
  • Run fstrim on every ext3 and ext4 file system every night (from cron.daily).
  • Adjust the sysctl values vm.swappiness to 1 and vm.vfs_cache_pressure to 50 to reduce the kernel's eagerness to swap out processes.
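
To give an idea of what the package does, here is a hand-applied sketch of a few of these tunings; the file names and rule details are my own guesses at one reasonable implementation, not a copy of the ssd-setup code:

# Switch the I/O scheduler to deadline for non-rotational disks.
cat > /etc/udev/rules.d/60-ssd-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
EOF

# Reduce the kernel eagerness to swap out processes.
cat >> /etc/sysctl.conf <<'EOF'
vm.swappiness = 1
vm.vfs_cache_pressure = 50
EOF

# Trim every ext3/ext4 file system once a day.
cat > /etc/cron.daily/fstrim <<'EOF'
#!/bin/sh
awk '$3 ~ /^ext[34]$/ {print $2}' /proc/mounts | while read m; do
    fstrim "$m" || true
done
EOF
chmod a+x /etc/cron.daily/fstrim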

During installation, I cancelled the part where the installer fills the disk with random data, as this would kill the SSD performance for little gain. My goal with the encrypted file system is to ensure those stealing my laptop end up with a brick and not a working computer. I have no hope of keeping the really resourceful people from getting the data on the disk (see XKCD #538 for an explanation why). Thus I concluded that adding the discard option to crypttab is the right thing to do.

I considered using the noop I/O scheduler, as several recommended it for SSD, but others recommended deadline and a benchmark I found indicated that deadline might be better for interactive use.

I also considered using the 'discard' file system option for ext3 and ext4, but read that it would give a performance hit every time a file is removed, and thought it best to take that slowdown once a day instead of during my work.

My package does not set up tmpfs on /var/run, /var/lock and /tmp, as this is already done by Debian Edu.

I have not yet started on the user space tuning. I expect iceweasel needs some tuning, and perhaps other applications too, but have not yet had time to investigate those parts.

The package should work on Ubuntu too, but I have not yet tested it there.

As for the answer to the question in the title of this blog post, the only solution I know about is to replace the disk. It might be possible to flash it with Intel firmware instead of the Lenovo firmware. But I have not tried, and did not want to do so without approval from Lenovo, as I wanted to keep the warranty on the disk until a solution was found and they wanted the broken disks back.

Tags: debian, english.
10th July 2013

A few days ago, I wrote about the problems I experienced with my new X230 and its SSD disk, which was dying during installation because it is unable to cope with sustained writes. My supplier is in contact with Lenovo, and they wanted to send a replacement disk to try to fix the problem. They decided to send an identical model, so my hopes for a permanent fix were slim.

Anyway, today I got the replacement disk and tried to install Debian Edu Wheezy with encrypted disk on it. The new disk has the same firmware version as the original. This time my hopes rose slightly as the installation progressed, as the original disk used to die after 4-7% of the disk was written to, while this time it kept going past 10%, 20%, 40% and even past 50%. But around 60%, the disk died again and I was back at square one. I still do not have a new laptop with a disk I can trust. I can not live with a disk that might lock up when I download a new Debian Edu / Skolelinux ISO or other large files. I look forward to hearing from my supplier with the next proposal from Lenovo.

The original disk is marked Intel SSD 520 Series 180 GB, 11S0C38722Z1ZNME35X1TR, ISN: CVCV321407HB180EGN, SA: G57560302, FW: LF1i, 29MAY2013, PBA: G39779-300, LBA 351,651,888, LI P/N: 0C38722, Pb-free 2LI, LC P/N: 16-200366, WWN: 55CD2E40002756C4, Model: SSDSC2BW180A3L 2.5" 6Gb/s SATA SSD 180G 5V 1A, ASM P/N 0C38732, FRU P/N 45N8295, P0C38732.

The replacement disk is marked Intel SSD 520 Series 180 GB, 11S0C38722Z1ZNDE34N0L0, ISN: CVCV315306RK180EGN, SA: G57560-302, FW: LF1i, 22APR2013, PBA: G39779-300, LBA 351,651,888, LI P/N: 0C38722, Pb-free 2LI, LC P/N: 16-200366, WWN: 55CD2E40000AB69E, Model: SSDSC2BW180A3L 2.5" 6Gb/s SATA SSD 180G 5V 1A, ASM P/N 0C38732, FRU P/N 45N8295, P0C38732.

The only differences are in the first number (serial number?) and the ISN, SA, date and WWN values. I mention all the details here in case someone is able to use the information to find a way to identify the failing disks among working ones (if any such working disks actually exist).

Tags: debian, english.
9th July 2013

The upcoming Saturday, 2013-07-13, we are organising a combined Debian Edu developer gathering and Debian and Ubuntu bug squashing party in Oslo. It is organised by the member association NUUG and the Debian Edu / Skolelinux project together with the hack space Bitraf.

It starts at 10:00 and continues until late evening. Everyone is welcome, and there is no fee to participate. There is on the other hand limited space, with only room for 30 people. Please put your name on the event wiki page if you plan to join us.

5th July 2013

Half a year ago, I reported that I had to find a replacement for my trusty old Thinkpad X41. Unfortunately I did not have much time to spend on it, and it took a while to find a model I believe will do the job, but two days ago the replacement finally arrived. I ended up picking a Thinkpad X230 with an SSD disk (NZDAJMN). I first test installed Debian Edu Wheezy as a roaming workstation, and it seemed to work flawlessly. But my second installation, with encrypted disk, was not as successful. More on that below.

I had a hard time trying to track down a good laptop, as my most important requirements (robust and with a good keyboard) are never listed in the feature list. But I did get good help from the search feature at Prisjakt, which allowed me to limit the list of interesting laptops based on my other requirements. A bit surprising that SSD disks are not disks according to that search interface, so I had to drop specifying the number of disks from my search parameters. I also asked around among friends to get their impression of keyboards and robustness.

So the new laptop arrived, and it is quite a lot wider than the X41. I am not quite convinced about the keyboard, as it is significantly wider than my old keyboard, and I have to stretch my hand a lot more to reach the edges. But the key response is fairly good and the individual key shape is fairly easy to handle, so I hope I will get used to it. My old X41 was starting to fail, and I really needed a new laptop now. :)

Turning off the touch pad was simple. All it took was a quick visit to the BIOS during boot to disable it.

But there is a fatal problem with the laptop. The 180 GB SSD disk locks up under load. And this happens when installing Debian Wheezy with encrypted disk, while the disk is being filled with random data. I also tested installing Ubuntu Raring, and it happens there too if I reenable the code to fill the disk with random data (it is disabled by default in Ubuntu). And the bug is already known. It was reported to Debian as BTS report #691427 2012-10-25 (journal commit I/O error on brand-new Thinkpad T430s ext4 on lvm on SSD). It is also reported to the Linux kernel developers as Kernel bugzilla report #51861 2012-12-20 (Intel SSD 520 stops working under load (SSDSC2BW180A3L in Lenovo ThinkPad T430s)). It is also reported on the Lenovo forums, both for T430 2012-11-10 and for X230 03-20-2013. The problem does not only affect installation. The reports state that the disk locks up during use if many writes are done on the disk, so there is not much use in working around the installation problem only to end up with a computer that can lock up at any moment. There is even a small C program available that will lock up the hard drive after running for a few minutes by writing to a file.
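
I have not included that C program here, but a rough shell equivalent of such a sustained write test might look like the following. Only run it on a machine and file system you can afford to lose access to, as the whole point is to trigger the lockup:

# Keep writing 1 GiB files with synchronous I/O until interrupted.
i=0
while true; do
    dd if=/dev/zero of="loadtest.$i" bs=1M count=1024 conv=fsync
    i=$((i + 1))
done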

I've contacted my supplier and asked how to handle this, and after contacting PCHELP Norway (request 01D1FDP), which handles support requests for Lenovo, his first suggestion was to upgrade the disk firmware. Unfortunately there is no newer firmware available from Lenovo, as my disk already has the most recent one (version LF1i). I hope to hear more from him today and hope the problem can be fixed. :)

Tags: debian, english.
4th July 2013

Half a year ago, I reported that I had to find a replacement for my trusty old Thinkpad X41. Unfortunately I did not have much time to spend on it, but today the replacement finally arrived. I ended up picking a Thinkpad X230 with an SSD disk (NZDAJMN). I first test installed Debian Edu Wheezy as a roaming workstation, and it worked flawlessly. As I write this, it is installing what I hope will be a more final installation, with an encrypted hard drive to ensure any dope head stealing it ends up with an expensive door stop.

I had a hard time trying to track down a good laptop, as my most important requirements (robust and with a good keyboard) are never listed in the feature list. But I did get good help from the search feature at Prisjakt, which allowed me to limit the list of interesting laptops based on my other requirements. A bit surprising that SSD disks are not disks, so I had to drop the number of disks from my search parameters.

I am not quite convinced about the keyboard, as it is significantly wider than my old keyboard, and I have to stretch my hand a lot more to reach the edges. But the key response is fairly good and the individual key shape is fairly easy to handle, so I hope I will get used to it. My old X40 was starting to fail, and I really needed a new laptop now. :)

I look forward to figuring out how to turn off the touch pad.

Tags: debian, english.
3rd July 2013

The fourth wheezy based alpha release of Debian Edu was wrapped up today. This is the release announcement:

New features for Debian Edu 7.1+edu0~alpha3 released 2013-07-03

These are the release notes for Debian Edu / Skolelinux 7.1+edu0~alpha3, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages, more are available from the Debian archive, and schools can choose between KDE, Gnome, LXDE and Xfce desktop environments.

This is the fourth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

Software updates

  • Dropped ispell dictionaries from our default installation.
  • Dropped menu-xdg from the KDE desktop option, to drop the Debian submenu. It was not included with Gnome, LXDE or Xfce, so this brings KDE in line with the others.
  • Dropped xdrawchem, xjig and xsok from our default installation as they don't have a desktop menu entry and thus won't show up in the menu now that menu-xdg was removed.
  • Removed the killer system used to kill left-behind processes on multi-user machines, as it was no longer able to understand when an X display was in use and killed the processes of active users too.
  • Dropped the golearn (from goplay) package as the debtags in wheezy are too few to make the package useful.

Other changes

  • Updated artwork matching http://wiki.debian.org/DebianArt/Themes/Joy
  • Multi-arch i386/amd64 USB stick ISO available.
  • Got rid of ispell/wordlist related debconf questions that showed up for some language options.
  • Switched to using http.debian.net as APT source by default.
  • Fixed proxy configuration on Main Server installations.
  • Changed LTSP setup to ask dpkg to use force-unsafe-io the same way d-i is doing it.
  • Made sure root and user passwords were not left behind in the debconf database after installation on Main Server installations.
  • Made Roaming Workstation dynamic setup more robust, and added the draft script setup-ad-client to hook a Roaming Workstation up to an Active Directory server instead of a Debian Edu Main Server.
  • Update system to install needed firmware packages during installation, to work properly in Wheezy.
  • Update system to handle hardware quirks (debian-edu-hwsetup).
  • Corrected PXE installation setup to properly pass selected desktop and keymap settings to PXE installation clients.
  • LTSP diskless workstations use sshfs by default, allowing them to work without adding them to DNS and NIS netgroups for NFS access.

Known issues

  • No mass import of user account data in GOsa (ldif or csv) available yet (698840).
  • Artwork not enabled for all desktops.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 2b161a99d2a848c376d8d04e3854e30c
The SHA1SUM of this image is: 498922e9c508c0a7ee9dbe1dfe5bf830d779c3c8

To download the multiarch USB stick ISO release you can use

The MD5SUM of this image is: 25e808e403a4c15dbef1d13c37d572ac
The SHA1SUM of this image is: 15ecfc93eb6b4f453b7eb0bc04b6a279262d9721

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

25th June 2013

It annoys me when the computer fails to do automatically what it is perfectly capable of, and I have to do it manually to get things working. One such task is to find out what firmware packages are needed to get the hardware on my computer working. Most often this affects the wifi card, but sometimes it even affects the RAID controller or the ethernet card. Today I pushed version 0.4 of the Isenkram package, including a new script isenkram-autoinstall-firmware handling the process of asking all the loaded kernel modules what firmware files they want, finding the Debian packages providing these files, and installing those packages. Here is a test run on my laptop:

# isenkram-autoinstall-firmware 
info: kernel drivers requested extra firmware: ipw2200-bss.fw ipw2200-ibss.fw ipw2200-sniffer.fw
info: fetching http://http.debian.net/debian/dists/squeeze/Contents-i386.gz
info: locating packages with the requested firmware files
info: Updating APT sources after adding non-free APT source
info: trying to install firmware-ipw2x00
firmware-ipw2x00
firmware-ipw2x00
Preconfiguring packages ...
Selecting previously deselected package firmware-ipw2x00.
(Reading database ... 259727 files and directories currently installed.)
Unpacking firmware-ipw2x00 (from .../firmware-ipw2x00_0.28+squeeze1_all.deb) ...
Setting up firmware-ipw2x00 (0.28+squeeze1) ...
# 

When all the requested firmware is present, a simple message is printed instead:

# isenkram-autoinstall-firmware 
info: did not find any firmware files requested by loaded kernel modules.  exiting
# 

It could use some polish, but it is already working well and saving me some time when setting up new machines. :)

So, how does it work? It looks at the set of currently loaded kernel modules, looks up each of them using modinfo to find the firmware files listed in the module meta-information, then downloads the Contents file from a nearby APT mirror and searches it to locate the package providing each requested firmware file. If the package is in the non-free section, a non-free APT source is added and the package is installed using apt-get install. The end result is a slightly better working machine.
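
The core of that lookup can be sketched in a few lines of shell. This is only an illustration of the idea, not the actual isenkram code, and the temporary file name and locally downloaded Contents-i386.gz are assumptions:

#!/bin/sh
# collect the firmware file names requested by all loaded kernel modules
for module in $(cut -d' ' -f1 /proc/modules); do
    modinfo -F firmware "$module" 2>/dev/null
done | sort -u > /tmp/wanted-firmware

# look the file names up in an APT Contents file (fetched from a mirror,
# as in the run above) to find the packages shipping them
zgrep -F -f /tmp/wanted-firmware Contents-i386.gz | awk '{print $NF}' | sort -u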

I hope someone finds time to implement a more polished version of this script as part of the hw-detect debian-installer module, to finally fix BTS report #655507. There really is no need to insert USB sticks with firmware during a PXE install when the packages are already available from a nearby Debian mirror.

22nd June 2013

In the Debian Edu / Skolelinux project, we include a post-installation test suite, which checks that services are running, working, and returning the expected results. It runs automatically just after the first boot on test installations (using test ISOs), but not on production installations (using non-test ISOs). It tests that the LDAP service is operating, Kerberos is responding, DNS is replying, file systems can be resized online, etc, etc. And it checks that the PXE service is configured, which is the topic of this post.

The last week I've fixed the DVD and USB stick ISOs for our Debian Edu Wheezy release. These ISOs are supposed to be able to install a complete system without any Internet connection, but for that to happen all the needed packages need to be on them. Thanks to our test suite, I discovered that we had forgotten to adjust our PXE setup to cope with the new names and paths used by the netboot d-i packages. When Internet connectivity is available, the installer falls back to using wget to fetch d-i boot images, but when offline it requires working packages. And because the packages changed name from debian-installer-6.0-netboot-$arch to debian-installer-7.0-netboot-$arch, we no longer pulled them in during installation. Without our test suite, I suspect we would never have discovered this before release. Now it is fixed, right after we got the ISOs operational.

Another by-product of the test suite is that we can ask system administrators who have problems getting Debian Edu to work to run the test suite using /usr/sbin/debian-edu-test-install and see if any errors are detected. This usually pinpoints the subsystem causing the problem.
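
Such a run could look like this (a sketch; where the script sends its output is an assumption, so adjust the redirection to taste):

/usr/sbin/debian-edu-test-install 2>&1 | tee /tmp/debian-edu-test.log
grep -i error /tmp/debian-edu-test.log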

If you want to help us help kids learn how to share and create, please join us on #debian-edu on irc.debian.org and the debian-edu@ mailing list.

17th June 2013

The Debian Edu and Skolelinux distribution has users and contributors all around the globe. And a while back, an enterprising young man showed up on our IRC channel #debian-edu and started asking questions about how Debian Edu worked. We answered as well as we could, and even convinced him to help us with translations. And today I managed to get an interview with him, to learn more about him.

Who are you, and how do you spend your days?

I'm a 25 year old free software enthusiast, living in Romania, which is also my country of origin. Back in 2009, at a New Year's Eve party, I had a very nice beer discussion with a friend, when we realized we had no organised Debian community in our country. A few days later, we put together the infrastructure for such a community and even gathered a nice Debian-ish crowd. Since then, I began my quest as a free software hacker and activist, and I am constantly trying to cover as much ground as possible in that field.

A few years ago I founded a small web development company, which provided me the flexible schedule I needed so much for my activities. For the last 13 months, I have been the Technical Director of Fundația Ceata, which is a free software activist organisation endorsed by the FSF and the FSFE, and the only one we have in our country.

How did you get in contact with the Skolelinux / Debian Edu project?

The idea of participating in the Debian Edu project was a surprise even to me, since I never used it before I began getting involved. This year I had a great opportunity to deliver a talk on educational software, and I knew immediately where to look. It was love at first sight, since I was previously involved with some of the technologies the project incorporates, and I rapidly found a lot of ways to contribute.

My first contributions consisted of translating the installer and configuration dialogs, then I found some bugs to squash (I still haven't fixed them yet, though), and I even got my eyes on some other areas where I can prove myself helpful. Since the appetite for free software in my country is pretty low, I'll be happy to be the first one around here advocating for the project's adoption in educational environments, and maybe even get my hands dirty creating a flavour for our own needs. I am not used to making very advanced plans, so from now on time will tell what I'll be doing next, but I think I have a pretty consistent starting point.

What do you see as the advantages of Skolelinux/Debian Edu?

Not long ago, I was in the position of configuring and maintaining an LDAP server on some Debian derivative, and I must say it took me a while. A long time ago, I was maintaining a bigger Samba-powered infrastructure, and I must say I spent quite a lot of time on it. I have similar stories about many of the services included with Skolelinux, and the main advantage I see in it is their out-of-the-box availability, making it quite competitive when it comes to managing a school's network, for example.

Of course, there is more to say about Skolelinux than the availability of the software included; its flexibility in various scenarios is something I can't wait to experiment with "in the wild" (I have only played with virtual machines so far). And I am sure there is a lot more I haven't discovered yet about it, being so new to the project.

What do you see as the disadvantages of Skolelinux / Debian Edu?

As usual, when it comes to Debian Blends, I see the biggest disadvantage as the lack of a numerous team dedicated to the project. Every day I see the same names in the changelogs, and I have a constant fear of the bus factor in this story. I'd like to see Debian Edu advertised more as an entry point into the Debian ecosystem, especially amongst newcomers and students. IMHO there is a lot of low-hanging fruit in terms of bug squashing, and enough opportunities to get a feeling of the Debian Project's dynamics. Not to mention it's a very fun blend to work on!

Derived from the previous statement is the delay in catching up with the main Debian release and documentation. This is common to all blends and derivatives, but it's an issue we can all work on.

Which free software do you use daily?

I can hardly imagine myself spending a day without Vim, since my daily routine covers writing code and hacking configuration files. I am a fan of the Awesome window manager (but I also like the Enlightenment project a lot!), and of Claws Mail due to its ease of use and very configurable behaviour. Recently I fell in love with Redshift, which helps me get through the night without headaches. Of course, there is much more stuff in this bag, but I'd need a blog of my own to cover it all!

Which strategy do you believe is the right one to use to get schools to use free software?

Well, on this field, I cannot do much more than experiment right now. So, being far from having a recipe for success, I can only assume that:

  • schools would like to get rid of proprietary software
  • students will love the openness of the system, and will want to experiment with it - maybe we need to harvest the native curiosity of teenagers more?
  • there is no "right one" when it comes to strategies, but it would be useful to have some success stories published somewhere, so other can get some inspiration from them (I know I'd promote them!)
  • more active promotion - talks, conferences, even small school lectures can do magical things if they encounter at least one person interested. Who knows who that person might be? ;-)

I also see some problems in getting Skolelinux into schools; for example, in our country we have a great deal of corruption issues, so it might be hard(er) to fight against proprietary solutions. Also, people who have relied on commercial software all their lives would be very hard to convert against their will.

12th June 2013

There is a certain cross-over between the Debian Edu / Skolelinux project and the Edubuntu project; for example, the LTSP packages in Debian are a joint effort between the projects. One person with a foot in both camps is Jonathan Carter, whom I am now happy to present to you.

Who are you, and how do you spend your days?

I'm a South-African free software geek who lives in Cape Town. My days vary quite a bit since I'm involved in too many things. As I'm getting older I'm learning how to focus a bit more :)

I'm also an Edubuntu contributor and I love when there are opportunities for the Edubuntu and Debian Edu projects to benefit from each other.

How did you get in contact with the Skolelinux / Debian Edu project?

I've been somewhat familiar with the project before, but I think my first direct exposure to the project was when I met Petter [Reinholdtsen] and Knut [Yrvin] at the Edubuntu summit in 2005 in London. They provided great feedback that helped the bootstrapping of Edubuntu. Back then Edubuntu (and even Ubuntu) was still very new and it was great getting input from people who have been around longer. I was also still very excitable and said yes to everything and to this day I have a big todo list backlog that I'm catching up with. I think over the years the relationship between Edubuntu and Debian-Edu has been gradually improving, although I think there's a lot that we could still improve on in terms of working together on packages. I'm sure we'll get there one day.

What do you see as the advantages of Skolelinux / Debian Edu?

Debian itself already has so many advantages. I could go on about it for pages, but in essence I love that it's a very honest project that puts its users first with no hidden agendas and also produces very high quality work.

I think the advantage of Debian Edu is that it makes many common set-up tasks simpler so that administrators can get up and running with a lot less effort and frustration. At the same time I think it helps to standardise installations in schools so that it's easier for community members and commercial suppliers to support.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I had to re-type this one a few times because I'm trying to separate "disadvantages" from "areas that need improvement" (which is what I originally rambled on about).

The biggest disadvantage I can think of is lack of manpower. The project could do so much more if there were more good contributors. I think some of the problems are external too. Free software and free content in education is a no-brainer but it takes some time to catch on. When you've been working with the same proprietary eco-system for years and have gotten used to it, it can be hard to adjust to some concepts in the free software world. It would be nice if there were more Debian Edu consultants across the world. I'd love to be one myself but I'm already so over-committed that it's just not possible currently.

I think the best short-term solution to that large-scale problem is for schools to be pro-active and share their experiences and grow their skills in-house. I'm often saddened to see how much money educational institutions spend on 3rd party solutions that they don't have access to after the service has ended and they could've gotten so much more value otherwise by being more self-sustainable and autonomous.

Which free software do you use daily?

My main laptop dual-boots between Debian and Windows 7. I was Windows free for years but started dual-booting again last year for some games which help me focus and relax (Starcraft II in particular). Gaming support on Linux is improving in leaps and bounds so I suppose I'll soon be able to regain that disk space :)

Besides that I rely on Icedove, Chromium, Terminator, Byobu, irssi, git, Tomboy, KVM, VLC and LibreOffice. Recently I've been torn on which desktop environment I like and I'm taking some refuge in Xfce while I figure that out. I like tools that keep things simple. I enjoy Python and shell scripting. I went to an Arduino workshop recently and it was awesome seeing how easy and simple the IDE software was to get up and running in Debian compared to the users running Windows and OS X.

I also use mc, which some people frown upon slightly. I got used to using Norton Commander in the early 90's and it stuck (I think the people who sneer at it are just jealous that they don't know how to use it :p)

Which strategy do you believe is the right one to use to get schools to use free software?

I think trying to force it is unproductive. I also think that in many cases it's appropriate for schools to use non-free systems and I don't think that there's any particular moral or ethical problem with that.

I do think though that free software can already solve so so many problems in educational institutions and it's just a shame not taking advantage of that.

I also think that some curricula need serious review. For example, some areas of the world rely heavily on very specific versions of MS Office, teaching students to parrot menu items instead of learning the general concepts. I think that's very unproductive because firstly, MS Office's interface changes drastically every few years and on top of that it also locks in a generation to a product that might not be the best solution for them.

To answer your question, I believe that the right strategy is to educate and inform, giving someone the information they require to make a decision that would work for them.

11th June 2013

When installing RedHat, Fedora, Debian and Ubuntu on some machines, the screen just turns black when Linux boots, either during installation or on first boot from the hard disk. I've seen it once in a while over the last few years, but only recently understood the cause. I've seen it on HP laptops, and on my latest acquisition, the Packard Bell laptop. The reason seems to lie in the wiring of some laptops. The system controlling the screen backlight is inverted, so when Linux tries to turn the brightness fully on, it ends up turning it off instead. I do not know which Linux drivers are affected, but this post is about the i915 driver used by the Packard Bell EasyNote LV, the Thinkpad X40 and many other laptops.

The problem can be worked around in two ways: either by adding i915.invert_brightness=1 as a kernel option, or by adding a file in /etc/modprobe.d/ to tell modprobe to add the invert_brightness=1 option when it loads the i915 kernel module. On Debian and Ubuntu, the latter can be done by running these commands as root:

echo options i915 invert_brightness=1 | tee /etc/modprobe.d/i915.conf
update-initramfs -u -k all

Since March 2012 there has been a mechanism in the Linux kernel to tell the i915 driver which hardware has this problem, and get the driver to invert the brightness setting automatically. To use it, one needs to add a row to the intel_quirks array in the driver source drivers/gpu/drm/i915/intel_display.c (look for "static struct intel_quirk intel_quirks"), specifying the PCI device number (vendor number 8086 is assumed) and the subdevice vendor and device number.

My Packard Bell EasyNote LV got this output from lspci -vvnn for the video card in question:

00:02.0 VGA compatible controller [0300]: Intel Corporation \
    3rd Gen Core processor Graphics Controller [8086:0156] \
    (rev 09) (prog-if 00 [VGA controller])
 Subsystem: Acer Incorporated [ALI] Device [1025:0688]
 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- \
    ParErr- Stepping- SERR- FastB2B- DisINTx+
 Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- \
    <TAbort- <MAbort- >SERR- <PERR- INTx-
 Capabilities: <access denied>
 Kernel driver in use: i915

The resulting intel_quirks entry would then look like this:

struct intel_quirk intel_quirks[] = {
       ...
        /* Packard Bell EasyNote LV11HC needs invert brightness quirk */
	{ 0x0156, 0x1025, 0x0688, quirk_invert_brightness },
       ...
}

According to the kernel module instructions (as seen using modinfo i915), information about hardware needing the invert_brightness flag should be sent to the dri-devel (at) lists.freedesktop.org mailing list to reach the kernel developers. But my email about the laptop, sent 2013-06-03, has not yet shown up in the web archive for the mailing list, so I suspect they do not accept emails from non-subscribers. Because of this, I also sent my patch to the Debian bug tracking system as BTS report #710938, to make sure the patch is not lost.

Unfortunately, it is not enough to fix the kernel to get laptops with this problem working properly with Linux. If you use Gnome, your worries should be over at this point. But if you use KDE, there is something in KDE ignoring the invert_brightness setting and turning on the screen during login. I've reported it to Debian as BTS report #711237, and have no idea yet how to figure out exactly which subsystem is doing this. Perhaps you can help? Perhaps you know what the Gnome developers did to handle this, and this can give a clue to the KDE developers? Or do you know where in KDE the screen brightness is changed during login? If so, please update the BTS report (or get in touch if you do not know how to update BTS).

Update 2013-07-19: The correct fix for this machine seems to be acpi_backlight=vendor, to disable ACPI backlight support completely, as the ACPI information on the machine is trash and it is better to leave it to the intel video driver to control the screen backlight.
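
For reference, one way to make such a kernel option permanent on Debian with GRUB 2 is to add it to the default kernel command line (a sketch, assuming a standard /etc/default/grub; the update above only names the option):

# append the option to the default kernel command line, then
# regenerate /boot/grub/grub.cfg
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&acpi_backlight=vendor /' /etc/default/grub
update-grub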

Tags: debian, english.
10th June 2013

The third Wheezy based alpha release of Debian Edu was wrapped up today. This is the release announcement:

New features for Debian Edu 7.0.0 alpha2 released 2013-06-10

These are the release notes for Debian Edu / Skolelinux 7.0.0 edu alpha2, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages, more are available from the Debian archive, and schools can choose between KDE, Gnome, LXDE and Xfce desktop environments.

This is the third test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

Software updates

  • Iceweasel was updated from 10 to 17. (DSA 2699-1)
  • Updated libxv (DSA-2674), libxvmc (DSA-2675), libxfixes (DSA-2676), libxrender (DSA-2677), mesa (DSA-2678), xserver-xorg-video-openchrome (DSA-2679), libxt (DSA-2680), libxcursor (DSA-2681), libxext (DSA-2682), libxi (DSA-2683), libxrandr (DSA-2684), libxp (DSA-2685), libxcb (DSA-2686), libfs (DSA-2687), libxres (DSA-2688), libxtst (DSA-2689), libxxf86dga (DSA-2690), libxinerama (DSA-2691), libxxf86vm (DSA-2692), libx11 (DSA-2693), chromium-browser (DSA-2695), gnutls26 (DSA-2697), wireshark (DSA-2700), krb5 (DSA-2701), telepathy-gabble (DSA-2702) and subversion (DSA-2703).
  • Switched xrdp on thin client servers to use tightvncserver instead of xvnc4.
  • Now install software oscilloscope xoscope by default.
  • Now install music tools gtick, lingot and pianobooster by default.

Other changes

  • The subnet-change script is now able to change all files needing a change on the main-server when changing the IP network used.
  • Updated translation of the installation.
  • New Romanian translation.
  • Fix security problem causing root and first user password to no longer show up in /var/cache/debconf/templates.dat.
  • Fix roaming workstation setup (Closed in libpam-mklocaluser/0.8, libpam-mklocaluser/0.8~deb7u1: #706753: libpam-mklocaluser: Fail to create local user during first login).
  • Made roaming workstation setup more robust in non-Debian Edu environments.
  • New script debian-edu-bless to transform a Debian installation to a Debian Edu profile.
  • Adjust Iceweasel setup to improve performance when $HOME is on NFS.
  • More testsuite tests.
  • Make automatic proxy configuration more robust.
  • Adjust GOsa² GUI configuration.
  • Update thin client and diskless workstation setup to work with LTSP in Wheezy.
  • Diskless workstations now run out of the box -- no need to set them up with GOsa².
  • Update IMAP server setup.
  • Fix login into Skolelinux Backup Tool (Closed in slbackup-php/0.4.4-1: #700257: slbackup-php: Fails to submit correctly entered password).

Known issues

  • DVD binary and source images are not yet ready.
  • No mass import of user account data in GOsa (ldif or csv) available yet (Open in gosa/2.7.4-4: #698840: gosa-plugin-ldapmanager: missing import feature).
  • Missing artwork for the KDE desktop (and probably a few others).
  • KDE Debian submenu lacks icons (Closed: #502192: menu-xdg: invents own icon names instead of using existing). This will remain unfixed.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 27bbcace407743382f3c42c08dbe8178
The SHA1SUM of this image is: e35f7d7908566cd3075375b3721fa10ee420d419

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

5th June 2013

Here is a call for help from the Debian Edu / Skolelinux project. We have two problems blocking the release of the Wheezy version we hope to get released soon. The two problems require someone with PHP skills, and we seem to lack anyone with both time and PHP skills in the project:

  1. It is impossible to log into the slbackup web interface (slbackup-php) using the root user and password. This is BTS report #700257. This used to work, but stopped working some time after Squeeze. Perhaps some obsolete PHP feature was used?
  2. It is not possible to "mass import" user lists in Gosa, neither using ldif nor CSV files. The feature was disabled after a major rewrite of Gosa, and needs to be ported to the new system. This is BTS report #698840.

If you can help us, please join us on IRC (#debian-edu on irc.debian.org) and provide patches via the BTS.

4th June 2013

It has been a while since my last English Debian Edu and Skolelinux interview, published last November. But the developers and translators are still pulling along to get the Wheezy based release out the door, and this time I managed to get an interview with one of the French translators in the project, Cédric Boutillier.

Who are you, and how do you spend your days?

I am 34 years old. I live near Paris, France. I am an assistant professor in probability theory. I spend my daytime teaching mathematics at the university and doing fundamental research in probability, in connection with combinatorics and statistical physics.

I have been involved in the Debian project for a couple of years and became a Debian Developer a few months ago. I am working on Ruby packaging, publicity and translation.

How did you get in contact with the Skolelinux / Debian Edu project?

I came to the Debian Edu project after a call for translation of the Debian Edu manual for the release of Debian Edu Squeeze. Since then, I have been working on updating the French translation of the manual.

I had the opportunity to install Debian Edu in a virtual machine when I was preparing localised versions of some screen shots for the manual. I was amazed to see that it worked out of the box, and how comprehensive the list of software installed by default was.

What amazed me was the complete network infrastructure ready to use out of the box, and the nice administration interface provided by GOsa². What also pleased me was the fact that among the software installed by default there were many "traditional" educational programs to learn languages, to count, to program... but also software to develop creativity and artistic skills with music (Ardour, Audacity) and movies/animation (I was especially thinking of Stopmotion).

I am following the development of Debian Edu and am hanging out on #debian-edu. Unfortunately, I don't have much time to get more involved in this beautiful project.

What do you see as the advantages of Skolelinux / Debian Edu?

For me, the main advantages of Skolelinux/Debian Edu are its community of experts and its precise documentation, as well as the fact that it provides a solution ready to use.

I would add also the fact that it is based on the rock solid Debian distribution, which ensures stability and provides a huge collection of educational free software.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Maybe the lack of manpower to do lobbying for the project. Sometimes, people who need to take decisions concerning IT do not have all the elements to properly evaluate free software solutions. The fact that support by a company may be difficult to find is probably a problem if the school does not have IT personnel.

One can find support from a company by looking at the wiki documentation, where some countries, like Germany or Norway, already have a number of companies providing support for Debian Edu. This list is easy to find from the manual. However, for other countries, like France, the list is empty. I guess that consultants offering support for Debian would be able to provide some support for Debian Edu as well.

Which free software do you use daily?

I am using the KDE Plasma Desktop. But the pieces of software I use most run in a terminal: Mutt and OfflineIMAP for emails, LaTeX for scientific documents, mpd for music. Vim is my editor of choice. I am also using the mathematical software Scilab and Sage (built from source, as it is not completely packaged for Debian yet).

Do you have any suggestions for teachers interested in using the free software in Debian to teach mathematics and statistics?

I do not have any "nice" recommendations for statistics. At our university, we use both R and Scilab to teach statistics and probabilistic simulations. For geometry, there are nice programs:

  • drgeo and kig to do constructions in planar geometry
  • kali to discover symmetry groups (the so-called wallpapers and frieze groups), although the interface looks a bit old.

I also like cantor, which provides a uniform interface to Scilab, Sage, Octave, etc.

Which strategy do you believe is the right one to use to get schools to use free software?

My suggestions would be to

  • advertise the reduction of costs when free software is used.
  • communicate about the quality of free software projects, using well known examples like Firefox, Thunderbird and OpenOffice.org/LibreOffice.
  • advertise the living and strong community around the project.
  • show that it is not more difficult to use than any other system.
1st June 2013

Included in Debian Edu / Skolelinux there is quite a lot of educational software, created to help teachers teach and pupils learn. We have tried to tag it all using the debtags use::learning and role::program, and using the debtags I was happy to be able to create a collage of the educational software packages installed by default, sorted by the debtag field. Here it is. Click on an image to learn more about the program.

field::arts

audacity childsplay denemo freebirth gcompris gimp hydrogen lilypond lmms rosegarden scribus solfege stopmotion tuxpaint

field::astronomy

celestia-gnome gpredict kstars planets stellarium xplanet

field::biology:structural

pymol

field::chemistry

atomix chemtool easychem gchempaint gdis ghemical gperiodic kalzium pymol [viewmol] xdrawchem

field::electronics

gcompris [gpsim]

field::geography

kgeography marble xplanet

field::linguistics

gcompris kanagram khangman klettres parley

field::mathematics

childsplay drgeo gcompris geogebra [geomview] grace graphmonkey graphthing kalgebra kbruch kig kmplot mathwar rocs scratch tuxmath xabacus

field::physics

gcompris step

field::TODO

blinken cgoban childsplay gcompris gnuchess gnugo gtans ktouch librecad scratch

In total, 61 applications. 3 of them lacked screen shots on screenshot.debian.net. If you know of some packages we should install by default, please let us know on IRC, #debian-edu on irc.debian.org, or our mailing list debian-edu@.

27th May 2013

Two days ago, I asked how I could install Linux on a Packard Bell EasyNote LV computer preinstalled with Windows 8. I found a solution, but am horrified by the obstacles put in the way of Linux users on a laptop with UEFI and Windows 8.

I never found out if the cause of my problems was the use of UEFI secure booting or fast boot. I suspect fast boot was the problem, causing the firmware to boot directly from the hard drive without considering any key presses and alternative devices, but I do not know the UEFI settings well enough to tell.

There is no way to install Linux on the machine in question without opening the box and disconnecting the hard drive! This is, as far as I can tell, the only way to get access to the firmware setup menu without accepting the Windows 8 license agreement. I am told (and found descriptions of how to do it) that it is possible to configure the firmware setup once booted into Windows 8. But as I believe the terms of that agreement are completely unacceptable, accepting the license was never an alternative. I do not enter agreements I do not intend to follow.

I feared I would have to return the laptops and ask for a refund, and waste many hours on this, but luckily there was a way to get it to work. But I would not recommend it to anyone planning to run Linux on it, and I have become sceptical of Windows 8 certified laptops. Is this the way Linux will be forced out of the market place, by making it close to impossible for "normal" users to install Linux without accepting the Microsoft Windows license terms? Or at least not without risking losing the warranty?

I've updated the Linux Laptop wiki page for the Packard Bell EasyNote LV, to ensure the next person does not have to struggle as much as I did to get Linux onto the machine.

Thanks to Bob Rosbag, Florian Weimer, Philipp Kern, Ben Hutchings, Michael Tokarev and others for feedback and ideas.

Tags: debian, english.
25th May 2013

I've run into quite a problem the last few days. I bought three new laptops for my parents and a few others. I bought a Packard Bell EasyNote LV to run Kubuntu on and use as their home computer. But I am completely unable to figure out how to install Linux on it. The computer is preinstalled with Windows 8, and I suspect it uses UEFI instead of a BIOS to boot.

The problem is that I am unable to get it to PXE boot, and unable to get it to boot the Linux installer from my USB stick. I have yet to try the DVD install, and still hope it will work. When I turn on the computer, there is no information on what buttons to press to get the normal boot menu. I expect to get some boot menu to select PXE or USB stick booting. When booting, it first asks for the language to use, then for some regional settings, and finally if I will accept the Windows 8 terms of use. As these terms are completely unacceptable to me, I have no other choice but to turn off the computer and try again to get it to boot the Linux installer.

I have gathered my findings so far on a Linlap page about the Packard Bell EasyNote LV model. If you have any idea how to get Linux installed on this machine, please get in touch or update that wiki page. If I can't find a way to install Linux, I will have to return the laptop to the seller and find another machine for my parents.

I wonder, is this the way Linux will be forced out of the market using UEFI and "secure boot" by making it impossible to install Linux on new Laptops?

Tags: debian, english.
17th May 2013

Debian Edu / Skolelinux is an operating system based on Debian intended for use in schools. It contains a turn-key solution for the computer network provided to pupils in primary schools. It provides the central server, network boot servers and desktop environments with heaps of educational software. The project was founded almost 12 years ago, 2001-07-02. If you want to support the project, which is in need of cash to fund developer gatherings and other project related activity, please donate some money.

A topic that comes up again and again on the Debian Edu mailing lists and elsewhere is the question of how to transform a Debian or Ubuntu installation into a Debian Edu installation. It isn't very hard, and last week I wrote a script to replicate the steps done by the Debian Edu installer.

The script, debian-edu-bless in the debian-edu-config package, will go through these six steps and transform an existing Debian Wheezy or Ubuntu (untested) installation into a Debian Edu Workstation:

  1. Add skolelinux related APT sources.
  2. Create /etc/debian-edu/config with the wanted configuration.
  3. Install debian-edu-install to load preseeding values and pull in our configuration.
  4. Preseed debconf database with profile setup in /etc/debian-edu/config, and run tasksel to install packages according to the profile specified in the config above, overriding some of the Debian automation machinery.
  5. Run debian-edu-cfengine-D installation to configure everything that could not be done using preseeding.
  6. Ask for a reboot to enable all the configuration changes.

There are some steps in the Debian Edu installation that can not be replicated like this. Disk partitioning and LVM setup, for example. So this script just assumes there is enough disk space to install all the needed packages.

The script was created to help a Debian Edu student working on setting up Raspberry Pi as a Debian Edu client, and using it he can take the existing Raspbian installation and transform it into a fully functioning Debian Edu Workstation (or Roaming Workstation, or whatever :).

The default setting in the script is to create a KDE Workstation. If an LXDE based Roaming workstation is wanted instead, modify the PROFILE and DESKTOP values at the top to look like this instead:

PROFILE="Roaming-Workstation"
DESKTOP="lxde"

The script could even become useful to set up Debian Edu servers in the cloud, by starting with a virtual Debian installation at some virtual hosting service and setting up all the services on first boot.

14th May 2013

The Debian Edu / Skolelinux project is making great progress and made its second Wheezy based release today. This is the release announcement:

New features for Debian Edu 7.0.0 alpha1 released 2013-05-14

These are the release notes for Debian Edu / Skolelinux 7.0.0 edu alpha1, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network.

This is the first test release based on Wheezy (which currently is not released yet). Basically this is an updated and slightly improved version compared to the Squeeze release.

Software updates

  • Install freemind (0.9.0) by default, and stop installing vym by default.
  • Install chromium (26.0.1410.43) by default.
  • Install goplay (0.5-1.1) to make golearn available by default.
  • Updated support for Japanese input methods, now based on ibus-anthy.

Other changes

  • Switched default file system from ext3 to ext4 for speed and reliability improvements.
  • Got rid of unwanted winbind daemon and PAM setup activated because of 706434.
  • Extended and improved the testsuite tests to detect more possible problems.
  • Corrected proxy handling to not set http_proxy to a bogus direct:// URL.
  • Corrected proxy setup for diskless workstations.
  • Corrected PXE setup to use our updated udebs during installation.
  • Made installation handling of low entropy level more robust.
  • Create larger partitions for Roaming workstations and Thin client servers, to make room for all the software installed.
  • Fix bug in Roaming workstation PAM setup, making it impossible to log in (706753).

Known issues

  • IP resolution for the local hostname gives a useless IPv6 address (705900). Only install libnss-myhostname on roaming workstations until it is fixed.
  • DVD images are not yet ready.
  • No mass import of user account data in GOsa (ldif or csv) available yet (698840).
  • Missing artwork for the KDE desktop (and probably a few others).
  • KDE Debian submenu lacks icons.
  • LXDE menu lacks entry for changing GOsa password (website). Installing gosa-desktop will be an option.
  • Backup configuration via web interface is impossible due to password submission problem (700257).

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 685ed76c1aa8e44b12d3fde21faf450b

The SHA1SUM of this image is: 6c874de157024da13e115bab29c068080a11ec4c

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

11th May 2013

In January, I announced a new IRC channel #debian-lego, for those of us in the Debian and Linux community interested in LEGO, the marvellous construction system from Denmark. We also created a wiki page to have a place to take notes and write down our plans and hopes. And several people showed up to help. I was very happy to see the effect of my call. Since the small start, we have a debtags tag hardware::hobby:lego for LEGO related packages, and now count 10 packages related to LEGO and Mindstorms:

brickos           alternative OS for LEGO Mindstorms RCX. Supports development in C/C++
leocad            virtual brick CAD software
libnxt            utility library for talking to the LEGO Mindstorms NX
lnpd              daemon for LNP communication with BrickOS
nbc               compiler for LEGO Mindstorms NXT bricks
nqc               Not Quite C compiler for LEGO Mindstorms RCX
python-nxt        python driver/interface/wrapper for the Lego Mindstorms NXT robot
python-nxt-filer  simple GUI to manage files on a LEGO Mindstorms NXT
scratch           easy to use programming environment for ages 8 and up
t2n               simple command-line tool for Lego NXT

Some of these are available in Wheezy, and all but one are currently available in Jessie/testing. leocad is so far only available in experimental.
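
To explore the tagged packages locally, the debtags tool can search by tag. A small sketch, assuming the debtags package is installed:

debtags search hardware::hobby:lego   # list the LEGO related packages
apt-get install nqc                   # then install, for example, nqc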

If you care about LEGO in Debian, please join us on IRC and help adding the rest of the great free software tools available on Linux for LEGO designers.

Tags: debian, english, lego, robot.
5th May 2013

When I woke up this morning, I was very happy to see that the release announcement for Debian Wheezy was waiting in my mail box. This is a great Debian release, and I expect to move my machines at home over to it fairly soon.

The new Debian release contains heaps of new stuff, and one program in particular makes me very happy to see included. The Scratch program, made famous by the Teach kids code movement, is included for the first time. Alongside similar programs like kturtle and turtleart, it allows for visual programming where syntax errors can not happen, and provides a friendly programming environment for learning to control the computer. Scratch will also be included in the next release of Debian Edu.

And now that Wheezy is wrapped up, we can wrap up the next Debian Edu/Skolelinux release too. The first alpha release went out last week, and the next should soon follow.

26th April 2013

The Debian Edu / Skolelinux project is still going strong and made its first Wheezy based release today. This is the release announcement:

New features for Debian Edu ~7.0.0 alpha0 released 2013-04-26

These are the release notes for Debian Edu / Skolelinux ~7.0.0 edu alpha0, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network.

This is the first test release based on Wheezy (which currently is not released yet). Basically this is an updated and slightly improved version compared to the Squeeze release.

Software updates

  • Everything which is new in Debian Wheezy, e.g.:
    • Linux kernel 3.2.x
    • Desktop environments KDE "Plasma" 4.8.4, GNOME 3.4, and LXDE 4 (KDE is installed by default; to choose GNOME or LXDE: see manual.)
    • Web browser Iceweasel 10 ESR
    • LibreOffice 3.5.4
    • LTSP 5.4.2
    • GOsa 2.7.4
    • CUPS print system 1.5.3
    • Educational toolbox GCompris 12.01
    • Music creator Rosegarden 12.04
    • Image editor Gimp 2.8.2
    • Virtual universe Celestia 1.6.1
    • Virtual stargazer Stellarium 0.11.3
    • Scratch visual programming environment 1.4.0.6
    • New version of debian-installer from Debian Wheezy, see installation manual for more details.
    • Debian Wheezy includes about 37000 packages available for installation.
    • More information about Debian Wheezy 7.0 is provided in the release notes and the installation manual.

Documentation

  • The (English) Debian Edu Wheezy Manual is fully translated to German, French, Italian and Danish. Partly translated versions exist for Norwegian Bokmål and Spanish.

LDAP related changes

  • Slight changes to some objects and acls to have more types to choose from when adding systems in GOsa. Now systems can be of type server, workstation, printer, terminal or netdevice.

Other changes

  • Whether LTSP clients start as diskless workstations or thin clients can be configured via a command line argument -- or individually by adding an entry in lts.conf or LDAP.
  • GOsa gui: Now some options that seemed to be available, but are non functional, are greyed out (or are not clickable). Some tabs are completely hidden to the end user, others even to the GOsa admin.

Regressions

  • No mass import of user account data in GOsa (ldif or csv) available yet.

No updated artwork

  • Updated artwork which is visible during installation, in the login screen and as desktop wallpaper is still missing or the same as we had for our Squeeze based release.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: c5e773ddafdaa4f48c409c682f598b6c

The SHA1SUM of this image is: 25934fabb9b7d20235499a0a51f08ce6c54215f2

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

16th April 2013

This year's first Skolelinux / Debian Edu developer gathering takes place the coming weekend in Trondheim. Details about the gathering can be found on the FRiSK wiki. The dates are the 19th to 21st of April 2013, and online participation for those unable to make it in person is very welcome. I plan to participate online myself, as I cannot leave Oslo this weekend.

The focus of the gathering is to work on the web pages and project infrastructure, and to continue the work on the Wheezy based Debian Edu release.

See you on IRC, #debian-edu on irc.debian.org, then?

3rd April 2013

Today the Isenkram package finally made it into the Debian archive, after lingering in NEW for many months. I uploaded it to the experimental suite 2013-01-27, and today it was accepted.

Isenkram is a system for suggesting to users what packages to install to work with a pluggable hardware device. The suggestion pops up when the device is plugged in. For example, if a Lego Mindstorms NXT is inserted, it will suggest installing the program needed to program the NXT controller. Give it a go, and report bugs and suggestions to the BTS. :)

26th March 2013

Would you like to help the environment and save money at the same time, without much sacrifice? A small step could be to change the font you use when printing.

Three years ago, Ars Technica reported how the University of Wisconsin-Green Bay changed their default font from Arial to Century Gothic to save money. The Century Gothic font uses 30% less toner than Arial to print the same text. In other words, you could cut your toner costs by 30% (or actually, increase your toner supply life time by more than 30%: using 30% less toner per page means a cartridge prints 1/0.7, i.e. roughly 43%, more pages) by simply changing the default font used in your prints.

But it is not quite obvious how much one will save by switching. The University of Wisconsin-Green Bay said it used $100,000 per year on ink and toner cartridges, according to a report from TwinCities.com, and expected to save between $5,000 and $10,000 per year by asking staff and students to use a different font. Not all PDFs and documents are created internally, and those from external sources will most likely still use a different font. Also, the Century Gothic font is slightly wider than Arial, and thus might use more sheets of paper to print the same text, so the total savings depend on the documents printed.

But it is definitely something to consider, if you want to reduce the amount of trash, decrease the amount of toner used in the world, and save some money in the process.

Update 2013-04-10: If you want to know how much ink/toner could be saved when switching between fonts, Inkfarm has a service to calculate the difference between font pairs. They also recommend which fonts to use to save ink. Check it out. :) While updating this blog post, I also came across a blog post from InkCloners, listing the fonts they recommend, with Century Gothic at the top.

Tags: english.
24th March 2013

A few days ago, during a discussion in EFN about interesting books to read about copyright and the data retention directive, a suggestion to read the 1968 short story Kodémus by Tore Åge Bringsværd came up. The text was only available in old paper books, and thus not easily available for current and future generations. Some of the people participating in the discussion contacted the author, and reported back 2013-03-19 that the author was OK with releasing the short story using a Creative Commons license. The text was quickly scanned and OCR-ed, and we were ready to start on the editing and typesetting.

As I already had some experience formatting text in my project to provide a Norwegian version of the Free Culture book by Lawrence Lessig, I chipped in and set up a DocBook processing framework to generate PDF, HTML and EPUB version of the short story. The tools to transform DocBook to different formats are already in my Linux distribution of choice, Debian, so all I had to do was to use the dblatex, dbtoepub and xmlto tools to do the conversion. After a few days, we decided to replace dblatex with xsltproc/fop (aka docbook-xsl), to get the copyright information to show up in the PDF and to get a nicer <variablelist> typesetting, but that is just a minor technical detail.
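
The conversions themselves are one-liners. A sketch, assuming the DocBook source is in a file named kodemus.xml (the actual file name in the repository may differ):

dblatex kodemus.xml              # DocBook to PDF (the initial approach)
dbtoepub kodemus.xml             # DocBook to EPUB
xmlto html-nochunks kodemus.xml  # DocBook to single-file HTML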

There were a few challenges, of course. We want to typeset the short story to look like the original, and that requires fairly good control over the layout. The original short story has three parts/scenes separated by a single horizontally centred star (*), and the paragraphs do not contain only flowing text, but dialogues and text that starts on a new line in the middle of a paragraph.

I initially solved the first challenge by using a paragraph with a single star in it, i.e. <para>*</para>, as it made sure a placeholder indicated where the scene shifted. This did not look too good without the centring. The next approach was to create a new preprocessor directive <?newscene?>, mapping to "<hr/>" for HTML and "<fo:block text-align="center"><fo:leader leader-pattern="rule" rule-thickness="0.5pt"/></fo:block>" for FO/PDF output (I did not try to implement this in dblatex, as we had already switched away from it by this time). The HTML XSL file looked like this:

<?xml version='1.0'?> 
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>
  <xsl:template match="processing-instruction('newscene')">
    <hr/>
  </xsl:template>
</xsl:stylesheet> 

And the FO/PDF XSL file looked like this:

<?xml version='1.0'?> 
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'
  xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <xsl:template match="processing-instruction('newscene')">
    <fo:block text-align="center">
      <fo:leader leader-pattern="rule" rule-thickness="0.5pt"/>
    </fo:block>
  </xsl:template>
</xsl:stylesheet> 

Finally, I came across the <bridgehead> tag, which seems to be a good fit for the task at hand, and I replaced <?newscene?> with <bridgehead>*</bridgehead>. It isn't centred, but we can fix that with some XSL rule if the current visual layout isn't enough.

I did not find a good DocBook compliant way to solve the linebreak/paragraph challenge, so I ended up creating a new processor directive <?linebreak?>, mapping to <br/> in HTML, and <fo:block/> in FO/PDF. I suspect there are better ways to do this, and welcome ideas and patches on github. The HTML XSL file now looks like this:

<?xml version='1.0'?> 
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>
  <xsl:template match="processing-instruction('linebreak')">
    <br/>
  </xsl:template>
</xsl:stylesheet> 

And the FO/PDF XSL file now looks like this:

<?xml version='1.0'?> 
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'
  xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <xsl:template match="processing-instruction('linebreak')">
    <fo:block/>
  </xsl:template>
</xsl:stylesheet> 

One unsolved challenge is our wish to expose different ISBN numbers per publication format, while keeping all of them in some conditional structure in the DocBook source. No idea how to do this, so we ended up listing all the ISBN numbers next to their format on the colophon page.
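
One approach that might have worked is DocBook profiling: mark each ISBN with a condition attribute (say condition="pdf" or condition="epub") and filter the source per format before formatting. A sketch, where the attribute values and file names are made up and the stylesheet path is the one shipped by Debian's docbook-xsl package:

xsltproc --stringparam profile.condition epub \
    /usr/share/xml/docbook/stylesheet/docbook-xsl/profiling/profile.xsl \
    kodemus.xml > kodemus-epub.xml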

If you want to see the finished result, check out the source repository at github (future/new/official repository). We expect it to be ready and announced in a few days.

17th March 2013

Via twitter I just discovered that Pcwizz has done a video review on Youtube of Skolelinux / Debian Edu version 6. He installed the standalone profile, and the video shows a walk-through of the menu content, a demonstration of a few programs, and his view of our distribution.

There are also some really nice quotes (transcribed by me, so I might have heard wrong). While looking through the Graphics menu:

"Basically everything you ever need in a school environment."

And as a general evaluation of the entire distribution:

"So, yeah, a bit bloated. It kept all the Debian stuff in there, just to keep it nice and GNU. So, I do not want to go on about it, but lets give it 7 out of 10. I am not going to use it. That is because I am not deploying a school network. There may be some mythical feature to help you deploy Skolelinux on a school network."

Too bad he did not test the server profile and discover the PXE installation option. It makes it possible to install only the main server from CD and the rest of the machines via the net, and might be the mythical feature he talks about. :)

While looking through the menus, there is also this funny comment about the part of the K menu generated from the Debian menu subsystem:

"[The K menu] have a special Debian section for software that no-one is going to look at, because it contain lots of junky stuff that you actually don't need in the education distribution, but have just been included because it isn't stripped out for some reason."

I guess it is yet another argument for merging the Debian menu and Gnome/KDE desktop menu entries into one consistent menu system instead of two incomplete and partly inconsistent menu systems.

The entire video is available below for those accepting iframe embedding:

8th March 2013

Last Sunday, 2013-03-03, Holger Levsen announced the first update of Skolelinux / Debian Edu based on Debian Squeeze. It is the first update since the initial release on 2012-03-11. This is the release announcement email from Holger:

Hi,

it's my pleasure to announce the immediate availability of Debian Edu 6.0.7+r1 ("Debian Edu Squeeze").

Debian Edu 6.0.7+r1 is an incremental update to Debian Edu 6.0.4+r0, containing all the changes between Debian 6.0.4 and 6.0.7 as well as Debian Edu specific bugfixes and enhancements. See below (in this mail) for the full list of (edu) changes. Please see http://www.debian.org/News/2012/20120311 for more information on "Debian Edu Squeeze".

Images are available for download at http://ftp.skolelinux.org/skolelinux-cd/

md5sums:
1fe79eb4f0f9ae1c58fc318e26cc1e2e debian-edu-6.0.7+r1-CD.iso
a6ddd924a8bd9a1b5ca122e8fe1c34ec debian-edu-6.0.7+r1-DVD.iso
ac6c72cd7925ccec51bfbf58e2a7c69c debian-edu-6.0.7+r1-source-DVD.iso

sha1sums:
a4b58233b672a99c7df8dc24fb6de3327654a5c3 debian-edu-6.0.7+r1-CD.iso
9b524915e0ff2aa793f13d93123e5bd2bab2dbaa debian-edu-6.0.7+r1-DVD.iso
43997614893fc5e9e59ad6ce066b05d07fd836fa debian-edu-6.0.7+r1-source-DVD.iso

These images are suitable for amd64+i386.

Changes for Debian Edu 6.0.7+r1 Codename "Squeeze", released 2013-03-03:

  • sitesummary was updated from 0.1.3 to 0.1.8
    • Make Nagios configuration more robust and efficient
    • Comply with 3.X kernel
  • debian-edu-doc from 1.4~20120310~6.0.4+r0 to 1.4~20130228~6.0.7+r1
    • Minor updates from the wiki
    • Danish translation now complete
  • debian-edu-config from 1.453 to 1.455
    • Fix /etc/hosts for LTSP diskless workstations. Closes: #699880
    • Make ltsp_local_mount script work for multiple devices.
    • Correct Kerberos user policy: don't expire password after 2 days. Closes: #664596
    • Handle '#' characters in the root or first users password. Closes: #664976
    • Fixes for gosa-sync:
      • Don't fail if password contains "
      • Don't disclose new password string in syslog
    • Fixes for gosa-create:
      • Invalidate libnss cache before applying changes
      • Multiple failures during mass user import into GOsa²
      • gosa-netgroups plugin: don't erase entries of attribute type "memberNisNetgroup". Closes: #687256
      • First user now uses the same Kerberos policy as all other users
    • Add Danish web page
  • debian-edu-install from 1.528 to 1.530
    • Improve preseeding support and documentation

End-user documentation in English is available at http://wiki.debian.org/DebianEdu/Documentation/Squeeze/ - translations to French, Italian, Danish and German are available in the debian-edu-doc package. (Other languages could use your help!)

If you want to contribute to Debian Edu, please join our mailinglist debian-edu@lists.debian.org!

I am very happy to see the fruits of a year of hard work. :)

3rd March 2013

Do you want to set up your own TV station, schedule videos and broadcast them on the air? Using free software? With video on demand support using free and open standards? Including a web based video stream as well? And administrate it all in your web browser from anywhere in the world? For a few years now, the Norwegian public access TV channel Frikanalen has been building a system to do just this. The source code for the solution is licensed under the GNU LGPL, and available from github.

The idea is simple. You upload a video file over the web, and attach meta information to the file. You select a time slot in the program schedule, and when the time comes, it is played on the air and in the web stream. It is also made available in a video on demand solution for anyone to watch outside its scheduled time. All you need to run a TV station - using your web browser.

There are several parts to this web based solution. I'll mention the three most important ones. The first part is the database of videos and the schedule. This is written in Django and includes a REST API. The current database is SQLite, but the plan is to migrate it to PostgreSQL. At the moment this system can be tested on beta.frikanalen.tv. The second part is the video playout, taking the schedule information from the database and providing a video stream to broadcast. This is done using CasparCG from SVT and Media Lovin' Toolkit. Video signal distribution is handled using Open Broadcast Encoder. The third part is the converter, handling the transformation of uploaded video files to a format useful for broadcasting, streaming and video on demand. It is still very much work in progress, so it is not yet decided what it will end up using. Note that the source of the latter two parts is not yet pushed to github. The lead author wants to clean them up a bit more first.

The development is coordinated on the #frikanalen IRC channel (irc.freenode.net), and discussed on the frikanalen mailing list. The lead developer is Benjamin Bruheim (phed on IRC). Anyone is welcome to participate in the development.

27th February 2013

Dr. Richard Stallman, founder of the Free Software Foundation, is giving a talk in Oslo on March 1st 2013, 17:00 to 19:00. The event is public and organised by the Norwegian Unix Users Group (NUUG) (where I am the chair of the board) and the Norwegian Open Source Competence Center. The title of the talk is «The Free Software Movement and GNU», with this description:

The Free Software Movement campaigns for computer users' freedom to cooperate and control their own computing. The Free Software Movement developed the GNU operating system, typically used together with the kernel Linux, specifically to make these freedoms possible.

The meeting is open for everyone. Due to space limitations, the doors open for NUUG members at 16:15, and for everyone else at 16:45. I am really curious how many will show up. See the event page for the location details.

15th February 2013

If you, like me, want an updated map for your Garmin GPS, there is now a great source of free maps available from Frikart. To download a map, just click on the country you are interested in, and download the map type you want. There are 8 different maps available, using different colours and data selections. Pick one of Roadmap, Topo Summer, Topo Winter, Roadmap II, Topo Summer II, Topo Winter II, "Trails - overlay map" and "Cross country - overlay map" (see the web page for descriptions).

The maps are updated weekly, so if you find something wrong in the map you can just edit the OpenStreetMap map source (anyone can contribute) and fetch a fixed map a week later. :)

Tags: english, kart.
12th February 2013

Here in Norway, electronic invoices are spreading, and the solution promoted by the Norwegian government requires that invoices are sent through one of the approved facilitators; it is not possible to send electronic invoices without an agreement with one of them. This seems like a needless limitation on transferring invoice information between buyers and sellers. My preferred solution would be to just transfer the invoice information directly between seller and buyer, for example using SMTP, or some HTTP based protocol like REST or SOAP. But this might also be overkill, as the "electronic" information can be transferred using paper invoices too, using a simple bar code. My bar code encoding of choice would be QR codes, as this encoding can be read by any smart phone out there. The content of the code could be anything, but I would go with the vCard format, as it too is supported by a lot of computer equipment these days.

The vCard format supports extensions, and the invoice specific information can be included using such extensions. For example, an invoice from SLX Debian Labs (picked because we ask for donations to the Debian Edu project and thus have bank account information publicly available) for NOK 1000.00 could have these extra fields:

X-INVOICE-NUMBER:1
X-INVOICE-AMOUNT:NOK1000.00
X-INVOICE-KID:123412341234
X-INVOICE-MSG:Donation to Debian Edu
X-BANK-ACCOUNT-NUMBER:16040884339
X-BANK-IBAN-NUMBER:NO8516040884339
X-BANK-SWIFT-NUMBER:DNBANOKKXXX

The X-BANK-ACCOUNT-NUMBER field was proposed in a stackoverflow answer regarding how to put bank account information into a vCard. For payments in Norway, either X-INVOICE-KID (payment ID) or X-INVOICE-MSG could be used to pass on information to the seller when paying the invoice.

The complete vCard could look like this:

BEGIN:VCARD
VERSION:2.1
ORG:SLX Debian Labs Foundation
ADR;WORK:;;Gunnar Schjelderups vei 29D;OSLO;;0485;Norway
URL;WORK:http://www.linuxiskolen.no/slxdebianlabs/
EMAIL;PREF;INTERNET:sdl-styret@rt.nuug.no
REV:20130212T095000Z
X-INVOICE-NUMBER:1
X-INVOICE-AMOUNT:NOK1000.00
X-INVOICE-MSG:Donation to Debian Edu
X-BANK-ACCOUNT-NUMBER:16040884339
X-BANK-IBAN-NUMBER:NO8516040884339
X-BANK-SWIFT-NUMBER:DNBANOKKXXX
END:VCARD

The resulting QR code created using qrencode would look like this, and should be readable (and thus checkable) by any smart phone, or by for example the zbar bar code reader, and fed right into the approval and accounting system.
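
To generate the PNG with the QR code, something along these lines can be used. This is just a sketch, not the exact command used for the image above; it assumes the vCard has been saved in invoice.vcf and that the qrencode tool is installed:

#!/usr/bin/python3
# Sketch: feed the vCard to the qrencode command line tool to get a
# PNG with the QR code.  The invoice.vcf file name is an assumption.
import subprocess

with open('invoice.vcf', 'rb') as f:
    subprocess.check_call(['qrencode', '-o', 'invoice.png'], stdin=f)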

The extension fields will most likely not show up in any normal vCard reader, so those parts would have to go directly into a system handling invoices. I am a bit unsure how vCards without name parts are handled, but a simple test indicates that this works just fine.

Update 2013-02-12 11:30: Added KID to the proposal based on feedback from Sturle Sunde.

Tags: english, standard.
10th February 2013

With kids in the house, one challenge is getting them to sleep during the night and wake up when it is morning. I mean, when I believe it is morning, and not two hours earlier. In our household we have decided that 07:00 is the turning point, but getting the kids to sleep until 07:00 is a small challenge every day. They have adapted quite well, and rarely wake up at 05:00 any more, but sometimes wake up at times like 05:50, 06:15, 06:30 or 06:45, and it is hard to put the awake one back to bed without disturbing and waking the rest. And I understand perfectly well that they sometimes fail to sleep until 07:00, as there is no way for them to know if it is before or after the magic moment without coming and asking us parents.

But yesterday I came up with a method to solve this problem. It involves home automation. A few years ago I bought a Tellstick and RF switches at the local Clas Ohlson shop, allowing me to control lights and other electrical gadgets from my Linux server. When I moved from the old flat to a small house, I put away all this equipment, as most of the lighting in the house did not use wall sockets and thus was not easy to connect to the gadgets I had. But recently I bought a Tellstick Net to be able to read sensor input as well as control power sockets. I want to control ovens in the basement to keep the pipes from freezing, and monitor the humidity to detect flooding. The default setup for the Tellstick Net is to be controlled by the vendor web service, which to me is a security problem, but it is also possible to build one's own firmware with local access instead of being controlled by a Swedish company, thanks to the release of the GPL licensed firmware source code. I plan to get that running before I let it control anything important. But while working on this, an idea to make it easier for the kids came to me yesterday. We can set up a night light controlled by the computer, and turn it on automatically at 07:00. The kids can then check the light in the morning to know if they are supposed to get up or not. They joined me in setting everything up, and I repeated the concept several times before bedtime to make sure they remembered to check the light before getting up in the morning.
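
The switching itself can be automated with something very simple. Here is a rough sketch of the kind of script cron could run at 07:00; it assumes the tdtool command from telldus-core is available and that the night light is configured as device 1 in /etc/tellstick.conf:

#!/usr/bin/python3
# Sketch: turn the night light on using tdtool from telldus-core.
# The device id 1 is an assumption; use the id assigned to the
# night light in /etc/tellstick.conf.
import subprocess

subprocess.check_call(['tdtool', '--on', '1'])

A corresponding cron job using '--off' can turn the light off again later in the morning.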

We tested it this morning, and all the kids stayed in bed until after 07:00, and every one of them commented on the fact that the "morning light" was turned on and signalled that the morning had arrived. So this looks like a success, and I am excited to see how this develops the next few days. :) I really hope this can allow us all to sleep a bit longer in the morning.

A nice advantage of this setup is that we can remotely control when to tell the kids to get up. We do not have to wait until 07:00, and can also delay it if we want to.

Tags: english.
2nd February 2013

My last bitcoin related blog post mentioned that the new bitcoin package for Debian was waiting in NEW. It was accepted by the Debian ftp-masters 2013-01-19, and has been available in unstable since then. It was automatically copied to Ubuntu, and is available in their Raring version too.

But there is a strange problem with the build that blocks this new version from being available on the i386 and kfreebsd-i386 architectures. For some strange reason, the autobuilders in Debian fail to run the test suite on these architectures (BTS #672524). We are so far unable to reproduce it when building manually, and no-one has been able to propose a fix. If you have an idea what is failing, please let us know via the BTS.

One feature of the bitcoin client that is annoying me, because I often run low on disk space, is the fact that the client will exit if it runs short on space (BTS #696715). So make sure you have enough disk space when you run it. :)

As usual, if you use bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22nd January 2013

Yesterday, I asked for testers for my prototype for making Debian better at handling pluggable hardware devices, which I set out to create earlier this month. Several valuable testers showed up, and made me really want to open up the development to more people. But before doing that, I wanted to come up with a sensible name for the project. Today I finally decided on a new name, and have renamed the project from hw-support-handler to the new name. In the process, I moved the source to git and made it available as a collab-maint repository in Debian. The new name? It is Isenkram. To fetch and build the latest version of the source, use

git clone http://anonscm.debian.org/git/collab-maint/isenkram.git
cd isenkram && git-buildpackage -us -uc

I have not yet adjusted all files to use the new name. If you want to hack on the source or improve the package, please go ahead. But please talk to me first on IRC or via email before you make major changes, to make sure we do not step on each other's toes. :)

If you wonder what 'isenkram' is, it is a Norwegian word for iron stuff, typically meaning tools, nails, screws, etc. Typical hardware stuff, in other words. I've been told it is the Norwegian variant of the German word eisenkram, for those that are familiar with that word.

Update 2013-01-26: Added -us -uc to the build instructions, to avoid confusing people with an error from the signing process.

Update 2013-01-27: Switch to HTTP URL for the git clone argument to avoid the need for authentication.

21st January 2013

Early this month I set out to try to improve the Debian support for pluggable hardware devices. Now my prototype is working, and it is ready for a larger audience. To test it, fetch the source from the Debian Edu subversion repository, build and install the package. You might have to log out and in again to activate the autostart script.

The design is simple:

  • Add a desktop entry in /usr/share/autostart/ causing a program hw-support-handlerd to start when the user logs in.
  • This program listens for kernel events about new hardware (directly from the kernel like udev does, not using HAL dbus events as I initially did); a rough sketch of this is shown after this list.
  • When new hardware is inserted, look up the hardware modalias in the APT database, a database available via HTTP and a database available as part of the package.
  • If a package is mapped to the hardware in question, the package isn't installed yet and this is the first time the hardware was plugged in, show a desktop notification suggesting to install the package or packages.
  • If the user clicks on the 'install package now' button, ask aptdaemon via the PackageKit API to install the required package.
  • aptdaemon asks for the root or sudo password, and installs the package while showing progress information in a window.
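
To illustrate how the listening and lookup steps could fit together, here is a rough sketch. It is not the actual prototype code; it assumes the pyudev Python bindings are installed, and uses a small hard coded mapping instead of the real databases:

#!/usr/bin/python3
# Rough sketch of the event loop: listen for kernel uevents using
# pyudev and match the modalias of newly added devices against a
# mapping from modalias globs to package names.  The mapping below
# is a hard coded stand-in for the real databases.
import fnmatch
import pyudev

modalias_map = {
    'usb:v04D8pF8DA*': 'colorhug-client',
    'pci:v*d*sv*sd*bc06sc07i*': 'pcmciautils',
}

def pkgs_for_modalias(modalias):
    return [pkg for glob, pkg in modalias_map.items()
            if fnmatch.fnmatchcase(modalias, glob)]

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.start()
for device in iter(monitor.poll, None):
    if device.action != 'add':
        continue
    modalias = device.get('MODALIAS')
    if not modalias:
        continue
    for pkg in pkgs_for_modalias(modalias):
        # The prototype would show a desktop notification here and
        # hand the install job to aptdaemon via the PackageKit API.
        print("hardware matched package", pkg)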

I still need to come up with a better name for the system. Here are some screen shots showing the prototype in action. First the notification, then the password request, and finally the request to approve all the dependencies. Sorry for the Norwegian Bokmål GUI.





The prototype still needs to be improved with longer timeouts, but is already useful. The database of hardware to package mappings also needs more work. It is currently compatible with the Ubuntu way of storing such information in the package control file, but could be changed to use other formats instead of or in addition to the current method. I've dropped the use of discover for this mapping, as the modalias approach is more flexible and easier to use on Linux, as long as the Linux kernel exposes its modalias strings directly.

Update 2013-01-21 16:50: Due to popular demand, here is the command required to check out and build the source: Use 'svn checkout svn://svn.debian.org/debian-edu/trunk/src/hw-support-handler/; cd hw-support-handler; debuild'. If you lack debuild, install the devscripts package.

Update 2013-01-23 12:00: The project is now renamed to Isenkram and the source moved from the Debian Edu subversion repository to a Debian collab-maint git repository. See build instructions for details.

19th January 2013

This Christmas my trusty old laptop died. It died quietly and suddenly in bed. With a quiet whimper, it went completely silent and black. The power button was no longer able to turn it on. It was an IBM Thinkpad X41, and the best laptop I ever had. Better than the Thinkpads X30, X31, X40, X60, X61 and X61S. Far better than the Compaq I had before that. Now I need to find a replacement. To keep going during Christmas, I moved the one year old SSD disk to my old X40, where it fitted (the only machine I had left that could use it), but that is not a durable solution.

My laptop needs are fairly modest. This is my wishlist from when I got a new one more than 10 years ago. It still holds true. :)

  • Lightweight (around 1 kg) and small volume (preferably smaller than A4).
  • Robust, it will be in my backpack every day.
  • Three button mouse and a mouse pin instead of touch pad.
  • Long battery life. Preferably a week.
  • Internal WIFI network card.
  • Internal Twisted Pair network card.
  • Some USB slots (2-3 is plenty)
  • Good keyboard - similar to the Thinkpad.
  • Video resolution at least 1024x768, with size around 12" (A4 paper size).
  • Hardware supported by Debian Stable, ie the default kernel and X.org packages.
  • Quiet, preferably fan free (or at least not using the fan most of the time).

You will notice that there are no RAM and CPU requirements in the list. The reason is simply that the specifications of laptops over the last 10-15 years have been sufficient for my needs, so I have to look at other features to choose my laptop. But are laptops as robust as my X41 still being made? The Thinkpad X60/X61 proved to be less robust, and Thinkpads seem to have been heading in the wrong direction since Lenovo took over. But I've been told that the X220 and X1 Carbon might still be useful.

Perhaps I should rethink my needs, and look for a pad with an external keyboard? I'll have to check the Linux Laptops site for well-supported laptops, or perhaps just buy one preinstalled from one of the vendors listed on the Linux Pre-loaded site.

Tags: debian, english.
18th January 2013

Sometimes I try to figure out which Iceweasel browser plugin to install to get support for a given MIME type. Thanks to specifications done by Ubuntu and Mozilla, it is possible to do this in Debian. Unfortunately, not very many packages provide the needed meta information yet. Anyway, here is a small script to look up all browser plugin packages announcing their MIME support using this specification:

#!/usr/bin/python
import sys
import apt

def pkgs_handling_mimetype(mimetype):
    cache = apt.Cache()
    cache.open(None)
    thepkgs = []
    for pkg in cache:
        version = pkg.candidate
        if version is None:
            version = pkg.installed
        if version is None:
            continue
        record = version.record
        if 'Npp-MimeType' not in record:
            continue
        mime_types = record['Npp-MimeType'].split(',')
        for t in mime_types:
            t = t.strip()
            if t == mimetype:
                thepkgs.append(pkg.name)
    return thepkgs

mimetype = "audio/ogg"
if 1 < len(sys.argv):
    mimetype = sys.argv[1]
print "Browser plugin packages supporting %s:" % mimetype
for pkg in pkgs_handling_mimetype(mimetype):
    print "  %s" % pkg

It can be used like this to look up a given MIME type:

% ./apt-find-browserplug-for-mimetype 
Browser plugin packages supporting audio/ogg:
  gecko-mediaplayer
% ./apt-find-browserplug-for-mimetype application/x-shockwave-flash
Browser plugin packages supporting application/x-shockwave-flash:
  browser-plugin-gnash
%

In Ubuntu this mechanism is combined with support in the browser itself to query for plugins and propose to install the needed packages. It would be great if Debian supported such a feature too. Is anyone working on adding it?

Update 2013-01-18 14:20: The Debian BTS request for iceweasel support for this feature is #484010 from 2008 (and #698426 from today). Lack of manpower and the wish for a different design are the reasons this feature is not yet in iceweasel in Debian.

Tags: debian, english.
16th January 2013

The DEP-11 proposal to add AppStream information to the Debian archive would make it possible for a desktop application to propose to the user some package to install to gain support for a given MIME type, font, library etc. that is currently missing. With such a mechanism in place, it would be possible for the desktop to automatically propose and install leocad if some LDraw file is downloaded by the browser.

To get some idea about the current content of the archive, I decided to write a simple program to extract all .desktop files from the Debian archive and look up the claimed MIME support there. The result can be found on the Skolelinux FTP site. Using the collected information, it becomes possible to answer the question in the title. Here are the 20 most supported MIME types in Debian stable (Squeeze), testing (Wheezy) and unstable (Sid). The complete list is available from the link above.
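
The extraction script itself is not included here, but the counting part can be done along these lines. This is a simplified sketch; the desktop-files/ directory holding already extracted .desktop files is an assumption for the example:

#!/usr/bin/python3
# Simplified sketch: count MIME types claimed in MimeType= lines of
# extracted .desktop files.  The desktop-files/ directory name is an
# assumption.
import collections
import glob

counts = collections.Counter()
for path in glob.glob('desktop-files/**/*.desktop', recursive=True):
    with open(path, errors='replace') as f:
        for line in f:
            if line.startswith('MimeType='):
                for mime in line[len('MimeType='):].strip().split(';'):
                    if mime:
                        counts[mime] += 1

for mime, count in counts.most_common(20):
    print('%7d %s' % (count, mime))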

Debian Stable:

  count MIME type
  ----- -----------------------
     32 text/plain
     30 audio/mpeg
     29 image/png
     28 image/jpeg
     27 application/ogg
     26 audio/x-mp3
     25 image/tiff
     25 image/gif
     22 image/bmp
     22 audio/x-wav
     20 audio/x-flac
     19 audio/x-mpegurl
     18 video/x-ms-asf
     18 audio/x-musepack
     18 audio/x-mpeg
     18 application/x-ogg
     17 video/mpeg
     17 audio/x-scpls
     17 audio/ogg
     16 video/x-ms-wmv

Debian Testing:

  count MIME type
  ----- -----------------------
     33 text/plain
     32 image/png
     32 image/jpeg
     29 audio/mpeg
     27 image/gif
     26 image/tiff
     26 application/ogg
     25 audio/x-mp3
     22 image/bmp
     21 audio/x-wav
     19 audio/x-mpegurl
     19 audio/x-mpeg
     18 video/mpeg
     18 audio/x-scpls
     18 audio/x-flac
     18 application/x-ogg
     17 video/x-ms-asf
     17 text/html
     17 audio/x-musepack
     16 image/x-xbitmap

Debian Unstable:

  count MIME type
  ----- -----------------------
     31 text/plain
     31 image/png
     31 image/jpeg
     29 audio/mpeg
     28 application/ogg
     27 image/gif
     26 image/tiff
     26 audio/x-mp3
     23 audio/x-wav
     22 image/bmp
     21 audio/x-flac
     20 audio/x-mpegurl
     19 audio/x-mpeg
     18 video/x-ms-asf
     18 video/mpeg
     18 audio/x-scpls
     18 application/x-ogg
     17 audio/x-musepack
     16 video/x-ms-wmv
     16 video/x-msvideo

I am told that PackageKit can provide an API to access the kind of information mentioned in DEP-11. I have not yet had time to look at it, but hope the PackageKit people in Debian are on top of these issues.

Update 2013-01-16 13:35: Updated numbers after discovering a typo in my script.

Tags: debian, english.
15th January 2013

Yesterday, I wrote about the modalias values provided by the Linux kernel, following my hope for better dongle support in Debian. Using this knowledge, I have tested how modalias values attached to package names can be used to map packages to hardware. This allows the system to look up and suggest relevant packages when I plug some new hardware into my machine, and replaces discover and discover-data as the database used to map hardware to packages.

I created a modaliases file with entries like the following, containing a package name, a kernel module name (if relevant, otherwise the package name) and globs matching the relevant hardware modalias values.

Package: package-name
Modaliases: module(modaliasglob, modaliasglob, modaliasglob)

It is fairly trivial to write code to find the relevant packages for a given modalias value using this file; a sketch follows the examples below.

An entry like this would suggest the video and picture application cheese for many USB web cameras (interface bus class 0E01):

Package: cheese
Modaliases: cheese(usb:v*p*d*dc*dsc*dp*ic0Eisc01ip*)

An entry like this would suggest the pcmciautils package when a CardBus bridge (bus class 0607) PCI device is present:

Package: pcmciautils
Modaliases: pcmciautils(pci:v*d*sv*sd*bc06sc07i*)

An entry like this would suggest the package colorhug-client when plugging in a ColorHug with USB IDs 04D8:F8DA:

Package: colorhug-client
Modaliases: colorhug-client(usb:v04D8pF8DAd*)
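
As mentioned, the lookup code itself is short. Here is a rough Python sketch; the file name modaliases and the exact parsing are assumptions based on the format shown above:

#!/usr/bin/python3
# Rough sketch: print the packages from a 'modaliases' mapping file
# whose globs match the modalias value given on the command line.
import fnmatch
import re
import sys

def parse_modaliases(path):
    package = None
    with open(path) as f:
        for line in f:
            if line.startswith('Package:'):
                package = line.split(':', 1)[1].strip()
            elif line.startswith('Modaliases:'):
                # Pull the comma separated globs out of the
                # 'module(glob, glob, ...)' notation.
                for group in re.findall(r'\(([^)]*)\)', line):
                    for glob in group.split(','):
                        yield package, glob.strip()

modalias = sys.argv[1]
for package, glob in parse_modaliases('modaliases'):
    if fnmatch.fnmatchcase(modalias, glob):
        print(package)

Run with the ColorHug modalias value from the last example, it should print colorhug-client.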

I believe the format is compatible with the format of the Packages file in the Debian archive. Ubuntu already uses their Packages file to store their mappings from packages to hardware.

By adding an XB-Modaliases: header in debian/control, any .deb can announce the hardware it supports in a way my prototype understands. This allows those publishing packages in an APT source outside the Debian archive, as well as those backporting packages, to make sure the hardware mappings are included in the package meta information. I've tested such a header in the pymissile package, and its modalias mapping is working as it should with my prototype. It even made it to Ubuntu Raring.

To test if it was possible to look up supported hardware using only the shell tools available in the Debian installer, I wrote a shell implementation of the lookup code. The idea is to create files for each modalias and let the shell do the matching. Please check out and try the hw-support-lookup shell script. It runs without any extra dependencies and fetches the hardware mappings from the Debian archive and the subversion repository where I currently work on my prototype.

When I use it on a machine with a Yubikey inserted, it suggests installing yubikey-personalization:

% ./hw-support-lookup
yubikey-personalization
%

When I run it on my Thinkpad X40 with a PCMCIA/CardBus slot, it proposes installing the pcmciautils package:

% ./hw-support-lookup
pcmciautils
%

If you know of any hardware-package mapping that should be added to my database, please tell me about it.

It could be possible to generate several of the mappings between packages and hardware. One source would be to look at packages with kernel modules, ie packages with *.ko files in /lib/modules/, and extract their modalias information. Another would be to look at packages with udev rules, ie packages with files in /lib/udev/rules.d/, and extract their vendor/model information to generate a modalias matching rule. I have not tested either of these to see if they work.

If you want to help implement a system to let us propose what packages to install when new hardware is plugged into a Debian machine, please send me an email or talk to me on #debian-devel.

14th January 2013

While looking into how to look up Debian packages based on hardware information, to find the packages that support a given piece of hardware, I refreshed my memory regarding modalias values, and decided to document the details. Here are my findings so far, also available in the Debian Edu subversion repository:

Modalias decoded

This document tries to explain what the different types of modalias values stand for. It is in part based on information from <URL: https://wiki.archlinux.org/index.php/Modalias >, <URL: http://unix.stackexchange.com/questions/26132/how-to-assign-usb-driver-to-device >, <URL: http://code.metager.de/source/history/linux/stable/scripts/mod/file2alias.c > and <URL: http://cvs.savannah.gnu.org/viewvc/dmidecode/dmidecode.c?root=dmidecode&view=markup >.

The modalias entries for a given Linux machine can be found using this shell script:

find /sys -name modalias -print0 | xargs -0 cat | sort -u

The supported modalias globs for a given kernel module can be found using modinfo:

% /sbin/modinfo psmouse | grep alias:
alias:          serio:ty05pr*id*ex*
alias:          serio:ty01pr*id*ex*
%

PCI subtype

A typical PCI entry can look like this. This is an Intel Host Bridge memory controller:

pci:v00008086d00002770sv00001028sd000001ADbc06sc00i00

This represents these values:

 v   00008086  (vendor)
 d   00002770  (device)
 sv  00001028  (subvendor)
 sd  000001AD  (subdevice)
 bc  06        (bus class)
 sc  00        (bus subclass)
 i   00        (interface)

The vendor/device values are the same values output by 'lspci -n' as 8086:2770. The bus class/subclass is also shown by lspci as 0600. The 0600 class is a host bridge. Other useful bus values are 0300 (VGA compatible card) and 0200 (Ethernet controller).

I am not sure how to figure out the interface value, nor what it means.
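
For taking such strings apart programmatically, a regular expression built from the field layout above can do the job. A small sketch:

#!/usr/bin/python3
# Sketch: split a PCI modalias into its named fields using a regular
# expression based on the field layout described above.
import re

pci_re = re.compile(
    r'pci:v(?P<vendor>[0-9A-F]*)d(?P<device>[0-9A-F]*)'
    r'sv(?P<subvendor>[0-9A-F]*)sd(?P<subdevice>[0-9A-F]*)'
    r'bc(?P<bus_class>[0-9A-F]*)sc(?P<bus_subclass>[0-9A-F]*)'
    r'i(?P<interface>[0-9A-F]*)')

alias = 'pci:v00008086d00002770sv00001028sd000001ADbc06sc00i00'
for field, value in pci_re.match(alias).groupdict().items():
    print('%-12s %s' % (field, value))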

USB subtype

Some typical USB entries can look like this. This is an internal USB hub in a laptop:

usb:v1D6Bp0001d0206dc09dsc00dp00ic09isc00ip00

Here are the values included in this alias:

 v    1D6B  (device vendor)
 p    0001  (device product)
 d    0206  (bcddevice)
 dc     09  (device class)
 dsc    00  (device subclass)
 dp     00  (device protocol)
 ic     09  (interface class)
 isc    00  (interface subclass)
 ip     00  (interface protocol)

The 0900 device class/subclass means hub. Sometimes the relevant class is in the interface class section. For a simple USB web camera, these alias entries show up:

usb:v0AC8p3420d5000dcEFdsc02dp01ic01isc01ip00
usb:v0AC8p3420d5000dcEFdsc02dp01ic01isc02ip00
usb:v0AC8p3420d5000dcEFdsc02dp01ic0Eisc01ip00
usb:v0AC8p3420d5000dcEFdsc02dp01ic0Eisc02ip00

Interface class 0E01 is video control, 0E02 is video streaming (aka camera), 0101 is an audio control device and 0102 is audio streaming (aka microphone). Thus this is a camera with a microphone included.

ACPI subtype

The ACPI type is used for a lot of non-PCI/USB hardware. This is an IR receiver in a Thinkpad X40:

acpi:IBM0071:PNP0511:

The values between the colons are IDs.

DMI subtype

The DMI table contains lots of information about the computer case and model. This is an entry for an IBM Thinkpad X40, fetched from /sys/devices/virtual/dmi/id/modalias:

dmi:bvnIBM:bvr1UETB6WW(1.66):bd06/15/2005:svnIBM:pn2371H4G:pvrThinkPadX40:rvnIBM:rn2371H4G:rvrNotAvailable:cvnIBM:ct10:cvrNotAvailable:

The values present are

 bvn  IBM            (BIOS vendor)
 bvr  1UETB6WW(1.66) (BIOS version)
 bd   06/15/2005     (BIOS date)
 svn  IBM            (system vendor)
 pn   2371H4G        (product name)
 pvr  ThinkPadX40    (product version)
 rvn  IBM            (board vendor)
 rn   2371H4G        (board name)
 rvr  NotAvailable   (board version)
 cvn  IBM            (chassis vendor)
 ct   10             (chassis type)
 cvr  NotAvailable   (chassis version)

The chassis type 10 is Notebook. Other interesting values can be found in the dmidecode source:

  3 Desktop
  4 Low Profile Desktop
  5 Pizza Box
  6 Mini Tower
  7 Tower
  8 Portable
  9 Laptop
 10 Notebook
 11 Hand Held
 12 Docking Station
 13 All In One
 14 Sub Notebook
 15 Space-saving
 16 Lunch Box
 17 Main Server Chassis
 18 Expansion Chassis
 19 Sub Chassis
 20 Bus Expansion Chassis
 21 Peripheral Chassis
 22 RAID Chassis
 23 Rack Mount Chassis
 24 Sealed-case PC
 25 Multi-system
 26 CompactPCI
 27 AdvancedTCA
 28 Blade
 29 Blade Enclosing

The chassis type values are not always accurately set in the DMI table. For example, my home server is a tower, but the DMI modalias claims it is a desktop.

SerIO subtype

This type is used for PS/2 mouse plugs. One example, from my test machine:

serio:ty01pr00id00ex00

The values present are

  ty  01  (type)
  pr  00  (protocol)
  id  00  (id)
  ex  00  (extra)

This type is supported by the psmouse driver. I am not sure what the valid values are.

Other subtypes

There are heaps of other modalias subtypes according to file2alias.c. Here is the rest of the list from that source: amba, ap, bcma, ccw, css, eisa, hid, i2c, ieee1394, input, ipack, isapnp, mdio, of, parisc, pcmcia, platform, scsi, sdio, spi, ssb, vio, virtio, vmbus, x86cpu and zorro. I did not spend time documenting all of these, as they do not seem relevant for my intended use of mapping hardware to packages when new stuff is inserted at run time.

Looking up kernel modules using modalias values

To check which kernel modules provide support for a given modalias, one can use the following shell script:

  for id in $(find /sys -name modalias -print0 | xargs -0 cat | sort -u); do \
    echo "$id" ; \
    /sbin/modprobe --show-depends "$id"|sed 's/^/  /' ; \
  done

The output can look like this (only the first few entries as the list is very long on my test machine):

  acpi:ACPI0003:
    insmod /lib/modules/2.6.32-5-686/kernel/drivers/acpi/ac.ko 
  acpi:device:
  FATAL: Module acpi:device: not found.
  acpi:IBM0068:
    insmod /lib/modules/2.6.32-5-686/kernel/drivers/char/nvram.ko 
    insmod /lib/modules/2.6.32-5-686/kernel/drivers/leds/led-class.ko 
    insmod /lib/modules/2.6.32-5-686/kernel/net/rfkill/rfkill.ko 
    insmod /lib/modules/2.6.32-5-686/kernel/drivers/platform/x86/thinkpad_acpi.ko 
  acpi:IBM0071:PNP0511:
    insmod /lib/modules/2.6.32-5-686/kernel/lib/crc-ccitt.ko 
    insmod /lib/modules/2.6.32-5-686/kernel/net/irda/irda.ko 
    insmod /lib/modules/2.6.32-5-686/kernel/drivers/net/irda/nsc-ircc.ko 
  [...]

If you want to help implement a system to let us propose what packages to install when new hardware is plugged into a Debian machine, please send me an email or talk to me on #debian-devel.

Update 2013-01-15: Rewrote "cat $(find ...)" as "find ... -print0 | xargs -0 cat" to make sure it handles directories in /sys/ with spaces in them.

10th January 2013

As part of my investigation into how to improve the support in Debian for hardware dongles, I dug up my old Marks and Spencer USB Rocket Launcher and updated the Debian package pymissile to make sure udev fixes the device permissions when it is plugged in. I also added a "Modaliases" header to test it in the Debian archive and hopefully make the package be proposed by jockey in Ubuntu when a user plugs in his rocket launcher. In the process I moved the source to a git repository under collab-maint, to make it easier for any DD to contribute. Upstream is not very active, but the software still works for me even after five years of relative silence. The new git repository is not listed in the uploaded package yet, because I want to test the other changes a bit more before I upload the new version. If you want to check out the new version with a .desktop file included, visit the gitweb view or use "git clone git://anonscm.debian.org/collab-maint/pymissile.git".

9th January 2013

One thing that annoys me with Debian and Linux distributions in general is that there is a great package management system with the ability to automatically install software packages by downloading them from the distribution mirrors, but no way to get it to automatically install the packages I need to use the hardware I plug into my machine, even when the needed package is easily available from the distribution. When I plug in a LEGO Mindstorms NXT, it could suggest to automatically install the python-nxt, nbc and t2n packages I need to talk to it. When I plug in a Yubikey, it could propose the yubikey-personalization package. The information required to do this is available, but no-one has pulled all the pieces together.

Some years ago, I proposed to use the discover subsystem to implement this. The idea is fairly simple:

  • Add a desktop entry in /usr/share/autostart/ pointing to a program that starts when a user logs in.
  • Set this program up to listen for kernel events emitted when new hardware is inserted into the computer.
  • When new hardware is inserted, look up the hardware ID in a database mapping to packages, and take note of any non-installed packages.
  • Show a message to the user proposing to install the discovered package, and make it easy to install it.

I am not sure what the best way to implement this is, but my initial idea was to use dbus events to discover new hardware, the discover database to find packages and PackageKit to install packages.

Yesterday, I found time to try to implement this idea, and the draft package is now checked into the Debian Edu subversion repository. In the process, I updated the discover-data package to map the USB IDs of LEGO Mindstorms and Yubikey devices to the relevant packages in Debian, and uploaded a new version 2.2013.01.09 to unstable. I also discovered that the current discover package in Debian no longer discovers any USB devices, because /proc/bus/usb/devices is no longer present. I ported it to use libusb as a fall-back option to get it working. The fixed package version 2.1.2-6 is now in experimental (I didn't upload it to unstable because of the freeze).

With this prototype in place, I can insert my Yubikey, and get this desktop notification to show up (only once, the first time it is inserted):

For this prototype to be really useful, some way to automatically install the proposed packages by pressing the "Please install program(s)" button should be implemented.

If this idea seems useful to you, and you want to help make it happen, please help me update the discover-data database with mappings from hardware to Debian packages. Check if 'discover-pkginstall -l' lists the package you would like to have installed when a given hardware device is inserted into your computer, and report bugs using reportbug if it doesn't. Or, if you know of a better way to provide such a mapping, please let me know.

This prototype needs more work, and there are several questions that should be considered before it is ready for production use. Is dbus the correct way to detect new hardware? At the moment I look for HAL dbus events on the system bus, because those are the events I could see on my Debian Squeeze KDE desktop. Are there better events to use? How should the user be notified? Is the desktop notification mechanism the best option, or should the background daemon raise a popup instead? How should packages be installed? When should they not be installed?

If you want to help get such a feature implemented in Debian, please send me an email. :)

2nd January 2013

During Christmas, I have worked a bit on the Debian support for LEGO Mindstorms NXT. My son and I have played a bit with my NXT set, and I discovered I had to build all the tools myself because none were already in Debian Squeeze. If Debian support for LEGO is something you care about, please join me on the IRC channel #debian-lego (server irc.debian.org). There is a lot that could be done to improve the Debian support for LEGO designers. For example, both CAD software and Mindstorms compilers are missing. :)

Update 2013-01-03: A project page including links to Lego related packages is now available.

Tags: debian, english, lego, robot.
28th December 2012

I was happy to discover a few days ago that the Skolelinux / Debian Edu project also this year received a Christmas present from Another Agency in Trondheim. NOK 1000,- showed up on our donation account December 24th. I want to express our thanks for this very welcome present. As the Debian Edu / Skolelinux project is very short on funding these days, and thus lacks the money to do regular developer gatherings, this donation was most welcome. One developer gathering costs around NOK 15 000,-, so we need quite a lot more to keep up the development pace we want. Thus, I hope their example this year is followed by many others. :)

The public list of donors can be found on the donation page for the project, which also contains instructions if you want to donate to the project.

25th December 2012

Let me start by wishing you all a merry Christmas and a happy new year! I hope next year will prove to be a good year.

Bitcoin, the digital decentralised "currency" that allows people to transfer bitcoins between each other with minimal overhead, is a very interesting experiment. And as I wrote a few days ago, the bitcoin situation in Debian is about to improve a bit. The new Debian source package (version 0.7.2-2) was uploaded yesterday, and is waiting in the NEW queue for one of the ftpmasters to approve the new bitcoin-qt package name.

And thanks to the great work of Jonas and the rest of the bitcoin team in Debian, you can easily test the package in Debian Squeeze using the following steps to get a set of working packages:

git clone git://git.debian.org/git/collab-maint/bitcoin
cd bitcoin
DEB_MAINTAINER_MODE=1 DEB_BUILD_OPTIONS=noupnp fakeroot debian/rules clean
DEB_BUILD_OPTIONS=noupnp git-buildpackage --git-ignore-new

You might have to install some build dependencies as well. The list of commands should give you two packages, bitcoind and bitcoin-qt, ready for use in a Squeeze environment. Note that the client will download the complete set of bitcoin "blocks", which needs around 5.6 GiB of disk space on my machine at the moment. Make sure your ~/.bitcoin/ directory has lots of spare room if you want to download all the blocks. The client will warn if the disk is getting full, so there is not really a problem if you have too little room, but you will not be able to get all the features out of the client.

As usual, if you use bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

21st December 2012

It has been a while since I wrote about bitcoin, the decentralised peer-to-peer based crypto-currency, and the reason is simply that I have been busy elsewhere. But two days ago, I started looking at the state of bitcoin in Debian again to try to recover my old bitcoin wallet. The package is now maintained by a team of people, and the grunt work had already been done by this team. We owe them all a huge thank you. :) But I was sad to discover that the bitcoin client is missing from Wheezy. It is only available in Sid (and as an outdated client from backports). The client had several RC bugs registered in the BTS blocking it from entering testing. To try to help the team and improve the situation, I spent some time providing patches and triaging the bug reports. I also had a look at the bitcoin package available from Matt Corallo in a PPA for Ubuntu, and moved the useful pieces from that version into the Debian package.

After checking with the main package maintainer Jonas Smedegaard on IRC, I pushed several patches into the collab-maint git repository to improve the package. It now contains fixes for the RC issues (not from me, but fixed by Scott Howard), build rules for a Qt GUI client package, konqueror support for the bitcoin: URI and bash completion setup. As I work on Debian Squeeze, I also created a patch to backport the latest version. Jonas is going to look at it and try to integrate it into the git repository before uploading a new version to unstable.

I would very much like bitcoin to succeed, to get rid of the centralised control currently exercised in the monetary system. I find it completely unacceptable that the US government is collecting transaction data for almost all international money transfers (most are done in USD, with the transaction logs shipped to the spooks), and that the major credit card companies can block legal money transactions to Wikileaks. But for bitcoin to succeed, more people need to use bitcoins, and more people need to accept bitcoins when they sell products and services. Improving the bitcoin support in Debian is a small step in the right direction, but not enough. Unfortunately the user experience when browsing the web and wanting to pay with bitcoin is still not very good. The bitcoin: URI is a step in the right direction, but needs to work in most or every browser in use. Also, the bitcoin-qt client is too heavy to fire up just to do a quick transaction. I believe there are other clients available, but have not tested them.

My experiment with bitcoins showed that at least some of my readers use bitcoin. I have received 20.15 BTC so far on the address I provided in my blog two years ago, as can be seen on the blockexplorer service. Thank you everyone for your donations. The blockexplorer service demonstrates quite well that bitcoin is not quite anonymous and untracked. :) I wonder if the number of users has gone up since then. If you use bitcoin and want to show your support of my activity, please send Bitcoin donations to the same address as last time, 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18th December 2012

A few days ago I came across a blog post from Joey Hess describing ledger and hledger, a text based system for double-entry accounting. I found it interesting, as I am involved with several organisations where accounting is an issue, and I have not really become too friendly with the different web based systems we use. I find it hard to locate what I am looking for in the menus, and even harder to get sensible data out of the systems. Ledger seems different. The accounting data is kept in text files that can be stored in a version control system, and there are at least five different implementations able to read the format. An example entry looks like this, and is simple enough that it will be trivial to generate entries from CSV files fetched from the bank:

2004-05-27 Book Store
      Expenses:Books                 $20.00
      Liabilities:Visa
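
As an illustration of how simple such generation could be, here is a sketch converting a hypothetical semicolon separated bank export with date, description and amount columns into ledger entries. The column layout and account names are made up for the example:

#!/usr/bin/python3
# Sketch: convert CSV rows like '2004-05-27;Book Store;-20.00' into
# ledger entries.  The column layout and account names are
# assumptions for the example.
import csv
import sys

with open(sys.argv[1]) as f:
    for date, description, amount in csv.reader(f, delimiter=';'):
        print('%s %s' % (date, description))
        print('      Assets:checking             %s' % amount)
        print('      Expenses:unknown')
        print()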

The concept seemed interesting enough for me to check it out and look for others using it. I found blog posts from Christine Spang, Pete Keen, Andrew Cantino and Ronald Ip describing how they use it, as well as a post from Bradley M. Kuhn at the Software Freedom Conservancy. All seemed like good recommendations fitting my needs.

The ledger package is available in Debian Squeeze, while the hledger package is only available in Debian Sid. As I use Squeeze, ledger seemed the best choice to get started with.

To get some real data to test on, I wrote a web scraper for LODO, the accounting system used by the NUUG association, and started to play with the data set. I'm not really deeply into accounting, but I am able to get a simple balance and accounting status, for example using the "ledger balance" command. But I will have to gather more experience before I know if the ledger way is a good fit for the organisations I am involved in.

6th December 2012

Where I work at the University of Oslo, we use the Cerebrum user administration system to maintain users, groups, DNS, DHCP, etc. I've known since the system was written that the server provides an XML-RPC API, but I have never spent time figuring out how to use it, as we always use the bofh command line client at work. Until today. I want to script the updating of DNS and DHCP to make it easier to set up virtual machines. Here are a few notes on how to use the API with Python.

I started by looking at the source of the Java bofh client, to figure out how it connected to the API server. I also googled for python examples on how to use XML-RPC, and found a simple example in the XML-RPC howto.

This simple example code shows how to connect, how to get the list of commands (as a JSON dump), and how to get information about the user currently logged in:

#!/usr/bin/env python
import getpass
import xmlrpclib

server_url = 'https://cerebrum-uio.uio.no:8000'
username = getpass.getuser()
password = getpass.getpass()
server = xmlrpclib.Server(server_url)
sessionid = server.login(username, password)
#print server.get_commands(sessionid)
print server.run_command(sessionid, "user_info", username)
result = server.logout(sessionid)
print result

Armed with this knowledge I can now move forward and script the DNS and DHCP updates I wanted to do.

Tags: english, sysadmin.
17th November 2012

While working on a Norwegian translation of Free Culture by Lawrence Lessig (76% done), which covers the problems with today's copyright law and how it stifles creativity, an idea occurred to me. The idea is to get the tax office to help make more works enter the public domain, and also to help make it easier to clear the rights for using copyrighted works.

I mentioned this idea briefly during yesterday's presentation by John Perry Barlow, and concluded that it was best to put it in writing for a wider audience. The idea is not really based on the argument that copyrighted works are "intellectual property"; the core requirement is that copyrighted works have value for the copyright holder, and the tax office likes to collect its share of any value controlled by the citizens of a country. I'm sharing the idea here to let others consider it and perhaps shoot it down with a fresh set of arguments.

Most valuables are taxed by the government. At least here in Norway, the amount of money you have, the value of your land property, the value of your house, the value of your car, the value of your stocks and other valuables are all added together. If the taxable value of these exceeds your debt, you have to pay the tax office taxes on these values. And copyrighted works have value. They have value for the rights holder, who can earn money selling access to the work. But they are not included in the tax calculations. Why not?

If the government wants to tax copyrighted works, it would want to maintain a database of all the copyrighted works and who the rights holders are for a given work, to be able to associate the work's value with the right citizen or company for tax purposes. If such a database existed, it would become a lot easier to find out who to talk to when clearing permissions to use a copyrighted work, which is a very hard operation under today's copyright law. To ensure that copyright holders keep the database up to date, being listed in the database with the correct rights holder would have to become a requirement for collecting money for granting access to a copyrighted work.

If copyright causes copyright holders to pay more taxes, they will have a small incentive to "disown" their copyright and let the work enter the public domain. For works with several rights holders, one of the rights holders could state (and get it registered in the database) that she does not need to be consulted when clearing rights to use the work in question, and thus will not get any income from that work. Stating this would have to be impossible to revert, and would stop the tax office from adding the value of that work to the given citizen's tax calculation. I assume the copyright law would stay the same, allowing creators to pick a license of their choosing, and also allowing them to put their work directly into the public domain. The existence of such a database would make it even easier to clear rights, and if the rights holders listed in the database are taxed, this system would increase the amount of works that enter the public domain.

The effect would be that the tax office helps make it easier to get the rights to use the works that have not yet entered the public domain, and helps get more works into the public domain.

Why has such taxing not happened yet? I am sure the tax office would like to tax copyrighted work values if it could.

14th November 2012

Here is another interview with one of the people in the Debian Edu and Skolelinux community. I am running short on people willing to be interviewed, so if you know about someone I should interview, please send me an email. After asking for many months, I finally managed to lure another one of the people behind the German "IT-Zukunft Schule" project out from maternity leave for an interview. Give a warm welcome to Angela Fuß. :)

Who are you, and how do you spend your days?

I am a 39-year-old woman living in the very north of Germany near Denmark. I live in a patchwork family with "my man" Mike Gabriel, my two daughters, Mikes daughter and Mikes and my rather newborn son.

At the moment - because of our little baby - I am spending most of the day by being a caring and organising mom for all the kids. Besides that I am really involved into and occupied with several inner growth processes: New born souls always bring the whole familiar system into movement and that needs time and focus ;-). We are also in the middle of buying a house and moving to it.

In 2013 I will work again in my job in a German foundation for nature conservation. I am doing public relation work there. Besides that - and that is the connection to Skolelinux / Debian Edu - I am working in our own school project "IT-Zukunft Schule" in North Germany. I am responsible for the quality assurance, the customer relationship management and the communication processes in the project.

Since 2001 I constantly have been training myself in communication and leadership. Besides that I am a forester, a landscaping gardener and a yoga teacher.

How did you get in contact with the Skolelinux / Debian Edu project?

I fell in love with Mike ;-).

Very soon after getting to know him I was completely enrolled into Free Software. At this time Mike did IT-services for one newly founded school in Kiel. Other schools in Kiel needed concepts for their IT environment. Often when Mike came home from working at the newly founded school I found myself listening to his complaints about several points where the communication with the schools head or the teachers did not work. So we were clear that he would not work for one more school if we did not set up a structure for communication between him, the schools head, the teachers, the students and the parents.

Together with our friend and hardware supplier Andreas Buchholz we started to get an overview of free software solutions suitable for schools. One day before Christmas 2010 Mike and I had a date with Kurt Gramlich in Gütersloh. As Kurt and I are really interested in building networks of people and in being in communication we dived into Skolelinux and brought it to the first grammar schools in Northern Germany.

For information about our school project you can read the interview with Mike Gabriel.

What do you see as the advantages of Skolelinux / Debian Edu?

First I have to say: I cannot answer this question technically. My answer comes rather from a social point of view.

The biggest advantage of Skolelinux / Debian Edu I see is the large and strong international community of Debian Developers in the background which is very alive and connected over mailinglists, blogs and meetings. My constant feeling for the Debian Community is: If something does not work they will somehow fix it. All is well ;-). This is of course a user experience. What I also get as a big advantage of Skolelinux / Debian Edu is that everybody who uses it and works with it can also contribute to it - that includes students, teachers, parents...

What do you see as the disadvantages of Skolelinux / Debian Edu?

I will answer this question relating to the internal structure of Skolelinux / Debian Edu.

What I see as a major disadvantage is that there is a gap between the group of developers of Debian Edu and the people who do the marketing, that is, the people who bring Skolelinux to the schools. There is a lack of communication between these two groups, and I think that does not really work for Skolelinux / Debian Edu.

Further, I appreciate that Skolelinux / Debian Edu is known as a do-ocracy. Nevertheless I keep asking myself if at some points a democracy or some kind of hierarchical project structure would be good and helpful. I am also missing some kind of contact between the Skolelinux / Debian Edu communities in Europe or on an international level. I think it would be good if there was more sharing between the different countries using Skolelinux / Debian Edu.

Which free software do you use daily?

On my laptop I am still using Ubuntu 10.04 with a Gnome desktop. As applications I use OpenOffice.org, Gedit, Firefox, Pidgin, LaTeX and GnuCash. For mail I am using Horde. And I am really fond of my N900 running Maemo.

Which strategy do you believe is the right one to use to get schools to use free software?

I am really convinced that in our school project "IT-Zukunft Schule" we have developed (and keep developing) a great way to get schools to use Free Software. We have written a detailed concept for that so I cannot explain the whole thing here. But in a nutshell the strategy has three crucial pillars:

  • We really take time to find out what sort of stories, questions and concerns the school's head and the teachers have about using different kinds of IT, and we take time to enrol them into Free Software.
  • Our solution for schools is never just technical. At the centre are always the people who are going to use the software. From the very beginning of the planning for a school, we tell the school's head that they are paying us not only for a technical solution for their school, but also for leading all the communication processes needed. If they do not want that, we do not work with them, because we cannot guarantee the quality of our work then.
  • Another focus lies in the training of teachers and students in co-administrating the IT system at their school. They start getting in contact with the Skolelinux / Debian Edu community, and they get the offer to become more and more independent from us.
4th November 2012

Slashdot just ran a story about the European Central Bank (ECB) releasing a report (PDF) about virtual currencies and bitcoin. It is interesting to see how a member of the bitcoin community receives the report. As for the future, I suspect the central banks and the governments will outlaw bitcoin if it gains any popularity, to avoid competition. My thoughts go to the Wörgl experiment with negative inflation on cash, which was such a success that it was terminated by the Austrian National Bank in 1933. A successful alternative would be a threat to the current money system and would cause powerful forces to work against it.

While checking out the current status of bitcoin, I also discovered that the community already seems to have experienced its first pyramid game / Ponzi scheme. Not very surprising, given how members of "small" communities tend to trust each other. I guess enterprising crooks will try again and again, as they do anywhere wealth is available.

26th October 2012

I work at the University of Oslo looking after the computers, mostly on the unix side, but in general all over the place. I am also a member (and currently leader) of the NUUG association, which in turn makes me a member of USENIX. NUUG is a member organisation for those of us in Norway interested in free software, open standards and unix-like operating systems, and USENIX is a US-based member organisation with similar goals. Thanks to these memberships, I get all issues of the great USENIX magazine ;login: in the mail several times a year. The magazine is great, and I read most of it every time.

In the last issue of the USENIX magazine ;login:, there is an article by Stuart Kendrick from Fred Hutchinson Cancer Research Center titled "What Takes Us Down" (a longer version is also available from his own site), where he reports what he found when he processed the outage reports (both planned and unplanned) from the last twelve years and classified them according to cause, time of day, etc. The article is a good read to get some empirical data on what kind of problems affect a data centre, but what really inspired me was the kind of reporting they had put in place since 2000.

The centre set up a mailing list, and started to send fairly standardised messages to this list when an outage was planned or had already occurred, to announce the plan and get feedback on the assumptions about scope and user impact. Here are the two examples from the article. First the unplanned outage:

Subject:     Exchange 2003 Cluster Issues
Severity:    Critical (Unplanned)
Start: 	     Monday, May 7, 2012, 11:58
End: 	     Monday, May 7, 2012, 12:38
Duration:    40 minutes
Scope:	     Exchange 2003
Description: The HTTPS service on the Exchange cluster crashed, triggering
             a cluster failover.

User Impact: During this period, all Exchange users were unable to
             access e-mail. Zimbra users were unaffected.
Technician:  [xxx]
Next the planned outage:
Subject:     H Building Switch Upgrades
Severity:    Major (Planned)
Start:	     Saturday, June 16, 2012, 06:00
End:	     Saturday, June 16, 2012, 16:00
Duration:    10 hours
Scope:	     H2 Transport
Description: Currently, Catalyst 4006s provide 10/100 Ethernet to end-
	     stations. We will replace these with newer Catalyst
	     4510s.
User Impact: All users on H2 will be isolated from the network during
     	     this work. Afterward, they will have gigabit
     	     connectivity.
Technician:  [xxx]

He notes in his article that the date formats and other fields have been a bit too free-form to make it easy to automatically process them into a database for further analysis. I would have used ISO 8601 dates myself to make the messages easier to process (in other words, I would ask people to write '2012-06-16 06:00 +0000' instead of the start time format listed above). There are also other issues with the format that could be improved; read the article for the details.
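
To illustrate the kind of automatic processing he is after, here is a minimal Python sketch that parses the "Field: value" layout from the examples above into a dictionary. It assumes continuation lines are indented, as in the examples; a real importer would also have to normalise the dates:

#!/usr/bin/python3
# Minimal sketch: parse an outage report in the "Field: value" format
# shown above into a dictionary.  Continuation lines are assumed to be
# indented and to contain no colon-terminated field name.
import re

def parse_outage(text):
    fields = {}
    key = None
    for line in text.splitlines():
        m = re.match(r'([A-Za-z ]+):\s*(.*)', line)
        if m:
            key = m.group(1).strip()
            fields[key] = m.group(2).strip()
        elif key and line.strip():
            # Indented continuation of the previous field.
            fields[key] += ' ' + line.strip()
    return fields

report = """Subject:     Exchange 2003 Cluster Issues
Severity:    Critical (Unplanned)
Start:       Monday, May 7, 2012, 11:58
End:         Monday, May 7, 2012, 12:38"""
print(parse_outage(report)['Severity'])  # Critical (Unplanned)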

I find the idea of standardising outage messages such a good one that I would like to get it implemented here at the university too. We do register planned changes and outages in a calendar and report them to a mailing list, but we do not do so in a structured format, and there are no reports to the same location for unplanned outages. Perhaps something for other sites to consider too?

22nd October 2012

A blog post from Martin Bekkelund today tells the story of how Amazon erased the books from a customer's Kindle, locked the account and refused to tell the customer why. If a real book store did this to a customer, it would be called breaking into private property and theft. The story has spread around the net today. A bit more background information is available in Norwegian from digi.no. It is no surprise that digital restriction mechanisms (DRM) are used this way, as such abuse has been warned about since DRM was introduced many years back. And Amazon proved in 2009 that it was willing to break into customers' equipment and remove the books people had bought, when it removed the book 1984 by George Orwell from all the customers who had bought it. From the official comments, it even sounded like Amazon would never do that again. And here we are, three years later.

And though this action is against Norwegian regulations and law, it is in accordance with the terms of use as written by Amazon, and it is hard to hold Amazon accountable to Norwegian law. It is just yet another example of unacceptable terms of use on the web, and how they are used to remove customer rights.

Luckily for electronic books, there are alternatives without unacceptable terms. For example Project Gutenberg (about 40,000 books), Project Runeberg (1,652 books) and The Internet Archive (3,641,797 books) have heaps of books without DRM, which can be read by anyone and shared with anyone.

Update 2012-10-23: This story broke in the morning on Monday. In the evening, after the story had spread all across the Internet, Amazon restored the account of the user, as reported by digi.no and NRK. Apparently public pressure works. The story from Martin has seen several twitter messages per minute over the last 24 hours, which is quite a lot, and is still drawing a lot of attention. But even though the account is restored, the fundamental problem still exists. I recommend reading two opinions from Simon Phipps and Glyn Moody if you want to learn more about the fundamentals and more details about the original story.

18th October 2012

Civil liberties and privacy in the western world are going down the drain, and it is hard to fight against it. I try to do my best, but time is limited. I hope you do your best too. A few years ago I came across a marvellous drawing by Clay Bennett visualising some of what is going on.

«They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.» - Benjamin Franklin

Do you feel safe at the airport? I do not. Do you feel safe when you see a surveillance camera? I do not. Do you feel safe when you leave electronic traces of your behaviour and opinions? I do not. I just remember the Panopticon, and cannot help thinking that we are slowly transforming our society into a huge Panopticon of our own.

12th October 2012

Thanks to a blog post by Eddy Petrișor, I became aware of yet another "alternative medicine" company using legal intimidation tactics to scare off critics. According to the originating blog post about the detox "cure" ColonHelp and the actions of its producer Zenyth Pharmaceuticals, the producer is suing WordPress to get rid of the critical information. To check if the story was for real, I contacted Automattic, the company behind wordpress.com, and their reply was "We can confirm that Zenyth is seeking a court order against WordPress / Automattic. However, we don't believe the Terms of Service have been violated in this matter".

The story seems to be simply that a blogger checked the scientific foundation for a popular health product in Romania, ColonHelp, and reported that there was no reason at all to believe it improved the health of its users. This caused the company behind the product, Zenyth Pharmaceuticals, to use legal intimidation to try to silence the critic, instead of presenting its views and the scientific foundation to argue its side.

This is the usual story, and the Zenyth Pharmaceuticals company deserves to have everyone know how it failed to act properly. Let's hope the Streisand effect can make it rethink its strategy.

What is the harm, you might think. I suggest you take a look at a list of victims of detoxification.

Tags: english, skepsis.
3rd October 2012

I just read the blog post from Tim Retout about the computer science book collection available in his local library, and just wanted to share my comment on his theory about computer books becoming obsolete so soon. That is part of the reason why the selection is so sad in almost any local library (it is in mine too), but I believe the major contributing factor is that the people buying books for the library have no way to tell a good, future computer classic from trash. And they need to know which ones will become classics, as they would normally buy recently published books.

During my university years, I worked for a while at the university library, and even there the person in charge of buying computer related books (and in fact any natural science related book) did not know enough about computers to make a good educated guess. Once, just before Christmas, they had some leftover money in the book budget and I was asked if I could pick out a lot of computer books in the university book store, for the library to buy for its collection. I had a great time picking all the books I dreamt of buying and reading, and the books I knew were classics (like most of the Stevens collection). I picked several of the generic O'Reilly books (i.e. those documenting protocols, formats and systems, not specific versions of products) and stayed away from the 'teach yourself X in N days' class. I had a great time, and probably picked out more than a hundred books for the library that evening.

The sad fact is that there is no way an overworked librarian is going to know that, for example, The Practice of Programming is a must-have in any computer library, and they will most of the time end up picking the wrong books to buy. Perhaps you can help your local library make better choices by giving them suggestions for books to get? I know they would love to hear from you, even if their budget might block them from getting your favourite book right away.

Tags: english.
23rd September 2012

Since this summer, I have worked in my spare time on a Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig. The reason is that this book is a great primer on what problems exist in the current copyright laws, and I want it to be available also for those that are reluctant to read an English book. When I started, I called for volunteers to help me, but too few have volunteered so far, and progress is a bit slow. Anyway, today I broke the 70 percent mark for the first rough translation. At the moment, less than 700 strings (paragraphs, index terms, titles) are left to translate. With my current progress of 10-20 strings per day, it will take a while to complete the translation. This graph shows the updated progress:

Progress has slowed down lately due to family and work commitments. If you want to help, please get in touch, and check out the project files currently available from github.

If you are curious what the translated book currently looks like, the updated PDF and EPUB are published on github. The HTML version is published as well, but github hands it out with MIME type text/plain, confusing browsers, so I saw no point in linking to that version.

17th September 2012

After a long break in my row of interviews with people in the Debian Edu and Skolelinux community, I finally found time to wrap up another. This time it is Giorgio Pioda, who showed up on the mailing list at the start of this year, asking questions and inspiring us to improve the first-time administrator's experience with Skolelinux. :) The interview was conducted in May, but I only found time to publish it now.

Who are you, and how do you spend your days?

I have a PhD in chemistry, but for several years I have worked as a teacher in secondary (15-18 year old students) and tertiary (a kind of "light" university) schools. Five years ago I started to manage a Learning Management Service server, and slowly I got more and more involved with IT. Three years ago the graduating schools moved completely to Linux and I became the head of IT for this. The experience collected with chemistry lab computers (for example NMR analysis of protein folding) and in the IT courses during university was sufficient to start. Self-training is in any case very important.

I live in the Italian speaking part of Switzerland, and the SPSE school (secondary) is a very special sports school for young people who are trying to become sports professionals (for all sports; we have dozens of disciplines represented), and we are recognised by the Olympic Swiss Organisation.

How did you get in contact with the Skolelinux/Debian Edu project?

Looking for a Linux Primary Domain Controller (PDC), I found it several years ago already. But since the system was still not Kerberized, and since our school relies strongly on laptops, I didn't use it. I plan to introduce it in the near future, probably for the next school year, since the squeeze release closed this security hole.

What do you see as the advantages of Skolelinux/Debian Edu?

Many. First of all, there is a strong and living community that is very generous with help and hints. Chat help is crucial, together with the mailing list. Second, with Skolelinux you get an already well engineered platform and you don't have to build up your PDC and your clients from GNU/scratch; I've already done this once and I can tell you, it is hard. Third, since Skolelinux is a standard platform, it is way easier to educate other IT people, and even if the head of IT is sick someone else can pick up the task without too much hassle.

What do you see as the disadvantages of Skolelinux/Debian Edu?

The only real problem I see is that it is a little too inflexible at the client level. Debian stable is rock solid and desirable, but there are many reasons that can force another choice. For example the need for new drivers for new PCs, or the need for a specific OS for devices that have software packages only for another specific distribution (I have such a case with whiteboards that have only Ubuntu packages). Thus, I prepared the compatibility packages educlient and eduroaming, hoping not to need them ;-)

Which free software do you use daily?

I have a Debian Stable PDC at school (Kerberos, NIS, NFS) with mixed Debian and Ubuntu clients. If you think that this triad combination is exotic... well, I discovered just yesterday that Perceus has the same...

For myself I run Debian wheezy/sid, but this combination is good only if you have enough competence to fix stuff for yourself if something breaks. Daily I use TeXmacs, Gnumeric, a little bit of R statistics, KmPlot, and less frequently OpenOffice.org.

Which strategy do you believe is the right one to use to get schools to use free software?

I think that the only real argument school managers "hear" is cost reduction. They don't give much weight to quality and stability, simply because they are normally not open to change.

Students adapt very quickly to GNU/Linux (and for them, being able to switch between different OSes is a plus); teachers and managers don't.

We decided to move to Linux because students at our school have their own laptops and we have the responsibility to keep the laptops ready to use; we were really unsatisfied with Microsoft, since every Monday we had 20 machines to fix for viral infections... With Linux this has been reduced to zero, since people install almost only from official repositories. I think that our special needs brought us to Linux. Those who don't have such needs will hardly move to Linux.

15th September 2012

After the Opus codec made it into IETF as RFC 6716, I had a look to see if there is any activity in IETF to standardise a video codec too, and I was happy to discover that there is some activity in this area. A non-"working group" mailing list, video-codec, was created 2012-08-20. It is intended to discuss the topic and whether a formal working group should be formed.

I look forward to seeing how this plays out. There is already an email from someone in the MPEG group at ISO asking people to participate in the ISO group. Given how ISO failed with OOXML, and given that it so far (as far as I can remember) has only produced multimedia formats requiring royalty payments, I suspect joining the ISO group would be a complete waste of time, but I am not involved in any codec work and my opinion will not matter much.

If one of my readers is involved with codec work, I hope she will join this work to standardise a royalty free video codec within IETF.

12th September 2012

Yesterday, IETF announced the publication of RFC 6716, the Definition of the Opus Audio Codec, a low-latency, variable-bandwidth codec intended for VoIP, film and music. This is the first time, as far as I know, that IETF has standardized a multimedia codec. In RFC 3533, IETF standardized the Ogg container format, and it has proven to be a great royalty free container for audio, video and movies. I hope IETF will continue to standardize more royalty free codecs, after ISO and MPEG have proven incapable of securing everyone equal rights to publish multimedia content on the Internet.

IETF requires two interoperating independent implementations to ratify a standard, and has so far made sure to only standardize royalty free specifications. Both are key factors in allowing everyone (rich and poor) to compete on equal terms on the Internet.

Visit the Opus project page if you want to learn more about the solution.

7th September 2012

As I mentioned this summer, I created a Computer Science song book a few years ago, and today I finally found time to create a public Gitorious repository for the project.

If you want to help out, please clone the source and submit patches to the HTML version. To generate the PDF and PostScript versions, please use Prince XML, or let me know about a useful free software processor capable of creating a good looking PDF from the HTML.

Want to sing? You can still find the song book in HTML, PDF and PostScript formats at Petter's Computer Science Songbook.

23rd August 2012

I came across a great comment from Simon Phipps today about how Microsoft has been forced to open Office, and it made me remember and revisit the great site officeshots, which allows you to check out how different programs present the ODF file format. I recommend both to those of my readers interested in ODF. :)

Tags: english, standard.
17th August 2012

In my spare time, I currently work on a Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with the copyright law that I can give to my parents and others who are reluctant to read an English book. It is a marvellous set of examples of how the ever-expanding copyright regulations hurt culture and society. When the translation is done, I hope to find funding to print and ship a copy to all the members of the Norwegian parliament before they sit down to debate the latest revisions to the Norwegian copyright law. This summer I called for volunteers to help me, and I have been able to secure the valuable contribution of at least one other Norwegian.

Two days ago, we finally broke the 50% mark: more than 50% of the strings (normally paragraphs, but titles and index entries are also counted) are now translated. All parts from the beginning up to and including chapter four are translated, as are chapters six and seven and the conclusion. I created a graph to show the progress:

The number of strings to translate increases as I insert the index entries into the docbook. They were missing from the docbook version I initially started with. There are still quite a few index entries missing, but all those starting with A, B, O, Z and Y are done. I currently focus on completing the index entries, to get a complete English version of the docbook source.

There is still a need for translators and people with docbook knowledge, both to get a good looking book (I still struggle with dblatex, xmlto and docbook-xsl) and to do the draft translation and proof reading. And I would like the figures to be redrawn as SVGs to make it easy to translate them. Any SVG master around? I am sure there are some legal terms that are unfamiliar to me. If you want to help, please get in touch, and check out the project files currently available from github.

If you are curious what the translated book currently looks like, the updated PDF and EPUB are published on github. The HTML version is published as well, but github hands it out with MIME type text/plain, confusing browsers, so I saw no point in linking to that version.

10th August 2012

In docbook one can specify the language used at the top, and the processing pipeline will use this information to pick the correct translations for 'chapter', 'see also', 'index' etc. For most languages used with docbook, I guess this works just fine. For example a German user can start the document with <book lang="de">, and the document will show up with the correct content with any of the docbook processors. This is not the case for the language I am working with at the moment, Norwegian Bokmål.

For a while, I was confused about which language code to use, because I was unable to find any language code that would work across all tools. I am currently testing dblatex, xmlto, docbook-xsl, and dbtoepub, and they do not handle Norwegian Bokmål the same way. Some of them do not handle it at all.

A bit of background information is probably needed to understand this mess. Norwegian has not one, but two written variants: Norwegian Nynorsk and Norwegian Bokmål. There are three two-letter language codes associated with these languages: Norwegian is 'no', Norwegian Nynorsk is 'nn' and Norwegian Bokmål is 'nb'. Historically the 'no' language code was used for Norwegian Bokmål, but many years ago this was found to be a bad idea, and the recommendation is to use the most specific language code instead, to avoid confusion. In the transition period it is a good idea to make sure 'no' is treated as an alias for 'nb'.

Back to the docbook processing tools in Debian. The dblatex tool only understands 'nn'. There are translations for 'no', but not 'nb' (BTS #684391), and due to a bug (BTS #682936) the 'no' language code is not recognised. The docbook-xsl tool chain only recognises 'nn' and 'nb', but not 'no'. The xmlto tool also only recognises 'nn' and 'nb', but not 'no'. The end result is that there is no language code I can use to get the docbook file working with all of these tools at the same time. :(

The correct solution is to use <book lang="nb">, but it will take time before that will work with all the free software docbook processors. :(
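
Until that happens, one possible workaround is to rewrite the lang attribute to whatever code each tool understands before invoking it. Here is a minimal, hypothetical Python sketch of that idea; the tool-to-code mapping is taken from the paragraph above and will hopefully become obsolete as the tools get fixed:

#!/usr/bin/python3
# Hypothetical workaround sketch: generate one copy of the docbook
# source per tool, with the lang attribute each tool understands.
import re

lang_for_tool = {
    'dblatex':     'nn',   # only 'nn' is recognised at the moment
    'docbook-xsl': 'nb',
    'xmlto':       'nb',
}

def set_book_lang(docbook_text, lang):
    # Replace the lang attribute on the top level <book> tag only.
    return re.sub(r'(<book[^>]*\blang=")[^"]*(")',
                  r'\g<1>%s\g<2>' % lang, docbook_text, count=1)

with open('book.xml') as f:
    src = f.read()
for tool, lang in lang_for_tool.items():
    with open('book.%s.xml' % tool, 'w') as f:
        f.write(set_book_lang(src, lang))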

Oh, the joy of well integrated tools. :/

31st July 2012

I tried to send this text to the docbook-apps mailing list at lists.oasis-open.org, but it only accepts messages from subscribers and rejected my post, and I completely lack the bandwidth required to subscribe to another mailing list, so instead I post my message here and hope my blog readers can help me out.

I am quite new to docbook processing, and am climbing a steep learning curve at the moment.

To give you some background, I am working on a Norwegian translation of the book Free Culture by Lawrence Lessig, and I use docbook to handle the process. The files to build the book are available from github. The book has around 400 pages with parts, images, footnotes, tables, index entries etc., which has proven to be a challenge for the free software docbook processors. My build platform is Debian GNU/Linux Squeeze.

I want to build PDF, EPUB and HTML versions of the book, and have tried different tool chains to do the conversion from docbook to these formats. I am currently focusing on the PDF version, and have a few problems.

  • Using dblatex, the <part> handling is not the way I want it, as </part> does not really end the <part> (see BTS report #683166), the xetex backend (needed to process UTF-8) gives incorrect hyphens in index references spanning several pages (see BTS report #682901), and I am unable to get the Norwegian template texts (see BTS report #682936).
  • Using straight xmlto fails with a LaTeX error (see BTS report #683163).
  • Using xmlto with the fop backend fails to handle images (they do not show up in the PDF), fails to handle a long footnote (footnote and text body overlap, see BTS report #683197), and fails to create a correct index (some entries lack page references, and the page references listed are not right).
  • Using xmlto with the dblatex backend behaves like dblatex.
  • Using docbook-xsl with xsltproc + fop has the same footnote and index problems as the xmlto + fop processing.

So I wonder, what would be the best way to create the PDF version of this book? Are some of the bugs found above solved in new or experimental versions of some docbook tool chain?

What about HTML and EPUB versions?

21st July 2012

I reported earlier that I am working on a Norwegian version of the book Free Culture by Lawrence Lessig. Progress is good, and yesterday I got a major contribution from Anders Hagen Jarmund completing chapter six. The source files as well as a PDF and EPUB version of this book are available from github.

I am happy to report that the draft for the first two chapters (preface, introduction) is complete, and three other chapters are also completely translated. This completes 26 percent of the strings (equivalent to paragraphs) in the book, leaving 74 percent to translate. A graph of the progress is present at the bottom of the github project page. There is still room for more contributors. Get in touch or send github pull requests with fixes if you have time and are willing to help make this book make it to print. :)

The book translation framework could also be a good basis for other translations, if you want the book to be available in your language.

16th July 2012

I am currently working on a project to translate the book Free Culture by Lawrence Lessig to Norwegian. The source we base our translation on is the docbook version, which allows us to use po4a and .po files to handle the translation, and for this to work well the docbook source document needs to be properly tagged. The source files of this project are available from github.
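
For readers unfamiliar with this setup, the po4a round trip looks roughly as follows. This is only a sketch with made-up file names (freeculture.xml for the docbook master, nb.po for the Norwegian translation); see the po4a documentation for the details:

#!/usr/bin/python3
# Sketch of the po4a round trip for a docbook translation, with
# hypothetical file names.
import subprocess

# Extract the translatable strings from the docbook master into a POT
# file for translators to start from.
subprocess.run(['po4a-gettextize', '-f', 'docbook',
                '-m', 'freeculture.xml', '-p', 'freeculture.pot'],
               check=True)

# Refresh the Norwegian .po file after the master changed.
subprocess.run(['po4a-updatepo', '-f', 'docbook',
                '-m', 'freeculture.xml', '-p', 'nb.po'], check=True)

# Generate the translated docbook.  -k 0 writes output even when only
# part of the strings are translated (the default threshold is 80%).
subprocess.run(['po4a-translate', '-f', 'docbook',
                '-m', 'freeculture.xml', '-p', 'nb.po',
                '-l', 'freeculture.nb.xml', '-k', '0'], check=True)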

The problem is that the docbook source has flaws, and no-one involved in the project is a docbook expert. Is there a docbook expert somewhere who is interested in helping us create a well tagged docbook version of the book, and adjust our build process for the PDF, EPUB and HTML versions of the book? This would provide a well tagged English version (our source document), and make it a lot easier for us to create a good Norwegian version. If you can and want to help, please get in touch with me or fork the github project and send pull requests with fixes. :)

9th July 2012

The Debian Edu / Skolelinux project has users all over the globe, but until recently we did not know about any users in Norway's neighbouring country Sweden. This changed when George Bredberg showed up in March this year on the mailing list, asking interesting questions about how to adjust and scale the just released Debian Edu Squeeze setup to his liking. He granted me an interview, and I am happy to share his answers with you here.

Who are you, and how do you spend your days?

I'm a 44 year old country guy who has been working 12 years at the same school, as 50% IT manager and 50% teacher. My educational background is a fil.kand in history and religious beliefs, and an exam as a "folkhighschool" teacher, that is, for teaching grown-ups. In Norwegian I believe it's called "Vuxenupplaring". I also have a master's in "Technology and social change". So I'm not really a tech guy, I just like to study how humans and technology interact, and that is my perspective when working with IT.

How did you get in contact with the Skolelinux/Debian Edu project?

I have followed the Skolelinux project for quite some time by now. Earlier I tested the K12-LTSP project, which we used for some time, but I really like the idea of having a distribution aimed at being a complete solution for schools, with the necessary tools integrated. When K12-LTSP abandoned that idea some years ago, I started looking more seriously into Skolelinux instead.

What do you see as the advantages of Skolelinux/Debian Edu?

The big point of Skolelinux to me is that it is a complete distribution, ready to install. It has LDAP support, MS Windows integration tools and so forth already configured, saving an administrator a lot of time and headache. We were using another Linux-based thin client system called ThinLinc, which has served us very well. But that Skolelinux is based on LTSP rather than VNC is, to me, better when it comes to the kind of multimedia used in schools, that is, showing videos from Youtube or educational TV. It is also easier to mix thin clients with workstations, since the user settings will be the same. In our VNC-based solution you had to "beat around the bush" by setting up a second, hidden home directory for user settings for the workstations, because they will be different from the ones used on the thin clients. Skolelinux support for diskless workstations is very convenient, since a school today often needs to use a class room projector showing videos in full screen. That is easily done with a small integrated media computer running as a diskless workstation. You have only two installs to update and configure, one for the thin clients and one for the workstations, which also saves a lot of time. Our old system was also based on Redhat and CentOS. They are both very nice distributions, but they are sometimes painfully slow when it comes to updating multimedia support and multimedia programs (even such as Gimp), leaving us with somewhat oldish applications. Debian is quicker to update.

What do you see as the disadvantages of Skolelinux/Debian Edu?

Debian is a bit too quick when it comes to updating. As an example, we use old HP terminals as thin clients, and twice already this year (2012) the updates from the repositories have stopped sound from working on them. It's a kernel/ALSA issue. So you have to be more careful about properly testing the updates before you run them in a production environment. This never happened with CentOS.

I also would like to be able to set my own domain settings at install time. In Skolelinux they are kind of hard coded into the distribution, when it comes to LDAP and at least samba integration. That is more a cosmetic/translation issue than a real problem. Running MS Windows applications within the Skolelinux environment needs to be better supported, that is, running them seamlessly via RDP, with support for single sign-on. That will make the transition to free software easier, because you can keep the applications you really need. No such support will make it impossible if you work in a school where some applications can't be open source. As for us, we really need to run Adobe InDesign in our journalist classes. We run a journalist education, and are one of the very few non-university ones that is approved by Svenska journalistförbundet (the Swedish journalist association). Our education gives the pupils the right of membership there, once they are done. This is important if you want to get a job.

Adobe InDesign is the program most commonly used in newspapers and magazines. We used QuarkXPress before, but they seem to be losing their market to Adobe. The only "equivalent" to InDesign in the open source world is Scribus, and it's not advanced enough. At least not according to the teacher. I think it would be possible to use it, because the students are not supposed to learn a program, they are supposed to learn how to edit and compile a newspaper. But politically, at our school we are not there yet. And Scribus lacks a lot of things you find in InDesign.

We even used a Windows program for sound editing for the radio journalist part. In the year to come we are going to try Audacity. That software has the same kind of limitations compared to Adobe Audition, but that teacher is a bit more open minded. We have tried Ardour also, but that is more like a music studio program, not intended for the kind of editing taking place in a radio studio. It's way too complex, and the GUI is too scattered when you only want to cut, make pass-overs, add extra channels and normalise. Those things you can do in Audacity, but it's not as easy as in Audition. You have to do more things manually with envelopes, which is a bit old-fashioned and time-wasting. It's also harder to cut and move sound from one channel to another, which is a thing you do frequently, because you often find yourself needing to rearrange parts of the sound file.

So, I am not sure we will succeed in replacing even Audition, but we will try. The problem is that the students have certain expectations when they start an education towards a profession, so the programs have to look and feel professional. The good thing with radio is that there are many programs out there that radio studios use, so it's not as standardised as newspaper editing. That means it does not really matter what program they learn, because once they start working they still have to learn the program the studio uses, so instead the focus has to be on learning the editing part, without too much focus on a specific piece of software.

Which free software do you use daily?

Myself, I'm running Linux Mint or Ubuntu these days. I use almost only open source software, preferably Linux based. When it comes to the most used applications, it's OpenOffice and Firefox (of course ;) ).

Which strategy do you believe is the right one to use to get schools to use free software?

To get schools to use free software, there has to be good open source software that is Windows-based, to ease the transition. But it's also very important that the multimedia support works flawlessly. Problems with Youtube, Twitter, Facebook and whatever will create problems with both teachers and students. Economy is also important for schools, so using thin clients, as long as they have good multimedia support, is a very good idea. It's also important that the open source software works for the administration too. It's hard to convince the teachers to stick with open source if the principal has to run Windows. It also creates a problem if some classes have to use Windows for their tasks, since that will create a difference in "status" between classes, so good support for running Windows applications via the thin client (Linux) desktop is essential. At least at our school, where we have mixed levels of education, from high school to journalist school.

Update 2012-07-09 08:30: Paul Wise tipped me off on IRC about three useful sources related to free software for radio stations: the LWN article "Radio station management with Airtime", Airtime, which claims to be a free, open source radio automation software, and Rivendell, which claims to be a complete radio broadcast automation solution. All of them seem useful to the aspiring radio producer.

8th July 2012

In the Debian Edu / Skolelinux project, we have realised that one of the major blockers for the project's success is the purchasing skills in schools and municipalities. We provide what the happy users of Debian Edu / Skolelinux say they need, at a lower cost than the alternatives, and yet so few schools decide to use our solution. I was pleased to discover the same observation made by mySociety and Tom Steinberg in his blog post "Can you recognize the million pound chair?". Read it and weep for the spending of your tax money.

Of course there are other factors involved as well, like our project's bad marketing skills and the Linux community fragmentation causing worry among the people on the outside, so we as a project need to keep working hard to gain users, but it is an uphill battle when public decision makers are unable to understand computer system purchases.

7th July 2012

Included in Debian Edu / Skolelinux is a large collection of end user and school specific software. One of the packages not installed by default, but provided in the Debian archive for schools to install if they want to, is a system to automatically plan the school time table using information about available teachers, classes and rooms, combined with the list of required courses and how many hours each topic should receive. The software is named FET, and it provides a graphical user interface to input the required information, saves the result in a fairly simple XML format, and generates time tables for both teachers and students. It is available for Linux, MacOSX and Windows.

This is the feature list, lifted from the project web site:

  • FET is free software, licensed under the GNU GPL v2 or later. You can freely use, copy, modify and redistribute it
  • Localized to en_US (US English, default), ar (Arabic), ca (Catalan), da (Danish), de (German), el (Greek), es (Spanish), fa (Persian), fr (French), gl (Galician), he (Hebrew), hu (Hungarian), id (Indonesian), it (Italian), lt (Lithuanian), mk (Macedonian), ms (Malay), nl (Dutch), pl (Polish), pt_BR (Brazilian Portuguese), ro (Romanian), ru (Russian), si (Sinhala), sk (Slovak), sr (Serbian), tr (Turkish), uk (Ukrainian), uz (Uzbek) and vi (Vietnamese) (incompletely for some languages)
  • Fully automatic generation algorithm, allowing also semi-automatic or manual allocation
  • Platform independent implementation, allowing running on GNU/Linux, Windows, Mac and any system that Qt supports
  • Flexible modular XML format for the input file, allowing editing with an XML editor or by hand (besides FET interface)
  • Import/export from CSV format
  • The resulted timetables are exported into HTML, XML and CSV formats
  • Flexible students structure, organized into sets: years, groups and subgroups. FET allows overlapping years and groups and non-overlapping subgroups. You can even define individual students (as separate sets)
  • Each constraint has a weight percentage, from 0.0% to 100.0% (but some special constraints are allowed to have only 100% weight percentage)
  • Limits for the algorithm (all these limits can be increased on demand, as a custom version, because this would require a bit more memory):
    • Maximum total number of hours (periods) per day: 60
    • Maximum number of working days per week: 35
    • Maximum total number of teachers: 6000
    • Maximum total number of sets of students: 30000
    • Maximum total number of subjects: 6000
    • Virtually unlimited number of activity tags
    • Maximum number of activities: 30000
    • Maximum number of rooms: 6000
    • Maximum number of buildings: 6000
    • Possibility of adding multiple teachers and students sets for each activity. (it is possible also to have no teachers or no students sets for an activity)
    • Virtually unlimited number of time constraints
    • Virtually unlimited number of space constraints
  • A large and flexible palette of time constraints:
    • Break periods
    • For teacher(s):
      • Not available periods
      • Max/min days per week
      • Max gaps per day/week
      • Max hours daily/continuously
      • Min hours daily
      • Max hours daily/continuously with an activity tag
      • Respect working in an hourly interval a max number of days per week
    • For students (sets):
      • Not available periods
      • Begins early (specify max allowed beginnings at second hour)
      • Max gaps per day/week
      • Max hours daily/continuously
      • Min hours daily
      • Max hours daily/continuously with an activity tag
      • Respect working in an hourly interval a max number of days per week
    • For an activity or a set of activities/subactivities:
      • A single preferred starting time
      • A set of preferred starting times
      • A set of preferred time slots
      • Min/max days between them
      • End(s) students day
      • Same starting time/day/hour
      • Occupy max time slots from selection (a complex and flexible constraint, useful in many situations)
      • Consecutive, ordered, grouped (for 2 or 3 (sub)activities)
      • Not overlapping
      • Max simultaneous in selected time slots
      • Min gaps between a set of (sub)activities
  • A large and flexible palette of space constraints:
    • Room not available periods
    • For teacher(s):
      • Home room(s)
      • Max building changes per day/week
      • Min gaps between building changes
    • For students (sets):
      • Home room(s)
      • Max building changes per day/week
      • Min gaps between building changes
    • Preferred room(s):
      • For a subject
      • For an activity tag
      • For a subject and an activity tag
      • Individually for a (sub)activity
    • For a set of activities:
      • Occupy a maximum number of different rooms

I have not used it myself, as I am not involved in time table planning at a school, but it seems to work fine when I test it. If you need to set up your school's time table and are tired of doing it manually, check it out. A quick summary of how to use it can be found in a blog post from MarvelSoft. If you find FET useful, please provide a recipe for the Debian Edu project in the Debian Edu HowTo section.

3rd July 2012

In the NUUG FiksGataMi project (a Norwegian version of FixMyStreet from mySociety), we have discovered a problem with the municipalities using Zimbra. When FiksGataMi sends a problem report to the government, the email From: address is set to the address of the person reporting the problem, while the envelope sender is set to the FiksGataMi contact address. The intention is to make sure the municipality sends any replies to the person reporting the problem, while any email delivery problems are sent to us in NUUG. This works well in most cases, but not for Karmøy municipality, which uses Zimbra. Karmøy is using the vacation message function in Zimbra to send an automatic reply reporting that the message has been received, and this message is sent to the envelope sender and not the address in the From: header.
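
For readers unfamiliar with the distinction: SMTP transports the envelope sender separately from the From: header inside the message, and the two can differ. A minimal Python sketch with made-up addresses shows how they are set independently:

#!/usr/bin/python3
# Sketch with made-up addresses: the envelope sender (MAIL FROM) and
# the From: header are set independently.  RFC 3834 style auto-replies
# go to the envelope sender, here bounces@example.org.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'reporter@example.com'   # where we want human replies to go
msg['To'] = 'postmottak@example.org'
msg['Subject'] = 'Problem report'
msg.set_content('There is a hole in the road.')

with smtplib.SMTP('localhost') as server:
    # The from_addr argument is the envelope sender, independent of
    # the From: header inside the message itself.
    server.send_message(msg, from_addr='bounces@example.org')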

This causes the automatic message from Karmøy to go to NUUG's request-tracker instance instead of to the person reporting the problem. We cannot really change the envelope sender address, as this would make it impossible for us to discover when there are problems with the MTAs receiving problem reports. We have been in contact with the people at Karmøy municipality, and they are willing to adjust Zimbra if something can be changed there to get a better behaviour.

The default behaviour of Zimbra is, as far as I can tell, according to the specification in RFC 3834, which recommends that vacation messages are sent to the envelope sender and not to the From: address. But I wonder if it is possible to adjust or configure Zimbra to behave differently. Anyone know? Please let us know at fiksgatami (at) nuug.no.

26th June 2012

I've been too busy at home, but finally I found time to wrap up another interview with the people behind Debian Edu and Skolelinux. This time we get to know José Luis Redrejo Rodríguez, one of our great helpers from Spain. His effort was the reason we added support for several desktop types (KDE, Gnome and most recently LXDE) in Debian Edu, and have all of these available in the recently published Debian Edu Squeeze version.

Who are you, and how do you spend your days?

I'm a father, teacher and engineer working for the Education Ministry of the Region of Extremadura (Spain) on the implementation of ICT in schools.

How did you get in contact with the Skolelinux/Debian Edu project?

In 2006, I verified that we in Extremadura and the Skolelinux project had both been working in parallel for some years, doing very similar things, using very similar tools and with similar targets, so I decided it was time to join forces as much as possible.

What do you see as the advantages of Skolelinux/Debian Edu?

A community of highly skilled experts working together, with a really open scheme of collaboration and work. I really love the concepts of do-ocracy and merit-ocracy and the way these concepts are being used every day inside Debian Edu.

What do you see as the disadvantages of Skolelinux/Debian Edu?

Sometimes the differences in the implementations, laws or economical and technical resources in the different countries don't allow us to agree on the same solution for all of us, and several approaches are needed, which is a waste of effort. Also, there is a lack of manpower to be able to follow the fast evolution of the technologies in schools.

Which free software do you use daily?

Debian, of course, and due to the kind of job I have, I spend most of my time in Iceweasel, Geany and Terminator.

Which strategy do you believe is the right one to use to get schools to use free software?

I think there is not a single strategy because there are very different scenarios: schools with mixed proprietary and free environments, schools using only workstations, other schools using laptops, netbooks, tablets, interactive white-boards, etc.

Also, the range of ages of the students is very broad, and you cannot use the same solutions for primary schools and secondary schools or even universities. So different strategies are needed.

But, looking at these differences, and looking back at the things we've done and implemented and the places where we have spent most of our effort, I think we should focus as much as possible on free multi-platform environments, using only standard tools, and move more and more to Internet or network solutions that can be deployed using wireless. I think we'll see more and more personal devices in the schools, devices the students and teachers will take home with them, so the solutions must be able to be taken home and continue working there.

24th June 2012

Many years ago, while studying Computer Science at the University of Tromsø, I started collecting computer related songs for use at parties. The original version was written in LaTeX, but a few years ago I got help from Håkon W. Lie, one of the inventors of W3C CSS, to convert it to HTML while keeping the ability to create a nice book in PDF format. I have not had time to maintain the book for a while now, and guess I should put it up on some public version control repository where others can help me extend and update the book. If anyone is volunteering to help me with this, send me an email. Also let me know if there are songs missing from my book.

I have not mentioned the book on my blog so far, and it occurred to me today that I really should let all my readers share the joy of singing out loud about programming, computers and computer networks. Especially now that Debconf 12 is about to start (and I am not going). Want to sing? Check out Petter's Computer Science Songbook.

11th June 2012

During my work on Debian Edu based on Squeeze, I came across some issues that should be addressed in the Wheezy release. I finally found time to wrap up my notes and provide a quick summary of what I found, with a bit of explanation.

  • We need to rewrite our package installation framework, as tasksel changed from using tasksel tasks to using meta packages (i.e. packages with dependencies, like our education-* packages), and our installation system depends on tasksel tasks in /usr/share/tasksel/debian-edu-tasks.desc for package installation.
  • Enable Kerberos login for more services. Now with the Kerberos foundation in place, we should use it to get single sign-on with more services and avoid unneeded password / login questions. We should at least try to enable it for these services:
    • CUPS for admins to add/configure printers and users when using quotas.
    • Nagios for admins checking the system status.
    • GOsa for admins updating LDAP and users changing their passwords.
    • LDAP for admins updating LDAP.
    • Squid for users when exam mode / filtering is active.
    • ssh for admins and users to save a password prompt.
  • When we move GOsa to use Kerberos instead of LDAP bind to authenticate users, we should try to block, or at least limit, access to using LDAP bind for authentication, to ensure Kerberos is used when it is intended and nothing falls back to using the less safe LDAP bind.
  • Merge debian-edu-config and debian-edu-install. The split made sense when d-e-install did a lot more, but these days it is just an inconvenience when we update the debconf preseeding values.
  • Fix partman-auto to allow us to abort the installation before touching the disk if the disk is too small. This is BTS report #653305; the d-i developers are fine with the patch, and someone just needs to apply it and upload. After this is done we need to adjust debian-edu-install to use this new hook.
  • Adjust to the new LTSP framework (boot time configuration instead of install time configuration). LTSP changed its design, and our hooks to install packages and update the configuration are most likely not going to work in Wheezy.
  • Consider switching to NBD instead of NFS for LTSP root, to allow the Kernel to cache files in its normal file cache, possibly speeding up KDE login on slow networks.
  • Make it possible to create expired user passwords that must be changed on first login. This is useful when handing out passwords on paper, to make sure only the user knows the password. This requires fixes to the PAM handling of kdm and gdm.
  • Make a GUI for adding new machines automatically from sitesummary. The current command line script is not very friendly to people most familiar with GUIs. This should probably be integrated into GOsa, to have it available where the admin will be looking for it.
  • We should find a way for Nagios to check that the DHCP service actually is working (as in handing out IP addresses). None of the Nagios checks I have found so far have been working for me.
  • We should switch from libpam-nss-ldapd to sssd for all profiles using LDAP, and not only for roaming workstations, to have fewer packages to configure and a consistent setup across all profiles.
  • We should configure Kerberos to update the LDAP and Samba passwords when a password is changed using the Kerberos protocol. The hook was requested in BTS report #588968 and is now available in Wheezy. We might need to write a MIT Kerberos plugin in C to get this.
  • We should clean up the set of applications installed by default.
    • reduce the number of chemistry visualisers
    • consider dropping xpaint
    • and probably more?
  • Some hardware needs external firmware to work properly. This is mostly the case for WiFi network cards, but there are some other examples too. For popular laptops to work out of the box, such firmware needs to be installed from non-free, and we should provide some GUI to do this. Ubuntu already has this implemented, and we could consider using their packages. At the moment we have some command line scripts to do this (one for the running system, another for the LTSP chroot).
  • In Squeeze, we provide KDE, Gnome and LXDE as desktop options. We should extend the list with Xfce and Sugar, and preferably find a way to install several of them and allow the admin or the user to select which one to use.
  • The golearn tool from the goplay package makes it easy to check out interesting educational packages. We should work on the package tagging in Debian to ensure it represents all the useful educational packages, and extend the tool to allow it to use packagekit to install new applications with a simple mouse click.
  • The Squeeze version got half an exam solution already in place, with the introduction of iptables-based network blocking, but for it to be a complete exam solution the Squid proxy needs to enable filtering/blocking as well when exam mode is enabled. We should implement a way to easily enable this for the schools that want it, instead of the "it is documented" method of today.
  • A feature used in several schools is the ability for a teacher to "take over" the desktops of individual or all computers in the room. There are at least three implementations, italc, controlaula and epoptes, and we should pick one of them and make it trivial to set up in a school. The challenges are how to distribute crypto keys, how to group the computers in one room, and how to set up which machine/user can control the machines in a given room.
  • Tablets and surf boards are getting more and more popular, and we should look into providing a good solution for integrating these into the Debian Edu network. I am not quite sure how. Perhaps we should provide an installation profile with better touch screen support for them, or add some sync services to allow them to exchange configuration and data with the central server. This should be investigated.

I guess we will discover more as we continue to work on the Wheezy version.

9th June 2012

Slashdot ran a story about Intel planning a TV with face recognition to recognise the viewer, and it occurred to me that it would be more interesting to turn it around and do face recognition on the TV image itself. It could let the viewer know who is present on the screen, and perhaps look up their credibility, company affiliation, previous appearances etc., for the viewer to better evaluate what is being said and done. That would be a feature I would be willing to pay for.

I would not be willing to pay for a TV that points a camera at my household, like the big brother feature apparently proposed by Intel. It is the telescreen idea taken straight out of the book 1984 by George Orwell.

6th June 2012

A few days ago I reported how to get the support status out of Dell using an unofficial and undocumented SOAP API, which I have since found out was discovered by Daniel De Marco in February. Combined with my web scraping code for HP, Dell and IBM from 2009, I got inspired and wrote a web service based on Scraperwiki to make it easy to look up the support status and get a machine readable result back.

This is what it looks like at the moment when asking for the JSON output:

% GET https://views.scraperwiki.com/run/computer-hardware-support-status/?format=json&vendor=Dell&servicetag=2v1xwn1
supportstatus({"servicetag": "2v1xwn1", "warrantyend": "2013-11-24", "shipped": "2010-11-24", "scrapestamputc": "2012-06-06T20:26:56.965847", "scrapedurl": "http://143.166.84.118/services/assetservice.asmx?WSDL", "vendor": "Dell", "productid": ""})
%

It currently supports Dell and HP, and I am hoping for help to add support for other vendors. The Python source is available on Scraperwiki and I welcome help with adding more features.
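
For scripted lookups, note that the output above is JSONP: the JSON is wrapped in a supportstatus(...) callback that has to be stripped before parsing. Here is a minimal Python sketch of such a lookup against the URL shown above; the helper name and the lack of error handling are mine:

#!/usr/bin/python3
# Minimal sketch: query the Scraperwiki view shown above and unwrap
# the JSONP-style "supportstatus(...)" callback to get plain JSON.
import json
import urllib.request

def support_status(vendor, servicetag):
    url = ('https://views.scraperwiki.com/run/'
           'computer-hardware-support-status/'
           '?format=json&vendor=%s&servicetag=%s' % (vendor, servicetag))
    reply = urllib.request.urlopen(url).read().decode('utf-8')
    # Strip everything up to the first '(' and after the last ')'.
    return json.loads(reply[reply.index('(') + 1:reply.rindex(')')])

info = support_status('Dell', '2v1xwn1')
print(info['warrantyend'])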

Tags: english, nuug.
2nd June 2012

Back in 2010, Mike Gabriel showed up on the Debian Edu and Skolelinux mailing list. He quickly proved to be a valuable developer, and thanks to his tireless effort we now have Kerberos integrated into the Debian Edu Squeeze version.

Who are you, and how do you spend your days?

My name is Mike Gabriel, I am 38 years old and live near Kiel, Schleswig-Holstein, Germany. I live together with a wonderful partner (Angela Fuß), two children of my own and two bonus children (contributed by Angela).

During the day I am employed part-time as a system administrator and work part-time as an IT consultant. The consultancy work touches free software topics wherever and whenever possible. During the nights I am a free software developer. In the gaps I am also training to become an osteopath.

Starting in 2010 we (Andreas Buchholz, Angela Fuß, Mike Gabriel) have set up a free software project in the area of Kiel that aims at introducing free software into schools. The project's name is "IT-Zukunft Schule" (IT future for schools). The project links IT skills with communication skills.

How did you get in contact with the Skolelinux/Debian Edu project?

While preparing our own customised Linux distribution for "IT-Zukunft Schule" we were repeatedly asked if we really wanted to reinvent the wheel. What schools really need is already available, people said. From this impulse we started evaluating other Linux distributions that target school networks.

In the end we short-listed two approaches and compared them: a commercial Linux distribution developed by a company in Bremen, Germany, and Skolelinux / Debian Edu. Between 12/2010 and 03/2011 we went to several events and met people responsible for marketing and development of each of the distributions. Skolelinux / Debian Edu was by far the more convincing of the two, across the full spectrum. What was most attractive for me personally: the perspective of collaboration within the development branch of the Debian Edu project itself.

In parallel with this, we talked to many local and not-so-local people. People teaching at schools, headmasters, politicians, data protection experts, other IT professionals.

We came to two conclusions:

First, a technical conclusion: what schools need is available in bits and pieces here and there, and none of the solutions really fit 100%. Every school we have seen has a very individual IT setup, whereas most of each school's requirements could be mapped to a standard IT solution. The requirements for such an IT solution are flexibility and customisability, so that individual adaptations here and there are possible. In terms of re-distributing and rolling out such a standardised IT system for schools (a system that is still to some degree customisable), there is still a lot of work to do here locally. Debian Edu / Skolelinux has been our choice as the starting point.

Second, a holistic conclusion: what schools need does not exist at all (or we have missed it so far). There are several technical solutions for handling IT at schools that tend to make a good impression. What has been missing completely here in Germany, though, is the enrolment of people into using IT and teaching with IT. "IT-Zukunft Schule" tries to provide an approach for this.

Only some schools have some sort of a media concept which explains, defines and gives guidance on how to use IT in class. Most schools in Northern Germany do not have an IT service provider; the school's IT equipment is managed by one or (if the school is lucky) two (admin) teachers, and most of that workload these admin teachers handle in their spare time.

We were surprised that only very few admin teachers were networked with colleagues from other schools. Basically, every school around here has its own individual approach to providing IT equipment to teachers and students, and the exchange of ideas was quasi non-existent until 2010/2011.

Quite a few (non-admin) teachers try to avoid using IT technology in class as a learning medium completely. Several reasons for this avoidance exist.

We discovered that no-one had ever taken a closer look at this social part of IT management in schools. On our quest for a technical IT solution for schools, we discussed this issue with several teachers, headmasters, politicians and other IT professionals, and they all confirmed: a holistic approach to IT management at schools, an approach that includes the people in place, will be new and probably a gain for all.

What do you see as the advantages of Skolelinux/Debian Edu?

There is a list of advantages: international context, openness to any kind of contributions, do-ocracy policy, the closeness to Debian, the different installation scenarios possible (from stand-alone workstation to complex multi-server sites), the transparency within project communication, honest communication within the group of developers, etc.

What do you see as the disadvantages of Skolelinux/Debian Edu?

Every coin has two sides:

Technically: BTS issue #311188, the tricky upgradability of a Debian Edu main server, network client installations on top of a plain vanilla Debian installation should become possible sometime in the near future, and one could think about splitting the very complex debian-edu-config package into several portions (to make it easier for new developers to contribute).

Another issue I see is that we (as Debian Edu developers) should find out more about the network of people who do the marketing for Debian Edu / Skolelinux. There is a very active group in Germany promoting Skolelinux on the bigger Linux Days within Germany. Are there other groups like that in other countries? How can we bring these marketing people together (marketing group A with group B and all of them with the group of Debian Edu developers)? During the last meeting of the German Skolelinux group, I got the impression of people there being rather disconnected from the development department of Debian Edu / Skolelinux.

Which free software do you use daily?

For my daily business, I do not use commercial software at all.

For normal stuff I use Iceweasel/Firefox and LibreOffice. For serious text writing I prefer LaTeX. I use Gimp, Inkscape and Scribus for more artistic tasks. I run virtual machines in KVM and VirtualBox.

I am one of the upstream developers of X2Go. In 2010 I started the development of a Python based X2Go Client, called PyHoca-GUI. PyHoca-GUI has brought forth a Python X2Go Client API that is currently being integrated into Ubuntu's software center.

For communications I have my own Kolab server running, using Horde as the web-based groupware client. For IRC I love to use irssi, and for Jabber I have several clients, mostly pidgin, though. I am also the Debian maintainer of Coccinella, a Jabber-based interactive whiteboard.

My favourite terminal emulator is KDE's Yakuake.

Which strategy do you believe is the right one to use to get schools to use free software?

Communicate, communicate, communicate. Enrol people, enrol people, enrol people.

1st June 2012

A few years ago I wrote about how to extract the support status for your Dell and HP servers. Recently I have learned from colleagues here at the University of Oslo that Dell has made this even easier, by providing a SOAP based web service. Given the service tag, one can now query the Dell servers and get machine readable information about the support status. This Perl code demonstrates how to do it:

use strict;
use warnings;
use SOAP::Lite;
use Data::Dumper;
my $GUID = '11111111-1111-1111-1111-111111111111';
my $App = 'test';
my $servicetag = $ARGV[0] or die "Please supply a servicetag. $!\n";
my ($deal, $latest, @dates);
my $s = SOAP::Lite
    -> uri('http://support.dell.com/WebServices/')
    -> on_action( sub { join '', @_ } )
    -> proxy('http://xserv.dell.com/services/assetservice.asmx')
    ;
my $a = $s->GetAssetInformation(
    SOAP::Data->name('guid')->value($GUID)->type(''),
    SOAP::Data->name('applicationName')->value($App)->type(''),
    SOAP::Data->name('serviceTags')->value($servicetag)->type(''),
);
print Dumper($a -> result) ;

The output can look like this:

$VAR1 = {
          'Asset' => {
                     'Entitlements' => {
                                       'EntitlementData' => [
                                                            {
                                                              'EntitlementType' => 'Expired',
                                                              'EndDate' => '2009-07-29T00:00:00',
                                                              'Provider' => '',
                                                              'StartDate' => '2006-07-29T00:00:00',
                                                              'DaysLeft' => '0'
                                                            },
                                                            {
                                                              'EntitlementType' => 'Expired',
                                                              'EndDate' => '2009-07-29T00:00:00',
                                                              'Provider' => '',
                                                              'StartDate' => '2006-07-29T00:00:00',
                                                              'DaysLeft' => '0'
                                                            },
                                                            {
                                                              'EntitlementType' => 'Expired',
                                                              'EndDate' => '2007-07-29T00:00:00',
                                                              'Provider' => '',
                                                              'StartDate' => '2006-07-29T00:00:00',
                                                              'DaysLeft' => '0'
                                                            }
                                                          ]
                                     },
                     'AssetHeaderData' => {
                                          'SystemModel' => 'GX620',
                                          'ServiceTag' => '8DSGD2J',
                                          'SystemShipDate' => '2006-07-29T19:00:00-05:00',
                                          'Buid' => '2323',
                                          'Region' => 'Europe',
                                          'SystemID' => 'PLX_GX620',
                                          'SystemType' => 'OptiPlex'
                                        }
                   }
        };

I have not been able to find any documentation from Dell about this service outside the inline documentation, and according to one comment it can have stability issues, but it is a lot better than scraping HTML pages. :)

I wonder if HP and other server vendors have a similar service. If you know of one, drop me an email. :)

Tags: english, nuug.
31st May 2012

A few days ago my color calibration gadget ColorHug arrived in the mail, and I've had a few days to test it. As all my machines are running Debian Squeeze, where the calibration software is missing (it is present in Wheezy and Sid), I ran the calibration using the Fedora based live CD. This worked just fine. So far I have only done the quick calibration. It was slow enough for me, so I will leave the more extensive calibration for another day.

After calibration, I get an ICC color profile file that can be passed to programs understanding such profiles. KDE does not seem to understand it out of the box, so I searched for command line tools to load the color profile into X. xcalib was the first one I found, and it seems to work fine for single monitor setups. But for my video player, a laptop with a flat screen attached, it was unable to load the color profile for the correct monitor. After searching a bit, I discovered that the dispwin tool from the argyll package would do what I wanted, and a simple

dispwin -d 1 profile.icc

later I had the color profile loaded for the correct monitor. The result was a bit more pink than I expected. I guess I picked the wrong monitor type for the "led" monitor I got, but the result is good enough for now.

Tags: english.
27th May 2012

In 2003, a German teacher showed up on the Debian Edu and Skolelinux mailing list with interesting problems and reports, proving that he was setting up Linux for what was (for us at the time) a lot of pupils. His name was Ralf Gesellensetter, and he has been an important tester and contributor since then, helping to make sure the Debian Edu Squeeze release became as good as it is.

Who are you, and how do you spend your days?

I am a teacher from Germany, and my subjects are Geography, Mathematics, and Computer Science ("Informatik"). For the past 12 years (since 2000), I have been working for a comprehensive (and soon, also inclusive) school leading to all kinds of general qualifications, such as O- or A-levels ("Abitur"). For almost as long, I've been taking care of our computer network.

Now, in my early 40s, I enjoy the privilege of spending a lot of my spare time together with my wife, our son (3 years) and our daughter (4 months).

How did you get in contact with the Skolelinux/Debian Edu project?

We had tried different Linux based school servers when members of my local Linux User Group (LUG OWL) discovered Skolelinux. I remember very well being part of a party celebrating the Linux New Media Award ("Best Newcomer Distribution", also nominated: Ubuntu) that was given to Skolelinux at the Linux World Exposition in Frankfurt in 2005 (IIRC). A few months later, I had the chance to join a developer meeting in Ulsrud (Oslo) and to hand out the award to Knut Yrvin and others. For more than 7 years now, Skolelinux has been part of our school's infrastructure, namely our main server (tjener), one LTSP server (today without thin clients), and approximately 50 workstations. Most of these have the option to boot a locally installed Skolelinux image. As a consequence, I have joined quite a few events dealing with free software or Linux, and met many Debian (Edu) developers. All of them seemed quite nice and competent to me, one more reason to stick with Skolelinux.

What do you see as the advantages of Skolelinux/Debian Edu?

Being Debian driven, you are given all the advantages of a community project, including well maintained updates. Once you are familiar with the network layout, you can easily roll out an entire educational computer infrastructure from just one installation medium. As only free software (FOSS) is used, which supports even elderly hardware, up-sizing your IT equipment is only limited by space (i.e. available labs). Especially if you run an LTSP thin client server, your administration costs tend towards zero.

What do you see as the disadvantages of Skolelinux/Debian Edu?

While Debian's stability has loads of advantages for servers, this might be different in some cases for clients: schools with an unlimited budget might buy new hardware with components that are not yet supported by Debian stable, or wish to use more recent versions of office packages or desktop environments. These schools have the option to run Debian testing or other distributions - if they have the capacity to do so. Another issue is that Debian release cycles include a wide range of changes; therefore a high percentage of human power seems to be absorbed by just keeping the features of Skolelinux within the new setting of the version to come. During this process, the cogs of Debian Edu are getting more and more professional, i.e. harder to understand for novices.

Which free software do you use daily?

LibreOffice, Wikipedia, OpenStreetMap, Iceweasel (Mozilla Firefox), KMail, Gimp, Inkscape - and of course the Linux kernel (not only on my PC, laptop and mobile, but also in our SAT receiver).

Which strategy do you believe is the right one to use to get schools to use free software?

  1. Support computer science as a regular subject in schools to make people really "own" their hardware, and to make them understand the difference between proprietary software products and free software development.
  2. Make budget baskets correspond: in Germany's public schools there are more or less fixed budgets for IT equipment (including licenses), so schools won't benefit from any savings here. This privilege is left to private schools, which consequently have a large share among German Skolelinux schools.
  3. Get free software into the seminars where would-be teachers are trained. In many cases, teachers' software habits are respected by decision makers rather than the expertise of IT experts.
  4. Don't limit ourselves to free software run natively. Everybody uses free software or free licenses (for instance Wikipedia), and this general concept should be expanded to free educational content to be shared world wide (school books, for example).
  5. Make clear wherever you can that the market share of free (libre) office suites is well above 20 per cent today, and that pupils don't need to know the "ribbon menu" in order to get employed.
  6. Talk about the difference between freeware and free software.
  7. Spread free software, or even collections of portable free apps for USB pen drives. Encourage students to get a legal copy of LibreOffice rather than accepting their use of illegal serial numbers. And keep sending documents in ODF formats.

26th May 2012

I just came across a blog post from Glyn Moody reporting the claimed cost from Microsoft of requiring ODF to be used by the UK government. I just sent him an email to let him know that his assumptions are most likely wrong. I am sharing it here in case some of my blog readers have seen the same numbers float around in the UK.

Hi. I just noted your http://blogs.computerworlduk.com/open-enterprise/2012/04/does-microsoft-office-lock-in-cost-the-uk-government-500-million/index.htm comment:

"They're all in Danish, not unreasonably, but even with the help of Google Translate I can't find any figures about the savings of "moving to a flexible two standard" as claimed by the Microsoft email. But I assume it is backed up somewhere, so let's take it, and the £500 million figure for the UK, on trust."

I can tell you that the Danish reports are inflated. I believe they are the same reports that were used in the Norwegian debate around 2007, and Gisle Hannemyr (a well known IT commentator in Norway) had a look at the content. In short, the reason it is claimed that using ODF will be so costly is the assumption that this means every existing document needs to be converted from one of the MS Office formats to ODF, transferred to the receiver, and converted back from ODF to one of the MS Office formats, and that each conversion will cost 10 minutes of work time for both the sender and the receiver. In reality the sender would have a tool capable of saving to ODF, and the receiver would have a tool capable of reading it, and the time spent would at most be a few seconds for saving and loading, not 20 minutes of wasted effort.

Microsoft claimed all these costs were saved by allowing people to transfer the original files from MS Office instead of spending 10 minutes converting to ODF. :)

See http://hannemyr.com/no/ms12_vl02.php and http://hannemyr.com/no/ms12.php for background information. Norwegian only, sorry. :)

Tags: english, nuug, standard.
18th May 2012

In January, I discovered the ColorHug, a USB dongle from Hughski for calibrating the colors on a computer screen. The software required is included in Debian, and I decided back then to preorder from the next batch. Yesterday I finally heard back from them, and got the opportunity to order. Today I ordered mine, and I eagerly await the delivery. I hope it arrives next week, as I got a confirmation that it should go in the mail on Monday. :)

If you want to ensure the colors on the screen match the intended colors, I suggest you check out this cheap tool with free software drivers. :)

Tags: english.
13th May 2012

It has been a few busy weeks for me, but I am finally back to publish another interview with the people behind Debian Edu and Skolelinux. This time it is one of our German developers, who has helped out over the years to make sure that both the major and many of the minor details are right before release.

Who are you, and how do you spend your days?

My name is Jürgen Leibner, I'm 49 years old and live in Bielefeld, a town in northern Germany. I worked nearly 20 years as a certified engineer in the department for plant design and layout of an international company for machinery and equipment. Since 2011 I'm a certified technical writer (tekom e.V.), producing technical documentation for a steam turbine manufacturer. From April this year I will manage the department of technical documentation at a manufacturer of automation and assembly line engineering.

My first contact with Linux was around 1993. Since that time I have used it at work and at home repeatedly, but not exclusively, as I have done at home since 2006.

How did you get in contact with the Skolelinux/Debian Edu project?

One day early in 2001, when I wanted to fetch my daughter from primary school, a teacher was sitting in the middle of 20 old computers, trying to boot them, and failing. I helped him get them booting. That was seen by the school director, and she asked me if I would like to take care of getting all those old computers at the school into use. I answered: "Yes".

Some weeks later each of the 10 classrooms had one computer running Windows 98. I began to collect old computers and equipment as gifts, and installed the first computer room with a peer-to-peer network. I did my work at school unpaid, in my spare time and with a lot of fun. About one year later the school was connected to the Internet and a local area network was installed in the school building. That was the time to get a server, and I knew it had to be a Linux server to be able to fulfil all the wishes of the teachers, and to do this in a transparent and economic way, without extra costs for things like licences and software. So I searched for a school server system running under Linux, and I found a couple of people nearby who had founded 'skolelinux.de'. It was the Skolelinux prerelease 32 I first tried out for use at the school. I managed the IT of that school until the municipal authority took over the IT management and centralised the services for all schools in Bielefeld in December of 2006.

What do you see as the advantages of Skolelinux/Debian Edu?

Looking back to the beginning, the advantages for me were different from what they are today.

In the past there were advantages like:

  • I didn't need to buy it, so it generated no costs for the school, as they had little money to spend on computers and software.
  • It has a licence which grants all rights to use it without cost.
  • It was better able to fit all the requirements of a server system for schools than a Microsoft server system, even if there are only Windows clients, because of its preconfigured overall concept of being an infrastructure solution and community for schools, not only a server.
  • I was able to configure the server to the needs of the school.

Today, some of those advantages have been lost or changed, and new ones have come up:

  • Most schools here do have money to buy hardware and software now.
  • They are today mostly managed by central IT departments which have their own concepts, and these often do not fit the Debian Edu concept because they are too close to the Microsoft ideology.
  • With the Squeeze version of Debian Edu, which now uses GOsa² for management, I feel more able to manage the daily tasks than with the interfaces used in the past.
  • It is more modular than in the past and fits even better to the different needs.
  • The documentation is usable and gets better every day.
  • More people than ever before are using Debian Edu all over the world, and so the community, which I think is a very important part, is sharing knowledge and minds.
  • Most, maybe all, of the technical requirements for schools are solved today by Debian Edu.

What do you see as the disadvantages of Skolelinux/Debian Edu?

  • There are too few IT companies able to integrate Debian Edu into their product portfolio, serving schools or even whole municipalities with complete concepts.
  • Debian Edu, like other free and open software projects, does not have enough lobbyists promoting free and open software to politicians.
  • Technically there are no disadvantages I'm aware of.

Which free software do you use daily?

I use Debian stable on my home server and on my little desktop computer. On my laptop I use Debian testing/sid. The applications I use on my laptop and my desktop are Open/Libre-office, Iceweasel, KMail, DigiKam, Amarok, Dolphin, okular and all the other programs I need from the KDE environment. On console I use newsbeuter, mutt, screen, irssi and all the other famous and useful tools.

My home server provides mail services with exim, dovecot, roundcube and mutt over ssh on the console, file services with samba, NFS, rsync, web services with apache, moinmoin-wiki, multimedia services with gallery2 and mediatomb and database services with MySQL for me and the whole family. I probably forgot something.

Which strategy do you believe is the right one to use to get schools to use free software?

I believe, we should provide concepts for IT companies to integrate Debian Edu into their product portfolio with use cases for different countries and areas all over the world.

30th April 2012

I normally cut my hair short, and my tool of choice has been a common hair/beard cutter, bought in an electrical shop here in Norway. But the last ones have not really been up to the task. My last cutter, some model from Braun, could only cut a few of my hairs at a time, and cutting my head took forever. And the one before that did not work very well either. We had looked for something better for a while, but it was not until I ended up visiting a hairdresser that we discovered that there are indeed better tools available. But these are not marketed and sold to "regular consumers". The hair salons can get them through their suppliers, but their suppliers only sell to companies. The models they sell are very different from the ones available from Elkjøp and Lefdal. The main difference is their efficiency. It would cut my hair in 5 minutes, instead of the 30-40 minutes required by my impotent Braun. The hairdresser I visited had a Panasonic ER160, which unfortunately is no longer available from the producer. But I found it had a successor, the Panasonic ER1611.

The next step was to find somewhere to buy it. This was not straightforward. The suppliers on the list I got from the hairdresser did not want to sell anything to me. But searching for the model on the web, we found a supplier in Norway willing to sell it to us for around NOK 4000, which was a bit much. We kept searching and finally found a Danish supplier selling it for around NOK 1800. We ordered one, and it arrived a few days ago.

The instructions said it had to charge for 8 hours before first use, so we left it charging over night. Normally it will only need one hour to charge. The following evening we successfully tested it, and I can warmly recommend it to anyone looking for a real hair cutter. The ones we have used until now have been hair cutter toys.

Tags: english.
26th April 2012

In an article published today by Computerworld Norway, photographer Eirik Helland Urke reports that the video editor application included with the HTC One X has some quite surprising terms of use. The article is mostly based on a Twitter message from Mr. Urke, stating:

"Drøy brukeravtale: HTC kan bruke MINE redigerte videoer kommersielt. Selv kan jeg KUN bruke dem privat."

I quickly translated it to this English message:

"Arrogant user agreement: HTC can use MY edited videos commercially. Although I can ONLY use them privately."

I have been unable to find the text of the license terms myself, but suspect it is a variation of the MPEG-LA terms I discovered with my Canon IXUS 130. The HTC One X specification states that the recording formats of the phone are .amr for audio and .mp4 for video. AMR is the Adaptive Multi-Rate audio codec, with patents which according to the Wikipedia article require a license agreement with VoiceAge. MP4 is MPEG4 with H.264, which according to Wikipedia requires a licence agreement with MPEG-LA.

I know why I prefer free and open standards also for video.

19th April 2012

Here in Norway, the Ministry of Government Administration, Reform and Church Affairs is behind a directory of standards that are recommended or mandatory for use by the government. When the directory was created, the people behind it made an effort to ensure that everyone would be able to implement the standards and compete on equal terms to supply software and solutions to the government. Free software and non-free software could compete on the same level.

But recently, some standards with RAND (Reasonable And Non-Discriminatory) terms have made their way into the directory. And while this might not sound too bad, the fact is that standard specifications with RAND terms often block free software from implementing them. The reasonable part of RAND means that the cost per user/unit is low, and the non-discriminatory part means that everyone willing to pay will get a license. Both sound great in theory. In practice, to get such a license one needs to be able to count users, and be able to pay a small amount of money per unit or user. By definition, users of free software do not need to register their use. So counting users or units is not possible for free software projects. And given that people will use the software without handing any money to the author, it is not really economically possible for a free software author to pay a small amount of money to license the rights to implement a standard when the income available is zero. The result in these situations is that free software is locked out from implementing standards with RAND terms.

Because of this, when I see someone claiming the terms of a standard are reasonable and non-discriminatory, all I can think of is how this really is non-reasonable and discriminatory. Because free software developers work in a global market, it does not really help to know that software patents are not supposed to be enforceable in Norway. The patent regimes in other countries affect us even here. I really hope the people behind the standards directory will pay more attention to these issues in the future.

You can find more on the issues with RAND, FRAND and RAND-Z terms from Simon Phipps (RAND: Not So Reasonable?).

Update 2012-04-21: I just came across a blog post from Glyn Moody over at Computer World UK warning about the same issue, urging people to speak out to the UK government. I can only urge Norwegian users to do the same for the hearing taking place at the moment (respond before 2012-04-27). It proposes to require video conferencing standards that include specifications with RAND terms.

15th April 2012

Behind Debian Edu and Skolelinux there are a lot of people doing the hard work of putting all the pieces together. This time I present to you Andreas Mundt, who has been part of the technical development team for several years. He was also a key contributor in getting GOsa and Kerberos set up in the recently released Debian Edu Squeeze version.

Who are you, and how do you spend your days?

My name is Andreas Mundt, and I grew up in southern Germany. After studying Physics I spent several years at university doing research in Quantum Optics. After that I worked for some years in an optics company. Finally I decided to turn over a new leaf in my life and started teaching 10 to 19 year old kids at school. I teach math, physics, information technology and science/technology.

How did you get in contact with the Skolelinux/Debian Edu project?

Even before I switched to teaching, I followed the Debian Edu project because of my interest in education and Debian. During the qualification/training period for teaching, I started contributing.

What do you see as the advantages of Skolelinux/Debian Edu?

The advantages of Debian Edu are the well known name, the out-of-the-box philosophy and of course the great free software of the Debian Project!

What do you see as the disadvantages of Skolelinux/Debian Edu?

As every coin has two sides, the out-of-the-box philosophy has its downside too. In my opinion, it is hard to modify and tweak the setup if you need or want that. Furthermore, it is not easily possible to upgrade the system to a new release. It takes much too long after a Debian release to prepare the -Edu release, perhaps because the number of developers working on the core of the code is rather small and they are often busy elsewhere.

The Debian LAN project might fill the use case of a more flexible system.

Which free software do you use daily?

I only use non-free software if I am forced to, and I run Debian on all my machines. For documents I prefer LaTeX and PGF/TikZ, and use mutt and Iceweasel for email and web browsing respectively. At school I use Arduino and Fritzing for a micro controller project.

Which strategy do you believe is the right one to use to get schools to use free software?

One of the major problems is the vendor lock-in from top to bottom: especially in combination with ignorant government employees and politicians, this works out great for the "market leader". The school administration here in Baden-Württemberg is occupied by that vendor. Documents have to be prepared in non-free, proprietary formats. Even free browsers do not work for the school administration. Publishers of school books provide software only for proprietary platforms.

To change this, political work is very important. Parts of the political spectrum have become aware of the problem in recent years. However, it takes quite some time and courageous politicians to 'free' the system. There is currently some discussion about "Open Data" and "Free/Open Standards". I am not sure if all the parties involved have a clue about the potential of these ideas, and probably only a fraction take them seriously. However, it might slowly make free software and the philosophy behind it better known and more popular.

8th April 2012

It takes all kinds of contributions to create a Linux distribution like Debian Edu / Skolelinux, and this time I lend my ear to Justin B. Rye, who is listed as a big contributor to the Debian Edu Squeeze release manual.

Who are you, and how do you spend your days?

I'm a 44-year-old linguistics graduate living in Edinburgh who has occasionally been employed as a sysadmin.

How did you get in contact with the Skolelinux/Debian Edu project?

I'm neither a developer nor a Skolelinux/Debian Edu user! The only reason my name's in the credits for the documentation is that I hang around on debian-l10n-english waiting for people to mention things they'd like a native English speaker to proofread... So I did a sweep through the wiki for typos and Norglish and inconsistent spellings of "localisation".

What do you see as the advantages of Skolelinux/Debian Edu?

What do you see as the disadvantages of Skolelinux/Debian Edu?

These questions are too hard for me - I don't use it! In fact I had hardly any contact with I.T. until long after I'd got out of the education system.

I can tell you the advantages of Debian for me though: it soaks up as much of my free time as I want and no more, and lets me do everything I want a computer for without ever forcing me to spend money on the latest hardware.

Which free software do you use daily?

I've been using Debian since Rex; popularity-contest says the software that I use most is xinit, xterm, and xulrunner (in other words, I use a distinctly retro sort of desktop).

Which strategy do you believe is the right one to use to get schools to use free software?

Well, I don't know. I suppose I'd be inclined to try reasoning with the people who make the decisions, but obviously if that worked you would hardly need a strategy.

6th April 2012

Recently I have spent time with Skolelinux Drift AS on speeding up a Debian Edu / Skolelinux Lenny installation using LTSP diskless workstations, and in the process I discovered something very surprising. The reason the KDE menu was slow to respond when used for the first time was mostly the way KDE finds application icons. I discovered that showing the Multimedia menu would cause more than 20 000 IP packets to be passed between the LTSP client and the NFS server. Most of these were NFS LOOKUP calls, resulting in a NFS3ERR_NOENT response. Because the ping times between the client and the server were in the range 2-20 ms, the menus were very slow. Looking at the strace of kicker in Lenny (or plasma-desktop in Squeeze - same problem there), I see that the source of these NFS calls are access(2) system calls for non-existing files. KDE can do hundreds of access(2) calls to find one icon file. In my example, just finding the mplayer icon required around 230 access(2) calls.

The KDE code seems to search for icons using a list of icon directories, and the list of possible directories is large. In (almost) each directory, it looks for files ending in .png, .svgz, .svg and .xpm. The result is a very slow KDE menu when /usr/ is NFS mounted. Showing a single sub menu may result in thousands of NFS requests. I am not the first one to discover this. I found a KDE bug report from 2009 about this problem, and it is still unsolved.

My solution to speed up the KDE menu was to create a package kde-icon-cache that upon installation will look at all .desktop files used to generate the KDE menu, find their icons, search the icon paths for the file that KDE will end up finding at run time, and copy the icon file to /var/lib/kde-icon-cache/. Finally, I add symlinks to these icon files in one of the first directories where KDE will look for them. This cuts down the number of file accesses required to find one icon from several hundred to less than 5, and makes the KDE menu almost instantaneous. I'm not quite sure where to make the package publicly available, so for now it is only available on request.
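
The package is only available on request, but the idea can be sketched in a few lines of Python. This is an illustration of the approach rather than the actual implementation: the icon search path below is heavily abbreviated, and the directory names other than /var/lib/kde-icon-cache/ are assumptions on my part.

#!/usr/bin/python3
# Sketch of the kde-icon-cache approach: for each .desktop file, find
# the icon KDE would locate at run time, copy it to the cache and
# symlink it into a directory searched early in the icon search order.
# The directory lists are abbreviated illustrations, not the real ones.
import glob
import os
import shutil

CACHE = '/var/lib/kde-icon-cache'
SEARCHPATH = [  # the real list KDE searches is much longer
    '/usr/share/icons/hicolor/32x32/apps',
    '/usr/share/pixmaps',
]
EARLYDIR = '/usr/share/icons/hicolor/32x32/apps'  # assumed early in the search order

def desktop_icon(path):
    # Return the value of the Icon= key in a .desktop file, if any.
    with open(path, errors='replace') as f:
        for line in f:
            if line.startswith('Icon='):
                return line.split('=', 1)[1].strip()
    return None

def find_icon(name):
    # Mimic the run time lookup: try each directory and extension in turn.
    for d in SEARCHPATH:
        for ext in ('.png', '.svgz', '.svg', '.xpm'):
            candidate = os.path.join(d, name + ext)
            if os.path.exists(candidate):
                return candidate
    return None

os.makedirs(CACHE, exist_ok=True)
for desktop in glob.glob('/usr/share/applications/*.desktop'):
    name = desktop_icon(desktop)
    if not name or os.path.isabs(name):
        continue  # no icon, or already referenced by full path
    found = find_icon(name)
    if not found or os.path.dirname(found) == EARLYDIR:
        continue  # nothing to cache, or already found early
    cached = os.path.join(CACHE, os.path.basename(found))
    shutil.copy(found, cached)
    link = os.path.join(EARLYDIR, os.path.basename(found))
    if not os.path.exists(link):
        os.symlink(cached, link)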

The bug report mentions that this does not only affect the KDE menu and icon handling, but also the login process. I am not quite sure how to speed up that part without replacing NFS with, for example, NBD, and that is not really an option at the moment.

If you got feedback on this issue, please let us know on debian-edu (at) lists.debian.org.

Update 2015-08-04: The source of the scripts and associated Debian package is available from the Debian Edu github repository.

5th April 2012

About two weeks ago, I was interviewed via email about Debian Edu and Skolelinux by Bruce Byfield for Linux Weekly News. The result was made public for non-subscribers today. I am pleased to see he liked our Linux solution for schools. Check out his article Debian Edu/Skolelinux: A distribution for education if you want to learn more.

1st April 2012

Germany is a core area for the Debian Edu and Skolelinux user community, and this time I managed to get hold of Wolfgang Schweer, a valuable contributor to the project from Germany.

Who are you, and how do you spend your days?

I studied Mathematics at the 'Ruhr-Universität' in Bochum, Germany. Since 1981 I have been working as a teacher at the school "Westfalen-Kolleg Dortmund", a second chance school. Here, young adults are given the opportunity to get further education in order to take the school examination 'Abitur', which will allow them to study at a university. This second chance is of value for those who want a better job perspective or who failed to get a higher school examination as teens.

Besides teaching, I was involved in developing online courses for a blended learning project called 'abitur-online.nrw' and in some other information technology related projects. For about ten years I have been teacher and coordinator for the 'abitur-online' project at my school. Being now in my early sixties, I have decided to leave school at the end of April this year.

How did you get in contact with the Skolelinux/Debian Edu project?

The first information about Skolelinux must have come to my attention years ago, somehow related to LTSP (the Linux Terminal Server Project). At school, we had set up a network at the beginning of 1997 using Suse Linux on the desktop, replacing a Novell network. Since 2002, we used old machines from the city council of Dortmund as thin clients (LTSP, later Ubuntu/Lessdisks), because new hardware was out of reach. At home I have been using Debian for years, and - subscribed to the Debian newsletter - heard from time to time about Skolelinux. About two years ago I proposed to replace the (somehow undocumented and only known to me) system at school with a well known Debian based system: Skolelinux.

Students and teachers appreciated the new system because of the better look and feel and the enhanced access to local media on thin clients. The possibility to alter and/or reset passwords using a GUI was welcomed, too. Being able to do administrative tasks using a GUI and to easily set up workstations using PXE was of very high value for the admin teachers.

What do you see as the advantages of Skolelinux/Debian Edu?

It's open source, easy to set up, stable and flexible due to its Debian base. It integrates LTSP out of the box. And it is documented! So it was a perfect choice.

Being open source, there are no license problems and so it's possible to point teachers and students to programs like OpenOffice.org, ViewYourMind (mind mapping) and The Gimp. It's of high value to be able to adapt parts of the system to special needs of a school and to choose where to get support for this.

What do you see as the disadvantages of Skolelinux/Debian Edu?

Nothing yet.

Which free software do you use daily?

At home (Debian Sid with Gnome Desktop): Iceweasel, LibreOffice, Mutt, Gedit, Document Viewer, Midnight Commander, flpsed (PDF Annotator). At school (Skolelinux Lenny): Iceweasel, Gedit, LibreOffice.

Which strategy do you believe is the right one to use to get schools to use free software?

Some time ago I thought it was enough to tell people about it. But that does not seem to work very well. Now I concentrate on those who are more interested, and hope to gain multipliers that way.

25th March 2012

The same Debian Edu developer who did the last screen cast I published, Wolfgang Schweer, has created a new screen cast showing how to set up KMail in Debian Edu Squeeze to authenticate using Kerberos, allowing users to check their local email account without providing any password. The video is embedded here in quarter size, and is also available from Vimeo and for download as an Ogg Theora file. Check it out below.

Download video as Ogg.

19th March 2012

Debian Edu / Skolelinux users are spread all across the globe. The second interview after the Squeeze release was published is with John Ingleby, a teacher and long time Linux user in the United Kingdom.

Who are you, and how do you spend your days?

I teach ICT part time at the Rudolf Steiner School in Kings Langley, near London, UK. Previously I worked as a technical author/trainer while my children attended the school, and I also contributed to the Schoolforge UK community with the aim of encouraging UK schools to adopt free/open source software. Five or six years ago we had about 50 schools interested in some way, but we weren't able to convert many of them into sustainable installations.

How did you get in contact with the Skolelinux/Debian Edu project?

Skolelinux had two representatives at an early Edubuntu meeting in London which I attended. However at that time our school network had just been installed using CentOS, LTSP 4 and GNOME. When LTSP 5 came along we switched to Edubuntu thin client servers so now we have a mixed environment which includes Windows PCs and student laptops, as well as their MacBooks and iPads. However, the proprietary systems have always been rather problematic, and we never built a GUI for the LDAP server, so when I discovered Skolelinux is configured for all these things we decided to try it.

What do you see as the advantages of Skolelinux/Debian Edu?

By far the biggest advantage is the Debian Edu community. Apart from that I have always believed in the same "sustainable computing" goals that Skolelinux is built on: installing Linux on computers which would otherwise be thrown away, to provide a reliable, secure and low-cost IT environment for schools. From my own experience I know that a part-time person can teach and manage a network of about 25 Linux computers, but it would take much more of my time if we had proprietary software everywhere.

What do you see as the disadvantages of Skolelinux/Debian Edu?

As a newcomer I'm just finding out who's who in the community, how you're organised, and what your procedures are for dealing with various things such as editing manual pages and so on. The only English language mailing list seems to be for developers as well as users, so my inbox needs heavy pruning each day!

Which free software do you use daily?

Besides the software already mentioned at school we use Samba, OpenLDAP, CUPS, Nagios and Dansguardian for the network, and on the desktops we have LibreOffice, Firefox, GIMP and Inkscape. At home I use Ubuntu and an Android 4 eePad Transformer (but I'm not sure if that counts...)

Which strategy do you believe is the right one to use to get schools to use free software?

That's a tough question! For very many years UK schools installed and taught only proprietary software, so that at the highest levels the notion of "computer" means simply "proprietary office applications". However, schools today are experiencing budget constraints, and many are having to think hard about upgrading Windows XP. At the same time, we have students showing teachers how to use iPads, MacBooks and Android, so the choice of operating system is no longer quite so automatic. What is more, our government at last realised that we need people with programming skills, so they're putting coding back in the curriculum! And it's encouraging that the first 10,000 Raspberry Pi units sold out in 2 hours.

I don't really know what strategy is going to get UK schools to use free software, but building an active community of Skolelinux/Debian Edu users in this country has to be part of it.

16th March 2012

Documentation in Debian Edu is provided in several languages, and it is important to make it both easy to contribute to and easy to keep the translated versions in sync. To do this we have come up with what we believe is a very efficient work flow.

  1. The documentation is written in a moinmoin wiki (see for example the Squeeze release manual) with support for exporting the content as docbook XML.
  2. This docbook document is given to po4a to extract a gettext style .pot file with the content, which in turn is used to create .po files with the translated text.
  3. The .po files are given to translators, and they can always tell which part of the original wiki document is new or changed. They can use their normal translation tools like lokalize or poedit to write the translation. There is even a system in place to handle translated images.
  4. The translated .po files are combined with the original docbook XML document using po4a to create a translated docbook document.
  5. The final step is to use all the generated docbook files to create PDF and HTML versions of the original and translated documents. Steps 2 to 4 are sketched in the example below.
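
As a rough illustration of steps 2 to 4, here is how the standard po4a tools could be driven from a small Python script. The file names are invented for the example; the real build system in the debian-edu-doc package is more elaborate.

#!/usr/bin/python3
# Sketch of the po4a round trip: docbook -> .pot -> .po -> docbook.
# File names are illustrative only.
import subprocess

master = 'debian-edu-manual.xml'  # docbook exported from the wiki

# Step 2: extract translatable strings into a gettext style .pot file.
subprocess.check_call(['po4a-gettextize', '-f', 'docbook',
                       '-m', master, '-p', 'manual.pot'])

# Step 3: refresh an existing translation so translators can see what
# is new or changed since the last round.
subprocess.check_call(['po4a-updatepo', '-f', 'docbook',
                       '-m', master, '-p', 'manual.de.po'])

# Step 4: merge the translated .po back into a translated docbook
# document, keeping it only if at least 80% of the strings are done.
subprocess.check_call(['po4a-translate', '-f', 'docbook',
                       '-m', master, '-p', 'manual.de.po',
                       '-l', 'manual.de.xml', '-k', '80'])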

This setup works very well, but has a few issues. The biggest issue is that the docbook support we use in moinmoin is not actively maintained. The docbook support is also buggy, and our build system contains workarounds to make sure the generated docbook is usable despite these bugs.

If you want to have a look at our setup, it is all there in the debian-edu-doc package.

11th March 2012

This weekend we finally published the first stable release of Skolelinux / Debian Edu based on Debian/Squeeze. The full announcement is available from the project announcement list. Now is a good time to test it if you have not done so already.

I plan to present the new version at a NUUG meeting on Tuesday. I look forward to seeing you there if you are in Oslo, Norway.

9th March 2012

Inspired by the interview series conducted by Raphael, I started a Norwegian interview series with people involved in the Debian Edu / Skolelinux community. This was so popular that I believe it is time to move to a more international audience.

While Debian Edu and Skolelinux originated in France and Norway, and have most users in Europe, there are users all around the globe. One of those far away from me is Nigel Barker, a long time Debian Edu system administrator and contributor. It is thanks to him that Debian Edu is adjusted to work out of the box in Japan. I got him to answer a few questions, and am happy to share the response with you. :)

Who are you, and how do you spend your days?

My name is Nigel Barker, and I am British. I am married to Yumiko, and we have three lovely children, aged 15, 14 and 4(!). I am the IT Coordinator at Hiroshima International School, Japan. I am also a teacher, and in fact I spend most of my day teaching Mathematics, Science, IT, and Chemistry. I was originally a Chemistry teacher, but I have always had an interest in computers. Another teacher teaches primary school IT, but apart from that I am the only computer person, so that means I am also the network manager, technician and webmaster, and I help people with their computer problems. I teach Python to beginners in an after-school club. I am way too busy, so I really appreciate the simplicity of Skolelinux.

How did you get in contact with the Skolelinux/Debian Edu project?

In around 2004 or 2005 I discovered the LTSP project, and set up a server in the IT lab. I wanted some way to connect it to our central Samba server, which I was also quite poor at configuring. I discovered Edubuntu when it came out, but it didn't really improve my setup. I did various desperate searches for things like "school Linux server" and ended up in a document called "Drift" something or other. Reading there, it became clear that Skolelinux was going to solve all my problems in one go. I was very excited, but apprehensive, because my previous attempts to install Debian had ended in failure (I used Mandrake for everything - LTSP, Samba, Apache, mail, NS...). I downloaded a beta version, had some problems, and subscribed to the Debian Edu list for help. I have remained subscribed ever since, and my school has run a Skolelinux network since Sarge.

What do you see as the advantages of Skolelinux/Debian Edu?

For me, the integrated setup. This is not just the server, or the workstation, or the LTSP. It's all of them, and all configured ready to go. I read somewhere in the early documentation that it is designed to be set up and managed by the Maths or Science teacher, who doesn't necessarily know much about computers, in a small Norwegian school. That describes me perfectly if you replace Norway with Japan.

What do you see as the disadvantages of Skolelinux/Debian Edu?

The desktop is fairly plain. Compare it with Edubuntu, who have fun themes for children, or with distributions such as Mint, who make the desktop beautiful. They create a good impression on people who don't need to understand how to use any of it, but who might be important to the school: school administrators or directors, for instance, or parents. Even kids. Debian itself usually has ugly default theme settings. It was my dream a few years back that some kind of integration would allow Edubuntu to do the desktop stuff and Debian Edu the servers, but now I realise how impossible that is. A second disadvantage is that if something goes wrong, or you need to customise something, then suddenly the level of expertise required multiplies. For example, backup wasn't working properly in Lenny. It took me ages to learn how to set up my own server to do rsync backups. I am afraid of anything to do with LDAP, but perhaps GOsa will help.

Which free software do you use daily?

Nowadays I only use Debian on my personal computers. I have one for studio work (I play guitar and write songs) running AV Linux (customised Debian), a netbook running Squeeze, and a bigger laptop still running Skolelinux Lenny workstation. I have a tjener in my house; it is very useful for the family photos and music. At school the students only use Skolelinux. (Some teachers and the office still have Windows.) So that means we only use free software all day, every day. OpenOffice.org, The GIMP, Firefox/Iceweasel, VLC and Audacity are installed on every computer in school, irrespective of OS. We also have Koha on Debian for the library, and Apache, Moodle, b2evolution and Etomite on Debian for the www. The firewall is Untangle.

Which strategy do you believe is the right one to use to get schools to use free software?

Current trends are in our favour. Open source is big in industry, and ordinary people have heard of it. The spread of Android and the popularity of Apple have helped to weaken the impression that you have to have Microsoft on everything. People complain to me much less about file formats and Word than they did 5 years ago. The Edu aspect is also a selling point: this is all customised for schools. Where is the Windows-edu, or the Mac-edu? But of course the main attraction is budget. The trick is to convince people that the quality is not compromised when you stop paying and use free software instead. That is one reason why I say the desktop experience is a weakness. People are not impressed when their USB drive doesn't work, or their browser doesn't play Flash, for example.

7th March 2012

One of the Debian Edu developers, Wolfgang Schweer, just created a screen cast documenting how to create a lot of new users in LDAP on Debian Edu Squeeze. The video is embedded here in quarter size, and is also available from Vimeo and for download as an Ogg Theora file. Check it out below.

Download video as Ogg.

4th March 2012

This weekend we wrapped up and published the third release candidate for Debian Edu / Skolelinux based on Squeeze. The full announcement is available from the project announcement list. Check it out if you need a software solution for your school.

3rd March 2012

Many years ago, the Skolelinux / Debian Edu project initiated a student project to create a tool for making stop motion movies. The proposal came from a teacher needing such a tool on Skolelinux. The project, called "stopmotion", was manned by two extraordinary students and won both a school award and a national award. The project was initiated and mentored by Herman Robak, and manned by the students Bjørn Erik Nilsen and Fredrik Berg Kjølstad. They got in touch with people at the Aardman Animations studio and received feedback on how professionals would like such a stop motion tool to work, and the end result was and is used by animators around the globe. But as is usual after studying, both got jobs and moved elsewhere, and did not have time to properly tend to the project, so it has been lingering for a few years now. Until last year...

Last year some of the users got together with Herman and moved the project to Sourceforge, in effect restarting the project under a new name, linuxstopmotion. The name change was done to make it possible to find the project using Internet search engines (try searching for 'stopmotion' to see what I mean). I've been following the mailing list, and the improvements already in place and planned for the future are encouraging. If you want to make stop motion movies, check it out. :)

27th February 2012

This weekend we wrapped up and published the second release candidate for Debian Edu / Skolelinux based on Squeeze. The full announcement for some reason did not make it to the project announcement list, but it is available from the Debian development announcement list. Check it out if you need a software solution for your school.

19th February 2012

One week delayed due to DVD build problems, we managed today to wrap up and publish the first release candidate for Debian Edu / Skolelinux based on Squeeze. The full announcement is available on the project announcement list. Check it out if you need a software solution for your school.

14th February 2012

Once in a while my home server has disk problems. Thanks to Linux Software RAID, I have not lost data yet (but I was close this summer :). But once a disk starts to behave funny, a practical problem presents itself: how to get from the Linux device name (like /dev/sdd) to something that can be used to identify the disk when the computer is turned off? In my case I have SATA disks with a unique ID printed on the label. All I need is a way to query the disk to get the ID out.

After fumbling a bit, I found that hdparm -I will report the disk serial number, which is printed on the disk label. The following (almost) one-liner can be used to look up the ID of all the failed disks:

for d in $(cat /proc/mdstat |grep '(F)'|tr ' ' "\n"|grep '(F)'|cut -d\[ -f1|sort -u);
do
    printf "Failed disk $d: "
    hdparm -I /dev/$d |grep 'Serial Num'
done
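
A related trick that does not need hdparm: on systems with udev, the /dev/disk/by-id/ symlinks encode vendor, model and serial number in their names, so the same information can also be read directly from the file system. A small sketch:

# The ata-* symlink names include the serial number printed on the label.
ls -l /dev/disk/by-id/ | grep -w sdd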

Putting it here to make sure I do not have to search for it the next time, and in case others find it useful.

At the moment I have two failing disks. :(

Failed disk sdd1:       Serial Number:      WD-WCASJ1860823
Failed disk sdd2:       Serial Number:      WD-WCASJ1860823
Failed disk sde2:       Serial Number:      WD-WCASJ1840589

The last time I had failing disks, I printed the serial numbers on labels and stuck them on the short sides of each disk, to be able to figure out which disk to take out of the box without having to remove each disk to look at the physical vendor label. The vendor label is at the top of the disk, which is hidden when the disks are mounted inside my box.

I really wish the check_linux_raid Nagios plugin for checking Linux Software RAID in the nagios-plugins-standard debian package would look up this value automatically, as it would make the plugin a lot more useful when my disks fail. At the moment it only reports a failure when there are no more spares left (it really should warn as soon as a disk is failing), and it does not tell me which disk(s) are failing when the RAID is running short on disks.

Tags: english, raid.
13th February 2012

New in the Squeeze version of Debian Edu / Skolelinux is the ability for clients to automatically configure their proxy settings based on their environment. We want all software on the clients to use the WPAD-based proxy definition fetched from http://wpad/wpad.dat, allowing sites to control the proxy setting from a central place and making sure clients do not have hard-coded proxy settings. The schools can change the global proxy setting by editing tjener:/etc/debian-edu/www/wpad.dat, and the change propagates to all Debian Edu clients in the network.

The problem is that some systems do not understand the WPAD system. In other words, how does one get from a WPAD file like this (this is a simple one; they can run arbitrary code):

function FindProxyForURL(url, host)
{
   if (!isResolvable(host) ||
       isPlainHostName(host) ||
       dnsDomainIs(host, ".intern"))
      return "DIRECT";
   else
      return "PROXY webcache:3128; DIRECT";
}

to a proxy setting in the process environment looking like this:

http_proxy=http://webcache:3128/
ftp_proxy=http://webcache:3128/

To do this conversion I developed a perl script that executes the javascript fragment in the WPAD file and returns the proxy that would be used for http://www.debian.org/, and inserts this extracted proxy URL into /etc/environment and /etc/apt/apt.conf. The perl script wpad-extract works just fine in Squeeze, but in Wheezy the library it needs to run the javascript code no longer builds, because the C library it depended on is now a C++ library. I hope someone finds a solution to that problem before Wheezy is frozen. An alternative would be for us to rewrite wpad-extract to use some other javascript library, but no working alternative is known at the moment.
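
For trivial WPAD files like the example above, the proxy can even be pulled out with a regular expression. This shell sketch is not the real wpad-extract (which actually executes the javascript), and only works for simple PAC files:

#!/bin/sh
# Naive sketch: extract the first PROXY entry from a simple wpad.dat.
# Real PAC files can run arbitrary code, so this is no general solution.
proxy=$(wget -q -O - http://wpad/wpad.dat | \
    sed -n 's/.*PROXY \([^;" ]*\).*/\1/p' | head -n 1)
if [ "$proxy" ] ; then
    echo "http_proxy=http://$proxy/"
    echo "ftp_proxy=http://$proxy/"
fi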

This automatic proxy system allows the roaming workstation (aka laptop) setup in Debian Edu/Squeeze to use the proxy when the laptop is connected to the backbone network in a Debian Edu setup, and to automatically use any proxy announced using the WPAD feature when it is connected to other networks. And if no proxy is announced, direct connections are used instead.

Silently using a proxy announced on the network might be a privacy or security problem. But those controlling DHCP and DNS on a network could just as easily set up a transparent proxy, and force all HTTP and FTP connections to use a proxy anyway, so I consider that distinction to be academic. If you are afraid of using the wrong proxy, you should avoid connecting to the network in question in the first place. In Debian Edu, the proxy setup is updated using dhcp and ifupdown hooks, to make sure the configuration is updated every time the network setup changes.

The WPAD system is documented in an IETF draft and a Wikipedia page for those that want to learn more.

5th February 2012

Since the Lenny version of Debian Edu / Skolelinux, a feature to save power has been included. It is as simple as it is practical: shut down unused clients at night, and turn them on again in the morning. This is done using the shutdown-at-night Debian package.

To enable this feature on a client, the machine needs to be added to the netgroup shutdown-at-night-hosts. For Debian Edu, this is done in LDAP, and once this is in place, the machine in question will check every hour from 16:00 until 06:00 to see if it is unused, and shut it down if it is. If the hardware in question is supported by the nvram-wakeup package, the BIOS is told to turn the machine back on around 07:00 +- 10 minutes. If this isn't working, one can configure wake-on-lan to try to turn on the client. The wake-on-lan option is only documented, not enabled by default, in Debian Edu.
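
To check on a given client whether the feature is enabled, one can query the netgroup. A small sketch, assuming the short host name is what is listed in the netgroup:

#!/bin/sh
# Check if this host is a member of the shutdown-at-night-hosts netgroup.
if getent netgroup shutdown-at-night-hosts | grep -q "($(hostname)," ; then
    echo "shutdown-at-night is enabled for $(hostname)"
fi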

It is important not to turn all machines on at once, as this can blow a fuse if several computers are connected to the same fuse, as is common in a classroom. The nvram-wakeup method only works for machines with a functioning hardware/BIOS clock. I've seen old machines where the BIOS battery was dead and the hardware clock started from 0 (or was it 1990?) on every boot. If you have one of those, you have to turn on the computer manually.

The shutdown-at-night package is completely self-contained, and can also be used outside the Debian Edu environment. For those without a central LDAP server with netgroups, one can instead touch the file /etc/shutdown-at-night/shutdown-at-night to enable it. Perhaps you too can use it to save some power?

4th February 2012

I am happy to announce that we finally managed today to wrap up and publish the third beta version of Debian Edu / Skolelinux based on Squeeze. If you want to test an LDAP-backed Kerberos server with out-of-the-box PXE configuration for running diskless machines and installing new machines, check it out. If you need a software solution for your school, check it out too. The full announcement is available on the project announcement list.

I am very happy to report these changes and improvements since beta2 (there are more, see announcement for full list):

  • It is now possible to change the pre-configured IP subnet from 10.0.0.0/8 to something else by using the subnet-change tool after the installation.
  • Partitions that are too full are now automatically extended on the Main Server, based on the rules specified in /etc/fsautoresizetab.
  • The CUPS queues are now automatically flushed every night, and all disabled queues are restarted every hour. This should cut down on the amount of manual administration needed for printers.
  • The set of initial users has been changed. Now a personal user for the local system administrator is created during installation instead of the previously created localadmin and super-admin users, and this user is granted administrative privileges using group membership. This reduces the number of passwords one needs to keep up to date on the system.

The new main server seems to work so well that I am testing it as my private DNS/LDAP/Kerberos/PXE/LTSP server at home. I will use it to look for issues we could fix to polish Debian Edu even further before the final Squeeze release is published.

Next weekend the project organises a developer gathering in Oslo. We will continue the work on the Squeeze version, and start initial planning for the Wheezy version. Perhaps I will see you there?

27th January 2012

With some computer hardware, one needs non-free firmware blobs. This is the sad fact of today's computers. In the next version of Debian Edu / Skolelinux based on Squeeze, we provide several scripts and modifications to make firmware blobs easier to handle. The common use case I run into is a laptop with a wireless network card requiring non-free firmware to work, but there are other use cases as well.

First and foremost, Debian Edu provides ISO images for DVD and CD with all firmware packages in the Debian sections main and non-free included, to ensure debian-installer finds and can install all of them during installation. This takes care of firmware for network devices used by the installer when installing from local media. But for example multimedia devices are not activated in the installer and are not taken care of by this.

For non-network devices, we provide the script /usr/share/debian-edu-config/tools/auto-addfirmware which searches through the dmesg output for drivers requesting extra firmware. The firmware file name is looked up in the Contents-ARCH.gz file available in the package repository, and the packages providing the requested firmware file(s) are installed. I have proposed to do something similar in debian-installer (BTS report #655507), to allow PXE installs of Debian to handle firmware installation better. Run the script as root from the command line to fetch and install the needed firmware packages.

Debian Edu provides PXE installation of Debian out of the box, and because some machines need firmware to get their network cards working, the installation initrd sometimes needs extra firmware included to be able to install at all. To fill the PXE installation initrd with extra firmware, the /usr/share/debian-edu-config/tools/pxe-addfirmware script is provided. Again, just run it as root on the command line to fill the PXE initrd with firmware packages.

Last, some LTSP clients might also need firmware to get their network cards working. For this, /usr/share/debian-edu-config/tools/ltsp-addfirmware is provided to update the LTSP initrd with firmware blobs. It is used the same way as the other firmware related tools, as summarised below.
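
To summarise, all three tools are run as root from the command line, one for each part of the system that might need firmware:

# Install firmware packages for devices seen in dmesg on the installed system
/usr/share/debian-edu-config/tools/auto-addfirmware
# Fill the PXE installation initrd with firmware packages
/usr/share/debian-edu-config/tools/pxe-addfirmware
# Update the LTSP client initrd with firmware blobs
/usr/share/debian-edu-config/tools/ltsp-addfirmware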

At the moment, we do not run any of these during installation. We do not know if the local administrator finds it acceptable to use non-free software, so this is left as their choice.

We plan to release beta3 this weekend. You might want to give it a try.

25th January 2012

The next version of Debian Edu / Skolelinux will include a new tool sitesummary2ldapdhcp, which can be used to quickly set up all the computers in a school without much manual labour. Here is a short summary on how to use it to set up a new school.

First, install a combined Main Server and Thin Client Server as the central server in the network. Next, PXE boot all the client machines as thin clients and wait 5 minutes after the last client booted to allow the clients to report their existence to the central server. When this is done, log on to the central server and run sitesummary2ldapdhcp -a in the konsole to use the collected information to generate system objects in LDAP. The output will look similar to this:

% sitesummary2ldapdhcp -a
info: Updating machine tjener.intern [10.0.2.2] id ether-00:01:02:03:04:05.
info: Create GOsa machine for auto-mac-00-01-02-03-04-06 [10.0.16.20] id ether-00:01:02:03:04:06.

Enter password if you want to activate these changes, and ^c to abort.

Connecting to LDAP as cn=admin,ou=ldap-access,dc=skole,dc=skolelinux,dc=no
enter password: *******
% 

After providing the LDAP administrative password (the same as the root password set during installation), the LDAP database will be populated with system objects for each PXE booted machine, with automatically generated names. The final step in setting up the school is then to log into GOsa, the web-based user, group and system administration system, to change system names, add systems to the correct host groups and finally enable DHCP and DNS for the systems. All clients that should be used as diskless workstations should be added to the workstation-hosts group. After this is done, all computers can be booted again via PXE and get their assigned names and group based configuration automatically.

We plan to release beta3 with the updated version of this feature enabled this weekend. You might want to give it a try.

Update 2012-01-28: When calling sitesummary2ldapdhcp to add new hosts, one needs to add the option -a. I forgot to mention this in my original text, and have added it to the text now.

10th January 2012

In the Squeeze version of Debian Edu / Skolelinux soon to be released, users of the system will get their default browser start page set from LDAP, allowing the system administrator to point all users to the school web page by updating one setting in LDAP. In addition to setting the default start page when a machine boots, users are shown the same page as a welcome page when they log in for the first time.

The LDAP object dc=skole,dc=skolelinux,dc=no has an attribute labeledURI with "http://www/ LDAP for Debian Edu/Skolelinux" as the default content. By changing this value to another URL, all users get to see the page behind this new URL.

An easy way to update it is by using the ldapvi tool. It can be called as "ldapvi -ZD '(cn=admin)'" to update LDAP with the new setting.
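
To inspect the current value before changing it, something like this ldapsearch call should work, assuming the Debian Edu LDAP defaults:

# Show the labeledURI attribute of the base object
ldapsearch -x -Z -LLL -b dc=skole,dc=skolelinux,dc=no -s base labeledURI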

We have written the code to adjust the default start page and show the welcome page, and I wonder if there is an easier way to do this from within Iceweasel instead.

7th January 2012

I am happy to announce that today we managed to wrap up and publish the second beta version of Debian Edu / Skolelinux. If you want to test an LDAP-backed Kerberos server with out-of-the-box PXE configuration for running diskless machines and installing new machines, check it out. If you need a software solution for your school, check it out too. The full announcement is available on the project announcement list.

3rd January 2012

During Christmas, I have been working on getting the next version of Debian Edu / Skolelinux ready for release. The initial problem I looked at was particularly interesting.

The installer would hang at the end when it was doing its post-installation configuration, and whatever I did to try to find the cause and fix it always worked while I tested it, but never when I integrated it into the installer and ran the installation from scratch. I would try to restart processes, close file descriptors, remove or create files, and the installer would always unblock and wrap up its tasks.

Eventually the cause was found. The kernel was simply running out of entropy, causing the Kerberos setup to hang waiting for more. Pressing keys was adding entropy to the kernel, and thus all my attempts to fix the problem worked, not because of what I was typing, but because I was typing.

The fix I implemented was to add a background process watching the level of entropy in the kernel (by checking /proc/sys/kernel/random/entropy_avail), and if it is too small, flushing the kernel file buffers and running 'find /' to generate some disk IO. Disk IO generates entropy in the kernel, and is one of the few things that can be initiated from within the system to generate entropy.
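
The idea can be illustrated with a small shell sketch. This is not the exact code in the installer, and the threshold value is a guess:

#!/bin/sh
# Generate disk IO when the kernel entropy pool is running low.
while true ; do
    entropy=$(cat /proc/sys/kernel/random/entropy_avail)
    if [ "$entropy" -lt 200 ] ; then
        sync                          # flush kernel file buffers
        find / -xdev > /dev/null 2>&1 # disk IO feeds the entropy pool
    fi
    sleep 10
done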

The fix is in beta1 of the Debian Edu/Squeeze version, and we welcome more testers and developers. We plan to release beta2 this weekend.

21st November 2011

At work we have heaps of servers. I believe the total count is around 1000 at the moment. To be able to get help from the vendors when something goes wrong, we want to keep the firmware on the servers up to date. If the firmware isn't the latest and greatest, the vendors typically refuse to start debugging any problems until the firmware is upgraded. So before every reboot, we want to upgrade the firmware, and we would really like everyone handling servers at the university to do this themselves when they plan to reboot a machine. For that to happen, we in the unix server admin group need to provide the tools to do so.

To make firmware upgrading easier, I am working on a script to fetch and install the latest firmware for the servers we have. Most of our hardware is from Dell and HP, so I have focused on these servers so far. This blog post is about the Dell part.

On the Dell FTP site I was lucky enough to find an XML file with firmware information for all 11th generation servers, listing which firmware should be used on a given model and where on the FTP site I can find it. Using a simple perl XML parser I can then download the shell scripts Dell provides to do firmware upgrades from within Linux and reboot when all the firmware is primed and ready to be activated on the first reboot.

This is the Dell related fragment of the perl code I am working on. Is anyone working on similar tools for firmware upgrading all servers at a site? Please get in touch and let's share resources.

#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);
BEGIN {
    # Install needed RHEL packages if missing
    my %rhelmodules = (
        'XML::Simple' => 'perl-XML-Simple',
        );
    for my $module (keys %rhelmodules) {
        eval "use $module;";
        if ($@) {
            my $pkg = $rhelmodules{$module};
            system("yum install -y $pkg");
            eval "use $module;";
        }
    }
}
my $errorsto = 'pere@hungry.com';

upgrade_dell();

exit 0;

sub run_firmware_script {
    my ($opts, $script) = @_;
    unless ($script) {
        print STDERR "fail: missing script name\n";
        exit 1
    }
    print STDERR "Running $script\n\n";

    if (0 == system("sh $script $opts")) { # FIXME correct exit code handling
        print STDERR "success: firmware script ran succcessfully\n";
    } else {
        print STDERR "fail: firmware script returned error\n";
    }
}

sub run_firmware_scripts {
    my ($opts, @dirs) = @_;
    # Run firmware packages
    for my $dir (@dirs) {
        print STDERR "info: Running scripts in $dir\n";
        opendir(my $dh, $dir) or die "Unable to open directory $dir: $!";
        while (my $s = readdir $dh) {
            next if $s =~ m/^\.\.?/;
            run_firmware_script($opts, "$dir/$s");
        }
        closedir $dh;
    }
}

sub download {
    my $url = shift;
    print STDERR "info: Downloading $url\n";
    system("wget --quiet \"$url\"");
}

sub upgrade_dell {
    my @dirs;
    my $product = `dmidecode -s system-product-name`;
    chomp $product;

    if ($product =~ m/PowerEdge/) {

        # on RHEL, these packages are needed by the firmware upgrade scripts
        system('yum install -y compat-libstdc++-33.i686 libstdc++.i686 libxml2.i686 procmail');

        my $tmpdir = tempdir(
            CLEANUP => 1
            );
        chdir($tmpdir);
        fetch_dell_fw('catalog/Catalog.xml.gz');
        system('gunzip Catalog.xml.gz');
        my @paths = fetch_dell_fw_list('Catalog.xml');
        # -q is quiet, disabling interactivity and reducing console output
        my $fwopts = "-q";
        if (@paths) {
            for my $url (@paths) {
                fetch_dell_fw($url);
            }
            run_firmware_scripts($fwopts, $tmpdir);
        } else {
            print STDERR "error: Unsupported Dell model '$product'.\n";
            print STDERR "error: Please report to $errorsto.\n";
        }
        chdir('/');
    } else {
        print STDERR "error: Unsupported Dell model '$product'.\n";
        print STDERR "error: Please report to $errorsto.\n";
    }
}

sub fetch_dell_fw {
    my $path = shift;
    my $url = "ftp://ftp.us.dell.com/$path";
    download($url);
}

# Using ftp://ftp.us.dell.com/catalog/Catalog.xml.gz, figure out which
# firmware packages to download from Dell.  Only works for Linux
# machines and 11th generation Dell servers.
sub fetch_dell_fw_list {
    my $filename = shift;

    my $product = `dmidecode -s system-product-name`;
    chomp $product;
    my ($mybrand, $mymodel) = split(/\s+/, $product);

    print STDERR "Finding firmware bundles for $mybrand $mymodel\n";

    my $xml = XMLin($filename);
    my @paths;
    for my $bundle (@{$xml->{SoftwareBundle}}) {
        my $brand = $bundle->{TargetSystems}->{Brand}->{Display}->{content};
        my $model = $bundle->{TargetSystems}->{Brand}->{Model}->{Display}->{content};
        my $oscode;
        if ("ARRAY" eq ref $bundle->{TargetOSes}->{OperatingSystem}) {
            $oscode = $bundle->{TargetOSes}->{OperatingSystem}[0]->{osCode};
        } else {
            $oscode = $bundle->{TargetOSes}->{OperatingSystem}->{osCode};
        }
        if ($mybrand eq $brand && $mymodel eq $model && "LIN" eq $oscode)
        {
            @paths = map { $_->{path} } @{$bundle->{Contents}->{Package}};
        }
    }
    for my $component (@{$xml->{SoftwareComponent}}) {
        my $componenttype = $component->{ComponentType}->{value};

        # Drop application packages, only firmware and BIOS
        next if 'APAC' eq $componenttype;

        my $cpath = $component->{path};
        for my $path (@paths) {
            if ($cpath =~ m%/$path$%) {
                push(@paths, $cpath);
            }
        }
    }
    return @paths;
}

The code is only tested on RedHat Enterprise Linux, but I suspect it could work on other platforms with some tweaking. Does anyone know if an index like Catalog.xml is available from HP for HP servers? At the moment I maintain a similar list manually, and it is quickly getting outdated.

Tags: debian, english.
7th October 2011

Here in Norway the public libraries are debating with the publishing houses how to handle electronic books. Surprisingly, the libraries seem to be willing to accept digital restriction mechanisms (DRM) on books and renting e-books with artificial scarcity from the publishing houses. Time limited renting (2-3 years) is one proposed model, and only allowing X borrowers for each book is another. Personally I find it amazing that libraries are even considering such models.

Anyway, while reading part of this debate, it occurred to me that someone should present a more sensible approach to the libraries, to allow their borrowers to get used to a better model. The idea is simple:

Create a computer system for the libraries, either in the form of a Live DVD or an installable distribution, that provides a simple kiosk solution to hand out free e-books. As a start, the books distributed by Project Gutenberg (about 36,000 books), Project Runeberg (1149 books) and The Internet Archive (3,033,748 books) could be included, but any book where the copyright has expired or with a free licence could be distributed.

The computer system would make it easy to:

  • Copy e-books onto USB sticks, reading tablets, cell phones and other relevant equipment.
  • Show the books for reading on the screen in the library.

In addition to such a kiosk solution, there should probably be a web site as well to allow people easy access to these books without visiting the library. The site would be the distribution point for the kiosk systems, which would connect regularly to fetch any new books available.

Is anyone working on a system like this? I guess it would fit any library in the world, and not just the Norwegian public libraries. :)

17th September 2011

For convenience, I want to store copies of all my DVDs on my file server. It allows me to save shelf space while still having my movie collection easily available. It also makes it possible to let the kids see their favourite DVDs without wearing the physical copies down. I prefer to store the DVDs as ISOs to keep the DVD menu and subtitle options intact. It also ensures that the entire film is one file on the disk. As this is for personal use, the ripping is perfectly legal here in Norway.

Normally I rip the DVDs using dd like this:

#!/bin/sh
# apt-get install lsdvd
title=$(lsdvd 2>/dev/null|awk '/Disc Title: / {print $3}')
dd if=/dev/dvd of=/storage/dvds/$title.iso bs=1M

But some DVDs give an input/output error when I read them, and I have been looking for a better alternative. I have no idea why this I/O error occurs, but suspect my DVD drive, the Linux kernel driver or something fishy with the DVDs in question. Or perhaps all three.

Anyway, I believe I found a solution today using dvdbackup and genisoimage. This script gave me a working ISO for a problematic movie by first extracting the DVD file system and then re-packing it back as an ISO.

#!/bin/sh
# apt-get install lsdvd dvdbackup genisoimage
set -e
tmpdir=/storage/dvds/
title=$(lsdvd 2>/dev/null|awk '/Disc Title: / {print $3}')
dvdbackup -i /dev/dvd -M -o $tmpdir -n$title
genisoimage -dvd-video -o $tmpdir/$title.iso $tmpdir/$title
rm -rf $tmpdir/$title

Anyone know of a better way available in Debian/Squeeze?

Update 2011-09-18: I got a tip from Konstantin Khomoutov about the readom program from the wodim package. It is specially written to read optical media, and is called like this: readom dev=/dev/dvd f=image.iso. It got 6 GB into the problematic Cars DVD before it failed, and failed right away with a Timmy Time DVD.

Next, I got a tip from Bastian Blank about his program python-dvdvideo, which seems to be just what I am looking for. Tested it with my problematic Timmy Time DVD, and it succeeded in creating an ISO image. The git source built and installed just fine in Squeeze, so I guess this will be my tool of choice in the future.

4th August 2011

Wouter Verhelst has some interesting comments and opinions on my blog post on the need to clean up /etc/rcS.d/ in Debian and my blog post about the default KDE desktop in Debian. I only have time to address one small piece of his comment now, and thought it best to address the misunderstanding he brings forward:

Currently, a system admin has four options: [...] boot to a single-user system (by adding 'single' to the kernel command line; this runs rcS and rc1 scripts)

This makes me believe Wouter thinks booting into single user mode and booting into runlevel 1 are the same. I am not surprised he believes this, because it would make sense and is a quite sensible thing to believe. But because the boot in Debian is slightly broken, runlevel 1 does not work properly and it isn't the same as single user mode. I'll try to explain what is actually happening, but it is a bit hard to explain.

Single user mode is defined like this in /etc/inittab: "~~:S:wait:/sbin/sulogin". This means the only thing that is executed in single user mode is sulogin. Single user mode is a boot state "between" the runlevels, and when booting into single user mode, only the scripts in /etc/rcS.d/ are executed before the init process enters the single user state. When switching to runlevel 1, the state does in fact not end in runlevel 1, but passes through runlevel 1 and ends up in single user mode (see /etc/rc1.d/S03single, which runs "init -t1 S" to switch to single user mode at the end of runlevel 1). It is confusing that the 'S' (single user) init mode is not the mode enabled by /etc/rcS.d/ (which is more like the initial boot mode).

This summary might make it clearer. When booting for the first time into single user mode, the following commands are executed: "/etc/init.d/rc S; /sbin/sulogin". When booting into runlevel 1, the following commands are executed: "/etc/init.d/rc S; /etc/init.d/rc 1; /sbin/sulogin". A problem shows up when trying to continue after visiting single user mode. Not all services are started again as they should be, causing the machine to end up in an unpredictable state. This is why Debian admins recommend rebooting after visiting single user mode.

A similar problem with runlevel 1 is caused by the amount of scripts executed from /etc/rcS.d/. When switching from say runlevel 2 to runlevel 1, the services started from /etc/rcS.d/ are not properly stopped when passing through the scripts in /etc/rc1.d/, and not started again when switching away from runlevel 1 to the runlevels 2-5. I believe the problem is best fixed by moving all the scripts out of /etc/rcS.d/ that are not required to get a functioning single user mode during boot.

I have spent several years investigating the Debian boot system, and discovered this problem a few years ago. I suspect it originates from when sysvinit was introduced into Debian, a long time ago.

30th July 2011

In the Debian boot system, several packages include scripts that are started from /etc/rcS.d/. In fact, there are a few more of them than makes sense, and this causes problems. What kind of problems, you might ask. There are at least two. The first is that it is not possible to recover a machine after switching to runlevel 1. One needs to actually reboot to get the machine back to the expected state. The other is that single user boot will sometimes run into problems because some of the subsystems are activated before the root login is presented, causing problems when trying to recover a machine from a problem in that subsystem. A minor additional point is that moving more scripts out of rcS.d/ and into the other rc#.d/ directories will increase the number of scripts that can run in parallel during boot, and thus decrease the boot time.

So, which scripts should start from rcS.d/? In short, only the scripts that _have_ to execute before the root login prompt is presented during a single user boot should go there. Everything else should go into the numeric runlevels. This means things like lm-sensors, fuse and x11-common should not run from rcS.d/, but from the numeric runlevels. Today in Debian, there are around 115 init.d scripts that are started from rcS.d/, and most of them should be moved out. Does your package have one of them? Please help us make single user mode and runlevel 1 better by moving it, for example as sketched below.

Scripts setting up the screen, keyboard, system partitions etc. should still be started from rcS.d/, but there is for example no need to have the network enabled before the single user login prompt is presented.
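
As a hypothetical illustration of the mechanics, this is how one could test moving a single script out of rcS.d/ on one machine. The LSB dependency headers in the init.d script must be updated to match before such a change is correct, and package maintainer scripts would need to do the equivalent on upgrades:

# Remove the existing symlinks, then re-create them in the numeric runlevels
update-rc.d -f x11-common remove
update-rc.d x11-common defaults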

As always, things are not as easy to fix as they sound. To keep Debian systems working while scripts migrate, and during upgrades, the scripts need to be moved from rcS.d/ to rc2.d/ in reverse dependency order, i.e. the scripts that nothing in rcS.d/ depends on can be moved first, and the next ones can only be moved when their dependencies have been moved. This migration must be done sequentially, while we ensure that the package system upgrades packages in the right order to keep the system state correct. This will require some coordination when it comes to network related packages, but most of the packages with scripts that should migrate do not have anything in rcS.d/ depending on them. Some packages have already been updated, like the sudo package, while others are still left to do. I wish I had time to work on this myself, but real life constraints make it unlikely that I will find time to push this forward.

29th July 2011

While at Debconf11, I have several times during discussions mentioned the issues I believe should be improved in Debian for its desktop to be useful for more people. The use case for this is my parents, who are currently running Kubuntu, which solves these issues.

I suspect these four missing features are not very hard to implement. After all, they are present in Ubuntu, so if we wanted to do this in Debian we would have a source.

  1. Simple GUI based upgrade of packages. When there are new packages available for upgrades, an icon in the KDE status bar indicates this, and clicking on it will activate the simple upgrade tool to handle it. I have no problem guiding both of my parents through the process over the phone. If a kernel reboot is required, this too is indicated by the status bar and the upgrade tool. Last time I checked, nothing with the same features was working in KDE in Debian.
  2. Simple handling of missing Firefox browser plugins. When the browser encounters a MIME type it does not currently have a handler for, it will ask the user if the system should search for a package that would add support for this MIME type, and if the user says yes, the APT sources will be searched for packages advertising the MIME type in their control file (visible in the Packages file in the APT archive). If one or more packages are found, it is a simple click of the mouse to add support for the missing MIME type. If the package requires the user to accept some non-free license, this is explained to the user. The entire process makes it clearer to the user why something does not work in the browser, and makes it more likely that the user blames the web page authors and not the browser for any missing features.
  3. Simple handling of missing multimedia codec/format handlers. When a media player encounters a format or codec it does not support, a dialog pops up asking the user if the system should search for a package that would add support for it. This happens with things like MP3, Windows Media or H.264. The selection and installation procedure is very similar to the Firefox browser plugin handling. This is as far as I know implemented using a gstreamer hook. The end result is that the user easily gets access to the codecs available from the APT archives, while learning why a given format is unsupported by Ubuntu out of the box.
  4. Better browser handling of some MIME types. When displaying a text/plain file in my Debian browser, it proposes to start emacs to show it. If I remember correctly, when doing the same in Kubuntu, it shows the file as a text file in the browser. At least I know Opera will show text files within the browser. I much prefer the latter behaviour.

There are other nice features as well, like the simplified suite upgrader, but given that I am the one mostly doing the dist-upgrade, it does not matter much.

I really hope we could get these features in place for the next Debian release. It would require the coordinated effort of several maintainers, but would make the end user experience a lot better.

26th July 2011

The Norwegian FiksGataMi site is built on Debian/Squeeze. This platform was chosen because I am most familiar with Debian (being a Debian Developer for around 10 years), and because it is the latest stable Debian release, which should get security support for a few years.

The web service is written in Perl, and depends on some perl modules that are missing in Debian at the moment. It would be great if these modules were added to the Debian archive, allowing anyone to set up their own FixMyStreet clone in their own country using only Debian packages. The list of modules missing in Debian/Squeeze isn't very long, and I hope the perl group will find time to package the 12 modules Catalyst::Plugin::SmartURI, Catalyst::Plugin::Unicode::Encoding, Catalyst::View::TT, Devel::Hide, Sort::Key, Statistics::Distributions, Template::Plugin::Comma, Template::Plugin::DateTime::Format, Term::Size::Any, Term::Size::Perl, URI::SmartURI and Web::Scraper, to make the maintenance of FixMyStreet easier in the future.

Thanks to the great tools in Debian, getting the missing modules installed on my server was a simple call to 'cpan2deb Module::Name' and 'dpkg -i' to install the resulting package. But this leaves me with the responsibility of tracking security problems, which I really do not have time for.
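
The workflow looks something like this, using one of the missing modules as an example. The resulting package file name is a guess based on the usual dh-make-perl naming convention:

# Build a Debian package from CPAN and install it
cpan2deb Sort::Key
dpkg -i libsort-key-perl_*.deb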

20th June 2011

Reading the thingiverse blog, I came across two highlights of interesting parts of the Autodesk and Microsoft Kinect End User License Agreements (EULAs), which illustrate quite well why I stay away from software with EULAs. Whenever I take the time to read their content, the terms are simply unacceptable.

30th April 2011

Today, the first draft implementation of an Open311 API for the Norwegian service FiksGataMi started to work. It is only available on the developer server for now, and I have not tested it using any existing Open311 client (I lack the platforms needed to run the clients I have found so far), but it is able to query the database and extract a list of open and closed requests within a given category and reported to a given municipality. I believe that is a good start to create a useful service for those that want to do data mining on the requests submitted so far.

Where is it? Visit http://fiksgatami-dev.nuug.no/open311.cgi/v2/ to have a look. Please send feedback to the fiksgatami (at) nuug.no mailing list.
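
A query can be done with any HTTP client. This sketch assumes the service follows the GeoReport v2 conventions for listing requests; the exact parameters accepted by the draft implementation might differ:

# Fetch open requests as XML from the development server
wget -O - 'http://fiksgatami-dev.nuug.no/open311.cgi/v2/requests.xml?status=open'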

29th April 2011

The last few days I have spent some time trying to add support for the Open311 API in the Norwegian FixMyStreet service. Earlier I believed Open311 would be a useful API to use to submit reports to the municipalities, but when I noticed that the New Zealand version of FixMyStreet had implemented Open311 on the server side, it occurred to me that this was a nice way to allow the public, press and municipalities to do data mining directly in the FixMyStreet service. Thus I went to work implementing the Open311 specification for FixMyStreet. The implementation is not yet ready, but I am starting to get a draft limping along. In the process, I have discovered a few issues with the Open311 specification.

One obvious missing feature is the lack of natural language handling in the specification. The specification seems to assume all reports will be written in English, and does not provide a way for the receiving end to specify which languages are understood there. To be able to use the same client and submit to several Open311 receivers, it would be useful to know which language to use when writing reports. I believe the specification should be extended to allow the receivers of problem reports to specify which languages they accept, and the submitter to specify which language the report is written in. The language of a text can also be guessed automatically using statistical methods, but for multi-lingual persons like myself, it is useful to know which language to use when writing a problem report. I suspect some lang=nb,nn kind of attribute would solve it.

A key part of the Open311 API is the list of services provided, which is similar to the categories used by FixMyStreet. One issue I ran into is the need to specify both a name and a unique identifier for each category. The specification does not state that the identifier should be numeric, but all example implementations have used numbers here. In FixMyStreet, there is no number associated with each category. As the specification does not forbid it, I will use the name as the unique identifier for now and see how Open311 clients handle it.

The report format in Open311 and the report format in FixMyStreet differ in a key part. FixMyStreet has a title and a description, while Open311 only has a description and lacks the title. I'm not quite sure how to best handle this yet. When asking for a FixMyStreet report in Open311 format, I just merge title and description into the Open311 description, but this is not going to work if the Open311 API should be used for submitting new reports to FixMyStreet.

The search feature in Open311 is missing a way to ask for problems near a geographic location. I believe this is important if one is to use Open311 as the query language for mobile units. The specification should be extended to handle this, probably using some new lat=, lon= and range= options.

The final challenge I see is that the FixMyStreet code handles several administrations in one interface, while the Open311 API seems to assume only one administration. For FixMyStreet, this means a report can be sent to several administrations, and the categories available depend on the location of the problem. I am not quite sure how to best handle this. I've noticed SeeClickFix added latitude and longitude options to the services request, but that does not solve the problem of what to return when no location is specified. Will have to investigate this a bit more.

My distaste for web forums has kept me from bringing these issues up with the Open311 developer group. I really wish they had an email list available via Gmane to use for discussions instead of only a forum. Oh, well. That will probably resolve itself, one way or another. I've also tried visiting the IRC channel #open311 on FreeNode, but no-one seems to reply to my questions there. This makes me wonder if I just fail to understand how the Open311 community works. It sure does not work like the free software project communities I am used to.

6th April 2011

The Gnash project is still the most promising solution for a Free Software Flash implementation. A few days ago the project announced that it will participate in Google Summer of Code. I hope many students apply, and that some of them succeed in getting AVM2 support into Gnash.

3rd April 2011

Most of my blog posts have been in Norwegian the last few weeks, so here is a short update in English for my English-speaking readers.

The kids still keep me too busy to get much free software work done, but I did manage to organise a project to get a Norwegian port of the British service FixMyStreet up and running, and it has been running for a month now. The entire project has been organised by me and two others. Around Christmas we gathered sponsors to fund the development work. In January I drafted a contract with mySociety on what to develop, and in February the development took place. Most of it involved converting the source to use GPS coordinates instead of British easting/northing, and the resulting code should be a lot easier to get running in any country by now. The Norwegian FiksGataMi is using OpenStreetmap as the map source and the source for administrative borders in Norway, and support for this had to be added/fixed.

The Norwegian version went live March 3rd, and we spent the weekend polishing the system before we announced it March 7th. The system is running on a KVM instance of Debian/Squeeze, and has seen almost 3000 problem reports in a few weeks. Soon we hope to announce the Android and iPhone versions, making it even easier to report problems with the public infrastructure.

Perhaps something to consider for those of you in countries without such service?

28th January 2011

The last few days I have looked at ways to track open security issues here at my work with the University of Oslo. My idea is that it should be possible to use the information about security issues available on the Internet, and check our locally maintained/distributed software against this information. It should allow us to verify that no known security issues are forgotten. The CVE database listing vulnerabilities seems like a great central point, and by using the package list from Debian mapped to CVEs provided by the testing security team, I believed it should be possible to figure out which security holes were present in our free software collection.

After reading up on the topic, it became obvious that the first building block is being able to name software packages in a unique and consistent way across data sources. I considered several ways to do this, for example coming up with my own naming scheme, like using URLs to project home pages or URLs to the Freshmeat entries, or using some existing naming scheme. And it seems I am not the first one to come across this problem, as MITRE has already proposed and implemented a solution. Enter the Common Platform Enumeration dictionary, a vocabulary for referring to software, hardware and other platform components. The CPE ids are mapped to CVEs in the National Vulnerability Database, allowing me to look up known security issues for any CPE name. With this in place, all I need to do is to locate the CPE id for the software packages we use at the university. This is fairly trivial (I google for 'cve cpe $package' and check the NVD entry if a CVE for the package exists).

To give you an example: the GNU gzip source package has the CPE name cpe:/a:gnu:gzip. If the old version 1.3.3 is the one to check, one can look up cpe:/a:gnu:gzip:1.3.3 in NVD and get a list of 6 security holes with public CVE entries. The most recent one is CVE-2010-0001, and at the bottom of the NVD page for this vulnerability the complete list of affected versions is provided.

The NVD database of CVEs is also available as an XML dump, allowing for offline processing of issues. Using this dump, I've written a small script taking a list of CPEs as input and listing all CVEs affecting the packages represented by these CPEs. One gives it CPEs with version numbers as specified above and gets a list of open security issues out.
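
A rough sketch of the idea in shell could look like this. The element layout is an assumption based on the nvdcve-2.0 XML schema, and a real tool should use a proper XML parser:

#!/bin/sh
# List CVE ids in a NVD 2.0 XML dump mentioning a given CPE id.
cpe="cpe:/a:gnu:gzip:1.3.3"
awk -v cpe="$cpe" '
    match($0, /<entry id="CVE-[0-9-]+"/) {
        id = substr($0, RSTART + 11, RLENGTH - 12)
    }
    index($0, ">" cpe "<") { print id }
' nvdcve-2.0-*.xml | sort -u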

Of course, for this approach to be useful, the quality of the NVD information needs to be high. For that to happen, I believe as many as possible need to use and contribute to the NVD database. I notice RHEL is providing a map from CVE to CPE, indicating that they are using the CPE information. I'm not aware of Debian and Ubuntu doing the same.

To get an idea about the quality for free software, I spent some time making it possible to compare the CVE database from Debian with the CVE database in NVD. The result looks fairly good, but there are some inconsistencies in NVD (the same software package having several CPEs), and some inaccuracies (NVD not mentioning buggy packages that Debian believes are affected by a CVE). I hope to find time to improve the quality of NVD, but that requires being able to get in touch with someone maintaining it. So far my three emails with questions and corrections have not seen any reply, but I hope contact can be established soon.

An interesting application for CPEs is cross platform package mapping. It would be useful to know which packages in for example RHEL, OpenSuSe and Mandriva are missing from Debian and Ubuntu, and this would be trivial if all Linux distributions provided CPE entries for their packages.

23rd January 2011

In the discover-data package in Debian, there is a script to report useful information about the running hardware, for use when people report missing information. One part of this script that I find very useful when debugging hardware problems is the part mapping loaded kernel modules to the PCI devices they claim. It allows me to quickly see if the kernel module I expect is driving the hardware I am struggling with. To see the output, make sure discover-data is installed and run /usr/share/bug/discover-data 3>&1. The relevant output on one of my machines looks like this:

loaded modules:
10de:03eb i2c_nforce2
10de:03f1 ohci_hcd
10de:03f2 ehci_hcd
10de:03f0 snd_hda_intel
10de:03ec pata_amd
10de:03f6 sata_nv
1022:1103 k8temp
109e:036e bttv
109e:0878 snd_bt87x
11ab:4364 sky2

The code in question looks like this, slightly modified for readability and to drop the output to file descriptor 3:

if [ -d /sys/bus/pci/devices/ ] ; then
    echo loaded pci modules:
    (
        cd /sys/bus/pci/devices/
        for address in * ; do
            if [ -d "$address/driver/module" ] ; then
                module=`cd $address/driver/module ; pwd -P | xargs basename`
                if grep -q "^$module " /proc/modules ; then
                    address=$(echo $address |sed s/0000://)
                    id=`lspci -n -s $address | tail -n 1 | awk '{print $3}'`
                    echo "$id $module"
                fi
            fi
        done
    )
    echo
fi

Similar code could be used to extract USB device module mappings:

if [ -d /sys/bus/usb/devices/ ] ; then
    echo loaded usb modules:
    (
        cd /sys/bus/usb/devices/
        for address in * ; do
            if [ -d "$address/driver/module" ] ; then
                module=`cd $address/driver/module ; pwd -P | xargs basename`
                if grep -q "^$module " /proc/modules ; then
                    address=$(echo $address |sed s/0000://)
                    id=$(lsusb -s $address | tail -n 1 | awk '{print $6}')
                    if [ "$id" ] ; then
                        echo "$id $module"
                    fi
                fi
            fi
        done
    )
    echo
fi

This might perhaps be something to include in other tools as well.

Tags: debian, english.
16th January 2011

The video format struggle on the web continues, and the three contenders seem to be Ogg Theora, H.264 and WebM. Most video sites seem to use H.264, while others use Ogg Theora. Interestingly enough, the comments I see give me the feeling that a lot of people believe H.264 is the most supported video format in browsers, but according to the Wikipedia article on HTML5 video, this is not true. Check out the nice table of supported formats in different browsers there. The format supported by most browsers is Ogg Theora, supported by released versions of Mozilla Firefox, Google Chrome, Chromium, Opera, Konqueror, Epiphany, Origyn Web Browser and BOLT browser, while not supported by Internet Explorer nor Safari. The runner-up is WebM, supported by released versions of Google Chrome, Chromium, Opera and Origyn Web Browser, and test versions of Mozilla Firefox. H.264 is supported by released versions of Safari, Origyn Web Browser and BOLT browser, and the test version of Internet Explorer. Those wanting Ogg Theora support in Internet Explorer and Safari can install plugins to get it.

To me, the simple conclusion from this is that to reach most users without any extra software installed, one uses Ogg Theora with the HTML5 video tag. Of course, to reach all those without a browser handling HTML5, one needs fallback mechanisms. In NUUG, we provide a first fallback to a plugin capable of playing MPEG1 video, and for those without such support we have a second fallback to the Cortado java applet playing Ogg Theora. This seems to work quite well, as can be seen in an example from last week.

The reason Ogg Theora is the most supported format and H.264 the least supported is simple. Implementing and using H.264 requires royalty payments to MPEG-LA, and the terms of use from MPEG-LA are incompatible with free software licensing. If you believed H.264 was without royalties and license terms, check out "H.264 – Not The Kind Of Free That Matters" by Simon Phipps.

An incomplete list of sites providing video in Ogg Theora is available from the Xiph.org wiki, if you want to have a look. I'm not aware of a similar list for WebM nor H.264.

Update 2011-01-16 09:40: A question from Tollef on IRC made me realise that I failed to make it clear enough that this text is about the <video> tag support in browsers, and not the video support provided by external plugins like the Flash plugins.

12th January 2011

Today I discovered via digi.no that the Chrome developers, in a surprising announcement, yesterday announced plans to drop H.264 support for HTML5 <video> in the browser. The argument used is that H.264 is not a "completely open" codec technology. If you believe H.264 was free for everyone to use, I recommend having a look at the essay "H.264 – Not The Kind Of Free That Matters". It is not free of cost for creators of video tools, nor those of us that want to publish on the Internet, and the terms provided by MPEG-LA excludes free software projects from licensing the patents needed for H.264. Some background information on the Google announcement is available from OSnews. A good read. :)

Personally, I believe it is great that Google is taking a stand to promote equal terms for everyone when it comes to video publishing on the Internet. This can only be done by publishing using free and open standards, which is only possible if the web browsers provide support for these free and open standards. At the moment there seem to be two camps in the web browser world when it comes to video support. Some browsers support H.264, and others support Ogg Theora and WebM (Dirac is not really an option yet), forcing those of us who want to publish video on the Internet and can not accept the terms of use presented by MPEG-LA for H.264 to not reach all potential viewers. Wikipedia keeps an updated summary of the current browser support.

Not surprisingly, several people would prefer Google to keep promoting H.264, and John Gruber presents the mindset of these people quite well. His rhetorical questions provoked a reply from Thom Holwerda with another set of questions presenting the issues with H.264. Both are worth a read.

Some argue that if Google is dropping H.264 because it isn't free, they should also drop support for the Adobe Flash plugin. This argument was covered by Simon Phipps in today's blog post, which I find puts the issue in context. To me it makes perfect sense to drop native H.264 support for HTML5 in the browser while still allowing plugins.

I suspect the reason this announcement makes so many people protest is that all the users and promoters of H.264 suddenly get an uneasy feeling that they might be backing the wrong horse. A lot of TV broadcasters have been moving to H.264 the last few years, and a lot of money has been invested in hardware based on the belief that they could use the same video format for both broadcasting and web publishing. Suddenly this belief is shaken.

An interesting question is why Google is doing this. While the presented argument might be true enough, I believe Google would only present it if the change makes sense from a business perspective. One reason might be that they are currently negotiating with MPEG-LA over royalties or usage terms, and that giving MPEG-LA the feeling they might drop H.264 completely from Chrome, Youtube and Google Video would improve the negotiation position of Google. Another reason might be that Google wants to save money by not having to pay the video tax to MPEG-LA at all, and thus wants to move to a video format not requiring royalties at all. A third reason might be that the Chrome development team simply wants to avoid the Chrome/Chromium split, to get more help with the development of Chrome. I guess time will tell.

Update 2011-01-15: The Google Chrome team provided more background and information on the move in a blog post yesterday.

30th December 2010

After trying to compare Ogg Theora to the Digistan definition of a free and open standard, I concluded that this needs to be done for more standards, and started on a framework for doing this. As a start, I want to get the status for all the standards in the Norwegian reference directory, which includes UTF-8, HTML, PDF, ODF, JPEG, PNG, SVG and others. But to be able to complete this in a reasonable time frame, I will need help.

If you want to help out with this work, please visit the wiki pages I have set up for this, and let me know that you want to help out. The IRC channel #nuug on irc.freenode.net is a good place to coordinate this for now, as it is the IRC channel for the NUUG association where I have created the framework (I am the leader of the Norwegian Unix User Group).

The framework is still forming, and a lot is left to do. Do not be scared by the sketchy form of the current pages. :)

27th December 2010

One of the reasons I like the Digistan definition of "Free and Open Standard" is that it is a new term, and thus the meaning of the term has been decided by Digistan. The term "Open Standard" has become so misunderstood that it is no longer very useful when talking about standards. One ends up discussing which definition is the best one, and within such a frame the only ones gaining are the proponents of de-facto standards and proprietary solutions.

But to give us an idea about the diversity of definitions of open standards, here are a few that I know about. This list is not complete, but can be a starting point for those that want to do a complete survey. More definitions are available on the wikipedia page.

First off is my favourite, the definition from the European Interoperability Framework version 1.0. It is really sad to notice that BSA and others have succeeded in getting it removed from version 2.0 of the framework, by stacking the committee drafting the new version with their own people. Anyway, the definition is still available, and it includes the key properties needed to make sure everyone can use a specification on equal terms.

The following are the minimal characteristics that a specification and its attendant documents must have in order to be considered an open standard:

  • The standard is adopted and will be maintained by a not-for-profit organisation, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties (consensus or majority decision etc.).
  • The standard has been published and the standard specification document is available either freely or at a nominal charge. It must be permissible to all to copy, distribute and use it for no fee or at a nominal fee.
  • The intellectual property - i.e. patents possibly present - of (parts of) the standard is made irrevocably available on a royalty-free basis.
  • There are no constraints on the re-use of the standard.

Another one originates from my friends over at DKUUG, who coined and gathered support for this definition in 2004. It even made it into the Danish parliament as their definition of an open standard. Another definition from a different part of the Danish government is available from the Wikipedia page.

An open standard satisfies the following requirements (translated from the Danish):

  1. Well documented, with the complete specification publicly available.
  2. Freely implementable without economic, political or legal restrictions on implementation and use.
  3. Standardised and maintained in an open forum (a so-called "standardisation organisation") through an open process.

Then there is the definition from Free Software Foundation Europe.

An Open Standard refers to a format or protocol that is

  1. subject to full public assessment and use without constraints in a manner equally available to all parties;
  2. without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
  3. free from legal or technical clauses that limit its utilisation by any party or in any business model;
  4. managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
  5. available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.

A long time ago, Sun Microsystems, since bought by Oracle, created its Open Standards Checklist with a fairly detailed description.

Creation and Management of an Open Standard

  • Its development and management process must be collaborative and democratic:
    • Participation must be accessible to all those who wish to participate and can meet fair and reasonable criteria imposed by the organization under which it is developed and managed.
    • The processes must be documented and, through a known method, can be changed through input from all participants.
    • The process must be based on formal and binding commitments for the disclosure and licensing of intellectual property rights.
    • Development and management should strive for consensus, and an appeals process must be clearly outlined.
    • The standard specification must be open to extensive public review at least once in its life-cycle, with comments duly discussed and acted upon, if required.

Use and Licensing of an Open Standard

  • The standard must describe an interface, not an implementation, and the industry must be capable of creating multiple, competing implementations to the interface described in the standard without undue or restrictive constraints. Interfaces include APIs, protocols, schemas, data formats and their encoding.
  • The standard must not contain any proprietary "hooks" that create technical or economic barriers.
  • Faithful implementations of the standard must interoperate. Interoperability means the ability of a computer program to communicate and exchange information with other computer programs and mutually to use the information which has been exchanged. This includes the ability to use, convert, or exchange file formats, protocols, schemas, interface information or conventions, so as to permit the computer program to work with other computer programs and users in all the ways in which they are intended to function.
  • It must be permissible for anyone to copy, distribute and read the standard for a nominal fee, or even no fee. If there is a fee, it must be low enough to not preclude widespread use.
  • It must be possible for anyone to obtain free (no royalties or fees; also known as "royalty free"), worldwide, non-exclusive and perpetual licenses to all essential patent claims to make, use and sell products based on the standard. The only exceptions are terminations per the reciprocity and defensive suspension terms outlined below. Essential patent claims include pending, unpublished patents, published patents, and patent applications. The license is only for the exact scope of the standard in question.
    • May be conditioned only on reciprocal licenses to any of licensees' patent claims essential to practice that standard (also known as a reciprocity clause)
    • May be terminated as to any licensee who sues the licensor or any other licensee for infringement of patent claims essential to practice that standard (also known as a "defensive suspension" clause)
    • The same licensing terms are available to every potential licensor
  • The licensing terms of an open standard must not preclude implementations of that standard under open source licensing terms or restricted licensing terms.

It is said that one of the nice things about standards is that there are so many of them. As you can see, the same holds true for open standard definitions. Most of the definitions have a lot in common, and it is not really controversial what properties an open standard should have, but the diversity of definitions has made it possible for those that want to avoid a level playing field and real competition to downplay the significance of open standards. I hope we can turn this tide by focusing on the advantages of Free and Open Standards.

25th December 2010

The Digistan definition of a free and open standard reads like this:

The Digital Standards Organization defines free and open standard as follows:

  1. A free and open standard is immune to vendor capture at all stages in its life-cycle. Immunity from vendor capture makes it possible to freely use, improve upon, trust, and extend a standard over time.
  2. The standard is adopted and will be maintained by a not-for-profit organisation, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties.
  3. The standard has been published and the standard specification document is available freely. It must be permissible to all to copy, distribute, and use it freely.
  4. The patents possibly present on (parts of) the standard are made irrevocably available on a royalty-free basis.
  5. There are no constraints on the re-use of the standard.

The economic outcome of a free and open standard, which can be measured, is that it enables perfect competition between suppliers of products based on the standard.

For a while now I have tried to figure out if Ogg Theora is a free and open standard according to this definition. Here is a short writeup of what I have been able to gather so far. I brought up the topic on the Xiph advocacy mailing list in July 2009, for those that want to see some background information. According to Ivo Emanuel Gonçalves and Monty Montgomery on that list, the Ogg Theora specification fulfils the Digistan definition.

Free from vendor capture?

As far as I can see, there is no single vendor that can control the Ogg Theora specification. It can be argued that the Xiph foundation is such a vendor, but given that it is a non-profit foundation with the expressed goal of making free and open protocols and standards available, it is not obvious that this is a real risk. One issue with the Xiph foundation is that its inner workings (as in the board member list, or who controls the foundation) are not easily available on the web. I have been unable to find out who is on the foundation board, and have not seen any accounting information documenting how money is handled nor where it is spent in the foundation. It is thus not obvious for an external observer who controls the Xiph foundation, and for all I know it is possible for a single vendor to take control over the specification. But it seems unlikely.

Maintained by open not-for-profit organisation?

Assuming that the Xiph foundation is the organisation its web pages claim it to be, this point is fulfilled. If the Xiph foundation is controlled by a single vendor, it is not, but I have not found any documentation indicating this.

According to a report prepared by Audun Vaaler and Børre Ludvigsen for the Norwegian government, the Xiph foundation is a non-commercial organisation and the development process is open, transparent and non-discriminatory. Until proven otherwise, I believe it makes most sense to assume the report is correct.

Specification freely available?

The specifications for the Ogg container format and both the Vorbis and Theora codecs are available on the web. These are the terms in the Vorbis and Theora specifications:

Anyone may freely use and distribute the Ogg and [Vorbis/Theora] specifications, whether in private, public, or corporate capacity. However, the Xiph.Org Foundation and the Ogg project reserve the right to set the Ogg [Vorbis/Theora] specification and certify specification compliance.

The Ogg container format is specified in IETF RFC 3533, and this is the term:

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

All these terms seem to allow unlimited distribution and use, so this requirement seems to be fulfilled. There might be a problem with the missing permission to distribute modified versions of the text, and thus reuse it in other specifications. I am not quite sure if that is a requirement for the Digistan definition.

Royalty-free?

There are no known patent claims requiring royalties for the Ogg Theora format. MPEG-LA and Steve Jobs at Apple claim to know about some patent claims (submarine patents) against the Theora format, but no one else seems to believe them. Both Opera Software and the Mozilla Foundation have looked into this and decided to implement Ogg Theora support in their browsers without paying any royalties. For now the claims from MPEG-LA and Steve Jobs seem more like FUD to scare people into using the H.264 codec than any real problem with Ogg Theora.

No constraints on re-use?

I am not aware of any constraints on re-use.

Conclusion

3 of 5 requirements seem obviously fulfilled, and the remaining 2 depend on the governing structure of the Xiph foundation. Given the background report used by the Norwegian government, I believe it is safe to assume the last two requirements are fulfilled too, but it would be nice if the Xiph foundation web site made it easier to verify this.

It would be nice to see similar analyses of other specifications, to check whether they are free and open standards.

25th December 2010

A few days ago an article in the Norwegian Computerworld magazine described how version 2.0 of the European Interoperability Framework has been successfully lobbied by the proprietary software industry to remove the focus on free software. Nothing very surprising there, given earlier reports on how Microsoft and others have stacked the committees in this work. But I find this very sad. The definition of an open standard from version 1 was very good, and something I believe should be used also in the future, alongside the definition from Digistan. Version 2 has removed the open standard definition from its content.

Anyway, the news reminded me of the great reply sent by Dr. Edgar Villanueva, congressman in Peru at the time, to Microsoft, in response to Microsoft's attack on his proposal regarding the use of free software in the public sector in Peru. As the text was no longer available from a few of the URLs where it used to be published, I copy it here from my source to ensure it is available also in the future. Some background information about that story is available in an article from Linux Journal in 2002.

Lima, 8th of April, 2002
To: Señor JUAN ALBERTO GONZÁLEZ
General Manager of Microsoft Perú

Dear Sir:

First of all, I thank you for your letter of March 25, 2002 in which you state the official position of Microsoft relative to Bill Number 1609, Free Software in Public Administration, which is indubitably inspired by the desire for Peru to find a suitable place in the global technological context. In the same spirit, and convinced that we will find the best solutions through an exchange of clear and open ideas, I will take this opportunity to reply to the commentaries included in your letter.

While acknowledging that opinions such as yours constitute a significant contribution, it would have been even more worthwhile for me if, rather than formulating objections of a general nature (which we will analyze in detail later) you had gathered solid arguments for the advantages that proprietary software could bring to the Peruvian State, and to its citizens in general, since this would have allowed a more enlightening exchange in respect of each of our positions.

With the aim of creating an orderly debate, we will assume that what you call "open source software" is what the Bill defines as "free software", since there exists software for which the source code is distributed together with the program, but which does not fall within the definition established by the Bill; and that what you call "commercial software" is what the Bill defines as "proprietary" or "unfree", given that there exists free software which is sold in the market for a price like any other good or service.

It is also necessary to make it clear that the aim of the Bill we are discussing is not directly related to the amount of direct savings that can be made by using free software in state institutions. That is in any case a marginal aggregate value, but in no way is it the chief focus of the Bill. The basic principles which inspire the Bill are linked to the basic guarantees of a state of law, such as:

  • Free access to public information by the citizen.
  • Permanence of public data.
  • Security of the State and citizens.

To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software.

To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code.

To guarantee national security or the security of the State, it is indispensable to be able to rely on systems without elements which allow control from a distance or the undesired transmission of information to third parties. Systems with source code freely accessible to the public are required to allow their inspection by the State itself, by the citizens, and by a large number of independent experts throughout the world. Our proposal brings further security, since the knowledge of the source code will eliminate the growing number of programs with *spy code*.

In the same way, our proposal strengthens the security of the citizens, both in their role as legitimate owners of information managed by the state, and in their role as consumers. In this second case, by allowing the growth of a widespread availability of free software not containing *spy code* able to put at risk privacy and individual freedoms.

In this sense, the Bill is limited to establishing the conditions under which the state bodies will obtain software in the future, that is, in a way compatible with these basic principles.

From reading the Bill it will be clear that once passed:

  • the law does not forbid the production of proprietary software
  • the law does not forbid the sale of proprietary software
  • the law does not specify which concrete software to use
  • the law does not dictate the supplier from whom software will be bought
  • the law does not limit the terms under which a software product can be licensed.
  • What the Bill does express clearly, is that, for software to be acceptable for the state it is not enough that it is technically capable of fulfilling a task, but that further the contractual conditions must satisfy a series of requirements regarding the license, without which the State cannot guarantee the citizen adequate processing of his data, watching over its integrity, confidentiality, and accessibility throughout time, as these are very critical aspects for its normal functioning.

We agree, Mr. Gonzalez, that information and communication technology have a significant impact on the quality of life of the citizens (whether it be positive or negative). We surely also agree that the basic values I have pointed out above are fundamental in a democratic state like Peru. So we are very interested to know of any other way of guaranteeing these principles, other than through the use of free software in the terms defined by the Bill.

As for the observations you have made, we will now go on to analyze them in detail:

Firstly, you point out that: "1. The bill makes it compulsory for all public bodies to use only free software, that is to say open source software, which breaches the principles of equality before the law, that of non-discrimination and the right of free private enterprise, freedom of industry and of contract, protected by the constitution."

This understanding is in error. The Bill in no way affects the rights you list; it limits itself entirely to establishing conditions for the use of software on the part of state institutions, without in any way meddling in private sector transactions. It is a well established principle that the State does not enjoy the wide spectrum of contractual freedom of the private sector, as it is limited in its actions precisely by the requirement for transparency of public acts; and in this sense, the preservation of the greater common interest must prevail when legislating on the matter.

The Bill protects equality under the law, since no natural or legal person is excluded from the right of offering these goods to the State under the conditions defined in the Bill and without more limitations than those established by the Law of State Contracts and Purchasing (T.U.O. by Supreme Decree No. 012-2001-PCM).

The Bill does not introduce any discrimination whatever, since it only establishes *how* the goods have to be provided (which is a state power) and not *who* has to provide them (which would effectively be discriminatory, if restrictions based on national origin, race, religion, ideology, sexual preference etc. were imposed). On the contrary, the Bill is decidedly antidiscriminatory. This is so because by defining with no room for doubt the conditions for the provision of software, it prevents state bodies from using software which has a license including discriminatory conditions.

It should be obvious from the preceding two paragraphs that the Bill does not harm free private enterprise, since the latter can always choose under what conditions it will produce software; some of these will be acceptable to the State, and others will not be since they contradict the guarantee of the basic principles listed above. This free initiative is of course compatible with the freedom of industry and freedom of contract (in the limited form in which the State can exercise the latter). Any private subject can produce software under the conditions which the State requires, or can refrain from doing so. Nobody is forced to adopt a model of production, but if they wish to provide software to the State, they must provide the mechanisms which guarantee the basic principles, and which are those described in the Bill.

By way of an example: nothing in the text of the Bill would prevent your company offering the State bodies an office "suite", under the conditions defined in the Bill and setting the price that you consider satisfactory. If you did not, it would not be due to restrictions imposed by the law, but to business decisions relative to the method of commercializing your products, decisions with which the State is not involved.

To continue; you note that: "2. The bill, by making the use of open source software compulsory, would establish discriminatory and non competitive practices in the contracting and purchasing by public bodies..."

This statement is just a reiteration of the previous one, and so the response can be found above. However, let us concern ourselves for a moment with your comment regarding "non-competitive ... practices."

Of course, in defining any kind of purchase, the buyer sets conditions which relate to the proposed use of the good or service. From the start, this excludes certain manufacturers from the possibility of competing, but does not exclude them "a priori", but rather based on a series of principles determined by the autonomous will of the purchaser, and so the process takes place in conformance with the law. And in the Bill it is established that *no one* is excluded from competing as far as he guarantees the fulfillment of the basic principles.

Furthermore, the Bill *stimulates* competition, since it tends to generate a supply of software with better conditions of usability, and to better existing work, in a model of continuous improvement.

On the other hand, the central aspect of competitivity is the chance to provide better choices to the consumer. Now, it is impossible to ignore the fact that marketing does not play a neutral role when the product is offered on the market (since accepting the opposite would lead one to suppose that firms' expenses in marketing lack any sense), and that therefore a significant expense under this heading can influence the decisions of the purchaser. This influence of marketing is in large measure reduced by the bill that we are backing, since the choice within the framework proposed is based on the *technical merits* of the product and not on the effort put into commercialization by the producer; in this sense, competitiveness is increased, since the smallest software producer can compete on equal terms with the most powerful corporations.

It is necessary to stress that there is no position more anti-competitive than that of the big software producers, which frequently abuse their dominant position, since in innumerable cases they propose as a solution to problems raised by users: "update your software to the new version" (at the user's expense, naturally); furthermore, it is common to find arbitrary cessation of technical help for products, which, in the provider's judgment alone, are "old"; and so, to receive any kind of technical assistance, the user finds himself forced to migrate to new versions (with non-trivial costs, especially as changes in hardware platform are often involved). And as the whole infrastructure is based on proprietary data formats, the user stays "trapped" in the need to continue using products from the same supplier, or to make the huge effort to change to another environment (probably also proprietary).

    You add: "3. So, by compelling the State to favor a business model based entirely on open source, the bill would only discourage the local and international manufacturing companies, which are the ones which really undertake important expenditures, create a significant number of direct and indirect jobs, as well as contributing to the GNP, as opposed to a model of open source software which tends to have an ever weaker economic impact, since it mainly creates jobs in the service sector."

    I do not agree with your statement. Partly because of what you yourself point out in paragraph 6 of your letter, regarding the relative weight of services in the context of software use. This contradiction alone would invalidate your position. The service model, adopted by a large number of companies in the software industry, is much larger in economic terms, and with a tendency to increase, than the licensing of programs.

    On the other hand, the private sector of the economy has the widest possible freedom to choose the economic model which best suits its interests, even if this freedom of choice is often obscured subliminally by the disproportionate expenditure on marketing by the producers of proprietary software.

    In addition, a reading of your opinion would lead to the conclusion that the State market is crucial and essential for the proprietary software industry, to such a point that the choice made by the State in this bill would completely eliminate the market for these firms. If that is true, we can deduce that the State must be subsidizing the proprietary software industry. In the unlikely event that this were true, the State would have the right to apply the subsidies in the area it considered of greatest social value; it is undeniable, in this improbable hypothesis, that if the State decided to subsidize software, it would have to do so choosing the free over the proprietary, considering its social effect and the rational use of taxpayers money.

    In respect of the jobs generated by proprietary software in countries like ours, these mainly concern technical tasks of little aggregate value; at the local level, the technicians who provide support for proprietary software produced by transnational companies do not have the possibility of fixing bugs, not necessarily for lack of technical capability or of talent, but because they do not have access to the source code to fix it. With free software one creates more technically qualified employment and a framework of free competence where success is only tied to the ability to offer good technical support and quality of service, one stimulates the market, and one increases the shared fund of knowledge, opening up alternatives to generate services of greater total value and a higher quality level, to the benefit of all involved: producers, service organizations, and consumers.

    It is a common phenomenon in developing countries that local software industries obtain the majority of their takings in the service sector, or in the creation of "ad hoc" software. Therefore, any negative impact that the application of the Bill might have in this sector will be more than compensated by a growth in demand for services (as long as these are carried out to high quality standards). If the transnational software companies decide not to compete under these new rules of the game, it is likely that they will undergo some decrease in takings in terms of payment for licenses; however, considering that these firms continue to allege that much of the software used by the State has been illegally copied, one can see that the impact will not be very serious. Certainly, in any case their fortune will be determined by market laws, changes in which cannot be avoided; many firms traditionally associated with proprietary software have already set out on the road (supported by copious expense) of providing services associated with free software, which shows that the models are not mutually exclusive.

    With this bill the State is deciding that it needs to preserve certain fundamental values. And it is deciding this based on its sovereign power, without affecting any of the constitutional guarantees. If these values could be guaranteed without having to choose a particular economic model, the effects of the law would be even more beneficial. In any case, it should be clear that the State does not choose an economic model; if it happens that there only exists one economic model capable of providing software which provides the basic guarantee of these principles, this is because of historical circumstances, not because of an arbitrary choice of a given model.

Your letter continues: "4. The bill imposes the use of open source software without considering the dangers that this can bring from the point of view of security, guarantee, and possible violation of the intellectual property rights of third parties."

Alluding in an abstract way to "the dangers this can bring", without specifically mentioning a single one of these supposed dangers, shows at the least some lack of knowledge of the topic. So, allow me to enlighten you on these points.

On security:

National security has already been mentioned in general terms in the initial discussion of the basic principles of the bill. In more specific terms, relative to the security of the software itself, it is well known that all software (whether proprietary or free) contains errors or "bugs" (in programmers' slang). But it is also well known that the bugs in free software are fewer, and are fixed much more quickly, than in proprietary software. It is not in vain that numerous public bodies responsible for the IT security of state systems in developed countries require the use of free software for the same conditions of security and efficiency.

What is impossible to prove is that proprietary software is more secure than free, without the public and open inspection of the scientific community and users in general. This demonstration is impossible because the model of proprietary software itself prevents this analysis, so that any guarantee of security is based only on promises of good intentions (biased, by any reckoning) made by the producer itself, or its contractors.

It should be remembered that in many cases, the licensing conditions include Non-Disclosure clauses which prevent the user from publicly revealing security flaws found in the licensed proprietary product.

In respect of the guarantee:

As you know perfectly well, or could find out by reading the "End User License Agreement" of the products you license, in the great majority of cases the guarantees are limited to replacement of the storage medium in case of defects, but in no case is compensation given for direct or indirect damages, loss of profits, etc... If as a result of a security bug in one of your products, not fixed in time by yourselves, an attacker managed to compromise crucial State systems, what guarantees, reparations and compensation would your company make in accordance with your licensing conditions? The guarantees of proprietary software, inasmuch as programs are delivered "AS IS", that is, in the state in which they are, with no additional responsibility of the provider in respect of function, in no way differ from those normal with free software.

On Intellectual Property:

Questions of intellectual property fall outside the scope of this bill, since they are covered by specific other laws. The model of free software in no way implies ignorance of these laws, and in fact the great majority of free software is covered by copyright. In reality, the inclusion of this question in your observations shows your confusion in respect of the legal framework in which free software is developed. The inclusion of the intellectual property of others in works claimed as one's own is not a practice that has been noted in the free software community; whereas, unfortunately, it has been in the area of proprietary software. As an example, the condemnation by the Commercial Court of Nanterre, France, on 27th September 2001 of Microsoft Corp. to a penalty of 3 million francs in damages and interest, for violation of intellectual property (piracy, to use the unfortunate term that your firm commonly uses in its publicity).

You go on to say that: "5. The bill uses the concept of open source software incorrectly, since it does not necessarily imply that the software is free or of zero cost, and so arrives at mistaken conclusions regarding State savings, with no cost-benefit analysis to validate its position."

This observation is wrong; in principle, freedom and lack of cost are orthogonal concepts: there is software which is proprietary and charged for (for example, MS Office), software which is proprietary and free of charge (MS Internet Explorer), software which is free and charged for (Red Hat, SuSE etc GNU/Linux distributions), software which is free and not charged for (Apache, Open Office, Mozilla), and even software which can be licensed in a range of combinations (MySQL).

Certainly free software is not necessarily free of charge. And the text of the bill does not state that it has to be so, as you will have noted after reading it. The definitions included in the Bill state clearly *what* should be considered free software, at no point referring to freedom from charges. Although the possibility of savings in payments for proprietary software licenses are mentioned, the foundations of the bill clearly refer to the fundamental guarantees to be preserved and to the stimulus to local technological development. Given that a democratic State must support these principles, it has no other choice than to use software with publicly available source code, and to exchange information only in standard formats.

If the State does not use software with these characteristics, it will be weakening basic republican principles. Luckily, free software also implies lower total costs; however, even given the hypothesis (easily disproved) that it was more expensive than proprietary software, the simple existence of an effective free software tool for a particular IT function would oblige the State to use it; not by command of this Bill, but because of the basic principles we enumerated at the start, and which arise from the very essence of the lawful democratic State.

You continue: "6. It is wrong to think that Open Source Software is free of charge. Research by the Gartner Group (an important investigator of the technological market recognized at world level) has shown that the cost of purchase of software (operating system and applications) is only 8% of the total cost which firms and institutions take on for a rational and truly beneficial use of the technology. The other 92% consists of: installation costs, enabling, support, maintenance, administration, and down-time."

This argument repeats that already given in paragraph 5 and partly contradicts paragraph 3. For the sake of brevity we refer to the comments on those paragraphs. However, allow me to point out that your conclusion is logically false: even if according to Gartner Group the cost of software is on average only 8% of the total cost of use, this does not in any way deny the existence of software which is free of charge, that is, with a licensing cost of zero.

In addition, in this paragraph you correctly point out that the service components and losses due to down-time make up the largest part of the total cost of software use, which, as you will note, contradicts your statement regarding the small value of services suggested in paragraph 3. Now the use of free software contributes significantly to reduce the remaining life-cycle costs. This reduction in the costs of installation, support etc. can be noted in several areas: in the first place, the competitive service model of free software, support and maintenance for which can be freely contracted out to a range of suppliers competing on the grounds of quality and low cost. This is true for installation, enabling, and support, and in large part for maintenance. In the second place, due to the reproductive characteristics of the model, maintenance carried out for an application is easily replicable, without incurring large costs (that is, without paying more than once for the same thing) since modifications, if one wishes, can be incorporated in the common fund of knowledge. Thirdly, the huge costs caused by non-functioning software ("blue screens of death", malicious code such as viruses, worms, and trojans, exceptions, general protection faults and other well-known problems) are reduced considerably by using more stable software; and it is well known that one of the most notable virtues of free software is its stability.

You further state that: "7. One of the arguments behind the bill is the supposed freedom from costs of open-source software, compared with the costs of commercial software, without taking into account the fact that there exist types of volume licensing which can be highly advantageous for the State, as has happened in other countries."

I have already pointed out that what is in question is not the cost of the software but the principles of freedom of information, accessibility, and security. These arguments have been covered extensively in the preceding paragraphs to which I would refer you.

On the other hand, there certainly exist types of volume licensing (although unfortunately proprietary software does not satisfy the basic principles). But as you correctly pointed out in the immediately preceding paragraph of your letter, they only manage to reduce the impact of a component which makes up no more than 8% of the total.

You continue: "8. In addition, the alternative adopted by the bill (I) is clearly more expensive, due to the high costs of software migration, and (II) puts at risk compatibility and interoperability of the IT platforms within the State, and between the State and the private sector, given the hundreds of versions of open source software on the market."

Let us analyze your statement in two parts. Your first argument, that migration implies high costs, is in reality an argument in favor of the Bill. Because the more time goes by, the more difficult migration to another technology will become; and at the same time, the security risks associated with proprietary software will continue to increase. In this way, the use of proprietary systems and formats will make the State ever more dependent on specific suppliers. Once a policy of using free software has been established (which certainly, does imply some cost) then on the contrary migration from one system to another becomes very simple, since all data is stored in open formats. On the other hand, migration to an open software context implies no more costs than migration between two different proprietary software contexts, which invalidates your argument completely.

The second argument refers to "problems in interoperability of the IT platforms within the State, and between the State and the private sector". This statement implies a certain lack of knowledge of the way in which free software is built, which does not maximize the dependence of the user on a particular platform, as normally happens in the realm of proprietary software. Even when there are multiple free software distributions, and numerous programs which can be used for the same function, interoperability is guaranteed as much by the use of standard formats, as required by the bill, as by the possibility of creating interoperable software given the availability of the source code.

    You then say that: "9. The majority of open source code does not offer adequate levels of service nor the guarantee from recognized manufacturers of high productivity on the part of the users, which has led various public organizations to retract their decision to go with an open source software solution and to use commercial software in its place."

    This observation is without foundation. In respect of the guarantee, your argument was rebutted in the response to paragraph 4. In respect of support services, it is possible to use free software without them (just as also happens with proprietary software), but anyone who does need them can obtain support separately, whether from local firms or from international corporations, again just as in the case of proprietary software.

    On the other hand, it would contribute greatly to our analysis if you could inform us about free software projects *established* in public bodies which have already been abandoned in favor of proprietary software. We know of a good number of cases where the opposite has taken place, but not know of any where what you describe has taken place.

    You continue by observing that: "10. The bill discourages the creativity of the Peruvian software industry, which invoices 40 million US$/year, exports 4 million US$ (10th in ranking among non-traditional exports, more than handicrafts) and is a source of highly qualified employment. With a law that encourages the use of open source, software programmers lose their intellectual property rights and their main source of payment."

    It is clear enough that nobody is forced to commercialize their code as free software. The only thing to take into account is that if it is not free software, it cannot be sold to the public sector. This is not in any case the main market for the national software industry. We covered some questions referring to the influence of the Bill on the generation of employment which would be both highly technically qualified and in better conditions for competition above, so it seems unnecessary to insist on this point.

    What follows in your statement is incorrect. On the one hand, no author of free software loses his intellectual property rights, unless he expressly wishes to place his work in the public domain. The free software movement has always been very respectful of intellectual property, and has generated widespread public recognition of its authors. Names like those of Richard Stallman, Linus Torvalds, Guido van Rossum, Larry Wall, Miguel de Icaza, Andrew Tridgell, Theo de Raadt, Andrea Arcangeli, Bruce Perens, Darren Reed, Alan Cox, Eric Raymond, and many others, are recognized world-wide for their contributions to the development of software that is used today by millions of people throughout the world. On the other hand, to say that the rewards for authors rights make up the main source of payment of Peruvian programmers is in any case a guess, in particular since there is no proof to this effect, nor a demonstration of how the use of free software by the State would influence these payments.

    You go on to say that: "11. Open source software, since it can be distributed without charge, does not allow the generation of income for its developers through exports. In this way, the multiplier effect of the sale of software to other countries is weakened, and so in turn is the growth of the industry, while Government rules ought on the contrary to stimulate local industry."

    This statement shows once again complete ignorance of the mechanisms of and market for free software. It tries to claim that the market of sale of non- exclusive rights for use (sale of licenses) is the only possible one for the software industry, when you yourself pointed out several paragraphs above that it is not even the most important one. The incentives that the bill offers for the growth of a supply of better qualified professionals, together with the increase in experience that working on a large scale with free software within the State will bring for Peruvian technicians, will place them in a highly competitive position to offer their services abroad.

    You then state that: "12. In the Forum, the use of open source software in education was discussed, without mentioning the complete collapse of this initiative in a country like Mexico, where precisely the State employees who founded the project now state that open source software did not make it possible to offer a learning experience to pupils in the schools, did not take into account the capability at a national level to give adequate support to the platform, and that the software did not and does not allow for the levels of platform integration that now exist in schools."

    In fact Mexico has gone into reverse with the Red Escolar (Schools Network) project. This is due precisely to the fact that the driving forces behind the Mexican project used license costs as their main argument, instead of the other reasons specified in our project, which are far more essential. Because of this conceptual mistake, and as a result of the lack of effective support from the SEP (Secretary of State for Public Education), the assumption was made that to implant free software in schools it would be enough to drop their software budget and send them a CD ROM with Gnu/Linux instead. Of course this failed, and it couldn't have been otherwise, just as school laboratories fail when they use proprietary software and have no budget for implementation and maintenance. That's exactly why our bill is not limited to making the use of free software mandatory, but recognizes the need to create a viable migration plan, in which the State undertakes the technical transition in an orderly way in order to then enjoy the advantages of free software.

    You end with a rhetorical question: "13. If open source software satisfies all the requirements of State bodies, why do you need a law to adopt it? Shouldn't it be the market which decides freely which products give most benefits or value?"

    We agree that in the private sector of the economy, it must be the market that decides which products to use, and no state interference is permissible there. However, in the case of the public sector, the reasoning is not the same: as we have already established, the state archives, handles, and transmits information which does not belong to it, but which is entrusted to it by citizens, who have no alternative under the rule of law. As a counterpart to this legal requirement, the State must take extreme measures to safeguard the integrity, confidentiality, and accessibility of this information. The use of proprietary software raises serious doubts as to whether these requirements can be fulfilled, lacks conclusive evidence in this respect, and so is not suitable for use in the public sector.

    The need for a law is based, firstly, on the realization of the fundamental principles listed above in the specific area of software; secondly, on the fact that the State is not an ideal homogeneous entity, but made up of multiple bodies with varying degrees of autonomy in decision making. Given that it is inappropriate to use proprietary software, the fact of establishing these rules in law will prevent the personal discretion of any state employee from putting at risk the information which belongs to citizens. And above all, because it constitutes an up-to-date reaffirmation in relation to the means of management and communication of information used today, it is based on the republican principle of openness to the public.

    In conformance with this universally accepted principle, the citizen has the right to know all information held by the State and not covered by well- founded declarations of secrecy based on law. Now, software deals with information and is itself information. Information in a special form, capable of being interpreted by a machine in order to execute actions, but crucial information all the same because the citizen has a legitimate right to know, for example, how his vote is computed or his taxes calculated. And for that he must have free access to the source code and be able to prove to his satisfaction the programs used for electoral computations or calculation of his taxes.

    I wish you the greatest respect, and would like to repeat that my office will always be open for you to expound your point of view to whatever level of detail you consider suitable.

    Cordially,
    DR. EDGAR DAVID VILLANUEVA NUÑEZ
    Congressman of the Republic of Perú.

25th December 2010

Half a year ago I wrote a bit about OfficeShots, a web service allowing anyone to test how ODF documents are handled by the different programs reading and writing the ODF format.

I just had a look at the service, and it seems to be going strong. It is very interesting to see the results reported in the gallery, showing how different office implementations handle different ODF features. Sad to see that KOffice was not doing very well, and happy to see that LibreOffice has been tested already (but sadly not yet listed as an option for OfficeShots users). I am glad to see that the ODF community has such a great test tool available.

22nd December 2010

The last few days I have spent at work here at the University of Oslo testing if the new batch of computers will work with Linux. Every year for the last few years the university has organised a shared bid for a few thousand computers, and this year HP won the bid. Two different desktops and five different laptops are on the list this year. We in the UNIX group want to know which of these computers work well with RHEL and Ubuntu, the two Linux distributions we currently handle at the university.

My test method is simple, and I share it here to get feedback and perhaps inspire others to test hardware as well. To test, I PXE install the OS version of choice, log in as my normal user, run a few applications and plug in selected pieces of hardware. When something fails, I make a note about it in the test matrix and move on. If I have some spare time I try to report the bug to the OS vendor, but as I only have the machines for a short time, I rarely have the time to do this for all the problems I find.

Anyway, to get to the point of this post. Here are the simple tests I perform on a new model.

  • Is PXE installation working? I'm testing with RHEL6, Ubuntu Lucid and Ubuntu Maverick at the moment. If I feel like it, I also test with RHEL5 and Debian Edu/Squeeze.
  • Is X.org working? If the graphical login screen shows up after installation, X.org is working.
  • Is hardware accelerated OpenGL working? I run glxgears (in the package mesa-utils on Ubuntu) and write down the frames per second it reports (see the sketch after this list).
  • Is sound working? With Gnome and KDE, a sound is played when logging in, and if I can hear it the test is successful. If there are several audio outputs on the machine, I try them all and check if the Gnome/KDE audio mixer can control where to send the sound. I normally test this by playing a HTML5 video in Firefox/Iceweasel.
  • Is the USB subsystem working? I test this by plugging in a USB memory stick and see if Gnome/KDE notices it.
  • Is the CD/DVD player working? I test this by inserting any CD/DVD I have lying around, and see if Gnome/KDE notices it.
  • Is any built-in camera working? I test using cheese, and see if a picture from the v4l device shows up.
  • Is bluetooth working? I use the Gnome/KDE browsing tool to see if any bluetooth devices are discovered. In my office, I normally see a few.
  • For laptops, is the SD or Compact Flash reader working? I have memory modules lying around, and stick them in to see if Gnome/KDE notices them.
  • For laptops, is suspend/hibernate working? I test if the special button works, and if the laptop continues to work after resume.
  • For laptops, are the extra buttons working, like audio level, adjusting the backlight, switching on/off external video output, switching on/off wifi, bluetooth, etc? The set of buttons differs from laptop to laptop, so I just write down which are working and which are not.
  • Some laptops have smart card readers, fingerprint readers, acceleration sensors etc. I rarely test these, as I do not know how to quickly test if they are working or not, so I only document their existence.

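To make the OpenGL numbers easier to collect and compare across machines, something like the following small script can run glxgears for a fixed period and average the reported frame rates. It is only a rough sketch of my manual procedure, assuming glxgears from mesa-utils is installed and an X session is available; glxgears is of course a crude benchmark at best.

#!/usr/bin/python3
# Rough sketch: run glxgears for a fixed period and report the
# frame rates it prints.  Assumes glxgears (package mesa-utils)
# is installed and an X session is available.

import re
import subprocess
import time

def glxgears_fps(seconds=15):
    # glxgears runs until killed, printing a summary line roughly
    # every five seconds, like "1508 frames in 5.0 seconds = 301.532 FPS".
    # stdbuf -oL forces line buffered output so nothing is lost in
    # the pipe when the process is terminated.
    proc = subprocess.Popen(['stdbuf', '-oL', 'glxgears'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL, text=True)
    time.sleep(seconds)
    proc.terminate()
    output = proc.communicate()[0]
    return [float(fps) for fps in re.findall(r'= ([\d.]+) FPS', output)]

samples = glxgears_fps()
if samples:
    print("samples: %s" % samples)
    print("average: %.1f FPS" % (sum(samples) / len(samples)))
else:
    print("no FPS output captured, is glxgears installed?")
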
By now I suspect you are really curious what the test results are for the HP machines I am testing. I'm not done yet, so I will report the test results later. For now I can report that the HP 8100 Elite works fine, that hibernation fails with the HP EliteBook 8440p on Ubuntu Lucid, and that audio fails on RHEL6. Ubuntu Maverick worked with the 8440p. As you can see, I have most machines left to test. One interesting observation is that Ubuntu Lucid has almost twice the glxgears frame rate of RHEL6. No idea why.

11th December 2010

As I continue to explore BitCoin, I have started to wonder what properties the system has, and how it will be affected by laws and regulations here in Norway. Here are some random notes.

One interesting thing to note is that since the transactions are verified using a peer to peer network, all details about a transaction are known to everyone. This means that if a BitCoin address has been published, like I did with mine in my initial post about BitCoin, it is possible for everyone to see how many BitCoins have been transferred to that address. There is even a web service to look at the details for all transactions. There I can see that my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b has received 16.06 BitCoins, that the 1LfdGnGuWkpSJgbQySxxCWhv8MHqvwst3 address of Simon Phipps has received 181.97 BitCoins, and that the address 1MCwBbhNGp5hRm5rC1Aims2YFRe2SXPYKt of EFF has received 2447.38 BitCoins so far. Thank you to each and every one of you that donated bitcoins to support my activity. The fact that anyone can see how much money was transferred to a given address makes it more obvious why the BitCoin community recommends generating and handing out a new address for each transaction. I'm told there is no way to track which addresses belong to a given person or organisation without the person or organisation revealing it themselves, as Simon, EFF and I have done.

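For addresses known to one's own wallet, the running node can report the same received totals directly. Here is a minimal sketch, assuming a local bitcoind is running and the address belongs to the local wallet (newer clients ship a separate bitcoin-cli command for this; older ones used bitcoind itself as the RPC client). For arbitrary third party addresses one would have to ask a block explorer web service instead.

#!/usr/bin/python3
# Minimal sketch: ask a local bitcoind how much a wallet address
# has received, using the getreceivedbyaddress RPC.  Assumes
# bitcoind is running and the address belongs to the local wallet.

from decimal import Decimal
import subprocess

def received_by_address(address, minconf=1):
    out = subprocess.check_output(
        ['bitcoin-cli', 'getreceivedbyaddress', address, str(minconf)],
        text=True)
    return Decimal(out.strip())

print(received_by_address('15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b'))
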
In Norway, and in most other countries, there are laws and regulations limiting how much money one can transfer across the border without declaring it. There are also money laundering, tax and accounting laws and regulations I would expect to apply to the use of BitCoin. If the Skolelinux foundation (SLX Debian Labs) were to accept donations in BitCoin in addition to normal bank transfers like EFF is doing, how should this be accounted? Given that it is impossible to know whether the money crosses a border or not, should everything or nothing be declared? What exchange rate should be used when calculating taxes? Would receivers have to pay income tax if the foundation were to pay Skolelinux contributors in BitCoin? I have no idea, but it would be interesting to know.

For a currency to be useful and successful, it must be trusted and accepted by a lot of users. It must be possible to get easy access to the currency (as a wage or using currency exchanges), and it must be easy to spend it. At the moment BitCoin seems fairly easy to get access to, but there are very few places to spend it. I am not really a regular user of any of the vendor types currently accepting BitCoin, so I wonder when my kind of shop will start accepting BitCoins. I would like to buy electronics, travel and subway tickets, not herbs and books. :) The currency is young, and this will improve over time if it becomes popular, but I suspect regular banks will start to lobby to get BitCoin declared illegal if it becomes popular. I'm sure they will claim it is helping fund terrorism and money laundering (which probably would be true of it, as it is of any other currency in existence), but I believe those problems should be solved elsewhere and not by blaming currencies.

The process of creating new BitCoins is called mining, and it is a CPU-intensive process that depends on a bit of luck as well (as one is competing against all the other miners currently spending CPU cycles to see which one gets the next lump of cash). The "winner" gets 50 BitCoins when this happens. Yesterday I came across the obvious way to join forces to increase one's chances of getting at least some coins: coordinating the mining of BitCoins across several machines and people, and sharing the result if one of them is lucky and gets the 50 BitCoins. Check out BitCoin Pool if this sounds interesting. I have not had time to set up a machine to participate there yet, but I have seen that running on my own for a few days has not yielded any BitCoins through mining yet.

    Update 2010-12-15: Found an interesting criticism of bitcoin. Not quite sure how valid it is, but thought it was interesting to read. The arguments presented seem to be equally valid for gold, which was used as a currency for many years.

    10th December 2010

    With this week's lawless governmental attacks on Wikileaks and free speech, it has become obvious that PayPal, Visa and Mastercard can not be trusted to handle money transactions. A blog post from Simon Phipps on bitcoin reminded me about a project that a friend of mine mentioned earlier. I decided to follow Simon's example and get involved with BitCoin. I got some help from my friend to get it all running, and he even handed me some bitcoins to get started. I even donated a few bitcoins to Simon for helping me remember BitCoin.

    So, what is BitCoin, you probably wonder? It is a digital crypto-currency, decentralised and handled using peer-to-peer networks. It allows anonymous transactions and prevents central control over the transactions, making it impossible for governments and companies alike to block donations and other transactions. The source is free software, and while the key dependency wxWidgets 2.9 for the graphical user interface is missing in Debian, the command line client builds just fine. Hopefully Jonas will get the package into Debian soon.
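
    For the impatient, building just the command line client goes roughly like this. This is only a sketch; the build dependencies, makefile name and target are assumptions for the 0.3.x source and may differ between versions:

    # Build only the command line client, skipping the wxWidgets GUI.
    # Package names, makefile name and target are assumptions.
    sudo apt-get install build-essential libssl-dev libdb4.8++-dev libboost-all-dev
    make -f makefile.unix bitcoind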

    Bitcoins can be converted to other currencies, like USD and EUR. There are companies accepting bitcoins when selling services and goods, and there are even currency "stock" markets where the exchange rate is decided. There are not many users so far, but the concept seems promising. If you want to get started and lack a friend with any bitcoins to spare, you can even get some for free (0.05 bitcoin at the time of writing). Use BitcoinWatch to keep an eye on the current exchange rates.

    As an experiment, I have decided to set up bitcoind on one of my machines. If you want to support my activity, please send Bitcoin donations to the address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b. Thank you!
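
    Getting a daemon running and an address to hand out is quick. A minimal sketch using the bitcoind RPC commands of the time:

    # Start the daemon, check that it is syncing, and generate a fresh
    # receiving address to publish.
    bitcoind -daemon
    bitcoind getinfo
    bitcoind getnewaddress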

    9th December 2010

    A few days ago, I was introduced to some students in the robot student association Robotica Osloensis at the University of Oslo where I work, who planned to get their own 3D printer. They wanted to learn from me based on my work in the area. After having a short lunch meeting with them, I offered to lend them my RepRap kit, as I never had time to complete the build and this seems unlikely to change any time soon. I look forward to seeing how this goes. This Monday their volunteer driver picked up my kit and drove it to their lab, and tomorrow I am told the last exam is over, so they can start work on getting the 3D printer operational.

    The robotics group has already built several robots on their own, and seems capable of getting the RepRap operational. I really look forward to being able to print all the cool 3D designs published on Thingiverse. I even have some 3D scans that were made during Dagen@IFI, when one of the groups at the computer science department at the university demonstrated their very cool 3D scanner.

    29th November 2010

    On Friday, the first Debian Edu / Skolelinux development gathering in a long time takes place here in Oslo, Norway. I really look forward to seeing all the good people working on the Squeeze release. The gathering is open to everyone interested in learning more about Debian Edu / Skolelinux.

    On Saturday, the Norwegian member organization taking care of organizing these development gatherings, Fri Programvare i Skolen, will hold its General Assembly for 2010. Membership is open to all, and currently there are 388 people registered as members. Last year 32 members cast their vote in the memberdb based election system. I hope more people find time to vote this year.

    27th November 2010

    In the latest issue of Linux Journal, the readers' choices were presented, and the winner among the multimedia players was VLC. Personally, I like VLC, and it is my player of choice when I first try to play a video file or stream. Only if VLC fails will I drag out gmplayer to see if it can do better. The reason is mostly the failure model and trust. When VLC fails, it normally pops up an error message reporting the problem. When mplayer fails, it normally segfaults or just hangs. The latter failure mode drains my trust in the program.

    But even if VLC is my player of choice, we have chosen to use mplayer in Debian Edu/Skolelinux. The reason is simple. We need a good browser plugin to play web videos seamlessly, and the VLC browser plugin is not very good. For example, it lacks in-line control buttons, so there is no way for the user to pause the video. Also, when I last tested the browser plugins available in Debian, the VLC plugin failed on several video pages where mplayer based plugins worked. If the browser plugin for VLC were as good as the gecko-mediaplayer package (which uses mplayer), we would switch.

    While VLC is a good player, its user interface is slightly annoying. The most annoying feature is its inconsistent use of keyboard shortcuts. When the player is in full screen mode, its shortcuts are different from when it is playing the video in a window. For example, space only works as pause in full screen mode. I wish it had consistent shortcuts and that space would also work in window mode. Another nice shortcut in gmplayer is [enter] to restart the current video. It is very handy when playing short videos from the web and one wants to restart the video when new people arrive to have a look at what is going on.

    22nd November 2010

    Michael Biebl suggested to me on IRC that I change my automated upgrade testing of the Lenny Gnome and KDE desktops to run apt-get autoremove when using apt-get. This seemed like a very good idea, so I adjusted my test scripts and can now present the updated result from today:

    This is for Gnome:

    Installed using apt-get, missing with aptitude

    apache2.2-bin aptdaemon baobab binfmt-support browser-plugin-gnash cheese-common cli-common cups-pk-helper dmz-cursor-theme empathy empathy-common freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libdiscid0 libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgconf2.0-cil libgdata-common libgdata7 libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libsrtp0 libssh-4 libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy p7zip-full pkg-config python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform python-coherence python-configobj python-crypto python-cupshelpers python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr

    Installed using apt-get, removed with aptitude

    cheese ekiga eog epiphany-extensions evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnuchess gucharmap guile-1.8-libs libavahi-ui0 libdmx1 libgalago3 libgtk-vnc-1.0-0 libgtksourceview2.0-0 liblircclient0 libsdl1.2debian-alsa libspeexdsp1 libsvga1 rhythmbox seahorse sound-juicer system-config-printer totem-common transmission-gtk vinagre vino

    Installed using aptitude, missing with apt-get

    gstreamer0.10-gnomevfs

    Installed using aptitude, removed with apt-get

    [nothing]

    This is for KDE:

    Installed using apt-get, missing with aptitude

    ksmserver

    Installed using apt-get, removed with aptitude

    kwin network-manager-kde

    Installed using aptitude, missing with apt-get

    arts dolphin freespacenotifier google-gadgets-gst google-gadgets-xul kappfinder kcalc kcharselect kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdeartwork-emoticons kdeartwork-style kdeartwork-theme-icon kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeeject kdelibs kdeplasma-addons kdeutils kdewallpapers kdf kfloppy kgpg khelpcenter4 kinfocenter konq-plugins-l10n konqueror-nsplugins kscreensaver kscreensaver-xsavers ktimer kwrite libgle3 libkde4-ruby1.8 libkonq5 libkonq5-templates libnetpbm10 libplasma-ruby libplasma-ruby1.8 libqt4-ruby1.8 marble-data marble-plugins netpbm nuvola-icon-theme plasma-dataengines-workspace plasma-desktop plasma-desktopthemes-artwork plasma-runners-addons plasma-scriptengine-googlegadgets plasma-scriptengine-python plasma-scriptengine-qedje plasma-scriptengine-ruby plasma-scriptengine-webkit plasma-scriptengines plasma-wallpapers-addons plasma-widget-folderview plasma-widget-networkmanagement ruby sweeper update-notifier-kde xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod

    Installed using aptitude, removed with apt-get

    ark google-gadgets-common google-gadgets-qt htdig kate kdebase-bin kdebase-data kdepasswd kfind klipper konq-plugins konqueror ksysguard ksysguardd libarchive1 libcln6 libeet1 libeina-svn-06 libggadget-1.0-0b libggadget-qt-1.0-0b libgps19 libkdecorations4 libkephal4 libkonq4 libkonqsidebarplugin4a libkscreensaver5 libksgrd4 libksignalplotter4 libkunitconversion4 libkwineffects1a libmarblewidget4 libntrack-qt4-1 libntrack0 libplasma-geolocation-interface4 libplasmaclock4a libplasmagenericshell4 libprocesscore4a libprocessui4a libqalculate5 libqedje0a libqtruby4shared2 libqzion0a libruby1.8 libscim8c2a libsmokekdecore4-3 libsmokekdeui4-3 libsmokekfile3 libsmokekhtml3 libsmokekio3 libsmokeknewstuff2-3 libsmokeknewstuff3-3 libsmokekparts3 libsmokektexteditor3 libsmokekutils3 libsmokenepomuk3 libsmokephonon3 libsmokeplasma3 libsmokeqtcore4-3 libsmokeqtdbus4-3 libsmokeqtgui4-3 libsmokeqtnetwork4-3 libsmokeqtopengl4-3 libsmokeqtscript4-3 libsmokeqtsql4-3 libsmokeqtsvg4-3 libsmokeqttest4-3 libsmokeqtuitools4-3 libsmokeqtwebkit4-3 libsmokeqtxml4-3 libsmokesolid3 libsmokesoprano3 libtaskmanager4a libtidy-0.99-0 libweather-ion4a libxklavier16 libxxf86misc1 okteta oxygencursors plasma-dataengines-addons plasma-scriptengine-superkaramba plasma-widget-lancelot plasma-widgets-addons plasma-widgets-workspace polkit-kde-1 ruby1.8 systemsettings update-notifier-common

    Running apt-get autoremove made the results using apt-get and aptitude a bit more similar, but there are still quite a lot of differences. I have no idea which packages should be installed after the upgrade, but hope those that do can have a look.
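
    For reference, the test itself boils down to upgrading a Lenny chroot to Squeeze with each package manager and comparing the resulting package lists. A simplified sketch of the apt-get run (the actual test scripts are more elaborate):

    # Inside a Lenny chroot with squeeze entries in sources.list:
    apt-get update
    apt-get -y dist-upgrade
    apt-get -y autoremove
    dpkg-query -W -f '${Package}\n' | sort > packages-apt-get.txt
    # Repeat in a fresh chroot using aptitude, then compare:
    #   diff packages-apt-get.txt packages-aptitude.txt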

    22nd November 2010

    Most of the computers in use by the Debian Edu/Skolelinux project are virtual machines. They have been Xen machines running on a fairly old IBM eserver xseries 345 machine, and we wanted to migrate them to KVM on a newer Dell PowerEdge 2950 host machine. This was a bit harder than it could have been, because we had set up the Xen virtual machines to get their virtual partitions from LVM, which as far as I know is not supported by KVM. So to migrate, we had to convert several LVM logical volumes to partitions on a virtual disk file.

    I found a nice recipe to do this, and wrote the following script for the migration. It uses qemu-img from the qemu package to make the disk image, parted to partition it, losetup and kpartx to present the disk image partitions as devices, and dd to copy the data. I NFS mounted the new server's storage area on the old server to do the migration.

    #!/bin/sh
    
    # Based on
    # http://searchnetworking.techtarget.com.au/articles/35011-Six-steps-for-migrating-Xen-virtual-machines-to-KVM
    
    set -e
    set -x
    
    if [ -z "$1" ] ; then
        echo "Usage: $0 <hostname>"
        exit 1
    else
        host="$1"
    fi
    
    if [ ! -e /dev/vg_data/$host-disk ] ; then
        echo "error: unable to find LVM volume for $host"
        exit 1
    fi
    
    # Partitions need to be a bit bigger than the LVM LVs.  Not sure why.
    disksize=$( lvs --units m | grep $host-disk | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
    swapsize=$( lvs --units m | grep $host-swap | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
    totalsize=$(( ( $disksize + $swapsize ) ))
    
    img=$host.img
    #dd if=/dev/zero of=$img bs=1M count=$(( $disksize + $swapsize ))
    qemu-img create $img ${totalsize}M
    
    parted $img mklabel msdos
    # Partition 1 holds the file system, partition 2 the swap area.
    parted $img mkpart primary ext2 0 $disksize
    parted $img mkpart primary linux-swap $disksize $totalsize
    parted $img set 1 boot on
    
    modprobe dm-mod
    losetup /dev/loop0 $img
    kpartx -a /dev/loop0
    
    dd if=/dev/vg_data/$host-disk of=/dev/mapper/loop0p1 bs=1M
    fsck.ext3 -f /dev/mapper/loop0p1 || true
    mkswap /dev/mapper/loop0p2
    
    kpartx -d /dev/loop0
    losetup -d /dev/loop0
    

    The script is perhaps so simple that it is not copyrightable, but if it is, it is licensed under the GPL v2 or later at your discretion.

    After doing this, I booted a Debian CD in rescue mode in KVM with the new disk image attached, installed grub-pc and linux-image-686 and set up grub to boot from the disk image. After this, the KVM machines seem to work just fine.
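
    The rescue steps go roughly like this; a hypothetical sketch, assuming the disk image shows up as /dev/hda inside the rescue environment:

    # Make the new disk image bootable from a rescue shell.
    mount /dev/hda1 /mnt
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    chroot /mnt
    apt-get install grub-pc linux-image-686
    grub-install /dev/hda
    update-grub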

    20th November 2010

    I'm still running upgrade testing of the Lenny Gnome and KDE Desktop, but have not had time to spend on reporting the status. Here is a short update based on a test I ran 20101118.

    I still do not know what a correct migration should look like, so I report any differences between apt and aptitude and hope someone else can see if anything should be changed.

    This is for Gnome:

    Installed using apt-get, missing with aptitude

    apache2.2-bin aptdaemon at-spi baobab binfmt-support browser-plugin-gnash cheese-common cli-common cpp-4.3 cups-pk-helper dmz-cursor-theme empathy empathy-common finger freedesktop-sound-theme freeglut3 gconf-defaults-service gdm-themes gedit-plugins geoclue geoclue-hostip geoclue-localnet geoclue-manual geoclue-yahoo gnash gnash-common gnome gnome-backgrounds gnome-cards-data gnome-codec-install gnome-core gnome-desktop-environment gnome-disk-utility gnome-screenshot gnome-search-tool gnome-session-canberra gnome-spell gnome-system-log gnome-themes-extras gnome-themes-more gnome-user-share gs-common gstreamer0.10-fluendo-mp3 gstreamer0.10-tools gtk2-engines gtk2-engines-pixbuf gtk2-engines-smooth hal-info hamster-applet libapache2-mod-dnssd libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libart2.0-cil libatspi1.0-0 libboost-date-time1.42.0 libboost-python1.42.0 libboost-thread1.42.0 libchamplain-0.4-0 libchamplain-gtk-0.4-0 libcheese-gtk18 libclutter-gtk-0.10-0 libcryptui0 libcupsys2 libdiscid0 libeel2-data libelf1 libepc-1.0-2 libepc-common libepc-ui-1.0-2 libfreerdp-plugins-standard libfreerdp0 libgail-common libgconf2.0-cil libgdata-common libgdata7 libgdl-1-common libgdu-gtk0 libgee2 libgeoclue0 libgexiv2-0 libgif4 libglade2.0-cil libglib2.0-cil libgmime2.4-cil libgnome-vfs2.0-cil libgnome2.24-cil libgnomepanel2.24-cil libgnomeprint2.2-data libgnomeprintui2.2-common libgnomevfs2-bin libgpod-common libgpod4 libgtk2.0-cil libgtkglext1 libgtksourceview-common libgtksourceview2.0-common libmono-addins-gui0.2-cil libmono-addins0.2-cil libmono-cairo2.0-cil libmono-corlib2.0-cil libmono-i18n-west2.0-cil libmono-posix2.0-cil libmono-security2.0-cil libmono-sharpzip2.84-cil libmono-system2.0-cil libmtp8 libmusicbrainz3-6 libndesk-dbus-glib1.0-cil libndesk-dbus1.0-cil libopal3.6.8 libpolkit-gtk-1-0 libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libpt2.6.7 libpython2.6 librpm1 librpmio1 libsdl1.2debian libservlet2.4-java libsrtp0 libssh-4 libtelepathy-farsight0 libtelepathy-glib0 libtidy-0.99-0 libxalan2-java libxerces2-java media-player-info mesa-utils mono-2.0-gac mono-gac mono-runtime nautilus-sendto nautilus-sendto-empathy openoffice.org-writer2latex openssl-blacklist p7zip p7zip-full pkg-config python-4suite-xml python-aptdaemon python-aptdaemon-gtk python-axiom python-beautifulsoup python-bugbuddy python-clientform python-coherence python-configobj python-crypto python-cupshelpers python-cupsutils python-eggtrayicon python-elementtree python-epsilon python-evolution python-feedparser python-gdata python-gdbm python-gst0.10 python-gtkglext1 python-gtkmozembed python-gtksourceview2 python-httplib2 python-louie python-mako python-markupsafe python-mechanize python-nevow python-notify python-opengl python-openssl python-pam python-pkg-resources python-pyasn1 python-pysqlite2 python-rdflib python-serial python-tagpy python-twisted-bin python-twisted-conch python-twisted-core python-twisted-web python-utidylib python-webkit python-xdg python-zope.interface remmina remmina-plugin-data remmina-plugin-rdp remmina-plugin-vnc rhythmbox-plugin-cdrecorder rhythmbox-plugins rpm-common rpm2cpio seahorse-plugins shotwell software-center svgalibg1 system-config-printer-udev telepathy-gabble telepathy-mission-control-5 telepathy-salut tomboy totem totem-coherence totem-mozilla totem-plugins transmission-common xdg-user-dirs xdg-user-dirs-gtk xserver-xephyr zip

    Installed using apt-get, removed with aptitude

    arj bluez-utils cheese dhcdbd djvulibre-desktop ekiga eog epiphany-extensions epiphany-gecko evolution-exchange fast-user-switch-applet file-roller gcalctool gconf-editor gdm gedit gedit-common gnome-app-install gnome-games gnome-games-data gnome-nettool gnome-system-tools gnome-themes gnome-utils gnome-vfs-obexftp gnome-volume-manager gnuchess gucharmap guile-1.8-libs hal libavahi-compat-libdnssd1 libavahi-core5 libavahi-ui0 libbind9-50 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcurl3 libdirectfb-1.0-0 libdmx1 libdvdread3 libedata-cal1.2-6 libedataserver1.2-9 libeel2-2.20 libepc-1.0-1 libepc-ui-1.0-1 libexchange-storage1.2-3 libfaad0 libgadu3 libgalago3 libgd2-noxpm libgda3-3 libgda3-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnome-desktop-2 libgnome-pilot2 libgnomecups1.0-1 libgnomeprint2.2-0 libgnomeprintui2.2-0 libgpod3 libgraphviz4 libgtk-vnc-1.0-0 libgtkhtml2-0 libgtksourceview1.0-0 libgtksourceview2.0-0 libgucharmap6 libhesiod0 libicu38 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libkpathsea4 liblircclient0 libltdl3 liblwres50 libmagick++10 libmagick10 libmalaga7 libmozjs1d libmpfr1ldbl libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpisock9 libpisync1 libpoppler-glib3 libpoppler3 libpt-1.10.10 libraw1394-8 libsdl1.2debian-alsa libsensors3 libsexy2 libsmbios2 libsoup2.2-8 libspeexdsp1 libssh2-1 libsuitesparse-3.1.0 libsvga1 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libvoikko1 libxalan2-java-gcj libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common rhythmbox seahorse sound-juicer swfdec-gnome system-config-printer totem-common totem-gstreamer transmission-gtk vinagre vino w3c-dtd-xhtml wodim

    Installed using aptitude, missing with apt-get

    gstreamer0.10-gnomevfs

    Installed using aptitude, removed with apt-get

    [nothing]

    This is for KDE:

    Installed using apt-get, missing with aptitude

    autopoint bomber bovo cantor cantor-backend-kalgebra cpp-4.3 dcoprss edict espeak espeak-data eyesapplet fifteenapplet finger gettext ghostscript-x git gnome-audio gnugo granatier gs-common gstreamer0.10-pulseaudio indi kaddressbook-plugins kalgebra kalzium-data kanjidic kapman kate-plugins kblocks kbreakout kbstate kde-icons-mono kdeaccessibility kdeaddons-kfile-plugins kdeadmin-kfile-plugins kdeartwork-misc kdeartwork-theme-window kdeedu kdeedu-data kdeedu-kvtml-data kdegames kdegames-card-data kdegames-mahjongg-data kdegraphics-kfile-plugins kdelirc kdemultimedia-kfile-plugins kdenetwork-kfile-plugins kdepim-kfile-plugins kdepim-kio-plugins kdessh kdetoys kdewebdev kdiamond kdnssd kfilereplace kfourinline kgeography-data kigo killbots kiriki klettres-data kmoon kmrml knewsticker-scripts kollision kpf krosspython ksirk ksmserver ksquares kstars-data ksudoku kubrick kweather libasound2-plugins libboost-python1.42.0 libcfitsio3 libconvert-binhex-perl libcrypt-ssleay-perl libdb4.6++ libdjvulibre-text libdotconf1.0 liberror-perl libespeak1 libfinance-quote-perl libgail-common libgsl0ldbl libhtml-parser-perl libhtml-tableextract-perl libhtml-tagset-perl libhtml-tree-perl libio-stringy-perl libkdeedu4 libkdegames5 libkiten4 libkpathsea5 libkrossui4 libmailtools-perl libmime-tools-perl libnews-nntpclient-perl libopenbabel3 libportaudio2 libpulse-browse0 libservlet2.4-java libspeechd2 libtiff-tools libtimedate-perl libunistring0 liburi-perl libwww-perl libxalan2-java libxerces2-java lirc luatex marble networkstatus noatun-plugins openoffice.org-writer2latex palapeli palapeli-data parley parley-data poster psutils pulseaudio pulseaudio-esound-compat pulseaudio-module-x11 pulseaudio-utils quanta-data rocs rsync speech-dispatcher step svgalibg1 texlive-binaries texlive-luatex ttf-sazanami-gothic

    Installed using apt-get, removed with aptitude

    amor artsbuilder atlantik atlantikdesigner blinken bluez-utils cvs dhcdbd djvulibre-desktop imlib-base imlib11 kalzium kanagram kandy kasteroids katomic kbackgammon kbattleship kblackbox kbounce kbruch kcron kdat kdemultimedia-kappfinder-data kdeprint kdict kdvi kedit keduca kenolaba kfax kfaxview kfouleggs kgeography kghostview kgoldrunner khangman khexedit kiconedit kig kimagemapeditor kitchensync kiten kjumpingcube klatin klettres klickety klines klinkstatus kmag kmahjongg kmailcvt kmenuedit kmid kmilo kmines kmousetool kmouth kmplot knetwalk kodo kolf kommander konquest kooka kpager kpat kpdf kpercentage kpilot kpoker kpovmodeler krec kregexpeditor kreversi ksame ksayit kshisen ksig ksim ksirc ksirtet ksmiletris ksnake ksokoban kspaceduel kstars ksvg ksysv kteatime ktip ktnef ktouch ktron kttsd ktuberling kturtle ktux kuickshow kverbos kview kviewshell kvoctrain kwifimanager kwin kwin4 kwordquiz kworldclock kxsldbg libakode2 libarts1-akode libarts1-audiofile libarts1-mpeglib libarts1-xine libavahi-compat-libdnssd1 libavahi-core5 libavc1394-0 libbind9-50 libbluetooth2 libboost-python1.34.1 libcucul0 libcurl3 libcvsservice0 libdirectfb-1.0-0 libdjvulibre21 libdvdread3 libfaad0 libfreebob0 libgd2-noxpm libgraphviz4 libgsmme1c2a libgtkhtml2-0 libicu38 libiec61883-0 libindex0 libisccc50 libisccfg50 libiw29 libjaxp1.3-java-gcj libk3b3 libkcal2b libkcddb1 libkdeedu3 libkdegames1 libkdepim1a libkgantt0 libkleopatra1 libkmime2 libkpathsea4 libkpimexchange1 libkpimidentities1 libkscan1 libksieve0 libktnef1 liblockdev1 libltdl3 liblwres50 libmagick10 libmimelib1c2a libmodplug0c2 libmozjs1d libmpcdec3 libmpfr1ldbl libneon27 libnm-util0 libopensync0 libpisock9 libpoppler-glib3 libpoppler-qt2 libpoppler3 libraw1394-8 librss1 libsensors3 libsmbios2 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libxalan2-java-gcj libxerces2-java-gcj libxtrap6 lskat mpeglib network-manager-kde noatun pmount tex-common texlive-base texlive-common texlive-doc-base texlive-fonts-recommended tidy ttf-dustin ttf-kochi-gothic ttf-sjfonts

    Installed using aptitude, missing with apt-get

    dolphin kde-core kde-plasma-desktop kde-standard kde-window-manager kdeartwork kdebase kdebase-apps kdebase-workspace kdebase-workspace-bin kdebase-workspace-data kdeutils kscreensaver kscreensaver-xsavers libgle3 libkonq5 libkonq5-templates libnetpbm10 netpbm plasma-widget-folderview plasma-widget-networkmanagement xscreensaver-data-extra xscreensaver-gl xscreensaver-gl-extra xscreensaver-screensaver-bsod

    Installed using aptitude, removed with apt-get

    kdebase-bin konq-plugins konqueror

    20th November 2010

    Answering the call from the Gnash project for buildbot slaves to test the current source, I have set up a virtual KVM machine on the Debian Edu/Skolelinux virtualization host to test the git source on Debian/Squeeze. I hope this can help the developers in getting new releases out more often.

    As the developers want less main-stream build platforms tested too, I have considered setting up a Debian/kfreebsd machine as well. I have also considered using the kfreebsd architecture in Debian as a file server in NUUG to get access to the 5 TB zfs volume we currently use to store DV video. Because of this, I finally got around to doing a test installation of Debian/Squeeze with kfreebsd. Installation went fairly smoothly, though I noticed some visual glitches in the cdebconf dialogs (black cursor left on the screen at random locations). I have not gotten very far with the testing. I noticed cfdisk did not work, but fdisk did, so it was not a fatal problem. I have to spend some more time on it to see if it is useful as a file server for NUUG. I will try to find time to set up a gnash buildbot slave on the Debian Edu/Skolelinux virtualization host this weekend.

    9th November 2010

    3D printing is just great. I just came across this Debian logo in 3D, linked from the Thingiverse blog.

    7th November 2010

    Prioritising packages for the Debian Edu / Skolelinux DVD, which is supposed to provide a school with all the services and user applications needed on the pupils' computer network, has always been hard. Even schools without Internet connections should be able to get Debian Edu working using this DVD.

    The job became a lot harder when apt and aptitude started installing recommended packages by default. We want the same set of packages to be installed when using the DVD and the netinst CD, and that means all recommended packages need to be on the DVD. I created a patch for debian-cd in BTS report #601203 to do this, and since this change was applied to the Debian Edu DVD build, we have been seriously short on space.

    A few days ago we decided to drop blender, wxmaxima and kicad from the default installation to save space on the DVD, believing that those needing these applications are few and can get them from the Debian archive.

    Yesterday, I had a look at the packages on the DVD to see which source packages were using the most space. A few large packages are well known: openoffice.org, openclipart and fluid-soundfont. But I also discovered that lilypond used 106 MiB and fglrx-driver used 53 MiB. The lilypond package is pulled in as a dependency of rosegarden, and when looking a bit closer I discovered that 99 of the 106 MiB were the documentation package, which is recommended by the binary package. I decided to drop this documentation package from our DVD, as most of our users will use the GUI front-ends and do not need the lilypond documentation. Similarly, I dropped the non-free fglrx-driver package, which might be installed by d-i when its hardware is detected, as the free X driver should work.
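
    One way to hunt for such space hogs is to sort the package pool on the DVD by file size; a sketch, with the mount point as an assumption:

    # List the 20 largest .deb files on the mounted DVD image.
    find /media/cdrom/pool -name '*.deb' -printf '%s\t%p\n' | sort -n | tail -20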

    With this change, we finally got space for the LXDE and Gnome desktop packages as well as the language specific packages making the DVD more useful again.

    24th October 2010

    Some updates.

    My gnash pledge to raise money for the project is going well. The lower limit of 10 signers was reached in 24 hours, and so far 13 people have signed it. More signers and more funding are most welcome, and I am really curious how far we can get before the time limit of December 24 is reached. :)

    On the #gnash IRC channel on irc.freenode.net, I was just tipped about what appears to be a great code coverage tool capable of generating code coverage stats without any changes to the source code. It is called kcov, and is run as kcov <directory> <binary>. It is missing in Debian, but the git source built just fine in Squeeze after I installed libelf-dev, libdwarf-dev, pkg-config and libglib2.0-dev. It failed to build in Lenny, but I suspect that is solvable. I hope kcov makes it into Debian soon.
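
    Usage really is that simple; a small sketch with a made-up binary name:

    # Collect coverage for ./mytest into the coverage/ directory, then
    # open the generated HTML report (binary name is made up).
    kcov coverage ./mytest
    sensible-browser coverage/index.html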

    Finally found time to wrap up the release notes for a new alpha release of Debian Edu, and just published the second alpha test release of the Squeeze based Debian Edu / Skolelinux release. Give it a try if you need a complete Linux solution for your school, including central infrastructure server, workstations, thin client servers and diskless workstations. A nice touch added yesterday is RDP support on the thin client servers, for Windows clients to get a Linux desktop on request.

    19th October 2010

    The Gnash project is the most promising solution for a Free Software Flash implementation. It has done great so far, but there is still a long way to go, and recently its funding has dried up. I believe AVM2 support in Gnash is vital to the continued progress of the project, as more and more sites show up with AVM2 Flash files.

    To try to get funding for developing such support, I have started a pledge with the following text:

    "I will pay 100$ to the Gnash project to develop AVM2 support but only if 10 other people will do the same."

    - Petter Reinholdtsen, free software developer

    Deadline to sign up by: 24th December 2010

    The Gnash project needs to get support for the new Flash file format AVM2 to work with a lot of sites using Flash on the web. Gnash already works with a lot of Flash sites using the old AVM1 format, but more and more sites are using the AVM2 format these days. The project web page is available from http://www.getgnash.org/ . Gnash is a free software implementation of Adobe Flash, allowing those of us that do not accept the terms of the Adobe Flash license to get access to Flash sites.

    The project needs funding to get developers to put aside enough time to develop the AVM2 support, and this pledge is my way of trying to get this to happen.

    The project accepts donations via the OpenMediaNow foundation, http://www.openmedianow.org/?q=node/32 .

    I hope you will support this effort too. I hope more than 10 people will participate to make this happen. The more money the project gets, the more features it can develop using these funds. :)

    9th October 2010

    This summer I got the chance to buy cheap Spykee robots, and since then I have worked on getting Linux software in place to control them. The firmware for the robot is available from the producer, and using that source it was trivial to figure out the protocol specification. I've started on a Perl library to control the robots, and made some demo programs using this library.

    The library is quite functional already, and capable of controlling the driving, fetching video, uploading MP3s and playing them. There are a few less important features too.

    A few weeks ago I ran out of time to spend on this project, and I never got around to releasing the current source. I decided today that it was time to do something about it, and uploaded the source to my Debian package store at people.skolelinux.org.

    Because it was simpler for me, I made a Debian package and published both the source and the deb. If you have a Spykee robot, grab the source or binary package:

    If you are interested in helping out with developing this library, please let me know.

    9th September 2010

    A few days ago I had the mixed pleasure of buying a new digital camera, a Canon IXUS 130. It was instructive and very disturbing to be able to verify that this camera producer, too, has the nerve to specify how I can and can not use the videos produced with the camera. Even though I was aware of the issue, the options with new cameras are limited and I ended up buying the camera anyway. What is the problem, you might ask? It is software patents, MPEG-4, H.264 and the MPEG-LA that are the problem, and it is our right to record our experiences without asking for permission that is at risk.

    On page 27 of the Danish instruction manual, the following section appears:

    This product is licensed under AT&T patents for the MPEG-4 standard and may be used for encoding MPEG-4 compliant video and/or decoding MPEG-4 compliant video that was encoded only (1) for a personal and non-commercial purpose or (2) by a video provider licensed under the AT&T patents to provide MPEG-4 compliant video.

    No license is granted or implied for any other use for MPEG-4 standard.

    In short, the camera producer has chosen to use technology (MPEG-4/H.264) that I am only allowed to use for personal and non-commercial purposes, unless I ask for permission from the organisations holding the knowledge monopoly (patent) on the technology used.

    This issue has been brewing for a while, and I recommend reading "Why Our Civilization's Video Art and Culture is Threatened by the MPEG-LA" by Eugenia Loli-Queru and "H.264 Is Not The Sort Of Free That Matters" by Simon Phipps to learn more about the issue. The solution is to support the free and open standards for video, like Ogg Theora, and avoid MPEG-4 and H.264 if you can.

    4th September 2010

    In the Debian popularity-contest numbers, the adobe-flashplugin package is the second most popular package that is missing in Debian. The sixth most popular is flashplayer-mozilla. This is a clear indication that working Flash is important for Debian users. Around 10 percent of the users submitting data to popcon.debian.org have this package installed.

    In the report written by Lars Risan in August 2008 («Skolelinux i bruk – Rapport for Hurum kommune, Universitetet i Agder og stiftelsen SLX Debian Labs», "Skolelinux in use – Report for Hurum municipality, the University of Agder and the SLX Debian Labs foundation"), one of the most important problems schools experienced with Debian Edu/Skolelinux was the lack of working Flash. A lot of educational web sites require Flash to work, and lacking Flash support in the web browser, together with the problems with installing it, was perceived as a good reason to stay with Windows.

    I once saw a funny and sad comment in a web forum, where Linux was described as the retarded cousin that did not really understand everything you told him but could work fairly well. This was a comment on the problems Linux has with proprietary formats and non-standard web pages. It is sad because it exposes a fairly common understanding of whose fault it is when web pages that only work in, for example, Internet Explorer 6 fail to work in Firefox, and funny because it explains very well how annoying it is for users when Linux distributions do not work with the documents they receive or the web pages they want to visit.

    This is part of the reason why I believe it is important for Debian and Debian Edu to have a well working Flash implementation in the distribution, to get at least popular sites such as YouTube and Google Video working out of the box. For Squeeze, Debian has the chance to include the latest version of Gnash that will make this happen, as the new release 0.8.8 was published a few weeks ago and is resting in unstable. The new version works with more sites than version 0.8.7. The Gnash maintainers have asked for a freeze exception, but the release team has not had time to reply yet. I hope they agree with me that Flash is important for Debian desktop users, and thus accept the new package into Squeeze.

    1st September 2010

    This evening I made my first Perl GUI application. The last few days I have worked on a Perl module for controlling my recently acquired Spykee robots, and the module is now complete enough that it is possible to use it to at least drive the robot around. It was now time to figure out how to use it to create a GUI that would allow me to steer the robot. I picked PerlQt, as I have had positive experiences with the Qt API before, and spent a few minutes browsing the web for examples. Using Qt Designer seemed like a shortcut, so I ended up writing the GUI in Qt Designer and compiling it into a Perl program using the puic program from libqt-perl. Nothing fancy yet, but it has buttons to connect and drive around.

    The Perl module I have written provides an object oriented API for controlling the robot. Here is a small example of how to use it:

    use Spykee;
    my %robot;  # filled by the discovery callback: host => info
    Spykee::discover(sub {$robot{$_[0]} = $_[1]});
    my $host = (keys %robot)[0];
    my $spykee = Spykee->new();
    $spykee->contact($host, "admin", "admin");
    $spykee->left();
    sleep 2;
    $spykee->right();
    sleep 2;
    $spykee->forward();
    sleep 2;
    $spykee->back();
    sleep 2;
    $spykee->stop();
    

    Thanks to the release of the source of the robot firmware, I could peek into the implementation at the other end to figure out how to implement the protocol used by the robot. I've implemented several of the commands the robot understands, but am still missing the camera support needed to control the robot remotely. First I want to implement support for uploading new firmware and configuring the wireless network, to make it possible to bootstrap a Spykee robot without the producer's Windows and MacOSX software (I only have Linux, so I had to ask a friend to come over to get the robot testing going. :)

    I will release the source to the public soon, but need to figure out where to make it available first. I will add a link to the NUUG wiki for those that want to check back later to find it.

    30th August 2010

    Just got an email from Tobias Gruetzmacher as a followup on my previous post about sshfs. He reported another problem with sshfs: it fails to handle hard links properly. A simple way to spot this is to look at the . and .. entries in the directory tree. These should have a link count >1, but on sshfs the count is 1. I just tested what happens when trying to create a hard link, and this fails as well:

    % ln foo bar
    ln: creating hard link `bar' => `foo': Function not implemented
    %
    

    I have not yet found time to implement a test for this in my file system test code, but believe having working hard links is useful to avoid surprising unix programs. Not as useful as working file locking and symlinks, which are required to get a working desktop, but useful nevertheless. :)
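
    Until such a test makes it into the C program, a quick shell check does the job; after a successful ln, both names should report a link count of 2:

    # Hard link sanity check using GNU stat (%h prints the link count).
    touch foo
    if ln foo bar && [ "$(stat -c %h foo)" = 2 ]; then
        echo "info: hard links work"
    else
        echo "error: hard links not supported"
    fi
    rm -f foo bar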

    The latest version of the file system test code is available via git from http://github.com/gebi/fs-test

    26th August 2010

    My file system semantics program presented a few days ago is very useful to verify that a file system can work as a unix home directory, and today I had to extend it a bit. I'm looking into alternatives for home directory access here at the University of Oslo, and one of the options is sshfs. My friend Finn-Arne mentioned a while back that they had used sshfs with Debian Edu, but stopped because of problems. I asked today what the problems were, and he mentioned that sshfs failed to handle umask properly. To try to detect the problem, I wrote this addition to my fs testing script:

    mode_t touch_get_mode(const char *name, mode_t mode) {
      mode_t retval = 0;
      int fd = open(name, O_RDWR|O_CREAT|O_LARGEFILE, mode);
      if (-1 != fd) {
        unlink(name);
        struct stat statbuf;
        if (-1 != fstat(fd, &statbuf)) {
          retval = statbuf.st_mode & 0x1ff;
        }
        close(fd);
      }
      return retval;
    }
    
    /* Try to detect problem discovered using sshfs */
    int test_umask(void) {
      printf("info: testing umask effect on file creation\n");
    
      mode_t orig_umask = umask(000);
      mode_t newmode;
      if (0666 != (newmode = touch_get_mode("foobar", 0666))) {
        printf("  error: Wrong file mode %o when creating using mode 666 and umask 000\n",
               newmode);
      }
      umask(007);
      if (0660 != (newmode = touch_get_mode("foobar", 0666))) {
        printf("  error: Wrong file mode %o when creating using mode 666 and umask 007\n",
               newmode);
      }
    
      umask (orig_umask);
      return 0;
    }
    
    int main(int argc, char **argv) {
      [...]
      test_umask();
      return 0;
    }
    

    Sure enough. On NFS to a netapp, I get this result:

    Testing POSIX/Unix sematics on file system
    info: testing symlink creation
    info: testing subdirectory creation
    info: testing fcntl locking
      Read-locking 1 byte from 1073741824
      Read-locking 510 byte from 1073741826
      Unlocking 1 byte from 1073741824
      Write-locking 1 byte from 1073741824
      Write-locking 510 byte from 1073741826
      Unlocking 2 byte from 1073741824
    info: testing umask effect on file creation
    

    When mounting the same directory using sshfs, I get this result:

    Testing POSIX/Unix sematics on file system
    info: testing symlink creation
    info: testing subdirectory creation
    info: testing fcntl locking
      Read-locking 1 byte from 1073741824
      Read-locking 510 byte from 1073741826
      Unlocking 1 byte from 1073741824
      Write-locking 1 byte from 1073741824
      Write-locking 510 byte from 1073741826
      Unlocking 2 byte from 1073741824
    info: testing umask effect on file creation
      error: Wrong file mode 644 when creating using mode 666 and umask 000
      error: Wrong file mode 640 when creating using mode 666 and umask 007
    

    So, I can conclude that sshfs is better than SMB to a Netapp or a Windows server, but not good enough to be used for home directories.

    Update 2010-08-26: Reported the issue in BTS report #594498

    Update 2010-08-27: Michael Gebetsroither reports that he found the script so useful that he created a git repository for it at http://github.com/gebi/fs-test.

    15th August 2010

    I found the notes from Rob Weir on how to crush dissent matching my own thoughts on the matter quite well. Highly recommended for those wondering which road our society should go down. In my view we have been heading the wrong way for a long time.

    9th August 2010

    As reported earlier, the last few days I have looked at how Debian Edu clients are configured, and tried to get rid of all hardcoded configuration settings on the clients. I believe the work to be mostly done, and the clients seem to work just fine with dynamically generated configuration.

    What is the point, you might ask? The point is to allow a Debian Edu desktop to integrate into an existing network infrastructure without any manual configuration.

    This is what happens when installing a Debian Edu client here at the University of Oslo using PXE. With the PXE installation, I am asked for language (Norwegian Bokmål), locality (Norway) and keyboard layout (no-latin1), Debian Edu profile (Roaming Workstation), if I accept to reformat the hard drive (yes), if I want to submit info to popcon.debian.org (no) and root password (secret). After answering these questions, the installer goes ahead and does its thing, and after around 50 minutes it is done. I press enter to finish the installation, and the machine reboots into KDE. When the machine is ready and kdm asks for login information, I enter my university username and password, am told by kdm that a local home directory has been created and that I must log in again, and finally log in with the same username and password to the KDE 4.4 desktop. At no point during this process did it ask for university specific settings, and all the required configuration was dynamically detected using information fetched via DHCP and DNS. The roaming workstation is now ready for use.

    How was this done, you might wonder? First of all, here is the list of things that need to be configured on the client to get it working properly out of the box:

    • IP address/netmask and DNS server.
    • Web proxy URL.
    • LDAP server for NSS directory information (user, group, etc).
    • Kerberos server for PAM password checking.
    • SMB mount point to access the network home directory. (*)
    • Central syslog server to send syslog messages to. (*)
    • Sitesummary collector URL to submit info to central server. (*)

    (Hm, did I forget anything? Let me know if I did.)

    The points marked (*) are not required to be able to use the machine, but are needed to provide central storage and to allow system administrators to track their machines. Since yesterday, everything but the sitesummary collector URL is dynamically discovered at boot and installation time in the svn version of Debian Edu.

    The IP and DNS setup is fetched during boot using DHCP as usual. When a DHCP update arrives, the proxy setup is updated by looking for http://wpad/wpad.dat and using the content of this WPAD file to configure the http and ftp proxy in /etc/environment and /etc/apt/apt.conf. I decided to update the proxy setup using a DHCP hook to ensure that the client stops using the Debian Edu proxy when it is moved outside the Debian Edu network, and instead uses any local proxy present on the new network when it moves around.
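
    A sketch of what such a DHCP hook can look like; the file name, PAC parsing and target file below are simplified assumptions, not the actual Debian Edu implementation:

    #!/bin/sh
    # Hypothetical /etc/dhcp3/dhclient-exit-hooks.d/wpad: refresh the
    # proxy setting whenever a new DHCP lease is obtained.
    case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        if wget -q -O /tmp/wpad.dat http://wpad/wpad.dat ; then
            proxy=$(sed -n 's/.*PROXY \([^";]*\).*/\1/p' /tmp/wpad.dat | head -n 1)
            [ -n "$proxy" ] && echo "http_proxy=http://$proxy/" >> /etc/environment
        fi
        ;;
    esac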

    The DNS names of the LDAP, Kerberos and syslog servers and related configuration are generated using DNS information at boot. First the installer looks for a host named ldap in the current DNS domain. If not found, it looks for _ldap._tcp SRV records in DNS instead. If an LDAP server is found, its root DSE entry is requested and the attributes namingContexts and defaultNamingContext are used to determine which LDAP base to use for NSS. If there are several namingContexts attributes and the defaultNamingContext is present, that LDAP subtree is used as the base. If defaultNamingContext is missing, the subtrees listed as namingContexts are searched in sequence for any object with class posixAccount or posixGroup, and the first one with such an object is used as the LDAP base. For Kerberos, a similar search is done by first looking for a host named kerberos, and then for the _kerberos._tcp SRV record. I've been unable to find a way to look up the Kerberos realm, so for this the upper case string of the current DNS domain is used.
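
    The same discovery can be done by hand to check what the installer will find; a sketch using an example domain:

    # Manual version of the discovery described above (example.org is a
    # stand-in for the local DNS domain).
    domain=example.org
    host ldap.$domain || host -t SRV _ldap._tcp.$domain
    host kerberos.$domain || host -t SRV _kerberos._tcp.$domain
    # Ask the LDAP server's root DSE which naming contexts it offers:
    ldapsearch -x -H ldap://ldap.$domain -s base -b '' \
        namingContexts defaultNamingContext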

    For the syslog server, the hosts syslog and loghost are searched for, and the _syslog._udp SRV record is consulted if no such host is found. This algorithm works for both Debian Edu and the University of Oslo. A similar strategy would work for locating the sitesummary server, but it has not been implemented yet. I decided to fetch and save these settings during installation, to make sure moving to a different network does not change the set of users being allowed to log in nor the passwords required to log in. Usernames and passwords will be cached by sssd when the user logs in on the Debian Edu network, and will not change as the laptop moves around. For a non-roaming machine, there is no caching, but given that it is supposed to stay in place it should not matter much. Perhaps we should switch those to use sssd too?

    The user's SMB mount point for the network home directory is located when the user logs in for the first time. The LDAP server is consulted to look for the user's LDAP object, and the sambaHomePath attribute is used if found. If it isn't found, the home directory path fetched from NSS is used instead. Assuming the path is of the form /site/server/directory/username, the second part is looked up in DNS and used to generate a SMB URL of the form smb://server.domain/username. This algorithm works for both Debian Edu and the University of Oslo. Perhaps there are better attributes to use or a better algorithm that works for more sites, but this will do for now. :)
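
    In shell terms, the derivation looks something like this; the path layout and names are the examples used above:

    # Derive an SMB URL from a /site/server/directory/username style path.
    path=/site/server/directory/username
    server=$(echo "$path" | cut -d/ -f3)
    user=$(echo "$path" | cut -d/ -f5)
    domain=$(hostname -d)   # assume the file server is in the local DNS domain
    echo "smb://$server.$domain/$user"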

    This work should make it easier to integrate the Debian Edu clients into any LDAP/Kerberos infrastructure, and makes the current setup even more flexible than before. I suspect it will also work for thin client servers, allowing one to easily set up LTSP and hook it into an existing network infrastructure, but I have not had time to test this yet.

    If you want to help out with implementing these things for Debian Edu, please contact us on debian-edu@lists.debian.org.

    Update 2010-08-09: Simon Farnsworth gave me a heads-up on how to detect the Kerberos realm from DNS, by looking for _kerberos TXT entries before falling back to the upper case DNS domain name. Will have to implement it for Debian Edu. :)
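
    The lookup Simon described is a one-liner:

    # Check for a realm announcement in DNS (example.org is a stand-in).
    host -t TXT _kerberos.example.org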

    8th August 2010

    A few years ago, I was involved in a project planning to use Windows file servers as home directory servers for Debian Edu/Skolelinux machines. This was thought to be no problem, as the access would be through the SMB network file system protocol, and we knew other sites used SMB with unix and samba as the file server to mount home directories without any problems. But, after months of struggling, we had to conclude that our goal was impossible.

    The reason is simply that while SMB can be used for home directories when the file server is Samba running on Unix, this only works because Samba has some extensions, and because the underlying file system is a unix file system. When using a Windows file server, the underlying file system does not have POSIX semantics, and several programs will fail if the user's home directory, where they want to store their configuration, lacks POSIX semantics.

    As part of this work, I wrote a small C program I want to share with you all, to replicate a few of the problematic applications (like OpenOffice.org and GCompris) and see if the file system was working as it should. If you find yourself in spooky file system land, it might help you find your way out again. This is the fs-test.c source:

    /*
     * Some tests to check the file system sematics.  Used to verify that
     * CIFS from a windows server do not work properly as a linux home
     * directory.
     * License: GPL v2 or later
     * 
     * needs libsqlite3-dev and build-essential installed
     * compile with: gcc -Wall -lsqlite3 -DTEST_SQLITE fs-test.c -o fs-test
    */
    
    #define _FILE_OFFSET_BITS 64
    #define _LARGEFILE_SOURCE 1
    #define _LARGEFILE64_SOURCE 1
    
    #define _GNU_SOURCE /* for asprintf() */
    
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <sys/file.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>
    
    #ifdef TEST_SQLITE
    /*
     * Test sqlite open, as done by gcompris require the libsqlite3-dev
     * package and linking with -lsqlite3.  A more low level test is
     * below.
     * See also <URL: http://www.sqlite.org./faq.html#q5 >.
     */
    #include <sqlite3.h>
    #define CREATE_TABLE_USERS                                              \
      "CREATE TABLE users (user_id INT UNIQUE, login TEXT, lastname TEXT, firstname TEXT, birthdate TEXT, class_id INT ); "
    int test_sqlite_open(void) {
      char *zErrMsg;
      char *name = "testsqlite.db";
      sqlite3 *db=NULL;
      unlink(name);
      int rc = sqlite3_open(name, &db);
      if( rc ){
        printf("error: sqlite open of %s failed: %s\n", name, sqlite3_errmsg(db));
        sqlite3_close(db);
        return -1;
      }
    
      /* create tables */
      rc = sqlite3_exec(db,CREATE_TABLE_USERS, NULL,  0, &zErrMsg);
      if( rc != SQLITE_OK ){
        printf("error: sqlite table create failed: %s\n", zErrMsg);
        sqlite3_close(db);
        return -1;
      }
      printf("info: sqlite worked\n");
      sqlite3_close(db);
      return 0;
    }
    #endif /* TEST_SQLITE */
    
    /*
     * Demonstrate locking issue found in gcompris using sqlite3.  This
     * work with ext3, but not with cifs server on Windows 2003.  This is
     * done in the sqlite3 library.
     * See also
     * <URL:http://www.cygwin.com/ml/cygwin/2001-08/msg00854.html> and the
     * POSIX specification
     * <URL:http://www.opengroup.org/onlinepubs/009695399/functions/fcntl.html>.
     */
    int test_gcompris_locking(void) {
      struct flock fl;
      char *name = "testsqlite.db";
      unlink(name);
      int fd = open(name, O_RDWR|O_CREAT|O_LARGEFILE, 0644);
      printf("info: testing fcntl locking\n");
    
      fl.l_whence = SEEK_SET;
      fl.l_pid    = getpid();
      printf("  Read-locking 1 byte from 1073741824");
      fl.l_start  = 1073741824;
      fl.l_len    = 1;
      fl.l_type   = F_RDLCK;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      printf("  Read-locking 510 byte from 1073741826");
      fl.l_start  = 1073741826;
      fl.l_len    = 510;
      fl.l_type   = F_RDLCK;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      printf("  Unlocking 1 byte from 1073741824");
      fl.l_start  = 1073741824;
      fl.l_len    = 1;
      fl.l_type   = F_UNLCK;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      printf("  Write-locking 1 byte from 1073741824");
      fl.l_start  = 1073741824;
      fl.l_len    = 1;
      fl.l_type   = F_WRLCK;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      printf("  Write-locking 510 byte from 1073741826");
      fl.l_start  = 1073741826;
      fl.l_len    = 510;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      printf("  Unlocking 2 byte from 1073741824");
      fl.l_start  = 1073741824;
      fl.l_len    = 2;
      fl.l_type   = F_UNLCK;
      if (0 != fcntl(fd, F_SETLK, &fl) ) printf(" - error!\n"); else printf("\n");
    
      close(fd);
      return 0;
    }
    
    /*
     * Test if permissions of freshly created directories allow entries
     * below them.  This was a problem with OpenOffice.org and gcompris.
     * Mounting with option 'sync' seem to solve this problem while
     * slowing down file operations.
     */
    int test_subdirectory_creation(void) {
    #define LEVELS 5
      char *path = strdup("test");
      char *dirs[LEVELS];
      int level;
      printf("info: testing subdirectory creation\n");
      for (level = 0; level < LEVELS; level++) {
        char *newpath = NULL;
        if (-1 == mkdir(path, 0777)) {
          printf("  error: Unable to create directory '%s': %s\n",
    	     path, strerror(errno));
          break;
        }
        asprintf(&newpath, "%s/%s", path, "test");
        free(path);
        path = newpath;
      }
      return 0;
    }
    
    /*
     * Test if symlinks can be created.  This was a problem detected with
     * KDE.
     */
    int test_symlinks(void) {
      printf("info: testing symlink creation\n");
      unlink("symlink");
      if (-1 == symlink("file", "symlink"))
        printf("  error: Unable to create symlink\n");
      return 0;
    }
    
    int main(int argc, char **argv) {
      printf("Testing POSIX/Unix sematics on file system\n");
      test_symlinks();
      test_subdirectory_creation();
    #ifdef TEST_SQLITE
      test_sqlite_open();
    #endif /* TEST_SQLITE */
      test_gcompris_locking();
      return 0;
    }
    

    When everything is working, it should print something like this:

    Testing POSIX/Unix sematics on file system
    info: testing symlink creation
    info: testing subdirectory creation
    info: sqlite worked
    info: testing fcntl locking
      Read-locking 1 byte from 1073741824
      Read-locking 510 byte from 1073741826
      Unlocking 1 byte from 1073741824
      Write-locking 1 byte from 1073741824
      Write-locking 510 byte from 1073741826
      Unlocking 2 byte from 1073741824
    

    I do not remember the exact details of the problems we saw, but one of them was with locking: if I remember correctly, POSIX allows a read-only lock to be upgraded to a read-write lock without unlocking the read-only lock first (while Windows does not). Another was a bug in the CIFS/SMB client implementation in the Linux kernel where directory meta information would be wrong for a fraction of a second, making OpenOffice.org fail to create its deep directory tree because it was not allowed to create files in its freshly created directory.

    Anyway, here is a nice tool for your tool box; may you never need it. :)

    Update 2010-08-27: Michael Gebetsroither reports that he found the script so useful that he created a GIT repository and stored it in http://github.com/gebi/fs-test.

    7th August 2010

    A few days ago, I tried to install a roaming workstation profile from Debian Edu/Squeeze while on the university network here at the University of Oslo, and noticed how much had to change to get it operational using the university infrastructure. It was fairly easy, but it occurred to me that Debian Edu would improve a lot if the client could connect without any changes at all, configuring itself during installation and first boot to use the infrastructure around it. Now I am a huge step further along that road.

    With our current squeeze-test packages, I can select the roaming workstation profile and get a working laptop connecting to the university LDAP server for user and group information, and to our Active Directory servers for Kerberos authentication. All this without any configuration at all during installation. My user's home directory got a bookmark in the KDE menu to mount it via SMB, with the correct URL. In short, openldap and sssd are correctly configured. In addition to this, the client looks for http://wpad/wpad.dat to configure a web proxy, and when it fails to find it, no proxy settings are stored in /etc/environment and /etc/apt/apt.conf. Iceweasel and KDE are configured to look for the same wpad configuration, and likewise use no proxy when on the university network. If the machine is moved to a network with such a wpad setup, it will automatically use it when DHCP hands it an IP address.
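
    The wpad detection can be illustrated with a small shell sketch. This is not the actual Debian Edu code, and the proxy value written is a made-up example; a real implementation would extract the proxy address from the fetched wpad.dat file:

    if wget -q -O /dev/null http://wpad/wpad.dat ; then
        # hypothetical proxy address, a real script would parse wpad.dat
        echo "http_proxy=http://wpad.example.org:3128/" >> /etc/environment
    fi
    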

    The LDAP server is located using DNS, by first looking for the DNS entry ldap.$domain. If this does not exist, it looks for the _ldap._tcp.$domain SRV records and uses the first one as the LDAP server. Next, it connects to the LDAP server and searches all namingContexts entries for posixAccount or posixGroup objects, and picks the first one as the LDAP base. For Kerberos, a similar algorithm is used to locate the Kerberos server, and the realm is the uppercase version of $domain.
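
    A rough command line simulation of this lookup order could look like the following sketch. It is not the actual implementation, and example.org is a placeholder domain:

    domain=example.org
    if host ldap.$domain > /dev/null 2>&1 ; then
        server=ldap.$domain
    else
        # use the target of the first SRV record, stripping the trailing dot
        server=$(host -t srv _ldap._tcp.$domain | \
            awk '/has SRV record/ { print $NF; exit }' | sed 's/\.$//')
    fi
    # list the namingContexts values to locate a suitable LDAP base
    ldapsearch -h "$server" -x -s base -b '' namingContexts
    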

    So, what is not working, you might ask. SMB mounting of my home directory does not work. No idea why, but I suspect the incorrect Kerberos settings in /etc/krb5.conf and /etc/samba/smb.conf might be the cause. These are not properly configured during installation, and had to be hand-edited to get the correct Kerberos realm and server, but SMB mounting still does not work. :(
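
    I do not know which exact settings a given site needs, but a minimal /etc/krb5.conf following the realm naming rule above (the realm as the uppercase DNS domain) might look like this hypothetical excerpt:

    [libdefaults]
    	default_realm = UIO.NO
    	dns_lookup_kdc = true
    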

    With this automatic configuration in place, I expect a Debian Edu roaming profile installation would be able to automatically detect and connect to any site using LDAP and Kerberos for the NSS directory and PAM authentication. It should also work out of the box in an Active Directory environment providing posixAccount and posixGroup objects with UID and GID values.

    If you want to help out with implementing these things for Debian Edu, please contact us on debian-edu@lists.debian.org.

    3rd August 2010

    The new roaming workstation profile in Debian Edu/Squeeze is fairly similar to the laptop setup I am working on using Ubuntu for the University of Oslo, and just for the heck of it, I tested today how hard it would be to integrate that profile into the university infrastructure. In this case, that means the university LDAP server, Active Directory Kerberos server and SMB mounting from the Netapp file servers.

    I was pleasantly surprised that only three files needed to be changed (/etc/sssd/sssd.conf, /etc/ldap.conf and /etc/mklocaluser.d/20-debian-edu-config) and one file had to be added (/usr/share/perl5/Debian/Edu_Local.pm) to get the client working. Most of the changes were to get the client to use the university LDAP for NSS and the Kerberos server for PAM, but one was to change a hard-coded DNS domain name in the mklocaluser hook from .intern to .uio.no.

    This testing was so encouraging that I went ahead and adjusted the Debian Edu scripts and setup in subversion to centralise the roaming workstation setup a bit more and avoid the hardcoded DNS domain name, so that when I test this tomorrow, I expect to get away with modifying only /etc/sssd/sssd.conf and /etc/ldap.conf to get it to use the university servers.

    My goal is for the clients to have no hardcoded settings and fetch all their initial setup during installation and first boot, allowing them to be inserted also into environments where the default setup in Debian Edu has been changed, or, as with the university, where the environment is different but provides the protocols Debian Edu uses.

    27th July 2010

    I discovered this while doing automated testing of upgrades from Debian Lenny to Squeeze. A few packages in Debian still have circular dependencies, and while it is often claimed that apt and aptitude handle this just fine, sometimes these dependency loops cause apt to fail.

    An example is from today's upgrade of KDE using aptitude. In it, a bug in kdebase-workspace-data causes perl-modules to fail to upgrade. The cause is simple. If a package fails to unpack, then only some of the packages in the circular dependency might end up unpacked when unpacking aborts, and the ones already unpacked will fail to configure in the recovery phase because their dependencies are unavailable.

    In this log, the problem manifests itself with this error:

    dpkg: dependency problems prevent configuration of perl-modules:
     perl-modules depends on perl (>= 5.10.1-1); however:
      Version of perl on system is 5.10.0-19lenny2.
    dpkg: error processing perl-modules (--configure):
     dependency problems - leaving unconfigured
    

    The perl/perl-modules circular dependency is already reported as a bug, and will hopefully be solved as soon as possible, but it is not the only one, and each of these loops in the dependency tree can cause similar failures. Of course, they only occur when there are bugs in other packages causing the unpacking to fail, but it is rather nasty when the failure of one package makes the problem worse because of dependency loops.

    Thanks to the tireless effort by Bill Allombert, the number of circular dependencies left in Debian is dropping, and perhaps it will reach zero one day. :)

    Today's testing also exposed a bug in update-notifier and different behaviour between apt-get and aptitude, the latter possibly caused by some circular dependency. I reported both to the BTS to try to get someone to look at them.

    Tags: debian, english, nuug.
    27th July 2010

    I just posted this announcement, culminating several months of work on the next Debian Edu release. Not nearly done, but one major step completed.

    This is the first test release based on Squeeze. The focus of this release is to test the user application selection. To have a look, install the standalone profile and let the developers know if the set of installed packages, i.e. applications, should be modified. If some user application is missing, or if there are applications that no longer make sense to include in Debian Edu, please let us know. Also, if a useful application is missing the translation for your language of choice, please let us know too.

    In addition, feedback and help to polish the desktop (menus, artwork, starters, etc.) is appreciated. We would like to ship a nice and handy KDE4 desktop targeted for schools out of the box.

    The other profiles should be installable, but there is a lot more work left to be done before they are ready, so do not expect too much.

    Changes compared to the lenny based version

    • Everything from Debian Squeeze
      • Desktop environment KDE 4.4 => the new KDE desktop in combination with some new artwork
      • Web browser Iceweasel 3.5
      • OpenOffice.org 3.2
      • Educational toolbox GCompris 9.3
      • Music creator Rosegarden 10.04.2
      • Image editor Gimp 2.6.10
      • Virtual universe Celestia 1.6.0
      • Virtual stargazer Stellarium 0.10.4
      • 3D modeler Blender 2.49.2 (new application)
      • Video editor Kdenlive 0.7.7 (new application)
    • Now using Kerberos for password checking (migration not finished). Enabled for:
      • PAM
      • LDAP
      • IMAP
      • SMTP (sender verification)
    • New experimental roaming workstation profile for laptops.
    • Show welcome page to users when they first log in. The URL is fetched from LDAP.
    • New LXDE desktop option, in addition to KDE (default) and Gnome.
    • General cleanup (not finished)

    The following features are not working as they should

    • No web based administration tool for creating users and groups. The scripts ldap-createuser-krb and ldap-add-user-to-group can be used for testing.
    • DVD installs are missing debian-installer images for the PXE boot, and do not set up the PXE menu on eth0 because of this. LTSP clients should still boot from eth1 on thin client servers.
    • The restructured KDE menu is not implemented.
    • The LDAP server setup needs to be reviewed for security.
    • The LDAP directory structure needs to be reworked.
    • Different sets of packages are installed when using the DVD and the netinst CD. More packages are installed using the netinst CD.
    • The jackd package fails to install. This is believed to be caused by an ongoing transition, and hopefully should be solved soon. The jackd1 package can be installed manually by those that need it.
    • Some packages lack translations. See http://wiki.debian.org/DebianEdu/Status/Squeeze for updated status, and help out with translations.

    To download this multiarch netinstall release you can use

    To download this multiarch dvd release you can use

    There is no source DVD available yet. It will be prepared when we get closer to the final release.

    The MD5SUMs of these images are

    • 3dbf45d59f42a53518b6e3c9ec3b5eb6 debian-edu-6.0.0+edua0-CD.iso
    • 22f2cbfce281d1c6e478be452638675d debian-edu-6.0.0+edua0-DVD.iso

    The SHA1SUMs of these images are

    • c53d1b69b40cf37cd27aefaf33f6f6a3821bedf0 debian-edu-6.0.0+edua0-CD.iso
    • 2ec29d7db676d59d32197b05c277ffe16348376c debian-edu-6.0.0+edua0-DVD.iso

    How to report bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugsInBugzilla

    Please direct replies to debian-edu@lists.debian.org

    25th July 2010

    The last few months the other Debian Edu developers and I have been working hard to get the Debian/Squeeze based version of Debian Edu/Skolelinux into shape. This future version will use Kerberos for authentication, and services are slowly being migrated to single sign-on, getting rid of password prompts one at a time.

    It will also feature a roaming workstation profile with a local home directory, for laptops that are only sometimes on the Skolelinux network, and for this profile a shortcut is created in Gnome and KDE to give access to the user's home directory on the file server. This shortcut uses SMB at the moment, and yesterday I had time to test if SMB mounting had started working in KDE after we added the cifs-utils package. I was pleasantly surprised how well it worked.

    Thanks to the recent changes to our samba configuration to make it use Kerberos for authentication, there was no question about the user password when mounting the SMB volume. A simple click on the shortcut in the KDE menu, and a window with the home directory popped up. :)
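
    For manual testing outside the KDE menu, a mount along these lines should behave the same way, assuming cifs-utils is installed, the user already holds a Kerberos ticket, and with hypothetical server, share and mount point names:

    mount -t cifs //tjener/username /mnt/home -o sec=krb5,user=username
    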

    One step closer to a single sign-on solution out of the box in Debian Edu. We already had PAM, LDAP, IMAP and SMTP in place, and now also Samba. Next step is Cups and hopefully also NFS.

    We had planned an alpha0 release of Debian Edu for today, but because the autobuilder administrators for some architectures were slow to sign packages, we are still missing the fixed LTSP package we need for the release. It was uploaded three days ago with urgency=high, and if it had entered testing yesterday we would have been able to test it in time for an alpha0 release today. As the binaries for ia64 and powerpc are still not uploaded to the Debian archive, we need to delay the alpha release another day.

    If you want to help out with implementing Kerberos for Debian Edu, please contact us on debian-edu@lists.debian.org.

    18th July 2010

    Thanks to today's opengeodata blog entry, I just discovered that the OpenStreetmap.org site has gotten support for calculating routes. The support is still experimental and only available from the development server, until more experience is gathered on the user interface and any scalability issues.

    Earlier, the routing I knew about using the OpenStreetmap.org data was provided by Cloudmade, but having it on the main page is needed to make everyone aware of the feature. I've had people reject Openstreetmap.org as a viable alternative for them because the front page lacked routing support, and I hope their needs will be catered for when routing shows up on the www.openstreetmap.org front page.

    Tags: english, kart, web.
    17th July 2010

    This is a followup on my previous work on merging all the computer-related LDAP objects in Debian Edu.

    As a step towards seeing if it is possible to merge the DNS and DHCP LDAP objects, I have had a look at how the packages pdns-backend-ldap and dhcp3-server-ldap in Debian use the LDAP server. The two implementations are quite different in how they use LDAP.

    To get this information, I started slapd with debugging enabled and dumped the debug output to a file to capture the LDAP searches performed on a Debian Edu main-server. Here is a summary.

    powerdns

    Clues on how to set up PowerDNS to use an LDAP backend are available on the web.

    PowerDNS has two modes of operation using LDAP as its backend. One is a "strict" mode, where the forward and reverse DNS lookups are done using the same LDAP objects, and the other is a "tree" mode, where the forward and reverse entries are in two different subtrees in LDAP with a structure based on the DNS names, as in tjener.intern and 2.2.0.10.in-addr.arpa.

    In tree mode, the server is set up to use an LDAP subtree as its base, and uses a "base" scoped search for the DNS name by adding "dc=tjener,dc=intern," to the base with a filter of "(associateddomain=tjener.intern)" for the forward entry, and "dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa," with a filter of "(associateddomain=2.2.0.10.in-addr.arpa)" for the reverse entry. For forward entries, it is looking for attributes named dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, afsdbrecord, keyrecord, aaaarecord, locrecord, srvrecord, naptrrecord, kxrecord, certrecord, dsrecord, sshfprecord, ipseckeyrecord, rrsigrecord, nsecrecord, dnskeyrecord, dhcidrecord, spfrecord and modifytimestamp. For reverse entries, it is looking for the attributes dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, aaaarecord, locrecord, srvrecord, naptrrecord and modifytimestamp. The equivalent ldapsearch commands could look like this:

    ldapsearch -h ldap \
      -b dc=tjener,dc=intern,ou=hosts,dc=skole,dc=skolelinux,dc=no \
      -s base -x '(associateddomain=tjener.intern)' dNSTTL aRecord nSRecord \
      cNAMERecord sOARecord pTRRecord hInfoRecord mXRecord tXTRecord \
      rPRecord aFSDBRecord KeyRecord aAAARecord lOCRecord sRVRecord \
      nAPTRRecord kXRecord certRecord dSRecord sSHFPRecord iPSecKeyRecord \
      rRSIGRecord nSECRecord dNSKeyRecord dHCIDRecord sPFRecord modifyTimestamp
    
    ldapsearch -h ldap \
      -b dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa,ou=hosts,dc=skole,dc=skolelinux,dc=no \
      -s base -x '(associateddomain=2.2.0.10.in-addr.arpa)' \
      dnsttl arecord nsrecord cnamerecord soarecord ptrrecord \
      hinforecord mxrecord txtrecord rprecord aaaarecord locrecord \
      srvrecord naptrrecord modifytimestamp
    

    In Debian Edu/Lenny, the PowerDNS tree mode is used with ou=hosts,dc=skole,dc=skolelinux,dc=no as the base, and these are two example LDAP objects used there. In addition to these objects, the parent objects all the way up to ou=hosts,dc=skole,dc=skolelinux,dc=no also exist.

    dn: dc=tjener,dc=intern,ou=hosts,dc=skole,dc=skolelinux,dc=no
    objectclass: top
    objectclass: dnsdomain
    objectclass: domainrelatedobject
    dc: tjener
    arecord: 10.0.2.2
    associateddomain: tjener.intern
    
    dn: dc=2,dc=2,dc=0,dc=10,dc=in-addr,dc=arpa,ou=hosts,dc=skole,dc=skolelinux,dc=no
    objectclass: top
    objectclass: dnsdomain2
    objectclass: domainrelatedobject
    dc: 2
    ptrrecord: tjener.intern
    associateddomain: 2.2.0.10.in-addr.arpa
    

    In strict mode, the server behaves differently. When looking for forward DNS entries, it is doing a "subtree" scoped search with the same base as in tree mode for an object with the filter "(associateddomain=tjener.intern)", and requests the attributes dnsttl, arecord, nsrecord, cnamerecord, soarecord, ptrrecord, hinforecord, mxrecord, txtrecord, rprecord, aaaarecord, locrecord, srvrecord, naptrrecord and modifytimestamp. For reverse entries it also does a subtree scoped search, but this time the filter is "(arecord=10.0.2.2)" and the requested attributes are associateddomain, dnsttl and modifytimestamp. In short, in strict mode the objects with ptrrecord go away, and the arecord attribute in the forward object is used instead.

    The forward and reverse searches can be simulated using ldapsearch like this:

    ldapsearch -h ldap -b ou=hosts,dc=skole,dc=skolelinux,dc=no -s sub -x \
      '(associateddomain=tjener.intern)' dNSTTL aRecord nSRecord \
      cNAMERecord sOARecord pTRRecord hInfoRecord mXRecord tXTRecord \
      rPRecord aFSDBRecord KeyRecord aAAARecord lOCRecord sRVRecord \
      nAPTRRecord kXRecord certRecord dSRecord sSHFPRecord iPSecKeyRecord \
      rRSIGRecord nSECRecord dNSKeyRecord dHCIDRecord sPFRecord modifyTimestamp
    
    ldapsearch -h ldap -b ou=hosts,dc=skole,dc=skolelinux,dc=no -s sub -x \
      '(arecord=10.0.2.2)' associateddomain dnsttl modifytimestamp
    

    In addition to the forward and reverse searches, there is also a search for SOA records, which behaves similarly to the forward and reverse lookups.

    A thing to note about the PowerDNS behaviour is that it does not specify any objectclass names, and instead looks for the attributes it needs to generate a DNS reply. This makes it able to work with any objectclass that provides the needed attributes.

    The attributes are normally provided in the cosine (RFC 1274) and dnsdomain2 schemas. The latter is used for reverse entries like ptrrecord and recent DNS additions like aaaarecord and srvrecord.

    In Debian Edu, we have created DNS objects using the object classes dcobject (for dc), dnsdomain or dnsdomain2 (structural, for the DNS attributes) and domainrelatedobject (for associatedDomain). The use of structural object classes makes it impossible to combine these classes with the object classes used by DHCP.

    There are other schemas that could be used too, for example the dnszone structural object class used by Gosa and bind-sdb for the DNS attributes, combined with the domainrelatedobject object class, but in this case some unused attributes would have to be included as well (zonename and relativedomainname).

    My proposal for Debian Edu would be to switch PowerDNS to strict mode, and when one wants to combine the DNS information with DHCP information, not use any of the existing object classes (dnsdomain, dnsdomain2 and dnszone), but instead create an auxiliary object class defined something like this (using the attributes defined for dnsdomain and dnsdomain2 or dnszone):

    objectclass ( some-oid NAME 'dnsDomainAux'
        SUP top
        AUXILIARY
        MAY ( ARecord $ MDRecord $ MXRecord $ NSRecord $ SOARecord $ CNAMERecord $
              DNSTTL $ DNSClass $ PTRRecord $ HINFORecord $ MINFORecord $
              TXTRecord $ SIGRecord $ KEYRecord $ AAAARecord $ LOCRecord $
              NXTRecord $ SRVRecord $ NAPTRRecord $ KXRecord $ CERTRecord $
              A6Record $ DNAMERecord
        ))
    

    This will allow any object to become a DNS entry when combined with the domainrelatedobject object class, and allow any entity to include all the attributes PowerDNS wants. I've sent an email to the PowerDNS developers asking for their view on this schema and whether they would be interested in shipping such a schema with PowerDNS, and I hope my message will be accepted into their mailing list soon.

    ISC dhcp

    The DHCP server searches for specific object classes and requests all the object attributes, and then uses the attributes it wants. This makes it harder to figure out exactly which attributes are used, but thanks to the working example in Debian Edu I can at least get an idea of what is needed without having to read the source code.

    In the DHCP server configuration, the LDAP base to use and the search filter used to locate the correct dhcpServer entity are stored. These are the relevant entries from /etc/dhcp3/dhcpd.conf:

    ldap-base-dn "dc=skole,dc=skolelinux,dc=no";
    ldap-dhcp-server-cn "dhcp";
    

    The DHCP server uses this information to fetch all the DHCP configuration it needs. The cn "dhcp" is located using the given LDAP base and the filter "(&(objectClass=dhcpServer)(cn=dhcp))". The search result is this entry:

    dn: cn=dhcp,dc=skole,dc=skolelinux,dc=no
    cn: dhcp
    objectClass: top
    objectClass: dhcpServer
    dhcpServiceDN: cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
    

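    This initial search could be simulated on the command line like this:

    ldapsearch -h ldap -b dc=skole,dc=skolelinux,dc=no -s sub -x \
      '(&(objectClass=dhcpServer)(cn=dhcp))'
    
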
    The content of the dhcpServiceDN attribute is next used to locate the subtree with DHCP configuration. The DHCP configuration subtree base is located using a base scope search with base "cn=DHCP Config,dc=skole,dc=skolelinux,dc=no" and filter "(&(objectClass=dhcpService)(|(dhcpPrimaryDN=cn=dhcp,dc=skole,dc=skolelinux,dc=no)(dhcpSecondaryDN=cn=dhcp,dc=skole,dc=skolelinux,dc=no)))". The search result is this entry:

    dn: cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
    cn: DHCP Config
    objectClass: top
    objectClass: dhcpService
    objectClass: dhcpOptions
    dhcpPrimaryDN: cn=dhcp, dc=skole,dc=skolelinux,dc=no
    dhcpStatements: ddns-update-style none
    dhcpStatements: authoritative
    dhcpOption: smtp-server code 69 = array of ip-address
    dhcpOption: www-server code 72 = array of ip-address
    dhcpOption: wpad-url code 252 = text
    

    Next, the entire subtree is processed, one level at a time. When all the DHCP configuration is loaded, it is ready to receive requests. The subtree in Debian Edu contains objects with the object classes top/dhcpService/dhcpOptions, top/dhcpSharedNetwork/dhcpOptions, top/dhcpSubnet, top/dhcpGroup and top/dhcpHost. These provide options and information about netmasks, dynamic ranges etc. I am leaving out the details here because they are not relevant for the focus of my investigation, which is to see if it is possible to merge DNS- and DHCP-related computer objects.

    When a DHCP request comes in, LDAP is searched for the MAC address of the client (00:00:00:00:00:00 in this example), using a subtree scoped search with "cn=DHCP Config,dc=skole,dc=skolelinux,dc=no" as the base and "(&(objectClass=dhcpHost)(dhcpHWAddress=ethernet 00:00:00:00:00:00))" as the filter. This is what a host object looks like:

    dn: cn=hostname,cn=group1,cn=THINCLIENTS,cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
    cn: hostname
    objectClass: top
    objectClass: dhcpHost
    dhcpHWAddress: ethernet 00:00:00:00:00:00
    dhcpStatements: fixed-address hostname
    

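    This lookup can also be simulated using ldapsearch:

    ldapsearch -h ldap -b 'cn=DHCP Config,dc=skole,dc=skolelinux,dc=no' -s sub -x \
      '(&(objectClass=dhcpHost)(dhcpHWAddress=ethernet 00:00:00:00:00:00))'
    
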
    There is less flexibility in the way LDAP searches are done here. The object classes need to have fixed names, and the configuration needs to be stored in a fairly specific LDAP structure. On the positive side, the individual dhcpHost entries can be anywhere below the DN pointed to by the dhcpServer entries. The latter should make it possible to group all host entries in a subtree next to the configuration entries, and this subtree can also be shared with the DNS server if the schema proposed above is combined with the dhcpHost structural object class.

    Conclusion

    The PowerDNS implementation seems to be very flexible when it comes to which LDAP schemas to use. While its "tree" mode is rigid when it comes to the LDAP structure, the "strict" mode is very flexible, allowing DNS objects to be stored anywhere under the base cn specified in the configuration.

    The DHCP implementation on the other hand is very inflexible, both regarding which LDAP schemas to use and which LDAP structure to use. I guess one could implement one's own schema, as long as the object classes and attributes have the expected names, but this does not really help when the DHCP subtree needs to have a fairly fixed structure.

    Based on the observed behaviour, I suspect an LDAP structure like this might work for Debian Edu:

    ou=services
      cn=machine-info (dhcpService) - dhcpServiceDN points here
        cn=dhcp (dhcpServer)
        cn=dhcp-internal (dhcpSharedNetwork/dhcpOptions)
          cn=10.0.2.0 (dhcpSubnet)
            cn=group1 (dhcpGroup/dhcpOptions)
        cn=dhcp-thinclients (dhcpSharedNetwork/dhcpOptions)
          cn=192.168.0.0 (dhcpSubnet)
            cn=group1 (dhcpGroup/dhcpOptions)
        ou=machines - PowerDNS base points here
          cn=hostname (dhcpHost/domainrelatedobject/dnsDomainAux)
    

    This is not tested yet. If the DHCP server requires the dhcpHost entries to be in the dhcpGroup subtrees, the entries can be stored there instead of in a common machines subtree, and the PowerDNS base would have to be moved one level up to the machine-info subtree.

    The combined object under the machines subtree would look something like this:

    dn: dc=hostname,ou=machines,cn=machine-info,dc=skole,dc=skolelinux,dc=no
    dc: hostname
    objectClass: top
    objectClass: dhcpHost
    objectclass: domainrelatedobject
    objectclass: dnsDomainAux
    associateddomain: hostname.intern
    arecord: 10.11.12.13
    dhcpHWAddress: ethernet 00:00:00:00:00:00
    dhcpStatements: fixed-address hostname.intern
    

    One could even add the LTSP configuration associated with a given machine, as long as the required attributes are available in an auxiliary object class.

    14th July 2010

    For a while now, I have wanted to find a way to change the DNS and DHCP services in Debian Edu to use the same LDAP objects for a given computer, to avoid the possibility of having an inconsistent state for a computer in LDAP (as in a DHCP but no DNS entry, or the other way around) and to make it easier to add computers to LDAP.

    I've looked at how powerdns and dhcpd use LDAP, and using this information finally found a solution that seems to work.

    The old setup required three LDAP objects for a given computer: one forward DNS entry, one reverse DNS entry and one DHCP entry. If we switch powerdns to use its strict LDAP method (ldap-method=strict in pdns-debian-edu.conf), the forward and reverse DNS entries are merged into one, at the cost of making it impossible to transfer the reverse map to a slave DNS server.
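
    The PowerDNS side of this change is small. A sketch of what the relevant part of pdns-debian-edu.conf might look like; only ldap-method=strict is taken from the description above, while the other lines are the standard PowerDNS LDAP backend options with the Debian Edu base filled in:

    launch=ldap
    ldap-host=ldap
    ldap-basedn=ou=hosts,dc=skole,dc=skolelinux,dc=no
    ldap-method=strict
    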

    If we also replace the object class used to provide the DNS-related attributes with one allowing these attributes to be combined with the dhcphost object class, we can merge the DNS and DHCP entries into one. I've written such an object class in the dnsdomainaux.schema file (it needs proper OIDs, but that is a minor issue), and tested the setup. It seems to work.

    With this test setup in place, we can get away with one LDAP object for both DNS and DHCP, and even the LTSP configuration I suggested in an earlier email. The combined LDAP object will look something like this:

      dn: cn=hostname,cn=group1,cn=THINCLIENTS,cn=DHCP Config,dc=skole,dc=skolelinux,dc=no
      cn: hostname
      objectClass: dhcphost
      objectclass: domainrelatedobject
      objectclass: dnsdomainaux
      associateddomain: hostname.intern
      arecord: 10.11.12.13
      dhcphwaddress: ethernet 00:00:00:00:00:00
      dhcpstatements: fixed-address hostname
      ldapconfigsound: Y
    

    The DNS server uses the associateddomain and arecord entries, while the DHCP server uses the dhcphwaddress and dhcpstatements entries before asking DNS to resolve the fixed-address. LTSP will use dhcphwaddress or associateddomain and the ldapconfig* attributes.

    I am not yet sure if I can get the DHCP server to look for its dhcphost objects in a different location, to allow us to put the objects outside the "DHCP Config" subtree, but hope to figure out a way to do that. If I can't, we can still get rid of the hosts subtree and move all its content into the DHCP Config tree (which probably should be renamed to be more related to the new content; I suspect cn=dnsdhcp,ou=services or something like that might be a good place to put it).

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    11th July 2010

    Vagrant mentioned on IRC today that ltsp_config now supports sourcing files from /usr/share/ltsp/ltsp_config.d/ on the thin clients, and that this can be used to fetch configuration from LDAP if Debian Edu chooses to store configuration there.

    Armed with this information, I got inspired and wrote a test module to get configuration from LDAP. The idea is to look up the MAC address of the client in LDAP, look for attributes of the form ltspconfigsetting=value, and use this to export SETTING=value to the LTSP clients.

    The goal is to be able to store the LTSP configuration attributes in a "computer" LDAP object used by both DNS and DHCP, thus allowing us to store all information about a computer in one place.

    This is an untested draft implementation, and I welcome feedback on this approach. A real LDAP schema for the ltspClientAux objectclass needs to be written. Comments, suggestions, etc.?

    # Store in /opt/ltsp/$arch/usr/share/ltsp/ltsp_config.d/ldap-config
    #
    # Fetch LTSP client settings from LDAP based on MAC address
    #
    # Uses ethernet address as stored in the dhcpHost objectclass using
    # the dhcpHWAddress attribute or ethernet address stored in the
    # ieee802Device objectclass with the macAddress attribute.
    #
    # This module is written to be schema agnostic, and only depends on
    # the existence of attribute names.
    #
    # The LTSP configuration variables are saved directly using a
    # ltspConfig prefix and uppercasing the rest of the attribute name.
    # To set the SERVER variable, set the ltspConfigServer attribute.
    #
    # Some LDAP schema should be created with all the relevant
    # configuration settings.  Something like this should work:
    # 
    # objectclass ( 1.1.2.2 NAME 'ltspClientAux'
    #     SUP top
    #     AUXILIARY
    #     MAY ( ltspConfigServer $ ltspConfigSound $ ... )
    
    LDAPSERVER=$(debian-edu-ldapserver)
    if [ "$LDAPSERVER" ] ; then
        LDAPBASE=$(debian-edu-ldapserver -b)
        for MAC in $(LANG=C ifconfig |grep -i hwaddr| awk '{print $5}'|sort -u) ; do
    	filter="(|(dhcpHWAddress=ethernet $MAC)(macAddress=$MAC))"
    	ldapsearch -h "$LDAPSERVER" -b "$LDAPBASE" -v -x "$filter" | \
    	    grep '^ltspConfig' | while read attr value ; do
    	    # Remove prefix and convert to upper case
    	    attr=$(echo $attr | sed 's/^ltspConfig//i' | tr a-z A-Z)
    	    # pass value on to clients
    	    eval "$attr=$value; export $attr"
    	done
        done
    fi
    

    I'm not sure this shell construction will work, because I suspect the while block might end up in a subshell, causing the variables set there not to show up in ltsp_config, but if that is the case I am sure the code can be restructured to make sure the variables are passed on. I expect that can be solved with some testing. :)
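
    Should the subshell theory prove correct, one way to restructure the loop is to keep it in the current shell and feed it from a temporary file instead of a pipe. A minimal untested sketch of that idea:

    tmpfile=$(mktemp)
    ldapsearch -h "$LDAPSERVER" -b "$LDAPBASE" -v -x "$filter" | \
        grep '^ltspConfig' > "$tmpfile"
    while read attr value ; do
        # Remove prefix and convert to upper case, as in the draft above
        attr=$(echo $attr | sed 's/^ltspConfig//i' | tr a-z A-Z)
        eval "$attr=$value; export $attr"
    done < "$tmpfile"
    rm -f "$tmpfile"
    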

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    Update 2010-07-17: I am aware of another effort to store LTSP configuration in LDAP, created around the year 2000 by PC Xperience, Inc. I found its files on a personal home page over at redhat.com.

    9th July 2010

    Since my last post about available LDAP tools in Debian, I have been told about an LDAP GUI that is even better than luma. The Java application jXplorer is claimed to be capable of moving LDAP objects and subtrees using drag-and-drop, and can authenticate using Kerberos. I have only tested the Kerberos authentication, but do not have an LDAP setup allowing me to modify LDAP with my test user yet. It is available in Debian testing and unstable at the moment. The only problem I have with it is how it handles errors. If something goes wrong, its non-intuitive behaviour requires me to go through some query work list and remove the failing query. Nothing big, but very annoying.

    3rd July 2010

    Here is a short update on my Debian Lenny->Squeeze upgrade testing, with a summary of the differences for Gnome when it is upgraded by apt-get and aptitude. I'm not reporting the status for KDE, because the upgrade crashes when aptitude tries, because of missing conflicts (#584861 and #585716).

    At the end of the upgrade test script, dpkg -l is executed to get a complete list of the installed packages. Based on this, I see these differences in a test run today. As usual, I do not really know what the correct set of packages would be, but thought it best to publish the differences.

    Installed using apt-get, missing with aptitude

    at-spi cpp-4.3 finger gnome-spell gstreamer0.10-gnomevfs libatspi1.0-0 libcupsys2 libeel2-data libgail-common libgdl-1-common libgnomeprint2.2-data libgnomeprintui2.2-common libgnomevfs2-bin libgtksourceview-common libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libservlet2.4-java libxalan2-java libxerces2-java openoffice.org-writer2latex openssl-blacklist p7zip python-4suite-xml python-eggtrayicon python-gtkhtml2 python-gtkmozembed svgalibg1 xserver-xephyr zip

    Installed using apt-get, removed with aptitude

    bluez-utils dhcdbd djvulibre-desktop epiphany-gecko gnome-app-install gnome-mount gnome-vfs-obexftp gnome-volume-manager libao2 libavahi-compat-libdnssd1 libavahi-core5 libbind9-50 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcurl3 libdirectfb-1.0-0 libdvdread3 libedata-cal1.2-6 libedataserver1.2-9 libeel2-2.20 libepc-1.0-1 libepc-ui-1.0-1 libexchange-storage1.2-3 libfaad0 libgd2-noxpm libgda3-3 libgda3-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnome-desktop-2 libgnome-pilot2 libgnomecups1.0-1 libgnomeprint2.2-0 libgnomeprintui2.2-0 libgpod3 libgraphviz4 libgtkhtml2-0 libgtksourceview1.0-0 libgucharmap6 libhesiod0 libicu38 libisccc50 libisccfg50 libiw29 libkpathsea4 libltdl3 liblwres50 libmagick++10 libmagick10 libmalaga7 libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpisock9 libpisync1 libpoppler-glib3 libpoppler3 libpt-1.10.10 libraw1394-8 libsensors3 libsmbios2 libsoup2.2-8 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libvoikko1 libxalan2-java-gcj libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common swfdec-gnome totem-gstreamer wodim

    Installed using aptitude, missing with apt-get

    gnome gnome-desktop-environment hamster-applet python-gnomeapplet python-gnomekeyring python-wnck rhythmbox-plugins xorg xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-nv xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-vesa xserver-xorg-video-vmware xserver-xorg-video-voodoo

    Installed using aptitude, removed with apt-get

    deskbar-applet xserver-xorg xserver-xorg-core xserver-xorg-input-wacom xserver-xorg-video-intel xserver-xorg-video-openchrome

    I was told on IRC that the xorg-xserver package was changed in git today to try to get apt-get to not remove xorg completely. No idea when it hits Squeeze, but when it does I hope it will reduce the difference somewhat.

    1st July 2010

    For a laptop, centralized user directories and password checking are a bit troubling. Laptops are typically used also when not connected to the network, and it is vital for a user to be able to log in or unlock the screen saver also when a central server is unavailable. This is possible by caching passwords and directory information (user and group attributes) locally, and the packages to do so are available in Debian. Here follow two recipes to set this up in Debian/Squeeze. It is also possible to set this up in Debian/Lenny, but it requires more manual setup there because pam-auth-update is missing in Lenny.

    LDAP/Kerberos + nscd + libpam-ccreds + libpam-mklocaluser/pam_mkhomedir

    This is the traditional method with a twist. The password caching is provided by libpam-ccreds (version 10-4 or later is needed on Squeeze), and the directory caching is done by nscd. The directory lookup and password checking are done using LDAP. If one wants to use Kerberos for password checking, the libpam-ldapd package can be replaced with libpam-krb5 or libpam-heimdal. If one is happy having a local home directory with the path listed in LDAP, one can use the pam_mkhomedir module from pam-modules to make this happen instead of using libpam-mklocaluser. A setup for pam-auth-update to enable pam_mkhomedir will have to be written until a fix for bug #568577 is in the archive. Because I believe it is a bad idea to have local home directories using misleading paths like /site/server/partition/, I prefer to create a local user with the home directory in /home/. This is done using the libpam-mklocaluser package.
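
    Until that bug is fixed, a pam-auth-update profile for pam_mkhomedir can be dropped into /usr/share/pam-configs/ by hand. This is an untested sketch of what such a file (for example /usr/share/pam-configs/mkhomedir, a name I made up) might contain; run pam-auth-update afterwards to enable it:

    Name: Create home directory on login
    Default: yes
    Priority: 0
    Session-Type: Additional
    Session:
    	required	pam_mkhomedir.so umask=0022
    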

    These packages need to be installed and configured

    libnss-ldapd libpam-ldapd nscd libpam-ccreds libpam-mklocaluser
    

    The ldapd packages will ask for LDAP connection information, and one has to fill in the values that fit one's own site. Make sure the PAM part uses encrypted connections, to ensure the password is not sent in clear text to the LDAP server. I've been unable to get TLS certificate checking for a self-signed certificate working, which makes LDAP authentication unsafe for Debian Edu (nslcd is not checking if it is talking to the correct LDAP server), and I very much welcome feedback on how to get this working.

    Because nscd does not have a default configuration fit for offline caching until bug #485282 is fixed, this configuration should be used instead of the one currently in /etc/nscd.conf. The changes are in the fields reload-count and positive-time-to-live, and are based on the instructions I found in the LDAP for Mobile Laptops instructions by Flyn Computing.

    	debug-level		0
    	reload-count		unlimited
    	paranoia		no
    
    	enable-cache		passwd		yes
    	positive-time-to-live	passwd		2592000
    	negative-time-to-live	passwd		20
    	suggested-size		passwd		211
    	check-files		passwd		yes
    	persistent		passwd		yes
    	shared			passwd		yes
    	max-db-size		passwd		33554432
    	auto-propagate		passwd		yes
    
    	enable-cache		group		yes
    	positive-time-to-live	group		2592000
    	negative-time-to-live	group		20
    	suggested-size		group		211
    	check-files		group		yes
    	persistent		group		yes
    	shared			group		yes
    	max-db-size		group		33554432
    	auto-propagate		group		yes
    
    	enable-cache		hosts		no
    	positive-time-to-live	hosts		2592000
    	negative-time-to-live	hosts		20
    	suggested-size		hosts		211
    	check-files		hosts		yes
    	persistent		hosts		yes
    	shared			hosts		yes
    	max-db-size		hosts		33554432
    
    	enable-cache		services	yes
    	positive-time-to-live	services	2592000
    	negative-time-to-live	services	20
    	suggested-size		services	211
    	check-files		services	yes
    	persistent		services	yes
    	shared			services	yes
    	max-db-size		services	33554432
    

    While we wait for a mechanism to update /etc/nsswitch.conf automatically, like the one proposed in bug #496915, the file content needs to be manually replaced to ensure LDAP is used as the directory service on the machine. /etc/nsswitch.conf should normally look like this:

    passwd:         files ldap
    group:          files ldap
    shadow:         files ldap
    hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
    networks:       files
    protocols:      files
    services:       files
    ethers:         files
    rpc:            files
    netgroup:       files ldap
    

    The important parts are that ldap is listed last for passwd, group, shadow and netgroup.

    With these changes in place, any user in LDAP will be able to log in locally on the machine using for example kdm, get a local home directory created, and have the password as well as the user and group attributes cached.
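
    A quick way to verify the setup is to look up a user through NSS while the network is connected, and then repeat the lookup with the network down to confirm the cache answers. Using a hypothetical user and group name:

    getent passwd username
    getent group somegroup
    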

    LDAP/Kerberos + nss-updatedb + libpam-ccreds + libpam-mklocaluser/pam_mkhomedir

    Because nscd has had its share of problems, and seems to have problems doing proper caching, I've seen suggestions and recipes to use nss-updatedb to copy parts of the LDAP database locally when the LDAP database is available. I have not tested such a setup, because I discovered sssd.

    LDAP/Kerberos + sssd + libpam-mklocaluser

    A more flexible and robust setup than the nscd combination mentioned earlier, which has shown up recently, is the sssd package from Red Hat. It is part of the FreeIPA project to provide an Active Directory-like directory service for Linux machines. The sssd system combines the caching of passwords and user information into one package, and removes the need for nscd and libpam-ccreds. It supports LDAP and Kerberos, but not NIS. Version 1.2 does not support netgroups, but it is said that it will support this in version 1.5, expected to show up later in 2010. Because the sssd package was missing in Debian, I ended up co-maintaining it with Werner, and version 1.2 is now in testing.

    These packages need to be installed and configured to get the roaming setup I want

    libpam-sss libnss-sss libpam-mklocaluser
    

    The complete setup of sssd is done by editing/creating /etc/sssd/sssd.conf:

    [sssd]
    config_file_version = 2
    reconnection_retries = 3
    sbus_timeout = 30
    services = nss, pam
    domains = INTERN
    
    [nss]
    filter_groups = root
    filter_users = root
    reconnection_retries = 3
    
    [pam]
    reconnection_retries = 3
    
    [domain/INTERN]
    enumerate = false
    cache_credentials = true
    
    id_provider = ldap
    auth_provider = ldap
    chpass_provider = ldap
    
    ldap_uri = ldap://ldap
    ldap_search_base = dc=skole,dc=skolelinux,dc=no
    ldap_tls_reqcert = never
    ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt
    

    I got the same problem here with certificate checking, and had to set "ldap_tls_reqcert = never" to get it working.

    With the libnss-sss package in testing at the moment, the nsswitch.conf file is updated automatically, so there is no need to modify it manually.

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    28th June 2010

    The last few days I have been looking into the status of the LDAP directory in Debian Edu, and in the process I started to miss a GUI tool to browse the LDAP tree. The only one I was able to find in Debian/Squeeze and Lenny is LUMA, which has proved to be a great tool for getting an overview of the current LDAP directory populated by default in Skolelinux. Thanks to it, I have been able to find empty and obsolete subtrees, misplaced objects and duplicate objects. It will be installed by default in Debian/Squeeze. If you are working with LDAP, give it a go. :)

    I did notice one problem with it that I have not had time to report to the BTS yet. There is no .desktop file in the package, so the tool does not show up in the Gnome and KDE menus, but only deep down in the Debian submenu in KDE. I hope that can be fixed before Squeeze is released.

    I have not yet been able to get it to modify the tree. I would like to move objects and remove subtrees directly in the GUI, but have not found a way to do that with LUMA. So in the mean time, I use ldapvi for that.

    If you have tips on other GUI tools for LDAP that might be useful in Debian Edu, please contact us on debian-edu@lists.debian.org.

    Update 2010-06-29: Ross Reedstrom tipped us about the gq package as a useful GUI alternative. It seems like a good tool, but it is unmaintained in Debian and has an RC bug keeping it out of Squeeze. Unless that changes, it will not be an option for Debian Edu based on Squeeze.

    24th June 2010

    A while back, I complained about the fact that it is not possible, with the provided schemas for storing DNS and DHCP information in LDAP, to combine the two sets of information into one LDAP object representing a computer.

    In the mean time, I discovered that a simple fix would be to make the dhcpHost object class auxiliary, allowing it to be combined with the dNSDomain object class and thus forming one object per computer when storing both DHCP and DNS information in LDAP.

    If I understand this correctly, it is not safe to make this change without also changing the assigned number for the object class, and I do not know enough about LDAP schema design to do that properly for Debian Edu.

    Anyway, for future reference, this is how I believe we could change the DHCP schema to solve at least part of the problem with the LDAP schemas available today from IETF.

    --- dhcp.schema    (revision 65192)
    +++ dhcp.schema    (working copy)
    @@ -376,7 +376,7 @@
     objectclass ( 2.16.840.1.113719.1.203.6.6
            NAME 'dhcpHost'
            DESC 'This represents information about a particular client'
    -       SUP top
    +       SUP top AUXILIARY
            MUST cn
            MAY  (dhcpLeaseDN $ dhcpHWAddress $ dhcpOptionsDN $ dhcpStatements $ dhcpComments $ dhcpOption)
            X-NDS_CONTAINMENT ('dhcpService' 'dhcpSubnet' 'dhcpGroup') )
    

    I very much welcome clues on how to do this properly for Debian Edu/Squeeze. We provide the DHCP schema in our debian-edu-config package, and should thus be free to rewrite it as we see fit.

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    16th June 2010

    A few times I have had the need to simulate the way tasksel installs packages during a normal debian-installer run. Until now, I have ended up letting tasksel do the work, with the annoying problem of not getting any feedback at all when something fails (like a conffile question from dpkg or a download that fails), using code like this:

    export DEBIAN_FRONTEND=noninteractive
    tasksel --new-install
    

    This would invoke tasksel, let its automatic task selection pick the tasks to install, and continue to install the requested tasks without any output whatsoever. Recently I revisited this problem while working on the automatic package upgrade testing, because tasksel would sometimes hang without any useful feedback, and I wanted to see what was going on when it happened. Then it occurred to me: I can parse the output from tasksel when asked to run in test mode, and use the aptitude command line printed by tasksel to simulate the tasksel run. I ended up using code like this:

    export DEBIAN_FRONTEND=noninteractive
    cmd="$(in_target tasksel -t --new-install | sed 's/debconf-apt-progress -- //')"
    $cmd
    

    The content of $cmd is typically something like "aptitude -q --without-recommends -o APT::Install-Recommends=no -y install ~t^desktop$ ~t^gnome-desktop$ ~t^laptop$ ~pstandard ~prequired ~pimportant", which will install the gnome desktop task, the laptop task and all packages with priority standard, required and important, just like tasksel would have done during installation.

    A better approach is probably to extend tasksel to be able to install packages without using debconf-apt-progress, for use cases like this.

    Tags: debian, english, nuug.
    13th June 2010

    For those of us caring about document exchange and interoperability, OfficeShots is a great service. It is to ODF documents what BrowserShots is to web pages.

    A while back, I was contacted by Knut Yrvin at the part of Nokia that used to be Trolltech, who wanted to help the OfficeShots project and wondered if the University of Oslo, where I work, would be interested in supporting the project. I helped him navigate his request to the right people at work, and his request was answered with a spot in the machine room with power and network connected, and Knut arranged funding for a machine to fill the spot. The machine is administrated by the OfficeShots people, so I do not have daily contact with its progress, and thus check back from time to time to see how the project is doing.

    Today I had a look, and was happy to see that the Dell box in our machine room now is the host for several virtual machines running as OfficeShots factories, and the project is able to render ODF documents in 17 different document processing implementations on Linux and Windows. This is great.

    Tags: english, standard.
    13th June 2010

    My testing of Debian upgrades from Lenny to Squeeze continues, and I've finally made the upgrade logs available from https://people.skolelinux.org/pere/debian-upgrade-testing/. I am now testing dist-upgrade of Gnome and KDE in a chroot using both apt and aptitude, and found their differences interesting. This time I will only focus on their removal plans.

    After installing a Gnome desktop and the laptop task, apt-get wants to remove 72 packages when dist-upgrading from Lenny to Squeeze. The surprising part is that it wants to remove xorg and all xserver-xorg-video* drivers. Clearly not a good choice, but I am not sure why. When asking aptitude to do the same, it wants to remove 129 packages, but most of them are library packages I suspect are no longer needed. Both of them want to remove bluetooth packages, which I know nothing about. Perhaps these bluetooth packages are obsolete?

    For KDE, apt-get wants to remove 82 packages, among them kdebase, which seems like a bad idea, and xorg, the same way as with Gnome. Asking aptitude for the same, it wants to remove 192 packages, none of which are too surprising.

    I guess the removal of xorg during upgrades should be investigated and avoided, and perhaps others as well. Here are the complete lists of planned removals. The complete logs are available from the URL above. Note, if you want to repeat these tests, that the upgrade test for kde+apt-get hung in the tasksel setup because of dpkg asking conffile questions. No idea why. I worked around it by using 'echo >> /proc/pidofdpkg/fd/0' to tell dpkg to continue.
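
    For future reference, the workaround in a slightly more general form might look like this, assuming pidof is available and dpkg has its stdin connected to a pipe (as it did in my chroot runs):

    for pid in $(pidof dpkg) ; do
        echo >> /proc/$pid/fd/0
    done
    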

    apt-get gnome 72
    bluez-gnome cupsddk-drivers deskbar-applet gnome gnome-desktop-environment gnome-network-admin gtkhtml3.14 iceweasel-gnome-support libavcodec51 libdatrie0 libgdl-1-0 libgnomekbd2 libgnomekbdui2 libmetacity0 libslab0 libxcb-xlib0 nautilus-cd-burner python-gnome2-desktop python-gnome2-extras serpentine swfdec-mozilla update-manager xorg xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-cyrix xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-imstt xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nsc xserver-xorg-video-nv xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-v4l xserver-xorg-video-vesa xserver-xorg-video-vga xserver-xorg-video-vmware xserver-xorg-video-voodoo xulrunner-1.9 xulrunner-1.9-gnome-support

    aptitude gnome 129
    bluez-gnome bluez-utils cpp-4.3 cupsddk-drivers dhcdbd djvulibre-desktop finger gnome-app-install gnome-mount gnome-network-admin gnome-spell gnome-vfs-obexftp gnome-volume-manager gstreamer0.10-gnomevfs gtkhtml3.14 libao2 libavahi-compat-libdnssd1 libavahi-core5 libavcodec51 libbluetooth2 libcamel1.2-11 libcdio7 libcucul0 libcupsys2 libcurl3 libdatrie0 libdirectfb-1.0-0 libdvdread3 libedataserver1.2-9 libeel2-2.20 libeel2-data libepc-1.0-1 libepc-ui-1.0-1 libfaad0 libgail-common libgd2-noxpm libgda3-3 libgda3-common libgdl-1-0 libgdl-1-common libggz2 libggzcore9 libggzmod4 libgksu1.2-0 libgksuui1.0-1 libgmyth0 libgnomecups1.0-1 libgnomekbd2 libgnomekbdui2 libgnomeprint2.2-0 libgnomeprint2.2-data libgnomeprintui2.2-0 libgnomeprintui2.2-common libgnomevfs2-bin libgpod3 libgraphviz4 libgtkhtml2-0 libgtksourceview-common libgtksourceview1.0-0 libgucharmap6 libhesiod0 libicu38 libiw29 libkpathsea4 libltdl3 libmagick++10 libmagick10 libmalaga7 libmetacity0 libmtp7 libmysqlclient15off libnautilus-burn4 libneon27 libnm-glib0 libnm-util0 libopal-2.2 libosp5 libparted1.8-10 libpoppler-glib3 libpoppler3 libpt-1.10.10 libpt-1.10.10-plugins-alsa libpt-1.10.10-plugins-v4l libraw1394-8 libsensors3 libslab0 libsmbios2 libsoup2.2-8 libssh2-1 libsuitesparse-3.1.0 libswfdec-0.6-90 libtalloc1 libtotem-plparser10 libtrackerclient0 libxalan2-java libxalan2-java-gcj libxcb-xlib0 libxerces2-java libxerces2-java-gcj libxklavier12 libxtrap6 libxxf86misc1 libzephyr3 mysql-common nautilus-cd-burner openoffice.org-writer2latex openssl-blacklist p7zip python-4suite-xml python-eggtrayicon python-gnome2-desktop python-gnome2-extras python-gtkhtml2 python-gtkmozembed python-numeric python-sexy serpentine svgalibg1 swfdec-gnome swfdec-mozilla totem-gstreamer update-manager wodim xserver-xorg-video-cyrix xserver-xorg-video-imstt xserver-xorg-video-nsc xserver-xorg-video-v4l xserver-xorg-video-vga zip

    apt-get kde 82
    cupsddk-drivers karm kaudiocreator kcoloredit kcontrol kde kde-core kdeaddons kdeartwork kdebase kdebase-bin kdebase-bin-kde3 kdebase-kio-plugins kdesktop kdeutils khelpcenter kicker kicker-applets knewsticker kolourpaint konq-plugins konqueror korn kpersonalizer kscreensaver ksplash libavcodec51 libdatrie0 libkiten1 libxcb-xlib0 quanta superkaramba texlive-base-bin xorg xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-input-evdev xserver-xorg-input-kbd xserver-xorg-input-mouse xserver-xorg-input-synaptics xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-apm xserver-xorg-video-ark xserver-xorg-video-ati xserver-xorg-video-chips xserver-xorg-video-cirrus xserver-xorg-video-cyrix xserver-xorg-video-dummy xserver-xorg-video-fbdev xserver-xorg-video-glint xserver-xorg-video-i128 xserver-xorg-video-i740 xserver-xorg-video-imstt xserver-xorg-video-intel xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-neomagic xserver-xorg-video-nsc xserver-xorg-video-nv xserver-xorg-video-openchrome xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-radeonhd xserver-xorg-video-rendition xserver-xorg-video-s3 xserver-xorg-video-s3virge xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-tga xserver-xorg-video-trident xserver-xorg-video-tseng xserver-xorg-video-v4l xserver-xorg-video-vesa xserver-xorg-video-vga xserver-xorg-video-vmware xserver-xorg-video-voodoo xulrunner-1.9

    aptitude kde 192
    bluez-utils cpp-4.3 cupsddk-drivers cvs dcoprss dhcdbd djvulibre-desktop dosfstools eyesapplet fifteenapplet finger gettext ghostscript-x imlib-base imlib11 indi kandy karm kasteroids kaudiocreator kbackgammon kbstate kcoloredit kcontrol kcron kdat kdeadmin-kfile-plugins kdeartwork-misc kdeartwork-theme-window kdebase-bin-kde3 kdebase-kio-plugins kdeedu-data kdegraphics-kfile-plugins kdelirc kdemultimedia-kappfinder-data kdemultimedia-kfile-plugins kdenetwork-kfile-plugins kdepim-kfile-plugins kdepim-kio-plugins kdeprint kdesktop kdessh kdict kdnssd kdvi kedit keduca kenolaba kfax kfaxview kfouleggs kghostview khelpcenter khexedit kiconedit kitchensync klatin klickety kmailcvt kmenuedit kmid kmilo kmoon kmrml kodo kolourpaint kooka korn kpager kpdf kpercentage kpf kpilot kpoker kpovmodeler krec kregexpeditor ksayit ksim ksirc ksirtet ksmiletris ksmserver ksnake ksokoban ksplash ksvg ksysv ktip ktnef kuickshow kverbos kview kviewshell kvoctrain kwifimanager kwin kwin4 kworldclock kxsldbg libakode2 libao2 libarts1-akode libarts1-audiofile libarts1-mpeglib libarts1-xine libavahi-compat-libdnssd1 libavahi-core5 libavc1394-0 libavcodec51 libbluetooth2 libboost-python1.34.1 libcucul0 libcurl3 libcvsservice0 libdatrie0 libdirectfb-1.0-0 libdjvulibre21 libdvdread3 libfaad0 libfreebob0 libgail-common libgd2-noxpm libgraphviz4 libgsmme1c2a libgtkhtml2-0 libicu38 libiec61883-0 libindex0 libiw29 libk3b3 libkcal2b libkcddb1 libkdeedu3 libkdepim1a libkgantt0 libkiten1 libkleopatra1 libkmime2 libkpathsea4 libkpimexchange1 libkpimidentities1 libkscan1 libksieve0 libktnef1 liblockdev1 libltdl3 libmagick10 libmimelib1c2a libmozjs1d libmpcdec3 libneon27 libnm-util0 libopensync0 libpisock9 libpoppler-glib3 libpoppler-qt2 libpoppler3 libraw1394-8 libsmbios2 libssh2-1 libsuitesparse-3.1.0 libtalloc1 libtiff-tools libxalan2-java libxalan2-java-gcj libxcb-xlib0 libxerces2-java libxerces2-java-gcj libxtrap6 mpeglib networkstatus openoffice.org-writer2latex pmount poster psutils quanta quanta-data superkaramba svgalibg1 tex-common texlive-base texlive-base-bin texlive-common texlive-doc-base texlive-fonts-recommended xserver-xorg-video-cyrix xserver-xorg-video-imstt xserver-xorg-video-nsc xserver-xorg-video-v4l xserver-xorg-video-vga xulrunner-1.9

    11th June 2010

    The last few days I have done some upgrade testing in Debian, to see if the upgrade from Lenny to Squeeze will go smoothly. A few bugs have been discovered and reported in the process (#585410 in nagios3-cgi, #584879 already fixed in enscript, and #584861 in kdebase-workspace-data), and to get more regular testing going, I am working on a script to automate the test.

    The idea is to create a Lenny chroot and use tasksel to install a Gnome or KDE desktop installation inside the chroot before upgrading it. To ensure no services are started in the chroot, a policy-rc.d script is inserted. To make sure tasksel believes it is installing a desktop on a laptop, the tasksel tests are replaced in the chroot (only acceptable because this is a throw-away chroot).

    A naive upgrade from Lenny to Squeeze using aptitude dist-upgrade currently always fails because udev refuses to upgrade with the kernel in Lenny, so to avoid that problem the file /etc/udev/kernel-upgrade is created. Bug report #566000 makes me suspect this problem does not trigger in a chroot, but I touch the file anyway to make sure the upgrade goes well. Testing on virtual and real hardware has failed me because of udev so far, and creating this file does the trick in such settings anyway. This is a known issue, and the current udev behaviour is intended by the udev maintainer because he lacks the resources to keep udev working with old kernels, or something like that. I really wish the udev upstream would keep udev backwards compatible, to avoid such upgrade problems, but given that they fail to do so, I guess documenting the way out of this mess is the best option we have for Debian Squeeze.

    Anyway, back to the task at hand: testing upgrades. This test script, which I call upgrade-test for now, does the trick:

    #!/bin/sh
    set -ex
    
    if [ "$1" ] ; then
        desktop=$1
    else
        desktop=gnome
    fi
    
    from=lenny
    to=squeeze
    
    exec < /dev/null
    unset LANG
    mirror=http://ftp.skolelinux.org/debian
    tmpdir=chroot-$from-upgrade-$to-$desktop
    fuser -mv .
    debootstrap $from $tmpdir $mirror
    chroot $tmpdir aptitude update
    # Make sure no services are started inside the chroot
    cat > $tmpdir/usr/sbin/policy-rc.d <<EOF
    #!/bin/sh
    exit 101
    EOF
    chmod a+rx $tmpdir/usr/sbin/policy-rc.d
    exit_cleanup() {
        umount $tmpdir/proc
    }
    mount -t proc proc $tmpdir/proc
    # Make sure proc is unmounted also on failure
    trap exit_cleanup EXIT INT
    
    chroot $tmpdir aptitude -y install debconf-utils
    
    # Make sure tasksel autoselection trigger.  It need the test scripts
    # to return the correct answers.
    echo tasksel tasksel/desktop multiselect $desktop | \
        chroot $tmpdir debconf-set-selections
    
    # Include the desktop and laptop task
    for test in desktop laptop ; do
        cat > $tmpdir/usr/lib/tasksel/tests/$test <<EOF
    #!/bin/sh
    exit 2
    EOF
        chmod a+rx $tmpdir/usr/lib/tasksel/tests/$test
    done
    
    DEBIAN_FRONTEND=noninteractive
    DEBIAN_PRIORITY=critical
    export DEBIAN_FRONTEND DEBIAN_PRIORITY
    chroot $tmpdir tasksel --new-install
    
    echo deb $mirror $to main > $tmpdir/etc/apt/sources.list
    chroot $tmpdir aptitude update
    touch $tmpdir/etc/udev/kernel-upgrade
    chroot $tmpdir aptitude -y dist-upgrade
    fuser -mv .
    

    I suspect it would be useful to test upgrades with both apt-get and with aptitude, but I have not had time to look at how they behave differently so far. I hope to get a cron job running to do the test regularly and post the results on the web. The Gnome upgrade currently works, while the KDE upgrade fails because of the bug in kdebase-workspace-data.

    I am not quite sure what kind of extract from the huge upgrade logs (KDE 167 KiB, Gnome 516 KiB) it makes sense to include in this blog post, so I will refrain from trying. I can report that for Gnome, aptitude reports 760 packages upgraded, 448 newly installed, 129 to remove and 1 not upgraded, with 1024MB needing to be downloaded, while for KDE the same numbers are 702 packages upgraded, 507 newly installed, 193 to remove and 0 not upgraded, with 1117MB needing to be downloaded.

    I am very happy to notice that the Gnome desktop + laptop upgrade is able to migrate to dependency based boot sequencing and parallel booting without a hitch. I was unsure if there were still bugs with packages failing to clean up their obsolete init.d scripts during upgrades, but no such problem seems to affect the Gnome desktop + laptop packages.

    6th June 2010

    If Debian is to migrate to upstart on Linux, I expect some init.d scripts to migrate (some of) their operations to upstart jobs while keeping the init.d scripts for Hurd and kFreeBSD. The packages with such needs will need a way to get their init.d scripts to behave differently when used with sysvinit and with upstart. Because of this, I had a look at the environment variables set when an init.d script is running under upstart, and when it is not.

    With upstart, I notice these environment variables are set when a script is started from rcS.d/ (ignoring some irrelevant ones like COLUMNS):

    DEFAULT_RUNLEVEL=2
    previous=N
    PREVLEVEL=
    RUNLEVEL=
    runlevel=S
    UPSTART_EVENTS=startup
    UPSTART_INSTANCE=
    UPSTART_JOB=rc-sysinit
    

    With sysvinit, these environment variables are set for the same script:

    INIT_VERSION=sysvinit-2.88
    previous=N
    PREVLEVEL=N
    RUNLEVEL=S
    runlevel=S
    

    The RUNLEVEL and PREVLEVEL environment variables passed on by sysvinit are not set by upstart. I am not sure whether this incompatibility with sysvinit is intentional or not.

    For scripts needing to behave differently when upstart is used, looking for the UPSTART_JOB environment variable seems to be a good choice.
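
    A minimal sketch of how an init.d script could use this (the early exit is just an illustration; what a given package should actually do under upstart depends on the package):

    #!/bin/sh
    # Sketch: detect if this init.d script was started by upstart.
    if [ -n "$UPSTART_JOB" ] ; then
        # Running under upstart; assume a native upstart job handles
        # the real work and do nothing here.
        exit 0
    fi
    # ... normal sysvinit handling continues below ...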

    6th June 2010

    Via the blog of Rob Weir I came across the very interesting essay named The Art of Standards Wars (PDF, 25 pages). I recommend it to everyone following the standards wars of today.

    3rd June 2010

    When using sitesummary at a site to track machines, it is possible to get a list of the machine types in use thanks to the DMI information extracted from each machine. The script to do so is included in the sitesummary package, and here is some example output from the Skolelinux build servers:

    maintainer:~# /usr/lib/sitesummary/hardware-model-summary
      vendor                    count
      Dell Computer Corporation     1
        PowerEdge 1750              1
      IBM                           1
        eserver xSeries 345 -[8670M1X]-     1
      Intel                         2
      [no-dmi-info]                 3
    maintainer:~#
    

    The quality of the report depends on the quality of the DMI tables provided in each machine. Here there are Intel machines without model information, listed with Intel as vendor and no model, and virtual Xen machines listed as [no-dmi-info]. One can add the -l command line option to list the individual machines.

    A larger list is available from the city of Narvik, which uses Skolelinux in all their schools and also provides the basic sitesummary report publicly. Their report covers ~1400 machines. I know they use both Ubuntu and Skolelinux on their machines, and as sitesummary is available in both distributions, it is trivial to get all of them to report to the same central collector.

    1st June 2010

    It is strange to watch how a Debian bug causing KDM to fail to start at boot when an NVidia video card is used is being handled. The problem seems to be that the nvidia X.org driver takes a long time to initialize, longer than kdm is configured to wait.

    I came across two bugs related to this issue: #583312, initially filed against initscripts and passed on to nvidia-glx when it became obvious that the nvidia drivers were involved, and #524751, initially filed against kdm and passed on to src:nvidia-graphics-drivers for unknown reasons.

    To me, it seems that no-one is interested in actually solving the problem nvidia video card owners experience and making sure the Debian distribution works out of the box for these users. The nvidia driver maintainers expect kdm to be set up to wait longer, while the kdm maintainers expect the nvidia driver maintainers to fix the driver to start faster, and while they wait for each other I guess the users end up switching to a distribution that works for them. I have no idea what the solution is, but I am pretty sure that waiting for each other is not it.

    I wonder why we end up handling bugs this way.

    27th May 2010

    A few days ago, parallel booting was enabled in Debian/testing. The feature seems to hold up pretty well, but three fairly serious issues are known and should be solved:

    • The wicd package seems to break NFS mounting and network setup when parallel booting is enabled. No idea why, but the wicd maintainer seems to be on the case.
    • The nvidia X driver seems to have a race condition that is triggered more easily when parallel booting is in effect. The maintainer is on the case.
    • The sysv-rc package fails to properly enable dependency based boot sequencing (the shutdown sequence is broken) when old file-rc users try to switch back to sysv-rc. One way to solve it would be for file-rc to create /etc/init.d/.legacy-bootordering, and another is to try to make sysv-rc more robust. I will investigate some more and probably upload a workaround in sysv-rc to help those trying to move from file-rc to sysv-rc get a working shutdown.

    All in all, not many surprising issues, and all of them seem solvable before Squeeze is released. In addition to these, there are some packages with bugs in their dependencies and run level settings, which I expect will be fixed in a reasonable time span.

    If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to get it to show up in the list of usertagged bugs related to this.
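
    For those not familiar with usertagging, it is done by mailing the BTS control server. A sketch of how it could look (the user address, bug number and tag name below are made-up placeholders, not the ones actually used for this effort):

    printf 'user initscripts-ng-devel@lists.alioth.debian.org\nusertags 123456 + incomplete-dependency\nthanks\n' \
      | mail -s usertag control@bugs.debian.org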

    Update: Corrected the bug number for the file-rc issue.

    22nd May 2010

    After a long break from debian-installer development, I finally found time today to return to the project. Having to spend less time working on dependency based boot in Debian, as it is almost complete now, definitely helped free up some time.

    A while back, I ran into a problem while working on Debian Edu. We include some firmware packages on the Debian Edu CDs, those needed to get disk and network controllers working. Without having these firmware packages available during installation, it is impossible to install Debian Edu on the given machine, and because our target group are non-technical people, asking them to provide firmware packages on an external medium is a support pain. Initially, I expected it to be enough to include the firmware packages on the CD for debian-installer to find and use them. This proved to be wrong. Next, I hoped it was enough to symlink the relevant firmware packages to some useful location on the CD (I tried /cdrom/ and /cdrom/firmware/). This also proved not to work, and at this point I found time to look at the debian-installer code to figure out what would work.

    The firmware loading code is in the hw-detect package, and a closer look revealed that it would only look for firmware packages outside the installation media, so the CD was never checked for firmware packages. It would only check USB sticks, floppies and other "external" media devices. Today I changed it to also look in the /cdrom/firmware/ directory on the mounted CD or DVD, which should solve the problem I ran into with Debian Edu. I also changed it to look in /firmware/, to make sure the installer also finds firmware provided in the initrd when booting the installer via PXE, to allow us to provide the same feature in the PXE setup included in Debian Edu.

    To make sure firmware deb packages with license questions are not activated without asking if the license is accepted, I extended hw-detect to look for preinst scripts in the firmware packages, and to run these before activating the firmware during installation. The license question is asked using debconf in the preinst, so this should solve the issue for the firmware packages I have looked at so far.

    If you want to discuss the details of these features, please contact us on debian-boot@lists.debian.org.

    19th May 2010

    Today, the last piece of the puzzle for roaming laptops in Debian Edu finally entered the Debian archive: the new libpam-mklocaluser package was accepted. Two days ago, two other pieces were accepted into unstable: the pam-python package needed by libpam-mklocaluser, and the sssd package, which passed NEW on Monday. In addition, the libpam-ccreds package we need has been in experimental (version 10-4) since Saturday, and hopefully will be moved to unstable soon.

    This collection of packages allows for two different setups for roaming laptops. The traditional setup would use libpam-ccreds, nscd and libpam-mklocaluser with LDAP or Kerberos authentication, which should work out of the box if the configuration changes proposed for nscd in BTS report #485282 are implemented. The alternative setup is to use sssd with libpam-mklocaluser to connect to LDAP or Kerberos and let sssd take care of the caching of passwords and group information.

    I have so far been unable to get sssd to work with the LDAP server at the University, but suspect the issue is some SSL/GnuTLS related problem with the server certificate. I plan to update the Debian package to version 1.2, which is scheduled for next week, and hope to find time to make sure the next release will include both the Debian and Ubuntu specific patches. Upstream is friendly and responsive, and I am sure we will find a good solution.

    The idea is to set up the roaming laptops to authenticate using LDAP or Kerberos and create a local user with a home directory in /home/ when a user in LDAP logs in via KDM or GDM for the first time, and to cache the password for offline checking, as well as caching group memberships and other relevant LDAP information. The libpam-mklocaluser package was created to make sure the local home directory is in /home/, instead of /site/server/directory/, which would be the home directory if pam_mkhomedir was used. To avoid confusion with support requests and configuration, we do not want local laptops to have users in a path that is used for the same user's home directory on the home directory servers.

    One annoying problem with gdm is that it does not show the PAM message passed to the user from libpam-mklocaluser when the local user is created. Instead gdm simply rejects the login with some generic message. The message is shown in kdm, ssh and login, so I guess it is a bug in gdm. I have not investigated if there is some other message type that can be used instead to get gdm to also show the message.

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    14th May 2010

    Since this evening, parallel booting is the default in Debian/unstable for machines using dependency based boot sequencing. Apparently the testing of concurrent booting has been wider than expected, if I am to believe the input on debian-devel@, and I decided a few days ago to move forward with the feature this weekend, to give us some time to detect any remaining problems before Squeeze is frozen. If serious problems are detected, it is simple to change the default back to sequential boot. The upload of the new sysvinit package also activates a new upstream version.

    More information about dependency based boot sequencing is available from the Debian wiki. It is currently possible to disable parallel booting when one runs into problems caused by it, by adding this line to /etc/default/rcS:

    CONCURRENCY=none
    

    If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to get it to show up in the list of usertagged bugs related to this.

    14th May 2010

    In the recent Debian Edu versions, the sitesummary system is used to keep track of the machines in the school network. Each machine will automatically report its status to the central server after boot and once per night. The network setup is also reported, and using this information it is possible to get the MAC addresses of all network interfaces in the machines. This is useful when updating the DHCP configuration.

    To give some idea how to use sitesummary, here is a one-liner to list all MAC addresses of all machines reporting to sitesummary. Run this on the collector host:

    perl -MSiteSummary -e 'for_all_hosts(sub { print join(" ", get_macaddresses(shift)), "\n"; });'
    

    This will list all MAC addresses associated with each machine, one line per machine, with spaces between the MAC addresses.

    To make it easier for system administrators to add static DHCP addresses for hosts, it would be possible to extend this to fetch machine information from sitesummary and update the DHCP and DNS tables in LDAP using this information. Such a tool is unfortunately not written yet, but a rough illustration of the first half of the idea is sketched below.
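
    As a sketch, the MAC listing above could be reformatted into dhcpd.conf host stanzas. The host names and fixed addresses are missing here (a placeholder counter is used instead), so this is only an illustration, not a finished tool:

    perl -MSiteSummary -e 'for_all_hosts(sub { print join(" ", get_macaddresses(shift)), "\n"; });' | \
      awk '{ for (i = 1; i <= NF; i++)
               printf "host host%d-if%d { hardware ethernet %s; }\n", NR, i, $i }'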

    13th May 2010

    The last few days a new boot system called systemd has been introduced to the free software world. I have not yet had time to play around with it, but it seems to be a very interesting alternative to upstart, and might prove to be a good alternative for Debian when we are able to switch to an event based boot system. Tollef is in the process of getting systemd into Debian, and I look forward to seeing how well it works. I like the fact that systemd handles init.d scripts with dependency information natively, allowing them to run in parallel, where upstart at the moment does not.

    Unfortunately, systemd has the same problem as upstart regarding platform support. It only works on recent Linux kernels, and also needs some new kernel features enabled to function properly. This means the kFreeBSD and Hurd ports of Debian will need a port or a different boot system. I am not sure how that will be handled if systemd proves to be the way forward.

    In the mean time, based on the input on debian-devel@ regarding parallel booting in Debian, I have decided to enable full parallel booting as the default in Debian as soon as possible (probably this weekend or early next week), to see if there are any remaining serious bugs in the init.d dependencies. A new version of the sysvinit package implementing this change is already in experimental. If all goes well, Squeeze will be released with parallel booting enabled by default.

    6th May 2010

    These days, the init.d script dependencies in Squeeze are quite complete, so complete that it is actually possible to run all the init.d scripts in parallel based on these dependencies. If you want to test your Squeeze system, make sure dependency based boot sequencing is enabled, and add this line to /etc/default/rcS:

    CONCURRENCY=makefile
    

    That is it. It will cause sysv-rc to use the startpar tool to run scripts in parallel, using the dependency information stored in /etc/init.d/.depend.boot, /etc/init.d/.depend.start and /etc/init.d/.depend.stop to order the scripts. Startpar is configured to try to start the kdm and gdm scripts as early as possible, and will start the facilities required by kdm or gdm as early as possible to make this happen.

    Give it a try, and see if you like the result. If some services fail to start properly, it is most likely because they have incomplete init.d dependencies in their startup script (or some of the scripts they depend on have incomplete dependencies). Report bugs and get the package maintainers to fix them. :)
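
    For reference, the dependency declarations used for the ordering live in the LSB header of each init.d script, looking something like this (the service name here is made up):

    ### BEGIN INIT INFO
    # Provides:          mydaemon
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Example daemon used to illustrate the header
    ### END INIT INFO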

    Running scripts in parallel could become the default in Debian once we manage to get the init.d script dependencies complete and correct. I expect we will get there in Squeeze+1, if we manage to test and fix the remaining issues.

    If you report any problems with dependencies in init.d scripts to the BTS, please usertag the report to get it to show up in the list of usertagged bugs related to this.

    2nd May 2010

    One interesting feature in Active Directory is the ability to create a new user with an expired password, thus forcing the user to change the password on the first login attempt.

    I'm not quite sure how to do that with the LDAP setup in Debian Edu, but did some initial testing with a local account. The account and password aging information is available in /etc/shadow, but unfortunately it is not possible to specify an expiration time for passwords, only a maximum age for passwords.

    A freshly created account (using adduser test) will have these settings in /etc/shadow:

    root@tjener:~# chage -l test
    Last password change                                    : May 02, 2010
    Password expires                                        : never
    Password inactive                                       : never
    Account expires                                         : never
    Minimum number of days between password change          : 0
    Maximum number of days between password change          : 99999
    Number of days of warning before password expires       : 7
    root@tjener:~#
    

    The only way I could come up with to create a user with an expired password is to change the date of the last password change to the lowest value possible (January 1st 1970), and the maximum password age to the difference in days between that date and today. To make it simple, I went for 30 years (30 * 365 = 10950 days) and January 2nd (to avoid testing if 0 is a valid value).

    After using these commands to set it up, it seems to work as intended:

    root@tjener:~# chage -d 1 test; chage -M 10950 test
    root@tjener:~# chage -l test
    Last password change                                    : Jan 02, 1970
    Password expires                                        : never
    Password inactive                                       : never
    Account expires                                         : never
    Minimum number of days between password change          : 0
    Maximum number of days between password change          : 10950
    Number of days of warning before password expires       : 7
    root@tjener:~#  
    

    So far I have tested this with ssh, console and kdm (in Squeeze) logins, and all ask for a new password before logging the user in (with ssh, I was thrown out and had to log in again).

    Perhaps we should set up something similar for Debian Edu, to make sure only the user knows the account password?

    If you want to comment on or help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    Update 2010-05-02 17:20: Paul Tötterman tells me on IRC that the shadow(8) manual page in Debian/testing now states that setting the date of the last password change to zero (0) will force the password to be changed on the first login. This was not mentioned in the manual in Lenny, so I did not notice this in my initial testing. I have tested it on Squeeze, and 'chage -d 0 username' does work there. I have not tested it on Lenny yet.

    Update 2010-05-02 19:05: Jim Paris tells me via email that an equivalent command to expire a password is 'passwd -e username', which inserts zero into the date of the last password change.

    28th April 2010

    For some years now, I have wondered how we should handle laptops in Debian Edu. The Debian Edu infrastructure is mostly designed to handle stationary computers, and is less suited for computers that come and go.

    Now I finally believe I have a sensible idea on how to adjust Debian Edu for laptops, by introducing a new profile for them, for example called Roaming Workstations. Here are my thoughts on this. The setup would consist of the following:

    • During installation, the user name of the owner / primary user of the laptop is requested and a local home directory is set up for the user, with uid and gid information fetched from the LDAP server. This allows the user to work when offline as well. The central home directory can be made available in a subdirectory on request, for example mounted via CIFS. It could be mounted automatically when a user logs in while on the Debian Edu network, and unmounted when the machine is taken away (network down, hibernate, etc.), it can be set up to do automatic mounting on request (using autofs), or perhaps some GUI button on the desktop can be used to access it when needed. Perhaps it is enough to use the fish protocol in KDE?
    • Password checking is set up to use LDAP or Kerberos authentication when the machine is on the Debian Edu network, and to cache the password for offline checking when the machine is unable to reach the LDAP or Kerberos server. This can be done using libpam-ccreds or the Fedora developed System Security Services Daemon packages.
    • File synchronisation with the central home directory is set up using a shared directory in both the local and the central home directory, using unison.
    • Printing should be set up to print to all printers broadcasting their existence on the local network, and should then work out of the box with CUPS. For sites needing accurate printer quotas, some system with Kerberos authentication or printing via ssh could be implemented.
    • For users that should have local root access to their laptop, sudo should be used to allow this for the local user, as sketched after this list.
    • It would be nice if user and group information from LDAP were cached on the client, but given that there are entries for the local user and primary group in /etc/, it should not be needed.
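
    For the sudo point above, a minimal sketch could be a single sudoers entry like this (the user name is a placeholder, and whether it goes in /etc/sudoers or a file below /etc/sudoers.d/ is a packaging detail):

    # Give the local primary user full root access on this laptop.
    localowner ALL=(ALL) ALL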

    I believe all the pieces needed to implement this are in Debian/testing at the moment. If we work quickly, we should be able to get this ready in time for the Squeeze freeze. Some of the pieces need tweaking: libpam-ccreds should get support for pam-auth-update (#566718), and nslcd (or perhaps debian-edu-config) should get some integration code to stop its daemon when the LDAP server is unavailable, to avoid long timeouts when disconnected from the net. If we get Kerberos enabled, we need to make sure we avoid long timeouts there too.

    If you want to help out with implementing this for Debian Edu, please contact us on debian-edu@lists.debian.org.

    19th April 2010

    The last few weeks I have had the pleasure of reading a thought-provoking collection of essays by Cory Doctorow, on topics touching copyright, virtual worlds, the future of man when the conscious mind can be duplicated into a computer, and many more. The book, titled "Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future", is available with few restrictions on the web, for example from his own site. I read the epub version from feedbooks using fbreader and my N810. I strongly recommend this book.

    14th April 2010

    Yesterday's NUUG presentation about Kerberos was inspiring, and reminded me about the need to start using Kerberos in Skolelinux. Setting up a Kerberos server seems to be straightforward, and if we get this in place long before the Squeeze version of Debian freezes, we have a chance to migrate Skolelinux away from NFSv3 for the home directories, and over to an architecture where the infrastructure does not have to trust IP addresses and machines, and can trust users and cryptographic keys instead.

    A challenge will be integration and administration. Is there a Kerberos implementation for Debian where one can control the administration access in Kerberos using LDAP groups? Without it, the school administration will have to maintain access control using flat files on the main server, which gives a huge potential for errors.

    A related question I would like answered is how well Kerberos and pam-ccreds (offline password checking) work together. Does anyone know?

    The next step will be to use Kerberos for access control in Lwat and Nagios. I have no idea how much work that will be to implement. We would also need to document how to integrate with Windows AD, as such a shared network will require two Kerberos realms that cooperate to work properly.

    I believe a good start would be to begin using Kerberos on the skolelinux.no machines, and this way gain experience with configuration and integration. A natural starting point would be setting up ldap.skolelinux.no as the Kerberos server, and migrating the rest of the machines from PAM via LDAP to PAM via Kerberos one at a time.

    If you would like to contribute to getting this working in Skolelinux, I recommend you watch the video recording from yesterday's NUUG presentation, and start using Kerberos at home. The video should show up in a few days.

    6th March 2010

    Six years ago, as part of the Debian Edu development I am involved in, I asked for a hook in the kdm and gdm setup to run scripts as root when the user logs out. A bug was submitted against the xfree86-common package in 2004 (#230422), and revisited every time Debian Edu was working on a new release. Today, this finally paid off.

    The framework for this feature was today committed to the git repository for the xorg package, and the git repository for xdm has been updated to use this framework. Next on my agenda is to make sure kdm and gdm also add code to use this framework.

    In Debian Edu, we want the ability to run commands as root when the user logs out, to get rid of runaway processes and do general cleanup after a user. With this framework in place, we finally can do that in a generic way that works with all display managers using it. My goal is to get all display managers in Debian to use it, similar to how they use the Xsession.d framework today.
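
    To illustrate the kind of script we want to run from such a hook, here is a sketch (the hook location and the environment variable holding the user name are assumptions; they depend on the final framework):

    #!/bin/sh
    # Hypothetical logout cleanup hook, run as root when a user logs out.
    # Kill any runaway processes left behind and remove temporary junk.
    pkill -u "$USER" || true
    rm -rf "/tmp/runaway-$USER"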

    11th February 2010

    On Tuesday, the Debian/Lenny based version of Skolelinux was finally shipped. This was a major leap forward for the project, and I am very pleased that we finally got the release wrapped up. Work on the first point release starts immediately, as we plan to get that one out a month after the major release, to include all fixes for bugs we found and fixed too late in the release process to include last Tuesday.

    Perhaps it is even time for some partying?

    After this first point release, my plan is to focus again on the next major release, based on Squeeze. We will try to get as many of the fixes we need into the official Debian packages as possible before the freeze, and have just a few weeks or months to make it happen.

    27th January 2010

    One of the new features in the next Debian/Lenny based release of Debian Edu/Skolelinux, which is scheduled for release in the next few days, is automatic configuration of the service monitoring system Nagios. The previous release had automatic configuration of trend analysis using Munin, and this Lenny based release takes that a step further.

    When installing a Debian Edu Main-server, it is automatically configured as a Munin and Nagios server. In addition, it is configured to be a server for the SiteSummary system I have written for use in Debian Edu. The SiteSummary system is inspired by a system used by the University of Oslo, where I work. In short, the system provides a centralised collector of information about the computers on the network, and a client on each computer submitting information to this collector. This allows for automatic information on which packages are installed on each machine, which kernel the machines are using, what kind of configuration the packages have, etc. This also allows us to automatically generate the Munin and Nagios configuration.

    All computers reporting to the sitesummary collector with the munin-node package installed are automatically enabled as Munin clients, and graphs from the statistics collected from each machine show up automatically on http://www/munin/ on the Main-server.

    All non-laptop computers reporting to the sitesummary collector are automatically monitored for network presence (ping and any network services detected). In addition, all computers (also laptops) with the nagios-nrpe-server package installed and configured the way sitesummary would configure it are monitored for full disks, software RAID status, free swap and other checks that need to run locally on the machine.

    The result is that the administrator of a school using Debian Edu based on Lenny will be able to check the health of the installation with one look at the Nagios settings, without having to spend any time keeping the Nagios configuration up to date.

    The only configuration one needs to do to get Nagios up and running is to set the password used for access via HTTP. The system administrator needs to run "htpasswd /etc/nagios3/htpasswd.users nagiosadmin" to create a nagiosadmin user and set a password for it, to be able to log into the Nagios web pages. After that, everything is taken care of.

    12th August 2009

    Just for fun, I did a search right now on Google for a few ODF and MS Office based file formats (not to be mistaken for ISO or ECMA OOXML), to get an idea of their relative usage. I searched using 'filetype:odt' and equivalent terms, and got these results:

    Type          ODF         MS Office
    Text          odt:282000  docx:308000
    Presentation  odp:75600   pptx:183000
    Spreadsheet   ods:26500   xlsx:145000

    Next, I added a 'site:no' limit to get the numbers for Norway, and got these numbers:

    Type          ODF       MS Office
    Text          odt:2480  docx:4460
    Presentation  odp:299   pptx:741
    Spreadsheet   ods:187   xlsx:372

    I wonder how these numbers change over time.

    I am aware that Google returns different results and numbers based on where the search is done, so I guess these numbers will differ if the searches are conducted in another country. Because of this, I did the same search from a machine in California, USA, a few minutes after the search done from a machine here in Norway:

    Type          ODF         MS Office
    Text          odt:129000  docx:308000
    Presentation  odp:44200   pptx:93900
    Spreadsheet   ods:26500   xlsx:82400

    And with 'site:no':

    Type          ODF       MS Office
    Text          odt:2480  docx:3410
    Presentation  odp:175   pptx:604
    Spreadsheet   ods:186   xlsx:296

    An interesting difference; I am not sure what to conclude from these numbers.

    Tags: english, nuug, standard, web.
    8th August 2009

    According to a blog post from Torsten Werner, the current defect report for ISO 29500 (ISO OOXML) is 809 pages. His interesting point is that the defect report is 71 pages longer than the full ODF 1.1 specification. Personally, I find it more interesting that ISO still believes ISO OOXML can be fixed in ISO. I believe it is broken beyond repair, and I completely lack any trust in ISO being able to get anywhere close to solving the problems. I was part of the Norwegian committee involved in the OOXML fast track process, and was not impressed with how Standard Norway and ISO handled it.

    These days I focus on ODF instead, which seems like a specification with a future ahead of it. We are working in NUUG to organise an ODF seminar this autumn.

    Tags: english, nuug, standard.
    27th July 2009

    Since this evening, with the upload of sysvinit version 2.87dsf-2, and the upload of insserv version 1.12.0-10 yesterday, Debian unstable has been migrated to using dependency based boot sequencing. This concludes work that I and others have been doing for the last three days. It feels great to see this finally part of the default Debian installation. Now we just need to weed out the last few problems that are bound to show up, to get everything ready for Squeeze.

    The next step is migrating /sbin/init from sysvinit to upstart, and fixing the more fundamental problem of handling the event based, non-predictable kernel in the early boot.

    22nd July 2009

    After several years of frustration with the lack of activity from the existing sysvinit upstream developer, I decided a few weeks ago to take over the package and become the new upstream. The number of patches to track for the Debian package was becoming a burden, and the lack of synchronization between the distributions made it hard to keep the package up to date.

    On the new sysvinit team are the SuSe maintainer Dr. Werner Fink, and my Debian co-maintainer Kel Modderman. About 10 days ago, I made a new upstream tarball with version number 2.87dsf (for Debian, SuSe and Fedora), based on the patches currently in use in these distributions. We Debian maintainers plan to move to this tarball as the new upstream as soon as we find time to do the merge. Since the new tarball was created, we have agreed with Werner at SuSe to make a new upstream project at Savannah, and continue development there. The project is registered and currently waiting for approval by the Savannah administrators, and as soon as it is approved, we will import the old versions from svn and continue working on the future release.

    It is a bit ironic that this is done now, when some of the involved distributions are moving to upstart as a sysvinit replacement.

    24th June 2009

    I spent Monday and Tuesday this week in London with a lot of the people involved in the boot system on Debian and Ubuntu, to see if we could find more ways to speed up the boot system. This was an Ubuntu funded developer gathering, and it was quite productive. We also discussed the future of boot systems, and ways to handle the increasing number of boot issues introduced by the Linux kernel becoming more and more asynchronous and event based. The Ubuntu approach using udev and upstart might be a good way forward. Time will show.

    Anyway, there are a few ways at the moment to speed up the boot process in Debian. All of these should be applied to get a quick boot:

    • Use dash as /bin/sh.
    • Disable the init.d/hwclock*.sh scripts and make sure the hardware clock is in UTC.
    • Install and activate the insserv package to enable dependency based boot sequencing, and enable concurrent booting.

    These points are based on the Google Summer of Code work done by Carlos Villegas.

    Support for makefile-style concurrency during boot was uploaded to unstable yesterday. When we tested it, we were able to cut 6 seconds off the boot sequence. It depends on very correct dependency declarations in all init.d scripts, so I expect us to find edge cases where the dependencies in some scripts are slightly wrong when we start using this.

    On our IRC channel for this effort, #pkg-sysvinit, a new idea was introduced by Raphael Geissert today, one that could affect the startup speed as well. Instead of starting some scripts concurrently from rcS.d/ and another set of scripts from rc2.d/, it would be possible to run all of them in the same process. A quick way to test this would be to enable insserv and run 'mv /etc/rc2.d/S* /etc/rcS.d/; insserv'. Will need to test if that works. :)

    2nd May 2009

    There are two software projects that have had huge influence on the quality of free software, and I wanted to mention both in case someone does not yet know them.

    The first one is Valgrind, a tool to detect and expose errors in the memory handling of programs. It is easy to use: all one needs to do is to run 'valgrind program', and it will report any problems on stdout. It is even better if the program includes debug information; with debug information, it is able to report the source file name and line number where the problem occurs. It can report things like 'reading past memory block in file X line N, the memory block was allocated in file Y, line M', and 'using uninitialised value in control logic'. This tool has made it trivial to investigate reproducible crash bugs in programs, and has reduced the number of this kind of bugs in free software a lot.
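
    A tiny made-up example showing the workflow:

    cat > leak.c <<EOF
    #include <stdlib.h>
    int main(void)
    {
        char *p = malloc(10);
        p[10] = 'x';  /* write one byte past the end of the block */
        return 0;     /* the block is never freed either */
    }
    EOF
    gcc -g -o leak leak.c   # -g adds the debug information valgrind uses
    valgrind ./leak         # reports the invalid write with file and line number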

    The second one is Coverity, which is a source code checker. It is able to process the source of a program and find problems in the logic without running the program. It started out as the Stanford Checker and became well known when it was used to find bugs in the Linux kernel. It is now a commercial tool, and the company behind it runs a community service for the free software community, where a lot of free software projects get their source checked for free. Several thousand defects have been found and fixed so far. It can find errors like 'lock L taken in file X line N is never released if exiting in line M', or 'the code in file Y lines O to P can never be executed'. The projects included in the community service have managed to get rid of a lot of reliability problems thanks to Coverity.

    I believe tools like this, that are able to automatically find errors in the source, are vital to improve the quality of software and make sure we can get rid of the crashing and failing software we are surrounded by today.

    Tags: debian, english.
    28th April 2009

    Julien Blache claims that no patch is better than a useless patch. I completely disagree, as a patch allows one to discuss a concrete and proposed solution, and also proves that the issue at hand is important enough for someone to spend time on fixing it. Having no patch provides neither of these positive properties.

    Tags: debian, english, nuug.
    5th April 2009

    One thing I have wanted to figure out for a long time is how to run vlc from cron to record video streams on the net. The task is trivial with mplayer, but I do not really trust the security of mplayer (it crashes too often on strange input), and thus prefer vlc. I finally found a way to do it today. I spent an hour or so searching the web for recipes and reading the documentation. The hardest part was to get rid of the GUI window, but after finding the dummy interface, the command line finally presented itself:

    URL=http://www.ping.uio.no/video/rms-oslo_2009.ogg
    SAVEFILE=rms.ogg
    DISPLAY= vlc -q $URL \
      --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
      --intf=dummy

    The command streams the URL and stores it in the SAVEFILE by duplicating the output stream to "nodisplay" and the file, using the dummy interface. The dummy interface and the nodisplay output make sure no X interface is needed.

    The cron job then needs to start this job with the appropriate URL and file name to save, sleep for the wanted duration, and then kill the vlc process with SIGTERM. Here is a complete script, vlc-record, to use from at or cron:

    #!/bin/sh
    # Record a network stream with vlc for a given number of seconds.
    # Usage: vlc-record URL SAVEFILE DURATION
    set -e
    URL="$1"
    SAVEFILE="$2"
    DURATION="$3"
    # Use the dummy interface and duplicate the stream to the file and
    # to "nodisplay", so no X display is needed.
    DISPLAY= vlc -q "$URL" \
      --sout="#duplicate{dst=std{access=file,url='$SAVEFILE'},dst=nodisplay}" \
      --intf=dummy < /dev/null > /dev/null 2>&1 &
    pid=$!
    sleep "$DURATION"
    kill $pid
    wait $pid
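
    To schedule a recording, something like this should do (the URL, file name, duration and install path are of course just examples):

    echo '/usr/local/bin/vlc-record http://example.org/stream.ogg recording.ogg 3600' | at 21:00
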
    Tags: english, nuug, video.
    30th March 2009

    Where I work, at the University of Oslo, one decision stands out as a very good one for forming a long lived computer infrastructure. It is the simple one, lost on many in today's computer industry: standardize on open network protocols and open exchange/storage formats, not applications. Applications come and go, while protocols and files tend to stay, and thus one wants to make it easy to change application and vendor, while avoiding conversion costs and locking users to a specific platform or application.

    This approach makes it possible to replace the client applications independently of the server applications. One can even allow users to use several different applications as long as they handle the selected protocol and format. In the normal case, only one client application is recommended and users only get help if they choose to use this application, but those that want to deviate from the easy path are not blocked from doing so.

    It also allows us to replace the server side without forcing the users to replace their applications, and thus allows us to select the best server implementation at any moment, when scale and resource requirements change.

    I strongly recommend standardizing on open network protocols and open formats, but I would never recommend standardizing on a single application that does not use open network protocols or open formats.

    29th March 2009

    I'm sitting on the train going home from this weekend's Debian Edu/Skolelinux development gathering. I got a bit done tuning the desktop, and looked into the dynamic service location protocol implementation avahi. It looks like it could be useful for us. Almost 30 people participated, and I believe it was a great environment to get to know the Skolelinux system. Walter Bender, involved in the development of the Sugar educational platform, presented his stuff and also helped me improve my OLPC installation. He also showed me that his Turtle Art application can be used in standalone mode, and we agreed that I would help get it packaged for Debian. As a standalone application, it would be great for Debian Edu. We also tried to get video conferencing working between two OLPCs, but that proved to be too hard for us. The application seems to need more work before it is ready for me. I look forward to getting home and relaxing now. :)

    29th March 2009

    The state of standardized LDAP schemas on Linux is far from optimal. There is RFC 2307, documenting one way to store NIS maps in LDAP, and a modified version of this normally called RFC 2307bis, with some modifications to make it compatible with Active Directory. The RFC specification handles the content of a lot of system databases, but does not cover DNS zones and DHCP configuration.

    In Debian Edu/Skolelinux, we would like to store information about users, SMB clients/hosts, file groups, netgroups (users and hosts), DHCP and DNS configuration, and LTSP configuration in LDAP. These objects have a lot in common, but with the current LDAP schemas it is not possible to have one object per entity. For example, one needs at least three LDAP objects for a given computer: one with the SMB related stuff, one with DNS information and another with DHCP information. The schemas provided for DNS and DHCP are impossible to combine into one LDAP object. In addition, it is impossible to implement quick queries for netgroup membership, because of the way NIS triples are implemented. It just does not scale. I believe it is time for a few RFC specifications to clean up this mess.

    I would like to have one LDAP object representing each computer in the network, and this object could then keep the SMB (i.e. host key), DHCP (MAC address/name) and DNS (name/IP address) settings in one place. It needs to be efficiently stored to make sure it scales well.
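
    To illustrate, the entry I am after would look something like this LDIF sketch. The object classes shown (device, ieee802Device, ipHost) exist today and can be combined, while the SMB, DHCP and DNS settings have no auxiliary classes that fit into the same entry, which is exactly the problem:

    dn: cn=host42,ou=hosts,dc=skole,dc=example
    objectClass: device
    objectClass: ieee802Device
    objectClass: ipHost
    cn: host42
    macAddress: 00:11:22:33:44:55
    ipHostNumber: 10.0.2.42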

    I would also like to have a quick way to map from a user or computer to the netgroups this user or computer is a member of.

    Active Directory has done a better job than unix heads like myself in this regard, and the unix side needs to catch up. Time to start a new IETF working group?

    28th February 2009

    At work, we have a few hundred Linux servers, and with that amount of hardware it is important to keep track of when the hardware support contract expires for each server. We have a machine (and service) register, which until recently did not contain much useful information besides the machine room location and contact information for the system owner of each machine. To make it easier for us to track support contract status, I've recently spent time extending the machine register to include information about when the support contract expires, and to tag machines with expired contracts, to make it easy to get a list of such machines. I extended a perl script already being used to import information about machines into the register to also do some screen scraping of the sites of Dell, HP and IBM (the majority of our machines are from these vendors), and automatically check the support status for the relevant machines. This makes the support status information easily available, and I hope it will make it easier for the computer owners to know when to get new hardware or renew the support contract.

    The result of this work documented that 27% of the machines in the registry are without a support contract, and made it very easy to find them. 27% might seem like a lot, but I see it more as a consequence of us using machines a bit longer than the 3 years a normal support contract lasts, to have test machines and a platform for less important services. After all, the machines without a contract are working fine at the moment, and the lack of a contract is only a problem if any of them break down. When that happens, we can either fix it using spare parts from other machines or move the service to another old machine.

    I believe the code for screen scraping the Dell site was originally written by Trond Hasle Amundsen, and later adjusted by me and Morten Werner Forsbring. The HP scraping was written by me after reading a nice article in ;login: about how to use WWW::Mechanize, and the IBM scraping was written by me based on the Dell code. I know the HTML parsing could be done using nice libraries, but I did not want to introduce more dependencies. This is the current incarnation:

    use LWP::Simple;
    use POSIX;
    use WWW::Mechanize;
    use Date::Parse;
    [...]
    sub get_support_info {
        my ($machine, $model, $serial, $productnumber) = @_;
        my $str;
    
        if ( $model =~ m/^Dell / ) {
            # fetch website from Dell support
            my $url = "http://support.euro.dell.com/support/topics/topic.aspx/emea/shared/support/my_systems_info/no/details?c=no&cs=nodhs1&l=no&s=dhs&ServiceTag=$serial";
            my $webpage = get($url);
            return undef unless ($webpage);
    
            my $daysleft = -1;
            my @lines = split(/\n/, $webpage);
            foreach my $line (@lines) {
                next unless ($line =~ m/Beskrivelse/);
                $line =~ s/<[^>]+?>/;/gm;
                $line =~ s/^.+?;(Beskrivelse;)/$1/;
    
                my @f = split(/\;/, $line);
                @f = @f[13 .. $#f];
                my $lastend = "";
                while ($f[3] eq "DELL") {
                    my ($type, $startstr, $endstr, $days) = @f[0, 5, 7, 10];
    
                    my $start = POSIX::strftime("%Y-%m-%d",
                                                localtime(str2time($startstr)));
                    my $end = POSIX::strftime("%Y-%m-%d",
                                              localtime(str2time($endstr)));
                    $str .= "$type $start -> $end ";
                    @f = @f[14 .. $#f];
                    $lastend = $end if ($end gt $lastend);
                }
                my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
                tag_machine_unsupported($machine)
                    if ($lastend lt $today);
            }
        } elsif ( $model =~ m/^HP / ) {
            my $mech = WWW::Mechanize->new();
            my $url =
                'http://www1.itrc.hp.com/service/ewarranty/warrantyInput.do';
            $mech->get($url);
            my $fields = {
                'BODServiceID' => 'NA',
                'RegisteredPurchaseDate' => '',
                'country' => 'NO',
                'productNumber' => $productnumber,
                'serialNumber1' => $serial,
            };
            $mech->submit_form( form_number => 2,
                                fields      => $fields );
            # Next step is screen scraping
            my $content = $mech->content();
    
            $content =~ s/<[^>]+?>/;/gm;
            $content =~ s/\s+/ /gm;
            $content =~ s/;\s*;/;;/gm;
            $content =~ s/;[\s;]+/;/gm;
    
            my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
    
            while ($content =~ m/;Warranty Type;/) {
                my ($type, $status, $startstr, $stopstr) = $content =~
                    m/;Warranty Type;([^;]+);.+?;Status;(\w+);Start Date;([^;]+);End Date;([^;]+);/;
                $content =~ s/^.+?;Warranty Type;//;
                my $start = POSIX::strftime("%Y-%m-%d",
                                            localtime(str2time($startstr)));
                my $end = POSIX::strftime("%Y-%m-%d",
                                          localtime(str2time($stopstr)));
    
                $str .= "$type ($status) $start -> $end ";
    
                tag_machine_unsupported($machine)
                    if ($end lt $today);
            }
        } elsif ( $model =~ m/^IBM / ) {
            # This code ignore extended support contracts.
            my ($producttype) = $model =~ m/.*-\[(.{4}).+\]-/;
            if ($producttype && $serial) {
                my $content =
                    get("http://www-947.ibm.com/systems/support/supportsite.wss/warranty?action=warranty&brandind=5000008&Submit=Submit&type=$producttype&serial=$serial");
                if ($content) {
                    $content =~ s/<[^>]+?>/;/gm;
                    $content =~ s/\s+/ /gm;
                    $content =~ s/;\s*;/;;/gm;
                    $content =~ s/;[\s;]+/;/gm;
    
                    $content =~ s/^.+?;Warranty status;//;
                    my ($status, $end) = $content =~ m/;Warranty status;([^;]+)\s*;Expiration date;(\S+) ;/;
    
                    $str .= "($status) -> $end ";
    
                    my $today = POSIX::strftime("%Y-%m-%d", localtime(time));
                    tag_machine_unsupported($machine)
                        if ($end lt $today);
                }
            }
        }
        return $str;
    }
    

    Here are some examples of how to use the function, using fake serial numbers. The information passed in as arguments is fetched from dmidecode.

    print get_support_info("hp.host", "HP ProLiant BL460c G1", "1234567890",
                           "447707-B21");
    print get_support_info("dell.host", "Dell Inc. PowerEdge 2950", "1234567");
    print get_support_info("ibm.host", "IBM eserver xSeries 345 -[867061X]-",
                           "1234567");
    

    I would recommend this approach for tracking support contracts to everyone with more than a few computers to administer. :)

    Update 2009-03-06: The IBM page does not include extended support contracts, so it is useless in that case. The original Dell code did not handle extended support contracts either, but has been updated to do so.

    Tags: english, nuug.
    20th February 2009

    At work at the University of Oslo, we have several hundred computers in our computing center. This gives us a challenge in tracking the location and cabling of the computers when they are added, moved and removed. Sometimes the location register is not updated when a computer is inserted or moved, and we then have to search the room for the "missing" computer.

    In the last issue of Linux Journal, I came across the libdmtx project, for writing and reading the bar code blocks defined in the Data Matrix standard. These are bar codes that can be read with a normal digital camera, for example the one on a cell phone, and libdmtx can read several such bar codes from one picture. The bar code standard allows up to 2 KiB to be written in a tag. There is another project with a bar code writer written in PostScript capable of creating such bar codes, but this was the first time I found a tool to read them.

    It occurred to me that this could be used to tag and track the machines in our computing center. If both racks and computers are tagged this way, we can use a picture of the rack and all its computers to detect the rack location of any computer in that rack. If we do this regularly for the entire room, we will find all locations, and can detect movements and removals.
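
    To give an idea of how such tags could be mass produced, here is a small sketch using the dmtxwrite tool shipped with libdmtx. It is only an illustration: the machine names are made up, and I assume dmtxwrite reads the message on stdin and takes the output file name with the -o option.

    # Hypothetical sketch: create one Data Matrix tag image per machine.
    # Assumes dmtxwrite from the libdmtx utilities reads the message on
    # stdin and writes the encoded image to the file given with -o.
    # The machine names are examples only.
    use strict;
    use warnings;

    foreach my $machine (qw(compute-01 compute-02 rack-17)) {
        open(my $fh, '|-', 'dmtxwrite', '-o', "$machine.png")
            or die "unable to run dmtxwrite: $!";
        print $fh $machine;
        close($fh);
    }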

    I decided to test if this would work in practice, picked a random rack and tagged all the machines with their names. Next, I took pictures with my digital camera and gave the dmtxread program these JPEG pictures to see how many tags it could read. This worked fairly well. If the pictures were well focused and not taken from the side, all tags in the image could be read. Because of the limited space between the racks, I was unable to get a good picture of the entire rack, but I could read all tags without problems from a picture covering about half the rack. I had to limit the search time used by dmtxread to 60000 ms to make sure it terminated within a reasonable time frame.
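
    The reading side can be scripted the same way. A minimal sketch, assuming the -m option of dmtxread limits the search time in milliseconds as mentioned above, and that -n appends a newline after each decoded tag; the file names are made up.

    # Hypothetical sketch: decode every Data Matrix tag found in a set
    # of rack photos.  Assumes dmtxread from the libdmtx utilities,
    # with -m limiting the search time in milliseconds and -n
    # separating the decoded tags with newlines.
    use strict;
    use warnings;

    foreach my $photo (glob('rack-*.jpg')) {
        open(my $fh, '-|', 'dmtxread', '-n', '-m', '60000', $photo)
            or die "unable to run dmtxread: $!";
        while (my $tag = <$fh>) {
            chomp $tag;
            print "$photo: found tag '$tag'\n";
        }
        close($fh);
    }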

    My conclusion is that this could work, and we should probably look at adjusting our computer tagging procedures to use bar codes for easier automatic tracking of computers.

    Tags: english, nuug.
    17th January 2009

    As part of the work we do in NUUG to publish video recordings of our monthly presentations, we provide a page with embedded video for easy access to the recording. Putting together a good set of HTML tags to get working embedded video in all browsers and across all operating systems is not easy. I hope this will become easier when the <video> tag is implemented in all browsers, but I am not sure. We provide the recordings in several formats, MPEG1, Ogg Theora, H.264 and QuickTime, and want the browser/media plugin to pick one it supports and use it to play the recording, using whatever embed mechanism the browser understands. There are at least four different tags to use for this: the new HTML5 <video> tag, the <object> tag, the <embed> tag and the <applet> tag. All of these take a lot of options, and finding the best options is a major challenge.

    I just tested the experimental Opera browser available from labs.opera.com, to see how it handled a <video> tag with a few video sources and no extra attributes. I was not very impressed. The browser starts by fetching a picture from the video stream. I am not sure if it is the first frame, but it is definitely very early in the recording. So far, so good. Next, instead of streaming the 76 MiB video file, it starts to download all of it, but does not start to play the video. This means I have to wait several minutes for the download to finish. And when the download is done, the video still does not play! Waiting for the download, but not getting to see the video? Some testing later, I discovered that I have to add the controls="true" attribute to get a play button to press to start the video. Adding autoplay="true" did not help. I sure hope this is a misfeature of the test version of Opera, and that future implementations of the <video> tag will stream recordings by default, or at least start playing when the download is done.
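
    For reference, the markup I am talking about is roughly like this, with made-up URLs; controls="true" is the attribute that finally gave me a play button:

    <video controls="true" autoplay="true">
      <source src="http://example.org/meeting.ogg" type="video/ogg">
      <source src="http://example.org/meeting.mp4" type="video/mp4">
      Sorry, your browser does not understand the video tag.
    </video>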

    The test page I used (since changed to add more attributes) is available from the nuug site. I will have to test it with the new Firefox too.

    In the test process, I discovered a missing feature. I was unable to find a way to get the URL of the playing video out of Opera, so I am not quite sure it picked the Ogg Theora version of the video. I sure hope it was using the announced Ogg Theora support. :)

    28th December 2008

    The Norwegian Unix User Group is recording our monthly presentations on video, and recently we have worked on improving the quality of the recordings by mixing the slides directly into the video stream. For this, we use the dvswitch package from the Debian video team. As this setup requires one computer per video source, and NUUG does not have enough laptops available, we need to borrow laptops. And to avoid having to install extra software on these borrowed laptops, I have wrapped up all the programs needed on a bootable USB stick. The software required is dvswitch with its associated source, sink and mixer applications, and dvgrab. To allow this setup to work without any configuration, I have patched dvswitch to use Avahi to connect the various parts together. And to allow us to use laptops without FireWire plugs, I upgraded dvgrab to the one from Debian/unstable to get a version that works with USB sources. We have not yet tested this setup in production, but I hope it will work properly and allow us to set up a video mixer on very short notice. We will need it for Go Open 2009.

    The USB image is for a 1 GB memory stick, but can be used on any larger stick as well.

    Tags: english, nuug, video.
    7th December 2008

    This weekend we had a small developer gathering for Debian Edu in Oslo. Most of Saturday was used for the general assembly of the member organization, but the rest of the weekend I used to tune the LTSP installation. LTSP now works out of the box on the 10-network. The Acer Aspire One proved to be a very nice thin client, with screen, mouse and keyboard in one small box. I was working on getting the diskless workstation setup configured out of the box, but did not finish it before the weekend was up.

    I did not find time to look at the box with 4 VGA cards that we got from the Brazilian group, so that will have to wait for the next development gathering. I would love to have the Debian Edu installer automatically detect and configure a multiseat setup when it finds one of these cards.

    25th November 2008

    Recently I have spent some time evaluating the multimedia browser plugins available in Debian Lenny, to see which one we should use by default in Debian Edu. We need an embedded video playing plugin with control buttons to pause or stop the video, capable of streaming all the multimedia content available on the web. The test results and notes are available on the Debian wiki. I was surprised by how few of the plugins are able to fill this need. My personal video player favorite, VLC, has a really bad plugin which fails on a lot of the test pages. A lot of the MIME types I would expect to work with any free software player (like video/ogg) just do not work. And simple formats like audio/x-mpegurl (m3u playlists) are not supported by the totem and vlc plugins at all. I hope the situation will improve soon. No wonder sites use the proprietary Adobe Flash to play video.
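
    As an aside, an m3u playlist is about the simplest format imaginable, a plain text file with one URL or file name per line, which makes the lack of support even more surprising. A made-up example:

    # recording.m3u - lines starting with "#" are comments
    http://example.org/recording.ogg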

    For Lenny, we seem to end up with the mplayer plugin. It seems to be the only one that fits our needs. :/
