Petter Reinholdtsen

Entries tagged "nice free software".

Oolite, a life in space as vagabond and mercenary - nice free software
11th December 2016

In my early years, I played the epic game Elite on my PC. I spent many months trading and fighting in space, and reached the 'elite' fighting status before I moved on. The original Elite game was available on Commodore 64 and the IBM PC edition I played had a 64 KB executable. I am still impressed today that the authors managed to squeeze both a 3D engine and details about more than 2000 planet systems across 7 galaxies into a binary so small.

I have known about the free software game Oolite inspired by Elite for a while, but did not really have time to test it properly until a few days ago. It was great to discover that my old knowledge about trading routes was still valid. But my fighting and flying abilities were gone, so I had to retrain to be able to dock on a space station. And I am still not able to offer much resistance when attacked by pirates, so I bought and mounted the most powerful laser in the rear to at least put up some fight while fleeing for my life. :)

When playing Elite in the late eighties, I had to discover everything on my own, and I kept long lists of prices seen on different planets to be able to decide where to trade what. This time I had the advantage of the Elite wiki, where information about each planet is easily available with common price ranges and suggested trading routes. This improved my ability to earn money, and I have been able to earn enough to buy a lot of useful equipment in a few days. I believe I originally played for months before I could get a docking computer, while now I could get it after less than a week.

If you like science fiction and dreamed of a life as a vagabond in space, you should try out Oolite. It is available for Linux, MacOSX and Windows, and is included in Debian and derivatives since 2011.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
Coz can help you find bottlenecks in multi-threaded software - nice free software
11th August 2016

This summer, I read a great article "coz: This Is the Profiler You're Looking For" in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code, by measuring the complete program runtime and running the program several times instead.

The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to get dependencies, and to include the JavaScript web page used to visualize the collected profiling information in the source package. But I expect that should work out fairly soon.

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

coz run --- program-to-run

This will create a text file profile.coz with the instrumentation information. To show which parts of the code affect the performance most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out this web site to have a look at several example profiling runs and get an idea what the end result of a profile run looks like. To make the profiling more useful, include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the code, rebuild and run the profiler. This allows coz to do more targeted experiments.
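Putting the steps together, an end-to-end session might look like this. The program name and input file here are made up for illustration; only the coz command itself and its output file name are taken from the text above:

```shell
# Build with debug symbols; coz needs them to map samples to source lines
g++ -g -O2 -o myprog myprog.cc -lpthread

# Run the program under the profiler; everything after --- is the
# command line of the program being profiled
coz run --- ./myprog input.dat

# The experiment results are written to profile.coz in the current
# directory; load that file in the web based viewer to see the graphs
ls -l profile.coz
```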

A video published by ACM presenting the Coz profiler is available on YouTube. There is also a paper from the 25th Symposium on Operating Systems Principles available, titled Coz: finding code that counts with causal profiling.

The source code for Coz is available from github. It will only build with clang, because it uses a C++ feature missing in GCC, but I've submitted a patch to solve that and hope it will be included in the upstream source soon.

Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.

Tags: debian, english, nice free software.
Creepy, visualise geotagged social media information - nice free software
24th January 2016

Most people seem not to realise that every time they walk around with the computerised radio beacon known as a mobile phone, their position is tracked by the phone company and often stored for a long time (like every time an SMS is received or sent). And if their computerised radio beacon is capable of running programs (often called mobile apps) downloaded from the Internet, these programs are often also capable of tracking their location (if the app requested access during installation). And when these programs send out information to central collection points, the location is often included, unless extra care is taken not to send it. The provided information is used by several entities, for good and bad (what is good and what is bad depends on your point of view). What is certain is that the private sphere and the right to free movement are challenged, and perhaps even eradicated, for those announcing their location this way, when they share their whereabouts with private and public entities.

The phone company logs provide a register of locations to check out when one wants to figure out what the tracked person was doing. It is unavailable to most of us, but provided to selected government officials, company staff, those illegally buying information from unfaithful servants, and crackers stealing the information. But the public information can be collected and analysed, and a free software tool to do so is called Creepy or Cree.py. I discovered it when I read an article about Creepy in the Norwegian newspaper Aftenposten in November 2014, and decided to check if it was available in Debian. The Python program was in Debian, but the version there was completely broken and practically unmaintained. I uploaded a new version which did not work quite right, but did not have time to fix it then. This Christmas I decided to finally try to get Creepy operational in Debian. Now a fixed version is available in Debian unstable and testing, and almost all Debian specific patches are now included upstream.

The Creepy program visualises geolocation information fetched from Twitter, Instagram, Flickr and Google+, and allows one to get a complete picture of every social media message posted recently in a given area, or to track the movement of a given individual across all these services. Earlier it was possible to use the search API of at least some of these services without identifying oneself, but these days it is impossible. This means that to use Creepy, you need to configure it to log in as yourself on these services, and provide information to them about your search interests. This should be taken into account when using Creepy, as it will also share information about yourself with the services.

The picture above shows the Twitter messages sent from (or at least geotagged with a position from) the city centre of Oslo, the capital of Norway. One useful way to use Creepy is to first look at information tagged with an area of interest, and next look at all the information provided by one or more individuals who were in the area. I tested it by checking which celebrities provide their location in Twitter messages: I looked at who sent Twitter messages near a Norwegian TV station, and could next track their position over time, making it possible to locate their home and work place, among other things. A similar technique has been used to locate Russian soldiers in Ukraine, and it is both a powerful tool to expose lying governments and a useful tool to help people understand the value of the private information they provide to the public.

The package is not trivial to backport to Debian Stable/Jessie, as it depends on several Python modules currently missing in Jessie (at least python-instagram, python-flickrapi and python-requests-toolbelt).

(I have uploaded the image to screenshots.debian.net and licensed it under the same terms as the Creepy program in Debian.)

Tags: debian, english, nice free software.
OpenALPR, find car license plates in video streams - nice free software
23rd December 2015

When I was a kid, we used to collect "car numbers", as we called the car license plate numbers in those days. I would write the numbers down in my little book and compare notes with the other kids to see how many region codes we had seen, and whether we had seen some exotic or special region codes and numbers. It was a fun game to pass time, as we kids had plenty of it.

A few days ago I came across the OpenALPR project, a free software project to automatically discover and report license plates in images and video streams, and provide the "car numbers" in a machine readable format. I've been looking for such a system for a while now, because I believe it is a bad idea that automatic number plate recognition tools are only available in the hands of the powerful, and want them to be available also to the powerless, to even the score when it comes to surveillance and sousveillance. I discovered the developer wanted to get the tool into Debian, and as I too wanted it to be in Debian, I volunteered to help him get the package into shape for upload to the Debian archive.

Today we finally managed to get the package into shape and uploaded it into Debian, where it currently waits in the NEW queue for review by the Debian ftpmasters.

I guess you are wondering why on earth such a tool would be useful for common folks, i.e. those not running a large government surveillance system? Well, I plan to put it in a computer on my bike and in my car, tracking the cars nearby and allowing me to be notified when number plates on my watch list are discovered. Another use case was suggested by a friend of mine, who wanted to set it up at his home to open the car port automatically when it discovered the plate on his car. When I mentioned it was perhaps a bit foolhardy to allow anyone capable of writing his license plate number on a piece of cardboard to open his car port, he replied that it was always unlocked anyway. I guess for such a use case it makes sense. I am sure there are other use cases too, for those with imagination and a vision.

If you want to build your own version of the Debian package, check out the upstream git source and symlink ./distros/debian to ./debian/ before running "debuild" to build the source. Or wait a bit until the package shows up in unstable.
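The build steps above can be sketched as a shell session. The repository URL is an assumption on my part; adjust it if upstream has moved:

```shell
# Fetch the upstream source
git clone https://github.com/openalpr/openalpr.git
cd openalpr

# Use the Debian packaging shipped in the upstream tree
ln -s distros/debian debian

# Build an unsigned Debian package from the source
debuild -us -uc
```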

Tags: debian, english, nice free software.
listadmin, the quick way to moderate mailman lists - nice free software
22nd October 2014

If you ever had to moderate a mailman list, like the ones on alioth.debian.org, you know the web interface is fairly slow to operate. First you visit one web page, enter the moderation password, and get a new page shown with a list of all the messages to moderate and various options for each email address. This takes a while for every list you moderate, and you need to do it regularly to do a good job as a list moderator. But there is a quick alternative, the listadmin program. It allows you to check lists for new messages to moderate in a fraction of a second. Here is a test run on two lists I recently took over:

% time listadmin xiph
fetching data for pkg-xiph-commits@lists.alioth.debian.org ... nothing in queue
fetching data for pkg-xiph-maint@lists.alioth.debian.org ... nothing in queue

real    0m1.709s
user    0m0.232s
sys     0m0.012s
%

In 1.7 seconds I had checked two mailing lists and confirmed that there are no messages in the moderation queue. Every morning I currently moderate 68 mailman lists, and it normally takes around two minutes. When I took over the two pkg-xiph lists above a few days ago, there were 400 emails waiting in the moderator queue. It took me less than 15 minutes to process them all using the listadmin program.

If you install the listadmin package from Debian and create a file ~/.listadmin.ini with content like this, the moderation task is a breeze:

username username@example.org
spamlevel 23
default discard
discard_if_reason "Posting restricted to members only. Remove us from your mail list."

password secret
adminurl https://{domain}/mailman/admindb/{list}
mailman-list@lists.example.com

password hidden
other-list@otherserver.example.org

There are other options to set as well. Check the manual page to learn the details.

If you are forced to moderate lists on a mailman installation where the SSL certificate is self-signed or not properly signed by a generally accepted signing authority, you can set an environment variable when calling listadmin to disable SSL verification:

PERL_LWP_SSL_VERIFY_HOSTNAME=0 listadmin

If you want to moderate a subset of the lists you take care of, you can provide an argument to the listadmin script, like I do in the initial screen dump (the xiph argument). With an argument, only lists matching the argument string will be processed. This makes it quick to accept messages if you notice the moderation request in your email.
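For example, one can loop over a few patterns to handle related groups of lists together. The pattern names here are hypothetical; use substrings matching your own list names:

```shell
# Check only lists whose name matches a single pattern
listadmin xiph

# Hypothetical: process several subsets in one go
for pattern in xiph video audio; do
    listadmin "$pattern"
done
```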

Without the listadmin program, I would never be the moderator of 68 mailing lists, as I simply would not have time to spend on it if the process were any slower. The listadmin program has saved me hours of time I could spend elsewhere over the years. It truly is nice free software.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2014-10-27: Added missing 'username' statement in the configuration example. Also, I've been told that the PERL_LWP_SSL_VERIFY_HOSTNAME=0 setting does not work for everyone. Not sure why.

Tags: debian, english, nice free software.
S3QL, a locally mounted cloud file system - nice free software
9th April 2014

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to have lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks, though, I have looked at a system called S3QL, a locally mounted network backed file system with the features I need.

S3QL is a FUSE file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point and look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and would thus rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
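
Writing this file can be scripted; here is a minimal sketch, generating the passphrase with openssl rather than pwget (the bucket name and API credentials are the same placeholders as in the example above):

```shell
# Generate a random local passphrase (33 random bytes become 44 base64 characters)
passphrase=$(openssl rand -base64 33)

# Store the credentials; S3QL expects this file to be private
mkdir -p -m 700 /root/.s3ql
cat > /root/.s3ql/authinfo2 <<EOF
[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: $passphrase
EOF
chmod 600 /root/.s3ql/authinfo2
```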

I created my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs, entering the API details and password to create the file system:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# 

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as this will not flush the cache to the cloud storage; instead run the umount.s3ql command like this:

# umount.s3ql /s3ql
# 
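
Putting the pieces together, a nightly backup can be scripted roughly like this. The paths and bucket name are the ones used in the examples above; the rsync source directory is a made-up example:

```shell
#!/bin/sh
set -e
bucket=s3c://s.greenqloud.com:443/bucket-name

# Mount the remote file system
mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root "$bucket" /s3ql

# Copy the data into the mounted file system; unchanged files cause no
# uploads, as rsync compares against the locally cached metadata
rsync -a --delete /home/ /s3ql/home-backup/

# umount.s3ql flushes the cache before detaching, unlike plain umount
umount.s3ql /s3ql
```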

There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
# 

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
# 

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the pricing models are quite different and you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting a critical look from the science community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provided POSIX semantics when it comes to locking, umask handling etc). Running my test code to check file system semantics, I was happy to discover that no errors were found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system layer, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software, personvern, sikkerhet.
ReactOS Windows clone - nice free software
1st April 2014

Microsoft has announced that Windows XP reaches its end of life 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both when it comes to money and when it comes to the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications and run them on a free software operating system that is Windows XP compatible.

ReactOS is a free software operating system (GNU GPL licensed) working on providing an operating system that is binary compatible with Windows, able to run Windows programs directly and to use Windows drivers for hardware directly. The project goal is for Windows users to keep their existing machines, drivers and software, and gain the advantages of using an operating system without usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, so quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are already implemented. There is also a software manager like the ones we are used to on Linux, allowing the user to install free software applications with a simple click directly from the Internet. Check out the screen shots on the project web site for an idea what it looks like (it looks just like Windows before Metro).

I do not use ReactOS myself, preferring Linux and Unix-like operating systems. I've tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad etc are working fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else figures out how to fix the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

Tags: english, nice free software, reactos.
Video DVD reader library / python-dvdvideo - nice free software
21st March 2014

Keeping your DVD collection safe from scratches and curious children's fingers while still having it available when you want to see a movie is not straightforward. My preferred method at the moment is to store a full copy of the ISO on a hard drive, and use VLC, Popcorn Hour or other useful players to view the resulting file. This way the subtitles and bonus material are still available, and using the ISO is just like inserting the original DVD in the DVD player.

Earlier I used dd for taking security copies, but it does not handle DVDs with read errors (and there are quite a few of those). I've also tried using dvdbackup and genisoimage, but these days I use the marvellous Python library and program python-dvdvideo written by Bastian Blank. It is in Debian already, and the binary package name is python3-dvdvideo. Instead of trying to read every block from the DVD, it parses the file structure, figures out which blocks on the DVD are actually in use, and only reads those blocks from the DVD. This works surprisingly well, and I have been able to back up almost my entire DVD collection using this method.
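A session might look roughly like this. I am not stating the exact command name the package installs, so the invocation on the last line is hypothetical; check what the package actually provides with dpkg, and adjust the device and output file names to taste:

```shell
# Install the tool
apt-get install python3-dvdvideo

# Find the command(s) the package actually provides
dpkg -L python3-dvdvideo | grep /usr/bin/

# Hypothetical invocation: copy only the blocks in use on the DVD into an ISO
dvd-video-backup /dev/dvd movie.iso
```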

So far, python-dvdvideo has failed on between 10 and 20 DVDs, which is a small fraction of my collection. The most common problem is DVDs using UTF-16 instead of UTF-8 characters, which according to Bastian is against the DVD specification (and seems to cause some players to fail too). A rarer problem is what seems to be inconsistent DVD structures, where the Python library claims there is an overlap between objects. An equally rare problem is a claim that some value is out of range. No idea what is going on there. I wish I knew enough about the DVD format to fix these, to ensure my movie collection will stay with me in the future.

So, if you need to keep your DVDs safe, back them up using python-dvdvideo. :)

Tags: english, multimedia, nice free software, opphavsrett, video.
Free Timetabling Software - nice free software
7th July 2012

Included in Debian Edu / Skolelinux is a large collection of end user and school specific software. One of the packages, not installed by default but provided in the Debian archive for schools to install if they want, is a system to automatically plan the school time table using information about available teachers, classes and rooms, combined with the list of required courses and how many hours each topic should receive. The software is named FET, and it provides a graphical user interface to input the required information, saves the result in a fairly simple XML format, and generates time tables for both teachers and students. It is available for Linux, MacOSX and Windows.

This is the feature list, lifted from the project web site:

I have not used it myself, as I am not involved in time table planning at a school, but it seems to work fine when I test it. If you need to set up your school's time table and are tired of doing it manually, check it out. A quick summary on how to use it can be found in a blog post from MarvelSoft. If you find FET useful, please provide a recipe for the Debian Edu project in the Debian Edu HowTo section.

Tags: debian edu, english, nice free software.


Created by Chronicle v4.6