Petter Reinholdtsen

Simpler recipe on how to make a simple $7 IMSI Catcher using Debian
9th August 2017

On Friday, I came across an interesting article in the Norwegian web-based ICT news magazine on how to collect the IMSI numbers of nearby cell phones using cheap DVB-T software defined radios. The article referred to instructions and a recipe by Keld Norman on Youtube on how to make a simple $7 IMSI catcher, and I decided to test them out.

The instructions said to use Ubuntu, install pip using apt (to bypass apt), use pip to install pybombs (to bypass both apt and pip), and then ask pybombs to fetch and build everything you need from scratch. I wanted to see if I could do the same with the most recent Debian packages, but this did not work because pybombs tried to build software that no longer builds with the most recent openssl library, or some other version skew problem. While trying to get this recipe working, I learned that the apt->pip->pybombs route was a long detour, and the only software dependency missing in Debian was the gr-gsm package. I also found out that the lead upstream developer of the gr-gsm project (the name stands for GNU Radio GSM) already provided a set of Debian packages in an Ubuntu PPA repository. All I needed to do was to dget the Debian source package and build it.

The IMSI collector is a python script listening for packets on the loopback network device and printing to the terminal certain GSM packets with IMSI numbers in them. The code is fairly short and easy to understand. This works because gr-gsm includes a tool to read GSM data from a software defined radio like a DVB-T USB stick, decode it and inject it into a network device on your Linux machine (using the loopback device by default). This proved to work just fine, and I've been testing the collector for a few days now.
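This loopback mechanism can be inspected directly with a generic packet sniffer, since gr-gsm emits the decoded frames as GSMTAP packets, by default on UDP port 4729. A small sketch, assuming grgsm_livemon is already running and tuned to a GSM station:

```shell
# Watch the GSMTAP stream that grgsm_livemon injects on the loopback
# device. GSMTAP uses UDP port 4729 by default; sniffing needs root.
sudo tcpdump -i lo -n udp port 4729
```

Any tool that can parse GSMTAP, such as wireshark, will show the same stream the collector script reads.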

The updated and simpler recipe is thus to

  1. start with a Debian machine running Stretch or newer,
  2. build and install the gr-gsm package (the Debian source package is available from the upstream Ubuntu PPA),
  3. clone the IMSI-catcher git repository,
  4. run grgsm_livemon and adjust the frequency until the terminal where it was started is filled with a stream of text (meaning you found a GSM station), and
  5. go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.
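The steps above can be sketched as shell commands. This is only a sketch under assumptions: the two variables stand in for the repository locations linked from this post, and the frequency 940.4M is just an example to be adjusted until a local GSM station is found:

```shell
# 1. On Debian Stretch or newer, fetch and build the gr-gsm Debian
#    source package ($GRGSM_DSC_URL stands for the .dsc link in the
#    upstream Ubuntu PPA mentioned above).
dget "$GRGSM_DSC_URL"
cd gr-gsm-*/ && dpkg-buildpackage -us -uc && cd ..
sudo dpkg -i gr-gsm_*.deb

# 2. Clone the IMSI-catcher script ($IMSI_CATCHER_REPO stands for
#    the git repository linked above).
git clone "$IMSI_CATCHER_REPO" IMSI-catcher

# 3. Tune until the terminal fills with a stream of text.
grgsm_livemon -f 940.4M &

# 4. Print IMSI numbers from the decoded GSM traffic on loopback.
cd IMSI-catcher && sudo python simple_IMSI-catcher.py
```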

To make it even easier in the future to get this sniffer up and running, I decided to package the gr-gsm project for Debian (WNPP #871055), and the package was uploaded into the NEW queue today. Luckily the gnuradio maintainer has promised to help me, as I do not know much about gnuradio stuff yet.

I doubt this "IMSI catcher" is anywhere near as powerful as commercial tools like The Spy Phone Portable IMSI / IMEI Catcher or the Harris Stingray, but I hope the existence of cheap alternatives can make more people realise how easily their whereabouts are tracked when they carry a cell phone. Seeing the data flow on the screen, realizing that I live close to a police station and knowing that police officers also carry cell phones, I wonder how hard it would be for criminals to track the position of the police officers to discover when there are police nearby, or for foreign military forces to track the location of the Norwegian military forces, or for anyone to track the location of government officials...

It is worth noting that the data reported by the IMSI-catcher script mentioned above is only a fraction of the data broadcast on the GSM network. It will only collect one frequency at a time, while a typical phone will be using several frequencies, and not all phones will be using the frequencies tracked by the grgsm_livemon program. Also, there is a lot of radio chatter being ignored by the simple_IMSI-catcher script which could be collected by extending the parser code. I wonder if gr-gsm can be set up to listen to more than one frequency?
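I have not tested it, but one conceivable workaround is simply to run several grgsm_livemon instances in parallel, each tuned to one frequency. This is a hypothetical sketch: the frequencies are made-up examples, each instance would presumably need its own DVB-T stick, and whether the GSMTAP streams coexist cleanly on the loopback device is an assumption:

```shell
# Hypothetical: one grgsm_livemon per frequency, one SDR device each.
for freq in 935.2M 940.4M 947.6M; do
    grgsm_livemon -f "$freq" &
done
wait
```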

Tags: debian, english, personvern, surveillance.
Norwegian Bokmål edition of Debian Administrator's Handbook is now available
25th July 2017

I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is now available for purchase, and if you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.

This is the second book I have published (the first was the book "Free Culture" by Lawrence Lessig, in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.

Tags: debian, debian-handbook, english.
«The report does not look at information security related to personal integrity»
27th June 2017

Today I came across the text «Killing car privacy by federal mandate» by Leonid Reyzin on Freedom to Tinker, and I am pleased to see a good review of why it is an unreasonable invasion of privacy to have all cars broadcast their position and movement by radio. The proposal in question, based on Dedicated Short Range Communication (DSRC), is called Basic Safety Message (BSM) in the USA and Cooperative Awareness Message (CAM) in Europe, and the Norwegian Public Roads Administration (Vegvesenet) appears to be among those who could imagine requiring all cars to give up yet another piece of the citizens' privacy. I recommend everyone to read what is written there.

While looking into DSRC on cars in Norway, I came across a quote I find illustrative of how the Norwegian public sector handles issues around citizens' privacy, in the SINTEF report «Informasjonssikkerhet i AutoPASS-brikker» (Information security in AutoPASS tags) by Trond Foss:

«Rapporten ser ikke på informasjonssikkerhet knyttet til personlig integritet.» ("The report does not look at information security related to personal integrity.")

Apparently that is how simply it can be done when assessing information security. I suppose it is enough that the people at the top can say that «privacy has been taken care of», the popular but empty phrase that makes many believe individuals' integrity is being looked after. The quote made me wonder how often the same approach, simply ignoring the need for personal integrity, is chosen when yet another intrusion into the privacy of people in Norway is being facilitated. It rarely draws reactions. The story of the reactions to the outsourcing by Helse Sør-Øst is sadly just an exception and the tip of the iceberg. I think I will keep saying no to AutoPASS and stay as far away from the Norwegian health care system as I can, until they have demonstrated and documented that they value the individual's privacy and personal integrity higher than short-term gain and public benefit.

Tags: norsk, personvern, sikkerhet.
Updated sales number for my Free Culture paper editions
12th June 2017

It is pleasing to see that the work we put into publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper editions today. They are not too impressive, but I am happy to see that some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from Github.

Title / language         2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French              3              6             15
Fri kultur / Norwegian              7              1              0
Free Culture / English             14             27             16
Total                              24             34             31

It is a bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

Tags: docbook, english, freeculture.
Release 0.1.1 of free software archive system Nikita announced
10th June 2017

I am very happy to report that the Nikita Noark 5 core project tagged its second release, version 0.1.1, today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0:

If this sounds interesting to you, please contact us on IRC (the #nikita channel) or email (the nikita-noark mailing list).

Tags: english, nuug, offentlig innsyn, standard.
Idea for storing trusted timestamps in a Noark 5 archive
7th June 2017

This is a copy of an email I posted to the nikita-noark mailing list. Please follow up there if you would like to discuss this topic. The background is that we are making a free software archive system based on the Norwegian Noark 5 standard for government archives.

I've been wondering a bit lately how trusted timestamps could be stored in Noark 5. Trusted timestamps can be used to verify that some information (document/file/checksum/metadata) has not been changed since a specific time in the past. This is useful to verify the integrity of the documents in the archive.

Then it occurred to me: perhaps the trusted timestamps could be stored as document variants (i.e. a dokumentobjekt referred to from a dokumentbeskrivelse) with the filename set to the hash it is stamping?

Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", a new dokumentobjekt is associated with "dokumentbeskrivelse" with the same attributes as the stamped dokumentobjekt except these attributes:

This assumes a service following IETF RFC 3161 is used, which specifies the given MIME type for replies and the .tsr file ending for the content of such trusted timestamps. As far as I can tell from the Noark 5 specifications, it is OK to have several variants/renderings of a document attached to a given dokumentbeskrivelse object. It might be stretching it a bit to make some of these variants represent crypto-signatures useful for verifying the document integrity instead of representing the document itself.

Using the source of the service in formatDetaljer allows several timestamping services to be used. This is useful to spread the risk of key compromise over several organisations. It would only be a problem to trust the timestamps if all of the organisations were compromised.

The following one-liner on Linux can be used to generate the tsr file. $inputfile is the path to the file to checksum, and $sha256 is the SHA-256 checksum of the file (i.e. the value used in the ".tsr" file name mentioned above).

# $TSA_URL stands for the address of the RFC 3161 timestamping
# service (the link was given in the original post).
openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
      --data-binary "@-" "$TSA_URL" > $sha256.tsr
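The $sha256 value itself can be computed with sha256sum from coreutils. A small sketch showing how the .tsr file gets its name:

```shell
# Compute the SHA-256 checksum of the input file; the timestamp
# reply is then stored under <checksum>.tsr as described above.
inputfile=example.txt
printf 'example archive content\n' > "$inputfile"
sha256=$(sha256sum "$inputfile" | cut -d' ' -f1)
echo "$sha256.tsr"
```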

To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:

# $TSA_CA_URL stands for the address of the service's CA
# certificate (the link was given in the original post).
wget -O ca-cert.txt "$TSA_CA_URL"

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make them easier to find for those trying to verify documents 100 or 1000 years from now. :)

The verification itself is a simple openssl command:

openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text

Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?

Tags: english, offentlig innsyn, standard.
When the Nynorsk translation fails at the exam...
3rd June 2017

Aftenposten reports today on errors in the exam questions for the exam in politics and human rights, where the texts in the Bokmål and Nynorsk editions were not identical. The exam text is quoted in the article, and I got curious whether the free translation solution Apertium would have done a better job than the Norwegian Directorate for Education and Training (Utdanningsdirektoratet). It appears it would.

Here is the Bokmål exam text:

Drøft utfordringene knyttet til nasjonalstatenes og andre aktørers rolle og muligheter til å håndtere internasjonale utfordringer, som for eksempel flykningekrisen.

Vedlegge er eksempler på tekster som kan gi relevante perspektiver på temaet:

  1. Flykningeregnskapet 2016, UNHCR og IDMC
  2. «Grenseløst Europa for fall» A-Magasinet, 26. november 2015

Apertium translates this as follows:

Drøft utfordringane knytte til nasjonalstatane sine og rolla til andre aktørar og høve til å handtera internasjonale utfordringar, som til dømes *flykningekrisen.

Vedleggja er døme på tekster som kan gje relevante perspektiv på temaet:

  1. *Flykningeregnskapet 2016, *UNHCR og *IDMC
  2. «*Grenseløst Europa for fall» A-Magasinet, 26. november 2015

Words that were not understood are marked with an asterisk (*) and need an extra language check. But no words disappeared, as happened in the text the students were presented with at the exam. I do suspect that "andre aktørers rolle og muligheter til ..." should have been translated as "rolla til andre aktørar og deira høve til ..." or something like that, but that is perhaps nitpicking. It only underlines that proofreading is always needed after machine translation.
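For those wanting to reproduce the experiment, Apertium is packaged for Debian and has a command line interface. A sketch, assuming the Bokmål-to-Nynorsk language pair is installed and that its mode name is nob-nno (the exact package and mode names may differ between versions):

```shell
# Translate Bokmål to Nynorsk; unknown words are marked with '*'.
# Assumes the apertium-nno-nob language pair is installed.
echo 'Drøft utfordringene knyttet til nasjonalstatenes rolle.' \
    | apertium nob-nno
```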

Tags: debian, norsk, stavekontroll.
Email as an archival format in the National Archivist's regulations?
27th April 2017

These days, with a deadline of May 1st, the National Archivist (Riksarkivaren) has a public consultation open on its regulations. As one can see, there is not much time left before the deadline, which expires on Sunday. These regulations list the formats that are acceptable for archiving in Noark 5 solutions in Norway.

I found the consultation documents at Norsk Arkivråd after being tipped off on the mailing list of the free software project Nikita Noark5-Core, which is implementing a Noark 5 web service (tjenestegrensesnitt). I am involved in the Nikita project, and thanks to my interest in the web service project I have read quite a few Noark 5 related documents, and discovered to my surprise that standard email is not on the list of approved formats that can be archived. The consultation with its Sunday deadline is an excellent opportunity to try to do something about that. I am working on my own consultation response, and wonder if others are interested in supporting the proposal to allow archiving email as email in the archive.

Are you already writing your own consultation response? If so, consider including a statement about email storage. I do not think much is needed. Here is a short proposed text:

We refer to the consultation sent out 2017-02-17 (Riksarkivaren's reference 2016/9840 HELHJO), and take the liberty of submitting some input on the revision of the regulations on supplementary technical and archival provisions for the processing of public archives (Riksarkivarens forskrift).

A great deal of our communication takes place by email today.  We therefore propose that Internet email, as described in IETF RFC 5322, be included as an approved document format.  We propose that the regulation's list of approved document formats for submission in § 5-16 be amended to include Internet email.

As part of the work on the web service we have tested how email can be stored in a Noark 5 structure, and we are writing a proposal for how this can be done, which will be sent to the National Archives (Arkivverket) as soon as it is finished. Those interested can follow the progress on the web.

Update 2017-04-28: Today the consultation response I wrote was submitted by the NUUG association.

Tags: norsk, offentlig innsyn, standard.
Public electronic mail journal blocks access for selected web clients
20th April 2017

Today I discovered that the web site publishing public mail journals from Norwegian government agencies, OEP, has started blocking certain types of web clients. I do not know how many are affected, but it at least affects libwww-perl and curl. To test for yourself, run the following:

% curl -v -s 2>&1 |grep '< HTTP'
< HTTP/1.1 404 Not Found
% curl -v -s --header 'User-Agent:Opera/12.0' 2>&1 |grep '< HTTP'
< HTTP/1.1 200 OK

Here one can see that the service returns «404 Not Found» for curl in its default setup, while it returns «200 OK» when curl claims to be Opera version 12.0. The public electronic mail journal started the blocking on 2017-03-02.

The blocking will make it a bit harder to fetch information from the site automatically. Could the blocking have been introduced to prevent automated collection of information from OEP, the way the Norwegian press openness committee (Pressens Offentlighetsutvalg) did to document how the ministries hinder access in the report «Slik hindrer departementer innsyn», published in January 2017? That seems unlikely, as it is trivial to change the User-Agent to something new.

Is there any legal basis for the public sector to discriminate between web clients the way it is done here, where access is granted or denied depending on what the client claims to be called? As OEP is owned by DIFI and operated by Basefarm, perhaps there are documents exchanged between these two parties that one could request access to in order to understand what has happened. But DIFI's mail journal shows only two documents between DIFI and Basefarm during the last year. Mimes brønn (the Norwegian freedom-of-information request site) next, I think.

Tags: norsk, offentlig innsyn.
Free software archive system Nikita now able to store documents
19th March 2017

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool, archive-pdf, to upload a PDF file to the archive using this API. The tool is very simple at the moment: it finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.
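At its core, the upload step in archive-pdf boils down to an HTTP POST of the PDF body against the REST API. A rough sketch with curl, where $UPLOAD_URL stands for the dokumentobjekt upload link discovered through the service's HATEOAS links; the exact header requirements are my assumptions, not a description of the actual Nikita API:

```shell
# Hypothetical sketch of uploading a PDF to a Noark 5 web service.
# $UPLOAD_URL is the upload link found by following the HATEOAS
# links exposed by the service; it is an assumption here.
curl -X POST \
     -H "Content-Type: application/pdf" \
     --data-binary "@mangelmelding/mangler.pdf" \
     "$UPLOAD_URL"
```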

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those that want to test the current code base. The API tester is implemented in Python.

Tags: english, nuug, offentlig innsyn, standard.


Created by Chronicle v4.6