Today I've spent quite some time chasing a bug in some legacy code at work. In retrospect, the problem is trivially simple.

It can be illustrated by the following snippet.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<script src="irrelevant.js" type="text/javascript"/>
<script type="text/javascript">
function meow() { alert("meow"); }
</script>
<a href=""
 onclick="meow(); return false;">
click for meow</a>

So, why does it show you otters instead of meowing, and how long did it take you to spot the bug?

Posted Thu May 10 15:30:23 2012 Tags:

Here's a tiny ugly patch to make xpdf remember positions in files and restore them. You open a PDF document, read it a bit, then quit xpdf. Next time you open the same file, the same page will be shown as it was when you quit.

The positions are stored in ~/.xpdf.bookmarks.

If you are using xpdf from FreeBSD ports, just put the patches into /usr/ports/graphics/xpdf/files directory and rebuild.

Patch 1
Patch 2

Posted Wed Mar 7 16:47:16 2012 Tags:

I've been using ikiwiki as my private wiki for several months now, and have been very happy with it.

It's the ultimate geek wiki. You get:

  1. A simple, yet well-known default input format, markdown.
  2. A real version control system of your choosing, as opposed to some ugly bolted-on custom thing (I chose git).
  3. An ability to edit the content using your favourite text editor, which is, frankly, a huge improvement over HTML text areas.
  4. It is written in Perl, and the code is reasonably clean and well thought out. Since Perl is the language I use the most, this is a pretty big bonus for me.
  5. The actual wiki content is a collection of generated static HTML pages, so you don't actually have to think about CPU resources spent by the server.

Those are the big points. You get much more than that, of course, but those were the sellers for me personally.

Ikiwiki can also work as a blog, and since today, it powers this blog as well. I'll do a comprehensive writeup on what it took to convert this blog from Movable Type to ikiwiki once I am reasonably sure everything works to my satisfaction.

Posted Tue Dec 21 13:17:09 2010 Tags:

For months, I've been plagued by intermittent mouse freezes on one of my boxes.

It started after a regular Xorg upgrade. According to various mailing lists, that particular upgrade caused similar problems to a lot of people, so I tried different suggested fixes. No luck.

A bit later, Xorg on FreeBSD was modified to fix the reported problems. But the upgrade did not fix my problem.

Eventually I came to the realization that the problem was likely not with the mouse driver or with any other part of Xorg. Rather, it was a problem with the synergy client's interaction with the new xcb. I even found a problem report with a supposed fix. By the time I found it, the fix had been committed to the synergy port, and subsequently rolled back because it led to other problems. I tried the patch in the PR anyway. It still did not help.

Not wanting to spend too much time on this, I coped with the freezes and only occasionally, when annoyed more than usual, tried to find another fix. Unsuccessfully, I must add, until this morning, when I discovered synergy+, a maintenance fork of the original synergy. I was not aware that synergy+ is basically a drop-in replacement for synergy, with binaries having the same names as in the original. Better still, the synergy+ client works just fine with the original synergy server. So I decided to give it a shot, removed the synergy package, and installed the synergy+ port. Voilà, the freezes are gone. I am a happy camper now.

Posted Thu Jan 7 16:26:28 2010 Tags:

With the recent (2009-12-23) update to FreeBSD's sysutils/smartmontools port, smartctl stopped working when run as non-root. I did not investigate whether this is because of a change in the way smartctl operates, or whether it simply stopped being setuid root.

Normally I don't mind going root to run smartctl by hand, but it presents a bit of a problem for the hddtemp_smartctl Munin plugin.

One possible solution is to add the munin user to the operator group and add the following two lines to /etc/devfs.conf:

perm ata 0660
perm xpt0 0660

And finally, run sh /etc/rc.d/devfs restart.

Being the dummy that I am, I only thought about a simpler solution when composing this post: just add user root into the [hddtemp_smartctl] section of your munin/plugin-conf.d/plugins.conf file. Besides being simpler, this method has an added advantage: an updated version of the sysutils/munin-node port can easily incorporate this change. Dag-Erling: hint, hint. :-)
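
For reference, the whole fix amounts to this fragment in munin/plugin-conf.d/plugins.conf:

```
[hddtemp_smartctl]
user root
```

Restart munin-node afterwards, and the plugin will run smartctl as root again.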

Posted Tue Jan 5 14:00:26 2010 Tags:

Today at work I needed to locate and extract, automatically, some information from a website.

There was no direct URL to the information I needed, some fields had to be filled and some POST forms had to be submitted.

Normally I would use WWW::Mechanize for such a task, but in this particular instance the situation was made somewhat less manageable because the site in question was implemented with ASP.NET.

The problem with this is that every link has an associated JavaScript event handler which does some housekeeping, assigns things to funnily named hidden input fields like __EVENTTARGET and __EVENTARGUMENT and then POSTs a form.

My first thought was to try and find a CPAN module which handles those complications. Not surprisingly, there is one, aptly named HTML::TreeBuilderX::ASP_NET.

According to its documentation, the module works in combination with the standard LWP::UserAgent and HTML::TreeBuilder, and converts ASP.NET JavaScript posting redirects into an HTTP::Request object which can be fed to LWP::UserAgent's request() method. Just what the doctor ordered.

However, it turned out that my joy was a bit premature:

  • it requires Perl 5.10, which we do not yet have on our production systems;
  • documentation is incomplete and at times inaccurate - it insists on calling its httpRequest() method httpResponse();
  • it fails its own tests, not only on the two machines where I tried to run them, but also on a lot of other systems, according to CPAN Testers.

After a bit of pondering I decided that spending time trying to fix the HTML::TreeBuilderX::ASP_NET module was counter-productive - I needed working code soon.

So what to do?

One thing we should keep in mind is that those JavaScript postbacks do not do anything fancy. The hidden fields that are filled in depend on what was clicked on the page, nothing else. After they are filled, a normal POST occurs.
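
Concretely, an ASP.NET page carries a couple of hidden fields plus a small helper that fills them in and submits the form. A simplified sketch of what the page contains (the real emitted helper is a bit more elaborate, checking onsubmit handlers and so on):

```html
<input type="hidden" name="__EVENTTARGET" value="" />
<input type="hidden" name="__EVENTARGUMENT" value="" />
<script type="text/javascript">
/* Simplified version of the helper ASP.NET emits: record which
   control fired the event, then do a plain POST of the form. */
function __doPostBack(eventTarget, eventArgument) {
    var theForm = document.forms[0];
    theForm.__EVENTTARGET.value = eventTarget;
    theForm.__EVENTARGUMENT.value = eventArgument;
    theForm.submit();
}
</script>
```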

So if we know what to POST, we could just use WWW::Mechanize and get the job done easily and quickly.
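
A minimal sketch of that idea in Perl (the URL and the field values here are made up; the real ones come from the capture step described next):

```perl
#! /usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new;
$mech->get("http://example.com/Default.aspx");  # hypothetical start page

# Replay what the JavaScript postback would have set.
$mech->form_number(1);
$mech->field(__EVENTTARGET   => 'SomeGrid$SomeLink');  # hypothetical value
$mech->field(__EVENTARGUMENT => '');
$mech->submit;

print $mech->content;
```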

So the solution naturally splits into two parts - finding out what fields to set, and automating the process.

The first part is to launch a browser, do clicking and entering by hand, and capture what gets POSTed at each step. This capturing could be done by a variety of methods:

  • tcpdump/wireshark - listen to 'em on the wire!
  • having a proxy which outputs the POSTed parameters;
  • using a browser extension that shows POSTed parameters.

I have chosen the second option, since I already had a script similar to what I needed, and since it is easy to filter out any parameters I did not want to see, like __VIEWSTATE, which can easily be several kilobytes long.


#! /usr/bin/perl
use strict;
use warnings;
use HTTP::Proxy;
use CGI;

my $proxy = HTTP::Proxy->new(host => "localhost");
$proxy->logmask(32); # 32 - FILTERS
$proxy->push_filter(
    request => Spy::BodyFilter->new(),
);
$proxy->start;

package Spy::BodyFilter;
use base qw(HTTP::Proxy::BodyFilter);

sub will_modify { 0 }

sub filter
{
    my ($me, undef, $req) = @_;
    print $req->method, " ", $req->uri, "\n";
    return unless $req->method eq "POST";
    my $body = $req->content;
    my $q = new CGI($body);
    for my $p ($q->param) {
        next if $p eq "__VIEWSTATE";
        print "$p\n\t", $q->param($p), "\n";
    }
}
Launch it locally in a terminal, set your browser's proxy settings to localhost:8080, and watch the output in the terminal.

The second part of the puzzle is to use the wonderful WWW::Mechanize::Shell. It provides an interactive shell, in which we can issue GET requests, see the content of the responses, view links, forms, and form fields with their values, follow the links, set the value of the fields, click on buttons and submit the forms. Best of all, after getting what we are after we can issue a script command and get a piece of Perl code that will perform all the tasks we've just done.
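
A session might look something like this (the URL and field value are invented for illustration; the commands are the shell's own):

```
$ perl -MWWW::Mechanize::Shell -e shell
> get http://example.com/Default.aspx
> forms
> value __EVENTTARGET SomeGrid$SomeLink
> submit
> content
> script
```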

So the final solution looks like this:

  1. Load the start page in your browser (through the spyproxy).
  2. Load the same page in WWW::Mechanize::Shell.
  3. In the browser, fill in any fields that need filling, and click where you want.
  4. Observe the spyproxy output, note any fields that need setting. In a typical ASP.NET application, you will want to ignore the vast majority of the fields at any given moment. Don't worry, humans are good at this sort of pattern recognition. :-) Pay special attention to __EVENTTARGET and __EVENTARGUMENT fields.
  5. Set the same fields to the same values in the shell (use value fieldname fieldvalue).
  6. If __EVENTTARGET was set, type submit in the shell; otherwise, find the name of the button that was pressed (see step 4), and type click buttonname in the shell;
  7. Examine the content of the response (content in the shell) to make sure that what you've got in the shell makes sense.
  8. If more clicking and entering is to be done, go to step 3.
  9. Type script in the shell.
  10. Go edit it - remove any prints you do not need, and replace the constants you entered in the fields with variables where needed.
  11. Your custom scraping script is ready to use.
  12. ...
  13. Profit!

I hope this trick will be of use to somebody. Enjoy!

Posted Wed Aug 26 21:37:37 2009 Tags:
Books giveaway by tobez

For reasons which I am not going to delve into here (this is a topic for another post), we are going to get rid of about half of our books.

There are some (low) hundreds of books for the taking, slightly more than half in English, the rest being mostly Russian with a sprinkling of Danish here and there.

Fiction, non-fiction, textbooks, science fiction, you name it.

So, if you are in Copenhagen area and are interested, write me a note and consider coming over to have a look, maybe you'll find something you'd like to keep. All books are to be had for free, although we would not mind selling them if you will insist.

Posted Tue Mar 24 11:24:07 2009 Tags:

Often I want to know how long it took for a particular command to finish.

An obvious solution, the time(1) command, does not work without a degree of anticipation on my part that I do not normally possess.

At some point I became sufficiently annoyed to actually add some hooks to my .zshrc. All commands executed in an interactive shell are timed, but a report is printed only for those that took longer than 10 seconds to execute.

This ugly code does the job:


note_report() {
    echo ""
    echo "note_report: $note_command completed in $1 seconds"
}

preexec() {
    if [ "x$TTY" != "x" ]; then
        note_command="$1"
        note_started=$SECONDS
    fi
}

precmd() {
    local xx
    if [ "x$TTY" != "x" ]; then
        if [ "x$note_ignore" = "x" ]; then
            xx=$(( SECONDS - note_started ))
            if [ $xx -gt 10 ]; then
                if [ $TTYIDLE -gt 10 ]; then
                    note_report $xx
                fi
            fi
        fi
    fi
}


Posted Thu Mar 19 19:55:14 2009 Tags:

My wife and I are going to Paris in April. So, some time ago we ordered tickets from a popular Danish travel site.

The tickets are for 2009-04-18.

This morning I got a shiny (HTML) email from them which said the following (loosely translated from Danish):

Bon Vojage.

Hello Anton.

In a moment you will be traveling to Paris! There are often many things that should be taken care of before the journey, and we would like to help you with practical details. We hope that you will find in this mail something that will make your journey even better. Have a good journey!

After realizing that today is 2009-03-18, I had my moment of panic, frantically searching for the PDF with the electronic ticket, verifying that I had not made a major fuckup and that our tickets are indeed for April the 18th, not March the 18th.

So the fuckup is not mine. Fine. But it would be nice to be absolutely really positively sure, so I called their customer service. The robot helpfully told me that

  • this is a paid call (I don't remember the exact amount per minute, but it was not peanuts);
  • I am number 10 in the queue.

And there is no contact E-mail on the website.

So what do you think - is it a simple, albeit embarrassing, programming error on the part of the programmers, or a secret plot to get some more money out of customers induced into a state of panic?

At any rate, I do not think I will be using their services again. If I need some excitement in my life, there are better ways to obtain it.

Posted Wed Mar 18 11:52:41 2009 Tags:

Some years ago I made a little web application which allowed one to browse the FreeBSD ports collection by tags, à la delicious.

The tags were not created by users but were instead generated from a couple of fields taken from every port's Makefile, so it was not exactly a "social" software.

There was some limited amount of discussion on FreeBSD mailing lists, and a publicly accessible readonly SVN repository was created by my friend Erwin, but the overall interest was rather low.

Over time I moved on and basically stopped working on the project, but recently I had an idea - not exactly to resurrect it, but to make it easier for people who are interested to contribute.

Enter port-tags at github. Github is a tool to host git repositories of your open-source projects. Anybody can easily clone your repository, fork it completely, or submit their changes back to you. I only started using it today, so I cannot say much about its features and how convenient they are, but from what I've heard, it is very very nice.

So, if you are interested, and have got round tuits to spare, please hack on port-tags - maybe some good will eventually come out of it.

Posted Sun Mar 15 20:26:49 2009 Tags:
Subscribe to The Party Line: RSS Atom