
Just when you think you're done ...

Testing blogging from GMail

Firefox backing up.

A bug is bugging me, but it is unclear to me what the cause is, so no reporting or fixing is in sight yet - whenever I scroll up in Firefox using the mouse scroll wheel there is a ~10% chance that the "back" action will be executed instead. It has happened to me a lot of times now, and I haven't noticed similar symptoms in other programs.
Ubuntu Dapper, FF 1.5, USB mouse, HP nx6110 notebook. Any advice from the lazyweb?

On the weekend I was at the school reuni...

On the weekend I was at the school reunion, and among other matters I gave my school a present of 50 Ubuntu CDs to use in school and to give out to the pupils most interested in computers. After that I talked to my IT teacher, who is now an IT adviser to the principal. It seems that all that we were taught in high school (Office software) is being moved down to grade school, and it is currently unclear what will be taught to high school students. It could be programming, but it would take a large pile of time and money to prepare all the teachers for that.
It is quite some food for thought - find something challenging enough to teach in the 11th and 12th grade to children who have learned all of word processing and presentation making back in 5th grade, and at the same time make it very easy to prepare 40+ year old teachers to teach that material.
In this context I find the Little Wizard project very interesting - if the basics of programming can be taught that easily, maybe we still have a chance of getting it into the mandatory curriculum.

Aggregation the G way.

I've been using the magic of RSS (and Atom) to keep up with Planet Debian, blogs of my friends, tech news and posts to an anime fan forum for the past several months. However, one little problem bothered me - the forum has a lot of posts (sometimes more than 5 per minute) and an RSS feed of only the last five items. That means that I had to have my RSS aggregator (Liferea) open at all times so that I would not miss any posts. However, keeping my laptop on and online at all times is quite bothersome, so I started looking for solutions.
I tried Google Reader, however I didn't like it much - I like to see more of my feed. There is too much screen space wasted for all the wrong reasons, one can only see 5 items at a time (which is quite a hassle if you have 500+ of them), a lot of info I would like to see is not there (which blog did this come from???), some strange HTML conversions, ...
Now I have found my solution. It is a combination of rss2email and GMail. I have a computer that I always keep on and online (it could be a server, but in my case it is a simple workstation). I installed rss2email there, added all my feeds to its database (hint: export the feed list from Liferea and do a bit of grep/sed magic), modified the config so that all mails come from a single address, all have a custom identifiable header and all are HTML mails without any transformations. After that I configured cron to launch "r2e run" every half an hour and "r2e run 22" every minute (where '22' is the id of the feed of that anime forum), as sketched below. On the GMail side I simply filtered all mails from rss2email into a separate label and archived them (so that they do not clutter my inbox).
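For reference, the crontab entries for such a setup could look roughly like this (a sketch, assuming r2e is on the PATH and 22 is the forum feed's id):

    # m   h  dom mon dow  command
    */30  *  *   *   *    r2e run       # all feeds, every half an hour
    *     *  *   *   *    r2e run 22    # the busy forum feed, every minute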
Now when I want to read my feeds, I go to the last message under the RSS label in my GMail, open it, read it, star it if I need to look at it later and then press "k" to get to the next newer message. After I am done reading, I usually go and remove the RSS label from the read messages, so that they do not appear in this label's "folder" and are only kept in the Archive. If I start running out of space, I will simply search for and delete old RSS items.
Most of the screen space is now devoted to the message, I can see all the fields I want to see (source, author, topic, content, pictures, URLs) and I can manage messages by the hundreds. Also, being able to search both your mail and your RSS feeds at the same time is neat.
Yesterday I made a little improvement to this scheme. The forum that I read titles every message "author: topic", so every message is a separate conversation in GMail. I decided to make use of GMail's conversation feature and did a little modification to rss2email so that if the message is from this forum, then everything up to the first colon is cut from the title and inserted as a part of the name of the sender. In the end the subject of every message contains only the topic of the corresponding thread in the forum, and thus all messages from one thread are neatly grouped together as a single conversation. At the same time every message's From: looks like "$forum_name $author <custom@from.address.com>", so that I can still see who wrote what in the forum.
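A minimal sketch of that tweak (rss2email is written in Python; the helper name here is made up for illustration, not actual rss2email code):

    def split_forum_title(title, forum_name, from_addr):
        # Forum titles look like "author: topic". Move the author into the
        # sender's name so GMail threads a whole topic as one conversation.
        author, sep, topic = title.partition(':')
        if not sep:
            return title, from_addr  # no "author:" prefix, leave unchanged
        sender = '"%s %s" <%s>' % (forum_name, author.strip(), from_addr)
        return topic.strip(), sender

    # split_forum_title("alice: Best OP ever?", "AnimeForum", "custom@from.address.com")
    # -> ('Best OP ever?', '"AnimeForum alice" <custom@from.address.com>')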

Another idea came to me in shower - I've...

Another idea came to me in the shower - I've been reading up on the MS Office 12 UI changes and I think there are some very nice ideas there, but we can do better.
The main principle of the Ribbon is that all functions are there, but some are smaller than others based on their priority.
It came to me that when you design a UI in, for example, Glade, you are basically creating a dynamic structure that can scale up or down. The only two things missing to make it a Ribbonesque interface are: 1. a unique priority for each widget, to decide which widgets to reduce or increase in size, 2. multiple size versions of each widget - buttons from 128x128px down to 16x16px, ...
For situations when 16x16px is not enough for the widget (an edit box, for example) one could make a micro-button that brings up the rest of the widget as a popover when pressed, or simply not show the widget. Less important widgets would simply not be displayed at smaller screen/window sizes (hidden behind a generic "+" icon meaning more functions in a category).
This would allow an application to use those huge screens of the future for bigger and more detailed buttons/widgets, and at the same time would increase the usability of applications at small screen sizes. Or maybe I am just thinking too far.
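A toy sketch of such a layout pass (all names invented; a real toolkit would hook this into its size negotiation):

    class Widget:
        def __init__(self, name, priority, sizes):
            self.name = name          # e.g. "bold-button"
            self.priority = priority  # unique; higher = stays big longer
            self.sizes = sizes        # widths in px, largest first: [128, 64, 16]

    def layout(widgets, available):
        # Start every widget at its largest size, then shrink the least
        # important ones first; whatever still does not fit gets hidden
        # behind the generic "+" overflow icon.
        size_idx = {w.name: 0 for w in widgets}
        hidden = set()
        by_importance = sorted(widgets, key=lambda w: w.priority)

        def total_width():
            return sum(w.sizes[size_idx[w.name]]
                       for w in widgets if w.name not in hidden)

        for w in by_importance:   # shrink pass, least important first
            while total_width() > available and size_idx[w.name] < len(w.sizes) - 1:
                size_idx[w.name] += 1
        for w in by_importance:   # hide pass, if shrinking was not enough
            if total_width() > available:
                hidden.add(w.name)
        return size_idx, hidden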

Folding@Stage

Flashback to childhood
A contest at Animefest - we are all trying to remember how paper boats are made. Here I have a moment of "enlightenment" :)

Another bug has pissed me enough to star...

Another bug has pissed me off enough to start debugging. This time it is totem-xine crashing on startup in Ubuntu Dapper.

The first thing is that you can't rebuild totem from sources multiple times after the Ubuntu patches - Ubuntu uses dpatch to patch something in the automake files, and after the build has been run the unpatch fails, thus preventing a rebuild, doh! Worked around that by removing that patch. (Bug not reported yet.)

After installing totem-gstreamer, my main suspect is the change to the statusbar, which looks very recent. Could it be that the Totem developers forgot a critical fix to the xine backend? Could it be that they treat the xine backend as a ... second class citizen? Second to what? To that GStreamer? I tried to use GStreamer, I really did, but there are a few tiny issues: 1) it doesn't open even half the files that xine does, 2) within 5 minutes of a movie the audio and video can easily get out of sync by 5 seconds. I have never seen an A-V sync problem in xine. Ever. I love telling our Windows-using friends that my movies "just work" with totem-xine, please do not take that away!

Anyway - back to the bug we go.

As we have a clean crash, I recompiled totem with debugging symbols (DEB_BUILD_OPTIONS="nostrip noopt" debuild -us -uc) and ran it under gdb. When totem crashed, I got the line of code where it happened:

(totem:4608): GLib-GObject-WARNING **: invalid cast from ` ' to `TotemTimeLabel'

Program received signal SIGSEGV, Segmentation fault.
0x08068659 in totem_time_label_set_time (label=0x8199a60, time=0, length=0) at totem-time-label.c:69
69 if (time / 1000 == label->priv->time / 1000

Now, that is interesting; let's see what we have here - time is an int, so no segfaults from there, but label is a TotemTimeLabel. Hmm, that warning now makes sense. And when we take a look at label->priv, it appears to be a pointer to TotemTimeLabelPrivate with an address of 0xffffffff. That's the problem; now we only need to backtrace through the program and find the bug that is causing it.

Well, it all looks pretty nice - there is a "tick" event in the player that calls the time update. It is not really clear why there is such a discrepancy between GtkLabel and TotemTimeLabel, or why this structure is not initialized in time. Stranger still, the gstreamer backend never calls this function at all. Weird. Let's see what happens if I just return from it without doing anything. That does not help - now the statusbar is crashing.

Let's try it from another angle - it worked before, and nothing much in totem has changed since the release of Breezy. Installing the version from Breezy: it works fine. Recompiling the version from Breezy on Dapper: crashes. Ouch! It looks like the xine backend of totem has not been ported to that new crazy Gnome 2.12 thingie like the gstreamer backend was. Strange - that is a backend, it should not be dependent on the frontend, no? Anyway, it is not something I can fix - I will have to install the Breezy version, hack some dependencies to make it not conflict with one optional library and then file a critical bug on totem for breaking the xine backend.

But even that will have to wait till tomorrow; sleep is of the essence, anyway.

Do you want to hear the most incredible ...

Do you want to hear the most incredible "it's not a bug - it's a feature" story ever?

After shooting hundreds of megs of RAWs with my Canon 350D over the last couple of weeks, I noticed a very strange thing - importing this large amount of files from my camera into F-Spot took ages. F-Spot ate memory in tens and hundreds of megabytes and never returned it to the system. Well, I blamed it on Mono and went searching for a better way. Then I found out that gphoto, a command-line C program, also takes the same horrific amount of memory to import my photos. I saw that to download 900 Mb of photos (~250 photos), gphoto's memory use went up to ~910 Mb (of which 2 Mb were shared). Luckily Linux managed to swap out part of gphoto, so I could finish the download with my 512 Mb of real RAM and a 1 Gb swap file. I googled and found tens of bug reports on this - the first of them as early as December 2004. Ouch.

Well - let's see what the problem is, shall we? Some bug reports reference a bug in gphoto's SourceForge bug tracker where a user reports that downloading a 250 Mb video file takes 250 Mb of RAM, and the developers reply that unfortunately that is a limitation of the current infrastructure and is very hard to fix. Bummer.

But wait! He says that downloading ONE file takes a lot of RAM. This limit should not exist when downloading multiple files - we should be able to drop information about the previous file as soon as we start downloading the next one, right?

OK, let's see what is really going on there. Downloading the source of gphoto. Looking at it. Seeing a lot of mess. After around 10 minutes I start to understand that there is a table of option names and functions, and the real job is done by the command line parser, which calls a function as soon as it encounters the matching parameter on the command line. :P After 3 more minutes of jumping around the code I finally get to the function that gets called to download a single file. It looks pretty simple:


  • take a CameraFile pointer

  • pass it to gp_file_new() for initialization

  • pass it to gp_get_file() to get the actual data of the file (the download happens here)

  • pass it to gp_write_file_to_file() to dump the data to a file on disk

  • pass it to gp_file_unref() to free the data
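
In rough pseudocode (Python notation; the gp_* calls are as the list above renders them, not a real Python binding), the per-file flow is:

    def download_all(camera, filenames):
        for name in filenames:
            f = gp_file_new()               # fresh CameraFile, reference count 1
            gp_get_file(camera, name, f)    # the actual download happens here
            gp_write_file_to_file(f, name)  # dump the data to a file on disk
            gp_file_unref(f)                # drop our reference, freeing the data

So the memory for each file should be reclaimable as soon as the next one starts downloading.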


It all looks fine and dandy so far. However, the memory use I am seeing suggests that this last operation does not happen as it should, so I search for the gp_file_unref() function. I do not find it in the gphoto source, but as I soon figure out, it is in libgphoto2. The function is pretty straightforward - the reference count of the structure is reduced by 1 and, if it has reached 0, the structure is freed from memory via the gp_file_free() function.

Hmm, I wonder what will happen if I replace gp_file_unref() with gp_file_free() in gphoto? After a quick compile and installation (I thank the Gods and all DDs for the wonders of "debuild -us -uc && sudo dpkg -i ../gphoto*.deb") I ran gphoto again. Wow, it now only consumes 8-16 Mb of RAM and not 900. The files downloaded fine, but in the end glibc made a lot of fuss about a "double free". What does that mean? It means that someone managed to get a reference to our CameraFile and didn't give it back. Naughty boy!
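
To see why forcing the free both fixed the memory use and upset glibc, here is a toy model of the refcounting involved (Python standing in for the C; the class and its methods are illustrative, not the libgphoto2 API):

    class ToyFile:
        def __init__(self, data):
            self.data = data
            self.refs = 1      # a new file starts with one reference

        def ref(self):
            self.refs += 1     # some other holder, e.g. a hidden cache

        def unref(self):
            self.refs -= 1
            if self.refs == 0:
                self.free()    # only the last holder actually frees

        def free(self):
            self.data = None   # forced free, ignoring any other holders

    # If something silently calls ref() and never lets go, unref() in the
    # downloader never reaches zero and the data piles up in RAM (the leak).
    # Calling free() directly does release the memory, but the other holder's
    # later cleanup then frees already-freed data: glibc's "double free".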

We only call three functions using that pointer, so it should not be hard to trace them through the source to see what they do. The gp_file_new() function looks good; it always sets the reference count to 1. gp_get_file() is more complex - I get to crawl through a lot of strange redirects across all levels of the gphoto architecture. At one point I get a bit alarmed as I see a local variable called ref_count, but then I see that the code just stores the reference count there for safekeeping while data is copied from another object, and right after the copy the reference count is safely put back. After all that I get to the end of the gp_get_file() function, with just a couple of things left - cache the result, clean up and return the file. Wait a minute ....

CACHE?!?!?!!

$(&@($^@#$(^@&^$(#&$@#(&$(@#$&!^&$^@*!(&$#(@& !!!!!

It appears that someone thought it was a good idea to use a gig or so of my RAM as a cache, just in case I would like to download the same photos a second time in the same program invocation. IT IS NOT!

Results: a one-line patch, one NMU building for upload, one *very* long bug in the upstream bug tracker, and one developer quite upset and not too convinced about the correctness of the free software ways any more :P

Sky burn

One more day without taking a shot, but with a bit of productivity :) While on the topic of photography, I must say - if you work with photos in Linux, use UFRaw and that other thing. The other thing doesn't even have a preview when converting, but UFRaw can autodetect a white balance for this photo in such a way as to turn those clouds white and blue - i.e. remove any color influence of the sunset.
Now to the geeky stuff - today I made the community web site for SBackup, you can find it here. After evaluating the options I went with Wikka Wiki, as it is much simpler codewise than MediaWiki or Trac, and at this point I mainly want simplicity there. Finally there is a bit of documentation for the SBackup project and a way for users to contribute to it.
About media players - I still use AmaroK, despite being a hardcore Gnome user. I like the command line control interface, the automatic lyrics downloads and the dynamic mode with a stream of "suggested" songs.
