Viewing posts for the category software
Firefox backing up.
A bug is bugging me, but it is unclear to me what the cause is, so no report or fix is yet in sight - whenever I scroll up in Firefox using the mouse wheel there is a ~10% chance that the "back" action will be executed. It has happened to me a lot of times now, and I haven't noticed similar symptoms in other programs.
Ubuntu Dapper, FF 1.5, USB mouse, HP nx6110 notebook. Any advice from the lazyweb?
On the weekend I was at the school reunion, and among other matters I gave my school a present of 50 Ubuntu CDs to use in school and to give out to the pupils most interested in computers. After that I talked to my IT teacher, who is now an IT adviser to the principal. It seems that all that we were taught in high school (office software) is being moved down to grade school, and it is currently unclear what will be taught to high school students. It could be programming, but it would take a large amount of time and money to prepare all the teachers for that.
It is quite some food for thought - find something challenging enough to teach in the 11th and 12th grades to children who have learned all of word processing and presentation making by their 5th grade, and at the same time make it very easy to prepare 40+ year old teachers to teach that material.
In this context I find the Little Wizard project very interesting - if the basics of programming can be taught so easily, maybe we still have a chance of getting it into the mandatory curriculum.
Aggregation the G way.
I've been using the magic of RSS (and Atom) to keep up with Planet Debian, blogs of my friends, tech news and posts to an anime fan forum for the past several months; however, one little problem bothered me - the forum has a lot of posts (sometimes more than 5 per minute) and an RSS feed of only the last five items. That meant I had to keep my RSS aggregator (Liferea) open at all times so that I would not miss any posts. However, keeping my laptop on and online at all times is quite bothersome, so I started looking for solutions.
I tried Google Reader, but I didn't like it much - I like to see more of my feeds. Too much screen space is wasted for all the wrong reasons, one can only see 5 items at a time (which is quite a hassle if you have 500+ of them), a lot of info I would like to see is not there (which blog did this come from?), there are some strange HTML conversions, ...
Now I have found my solution. It is a combination of rss2email and Gmail. I have a computer that I always keep on and online (it could be a server, but in my case it is a simple workstation). I installed rss2email there and added all my feeds to its database (hint: export the feed list from Liferea and do a bit of grep/sed magic), then modified the config so that all mails come from a single address, all have a custom identifiable header and all are HTML mails without any transformations. After that I configured cron to launch "r2e run" every half an hour and "r2e run 22" every minute (where '22' is the id of the feed of that anime forum). On the Gmail side I simply filtered all mails from rss2email into a separate label and archived them (so that they do not clutter my inbox).
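The grep/sed magic can just as well be done in a few lines of Python. A minimal sketch, assuming Liferea exports its feed list as OPML (where each subscription is an outline element with an xmlUrl attribute) - the helper name is mine, not anything from rss2email:

```python
import xml.etree.ElementTree as ET

def feed_urls_from_opml(opml_text):
    """Pull the feed URLs out of an OPML feed list export."""
    root = ET.fromstring(opml_text)
    # Each subscribed feed is an <outline> element carrying an xmlUrl attribute;
    # folders are <outline> elements without one, so we skip those.
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```

One could then feed each URL to "r2e add", e.g. `for url in feed_urls_from_opml(open("feedlist.opml").read()): print("r2e add " + url)` and pipe the output through sh.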
Now when I want to read my feeds, I go to the last message of the RSS label in my Gmail, open it, read it, star it if I need to look at it later and then press "k" to get to the newer message. After I am done reading I usually remove the RSS label from the read messages, so that they do not appear in this label's "folder" and are only kept in the Archive. If I start running out of space, I will simply search for and delete old RSS items.
Most of the screen space is now devoted to the message, I can see all the fields I want (source, author, topic, content, pictures, URLs) and I can manage messages by the hundreds. Also, being able to search both your mail and your RSS feeds at the same time is neat.
Yesterday I made a little improvement to this scheme. The forum that I read titles every message "author: topic", so every message becomes a separate conversation in Gmail. I decided to try the conversation feature of Gmail and made a little modification to rss2email so that if a message is from this forum, everything up to the first colon is cut from the title and inserted as part of the sender's name. In the end the subject of every message contains only the topic of the corresponding forum thread, and thus all messages from one thread are neatly grouped together as a single conversation. At the same time every message's From: looks like "$forum_name $author <firstname.lastname@example.org>", so that I can still see who wrote what in the forum.
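The transformation itself is tiny. A sketch of the idea (not the actual rss2email patch - function and forum name here are made up for illustration):

```python
import re

def rewrite_forum_message(subject, from_addr, forum_name="AnimeForum"):
    """Move the author prefix of an 'author: topic' subject into the
    From: display name, so Gmail can thread messages by topic."""
    m = re.match(r"([^:]+):\s*(.*)", subject)
    if not m:
        return subject, from_addr  # no author prefix; leave untouched
    author, topic = m.group(1).strip(), m.group(2).strip()
    new_from = '"%s %s" <%s>' % (forum_name, author, from_addr)
    return topic, new_from
```

With a subject like "Alice: Best OP ever" this yields the subject "Best OP ever" and a From: of "AnimeForum Alice", which is exactly what makes Gmail group the thread.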
Another idea came to me in the shower - I've been reading up on the MS Office 12 UI changes and I think there are some very nice ideas there, but we can do better.
The main principle of the Ribbon is that all functions are there, but some are smaller than others based on their priority.
It occurred to me that when you design a UI in, for example, Glade, you are basically creating a dynamic structure that can scale up or down. Only two things are missing to make it a Ribbonesque interface: 1. a unique priority for each widget, to decide which widgets to reduce/enlarge, 2. multiple size versions of each widget - buttons from 128x128px down to 16x16px, ...
For situations when 16x16px is not enough for a widget (an edit box, for example) one could make a micro button that brings up the rest of the widget as a popover when pressed, or simply not show the widget. Less important widgets would simply not be displayed at smaller screen/window sizes (hidden behind a generic "+" icon meaning more functions in a category).
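The allocation logic could be as simple as a greedy pass over the widgets in priority order. A toy sketch of that idea (all names and numbers are mine, purely illustrative, with widths standing in for whole size variants):

```python
def layout_ribbon(widgets, width):
    """Pick a size variant for each widget, highest priority first.
    widgets: list of (name, priority, sizes) with sizes listed largest-first.
    Returns {name: size}; a size of None means the widget is hidden
    behind the generic '+' overflow icon."""
    chosen = {}
    remaining = width
    for name, priority, sizes in sorted(widgets, key=lambda w: -w[1]):
        # Take the largest variant that still fits in the remaining space.
        size = next((s for s in sizes if s <= remaining), None)
        chosen[name] = size
        if size is not None:
            remaining -= size
    return chosen
```

At a width of 160px an "open" button with priority 10 would get its full 128px variant, while lower-priority widgets fall back to their 16px variants; shrink the window and they drop to None, i.e. behind the "+" icon - which is the Ribbon behaviour described above.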
This would allow an application to use those huge screens of the future for bigger and more detailed buttons/widgets and at the same time would increase usability of applications at small screen sizes. Or maybe I am just thinking too far.
Another bug has pissed me off enough to start debugging. This time it is totem-xine crashing on startup in Ubuntu Dapper.
The first problem is that you can't rebuild totem from source multiple times with the Ubuntu patches applied - Ubuntu uses dpatch to patch something in the automake files, and after the build has run, the unpatch step fails, thus preventing a rebuild, doh! Worked around that by removing that patch. (Bug not reported yet)
After installing totem-gstreamer, my main suspect is the change to the statusbar, which looks very recent. Could it be that the Totem developers forgot a critical fix in the xine backend? Could it be that they treat the xine backend as a ... second class citizen? To what? To that GStreamer? I tried to use GStreamer, I really did, but there are a few tiny issues: 1) it doesn't open even half the files that xine does, 2) within 5 minutes of a movie the audio and video can easily get out of sync by 5 seconds. I have never seen A-V desync in xine. Ever. I love telling our Windows-using friends that my movies "just work" with totem-xine, please do not take that away!
Anyway - back to the bug we go.
As we have a clean crash, I recompiled totem with debugging symbols ("DEB_BUILD_OPTS=nostrip,noopt debuild -us -uc") and ran it under gdb. When totem crashed, I got the line of code where it happened:
(totem:4608): GLib-GObject-WARNING **: invalid cast from ` ' to `TotemTimeLabel'
Program received signal SIGSEGV, Segmentation fault.
0x08068659 in totem_time_label_set_time (label=0x8199a60, time=0, length=0) at totem-time-label.c:69
69 if (time / 1000 == label->priv->time / 1000
Do you want to hear the most incredible "it's not a bug - it's a feature" story ever?
After shooting hundreds of megs of RAWs with my Canon 350D over the last couple of weeks, I noticed a very strange thing - importing this large amount of files from my camera into F-Spot took ages. F-Spot ate memory by tens and hundreds of megabytes and never returned it to the system. Well, I blamed it on Mono and went searching for a better way. Then I found out that gphoto, a command-line C program, also takes the same horrific amount of memory to import my photos. I saw that to download 900 Mb of photos (~250 photos), gphoto's memory use went up to ~910 Mb (2 Mb of it shared). Luckily Linux managed to swap out part of gphoto, so I could finish the download with my 512 Mb of real RAM and a 1 Gb swap file. I googled and found tens of bug reports on this - the first of them as early as December 2004. Ouch.
Well - let's see what the problem is, shall we? Some bug reports reference a bug in gphoto's SourceForge bug tracker where a user reports that downloading a 250 Mb video file takes 250 Mb of RAM, and the developers reply that unfortunately that is a limitation of the current infrastructure and is very hard to fix. Bummer.
But wait! He says that downloading ONE file takes a lot of RAM. This limit should not apply when downloading multiple files - we should be able to drop all information about the previous file as soon as we start downloading the next one, right?
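To make the expectation concrete: even if a single file must be buffered whole, a multi-file download should look like the following sketch (in Python for brevity; gphoto itself is C, and the names here are mine) - one file in flight at a time, with its buffer released before the next download starts:

```python
def copy_stream(src, dst, chunk=64 * 1024):
    """Copy src to dst in fixed-size chunks; peak memory stays
    around `chunk` bytes no matter how large the file is."""
    while True:
        block = src.read(chunk)
        if not block:
            break
        dst.write(block)

def download_all(camera, names):
    # One file at a time: nothing from the previous file is kept
    # once the loop advances to the next name.
    for name in names:
        with camera.open(name) as src, open(name, "wb") as dst:
            copy_stream(src, dst)
```

If memory use instead grows with the total size of all files downloaded, something is holding on to per-file data after the file is done - which is exactly what the bug reports describe.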
OK, let's see what is really going on there. Downloading the source of gphoto. Looking at it. Seeing a lot of mess. After around 10 minutes I start to understand that there is a table of option names and functions, and the real job is done by the command line parser, which calls a function as soon as it encounters the corresponding parameter on the command line. :P After 3 more minutes of jumping around the code I finally get to the function that is called to download a single file. Looks pretty easy: