Aigarius Blog (Posts about programming)http://aigarius.com/categories/programming.atom2021-06-30T20:20:27ZAigars MahinovsNikolaAutomation of embedded developmenthttp://aigarius.com/blog/2018/03/23/automation-of-embedded-development/2018-03-23T13:03:10Z2018-03-23T13:03:10ZAigars Mahinovs<p>I am wondering if there is a standard solution to a problem that I am facing. Say you are developing an embedded Debian Linux device. You want to have a "test farm" - a bunch of copies of your target hardware running a lot of tests, while the development is ongoing. For this to work automatically, your automation setup needs to have a way to fully re-flash the device, even if the image previously flashed to it does not boot. How would that be usually achieved?</p>
<p>I'd imagine some sort of option in the initial bootloader that would look at some hardware switch (one that your test host could trip programmatically) and, if it is set, boot into a very minimal and very stable "emergency" Linux system. You could then ssh into that, mount the target partitions and rewrite their contents with the new image to be tested.</p>
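<p>For illustration, a minimal host-side sketch of what such an automation step could look like - "relay-ctl" is a made-up helper standing in for whatever trips the hardware switch, and the rescue hostname and partition layout are equally hypothetical:</p>
<pre># hypothetical sketch - relay-ctl, target-rescue and the partition are placeholders
relay-ctl --port 3 on              # force the bootloader into rescue mode
relay-ctl --power cycle            # power-cycle the board
sleep 30                           # wait for the emergency system to come up
ssh root@target-rescue 'dd of=/dev/mmcblk0p2 bs=4M conv=fsync && sync' < new-rootfs.img
relay-ctl --port 3 off
relay-ctl --power cycle            # reboot into the freshly flashed image</pre>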
<p>Are there ready-made solutions that do such a thing? Generically, or even just for some specific development boards? Do people solve this problem in a completely different way? I was unable to find any good info online.</p>Distributing third party applications via Docker?http://aigarius.com/blog/2014/09/25/distributing-third-party-applications-via-docker/2014-09-25T19:09:00Z2014-09-25T19:09:00ZAigars Mahinovs<p>Recently the discussions around how to distribute third party applications for "Linux" have become a new topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution provide a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end-users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.</p>
<p>For me the topic really hit home at Debconf 14 where <a href="http://meetings-archive.debian.net/pub/debian-meetings/2014/debconf14/webm/QA_with_Linus_Torvalds.webm">Linus voiced his frustrations with app distribution problems</a> and some of that was also touched on by <a href="http://meetings-archive.debian.net/pub/debian-meetings/2014/debconf14/webm/SteamOS_and_Debian.webm">Valve</a>. Looking back we can see passionate discussions and interesting ideas on the subject from <a href="http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html">systemd developers</a> (<a href="http://www.superlectures.com/guadec2013/sandboxed-applications-for-gnome">another</a>) and <a href="http://blogs.gnome.org/aday/2014/07/10/sandboxed-applications-for-gnome/">Gnome developers</a> (<a href="http://blogs.gnome.org/aday/2014/07/23/sandboxed-applications-for-gnome-part-2/">part2</a> and <a href="http://blogs.gnome.org/uraeus/2014/07/10/desktop-containers-the-way-forward/">part3</a>).</p>
<p>After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.</p>
<p>There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux application distribution on <a href="http://www.docker.com/">Docker</a>. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has semantics for defining images, creating them, running them, explicitly linking resources and controlling processes.</p>
<p>Let's play out a simple scenario of how third party applications should work on Linux.</p>
<p>A third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in the "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a docker image is built from this compiled folder, it is based on the "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after <a href="https://github.com/docker/docker/issues/8214">#8214</a> is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in whatever way is comfortable for them.</p>
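<p>Sketched with today's Docker commands, and using the same placeholder image and file names as the scenario above (nothing here is a real published image, and the make invocation just stands in for whatever the game's build system actually is), the developer side could look roughly like this:</p>
<pre># compile the game inside the dev image and copy the build output to ./output
docker build -t mygame-build .
docker run --rm -v "$PWD/output:/output" mygame-build make install DESTDIR=/output
# build the runtime image from the compiled folder and its own Dockerfile
docker build -t mygame:1.0 ./output
# export the result as a single file that can be handed to end users
docker save mygame:1.0 > mygame-1.0.docker</pre>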
<p>The end user would download the game file (either through an app store app, an app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUI to launch on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio or whether to mount a folder into the container where the game would be able to keep its save files. Or even whether the application would be able to access X (or Wayland) at all.</p>
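<p>Behind the double click, the launcher GUI would end up doing something along these lines - the paths, image name and game command are only illustrative, and the mounts exist only because the user granted the matching permissions:</p>
<pre>docker load < mygame-1.0.docker    # import the downloaded game image
# PulseAudio socket, X11 socket and a save-game folder are only passed in
# because the user granted those permissions at install time
docker run --rm \
    -v /run/user/1000/pulse:/run/user/1000/pulse \
    -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
    -v $HOME/.local/share/mygame:/home/game/saves \
    mygame:1.0 /opt/mygame/run.sh</pre>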
<p>Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.</p>
<p>On the sandboxing part, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. Also the window manager can now track all processes of a logical application from any of its windows.</p>
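<p>As a rough illustration of how that identification could work - assume the window manager has already resolved a window to its PID (for example via the _NET_WM_PID property); the container id shown is obviously made up:</p>
<pre>pid=12345                 # the process behind some window
cat /proc/$pid/cgroup
# 4:devices:/docker/3f1c9a7e...
# every process started inside the same container reports the same cgroup path,
# so all of its windows can be grouped under one logical application</pre>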
<p>For updates the developer can simply create a new image and distribute a file of the same size as before, or, if the purchase is going via some kind of app-store application, the layers that actually changed can be rsynced over individually, thus creating a much faster update experience. Images with the same base can share data, which would encourage the creation of higher level base images, such as "debian-app-gamegl:wheezy", that all GL game developers could use, thus getting a smaller installation package.</p>
<p>After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, while not requiring a recompile or a change to the game itself. If <a href="https://github.com/docker/docker/issues/8215">this</a> Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.</p>
<p>There is no need to re-invent everything, or to push everything - now including package management - into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in a different way for a different purpose, or replacing some part to create a system with radically different functionality.</p>
<p>Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?</p>
<p>P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of <a href="http://en.wikipedia.org/wiki/UNIX_System_V">System V</a>, jumping all the way to System <a href="http://en.wikipedia.org/wiki/Roman_numerals">D</a>, and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions can compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.</p>Going to Debconf14http://aigarius.com/blog/2014/06/13/going-to-debconf14/2014-06-13T17:06:35Z2014-06-13T17:06:35ZAigars Mahinovs<p>It's that time of the year again, when I plan to go to Debconf, reserve vacation, get a visa waiver and book tickets. Let's hope nothing blocks me from attending this time. It has been too long.</p>
<p>Now I just need to finish up <a href="https://github.com/aigarius/photoriver">photoriver</a> before Debconf :) In fact it is quite close to being ready - I just need to finish up the GPS tagging feature, figure out why Flickr stopped working recently and optionally work on burst detection and/or FlashAir IP address autodetection in the network.</p>Photo migration from Flickr to Google Plushttp://aigarius.com/blog/2014/03/27/photo-migration-from-flickr-to-google-plus/2014-03-27T18:03:40Z2014-03-27T18:03:40ZAigars Mahinovs<p>I've been with Flickr since 2005 now, posting a lot of my photos there, so that other people from the events that I usually take photos of could enjoy them. But lately I've become annoyed with it. It is very slow to upload to and even worse to get photos out of it - there is no large shiny button to download a set of photos, like I noticed in G+. So I decided to try and copy my photos over. I am not abandoning or deleting my Flickr account yet, but we'll see.</p>
<p>The process was not as simple as I hoped. There is this FlickrToGpluss website tool. It would have been perfect ... if it worked. In that tool you simply log in to both services, check which albums you want to migrate over and at what photo size, and that's it - the service will do the migration directly on their servers. It actually feeds Google the URLs of the Flickr photos, so the photos don't even go through the service itself, only metadata does. Unfortunately I hit a couple of snags - first of all, the migration stopped progressing a few days and ~20 Gb into the process (out of ~40 Gb). And for the photos that were migrated, their titles were empty and their file names were set to the Flickr descriptions. Among other things that meant that when you downloaded the album as a zip file with all the photos (which was the feature that I was doing this whole thing for) you got the photos in almost random order - namely in the order of their sorted titles. Ugh. So I canceled that migration (by revoking the app's privileges on G+ - there is no other way to see or modify its progress) and sat down to make a manual-ish solution.</p>
<p>First, I had to get my photos out of Flickr. For that I took <a href="http://code.google.com/p/offlickr/">Offlickr</a> and ran it in set mode:</p>
<pre>./Offlickr.py -i 98848866@N00 -p -s -c 32</pre>
<p>The "98848866@N00" is my Flickr ID which I got from <a href="http://idgettr.com/">this nice service</a>, then -p to download photos (and not just metadata), -s to download all sets and -c 32 to do the download in 32 parallel threads. An important thing to do is to take all you photos that are not in a set in Flickr and add them to a new 'nonset" so that those photos are also picked up here, there is an option under Organize to select all non-set photos. It worked great, but there were a couple tiny issues:</p>
<ol>
<li>There is a <a href="http://code.google.com/p/offlickr/issues/detail?id=12">bug</a> in Offlickr where it does not honor pages in Flickr sets, so it only downloads the first 500 images in each set; a fix for that is attached to the bug report;</li>
<li>It also wanted Python 2.6 for some reason, but worked fine with Python 2.7;</li>
<li>With that number of threads Flickr sometimes failed to respond with the photo, serving an error page (a 504 Gateway Time-out) instead. Offlickr does not check the return code and happily saves that HTML page as the photo. To work around that I simply deleted the HTML errors and then ran the same Offlickr command again so that it re-downloads the missing files. I had to repeat that a few times to get all of them:</li>
</ol>
<pre>ack-grep -l -R "504 Gateway Time-out" dst/ | xargs rm</pre>
<p>After all that I had my photos, all 40 Gb of them, on my computer. Should I upload them to G+ now? Not yet! You see, the photos had all lost their original file names. It turns out Flickr simply throws that little nugget of information away. It is nowhere to be found - neither in the metadata, nor the UI, nor the Exif of the photos. Also, some of my photos had clever descriptions that I did not want to lose or re-enter in G+, as well as geolocation information. Flickr does not embed that info into the Exif of the image; instead it is provided separately - Offlickr saves it as an XML file next to each image.</p>
<p>So I wrote a simple and hacky <a href="http://aigarius.com/static/media/uploads/adddata.py">script</a> to re-embed that info. It did 3 things:</p>
<ol>
<li>Embed the title of the photo into the Description EXIF tag, so that G+ automatically picks it up as the title of the photo;</li>
<li>Embed the geolocation information into the proper EXIF tags, so that G+ picks that up automatically;</li>
<li>Create a new file name based on the original picture-taken datetime and the EXIF Canon FileNumber field (if one exists), so that all photos in an album are sequential.</li>
</ol>
<p>It uses <a href="http://www.sno.phy.queensu.ca/~phil/exiftool/">exiftool</a> for the actual heavy lifting.</p>
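<p>The gist of it for a single photo was roughly the following; the real script parses Offlickr's XML sidecar files, so the $TITLE, $LAT and $LON variables here are just illustrative, and the GPS reference directions are hard-coded for brevity:</p>
<pre># 1. put the Flickr title into the image description, 2. add the geo tags
exiftool -overwrite_original \
    -ImageDescription="$TITLE" \
    -GPSLatitude="$LAT" -GPSLatitudeRef=N \
    -GPSLongitude="$LON" -GPSLongitudeRef=E \
    "$PHOTO"
# 3. rename to DATE_FILENUMBER.jpg so the album sorts in shooting order
datetime=$(exiftool -s3 -d %Y%m%d-%H%M%S -DateTimeOriginal "$PHOTO")
filenum=$(exiftool -s3 -FileNumber "$PHOTO")
mv "$PHOTO" "${datetime}_${filenum:-0000}.jpg"</pre>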
<p>After all that was finished I tested the result by uploading a few images to G+ and checking that their titles were set correctly, that they had sane file names and that the geo information worked. After that I just uploaded them all. I tried figuring out the G+ API (they actually have one) but I was unable to get past the tutorial, so I abandoned it and simply uploaded the photos of each set in their own tab via a browser. That took a few hours. But that is much faster than with Flickr - around 4 MB/s versus 0.5 MB/s. And <a href="https://plus.google.com/u/0/photos/+AigarsMahinovs/albums">here</a> is the result. So far I kind of like it. We'll see how it goes after a year or so.</p>
<p>Now on to an even more fun problem - I now have ~40 Gb of photos from Flickr/G+ and ~100 Gb of photos locally. Those sets partially intersect. I know for a fact that there are photos in the Flickr set that are not in my local set and it is pretty obvious that there are some going the other way round. Now I need to find them. Oh, and I can't use simple hashes, because the Exif has changed and so have the file names for most of them. And not to forget that I often take a burst of 3-4 pictures, so there are bound to be some near-duplicate photos in each set too. This shall be fun :)</p>Moved to Mezzaninehttp://aigarius.com/blog/2013/10/21/moved-to-mezzanine/2013-10-21T09:10:43Z2013-10-21T09:10:43ZAigars Mahinovs<p>After my server, which had hosted my blog for some years, gave out its last breath (second motherboard failure), I decided it was time for a change. And not just a server change, but also a change in the blog engine itself. As I now focus on Python and Django almost exclusively at work, it felt logical to use some kind of Django-based blog or CMS. I tried django-cms and mezzanine and ... <a href="http://mezzanine.jupo.org/">Mezzanine</a> is so fast and simple that I simply stopped looking.</p>
<p>After simply following the tutorial and creating a <a href="https://github.com/aigarius/aigblog">skeleton project</a>, I had a ready-to-go site with all the CMS features, including a blog. I just had to change a few settings to make the blog module the home page of the site, adjust the site settings for the title, Google Analytics and such, and tweak the theme a bit to my liking.</p>
<p>This was my first real exposure to a <a href="http://getbootstrap.com/2.3.2/">Bootstrap</a> design. I must say - it is very simple to understand and modify if your needs fit within its limits. For example, I wanted to remove the left sidebar and expand the main content block to fill that. All I had to do was to remove the div element with class "left-sidebar span-2" and change the class of the main content part from "span-7" to "span-9". To do that I simply copied the templates/base.html file from mezzanine default templates and modified it. The information from django-debug-toolbar showed me what files were used in rendering the page.</p>
<p>But the feature that really got me hooked was the Wordpress import. Using a simple management command I was able to feed the XML export file from Wordpress into the Mezzanine instance (sketched below, after the list of issues). It created blog posts, categories, comments, static pages and even redirects from Wordpress permalinks to Mezzanine permalinks. It was not flawless - there were a few issues:</p>
<ul>
<li>I had to set the COMMENT_MAX_LENGTH setting to something higher than the default 3000 chars to accommodate some longer comments</li>
<li>As I failed to clean up the comments before exporting, comments marked as spam in Wordpress still got into the XML and showed up in the new blog without being marked as spam there</li>
<li>Some comments (mostly spam) had user names much longer than the 50 character maximum. Even with the --noinput setting that should have truncated the names, the import errored out as Django passed the long strings to the Postgres database, which promptly refused to store too much data or to truncate it. I chose to work around that by increasing the column size in Postgres.</li>
<li>One of the posts did not have a set title, so the import took the whole first paragraph as the title and then failed to create a good slug from it. This caused the redirect creation to fail. I fixed this by editing the XML file and setting a title.</li>
<li>It looks like the slug creation for the redirect and the actual slug are slightly different. Some permalink redirects for posts with non-ascii symbols in the title failed to link up correctly</li>
</ul>
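<p>For reference, the import itself boils down to a single management command pointed at the Wordpress XML export. The invocation below is from memory, so the exact command name and options may differ between Mezzanine versions - check "python manage.py help" first:</p>
<pre># assumed invocation of Mezzanine's Wordpress importer
python manage.py import_wordpress --mezzanine-user=admin --noinput wordpress-export.xml</pre>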
<p>After that was done it was a relatively straightforward process of picking up the code and data and deploying it to a Django-friendly hosting service. There is plenty of good competition out there - most now offer a simple one-click Django installation - so I just created a simple Django skeleton via their web interface and then replaced what they generated with my app, while keeping their settings as local_settings.py. I should probably write a bit more detail about the process, after I create a custom fabric file for it.</p>
<p>It is quite a strange feeling to have a Mezzanine blog that responds faster from a shared server half a continent away than a Wordpress on a dedicated server in the same room.</p>
<p>There are a few features that I am still missing - most notably draft post autosave. That has bitten me hard while writing this post :P Also a Twitter digest post feature. But on the bright side - that is a great motivation to write such features, preferably in a portable way that other people can use too :)</p>MorzeSMShttp://aigarius.com/blog/2011/02/11/morzesms/2011-02-11T14:02:36Z2011-02-11T14:02:36ZAigars Mahinovs<p>Quick post. In light of recent Nokia+Microsoft-MeeGo news, I have gone to learn more about Android in a hurry. And here is the first result - MorzeSMS.</p>
<p>Basically it is a tiny app that will play Morse code when you receive an SMS message. The Morse code is the phone number of the sender in 'cut number' Morse form (to keep it shorter). The idea is that, first of all, Morse is a cool sound and, second, each sender gets a unique sound that you can learn to identify over time, so that you know who sent you a message as soon as you hear the beep.</p>
<p>This is a very early beta - there is no UI and no configuration. You can download it <a href="http://www.aigarius.com/files/MorzeSMS.apk">here</a>. Leave bug reports in the comments of this blog post. It does not disable the stock SMS-received sound.</p>So, what is Microsoft Azure and how it compares to Amazon's AWShttp://aigarius.com/blog/2010/05/19/so-what-is-microsoft-azure-and-how-it-compares-to-amazons-aws/2010-05-19T18:05:18Z2010-05-19T18:05:18ZAigars Mahinovs<p>We had a Microsoft salesperson stop by our offices today to tell people about the wonders of cloud computing and how Microsoft's Azure will save us all.</p>
<p>The following is a short and non-exhaustive list of what Azure does not do, according to their own experts:<br></p><ul><br> <li>No Infrastructure as a Service - which means that they don't actually offer you servers to run your code on; they only offer Windows Server instances to which you can deploy your code. There is no imaging and no platform choice. In half a year or so they plan to roll out Virtual Machine Instances where you will actually be able to upload your own images of what you want to run, and then running Linux on Azure will be possible, but that will run with double virtualization, so it is likely to be incredibly slow</li><br> <li>No fast key-value storage - they only provide SQL servers, with blob storage if you want, but nothing in the NoSQL scalability department</li><br> <li>Their Windows Azure (a limited version of the Windows Server OS that they actually run in their cloud) does not run most of their current server solutions - they are working to migrate them one by one</li><br> <li>The SQL service has no backup (but raw data storage has data triplication)</li><br> <li>No autoscalability as such (but they do provide a tool that you can install on your cloud servers to launch additional instances when load rises, or kill instances - reliability and configurability are unclear)</li><br> <li>No VPN (they are working on it, might be delivered in 12 months; meanwhile they suggest using a Service Bus - their equivalent of Amazon SQS - to communicate with your cloud servers, and pay a subscription fee + per-request fee + bandwidth fee)</li><br> <li>No way to launch a server with more than 14 Gb of RAM - both the Amazon High CPU and High RAM server offerings were alien to the presenter</li><br> <li>Their SQL server offering is a monthly subscription - no pay-as-you-go there</li><br> <li>No reserved instances; Microsoft is willing to talk about volume discounts, but only later, at some point in a year or two when they start actually providing service contracts</li><br> <li>Their bandwidth fees for Asia are three times higher than in other regions for some reason</li><br> <li>If you think of any interesting feature, like CloudFront, CloudWatch, MapReduce, FWS, load balancing, notification service, ... Microsoft Azure does not have it.</li><br></ul>
<p>The Microsoft presenter also repeatedly engaged in shady tactics when presenting his information:<br></p><ul><br> <li>He implied that you need to pay for Windows licences on Amazon</li><br> <li>He claimed that starting up an instance on Amazon with any custom configuration (like from your own image) would take a lot of time</li><br> <li>He claimed that Microsoft invented container data centres</li><br> <li>He claimed that Microsoft is the only provider that you can trust to store your data in the location that they advertise - when confronted with the fact that Amazon has an EU datacenter and provides you a clear way to get your data there, he actually claimed that Amazon might leak your data to other datacenters - 'How can you know they will not?'</li><br> <li>He implied multiple times during the presentation that no other cloud providers can be trusted to keep data within the EU - he even tried to say that Amazon only opened their first EU datacenter last month and was very surprised and annoyed when corrected (they had just opened an Asian datacenter and have had an EU datacenter for a long time)</li><br> <li>Tried to pass off the fact that you can only run Windows as a 'good choice for the customers'</li><br> <li>Talked about planned features as if they were implemented and available already - like MS Systems Centre online, which he spent 5 minutes talking about, describing how it will enable migrating services from your in-house private cloud to the Azure cloud, and then barely mentioned that it might mature to beta status at the end of 2010</li><br> <li>Tried to pass off Amazon as 'too low level and complicated' while contrasting it to 'niche' force.com and Google App Engine and then presenting their solution that will 'run all kinds of .Net and Silverlight applications'! Yeah, apparently their platform does not even run regular Windows .exe files</li><br> <li>Tried to pass off their SLAs (non-contractual guarantees of service levels, which are unlikely to be enforceable beyond a 'money back guarantee') as a key selling point, even though they don't even offer actual service contracts. <strong>Update:</strong> Also claimed that Amazon does not provide SLAs, while they actually do</li><br> <li>With fanfare claimed 14k applications running in their cloud; when pressed, noted that most of these applications don't even pay them, because they got in on special promotions.</li><br></ul>
<p>So to summarize, Microsoft has built a cloud of Windows-like machines which can run your programs (as long as they are in .Net or Silverlight) and which does not have even a third of the features of a de-facto market leader like Amazon AWS. They try giving away their product for free and only got 14k applications in 6+ months. </p>
<p>Note: Amazon has at least 300 000 <em>public</em> applications and God only knows how many more private ones.</p>
<p>They will grow, due to companies locked into Microsoft's way of thinking buying into their marketing, but if you have a head on your shoulders, you should avoid it as the trap that it actually is. All that can be done on Azure can also be done on Amazon. The opposite is not always true. In addition, if you want to have a mixed environment with both Windows and Linux servers, you will not be able to deploy the Linux servers to Azure (or they will be very slow), so you'll have to put your Linux servers into the Amazon (or another) cloud and pay double traffic fees for every byte of traffic between your own servers.</p>
<p>Microsoft Azure prices are exactly the same as Amazon EC2 prices for Windows instances. However, on Amazon you actually get fully capable Windows Server instances. And if you don't need Windows, you can get a much better price on Amazon.</p>
<p>The only companies that can safely use Azure are companies that only have Windows servers AND plan to only have Windows servers for the foreseeable future. Knowing how dominant Linux is in the server market and how many 'Windows-only' shops actually have a few Linux servers in the back room, there is very little future for Azure, until they wise up and make Linux instances first class citizens in their cloud. If not because their customers need them, then because their customers think that they at least might need them in the future.</p>
<p><strong>Update: </strong> Amazon has now released Reduced Redundancy storage for S3 that provides the exact same level of storage guarantees as the regular Microsoft Azure SLA (99.9%) for two thirds of the price, and Amazon now also explicitly says that their regular S3 is 'Designed for 99.999999999% Durability', which is worlds above what Microsoft provides. So in this case Amazon is again cheaper than MS. Also, Amazon bandwidth prices are 33% lower compared to Azure. In Asia the difference is simply staggering (0.45$ vs 0.19$ per Gb).</p>Going to Debconf10!http://aigarius.com/blog/2010/05/06/going-to-debconf10/2010-05-06T20:05:15Z2010-05-06T20:05:15ZAigars Mahinovs<p>After a few months, today I finally got the approval and <img alt="" class="alignnone size-full wp-image-1505" height="101" src="http://www.aigarius.com/blog/wp-content/uploads/2010/05/im_going_to_debconf10.png" title="im_going_to_debconf10" width="200"></p>
<p>As I work for Accenture now, I got them to pay for the plane tickets and also spring for the 'Professional' registration, so more people can attend. I am very happy, and very grateful to the company and to the managers here in Riga office for being so very supportive. It is refreshing to see that even a large corporation can be quite nimble.</p>
<p>I hope I can get on a direct Riga - New York flight (there actually is one, once a week, by Uzbekistan Airways) which would be a wicked cool way to save travel time and worries. This might even be faster end-to-end than last year going to Spain (via Frankfurt). Now I only need to replace my passport with the new biometric one, to qualify for USA Visa Waiver and I'll be good to go :)</p>Google Wavehttp://aigarius.com/blog/2009/06/01/google-wave/2009-06-01T13:06:52Z2009-06-01T13:06:52ZAigars Mahinovs<p>So, the latest buzz on the web is all about <a href="http://wave.ggogle.com">Google Wave</a>. I would urge everyone developing stuff for the Internet and technological people depending on the Internet for their daily work, to watch that introductory video. The concept is frankly mind-blowing. If this is done right and embraced by all the right people, Google Wave could be the new platform concept that could be used to create new generation of email, instant messaging, collaboration software (CMS, wiki, Sharepoint, workflow, ...), blogging software and forum software and do all that while integrating back with current technologies, like Twitter and RSS feeds.</p>
<p>It remains to be seen what parts will be open source and federated and what parts will remain Google-only services. For example, I am convinced that Spelly the spellchecker bot and Rozy the automatic translation bot will be hard to impossible to federate. This being Google, it is practically a given that the code will be good, but a bit hard to understand and contribute to. Management of the free software community and proprietary community relations will likely be the key to the success of this technology.</p>
<p>I love the technology itself - imagine that you have a web blog. The blog software makes each of your blog posts a wave. When people leave comments on the post, those comments show up in your 'email' inbox. You can reply to people right from there. By default your replies will show up on your blog and also in the email inboxes of the commenters, but you also have the option to mark your reply private and only send it to the commenter. It also looks to be possible to choose to host all the data yourself or to let Google host all your blog and comment data, making your blog just a frame that this data is displayed in, possibly allowing you to use very, very minimal system resources to maintain the blog even in the case of a Slashdotting. I hate blogs without comments, let's replace them all.</p>
<p>I would also love to see replacements for Microsoft Exchange and Microsoft Sharepoint built using Wave technology, with full support for ODF if possible. Imagine collaborative document editing in OpenOffice using this technology in the backend. It would be one huge project, but it should be possible. Companies currently pay hundreds of thousands of dollars to set up MS Exchange+Sharepoint+Active Directory to be able to simply share documents in their Intranet and see who edited a document (no live editing, no change tracking). It is possible to make a much better product with ODF and Google Wave, and it is possible to earn a bunch of money supporting such a product for companies that need support contracts. If it were possible to licence spellchecking and translation services from Google in a way where a box is installed in the customer's data centre and the data to be spellchecked and translated never leaves that data centre, then I believe that people would be willing to pay for this superior spellchecking and translation experience.</p>
<p>We need open source products that people can depend on. We also need ways for business people to sell services with an easily visible added value so that they can make money and contribute their developer work hours back to the open source projects. We (the free software community) have great ways to get a positive feedback loop with users that can develop software, but when our users have no interest in developing anything, the feedback loop breaks and free software growth slows down. There is a lot of untapped potential in this area - we just need to find a way to convert needs and demands of non-developers into code.</p>
<p>I'm gonna stop this rant before it diverges even further off topic, but the main point is - go watch the Google Wave demo, read the tech specs and think how you could integrate that into your free software project.</p>
<p>Useful wave discussions (will update as I find more):<br></p><ul><br> <li><a href="http://www.maetico.com/everything-and-wave/">About Wave from the TG2 guy</a></li><br> <li><a href="http://johnfmoore.wordpress.com/2009/05/31/google-wave-smb-crm/">SMB CRM on Wave (ideas)</a></li><br> <li><a href="http://twitter.com/waveappreview/">A Twitter feed with more useful links</a></li></ul>
<p></p>WoWHead client for Linuxhttp://aigarius.com/blog/2008/06/26/wowhead-client-for-linux/2008-06-26T12:06:17Z2008-06-26T12:06:17ZAigars Mahinovs<p>This is highly unofficial, but if you want to upload your <a href="http://www.worldofwarcraft.com">World of Warcraft</a> statistics to <a href="http://www.wowhead.com">WoWHead</a> in Linux, then you might be able to do so by using the following script. You will need curl and wget installed.</p>
<p><code><br>#!/bin/sh</code></p>
<p>#Path to WoW<br>WOW="/home/user/games/World of Warcraft"<br>#User name on WoWHead<br>USER="guest"<br>#MD5sum of your WoWHead password<br>PASS="badbeef666badbeef666badbeef666ba"<br>#Your MAC address (without separators, lower case)<br>MAC="010203040a0b"<br>#WoW account name<br>ACC="wowman"<br>#WoW Locale<br>LOCALE="enUS"</p>
<p># Ignoring the update info for now, just downloading it all<br>wget -q http://client.wowhead.com/files/updates.xml -O /dev/null<br>rm -rf "$WOW/Interface/Addons/+Wowhead_Looter"<br>mkdir -p "$WOW/Interface/Addons/+Wowhead_Looter"<br>wget -q http://client.wowhead.com/files/Wowhead_Looter.lua -O "$WOW/Interface/Addons/+Wowhead_Looter/Wowhead_Looter.lua"<br>wget -q http://client.wowhead.com/files/+Wowhead_Looter.toc -O "$WOW/Interface/Addons/+Wowhead_Looter/+Wowhead_Looter.toc"<br>wget -q http://client.wowhead.com/files/Wowhead_Looter.xml -O "$WOW/Interface/Addons/+Wowhead_Looter/Wowhead_Looter.xml"<br>wget -q http://client.wowhead.com/files/Localization.lua -O "$WOW/Interface/Addons/+Wowhead_Looter/Localization.lua"</p>
<p># Ignoring authentication errors. This should return "0" on a successful login.<br>wget -nv "http://client.wowhead.com/auth.php?username=$USER&password=$PASS&macAddress=$MAC" -O /dev/null</p>
<p># Uploading all data <br>TMPDIR=`mktemp -d`<br>cd $TMPDIR<br>cp "$WOW/wtf/Account/$ACC/SavedVariables/+Wowhead_Looter.lua" .<br>cp "$WOW/Cache/wdb/$LOCALE/creaturecache.wdb" .<br>cp "$WOW/Cache/wdb/$LOCALE/gameobjectcache.wdb" .<br>cp "$WOW/Cache/wdb/$LOCALE/itemcache.wdb" .<br>cp "$WOW/Cache/wdb/$LOCALE/pagetextcache.wdb" .<br>cp "$WOW/Cache/wdb/$LOCALE/questcache.wdb" .<br>gzip *<br>curl -F "file0=@+Wowhead_Looter.lua.gz" -F "file1=@creaturecache.wdb.gz" -F "file2=@gameobjectcache.wdb.gz" -F "file3=@itemcache.wdb.gz" -F "file4=@pagetextcache.wdb.gz" -F "file5=@questcache.wdb.gz" "http://client.wowhead.com/upload.php?username=$USER&password=$PASS&macAddress=$MAC"<br>cd<br>rm -rf $TMPDIR<br></p>
<p>To get MD5sum of your password use this:<br><code><br>echo -n "password" | md5sum<br></code></p>