Wednesday 30 December 2009

XLink FTW, Part 2

Reading through yesterday's blog entry on XLink, I feel there are things I need to clarify. In no particular order, here goes...
  • I think that a schema of some kind (speaking in the general sense and thus including anything from DTDs to XSDs) is always necessary for XML to work well. I know, that sets me apart from quite a few of the young whippersnappers in XML today, but I never considered the well-formedness advantage an advantage, even back when it was marketed as such. But then, I'm a dochead, to borrow Ken Holman's terminology.
  • Namespaces frequently make life difficult, especially for those of us who feel that DTDs are superior to XSDs. Those pesky namespace attributes keep popping up when processing XML, resulting in bug reports from desperate technical writers. And they all know my mobile number, it seems. However, namespaces really are a must these days, regardless of any refusal to import foreign namespaces into your XML, because most of XML's really useful sister recommendations depend on them.
  • I do consider data typing in XML to be largely unnecessary, if we stick to using XML for documentation and publishing. Only rarely have I felt the need to include data typing in a schema, and in most of those instances I have been proven wrong by more sensible colleagues (or come to my senses on my own, resulting in the quick removal of unnecessary data types in my XSD).
  • It is a pain to implement XLink in an XSD, but largely for reasons that have nothing to do with XLink as such and everything to do with namespaces (such as the problem hinted at above). Plus, of course, the fact that different XML editors still seem to implement different parts of the XML Schema recommendation in differing ways (or not at all).
  • DTDs, on the other hand, work like a charm with XLink attributes added, provided that your tools follow the XML spec (I have experienced problems with MSXML and its derivatives, which rather proves the point about tools); a minimal sketch follows this list.
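
To show what I mean, here's a minimal sketch of declaring simple XLink attributes in a DTD (the xref element is made up for the occasion; the attributes and the namespace URI come straight from the recommendation):

<!-- xref is a made-up element; the XLink attributes and namespace URI are from the recommendation -->
<!ELEMENT xref (#PCDATA)>
<!ATTLIST xref
  xmlns:xlink CDATA #FIXED "http://www.w3.org/1999/xlink"
  xlink:type  (simple) #FIXED "simple"
  xlink:href  CDATA #REQUIRED
  xlink:title CDATA #IMPLIED>

Since the namespace declaration and the type are #FIXED, a validating parser defaults them, so authors only ever have to supply the href value.
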
Thank you for reading.

Tuesday 29 December 2009

Was XLink A Mistake?

This morning, I read Robin Berjon's little something on XML Bad Practices, originally a whitepaper he presented at XML Prague 2009. I was there, presenting right after he did, and I remember that I nervously listened to his presentation while preparing my own (not my finest hour, but that's a story for another blog entry), wanting to address some of his points. While a lot of what he said made good sense, some of it didn't then and still doesn't now.

In Reusing the Useless, Robin discusses XLink, a recommendation that remains my personal favourite among W3C's plethora of recommendations. Apparently it's no-one else's, at least if Robin is to be believed. "Core XML specification produced by the W3C such as XSLT or XML Schema don't use it even though they have linking elements," he says, adding that very few have implemented anything but the rudimentary parts of it. But I get ahead of myself; let's see what Robin says. He starts out with this:
That feeling (and a general sense that reuse is good) leads people to want to reuse as many parts of the XML stack as possible when creating a new language. That is a good feeling, and certainly one that should be listened to carefully — there are indeed many good and useful technologies to reuse.
This, of course, makes a lot of sense. We are in the standardisation business, so we don't want to reinvent the wheel every time. Me, I've reused existing pieces of the stack time and again, and the one W3C recommendation I have used again and again is... XLink. It provides me with a neat way of defining link semantics without enforcing a processing model, from very simple point-to-point relations to multi-ended link abstractions. Yes, I have used both; simple XLinks are present in most of my DTDs requiring cross-referencing, images or indeed any point-to-point semantics, and extended XLinks were a useful and necessary addition to the aftermarket document structures of a major car manufacturer, among other things.

But again, I get ahead of myself. Here's what raised my eyebrows for the first time, this morning:
But that only works if everyone plays, and furthermore the cost of using XLink has to be taken into account. First, a whole new namespace is needed.
This is interesting, to say the least. I thought this was one of the main points of introducing namespaces in the first place, to avoid name collisions.

The basic idea behind namespaces is extremely simple: you use one DTD (well, maybe it's a schema, since DTDs aren't namespace-aware; there's a lot I would like to say on that topic, too, so either this is going to be a very long post or I need to start writing down my ideas for blog posts), but in your instances you need to include content created using other schemas. One solution is to only use unique names, but this is a pipe dream; in reality, there are only so many names you can give, say, a paragraph (p, ptxt, para, ...) or a cross-reference (ref, href, link, ...), without resorting to silliness. Inevitably, your elements and attributes will have the same names as someone else's, and that can be a huge pain. Namespaces are a neat way of getting around this problem, and as an added bonus you'll always get that question, "what does the namespace URL stand for?", from your audience when presenting your work.
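
A contrived sketch of the idea, with both vocabularies invented for the example: two elements sharing a name can live in the same instance because the prefix ties each to its own namespace.

<!-- both namespaces and element names are invented for the example -->
<report xmlns="http://example.com/ns/report"
        xmlns:leg="http://example.com/ns/legal">
  <para>An ordinary paragraph from my own vocabulary.</para>
  <leg:para>A paragraph from somebody else's vocabulary, told apart by its prefix.</leg:para>
</report>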

My point, and the simple question I would like to ask here, is why is it suddenly a bad thing to introduce a namespace for XLink when practically every recommendation, suggestion, and badly written XML configuration file seems to use one these days? Yes, they all come at a cost; among other things, if you actually want to validate that included content from the other namespace, you need to implement something to do the work: you need to validate it against the right schema, which means all kinds of lookup mechanisms and such. But if you can implement one namespace, shouldn't you be able to implement several, especially if the imported namespace provides you with a useful mechanism, say, a standardised linking mechanism?
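
For what it's worth, pulling XLink into an XSD is mostly a matter of importing the namespace and referencing the attributes. A rough sketch, with a made-up element and target namespace, and assuming a schema document for the XLink namespace (here called xlink.xsd) that declares the XLink attributes globally:

<!-- xref and the target namespace are made up; xlink.xsd stands for whatever schema document declares the XLink attributes globally -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xlink="http://www.w3.org/1999/xlink"
           targetNamespace="http://example.com/ns/doc"
           elementFormDefault="qualified">
  <xs:import namespace="http://www.w3.org/1999/xlink"
             schemaLocation="xlink.xsd"/>
  <xs:element name="xref">
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base="xs:string">
          <xs:attribute ref="xlink:type" fixed="simple"/>
          <xs:attribute ref="xlink:href" use="required"/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>
</xs:schema>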

Namespaces aren't my favourite W3C recommendation, but they are what we have. In his blog and whitepaper, Robin points out several bad practices when implementing namespaces and I fully agree with them (perhaps excepting some of the discussion on a "default" namespace for attributes without a prefix), but they are mostly outside the topic at hand, because I fail to see why they'd make XLink an undesired recommendation while still encouraging various others.

Robin continues:
Second, the distinction between href and src requires a second attribute.
To be perfectly honest, I'm not sure what this means. First of all, what, exactly, is the distinction between href and src? According to the XLink recommendation, href "supplies the data that allows an XLink application to find a remote resource," adding that when used, it must be a URI. In simple XLinks, hrefs are all you need; the source and a reference to it are (or rather, can be) the same thing. (Yes, there is some verbosity, since you'll need that namespace declaration and the XLink type, that sort of thing, but if you use XML Schema, you'll be far more verbose than this anyway.)
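
In instance markup, a simple XLink is about as plain as linking gets; a sketch with a made-up figure element:

<!-- figure is a made-up element; the xlink:* attributes are standard -->
<figure xmlns:xlink="http://www.w3.org/1999/xlink"
        xlink:type="simple"
        xlink:href="images/overview.png"
        xlink:title="System overview"/>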

When discussing extended XLinks, though, yes, there is a difference between a "source" and a "reference" to that source (provided I understand the objection correctly). It's one of the really neat things about extended XLink, because it allows us to leave the linking information out of the document instances. We can create complicated, multi-ended linking structures between resources without the resources ever being aware that they are part of a link. The links can instead be described out-of-line, outside the resources, centrally in a linkbase.

To do this well, there needs to be a clear distinction between pointing out link ends and creating link arcs between them. Certainly, it requires more than one attribute, and in the XLink recommendation, it could easily require three (the pointer to the source, the source's label, and the actual link arc).
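
To make that concrete, here's a sketch of a small out-of-line linkbase; the element names are mine, the XLink types, labels, and arcs are the recommendation's, and note that the three documents being linked never mention each other:

<!-- linkbase, link, doc and go are made-up names; type, href, label, from and to are XLink's -->
<linkbase xmlns:xlink="http://www.w3.org/1999/xlink">
  <link xlink:type="extended">
    <doc xlink:type="locator" xlink:href="engine/overview.xml" xlink:label="overview"/>
    <doc xlink:type="locator" xlink:href="engine/service.xml" xlink:label="service"/>
    <doc xlink:type="locator" xlink:href="engine/spares.xml" xlink:label="spares"/>
    <go xlink:type="arc" xlink:from="overview" xlink:to="service"/>
    <go xlink:type="arc" xlink:from="overview" xlink:to="spares"/>
  </link>
</linkbase>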

Is this the only way to do multi-ended links? No, certainly not, but it does provide us with a standardised way, one that a group of people put considerable thought into. It is possible to redo the work and maybe even do it better, but unless you have a lot of time on your hands, why should you? It's a perfectly serviceable recommendation, with far fewer side effects than, say, namespaces on older XML specs, and it does most of the things you'll ever need with links.

(Granted, XLink, just as any post-namespaces spec, will cause havoc for any system that includes badly implemented XML parsers wanting to interpret everything before and including the colon in an element or attribute name as throwaway strings, but that's not an XLink problem; it's a namespaces problem and above all an implementation problem. XML allows colons in QNames; don't use a parser that tries to redefine what was meant, once upon a time.)

Not everyone agreed with the XLink principles and so left them out of the specs that followed, but I have a feeling that what happened was at least partly political (the linking in XHTML comes to mind, with the, um, discussions that ensued), and that the timing could have been better. At the time, implementing XLink could be something of a pain.

An aside: around the time the XLink recommendation came out, I was heavily involved in implementing large-scale extended XLinks in a CMS for a well-known car manufacturer. Extended XLink solved many of our key problems; being able to define multiple relationships between multiple resources in multiple contexts using a central linkbase made, for the first time, actual single-source publishing possible for the company, and they had been using SGML for years.

The system almost wasn't, however, for a very simple reason. The XML editor of choice (not my choice, by the way; I was presented with it as a fact of life) and its accompanying publishing solution could not handle the processing of inline link ends, or indeed any kind of inline link element beyond ID/IDREF pairs for page references. No matter what we did, they would not let us access and process them. This was before XSL-FO was finished or in widespread use, mind, and before most editors (including this one) offered complete APIs for processing the XML.

I won't go into details but the solution was ugly and almost voided the use of extended XLinks. No alternative linking solution would have fared any better, however; the problem was that we were slightly ahead of what was then practical to implement and several of the tools available then just didn't cut it.

Getting back to Robin's blog entry, he also says:
And then there are issues with parts of XLink being useless for (or detrimental to) one's needs, which entails specifying that parts of it should be used but not others, or that on such and such element when one XLink attribute isn't present it defaults to something specific not in the XLink specification, etc.
It's hard to address the specifics here since there are none. I don't have a clue which parts of XLink are useless or detrimental to Robin's work, and can only address his more general complaints.

Most "standards" are like this. There is a basic spec that you need to adapt to, with the bare essentials, and there are additions that you can leave out if you don't need them. XLink makes it easy to implement a minimal linking mechanism while offering a standardised way to expand that mechanism to suit future needs. It also deliberately leaves out the processing model, allowing, for example, for a far more flexible way to define "include" links than XInclude, a linking mechanism that in my mind is inferior in almost every respect to the relevant parts of the XLink spec.

Central here is that with XLink, I can use one linking mechanism for all my linking needs, from cross-references to images to include links, and still be able to define a single processing model for all of them, one that fits my needs. I suspect it would have been very difficult to define anything sufficiently consistent (yet flexible) in the spec itself, so why force one into it?

To me, this is akin to the early criticism DTDs received for lacking data typing. XML Schema added this capacity, resulting in a huge specification with a data typing part that either remained unused or was used for all the wrong reasons. In a document-centric world, data typing is mostly unnecessary, which is a good reason why it wasn't included in DTDs. (In the few cases where data typing was useful, it was easy enough to add an attribute for the element(s) in question, containing either a regular expression or some other suitable content definition, and add the necessary processing to the applications as needed. There was no need to write a novel for the data types no-one needed, anyway.)
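
A sketch of the kind of thing I mean, with made-up names; the fixed attribute carries a regular expression, and it's up to the application (a stylesheet, a Schematron rule, some custom code) to check the content against it:

<!-- partno and pattern are made up; the application, not the parser, checks the content against the expression -->
<!ELEMENT partno (#PCDATA)>
<!ATTLIST partno
  pattern CDATA #FIXED "[A-Z]{2}-[0-9]{6}">

<!-- in the instance, a validating parser defaults the pattern attribute: -->
<partno>AB-123456</partno>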

As you might guess, my point is that not including the processing model in the spec is a strength, not a weakness, because a sufficiently complete, general-purpose, processing model for a complete linking mechanism is most likely too complex to do well. It would only serve to create conflicting needs and make the spec less useful. Why not leave it to implementation?

Which brings me to Robin's next point:
Core XML specification produced by the W3C such as XSLT or XML Schema don't use it even though they have linking elements.
I don't pretend to know why this is; I have an idea of why XHTML didn't, and in my mind it had very little to do with any technical merits or lack of same, and a lot to do with politics and differing factions within the W3C. Could it be the same with XML Schema and XSLT? It might; I know that XLink could have addressed the linking needs of both specs. Certainly, XML Schema is "costly" enough not to be bothered by an extra namespace among those already included. Maybe someone close to the working groups would like to share, but what's the point now?

In Robin's blog, the above statement leads to:
I don't believe that anyone implements much in the way of generic link processing.
I've implemented a lot in this respect, starting from about the time XML became an official spec. XLink has proved to be very useful, allowing me to benefit from my earlier work while still being flexible enough to encourage some very differing link implementations.

Granted, most of my work has been document-centric, with clients ranging from very small companies to the armed forces of my native country, but in all of these, XLink has proven to be sufficiently useful and flexible. A friend of mine, Henrik Mårtensson, now a business management guru, wrote a basic XLink implementation more than a decade ago (yes, long before XLink was a finished spec; we were both involved in implementing XLink in various places back then), with everything that was required to create useful links, be they cross-references, pointers to images, or something else. This implementation is still in use today, and while I and others have changed a lot of the stuff surrounding it, the core and the basic model remain unchanged. My presentation at XML Prague 2009, right after Robin's, touched on some of this work, and had my computer been healthier, he would have witnessed at least one XLink implementation.

Which (sort of) leads to Robin's last point:
Reuse of other languages should be done where needed, and when the cost does not exceed that of reinvention.
I agree with the basic notion, obviously, but not with his conclusions. XLink, to me, is exactly the kind of semantics that is far easier to reuse than to reinvent. Yes, it is possible to simply write "href CDATA #IMPLIED" (or the schema equivalent) and be done with it, but anything more complex than that will benefit from standardisation, especially if you ever envision having to do it again. XLink is a terrific option when it comes to anything having to do with linking.

Thursday 3 December 2009

Korg TR61

I'm sure you'll all be thrilled to know that I bought a Korg TR61 synthesizer and "musical workstation" yesterday, replacing my old and trusty Kawai K4. I've slept about 4 hours, choosing to spend most of the night playing instead.

Friday 16 October 2009

KMess...

...is a Linux-based IM client for Windows Live. It's a very nice little app that allows me to connect to Windows Live, formerly known as MSN, without having to use Windows. Or allowed, rather, because it's broken; right now it crashes every time I try to start it.

Tuesday 13 October 2009

Oxygen 11 and an Old Bug

Oxygen 11 is out and I just installed it on my laptop. It's still the best XML IDE there is, by far, but I'm getting a bit annoyed by a bug that has persisted for a year now.

Long story short: the tab bar that keeps track of the open documents and indicates the current document is buggy. It will not correctly focus on the current document if that document is too far right to be visible without scrolling and a previous document, also too far right to be visible without scrolling, has just been closed. The bar and the actual visible document do not sync, so while the document itself is open and editable, its tab is missing. Very annoying, to say the least, if you are working with a highly modularised stylesheet and need to have more than a handful of documents open at the same time. Apparently the bug is in a third-party component, and until that component is updated or replaced, the bug will persist.

I can live with that bug, though, and the new version seems to have enough fun stuff to keep me busy for a while. XProc support, for example, is a welcome addition. I've been wanting to try it ever since Norman Walsh presented it at XML Prague, and now there should be no further excuses.

Tuesday 6 October 2009

Amarok Woes

Amarok, the best music player there is for Linux or indeed any platform, has been redesigned from the ground up. The old version 1.4, while feature-rich, was apparently not up to the standards and visions of the app's makers, and now there is a 2.2 out.

Amarok 2.2 is completely different in terms of appearance (and probably even more internally), but right now I'm having lots of trouble getting it to work the way 1.4 did. Here's what's bugging me the most:
  • I can't seem to get the playlists to sort my music according to albums without every album on the list being listed once for every song that album contains. VERY frustrating.
  • Version 2.2 reintroduced the capability to play audio CDs. However, this feature doesn't seem to work on my laptop: the CD is listed but just won't play (even Kscd plays on the laptop, and that's saying a lot). On my desktop, the CDs will play, though.
  • CD track lengths are always shown as 0:00. Annoying.
I'm sure these will be sorted out in time, but right now, when KDE 4 is still buggy and unreliable, and Pulseaudio still mutes sound on every boot, there is enough on my plate already to keep me annoyed.

Tuesday 22 September 2009

XMetaL and wine 1.1.28

wine 1.1.28 (the current wine-unstable package in Debian) breaks my old XMetaL 3.1 installation. The application starts but complains about Gecko not being installed, helpfully offers to install it... and then nothing. If I stop the installation, XMetaL starts but crashes soon afterwards, probably because XMetaL needs IE components, not Gecko.

I'm sure it's fixable, somehow, but I'm not sure I'll bother. Oxygen runs without problems and all I need is a good enough XML editor.

Saturday 5 September 2009

Corena S1000D User Forum

Earlier this week, I spent two days attending the Corena S1000D User Forum in Kongsberg, Norway. As product-specific user forums go, this one was quite informative and I remain impressed with the Corena product line. They certainly offer a good mix of S1000D functionality, with clear promises of getting better in many areas of the spec (my current favourite is the applicability editor), and it seems to me that few products out there can match theirs.

And of course, Svante Ericsson, a leading authority on the S1000D standard, gave a talk that in itself made the drive to Kongsberg worth my while. Svante is a former colleague of mine from my days at Information & Media (now Sörman) and knows better than most out there what he's talking about. He's also a lot of fun to listen to.

Monday 24 August 2009

Integrated Logistic Support

I'm spending this week in Malmö in southern Sweden, participating in a course on Integrated Logistics Support (ILS). ILS, basically, is about planning for and supporting a product's whole lifecycle, from its early planning stages onwards to deployment (called the "employment plan", a phrase that to me meant something very different until today), the product's useful life, including maintenance and support, and the product's eventual disposal. As always, the idea is to create a better and more cost-effective product, meaning more money to you. See, what's really interesting is how the results of an ILS analysis, called LSA (don't you just love acronyms?), can be fed into the design of the product itself and how a proper analysis can help significantly reduce cost.

ILS is traditionally about military products, preferably state-of-the-art helicopters or perhaps a large frigate (though an ordinary rifle can benefit, too), and so is the course I'm attending, but it's easy to see how ILS can be put to good use elsewhere. It's fascinating stuff. Obviously it's a bit early for me to comment on anything factually relevant, being a complete newbie in all things ILS, but I can see a relevance to document management and the systems I help build when I'm not in Malmö.

As I said, fascinating stuff.

Friday 21 August 2009

More on KDE 4.3

I like KDE 4.3. Let's make that perfectly clear, because after reading this, you might get the wrong idea.

KDE 4.3 has a far more finished look and feel than 4.2. Things seem to be better integrated and the crashes are fewer, with fewer causes. I've finally got the hang of Dolphin (that tricky address bar, for one thing), and I even got Kscd to work.

But.

There are many annoyances as well:
  • KMix mutes the master volume on every startup (I only have to unmute it and turn the volume up, but this is very annoying to do on every startup).
  • Okular won't accept (or remember) landscape print settings (Document Viewer from Gnome, which uses the same printer drivers as far as I can tell, has no problems).
  • The eye candy in the Desktop settings usually crashes parts of the KDE environment, with the taskbar going first, if you try more than one or two settings.
  • The PulseAudio/Phonon combo is very unreliable. With GStreamer, it won't output sound, but with the Xine backend, it usually does.
  • Amarok no longer knows how to play CDs. I tried to use Rhythmbox instead, but it chops up CD audio into 15-second bursts with a 200 ms pause between each one, and for the life of me, I can't figure out why.
Some of the eye candy issues could easily be related to the (very) buggy Intel Xorg driver, but I still think that KDE shouldn't crash as a result.

On the whole, I like the environment, though. ;)

Monday 10 August 2009

KDE 4.3...

...is a big step forward. Finally things are starting to actually work. Now, if they could only reimplement the CD player functionality in Amarok I'd be even happier. (And to be honest, I'm not sure I like Amarok's new look.)

Monday 29 June 2009

The Sopranos Ending

I watched the final episode of The Sopranos last night, for the fourth or fifth time. Now, I know some people say Tony died in the end of that episode, but I just don't see it. Yes, I know how one uses POV shots (I've made my share of films), and yes, I know the Members Only jacket guy looks shady, and yes, I know that there is a not-so-subtle reference to the first Godfather film, but I just don't see it.

I think the ending was made ambiguous on purpose. We can all interpret it pretty much how we want and go on with our lives, and should David Chase decide to revive the Sopranos as a feature film, we can all go and see it, and make Mr Chase an even richer man. And go on discussing that.

Tuesday 9 June 2009

Inline Tagging

Here's a trivial little piece of inline tagging that is nagging me:

<emph><super>2</super></emph>

It's a classic chicken-or-egg problem, really. The tagging is commonplace enough; it's trivial, crude, even, and represents an emphasised and superscripted number, but should the number be emphasised first and then superscripted, or should it be the other way around, like this:

<super><emph>2</emph></super>

I know, I really shouldn't bother, but it is precisely this kind of nested inline tagging that can completely stop me in my tracks. In a wider context, the question is: is the order of nesting important? That is, semantically speaking, is there a difference? Am I saying that an emphasised (in other words, important) number happens to be superscripted, or that a superscripted number happens to be emphasised (important)?

More often than not, this type of inline tagging is about formatting, not semantics, so it probably doesn't matter. Also, emphasis as an inline tag is dodgy at best, because it says that the highlighted text is important but fails to mention why, and the "why" is what matters if we want semantics, if we need intelligence. It's the same with superscript and subscript elements, and quite a few other common inline elements that are about how things should be presented rather than structured.

But then, of course, formatting is useful, too, because it can visualise abstract concepts.

Chicken or egg, folks? Me, I don't know. I only wrote this because I needed a break from designing an export format from a product database, a format where I need to visualise data.

Friday 29 May 2009

KDE 4.2

Some time ago, I made the upgrade to KDE 4.2 from 3.5. It was made available in Debian's Unstable branch so I figured "why not?"

Why not, indeed?

Well, for starters, I can't figure out how to make it react to audio CDs in the CD drive. KDE 3.5 offered a dialog where I could choose what to do with the damned thing. With this one, it's beyond me; nothing happens. I've toyed around in the Settings, but to no avail. I've googled around. I can't make it work.

Just now, I received an email with an MS Word attachment, a .doc file. KMail offered Kate, a bloody text editor, as the default choice. The thing is, not too long ago KMail knew that OpenOffice works for anything with that suffix, and furthermore, KDE knows, from what I can see in the File Associations settings, that OpenOffice is the right application to use. But it doesn't. It won't.

The refurbished Kicker menu gets stuck on the desktop after I click it, until I click on it somewhere near the Search edit box. On my laptop, the task bar (or whatever they want to call it these days) never remembers how wide it should be if I use the laptop on an external screen (with a different resolution) in addition to the built-in one. For some reason, something switched the sound settings on the Audigy card to the Digital output after I upgraded to a 2.6.29 kernel, without telling me, so I went through hell to get my sound back before I discovered the switch (which, by the way, is not available on every mixer there is) that needed a click.

Or all those settings that used to require a root password to change how KDM behaves. Or whatever. Lots of things have gone wrong with the KDE upgrade and I don't know how to fix them, not without some surfing on the net, and I can't be bothered. I think of myself as a power user; I have used computers in various forms since the late seventies and Unix in a number of incarnations through the years, but surely it shouldn't be like this?

And no, I don't want to switch to Gnome because I hate it; I think it treats me like an idiot. But maybe I need to? What say you? I don't want to spend all my free time on the bloody Internets, trying to find the answers to each and every little problem there is.

Sunday 17 May 2009

Put XSD 1.1 On Hold

In his latest blog entry at O'Reilly, Rick Jelliffe asks the W3C to please put XSD 1.1 on hold and address the deeper underlying issues that make schemas practically useless.

I'd like to go one step further and encourage the schema working group to consider Relax NG compact syntax instead, as a more sensible and compact alternative to XSDs. It does everything we need from a schema language, without being impenetrable or impossibly verbose. If the W3C actively endorsed Relax NG, maybe we'd get the software manufacturers to support it on a wider scale. Yes, I know, Oxygen already supports it, but there are plenty of manufacturers out there that need to follow suit.

Please.

Wednesday 13 May 2009

Jean Michel Jarre

I went to see Jean Michel Jarre perform in concert earlier this week. I've been a fan since the 70s when Oxygène came out, but I never thought I'd experience him live. Göteborg's too small a city for the kind of thing he is famed for, painting the Houston skyline with lasers or transforming London's Docklands into a gigantic concert venue, so I was pleasantly surprised when he announced his "In-Door" tour, a series of performances indoors, on a fairly small scale.

The Scandinavium is not what I'd call small (U2 paid a visit 18 years ago, and I listened to Paul McCartney dust off his Beatles repertoire there close to 20 years ago), but I still thought it wouldn't be enough for Jean Michel Jarre.

Boy, was I wrong. From the laser harp (you have to see and hear it; there's no way I can do it justice here) to the analog synthesizers, from Oxygène to Rendez-vous... it was all perfect (well, actually, he slipped while playing that laser harp, just once, but it happened) and I really only wrote this to gloat.

Monday 4 May 2009

Slow Keys

The KDE 4.2 desktop on my Debian GNU/Linux laptop install (Sid, the unstable flavour) practically died the other day, after a dist-upgrade. Well, actually, the keyboard stopped responding while the touchpad continued working perfectly. A first Google search (sloppily performed, I'll admit) hinted at changes in Xorg 1.6, but while I found a few hints, nothing I did with the xorg.conf file could revive the keyboard.

After a few hours of experimenting and general panic, I stumbled on an older post on a KDE message board. The author had managed to turn on the Slow Keys feature in KDE 4, a set of functions designed for the disabled, which resulted in a very slow keyboard. I checked my settings and yes, the feature had somehow been turned on.

Now, relieved as I was, I'm also pretty sure that I have not been anywhere near that checkbox. How is this possible?

Monday 27 April 2009

Out of Print

I like O'Reilly's books. They're well-written, well-researched, and a lot of fun to read. They are also very cool, because O'Reilly, probably better than anyone else in the IT publishing business, know how to market their books (think The Camel Book if you don't believe me, and resign yourself to "I need to find another blog to read" if you don't get this particular geek reference). I would love to write for them some day.

In the meantime, I frequently surf over to their site, reading the blogs, browsing the catalogue, and planning my next buy. And sometimes I just read stuff here and there. Today, I browsed the list of out-of-print books and found an XML title listed with a publication date of January 1900.
I LOL'd, as they say. Yes, a book from January 1900 is probably out of print by now, but I had no idea that XML was that old.

Friday 17 April 2009

TomTom and Microsoft

Apparently TomTom and Microsoft have promised not to sue each other for the next five years. The settlement came soon after TomTom countersued and joined forces with the open source movement.

I can't decide if I like this or not.

Tuesday 7 April 2009

Anticlimax

My persistent Debian-related WiFi problems finally got solved. The other day, I sat at a restaurant, working away, and decided to try the WiFi. The laptop connected, immediately, with no problems whatsoever.

This got me thinking.

Apparently, Debian and the Intel drivers couldn't be to blame. It had to be a router issue. So I upgraded my Netgear router firmware (I have both a gateway and a repeater, so there were two machines to consider), and voilà! I had a working WiFi connection at home.

Sunday 29 March 2009

Not That Easy - WiFi Woes, Part Two

Turns out I was too optimistic. wicd and the little adjusting I did do not deliver a working Internet connection, not every time. Today, no matter what I tried, I couldn't connect (beyond the router, to which I can always connect) until I changed the laptop IP from dynamic to static, and the DHCP client from automatic to dhclient. All of a sudden, I was back surfing!

Only, just now, when I booted up the laptop again, I couldn't connect beyond the router (which connected fast enough), not until I changed the IP back to dynamic... This is seriously weird and I can't explain it. I wonder if it's got something to do with my router, an instruction that is lost on the way, DNS services that aren't updated... something?!?

WiFi on Linux is NOT easy.

Thursday 26 March 2009

WiFi Woes

Just a little note for posterity:

My Lenovo T61 laptop that now runs Debian has an Intel 3945 ABG wireless network adapter. While the Debian (Lenny) installation and the subsequent upgrade to Sid went flawlessly, with the WiFi card discovered and listed, it wouldn't connect wirelessly to my Netgear router (actually a repeater, fed from a Netgear ADSL modem/gateway). It connected to the router itself, I was able to ping the router and connect to it using a browser, but everything beyond that was inaccessible. I tried various interfaces stanzas, reconfigured TCP/IP, and tested all kinds of tricks, without any success.

Then I did some serious googling. A lot of people have had this problem and many probably still do. Also, the problem was pretty much the same regardless of your Linux flavour. Finally, a Ubuntu forum suggested removing the network-manager package and installing wicd in its place. Said and done (luckily I had upgraded to Debian Sid; the package is not available in Lenny). I had to reboot but could still not connect.

As a last resort I tried explicitly pointing out my ISP's DNS server IP addresses in the wicd configuration. That did it and I'm now writing this blog on a WiFi connection.

Sometimes it's important to document these things. Maybe, just maybe, it will help someone else.

Linux on the Laptop

Following the unfortunate events surrounding my presentation at XML Prague (a fabulous event, by the way; you should have been there), I now run Debian GNU/Linux as my primary OS on my work laptop. There is a Windows XP partition, so far, but my plan is to use Xen and virtualisation, and run the Windows operating systems as Xen domains.

The laptop installation that failed contained my first attempts at virtualisation, by the way. Microsoft's Virtual PC ran Windows 2003 Server and Cassis, the Document Management System that I'm part of developing at Condesign Operations Support, and was connected to my XP installation through a loopback adapter. In theory, this is a very nice setup, since it is possible to simply run a complete image of an OS and the server setup as part of a demonstration and then reset it to its pre-demo state for the next show. In practice, however, Virtual PC does not deliver. The hardware it emulates is very limited and everything it does is rather slow. It was enough to whet my appetite, however (together with my friend Niklas' obsession with Xen), so I decided to do it right, now that I had to wipe the old drive anyway.

My Debian installation does not yet run a Xen kernel, but I'll keep you posted.

Saturday 21 March 2009

I Hate Windows

This blog is about blame. Specifically, it's about Windows XP. If you're into the Microsoft-friendly thing, quit reading.

I held my presentation at XML Prague today. It seemed to go reasonably well until I was about to switch to a demo. I was in Powerpoint and meant to show a little something on the actual application, so I switched to Internet Explorer, or so I thought.

NOTHING happened. Nothing. Zip. Zilch. Zero. Nothing whatsoever.

Nothing. Can you imagine the terror? WTF?

I kept on speaking, realising that my presentation wasn't up to par. Another demo opportunity came up, with similar results.

Nothing.

And that is what happened. Nothing. You explain this, because I can't.

Thursday 19 March 2009

Two Days Left

Well, a bit more than that, actually. See you at XML Prague.

Thursday 12 March 2009

XML Prague Nerves

XML Prague is only a week away, so I've been working on my slides. I've done stuff on resource naming (links identifying a resource by name rather than location), markup, linking systems (actually only one, XLink), publishing, "document trees" and more, but it all feels deceptively simple to me. Right now, the whitepaper's most important (and only?) message, namely to use an abstraction layer such as URNs to always identify whatever resources you are using, seems trivial, even. The question, therefore, is whether I really am that brilliant at what I do (I'd like to think so, obviously) or if the whole thing is so simple that it doesn't need pointing out, in which case I'm in trouble and people will fall asleep (or worse) at my presentation.
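
For the record, the premise boils down to something like this sketch (the URN and the element name are invented for the example): link to a name and let a catalogue or lookup service resolve it to a location at processing time, instead of hard-wiring paths into the links.

<!-- urn:x-example:... and xref are made up; a catalogue resolves the name to a location at processing time -->
<xref xmlns:xlink="http://www.w3.org/1999/xlink"
      xlink:type="simple"
      xlink:href="urn:x-example:pub:safety-instructions"/>

<!-- rather than hard-wiring a location -->
<xref xmlns:xlink="http://www.w3.org/1999/xlink"
      xlink:type="simple"
      xlink:href="../../safety/v2/instructions.xml"/>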

As I said, I do include some other stuff in the presentation, but in my mind it's all a result of the basic premise.

Or maybe it's just me being nervous. We'll see.

Friday 6 March 2009

XML Is Progress...

...but I miss SGML. I miss the time when any project worth its salt took a year or more, and even then would have to be redone from scratch before it produced something that actually worked.

These days, companies expect these things to work right out of the box. Just install it, give it a week's training and another week of adjustments, and you're all set. I have competitors that market their product in this way, claiming that that's all it takes. Funny, that, because what they are actually saying is that the handling of a customer's most vital asset, their information, should require no more than a couple of weeks to be up and running, while the product itself could take years to develop. Is it just me, or is there something wrong with this concept?

In the olden times, things were never that easy. There was no DOM and there was no XSLT, and certainly no native, built-in SGML APIs to ease a developer's plight. No fast track, no overnight results. Outputting structured documents on paper was not an easy task so things were allowed to take time. There was time to do a proper analysis, and time to perfect it while others were writing code.

Imagine having that time now, imagine having a year to do things properly. Oh, yes, I miss SGML.

Digital Landfill

Digital Landfill is a blog by John Mancini, President of AIIM International Inc, "an industry association that provides document imaging, document management, and knowledge management solutions". It's very insightful, often very funny, and well worth reading.

I spent half an hour reading it instead of working, but don't tell my boss, all right?

Thursday 5 March 2009

Another Take on TomTom and Linux

Had some time to kill today, so I surfed the net, reading about my new GPS. Turns out that TomTom uses a 2.6 Linux kernel to run their hardware. It's modified, of course, but it's still a Linux kernel.

Why is it that there isn't a Linux version of the software required to connect it to a computer, then? Why is Linux acceptable in the GPS but not outside it?

Friday 27 February 2009

GPS

I'm sure you'll all be thrilled to know that my new GPS is working perfectly. It found me a shortcut yesterday. Shorter AND better. Well, I'm thrilled at least.

But Tomtom's strategy of selling map update services borders on the indecent. The first thing I did when I bought the thing was to update the maps. There's a "latest map" guarantee, see, meaning that within 30 days of purchase, I'm entitled to map updates. So today I logged on to Tomtom and it hinted that my map was old (God forbid) and that for only €XX, I could have the latest.

If I hadn't read the manual, I would have thought that there was actually an update available and that it would cost me money. As things stand, I do have the latest map and this was simply a sales pitch dressed up as a map update. It looked like the update notice I'd seen earlier, so I thought I would have to pay for this one. It made me angry and disappointed, and very wary of anything they say.

I very much dislike software with built-in sales pitches. I don't mind buying services when I need them, but I do think that I shouldn't be fooled into buying them.

Friday 20 February 2009

TomTom 730T and Linux

Bought a TomTom 730T GPS device today. Very cool. Lots of well-considered features, great design, just what I wanted. I'm really looking forward to driving around for a bit tomorrow.

But then I tried to install the TomTom Home 2 software on my Debian (Sid) Linux box, using wine. It didn't work. I spent an hour trying to get around the error message (a rather cryptic message involving some file from Visual C++) but couldn't make it work.

Yes, TomTom, I do use a Windows box, too, at work, but I was hoping for you to return the favour (I bought your product, after all, didn't I?) by supplying me with software for the operating system of my choice. It shouldn't be that hard to do; after all, you did come around to recompile your software for the Mac.

Aren't more people using Linux these days than Mac?

Tuesday 10 February 2009

Dolby CP100


I got hold of a Dolby CP100 cinema sound processor. It's Dolby's oldest cinema processor, introduced nearly 33 years ago. I don't expect to actually use it a lot, but I'm very tempted to install it in the sound rack at the cinema, just to freak out visiting filmmakers.

In any case, I think it's the most beautiful-looking processor Dolby has ever made. ::sigh::

Monday 9 February 2009

Do You Still Believe MMR Vaccines Cause Autism?

http://www.timesonline.co.uk/tol/life_and_style/health/article5683671.ece

Looks like Andrew Wakefield, the doctor who claimed MMR vaccines were to blame for the so-called "autism epidemic", manipulated his research data in order to get the results he wanted.

Not that I believe for one second that the MMR scare is now history, with nutters like watsername married to Jim Carrey running around crying wolf, but it still feels like a victory to me.

XML Prague

I will present my whitepaper, Practical Reuse in XML, at XML Prague on March 21 (in Prague, Czech Republic, in case you didn't guess that). XML Prague is a small conference but the speaker list is very impressive: Norman Walsh, Ken Holman, and Jeni Tennison will attend, among many others. I'm really looking forward to this one.