Linux To The Rescue.

Posts singing the praises of Linux are a staple of mine, but I can’t help feeling that this one is worth it.

Here at work I was presented with an old, battered laptop whose owner had schlepped around the world with it. It was refusing to start, and the request was to help extract important data before it became inoperable forever.

In the early stages I got lucky and managed to boot the computer into Windows. The success was short-lived, however: the system froze, displaying a classic blue screen of death.

The next approach was to try to clone the data from the system using CloneZilla. I created a bootable USB stick, restarted the machine and began cloning the partitions. Unfortunately the two important partitions kept throwing errors on particular sectors, stopping the cloning process in its tracks. And as the system discs all used NTFS I wasn’t able to attempt any kind of on-the-fly repairs.
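For what it’s worth, GNU ddrescue is built for exactly this kind of failing disc: it copies the easily readable sectors first and only comes back to the bad ones later. A hedged sketch, with placeholder device and destination names rather than the real ones from this machine:

```shell
# First pass: copy everything readable, skipping problem areas (-n),
# recording progress in a map file so the copy can be resumed.
ddrescue -n /dev/sda2 /mnt/rescue/windows.img /mnt/rescue/windows.map

# Second pass: go back and retry the bad sectors a few times (-r3).
ddrescue -r3 /dev/sda2 /mnt/rescue/windows.img /mnt/rescue/windows.map
```

The map file is the key feature: the process can be interrupted and restarted without losing the sectors already recovered.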

In the end, I got a live CD version of a low-resource Ubuntu variant (Lubuntu, which uses the LXDE desktop environment) and booted the system with that. The boot process was seamless and the desktop started without a problem. I then used the command line and some Linux know-how to mount the Windows hard drive and an external hard drive provided by the user. Then it was just a matter of finding the data that the user wanted saved and copying it using the normal file manager.
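For the record, the mounting step went roughly like this. The device names, and the path to the user’s files, are placeholder assumptions here: `lsblk -f` (or `sudo fdisk -l`) shows the real ones on any given machine.

```shell
# Identify the partitions first, e.g. with: lsblk -f
sudo mkdir -p /mnt/windows /mnt/external

# ntfs-3g handles NTFS; mounting read-only protects the damaged disc.
sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt/windows

# The user's external drive, mounted read-write to receive the data.
sudo mount /dev/sdb1 /mnt/external

# Copy the wanted files across, preserving attributes and timestamps.
sudo cp -a "/mnt/windows/Users/owner/Documents" /mnt/external/
```

From there the same copy can just as easily be done in the graphical file manager, which is what I actually used.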

It took a while. The USB interface on the machine is slow, and reading data from the hard drive seemed sluggish, but it worked well, and all of the user’s data is now safe.

It is worth noting that when I say ‘Linux know-how’ it makes the job sound simpler than it was. I realised this when the user asked whether I could tell her how to mount the discs if she ever needed to. I started to explain the process, then realised (prompted by the way her eyes were glazing over) that while it is obvious to a long-term Linux user, it might as well be in a foreign language to a new one. Worse than that, when I tried some Google-fu I wasn’t exactly replete with good explanations that way either. I’m not sure if this is a failing of my Google-fu or a gap in the otherwise excellent online documentation.


Synergy – Control For More Than One Computer At A Time.

I am not, by habit or practice, a tidy person. This is strongly reflected in my work desk which is awash with stacks of papers, forms and diaries. Unfortunately, I am also someone who has two computers on my desk (see many previous comments on the subject).

The computer I use most is a 27″ screen iMac. The screen is lovely, with excellent image quality.

However, I don’t particularly like Apple keyboards, so I don’t use one. Instead I have a wireless Logitech keyboard (a K230, for the record). My previous keyboard didn’t have a number pad, so I added a separate wireless number pad (also Logitech, an N305). I’m also not that keen on Apple mice, because I need at least two buttons and the direction of scrolling is all wrong, so I have a trackball (an M570), which I like because it doesn’t need lots of desk space.

The brilliant thing about the Logitech kit is that it all links using Logitech’s ‘Unifying’ wireless system, and, importantly, each Unifying USB receiver can support up to six devices. That means the keyboard, number pad and trackball can all connect through a single receiver, and I don’t lose a heap of precious USB ports.

I am very happy with this setup and can type quickly, mouse around accurately and enter data in spreadsheets efficiently.

Then there is my other computer. Its keyboard is stiff and not particularly pleasant to use, and the mouse is an old Apple mouse (pre touch surface, even pre the rollable nipple thing) where the whole case of the mouse is the button. It’s horrible.

So how do I manage this setup, with its challenges of limited desk space, preferred hardware and so on? Enter Synergy, a simple system for controlling multiple computers from a single keyboard and mouse.

Setting up Synergy appears to be easiest when you are sharing multiple Windows computers, as the Windows version comes with a handy app in which you can name the computers and define where their monitors lie in relation to one another. Users of other systems need to create and edit a text file to define these relationships. I chose to make the iMac the server machine and the Windows machine a client. Following the instructions was simple enough, and after a few connection issues caused by misnaming the two systems I was suddenly up and running.
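The text file itself is short. A hypothetical layout with the iMac on the left and the Windows machine on the right might read like this (the screen names are placeholders, and must match the names each machine reports):

```
section: screens
    imac:
    winbox:
end

section: links
    imac:
        right = winbox
    winbox:
        left = imac
end
```

The `links` section is where the misnaming bit me: both sides of each link need to be declared, or the mouse gets stuck on one screen.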

Starting the Synergy server on my Linux box presents the same issue I have with setting the keyboard, activating the screensaver and so on, so initially I was tempted to append the server launch to the script that sets those things up. However, the only place I’m going to use Synergy is at work, so it doesn’t make sense to add that overhead to all my Linux systems. Instead I created a separate script which launches the Synergy server, and I run it after my initial login (and after the keyboard-setting script).
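The script itself is tiny. Something like this, where the `~/bin` location, the config path and the server binary name (`synergys`, as shipped with Synergy 1.x) are all assumptions:

```shell
# Create a small launcher in ~/bin that starts the Synergy server.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/start-synergy.sh" <<'EOF'
#!/bin/sh
# Start the Synergy server using the shared screen-layout file.
exec synergys --config "$HOME/.synergy.conf"
EOF
chmod +x "$HOME/bin/start-synergy.sh"
```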

Using Synergy is simply a matter of mousing over to the screen edge which is defined as the point at which the two computers meet. For reasons of understandability this should be the edge of the screens where the monitors abut. Pause for a moment and the mouse will flip over to the other screen, and you are in control of the other computer.

One unforeseen advantage has been that the keyboard layout on the server computer (that is, the iMac) is the layout used on the other computer. As I have written elsewhere, I use a non-Qwerty keyboard layout. In Linux I can choose a Dvorak layout with UK punctuation (a £ sign, the @ sign on the middle row just by the return key, and so on). I also have the Caps Lock key and the left-hand Ctrl key swapped, so that I have a Ctrl key on the home row of the keyboard. Replicating the finer points of this in Windows is difficult, as the standard Dvorak options are US English, Dvorak left-handed and Dvorak right-handed. Customisation is possible, but it requires installing extra programs and understanding an arcane piece of software.
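For the curious, the Linux side of this is a single command. This assumes an X session; the layout, variant and option names come from the standard XKB rules:

```shell
# UK ("gb") Dvorak variant, with Caps Lock and Left Ctrl swapped.
setxkbmap -layout gb -variant dvorak -option ctrl:swapcaps
```

Running it from a login script is what makes the setup survive reboots.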

However, using Synergy with Linux as the server fixes this as the Linux keyboard layout is carried over to the Windows computer as I flip over. Bonus!


Linux vs. Windows – Another Linux Success Story.

When I was purchasing my home laptop, I chose a Dell. One of the reasons that I liked the Dell store was the way that I could choose a system and then tweak the requirements as needs be.

As I was running through the options, things like processor, memory and disc size, I remember deliberately choosing not to have a webcam at all.

The reasoning was simple. I was planning to use Linux at least part time on this new computer, and add-ons like webcams were, I thought, difficult to use, or poorly supported. If I didn’t bother having them then I wouldn’t need to fiddle to get them working.

Now, spool forward three or four years. My daughter got a book for Christmas which boasts ‘Augmented Reality’, requiring a simple program install and a webcam. So I borrowed a webcam from work (we have it for those occasions when interviewers want to Skype) and took it home.

Obviously the ‘AR’ software only runs on Windows or Mac, so we booted the laptop into Windows, plugged in the webcam and waited while Windows did its thing.

It is worth noting that Windows 7 did an excellent job of detecting the device and installing the drivers, but it took an inordinately long time to do so (more than 5 minutes, less than 10). Once it was done, though, it worked very well.

When I got to work this morning I decided to try an experiment by plugging the webcam into my desktop machine running Linux (it’s a Mac, but it dual-boots into Linux because, well, I like it like that!).

I plugged it in, then started up ‘Cheese’, the webcam program that lets you add weird and wonderful effects to your images. I then selected ‘Preferences’ and was able to switch between the computer’s built-in webcam and the USB webcam straight away, with no apparent need for drivers.

Now I have a horrible feeling I will need to buy a webcam for home…

2013 Book Report 20: Hello World! Computer Programming for Kids and Other Beginners – Warren Sande

I read this book in a combination of the paperback and the Kindle version as purchase of one brings with it the option of downloading the other.

Introduction to Programming.

I purchased this book, at least in part, as I am interested in making sure that as time goes on my daughter begins to learn more about computers than just the fact that it is possible to watch YouTube videos on them.  While this is a project that I feel strongly about, I am being careful to not push any agenda.

The fact that such a book is available is an indication of the times that we live in. I grew up in the ZX Spectrum era, learning BASIC in lessons at school. But a ZX Spectrum was a simple machine, with limited capabilities. Even the cheapest, simplest modern PC eclipses the original home computers in what it can do. But I think it is important to maintain a perspective on the functionality of modern computers. Programs now are huge projects, often written by an army of coders, but computers are still machines that are controlled by the programs that people write for them. Having even simple programming skills therefore opens up a plethora of possibilities, helping to shape thought processes, teach critical thinking and prepare children for a technologically complex future. In this the book has similar aims to the Raspberry Pi: it wants users of computers to see that they don’t have to be passive; they can interact, and they can make the computer do what they want it to.

These, then, are the aims of the book, which is the product of a father-and-son team in which the father is an experienced programmer and the son is a beginner intent on wresting control of the machine. This central conceit works well to make the book a good source of information, but it does sometimes lead to cutesy writing which is less than ideal.

To provide a suitable springboard the book looks at programming using the Python language. This is a popular programming language for those looking to learn, and it is easy to see why. Programs written in Python tend to be easy to understand, reading like a stilted English language description of what you are trying to achieve.
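To illustrate the point (my example, not one taken from the book), even a complete beginner can usually guess what this does:

```python
# Python reads almost like an English description of the task.
def average(numbers):
    """Return the mean of a list of numbers."""
    return sum(numbers) / len(numbers)

scores = [7, 9, 10]
print("The average score is", average(scores))
```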

The only point I would raise about the wisdom of choosing Python is that the language is in something of a state of flux. Most of the code currently ‘out there’ is Python 2, but Python 3 is growing in popularity, and there are some quite major changes between the two versions, apparently intended to force a reappraisal of parts of the code base. The book does address these issues and even highlights where they may cause problems, but it seems an awkward thing for a beginner to have to deal with.
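Two of the headline differences, shown as they behave under Python 3 (again my own illustrations, not the book’s):

```python
# 1. print is a function in Python 3; in Python 2 it was a statement
#    (print "hello"), so old code simply fails to run under Python 3.
print("hello")

# 2. Dividing two integers now gives a float; Python 2 truncated,
#    so 7 / 2 was 3. Floor division (//) keeps the old behaviour.
print(7 / 2)    # 3.5
print(7 // 2)   # 3
```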

And does it work? Well I haven’t tried it with my daughter yet. She is only 7 years old, and I think it is probably a little early to expect programming to fire her imagination. And when she is ready to make her first foray into coding I’ll start with Scratch, before the rather more abstract Python. But this book will be one I reach for when we are ready to make that transition.

Just One of The Reasons Windows is So Dangerous.

I was reminded recently of one of the more obscure reasons that Windows can so often be a dangerous operating system in a world where the bad guys are so numerous.

The latest reminder came from a number of colleagues who received an email claiming to be from Royal Mail, regarding a lost or misdirected parcel. The email carried a zipped attachment, and unzipping it produced a file whose name ended in the double ‘extension’ .pdf.exe.

And in this simple fact lies the danger. Windows uses file extensions in a number of ways. For a start, it uses the extension to choose what type of icon to display. But it also uses the extension to choose what to do when the file is ‘opened’. In the case of the rogue email, Windows glibly displays the pdf file-type icon. To most users this is a familiar and usually safe file type, so they may feel ready to click on it. But at that point the system uses the final part of the extension to decide what to do with the file: in this case, running the rogue program.
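The pattern is easy to detect mechanically, which is presumably what mail filters do. A small sketch of my own (the list of dangerous extensions is far from exhaustive):

```python
import os

DANGEROUS = {".exe", ".scr", ".com", ".bat", ".cmd", ".pif"}

def looks_disguised(filename):
    """True for names like 'parcel.pdf.exe', where an executable
    extension hides behind an innocent-looking one."""
    root, outer = os.path.splitext(filename)
    _, inner = os.path.splitext(root)
    return outer.lower() in DANGEROUS and inner != ""

print(looks_disguised("parcel-details.pdf.exe"))  # True
print(looks_disguised("report.pdf"))              # False
print(looks_disguised("setup.exe"))               # False: no disguise
```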

Fortunately most users are already running an anti-virus program and the bad guys have been thwarted.

This time!

The Weird, The Wonderful and The Very, Very Lightweight.

One of the things that really separates Linux from both Windows and Mac OS X is the desktop. In Windows and Mac OS X you get a desktop which comes with its own metaphors, and you pretty much have to learn to live with it. The operating system and the desktop are essentially synonymous. Sometimes this can cause real problems. The launch of Windows 8 has seen the biggest change in the desktop metaphor since Windows 95 (I could comment on a naming convention that goes 3, 3.1, 95, 98, XP, Vista, 7, 8, but that will have to be a post for another day), and many people aren’t particularly happy with the new interface. I know that when I am required to use it, as, for instance, when I set up someone’s new laptop, I spend most of my time looking for things that I used to know how to find. Fortunately, I quickly found the search function, and probably use that more than anything else. And as for Mac OS X, as users are slowly led (through a series of expensive software and hardware upgrades) towards the expected melding of the iOS and Mac OS interfaces, they have to get used to changes in the way things work every time.

Users of both Windows and Mac OS X are used to their setup, and in reality there isn’t a sensible or universal answer to the question of which is better: users of both systems become comfortable with them. I personally don’t like the way that the Mac menu is a bar at the top of the screen which changes depending on which application is active. This approach seems especially churlish when you have a huge monitor and have to move the cursor right to the top of the screen to pick a menu item. There are also some oddities (from a Windows user’s perspective), such as having to select a window before you can click in it, necessitating double clicks when changing applications. Since the introduction of Windows 7 I have got used to using the Start menu in much the same way as I use the terminal in Linux, or the Alt-F2 application finder: I don’t bother looking for the icon, I just enter the first few characters of the app’s name and then select what I want. I recently found out how to do something similar using Command-Space in Mac OS X.

So what of Linux? When I first used Linux, back in the late nineties, it came with either the command line, or an awful windowing interface called CDE. It was ugly. It wasn’t really slow, but it didn’t really serve much of a purpose. Linux was still very command line driven, with all of the learning curve requirements that that involved, but at least you could run multiple terminals on your desktop. However, CDE was remarkably limited by just how few applications were available for it.

Since then, there have been huge advances in the Linux desktop. The obvious players are KDE and Gnome. KDE was what I used when I first moved to SuSE Linux. It came with good programs for browsing and email; KMail even supported the use of encryption keys and signing. Gnome was also available; it hadn’t been customised as much, but it was easier to fiddle with.

I suspect most users chose their desktop on the basis of whichever was the default for their particular distribution. And over the years I have had positive and negative experiences with both. Most recently I was using Gnome for a long time.

But then I got a laptop which, although it is a dual-core machine, I felt didn’t really have a huge amount of ‘oomph’ to spare. I wanted to make the best of what power I had, so I began to look at alternatives to the default Gnome. And because of the modular nature of Linux there are an awful lot to choose from. Some of them set out to be lightweight but ‘normal’ replacements; examples are Xfce, Openbox and LXDE. Xfce and LXDE especially are the sort of environments where Windows users will feel right at home. Openbox is less obvious, as there isn’t a menu bar by default (though adding one is easy); instead it uses a pop-up menu which appears when the right-hand mouse button is clicked. From that, applications can be started, and cycling through open applications is achieved in the usual Alt-Tab manner.

Having experimented with Openbox and Xfce, I settled on Xfce for a long time. It is lightweight and fast, and worked well on my laptop’s screen. I spent a long time tweaking it: I had Conky running with my own home-brew configuration file, which showed processor activity, network activity and disc usage. I was very happy with it, and I stuck with Crunchbang, my distribution at the time, for quite a period just because I was so happy with the desktop.

But then Crunchbang went with Openbox as its default, and I moved on to the new kid on the block, the marvellous Linux Mint. Of course, Mint has made its name by using a fork of the old Gnome 2 desktop (as opposed to Ubuntu’s Unity or Gnome 3). I really don’t like Unity or Gnome 3, as both of them seem intent on an approach similar to the screen-top menu bar found in Mac OS. So I used Mint in its Xfce form.

But I have also been looking at some of the other, less conventional options. For the sake of this post, the ones I investigated most seriously are Ratpoison and Xmonad. Ratpoison is best considered a windowing equivalent of the command-line program ‘screen’: you can open multiple programs and then navigate between them using the keyboard. It is designed to run everything full screen (which can be odd when you want to run multi-window programs like The Gimp). I loved the idea, but I found the insistence on full screen a little daunting, especially on my work desktop; a small application filling a 27″ monitor seems a profligate waste of screen space! Xmonad is similar to Ratpoison in that most of its functionality can be driven from the keyboard, but it automatically tiles the windows it creates, so you can open the programs you want to use and have them fill the available space. There are lots of ‘recipes’ for tweaking its behaviour, especially in terms of keyboard layouts. This is of particular interest to me, as I tend to use the Dvorak keyboard layout whenever possible.

One area of Xmonad which I would really like to investigate in the future is its multiple-display support. It implements libraries for binding multiple displays into one unit, which sounds like great fun if I ever get the chance to use it. One big display, chopped and spliced on the fly, is an interesting concept, and I could see it becoming my preferred display setup.


Now I Just Need Paranoid Friends.

My last post was about protecting data stored on a server by having it encrypted and decrypted transparently by a special Linux file system. It is a very elegant way of securing data against physical theft (though, obviously, a hacker gaining access to the running system would be able to see the files).

The other area of interest that came back recently is more generic. Edward Snowden shocked America by blowing the whistle on the US government hoovering up everyone’s emails. One of the things I thought was very interesting was the unspoken part of the story. The ‘controversy’ is that the government is doing it to US citizens in contravention of the 4th Amendment. Apparently not controversial is the fact that various governments have been doing it to the citizens of the rest of the world for a long time, and with complete impunity.

Back in the day, when I worked at WorldPay, we used to tell customers that they shouldn’t send anything by email that they wouldn’t write on a postcard and send by snail mail. Despite that, there were occasional people who would email us with their complete credit card number, asking why their transaction had failed.

So what can you do to protect the information that you send by email? The best solution is to encrypt everything you send. Classic encryption used to come in the form of a shared secret between the message sender and the message receiver: the same method that was used to encrypt the information was used to decrypt it. The problem then is always how to share the encryption/decryption secret securely. In many ways this was the major weakness of the Enigma machines used by Germany in the Second World War; many radio operators got lazy in how they transmitted their first message of the day (which included information on the day’s secret configuration).

The modern solution to this problem is to have an asymmetric key pair for encryption. There are two parts to the key, the public and the private. The public key is intended for distribution; this can include publishing it on web pages, uploading it to public keyservers and the like. The private key must be kept safe and protected.

The way that asymmetric encryption works is simple in concept:

  1. Identify the address of the person you want to send an encrypted email to and, if they have a public key available, download it.
  2. Use the public key to encrypt the information that you wish to send. The important point is that the encryption process is a one-way street: even with the public key to hand, the original message cannot be extracted from the encrypted data.
  3. Send the encrypted data to the intended recipient, who can then decrypt it using their private key and the passphrase associated with the key pair.
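The three steps above map directly onto GnuPG commands. This sketch assumes gpg 2.1 or later, and uses a throwaway keyring and an unprotected test key purely so the example is self-contained; a real key would live in your normal keyring and carry a strong passphrase:

```shell
# Work in a throwaway keyring so the real one is untouched.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# The recipient generates a key pair (normally done just once).
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.com>' default default never

# Step 2: the sender encrypts with the recipient's *public* key.
echo 'meet at noon' > msg.txt
gpg --batch --quiet --yes --recipient alice@example.com --encrypt msg.txt

# Step 3: only the holder of the *private* key can read it back.
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --output decrypted.txt --decrypt msg.txt.gpg
cat decrypted.txt
```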

As you can see, the data treated in this way is secured. However, this information can only be sent to users who have generated their own encryption keys.

Another, more common use of the key pair is to ‘sign’ messages or files in order to confirm their veracity. In many ways the signing process is similar to generating an MD5 checksum of a file. An apparently random string of characters, which is a product of the information being signed, the user’s private key and the passphrase, is appended to the information. The public key can be used to compare the signature with the signed data, confirming that whoever created the signature had access to the user’s private key, the key’s passphrase and the original content of the data. Changing either the data that is signed or the signing string will cause a mismatch, indicating that the data has been tampered with.
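Signing with the same tooling looks like this (again a throwaway, passphrase-less key so the sketch is self-contained; gpg 2.1+ assumed):

```shell
# Throwaway keyring and test key, as in the encryption example.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.com>' default default never

# Create a detached, ASCII-armoured signature over a file.
echo 'original contents' > notes.txt
gpg --batch --quiet --yes --pinentry-mode loopback --passphrase '' \
    --armor --detach-sign notes.txt          # writes notes.txt.asc

# Verification succeeds against the untouched data...
gpg --verify notes.txt.asc notes.txt

# ...but fails, as it should, once the data has been tampered with.
echo 'tampered contents' > notes.txt
gpg --verify notes.txt.asc notes.txt || echo 'tampering detected'
```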

All of this probably sounds like quite a lot of hassle. The good news is that it is easy to automate the process in a simple, cross-platform way. Thunderbird supports an add-on, ‘Enigmail’, which handles importing and managing public keys, signing and encrypting. It also confirms the identity of users when content is signed.

The only downside? No one I know cares enough about this stuff to actually go through the steps of creating a key pair and starting to encrypt their information. I’ll just keep to signing for the time being.

If you want to be able to email me securely then the following are my public keys, along with their associated email addresses: (Work address):

Version: SKS 1.1.4
Comment: Hostname:

—–END PGP PUBLIC KEY BLOCK—– (Home address):

Version: SKS 1.1.4
Comment: Hostname:


The really cool thing is that with the apps APG and K9-Mail I can have this same functionality on my phone as on my desktop machines!