Announcing ExceptionalEmails.com

If you are a sysadmin or developer you probably get a bunch of emails from systems telling you they are doing just fine. You probably have mail rules to shove these off into some folder you never look at, so you can get on with life. If one of them failed to turn up, that would be rather interesting, but there is no mail rule you can write to alert you about an email that didn’t happen. Over the last couple of weeks I have been building a system to fix that: http://exceptionalemails.com. You shove all these emails at a set of special email addresses, one for each type of regular email, and set up rules saying what you expect to happen. You then get on with your life, and if an email fails to arrive, or perhaps contains the wrong words (fail/error/out of disk space/etc.), then and only then will we send you an email – you only need to see the exceptions.
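Under the hood this is essentially a dead man’s switch for email. A minimal sketch in Python of how such an overdue check might work – the field names and structure here are my own illustration, not the actual exceptionalemails.com code:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the core idea: each alert records when a matching
# email was last seen and how often one is expected; a periodic job flags
# any alert whose expected email has failed to arrive.
def find_overdue(alerts, now=None):
    """Return the names of alerts whose expected email is overdue."""
    now = now or datetime.utcnow()
    overdue = []
    for alert in alerts:
        deadline = alert["last_seen"] + timedelta(hours=alert["expected_every_hours"])
        if now > deadline:
            overdue.append(alert["name"])
    return overdue

alerts = [
    {"name": "fileserver-backup", "expected_every_hours": 24,
     "last_seen": datetime(2013, 3, 1, 2, 0)},
    {"name": "db-backup", "expected_every_hours": 24,
     "last_seen": datetime(2013, 3, 2, 2, 0)},
]
print(find_overdue(alerts, now=datetime(2013, 3, 2, 12, 0)))
```

A cron job running a check like this every few minutes, plus a keyword check on each incoming mail, covers both failure modes: the email that never arrived and the email that arrived with the wrong words in it.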

This is the form for setting up the rules for an alert. In this example I would set my fileserver backup job to email alanbell1+fileserver@exceptionalemails.com when it is done (or leave it emailing me, and set up a rule to file the mail in a folder and forward a copy to alanbell1+fileserver@exceptionalemails.com).

an alert form


This was my first project using MongoDB as a back end and I have been really impressed by it. I have a background in NoSQL, and it all made sense to me in terms of performance expectations and optimisations. I load tested it with a million emails and it was still really fast. It runs on Ubuntu Server, with a user interface written in PHP. The back-end jobs that receive emails and check for overdue alerts are written in Python.

I would be really interested in any feedback on the site. I have some plans for improving the analysis of past emails with sparklines, so you can see when failures happened, and maybe fluctuations in the arrival times of emails. Any other suggestions would be welcome. There is an outside chance that I might write a Juju charm for it – and probably do a bit of refactoring to make deployment easier. One of the reasons for choosing MongoDB at the back end, with a separate process to receive the emails, was to allow it to scale horizontally across a bunch of servers. Based on my load testing I couldn’t hammer it hard enough to slow things down noticeably, so I am not sure my grand clustering plans will be required.

The code is on GitHub under the AGPL3, and I am tracking issues there.

Bier vandeStreek

Broeders and Dark Roast
A few days ago here at Libertus Towers we received a lovely gift from a friend in the Netherlands: Free Beer!

vandeStreek Beer is from

Two enthusiastic brothers from Utrecht, the Netherlands who enjoy tasteful craft beers. After several years brewing on a micro scale, we are now sharing our beers with the world.

Thankfully our friend, a brother of the two brewers above, knew our love of all things beer, and thought it would be a good idea to let us sample his siblings’ art…

There are two brews called BROEDERS and DARK ROAST.

After leaving them for a good session in my fridge I thought I’d start by cracking open a Broeders…

I’m a big fan of very hoppy beers with a good bitter finish (think really good IPA) and this Broeders is right up there for me. The first thing I noticed on the palate was “nutty”, very nutty, then the hops kicked in, followed by a gentle breeze of burnt caramel. The beer was lovely and dry, clean tasting, surprisingly refreshing for a beer of this strength, and the head, whilst not deep, kept its consistency right to the bottom of the glass.

Broeders is a strong beer by UK standards at 6.3% ABV, so probably not one you’d want to do a long session on, but the high alcohol content didn’t destroy the flavours, as it tends to in some high-strength beers.

Whilst I was drinking Broeders I did think that the finish (the length of time the flavour lingers afterwards) might be rather short but how wrong I was! When I tottered off to bed, probably a good hour or so after I’d finished the glass, that hoppy, nutty complex of flavours was still there; it seemed a shame to have to clean my teeth!

To conclude then, I really liked this beer. So if you fancy trying something different I’d recommend vandeStreek Broeders any day.

Next week I’ll give you my take on the Dark Roast. To be honest I’m not expecting to like it so much. I’ve never really liked dark beers, many seem to me to be too sweet and a bit “thick” & sickly. But hey ho – I’m not going to not drink it; that would just be rude wouldn’t it… ;-)

Ubuntu Coaster and other animals

My son James (12) has presented me with two gifts he’s made at school recently. Both are terrific and he designed and made them himself.

As a proud dad I felt it only right to show them off…

Note the clever location of the hole for use as a keyring…

Then yesterday he presented me with this very nice USB stick modelled into a seahorse:

USB Stick

There’s a clip to hold the two parts together, and I’m really impressed by how well the two halves line up, seeing as they are cut from two different sheets of perspex!

Ubuntu Smart Scopes

A new feature of Ubuntu was discussed today (which is like an announcement, but without the overhyping). It is called Smart Scopes and is documented at https://wiki.ubuntu.com/SmartScopes1304Spec – go read that first, and then I have a video for you to watch.

http://www.youtube.com/watch?feature=player_embedded&v=CBeQur7VBDM

Now go back and read the spec that I told you to read earlier, but all the way to the end this time.

In the video, from left to right, are Alan Bell (me), David Callé, Jono Bacon, Michael Hall, Roberto Alsina and Stuart Langridge, all discussing this new framework for searching. It is coming soon to the Ubuntu Raring desktop, and then to phone, TV, tablet and so on. The objective is to make searching really effective and helpful to the user, but as with previous efforts in this direction there will be some concerns about how it is implemented.

In short, Canonical will be running a server much like the existing productsearch.ubuntu.com server, which will accept queries and return a bunch of results as JSON. The current implementation searches Amazon, the Ubuntu One music store and a few other places. The new one will do the same, plus more server-side searches, plus a new feature altogether: a list of good scope names for the client to search.

Your client will now send a list of all locally installed scopes to the server (actually a list of scopes you have added and a list of scopes you have removed or turned off from the standard set) along with your query. The server then returns the results it found and wants to put in your dash, plus a subset of the local scopes you sent it, in order, that the server thinks would be good places to hunt for your search term. This means that your client might have 100 or more locally installed search scopes, but the server will advise it which are likely to give good results.

Now for the scary bit: once you have looked at the results and perhaps clicked on something, your client pings the server again to tell it which scope produced the most relevant result. This means the server can learn from this feedback which scopes produce high-quality results for that keyword, and perhaps rank that scope a bit higher in future recommendation lists.
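To make the round trips above concrete, here is a rough sketch of the payloads as I understand them – the field names and values are my own guesses for illustration, not the actual wire protocol:

```python
import json

# Hypothetical illustration of the query/feedback round trips; the real
# endpoint and field names are not documented here.

# 1. The client sends the query plus its deviations from the standard scope set.
query = {
    "query": "chicken house",
    "added_scopes": ["omlet"],     # locally installed extras
    "removed_scopes": ["amazon"],  # standard scopes turned off
}

# 2. The server answers with its own results plus an ordered list of local
#    scopes it thinks are worth searching for this term.
response = {
    "results": [{"title": "Eglu chicken house", "scope": "server:omlet"}],
    "recommended_scopes": ["omlet", "files"],
}

# 3. After a click, the client reports which scope produced the winning
#    result, so the server can rank it higher for this keyword in future.
feedback = {"query": "chicken house", "clicked_scope": "omlet"}

payload = json.dumps(feedback)
print(payload)
```

The interesting design point is that the heavy lifting (ranking which of your 100-odd scopes to bother searching) happens server-side, while the actual searching of local and authenticated sources stays on your machine.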

  • Lenses are now called master scopes
  • You control each individual scope that you want to search in or not, rather than the master scopes, so you will have 100 or so things to turn on or off.
  • You can still have locally installed scopes that search authenticated data sources
  • You could in principle run your own search server, if you write one to implement the API and patch the home master scope to look at your own server
  • The server isn’t open source
  • You can’t opt out of the feedback process (without turning off the smart scope altogether – which you can do)
  • If you install a local scope then your client will tell the server the name of that scope
  • Every query to the server is going to include a list of locally installed scope names (100 or so perhaps?)
  • You can focus a search at a particular scope by using a keyword, for example “omlet: chicken house” to only search the Omlet scope and not the chicken stuff master scope.
  • The rather poorly thought out remote-content-search checkbox to disable local scopes from doing online searches remains in place – however you don’t need it as you have per-scope controls.
  • There may be some code quality checks introduced to stop scopes that don’t pay attention to the remote-content-search setting from getting into the Ubuntu distribution – but you don’t need that setting either.
  • This probably won’t put more adverts on your desktop while you are trying to do work.
  • This is probably a more private way of searching for stuff than googling for it.
  • This won’t be opt-in, all the good stuff in Ubuntu is turned on by default.
  • Your IP address gets logged on the web server logs, but not in the database of the smart scopes application running on the server. The developers working on the smart scopes don’t have access to the web server logs.
  • It would be relatively trivial (I could do it in a day or so if I felt like it) to write a gnome-shell client for this smart scopes server to display the remote results, however doing something with the scope recommendations list would be a bit of a struggle.
  • The home master scope (dash) search box will contain the help text “search your computer and online sources” to make it clear that it isn’t just a local search.

Now to the big question: how much are people going to freak out about this? Well, if they read the spec all the way to the end they will see everything that is being collected, how it is aggregated, how much or how little privacy it costs them, and why it is being done for the greater good of having decent search results. The feedback data collection process is the part most likely to cause a freakout. I can see why the developers want it turned on, and I can see why it is antisocial to turn it off – like leeching on BitTorrent while downloading an Ubuntu ISO or whatever. I think they would be wise to have a checkbox in the privacy settings dialogue so that antisocial people can turn it off, but I imagine the developers will stick with the current policy: if you want to use smart scopes, you have to participate in the feedback process to make it better.

I think we need to do some education around the lack of an applications launcher though. Currently people think that Super + name of application is a replacement for the Gnome 2 applications menu. It isn’t. Super+A, then the name of the application, is how to start applications. This focuses the search on just applications and works a lot faster than the omniglobaleverywhere search that the Super key does by itself.

For me this is a good development overall. The privacy debacle will be solved to my satisfaction when you can locally and personally blacklist scopes. That will mean I can write a scope without it being co-dependent on all the other online scopes, and I don’t have to worry about whether intranet access constitutes internet access. All scopes can simply stop if remote-content-search is set, but nobody needs to set it; the flag will basically just break all searching and be a bit pointless.

GeoTools: Geolocation services for vtiger CRM

As many of you know already, our company Libertus Solutions does quite a lot of work with the open source CRM called vtiger. It’s a very competent and accomplished product made even more so by its well thought out extension capabilities.

In this post I’m really pleased to announce our first open source vtlib module for vtiger called GeoTools.

It was derived from another project on the vtiger forge called Maps, which we have taken and extended in true open source style. Standing on the shoulders of giants, and all that…

GeoTools introduces Geolocation features to vtiger in a standard vtlib module package. It adds the ability to perform distance-based searches on your data.

GeoTools uses the Google Maps API to gather positional data – that is, latitude and longitude coordinates – for the entity records that have been configured in the GeoTools Settings area. Once we have acquired this positional data we can perform location-based calculations, displaying the results on an embedded Google Map and as a list view of entity records.
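The distance-based part of such a search boils down to a great-circle calculation on the stored coordinates. Here is a quick sketch of the standard haversine formula (in Python for brevity rather than vtiger’s PHP, and not the actual GeoTools code – the record names and search point are made up):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is the mean Earth radius

# e.g. keep only the geocoded records within 100 km of a search point
records = [("London office", 51.5074, -0.1278), ("Paris office", 48.8566, 2.3522)]
centre = (51.0, 0.0)
nearby = [name for name, lat, lon in records
          if haversine_km(centre[0], centre[1], lat, lon) <= 100]
print(nearby)
```

In practice you would do this filtering in the database query rather than in application code, but the calculation is the same.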

Anyway enough of the words already! Here’s a video:

As soon as the forge site is up I’ll update this and provide links to the code.

Update: Here we are: This will be a moving target for some time yet – it’s still rather “beta” grade code…

Finding VirtualBox IP addresses

I have been running some server instances in VirtualBox recently, and as I move between networks it is a pain to have to log in and get the IP address from ifconfig before being able to access the test web sites running in them. I also prefer to SSH to them rather than use the VirtualBox console (it gives a better character screen size, although I could reconfigure things; I also tab my terminals).

Anyway, in order to make things easier I put together two scripts: one that gets the IP address of a VirtualBox instance, and one that connects to an instance via SSH given just its name. In order to do this you need to install the VirtualBox Guest Additions. This package is generally used for video drivers in GUI-based guests, but it also has some extensions that present extra information about the guest to the host machine.

Installing on Ubuntu is quite easy. My setup is Ubuntu 12.10 on the desktop (host) and Ubuntu 12.04 LTS on the server (guest). To start with, there is a package containing the Guest Additions ISO, so begin by installing that on the host with:

sudo aptitude install virtualbox-guest-additions-iso

Next you need to mount the ISO in the guest OS. To do this choose the Install Guest Additions option from the Devices menu. Since this is a CLI server OS it won’t automatically mount the ISO, so you will have to do this manually with:

sudo mount /dev/cdrom /media/cdrom

guestadditions1

Once you have done this you need to install dkms and then run the install script with:

sudo aptitude install dkms
sudo /media/cdrom/VBoxLinuxAdditions.run

It will complain about not having found X to install the graphics drivers, but this isn’t a problem.

Once you have done this you can use the command:

VBoxManage guestproperty enumerate <vname>

where <vname> is the name of your guest. This will list out all of the available information that can now be accessed.

Using this command in a little bit of bash I created the two scripts. Firstly to get the IP address of a named guest:

#!/bin/bash
VIP=$(VBoxManage guestproperty get "$1" "/VirtualBox/GuestInfo/Net/0/V4/IP" | awk '{print $2}')
echo "$VIP"

Secondly, to SSH to a named guest:

#!/bin/bash
VIP=$(VBoxManage guestproperty get "$1" "/VirtualBox/GuestInfo/Net/0/V4/IP" | awk '{print $2}')
ssh-keygen -f "/home/paul/.ssh/known_hosts" -R "$VIP"
ssh "$VIP"

This second one is a little more involved, because it first deletes the entry from the known_hosts file (remember to change the location to your own). I’ve done this to stop an error coming up if the IP address has been used before, which isn’t uncommon with DHCP leases (you often get the same one, but not always!). You will have to confirm the authenticity of the host each time you connect, but since this is scripted and the IP has been obtained locally from the machine itself, this shouldn’t present a security risk.

Each of these scripts takes the guest name as a parameter, eg:

vbip Alfresco

or

vssh Alfresco

Lastly, to make these new scripts easy to use I created a .bash_aliases file in my home directory with the following:

## custom aliases
alias vssh='~/scripts/vssh'
alias vbip='~/scripts/vbip'

You will need to adjust these for wherever you have put the scripts; I tend to have a scripts directory in my home directory for this purpose.
