How to Install OpenERP Odoo 8 on Ubuntu Server 14.04 LTS


Introduction

Welcome to the latest of our very popular OpenERP Odoo installation “How Tos”.

The new release of Odoo 8.0 is a major upgrade introducing a great many new features and a new name.

Odoo 8.0 is not only better looking and easier to use, it also brings many improvements to the existing feature-set and adds a number of brand new features which extend the scope of the business needs covered by Odoo. Ecommerce, CMS, Integrated BI…

Rather than me blathering on about what’s new, you can simply go and read the release notes here.

The How To

Following that introduction, I bet you can’t wait to get your hands dirty…

Just one thing before we start: you can simply download a .deb (for Debian/Ubuntu type systems) or a .rpm (Red Hat/CentOS) package of OpenERP and install that. Unfortunately that approach doesn’t provide us (Libertus Solutions) with enough fine-grained control over where things get installed, and it restricts our flexibility to modify & customise, hence I prefer to do it a slightly more manual way (the install process below should only take about 10-15 minutes once the host machine has been built).

This time, rather than using a source tarball as the basis for installation, we are going to take the code straight from the Odoo 8.0 branch on GitHub. This should help when it comes to installing updates and bug fixes in the future: we can just issue a git pull command to update the code. Bear in mind that before doing a pull you should always have backups, and you may need to update your Odoo database(s) as well.
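When that update time comes, the process will look something like this (a sketch only; take backups first, and note that the odoo user and the /opt/odoo location are set up in the steps below):

sudo su - odoo -s /bin/bash
cd /opt/odoo
git pull
exit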

So without further ado here we go:

Step 1. Build your server

I install just the bare minimum from the install routine (you may want to select openssh-server during the install procedure, or add it afterwards, depending on your needs).

After the server has restarted for the first time I install the openssh-server package (so we can connect to it remotely) and denyhosts to add a degree of brute-force attack protection. There are other protection applications available: I’m not saying this one is the best, but it’s one that works and is easy to configure and manage. If you don’t already, it’s also worth looking at setting up key-based ssh access, rather than relying on passwords. This can also help to limit the potential of brute-force attacks. [NB: This isn’t a How To on securing your server…]

sudo apt-get install openssh-server denyhosts

UPDATE: Note that it seems denyhosts is no longer being maintained and is not in the main Ubuntu repository any more. I’m aware of a possibly suitable alternative called fail2ban but have not used it yet; do your own research. Thanks to Rami for the cluebat!

UPDATE2: Thanks to Paul for the pointer. I have added python-unicodecsv to the list of dependencies. Apparently this is required to correctly restore backups.

Now make sure your server has all the latest versions & patches by doing an update:

sudo apt-get update
sudo apt-get dist-upgrade

Although not always essential it’s probably a good idea to reboot your server now and make sure it all comes back up and you can login via ssh.

Now we’re ready to start the Odoo install.

Step 2. Create the Odoo user that will own and run the application

sudo adduser --system --home=/opt/odoo --group odoo

This is a “system” user. It is there to own and run the application; it isn’t supposed to be a person-type user with a login etc. In Ubuntu, a system user gets a UID below 1000, has no usable shell (it’s actually /bin/false) and has logins disabled. Note that I’ve specified a “home” of /opt/odoo; this is where the Odoo server code will reside, and the directory is created automatically by the command above. The location of the server code is your choice of course, but be aware that some of the instructions and configuration files below may need to be altered if you decide to install to a different location.

[Note: If you want to run multiple versions of Odoo/OpenERP on the same server, the way I do it is to create multiple users with the correct version number as part of the name, e.g. openerp70, openerp61 etc. If you also use this when creating the Postgres users too, you can have full separation of systems on the same server. I also use similarly named home directories, e.g. /opt/odoo80, /opt/openerp70, /opt/openerp61 and config and start-up/shutdown files. You will also need to configure different ports for each instance or else only the first will start.]
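For example, a second instance’s configuration file might carry lines like these (a sketch only; the odoo80 names and the port number are illustrative, and xmlrpc_port is the option Odoo 8 reads for its HTTP port):

# in a hypothetical /etc/odoo80-server.conf
xmlrpc_port = 8169
logfile = /var/log/odoo80/odoo80-server.log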

A question I have been asked a few times is how to run the Odoo server as the odoo system user from the command line if it has no shell. This can be done quite easily:

sudo su - odoo -s /bin/bash

This will su your current terminal login to the odoo user (the “-” between su and odoo is correct) and use the shell /bin/bash. When this command is run you will be in odoo’s home directory: /opt/odoo.

When you have done what you need you can leave the odoo user’s shell by typing exit.

Step 3. Install and configure the database server, PostgreSQL

sudo apt-get install postgresql

Then configure the Odoo user on postgres:

First change to the postgres user so we have the necessary privileges to configure the database.

sudo su - postgres

Now create a new database user. This is so Odoo has access rights to connect to PostgreSQL and to create and drop databases. Remember what your choice of password is here; you will need it later on:

createuser --createdb --username postgres --no-createrole --no-superuser --pwprompt odoo
Enter password for new role: ********
Enter it again: ********
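If you want to double-check that the role was created before moving on, this optional command (still as the postgres user) lists it:

psql -c "\du odoo"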

Finally exit from the postgres user account:

exit

Step 4. Install the necessary Python libraries for the server

sudo apt-get install python-cups python-dateutil python-decorator python-docutils python-feedparser \
python-gdata python-geoip python-gevent python-imaging python-jinja2 python-ldap python-libxslt1 \
python-lxml python-mako python-mock python-openid python-passlib python-psutil python-psycopg2 \
python-pybabel python-pychart python-pydot python-pyparsing python-pypdf python-reportlab python-requests \
python-simplejson python-tz python-unicodecsv python-unittest2 python-vatnumber python-vobject \
python-werkzeug python-xlwt python-yaml wkhtmltopdf

With that done, all the dependencies for installing Odoo 8.0 are now satisfied (note that there are some changes between this and the packages required for OpenERP 7.0).
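As a quick, optional sanity check that the main modules are importable (the module list here is just a sample, not exhaustive):

python -c "import psycopg2, lxml, werkzeug, PIL, reportlab, yaml; print 'OK'"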

UPDATE & NOTE: It has been pointed out to me that the Qweb templating engine in Odoo 8 warns that the version of wkhtmltopdf is too old. It turns out that Ubuntu 14.04 packages version 0.9.9 of this library, which is rather old. To update your Ubuntu server please follow the instructions on this page. Many thanks to Ruben Kannan for pointing this out :-).

UPDATE & NOTE II: Zak suggests an alternative method to get and install the most recent version of wkhtmltopdf in this comment below. Thanks Zak.

UPDATE & NOTE III: David suggested adding python-cups and python-geoip to the list of modules to install in this comment below. Thanks David.

Step 5. Install the Odoo server

Install Git.
sudo apt-get install git

Switch to the Odoo user:
sudo su - odoo -s /bin/bash

Grab a copy of the most current Odoo 8 branch (Note the “.” at the end of this command!):
git clone https://www.github.com/odoo/odoo --depth 1 --branch 8.0 --single-branch .
(This might take a little while depending on the speed of your Internet connection.)

Note: Thanks to Ian Beardslee for the cluebat. Have now added --depth 1 to the command so it only retrieves the latest version without all the history. The download is now quite a bit quicker.

Once it’s finished exit from the odoo user: exit.

Step 6. Configuring the Odoo application

The default configuration file for the server (/opt/odoo/debian/openerp-server.conf) is actually very minimal and will, with only a small change, work fine, so we’ll copy that file to where we need it and change its ownership and permissions:

sudo cp /opt/odoo/debian/openerp-server.conf /etc/odoo-server.conf
sudo chown odoo: /etc/odoo-server.conf
sudo chmod 640 /etc/odoo-server.conf

The above commands make the file owned by the odoo user and group, writeable only by the odoo user, and readable only by the odoo user and group (plus root, of course).

To allow the Odoo server to run initially, you should only need to change two lines in this file. Toward the top of the file change the line db_password = False to use the same password you chose back in step 3. Then modify the line addons_path = /usr/lib/python2.7/dist-packages/openerp/addons so that it reads addons_path = /opt/odoo/addons instead.
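After those edits the relevant lines should read something like this (with your own password in place of the placeholder):

db_password = YourPGPasswordHere
addons_path = /opt/odoo/addons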

One other line we might as well add to the configuration file now tells Odoo where to write its log file. To complement my suggested location below, add the following line to the odoo-server.conf file:

logfile = /var/log/odoo/odoo-server.log

Use your favourite text editor here. I tend to use nano, e.g.

sudo nano /etc/odoo-server.conf

Once the configuration file is edited and saved, you can start the server just to check if it actually runs.

sudo su - odoo -s /bin/bash
/opt/odoo/openerp-server

If you end up with a few lines eventually saying that OpenERP (yes, the log still says OpenERP and not Odoo) is running and waiting for connections, then you are all set.

If there are errors, you’ll need to go back and find out where the problem is.

Otherwise simply press Ctrl+C to stop the server, then type exit to leave the odoo user account and go back to your own shell.

Step 7. Installing the boot script

For the final step we need to install a script which will be used to start up and shut down the server automatically, and also to run the application as the correct user. There is a script you can use in /opt/odoo/debian/init but it will need a few small modifications to work with the system installed the way I have described above. Here’s a link to the one I’ve already modified for Odoo version 8.

Similar to the configuration file, you need to either copy it or paste the contents of this script to a file in /etc/init.d/ and call it odoo-server. Once it is in the right place you will need to make it executable and owned by root:

sudo chmod 755 /etc/init.d/odoo-server
sudo chown root: /etc/init.d/odoo-server

In the configuration file there’s an entry for the server’s log file. We need to create that directory first so that the server has somewhere to log to, and we must also make it writeable by the odoo user:

sudo mkdir /var/log/odoo
sudo chown odoo:root /var/log/odoo

Step 8. Testing the server

To start the Odoo server type:

sudo /etc/init.d/odoo-server start

You should now be able to view the logfile and see that the server has started.

less /var/log/odoo/odoo-server.log

If there are any problems starting the server you need to go back and check. There’s really no point ploughing on if the server doesn’t start…

[Screenshot: the Odoo 8 new-database screen]

If the log file looks OK, now point your web browser at the domain or IP address of your Odoo server (or localhost if you are on the same machine) and use port 8069. The url will look something like this:

http://IP_or_domain.com:8069

What you should see is a screen like this one (it is the Database Management Screen because you have no Odoo databases yet):

What I do recommend you do at this point is change the super admin password to something nice and strong (click the “Password” menu). By default this password is just “admin”, and knowing that, a user can create, backup, restore and drop databases! This password is stored in plain text in the /etc/odoo-server.conf file, which is why we restricted access to just odoo and root. When you change and save the new password the /etc/odoo-server.conf file will be re-written and will have a lot more options in it.
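For reference, the master password is the admin_passwd option, so after saving you will find a line like this in /etc/odoo-server.conf (placeholder value shown):

admin_passwd = YourStrongAdminPassword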

Now it’s time to make sure the server stops properly too:

sudo /etc/init.d/odoo-server stop

Check the log file again to make sure it has stopped and/or look at your server’s process list.

Step 9. Automating Odoo startup and shutdown

If everything above seems to be working OK, the final step is to make the script start and stop automatically with the Ubuntu server. To do this type:

sudo update-rc.d odoo-server defaults

You can now try rebooting your server if you like. Odoo should be running by the time you log back in.

If you type ps aux | grep odoo you should see a line similar to this:

odoo 1491 0.1 10.6 207132 53596 ? Sl 22:23 0:02 python /opt/odoo/openerp-server -c /etc/odoo-server.conf

Which shows that the server is running. And of course you can check the logfile or visit the server from your web browser too.

[Screenshot: the Odoo 8 opening screen]

That’s it! Next I would suggest you create a new database filling in the fields as desired. Once the database is initialised, you will be directed straight to the new main configuration screen which gives you a feel for the new User Interface in Odoo 8 and shows you how easy it is to set up a basic system.

vtiger 6.1.0 Released


After several months of development, the latest version of the most popular open source CRM, vtiger CRM, is released today.

vtiger 6.1.0 provides new features, over 150 bug fixes and support for newer versions of PHP and MySQL, as well as the all new Extension Store & vtiger Marketplace, where our own Geographic Information System extension, GeoTools, was the first to be approved and published.

Release notes for vtiger are here.

Downloads are here.


As always, the UK’s most experienced vtiger partner Libertus Solutions is available for support and assistance with any vtiger implementation.

GeoTools re-visited

Remember our first release of GeoTools? Libertus Solutions are delighted to announce the release of a completely re-written GIS (Geographical Information System) extension module for the soon-to-be-released vtiger 6.1.0.

This new version of GeoTools integrates services from OpenStreetMap with whatever business data is configured and available in your CRM application.


One of the big changes in the forthcoming 6.1.0 release of vtiger is a brand new integrated Extension Store feature that makes it simple for users to install and manage new extension modules for their vtiger installation:

  • 1 Click install of commercial or free applications
  • Confidence that apps are verified by vtiger before publication
  • Contact information of publisher is immediately available
  • See and contribute Ratings and Reviews of extensions

GeoTools is the first new extension module to be approved and published in vtiger’s new Marketplace and offers a host of new features that take full advantage of the new MVC architecture of vtiger 6.

  • Use existing search filters and geographic parameters simultaneously in search results
  • Manually update, by drag & drop, the location coordinates of records that have not been geocoded correctly
  • Drag & drop the radius centre to re-calculate distance searches instantly
  • Automatically detect the user’s actual location for an instant “any Leads nearby?” type enquiry
  • Arbitrary manual address entry for on-the-fly geocoding
  • Ability to add records directly to the Geocoding database cache using drag & drop

The techy bit

This version of GeoTools has been totally re-written from the ground up. We fixed a few bad design decisions that kind of “just happened” in the first version and have re-architected the internal Geocoding and Tile service APIs so these are now vendor agnostic. In the first GeoTools release we used the Google Maps API for both Tile and Geocoding services, but these are now subject to terms & conditions which are probably not appropriate for the majority of businesses that would wish to use GeoTools. Consequently, this release now uses services from OpenStreetMap, but other services from commercial vendors, such as Google or MapQuest or MapBox, could just as easily be supported too.

We used the amazing Leaflet JS library to handle the majority of the mapping user interface. It’s a pleasure to use, well documented and highly featured.

GeoTools is Open Source Software. We decided however to sell the extension via vtiger’s Marketplace for a nominal fee as we have spent considerable effort developing this product and believe it represents great value for money. Installation from the vtiger marketplace is a 1-click affair and we will provide support to paying customers. The GeoTools code will be available shortly on the vtiger forge but users will be on their own in terms of installation and support.

Unfortunately there isn’t a straightforward upgrade path between the version of GeoTools for vtiger 5.4.0 and this release, although a migration should be possible for customers who have a large amount of data they’d rather not have to re-geocode. If you are interested in migrating please contact Libertus Solutions directly.

There will be a video showing GeoTools for vtiger 6.1.0 coming soon 🙂

UPDATE: There is now a demo system available here: http://geotools.libertus.co.uk. Login with a username and password of GeoTools. (This demo database is dropped and restored every 4 hours)

How to install a Squid & Dansguardian content filter on Ubuntu Server

Being a family man and a geek, our household has both children and lots of tech; there are 6 or so computers, various tablets, smartphones and other devices capable of connecting to, and displaying content from, the Internet.

For a while now I’ve wanted to provide a degree of content filtering on our network to prevent accidental, or deliberate, access to some of the worst things the Internet has to offer. What I didn’t want to do however was blindly hand control of this very important job to my ISP (as our beloved leader would like us all to do). Also, I absolutely believe this is one of my responsibilities as a parent; it is not anyone else’s. In addition, there are several problems I have with our government’s chosen approach:

  • Filtering at the ISP network-side means the ISP must try and inspect all my internet traffic all of the time (what else could they potentially do with this information I wonder?)
  • If the ISP’s filter prevents access to content which we feel our kids should be able to access, how can I change that? Essentially I can’t.
  • I reckon that most kids of mid-teenage years will have worked out ways to bypass these filters anyway (see footnote), leaving more naive parents in blissful ignorance, thinking their kids are protected when in fact they are not.

With the above in mind I set about thinking how I could provide a degree of security on our home network using tried and trusted Open Source tools…

Firstly, this is how our network looked before.
Home network (before filtering)

The BT Router is providing the DHCP service in the above diagram.

The Ubuntu 12.04 Server is called vimes (after Commander Vimes in the Discworld novels by Terry Pratchett) and is still running the same hardware that I described way back in 2007! It’s a low power VIA C7 processor, 1G of RAM and it now has a couple of Terabytes of disk. It’s mainly used as a central backup controller and dlna media store/server for the house.

I never did get Untangle working on it, but now it seemed like a good device to use to do some filtering… There are loads of instructions on the Internet about using Squid & Dansguardian but none covered quite what I wanted to achieve: a DHCP-serving, bridging, transparent-proxy content filter.

Architecturally, my network needed to look like this:

Home network (after filtering)

As you can see above, the physical change is rather negligible. The Ubuntu server now sits between the home LAN and the broadband router rather than as just another network node on the LAN as it was before.

The configuration of the server to provide what I required can be broken down into several steps.

1. Get the Ubuntu server acting as a transparent bridge

This is relatively straightforward. First install the bridge-utils package: sudo apt-get install bridge-utils

Then I made a backup of my /etc/network/interfaces file and replaced it with this one:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
  
# The loopback network interface
auto lo
iface lo inet loopback

# Set up interfaces
iface eth0 inet manual
iface eth1 inet manual

# Bridge setup
auto br0
iface br0 inet static
  bridge_ports eth0 eth1
  address 192.168.1.2
  broadcast 192.168.1.255
  netmask 255.255.255.0
  gateway 192.168.1.1

Probably the most interesting part of this file is where we assign a static IP address to the bridge itself. Without this I would not be able to connect to this server as both ethernet ports are now just transparent bridge ports so not actually listening for IP traffic at all.

(Obviously you will need to determine the correct IP address scheme for your own network)
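To bring the bridge up (or simply reboot) and confirm that both ethernet ports have joined it, something like this should do; brctl comes with the bridge-utils package we just installed:

sudo ifup br0
brctl show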

2. Disable DHCP on the router and let Ubuntu do it instead

The reason for this is mostly down to the BT Home Hub… For some bizarre reason, BT determined that they should control what DNS servers you can use. Although I’m not using it right now, I might choose to use OpenDNS for example, but I can’t change the DNS addresses served by the BT Home Hub router so the only way I can control this is to turn off DHCP on the router altogether and do it myself.

Install the dhcp server: sudo apt-get install dhcp3-server

Tell the dhcp server to listen for requests on the bridge port we created before by editing the file /etc/default/isc-dhcp-server so that the INTERFACES line reads: INTERFACES="br0".

Then edit the dhcp configuration file /etc/dhcp/dhcpd.conf so we allocate the IP addresses we want to our network devices. This is how mine looks:

ddns-update-style none;

default-lease-time 600;
max-lease-time 7200;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.16 192.168.1.254;

        option subnet-mask 255.255.255.0;
        option routers 192.168.1.1;
        
        #Google DNS
        option domain-name-servers 8.8.8.8, 8.8.4.4;
        #OpenDNS
        #option domain-name-servers 208.67.222.222, 208.67.220.220;

        option broadcast-address 192.168.1.255;
}

There are many options and choices to make regarding setting up your DHCP server. It is extremely flexible; you will probably need to consult the man pages and other on-line resources to determine what is best for you. Mine is very simple. It serves one block of IP addresses within the range 192.168.1.16 to 192.168.1.254 to all devices. Currently I’m using Google’s DNS servers but as you can see I’ve also added OpenDNS as a comment so I can try it later if I want to.
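Once the configuration is saved, restart the DHCP server and, if you like, watch leases being handed out (paths as on Ubuntu 12.04):

sudo service isc-dhcp-server restart
tail -f /var/lib/dhcp/dhcpd.leases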

3. Install Squid and get it working as a transparent proxy using IPTables

This bit took a while to get right but, as with most things it seems to me, in the end the actual configuration is fairly straightforward.

Install Squid: sudo apt-get install squid.

Edit the Squid configuration file /etc/squid3/squid.conf… By default this file contains a lot of settings. I made a backup and then reduced it to just those lines that needed changing so it looked like this:

http_port 3128 transparent

acl localnet src 192.168.1.0/24
acl localhost src 127.0.0.1/255.255.255.255
acl CONNECT method CONNECT

http_access allow localnet
http_access allow localhost
always_direct allow all

cache_dir aufs /var/spool/squid3 50000 16 256

Probably the most interesting part in the above is the word “transparent” after the proxy port. Essentially this means we do not have to configure every browser on our network: http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy. The final line of the file is just some instructions to configure where the cache is stored and how big it is. Again, there are tons of options available which the reader will need to find out for themselves…
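One note of caution: because we changed the cache_dir line, Squid’s cache directories may need (re)building before it will run cleanly. The Ubuntu package normally initialises /var/spool/squid3 for you, so only do this if Squid complains; a sketch of the step (squid3 -z builds the swap directories):

sudo service squid3 stop
sudo squid3 -z
sudo service squid3 start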

Causing all the traffic on our LAN to actually go through the proxy, rather than just passing through the bridge transparently, requires a bit of configuration on the server: ebtables lets us tell the Linux kernel’s bridge to hand the interesting frames up to the IP stack, and iptables then redirects the relevant TCP/IP ports to the proxy.

First I installed ebtables: sudo apt-get install ebtables

My very simplistic understanding of the following command is that it essentially tells the bridge to identify IP traffic for port 80 (http) and pass this up to the kernel’s IP stack for further processing (routing), which we then handle with iptables.

sudo ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT

Then we tell iptables to forward all port 80 traffic from the bridge to our proxy:

sudo iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 3128

Restart Squid: sudo service squid3 restart

At this point http browser traffic should now be passing through your bridge and squid proxy before going on to the router and Internet. You can test to see if it is working by tailing the squid access.log file.
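On Ubuntu the squid3 package logs to /var/log/squid3 by default, so:

sudo tail -f /var/log/squid3/access.log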

I found that squid seemed to be very slow at this juncture. So I resorted to some google fu and looked for some help on tuning the performance of the system. I came across this post and decided to try the configuration suggestions by adding the following lines to my squid.conf file:

#Performance Tuning Options
hosts_file /etc/hosts
dns_nameservers 8.8.8.8 8.8.4.4
cache_replacement_policy heap LFUDA
cache_swap_low 90
cache_swap_high 95
cache_mem 200MB
logfile_rotate 10
memory_pools off
maximum_object_size 50 MB
maximum_object_size_in_memory 50 KB
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
buffered_logs on
half_closed_clients off
log_fqdn off

This made an immediate and noticeable difference to the performance; enough so in fact that I haven’t yet bothered to go any further with tuning investigations. Thanks to the author Tony at last.fm for the suggestions.

4. Install Dansguardian and get it filtering content

sudo apt-get install dansguardian is all you need to install the application.

To get it to work with our proxy I needed to make a couple of changes to the configuration file /etc/dansguardian/dansguardian.conf.

First, remove or comment out the line at the top that reads UNCONFIGURED - Please remove this line after configuration. I just prefixed it with a #.

Next we need to configure the ports by changing two lines so they look like this:

filterport = 8080
proxyport = 3128

Finally, and I think this is right, we need to set it so that Dansguardian and Squid are both running as the same user, so edit these two lines:

daemonuser = 'proxy'
daemongroup = 'proxy'

As you will see in that file, there are loads of other configuration options for Dansguardian and I will leave it up to the reader to investigate these at their leisure.

One suggestion I came across on my wanderings around the Interwebs was to grab a copy of one of the large collections of blacklisted sites and install it into /etc/dansguardian/blacklists/. I used the one linked to from the Dansguardian website here http://urlblacklist.com/ which says it is OK to download once for free. As I understand it, having a list of blacklisted sites reduces the need for Dansguardian to parse every URL and all content, but it shouldn’t be relied on as the only mechanism as obviously the blacklist will get out-of-date pretty quickly.
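Installing the lists is just a matter of unpacking the downloaded archive into place; a sketch, assuming the download landed in your home directory as bigblacklist.tar.gz (your filename may differ):

cd /etc/dansguardian
sudo tar -xzf ~/bigblacklist.tar.gz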

Dansguardian has configurable lists of “phrases” and “weights” that you can tailor to suit your needs.

Now that’s installed, we need to go back and reconfigure one of the iptables rules so that traffic is routed to Dansguardian first rather than straight to Squid, and also enable communication between Squid and Dansguardian. You can flush (empty) the existing rules by running sudo iptables -t nat -F (the rules we added earlier live in the nat table, so a plain iptables -F would miss them).

Now re-enter the rules as follows:

sudo iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -m owner --uid-owner proxy -j ACCEPT
sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -j REDIRECT --to-ports 8080

Restart Squid and Dansguardian: sudo service squid3 restart && sudo service dansguardian restart.

Now if you try to connect to the internet from behind the server your requests should be passed through Dansguardian and Squid automatically. If you try and visit something that is inappropriate your request should be blocked.
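If you have curl installed on the server you can also test the Dansguardian-to-Squid chain directly, without involving the bridge, by pointing it at the filter port:

curl --proxy http://127.0.0.1:8080/ http://www.google.com/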

If it all seems to be working OK then I suggest making your ebtables and iptables rules permanent so they are restored after a reboot.

For iptables, note that sudo iptables-save on its own only prints the current rules to the screen; to make them survive a reboot you need to save that output to a file and restore it when the network comes up (or install the iptables-persistent package).
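A minimal sketch of the save-and-restore approach (the file location is just a convention, not fixed):

sudo sh -c "iptables-save > /etc/iptables.rules"

Then add a line like this to the br0 stanza in /etc/network/interfaces:

pre-up iptables-restore < /etc/iptables.rules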

I followed these very helpful instructions to achieve a similar thing for the ebtables rule.

And that’s it. Try rebooting the server to make sure that it all still works without you having to re-configure everything. Then ask your kids and wife to let you know if things that they want to get to are being blocked. YOU now have the ability to control this – not your ISP… 😀

Footnotes

Be aware that on the network diagrams above the Wifi service provided by the BT Homehub router, and the LAN on the router side of the server, are not protected by these instructions. For me this is fine as the coverage of that Wifi network only makes it as far as the Kitchen anyway. And if it was more visible I could always change the key and only let my wife and I have access.

Also, I should make it clear that I know what I have above is not foolproof. I am completely aware that filtering/monitoring encrypted traffic is virtually impossible and there are plenty of services available that provide ways to circumvent what I have here. But I am also not naive and I reckon that if my kids have understood enough about networking and protocols etc. to be able to use tunnelling proxies or VPN services then they are probably mature enough to decide for themselves what they want to look at.

Of course there are plenty of additional mechanisms one can put in place if desired.

  • Time-based filters preventing any Internet access at all at certain times
  • Confiscation of Internet connected devices at bedtime
  • Placing computers and gaming consoles in public rooms of the house and not in bedrooms
  • And many more I’m sure you can think of yourself

As I see it, the point is simply this: As a parent, this is your responsibility…

Ubuntu and Privacy and how it really works now.

There have been quite a few entertaining discussions on the interwebs about Ubuntu and concerns around privacy. This topic comes and goes on a regular basis; today it has come up because Mozilla are planning on putting some fairly harmless adverts on the blank tiles of new tabs, and this is being compared to the Dash search in Ubuntu. Whenever the topic is raised it tends to be a fairly heated discussion, mostly focussing on the Amazon search results in the dash and mostly calling them adverts or spyware. It is a discussion that is mostly overblown and underinformed, with so much time spent freaking out about “adverts” that the real problems have been completely missed. Let’s go through a bit of history, and I will try and explain the difference between the real problems and the FUD.

Initially there was the Gnome 2 application launcher, kinda similar to the Windows start button: a way to run applications that you have on your computer. They are nicely categorised so you can find all the graphics-related applications on your computer, see Inkscape alongside Gimp, and choose what you want to run. This worked well and people were generally satisfied with this mechanism for running local applications. Then along came Unity, which introduced the launcher, a dock bar on the left that shows running applications and has the ability to pin applications so you can start them by clicking on them when they are not running. The launcher is the way to run applications that you have on your computer – but not all of them, and not categorised, just your favourite ones you have pinned to the launcher. Unity also introduced the dash. This has a different scope of functionality; I like to call it the OmniGlobalEverywhere search tool. You type stuff in and it searches in lots of places to find what it is you are looking for. This is not the same scope of functionality as the Gnome 2 application launcher: it could search for local files, videos on YouTube and other streaming services, music, photos, other things. It is an extensible search interface and you can plug in additional search things. I wrote an OpenERP plugin so I could type an invoice number and jump straight to that invoice in a browser, for example. It was a pretty cool concept as a jack-of-all-trades search interface – but it isn’t the master of the specialised job of viewing and running applications you have already got installed.

Everyone completely missed the fact that the magic privacy button for a long time did almost nothing – it was just an undocumented flag that some lenses looked at and turned themselves off. Others did not. This was a real big deal, and nobody noticed because they were obsessed with calling Amazon search results adverts. Now we have all kinds of odd lenses and search queries possibly going to yelp, zotero, yahoo finance, songster, songkick, gallica, europeana, etsy, COLORlovers and other places. Have you even heard of every single one of these? Do you know they are not evil? Do you know they are financially stable enough not to close the doors and let the domain renewal lapse for someone evil to buy it? Amazon I know and trust to continue existing; I also trust them not to want searches for partial, mostly irrelevant words for profiling data when they have my product purchase history. The utter junk that the dash sends is of no value to Amazon compared to everything else they have, but this doesn’t stop people banging on about that one specific lens, which is relatively harmless and pointless in equal measure.

Firstly, the Amazon lens is nothing special, and it is perhaps the internet-connected lens I am least worried about. I trust Amazon to do what I expect them to do; I am a customer so they know what I bought, and sending them random strings like “calcul” and “gedi” and “eclip” does not give them valuable data. It is junk. I am much more concerned about things like the Europeana, jstor and grooveshark lenses, which do exactly the same thing but where I have no idea who those organisations are or what they do. Even something like openweathermap sounds good, but are they really a trusted organisation?

So, back to how it works. Your query for “socks” goes to products.ubuntu.com. At that point Canonical’s secret-sauce server looks at your query and decides that most people who search for socks either want to know about products to buy, or applications to run. They don’t tend to click on the results from the medicines or recipes lenses when those are shown. So, having decided that the shopping lens and the applications lens are reasonable ones to search in, it sends the query to Amazon (the only shop currently supported, but it is designed to support every online sock vendor in the world) and tells your computer that the applications lens is worth looking in. When it gets the results back from Amazon those go to your computer as a bunch of JSON data that is very similar to the Amazon JSON API. Amazon at this point thinks that Canonical’s server has got cold toes and is in need of some nice warm socks. Amazon does not know you exist at this stage.


That bundle of sock-related data goes to the shopping lens on your computer, which then displays the results. It does this by showing some text “stripy socks, only £5.30” and a picture, which it used to retrieve from Amazon’s content distribution network – O.M.G.!!! a data privacy leak. Amazon could log hits to their CDN (which I doubt they do), consolidate them globally, and figure out that it was displaying a bunch of sock pictures requested by your IP address shortly after Canonical’s server searched for socks, so they could theoretically tie this together and infer that the reason you are staring at sock pictures is because you searched for socks via the dash search tool. So this huge and seriously concerning data privacy breach was a problem, and they fixed it. Now when you search for socks, Amazon gets CDN requests for images from products.ubuntu.com. Your computer gets the images from products.ubuntu.com (over https rather than http); it is now basically a reverse proxy for Amazon images, so Amazon is now more convinced than ever that Canonical’s server has got cold toes. As it happens, there is nothing wrong with your toes and you actually wanted to configure a SOCKS proxy all along, and the shopping thing was a pointless overhead, because when you want new socks the dash isn’t where you dash to.

There is a conversation on the technical board mailing list here https://lists.ubuntu.com/archives/technical-board/2013-October/thread.html and here https://lists.ubuntu.com/archives/technical-board/2013-November/thread.html relating to the closedness of the server-side app. Having written something a bit similar myself, I kept mine closed for a while because it contained the Amazon API OAuth keys in the source code. There really isn’t much to it on the server side. My server code is here https://github.com/AlanBell/shopping-search-provider/blob/master/server/index.php

We are supporting Code Club, and so should you!

Much has been made of the recent announcement of the Year of Code and the underwhelming Newsnight interview of Lottie Dexter, which contained some selected footage from what appears to be a class on jQuery, possibly by Code First:Girls, in which coding was described twice as gobbledegook, and which went on to have Lottie Dexter announce that she was unable to code. This is not ideal for the director of an organisation that is supposed to inspire and promote the teaching of coding. I don’t demand a string of coding accomplishments from such a position; it is just that without a basic understanding of coding it is hard to articulate how much fun it is. Computing in schools fell apart as a subject in the mid 90s: the emphasis changed from doing programming projects and educational activities to using spreadsheets, word-processors and desktop database applications. In many schools the teaching of the foundational skills of computing was replaced by Microsoft Office training. This is not the same thing, and it is something I have been concerned about for many years; it is one reason I was involved with supporting the Open Source Schools project around the time of the end of BECTA, and one reason why we exhibited at BETT and introduced teachers to the OLPC project and the thinking behind it. A couple of years ago, when taking my eldest to an open day at a local secondary school, the first words out of the mouth of the teacher when we got to the ICT room were “Don’t worry, there is no coding in this subject”. We selected a different school.

This is all quite sad, but it is fixable. Coding is fun and easy, and teaching it is fun and easy. I know this because I do it. Every Tuesday afternoon this term I am visiting a school a few miles away to run an after-school Code Club. We are doing programming projects using Scratch; here is the project we did this week. It is a fruit machine that cycles through a few images, and you click the images to stop them and try to get them all to line up.

[Embedded Scratch project: http://scratch.mit.edu/projects/embed/17799833/]

Part of the code required to do this looks like this:

[Screenshot: the Scratch code blocks]

It is programmed by dragging and dropping the commands from a palette of options (which is particularly great on an interactive whiteboard), with no typing or spelling errors involved, and the club of year 5 (age 9) programmers now know about variables, random numbers, if statements, infinite loops, bounded loops, signals, events and designing a fun game by balancing parameters to make it not too easy and not too hard. They have been trying things out, experimenting, getting things wrong and figuring out what the problem is and what they need to do in order to get the outcome they want. This is computing, and it is the foundation of the skills we want coming into the industry.

I would encourage everyone in the IT industry, or with an interest in IT in the UK (and elsewhere, but some of this is UK specific), to get involved in Code Club. The Code Club website allows schools to say that they would like to have a Code Club, and volunteers to search for schools in their area that want one. This means that you do not have to approach the school and start by explaining what it is all about and why they should want to have a Code Club. They already know that bit, so you have to do nothing to “sell” the concept to the school. The activity plans are great, the coders love them, and you don’t have to decide what you are going to do each week; that is all done for you. There is a bit of admin and checking done in advance (you get a security check called a DBS check), but that is all arranged and paid for by STEMNET.

I don’t know if the Year of Code organisation will make any particular contribution itself, but the Newsnight appearance and subsequent kerfuffle have certainly brought some attention to the efforts of Code Club, Young Rewired State, the Raspberry Pi Foundation and some other organisations which are actively working to bring the fun of coding back into UK schools, and this is a good thing.
