How to install a Squid & Dansguardian content filter on Ubuntu Server
Being a family man and a geek, our household has both children and lots of tech; there are 6 or so computers, various tablets, smartphones and other devices capable of connecting to, and displaying content from, the Internet.
For a while now I’ve wanted to provide a degree of content filtering on our network to prevent accidental, or deliberate, access to some of the worst things the Internet has to offer. What I didn’t want to do however was blindly hand control of this very important job to my ISP (as our beloved leader would like us all to do). Also, I absolutely believe this is one of my responsibilities as a parent; it is not anyone else’s. In addition, there are several problems I have with our government’s chosen approach:
- Filtering at the ISP network-side means the ISP must try and inspect all my internet traffic all of the time (what else could they potentially do with this information I wonder?)
- If the ISP’s filter prevents access to content which we feel our kids should be able to access, how can I change that? Essentially I can’t.
- I reckon that most kids of mid-teenage years will have worked out ways to bypass these filters anyway (see footnote) leaving more naive parents in blissful ignorance; thinking their kids are protected when in fact they are not.
With the above in mind I set about thinking how I could provide a degree of security on our home network using tried and trusted Open Source tools…
Firstly, this is how our network looked before.
The BT Router is providing the DHCP service in the above diagram.
The Ubuntu 12.04 Server is called vimes (after Commander Vimes in the Discworld novels by Terry Pratchett) and is still running the same hardware that I described way back in 2007! It’s a low power VIA C7 processor, 1G of RAM and it now has a couple of Terabytes of disk. It’s mainly used as a central backup controller and dlna media store/server for the house.
I never did get Untangle working on it, but now it seemed like a good device to use to do some filtering… There are loads of instructions on the Internet about using Squid & Dansguardian but none covered quite what I wanted to achieve: A dhcp serving, bridging, transparent proxy content filter.
Architecturally, my network needed to look like this:
As you can see above, the physical change is rather negligible. The Ubuntu server now sits between the home LAN and the broadband router rather than as just another network node on the LAN as it was before.
The configuration of the server to provide what I required can be broken down into several steps.
1. Get the Ubuntu server acting as a transparent bridge
This is relatively straightforward. First install the bridge-utils package: sudo apt-get install bridge-utils
Then I made a backup of my /etc/network/interfaces file and replaced it with this one:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Set up interfaces
iface eth0 inet manual
iface eth1 inet manual

# Bridge setup
auto br0
iface br0 inet static
    bridge_ports eth0 eth1
    address 192.168.1.2
    broadcast 192.168.1.255
    netmask 255.255.255.0
    gateway 192.168.1.1
Probably the most interesting part of this file is where we assign a static IP address to the bridge itself. Without this I would not be able to connect to this server as both ethernet ports are now just transparent bridge ports so not actually listening for IP traffic at all.
(Obviously you will need to determine the correct IP address scheme for your own network)
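Before moving on it is worth checking that the bridge actually comes up as expected. This is just a quick sanity check rather than part of the original write-up; it assumes the bridge-utils package and the interface names used above:

# Apply the new interfaces file (or simply reboot the server)
sudo /etc/init.d/networking restart

# The bridge should list both ethernet ports as member interfaces
brctl show

# And it should have picked up its static address
ip addr show br0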
2. Disable DHCP on the router and let Ubuntu do it instead
The reason for this is mostly down to the BT Home Hub… For some bizarre reason, BT determined that they should control what DNS servers you can use. Although I’m not using it right now, I might choose to use OpenDNS for example, but I can’t change the DNS addresses served by the BT Home Hub router so the only way I can control this is to turn off DHCP on the router altogether and do it myself.
Install the dhcp server: sudo apt-get install dhcp3-server
Tell the dhcp server to listen for requests on the bridge port we created before by editing the file /etc/default/isc-dhcp-server so that the INTERFACES line reads: INTERFACES="br0".
Then edit the dhcp configuration file /etc/dhcp/dhcpd.conf so we allocate the IP addresses we want to our network devices. This is how mine looks:
ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.16 192.168.1.254;
    option subnet-mask 255.255.255.0;
    option routers 192.168.1.1;
    # Google DNS
    option domain-name-servers 8.8.8.8, 8.8.4.4;
    # OpenDNS
    #option domain-name-servers 208.67.222.222, 208.67.220.220;
    option broadcast-address 192.168.1.255;
}
There are many options and choices to make regarding setting up your DHCP server. It is extremely flexible; you will probably need to consult the man pages and other on-line resources to determine what is best for you. Mine is very simple. It serves one block of IP addresses within the range 192.168.1.16 to 192.168.1.254 to all devices. Currently I’m using Google’s DNS servers but as you can see I’ve also added OpenDNS as a comment so I can try it later if I want to.
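One small addition I find handy, though the setup above does not require it: if you want a particular device (the server itself, a printer, etc.) to always receive the same address, you can add a host reservation alongside the subnet block. The MAC address and name below are made up purely for illustration:

host laserprinter {
    hardware ethernet 00:11:22:33:44:55;   # the device's MAC address
    fixed-address 192.168.1.10;            # an address outside the dynamic range
}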
3. Install Squid and get it working as a transparent proxy using IPTables
This bit took a while to get right but, as with most things it seems to me, in the end the actual configuration is fairly straightforward.
Install Squid: sudo apt-get install squid3 (on Ubuntu 12.04 the version 3 packaging is named squid3, which matches the squid3 configuration paths and service name used below).
Edit the Squid configuration file /etc/squid3/squid.conf… By default this file contains a lot of settings. I made a backup and then reduced it to just those lines that needed changing so it looked like this:
http_port 3128 transparent
acl localnet src 192.168.1.0/24
acl localhost src 127.0.0.1/255.255.255.255
acl CONNECT method CONNECT
http_access allow localnet
http_access allow localhost
always_direct allow all
cache_dir aufs /var/spool/squid3 50000 16 256
Probably the most interesting part in the above is the word “transparent” after the proxy port. Essentially this means we do not have to configure every browser on our network: http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy. The final line of the file is just some instructions to configure where the cache is stored and how big it is. Again, there are tons of options available which the reader will need to find out for themselves…
To actually make all the web traffic on our LAN go through the proxy, rather than just passing transparently across the bridge, requires a little further configuration on the server: ebtables is used to pull the relevant traffic out of the bridge and hand it to the Linux kernel’s IP stack, and iptables then redirects the relevant TCP/IP ports to the proxy.
First I installed ebtables: sudo apt-get install ebtables
My very simplistic understanding of the following command is that it essentially tells the bridge to identify IP traffic for port 80 (http) and pass this up to the kernel’s IP stack for further processing (routing), which we then handle with iptables.
sudo ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT
Then we tell iptables to forward all port 80 traffic from the bridge to our proxy:
sudo iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 3128
Restart Squid: sudo service squid3 restart
At this point http browser traffic should now be passing through your bridge and squid proxy before going on to the router and Internet. You can test to see if it is working by tailing the squid access.log file.
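Assuming the default log location for the Ubuntu squid3 package, that looks something like this; you should see a line appear for each page requested from a machine on the LAN:

sudo tail -f /var/log/squid3/access.log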
I found that squid seemed to be very slow at this juncture. So I resorted to some google fu and looked for some help on tuning the performance of the system. I came across this post and decided to try the configuration suggestions by adding the following lines to my squid.conf file:
# Performance Tuning Options
hosts_file /etc/hosts
dns_nameservers 8.8.8.8 8.8.4.4
cache_replacement_policy heap LFUDA
cache_swap_low 90
cache_swap_high 95
cache_mem 200MB
logfile_rotate 10
memory_pools off
maximum_object_size 50 MB
maximum_object_size_in_memory 50 KB
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
buffered_logs on
half_closed_clients off
log_fqdn off
This made an immediate and noticeable difference to the performance; enough so in fact that I haven’t yet bothered to go any further with tuning investigations. Thanks to the author Tony at last.fm for the suggestions.
4. Install Dansguardian and get it filtering content
sudo apt-get install dansguardian is all you need to install the application.
To get it to work with our proxy I needed to make a couple of changes to the configuration file /etc/dansguardian/dansguardian.conf.
First, remove or comment out the line at the top that reads UNCONFIGURED - Please remove this line after configuration. I just prefixed it with a #.
Next we need to configure the ports by changing two lines so they look like this:
filterport = 8080
proxyport = 3128
Finally, and I think this is right, we need to set it so that Dansguardian and squid are both running as the same user so edit these two lines:
daemonuser = 'proxy'
daemongroup = 'proxy'
As you will see in that file, there are loads of other configuration options for Dansguardian and I will leave it up to the reader to investigate these at their leisure.
One suggestion I came across on my wanderings around the Interwebs was to grab a copy of one of the large collections of blacklisted sites and install it into /etc/dansguardian/blacklists/. I used the one linked to from the Dansguardian website here http://urlblacklist.com/ which says it is OK to download once for free. As I understand it, having a blacklist of sites will reduce the need for Dansguardian to parse every url or all content, but this shouldn’t be relied on as the only mechanism as obviously the blacklist will get out-of-date pretty quickly.
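As a rough sketch of how that might be done (the tarball name, the category names inside it and the exact list paths are assumptions here, so adjust them to match whatever your DansGuardian configuration actually references):

# Extract the downloaded blacklist collection into the blacklists directory
sudo tar xzf ~/bigblacklist.tar.gz -C /etc/dansguardian/blacklists/ --strip-components=1

# Then include the categories you want blocked outright by adding lines like
# this to /etc/dansguardian/lists/bannedsitelist:
#   .Include</etc/dansguardian/blacklists/porn/domains>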
Dansguardian has configurable lists of “phrases” and “weights” that you can tailor to suit your needs.
Now that’s installed we need to go back and reconfigure one of the iptables rules so that traffic is routed to Dansguardian first rather than straight to Squid, and also enable communication between Dansguardian and Squid. You can flush (empty) the existing rules in the nat table by running sudo iptables -t nat -F.
Now re-enter the rules as follows:
sudo iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -m owner --uid-owner proxy -j ACCEPT
sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -j REDIRECT --to-ports 8080
Restart Squid and Dansguardian: sudo service squid3 restart & sudo service dansguardian restart.
Now if you try to connect to the internet from behind the server your requests should be passed through Dansguardian and Squid automatically. If you try and visit something that is inappropriate your request should be blocked.
If it all seems to be working OK then I suggest making your ebtables and iptables rules permanent so they are restored after a reboot.
This can be achieved for iptables by running sudo iptables-save, saving its output to a file and restoring that file at boot (the iptables-persistent package, or a small script in /etc/network/if-pre-up.d/, can do this for you).
I followed these very helpful instructions to achieve a similar thing for the ebtables rule.
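For reference, here is one minimal way of persisting both rule sets. It is only a sketch, assuming you are happy with a simple script approach; the file names are my own choice rather than anything the tools mandate:

# Save the current iptables rules to a file
sudo sh -c 'iptables-save > /etc/iptables.rules'

# Restore them, and re-add the ebtables rule, whenever the bridge comes up by
# creating an executable script such as /etc/network/if-pre-up.d/filter-rules:
#
#   #!/bin/sh
#   iptables-restore < /etc/iptables.rules
#   ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 \
#     --ip-destination-port 80 -j redirect --redirect-target ACCEPT
#   exit 0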
And that’s it. Try rebooting the server to make sure that it all still works without you having to re-configure everything. Then ask your kids and wife to let you know if things that they want to get to are being blocked. YOU now have the ability to control this – not your ISP… 😀
Footnotes
Be aware that, in the network diagrams above, the WiFi service provided by the BT Home Hub router, and the LAN on the router side of the server, are not protected by these instructions. For me this is fine as the coverage of that WiFi network only makes it as far as the kitchen anyway. And if it were more of a problem I could always change the key and only let my wife and me have access.
Also, I should make it clear that I know what I have above is not foolproof. I am completely aware that filtering/monitoring encrypted traffic is virtually impossible and there are plenty of services available that provide ways to circumvent what I have here. But I am also not naive and I reckon that if my kids have understood enough about networking and protocols etc. to be able to use tunnelling proxies or VPN services then they are probably mature enough to decide for themselves what they want to look at.
Of course there are plenty of additional mechanisms one can put in place if desired.
- Time-based filters preventing any Internet access at all at certain times
- Confiscation of Internet connected devices at bedtime
- Placing computers and gaming consoles in public rooms of the house and not in bedrooms
- And many more I’m sure you can think of yourself
As I see it, the point is simply this: As a parent, this is your responsibility…
Building Ubuntu for the Raspberry Pi
As a result of the prior musings about crowdfunding and the rather shaky VAT status of the whole sector I have been thinking quite a bit about crowdfunding and where it might be useful and how we could get involved in some way. For our normal consultancy business we have no need of capital investments and we don’t produce anything that lends itself to the crowdfunding model, however I did come up with a project I have been wanting to do for quite a long time. Allow me to introduce it by way of a little video . . .
Back when the Raspberry Pi was in development it was shown running Ubuntu 9.04, Jaunty Jackalope. This was the last Ubuntu release to support the ARMv6 instruction set; from that point on Ubuntu was optimised for newer ARM chips and would not run on the Broadcom chip that the Pi used. I am the point of contact of the Ubuntu UK Local Community team and I was dead excited about this little computer with its exposed PCB and low price point. I asked some of the Ubuntu ARM folk if they could support it going forward, but that wasn’t going to be possible: they didn’t have the resources to build for two ARM platforms, and the bottom line was that the Pi probably wasn’t going to provide a good user experience for the increasingly heavy Ubuntu user interface. This was sad, but it was the situation. I was a bit concerned that the Raspberry Pi Foundation was proceeding on the basis that Jaunty was available – it was already old, going out of support and a dead end, with no future updates coming for it. I was also concerned that the UK Local Community was going to be landed with a lot of new users who were having a poor user experience, and there would be nothing we could do about it. Reluctantly I approached the Raspberry Pi Foundation (I met the lovely Liz and Eben at an event in Oxford), shared my concerns with them, and suggested Debian was the way forward, so the Pi would have a system based on a platform Ubuntu users would be familiar with and that would get updates.
So this was sad, I wasn’t happy about it, the foundation wasn’t happy about it, many users were not happy about it, but it was much better to have a new Debian with updates and prospects than an old dead end Ubuntu release.
Moving on to the present, the Raspberry Pi is a huge success, Raspbian is a great operating platform for it, the LXDE desktop is fine, the Wayland demo was brilliant and loads of cool projects are happening based on the Pi. We still want Ubuntu on it though. We are using it in embedded projects, and it is also turning up in things like the OpenERP Point of Sale kit – situations where it doesn’t need a responsive user interface (or a user interface at all). It would be great to know that all the libraries we are using on it are the same versions we are using on other computers that are running Ubuntu. It might be nice to see what the Ubuntu Unity desktop looks like on the Pi, especially Unity 8 running in Mir, but that explicitly isn’t a goal. This project aims to build everything that will build from source without too much hassle. If that gets us a desktop then great; if it gets us a command line with python, that is great too.
Now for the armchair accountants in the audience, having seen the admin end of a campaign I can explain it a little better than before. This is a flexible funding setup rather than the all-or-nothing option and we are accepting paypal and credit card pledges. The paypal pledges happen instantly, the money goes from the end user direct to our paypal account and then there is an immediate debit of 9% of the amount which goes from us to indiegogo – so the money is not held in escrow at all, and it isn’t a big payment at the end. This is fairly clearly a purchase of a pledge to the full pledge value and a subsequent payment to indiegogo which is either a purchase of campaign hosting services, or some kind of financial services fee, not sure about that bit yet. Credit card payments are slightly different, we don’t have the money for those yet, after the campaign ends Indiegogo will do a bank transfer to us for the funds (less the 4% or 9% commission presumably). Paypal is regulated as a bank now, so I think the money should turn up in our financials when it is in the paypal account, not just when we make a transfer of it to a bricks and mortar bank. We will enter all the pledges as sales and pay VAT on them and we will reclaim the VAT on the materials purchased to build the cluster. If anyone wants a VAT invoice for a paypal pledge I can sort that out. Credit card pledges are a bit more interesting as it is questionable whether they have happened yet.
If you want to contribute to the cluster and help us build Ubuntu for the Raspberry Pi then do head on over to Indiegogo and join the 40 or so other contributors we have so far.
From the technical side of things – designing the cluster – feel free to pitch in your comments and suggestions below. We have had a lot of people suggesting that we don’t use the Raspberry Pi and use some other platform instead. These suggestions include: cross compile it from Intel machines, use QEMU on fast Intel computers, use cloud computing, use a Power Mac (whut!), use the OpenSUSE Build Service, use a Calxeda box, use Pandaboards, use Wandboard quad core ARM boards. Feel free to add to the list of other platforms we should be using instead; I think I will add the yet-to-be-delivered Parallella board to the list of things we should be using. All these suggestions are great, they would work and they might even be faster or easier. They just are not things I really want (apart from the Parallella, which I don’t have) and I don’t think it works as a crowdfunding concept to raise funds to build it out of anything but the Raspberry Pi.
To provide power to lots of Pis there are a few approaches, Southampton University did this:
and other cluster projects have built custom 5V electronics for feeding the USB or direct to the GPIO pins. The custom supply option doesn’t work out particularly cheap, and to run the whole cluster you are looking at parts of the circuit supporting a current heading towards 32 amps, which gets kinda complicated. At the moment I am leaning towards using a special powered hub, the PiHub, which can cope with powering 4 Pi devices from a single slightly beefy supply. This keeps the plug count down (they will all need PAT testing at some point so I don’t want to go completely wild on plugs) and keeps everything neat and safe and fanless.
Networking is another area where there are options. WiFi sounds mad for a cluster, but is it really? The Pi Ethernet port kind of hangs off USB internally, so wouldn’t a 150Mbit USB WiFi dongle be comparable to 100Mbit ethernet? Let’s solve this using science. Initial testing with iperf shows 74Mbit throughput on the ethernet between two Pi devices, but over WiFi just 20Mbit. This is rather less than I would expect; maybe there is more performance that can be teased out of the WiFi, or maybe the initial feeling is right and ethernet is the way forward. Maybe you have an opinion or advice in this area?
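For anyone who wants to repeat the test, this is roughly what it looks like with iperf installed on two Pis; the address below is just an example from my own subnet:

# On the first Pi, start iperf in server mode
iperf -s

# On the second Pi, run the client against the first Pi's address
iperf -c 192.168.1.20 -t 30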
The funding campaign runs through to Christmas but as we have some of the money available already I am thinking we will probably start getting some bits fairly soon and start setting up the cluster controllers and do some power measurements and more detailed performance testing.
How to install OpenERP 7.0 on Ubuntu 12.04 LTS
Introduction
Welcome to the latest of our very popular OpenERP installation “How Tos”.
The new release of OpenERP 7.0 is a major upgrade and a new Long Term Support release; the 7.0 Release Notes extend to over 90 pages! The most noticeable change is a complete re-write of the User Interface that features a much more modern look and feel.
OpenERP 7.0 is not only better looking and easier to use, it also brings many improvements to the existing feature-set and adds a number of brand new features which extend the scope of the business needs covered by OpenERP. Integration of social network capabilities, integration with Google Docs and LinkedIn, new Contract Management, new Event Management, new Point of Sale, new Address Book, new Fleet Management,… are only some of the many enhancements in OpenERP 7.0.
The How To
Following that introduction, I bet you can’t wait to get your hands dirty…
Just one thing before we start: You can simply download a “.deb” package of OpenERP and install that on Ubuntu. Unfortunately that approach doesn’t provide us (Libertus Solutions) with enough fine-grained control over where things get installed, and it restricts our flexibility to modify & customise, hence I prefer to do it a slightly more manual way (this install process below should only take about 10-15 minutes once the host machine has been built).
So without further ado here we go:
Step 1. Build your server
I install just the bare minimum from the install routine (you may want to install the openssh-server package during the install procedure, or install it subsequently, depending on your needs).
After the server has restarted for the first time I install the openssh-server package (so we can connect to it remotely) and denyhosts to add a degree of brute-force attack protection. There are other protection applications available: I’m not saying this one is the best, but it’s one that works and is easy to configure and manage. If you don’t already, it’s also worth looking at setting up key-based ssh access rather than relying on passwords (there is a quick sketch of this just after the install command below). This can also help to limit the potential of brute-force attacks. [NB: This isn’t a How To on securing your server…]
sudo apt-get install openssh-server denyhosts
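If you do decide to go the key-based ssh route, a minimal sketch, run from the machine you will be connecting from, looks like this (the user and hostname are obviously placeholders):

# Generate a key pair if you don't already have one
ssh-keygen -t rsa

# Copy the public key to the new server
ssh-copy-id user@your-server

# Once key-based login is confirmed working, consider setting
# PasswordAuthentication no in /etc/ssh/sshd_config on the server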
Now make sure your server has all the latest versions & patches by doing an update:
sudo apt-get update
sudo apt-get dist-upgrade
Although not always essential it’s probably a good idea to reboot your server now and make sure it all comes back up and you can login via ssh.
Now we’re ready to start the OpenERP install.
Step 2. Create the OpenERP user that will own and run the application
sudo adduser --system --home=/opt/openerp --group openerp
This is a “system” user. It is there to own and run the application; it isn’t supposed to be a person-type user with a login etc. In Ubuntu, a system user gets a UID below 1000, has no shell (it’s actually /bin/false) and has logins disabled. Note that I’ve specified a “home” of /opt/openerp; this is where the OpenERP server code will reside and it is created automatically by the command above. The location of the server code is your choice of course, but be aware that some of the instructions and configuration files below may need to be altered if you decide to install to a different location.
[Note: If you want to run multiple versions of OpenERP on the same server, the way I do it is to create multiple users with the correct version number as part of the name, e.g. openerp70, openerp61 etc. If you also use this when creating the Postgres users too, you can have full separation of systems on the same server. I also use similarly named home directories, e.g. /opt/openerp70, /opt/openerp61 and config and start-up/shutdown files. You will also need to configure different ports for each instance or else only the first will start.]
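To illustrate the port separation for multiple instances, each instance’s configuration file can be given its own listening port. The value below is only an example; xmlrpc_port is the OpenERP 7.0 setting for the web/XML-RPC listener, and any unused port will do:

# e.g. in a hypothetical /etc/openerp70-server.conf
[options]
xmlrpc_port = 8169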
A question I have been asked a few times is how to run the OpenERP server as the openerp system user from the command line if it has no shell. This can be done quite easily:
sudo su - openerp -s /bin/bash
This will su your current terminal login to the openerp user (the “-” between su and openerp is correct) and use the shell /bin/bash. When this command is run you will be in openerp’s home directory: /opt/openerp.

When you have done what you need you can leave the openerp user’s shell by typing exit.
Step 3. Install and configure the database server, PostgreSQL
sudo apt-get install postgresql
Then configure the OpenERP user on postgres:
First change to the postgres user so we have the necessary privileges to configure the database.
sudo su - postgres
Now create a new database user. This is so OpenERP has access rights to connect to PostgreSQL and to create and drop databases. Remember what your choice of password is here; you will need it later on:
createuser --createdb --username postgres --no-createrole --no-superuser --pwprompt openerp
Enter password for new role: ********
Enter it again: ********
Finally exit from the postgres user account:
exit
Step 4. Install the necessary Python libraries for the server
sudo apt-get install python-dateutil python-docutils python-feedparser python-gdata \
python-jinja2 python-ldap python-libxslt1 python-lxml python-mako python-mock python-openid \
python-psycopg2 python-psutil python-pybabel python-pychart python-pydot python-pyparsing \
python-reportlab python-simplejson python-tz python-unittest2 python-vatnumber python-vobject \
python-webdav python-werkzeug python-xlwt python-yaml python-zsi
With that done, all the dependencies for installing OpenERP 7.0 are now satisfied (note that there are some new packages required since 6.1).
Step 5. Install the OpenERP server
I tend to use wget for this sort of thing and I download the files to my home directory.

Make sure you get the latest version of the application: at the time of writing this it’s 7.0. I got the download links from their download pages (note there are also deb, rpm and exe builds in this area too). There isn’t a static 7.0 release tarball as such anymore, but there is a nightly build of the 7.0 source tree which should be just as good and will contain patches as and when things get fixed. The link below is to the source tarball for the 7.0 branch.
Note: As an alternative method of getting the code onto your server, Jerome added a very useful comment showing how to get it straight from launchpad. Thanks!
wget http://nightly.openerp.com/7.0/nightly/src/openerp-7.0-latest.tar.gz
Now install the code where we need it: cd to the /opt/openerp/ directory and extract the tarball there.
cd /opt/openerp
sudo tar xvf ~/openerp-7.0-latest.tar.gz
Next we need to change the ownership of all the files to the OpenERP user and group we created earlier.
sudo chown -R openerp: *
And finally, the way I have done this is to copy the server directory to something with a simpler name so that the configuration files and boot scripts don’t need constant editing (I called it, rather unimaginatively, server). I started out using a symlink solution, but I found that when it comes to upgrading, it seems to make more sense to me to just keep a copy of the files in place and then overwrite them with the new code. This way you keep any custom or user-installed modules and reports etc. all in the right place.
sudo cp -a openerp-7.0 server
As an example, should OpenERP 7.0.1 come out soon, I can extract the tarballs into /opt/openerp/ as above. I can do any testing I need, then repeat the copy command so that the modified files will overwrite as needed and any custom modules, report templates and such will be retained. Once satisfied the upgrade is stable, the older 7.0 directories can be removed if wanted.
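In shell terms that hypothetical future upgrade would look something like this (the 7.0.1 tarball and directory names are made up for the example; the real nightly tarball will dictate them):

cd /opt/openerp
sudo tar xvf ~/openerp-7.0.1-latest.tar.gz
sudo chown -R openerp: openerp-7.0.1
# ...test against the newly extracted directory, then overwrite the live copy;
# custom modules and reports already under server/ are left in place:
sudo cp -a openerp-7.0.1/. server/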
That’s the OpenERP server software installed. The last steps to a working system are to set up the configuration file and an associated boot script so OpenERP starts and stops automatically when the server itself stops and starts.
Step 6. Configuring the OpenERP application
The default configuration file for the server (in /opt/openerp/server/install/) is actually very minimal and will, with only one small change, work fine, so we’ll simply copy that file to where we need it and change its ownership and permissions:
sudo cp /opt/openerp/server/install/openerp-server.conf /etc/
sudo chown openerp: /etc/openerp-server.conf
sudo chmod 640 /etc/openerp-server.conf
The above commands make the file owned by the openerp user and group, writeable only by the openerp user, and readable only by openerp and root.
To allow the OpenERP server to run initially, you should only need to change one line in this file. Toward the top of the file change the line db_password = False to the same password you used back in step 3. Use your favourite text editor here. I tend to use nano, e.g.
sudo nano /etc/openerp-server.conf
One other line we might as well add to the configuration file now is to tell OpenERP where to write its log file. To complement my suggested location below, add the following line to the openerp-server.conf file:
logfile = /var/log/openerp/openerp-server.log
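After those two edits the relevant part of /etc/openerp-server.conf should look something like this; the remaining lines of the shipped default file are left untouched, your copy may differ slightly, and the password is obviously a placeholder:

[options]
db_host = False
db_port = False
db_user = openerp
db_password = the-postgres-password-from-step-3
logfile = /var/log/openerp/openerp-server.log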
Once the configuration file is edited and saved, you can start the server just to check if it actually runs.
sudo su - openerp -s /bin/bash
/opt/openerp/server/openerp-server
If you end up with a few lines eventually saying OpenERP is running and waiting for connections then you are all set.
On my system I noticed the following warning:
2012-12-19 11:53:51,613 6586 WARNING ? openerp.addons.google_docs.google_docs: Please install latest gdata-python-client from http://code.google.com/p/gdata-python-client/downloads/list
The Ubuntu 12.04 packaged version of the python gdata client library is not quite recent enough, so to install a more up-to-date version I did the following (exit from the openerp user’s shell if you are still in it first):
sudo apt-get install python-pip
sudo pip install gdata --upgrade
Going back and repeating the commands to start the server resulted in no further warnings.
sudo su - openerp -s /bin/bash
/opt/openerp/server/openerp-server
If there are errors, you’ll need to go back and find out where the problem is.
Otherwise simply enter CTRL+C to stop the server and then exit to leave the openerp user account and go back to your own shell.
Step 7. Installing the boot script
For the final step we need to install a script which will be used to start up and shut down the server automatically, and also run the application as the correct user. There is a script you can use in /opt/openerp/server/install/openerp-server.init, but this will need a few small modifications to work with the system installed the way I have described above. Here’s a link to the one I’ve already modified for 7.0.
Similar to the configuration file, you need to either copy it or paste the contents of this script to a file in /etc/init.d/ and call it openerp-server. Once it is in the right place you will need to make it executable and owned by root:
sudo chmod 755 /etc/init.d/openerp-server
sudo chown root: /etc/init.d/openerp-server
In the configuration file there’s an entry for the server’s log file. We need to create that directory first so that the server has somewhere to log to and also we must make it writeable by the openerp user:
sudo mkdir /var/log/openerp
sudo chown openerp:root /var/log/openerp
Step 8. Testing the server
To start the OpenERP server type:
sudo /etc/init.d/openerp-server start
You should now be able to view the logfile and see that the server has started.
less /var/log/openerp/openerp-server.log
If there are any problems starting the server you need to go back and check. There’s really no point ploughing on if the server doesn’t start…
If the log file looks OK, now point your web browser at the domain or IP address of your OpenERP server (or localhost if you are on the same machine) and use port 8069. The url will look something like this:
http://IP_or_domain.com:8069
What you should see is a screen like this one (it is the Database Management Screen because you have no OpenERP databases yet):
What I do recommend you do at this point is to change the super admin password to something nice and strong (click the “Password” menu). By default this password is just “admin” and, knowing that, a user can create, backup, restore and drop databases! This password is stored in plain text in the /etc/openerp-server.conf file; hence why we restricted access to just openerp and root. When you change and save the new password, the /etc/openerp-server.conf file will be re-written and will have a lot more options in it.
Now it’s time to make sure the server stops properly too:
sudo /etc/init.d/openerp-server stop
Check the logfile again to make sure it has stopped and/or look at your server’s process list.
Step 9. Automating OpenERP startup and shutdown
If everything above seems to be working OK, the final step is to make the script start and stop automatically with the Ubuntu Server. To do this type:
sudo update-rc.d openerp-server defaults
You can now try rebooting your server if you like. OpenERP should be running by the time you log back in.
If you type ps aux | grep openerp you should see a line similar to this:
openerp 1491 0.1 10.6 207132 53596 ? Sl 22:23 0:02 python /opt/openerp/server/openerp-server -c /etc/openerp-server.conf
Which shows that the server is running. And of course you can check the logfile or visit the server from your web browser too.
That’s it! Next I would suggest you create a new database, filling in the fields as desired. Once the database is initialised, you will be directed straight to the new main configuration screen, which gives you a feel for the new User Interface in OpenERP 7 and shows you how easy it is to set up a basic system.
The Ubuntu UK Christmas Party
The decorations have been in the shops for months, the clocks have changed, and sooner or later we are going to have to face up to the inevitability of another Christmas; there is nothing we can do to prevent it.
Take a moment to look forward to your long weekend of seasonal festivities with close family, distant relatives, and the not-quite-distant-enough ones. Think of the present opening, the joy of seeing another pair of socks, the screams of rage that inform you that Ben 10 was last year’s hot thing and a completely inappropriate present to have given this year. Just think about the meal of curiously burned stuffing inside a not-quite-cooked turkey which you will then smother with cranberry jam for no apparent culinary purpose. Spare a moment to consider the bowl of sprouts and the fun of watching adults attempting to fool children into thinking they are edible. Perhaps after the meal someone will suggest that you all play a board game together, won’t that be fun! The best you can hope for is that they will all be asleep before the Dr Who Christmas special starts.
If these thoughts of Christmas have left you in need of a stiff drink, don’t worry you are not alone, and we have a plan. The Ubuntu UK Christmas event will be held at The Hub Islington on December 21st from 7PM. You can register your attendance here (launchpad account required). We would invite you to bring some Christmas and/or Ubuntu themed nibbles and optionally a bottle of something to ward off the cold. Take an evening out to relax with friends and steel yourself for what is to come.
PCs with Compulsorily Bundled Software Should Be Outlawed
I’ve written about the Microsoft Tax many times before and have even had a minor success with regards to getting it refunded. Now a fellow Open Source blogger and businessman, Dr Adrian Steel of Mercian Labels, is trying, so far without luck, to get the cost of an unwanted Windows License refunded from a company called Fonestop Ltd. He’s kindly providing an ongoing record of the correspondence between himself and the supplier whilst he seeks a fair refund for the software that he does not want nor require.
This example goes a long way to indicate why the bundling of software and hardware in this way is so wrong. It is incredibly hard to buy a computer in the UK that is not already infected with an inefficient, outdated, expensive, bloated and, still alarmingly, insecure operating system called Microsoft® Windows™. It is also becoming increasingly difficult to get even a partial refund due to the updated terms in the EULA that comes with version 7 of the OS (you can read most of the license agreements here):
By using the software, you accept these terms. If you do not accept them, do not use the software. Instead, contact the manufacturer or installer to determine its return policy. You must comply with that policy, which might limit your rights or require you to return the entire system on which the software is installed.
In earlier versions the statement about returning the entire system was not there. Here’s what the Vista EULA said:
By using the software, you accept these terms. If you do not accept them, do not use the software. Instead, contact the manufacturer or installer to determine their return policy for a refund or credit.
Reading Adrian’s struggle to get back the money that is rightfully his makes me quite angry. There are plenty of computer users that do not want or need Windows software when they buy a new computer. Even if they are not aware of the great Free Software operating systems such as Ubuntu or Fedora or many others, they probably already have a perfectly legal and valid CD of Windows in a drawer or cupboard anyway. Even I have a legal and valid Windows XP CD in my office; not that it ever gets used nowadays…
So what’s to be done? I really feel like starting some kind of campaign to get the lawmakers here and across the EU to make this kind of practice illegal. I as a consumer should be able to select and buy any computer I like and decide for myself if I wish to pay for a pre-installed operating system or not. That should be a choice I am free to make. Currently, apart from a few very brave and admirable vendors, I do not have this choice. And now it’s even harder to obtain a refund due to the change in the wording of Microsoft’s EULA.
These Brave and Admirable vendors deserve a mention:
- Brave because I’m sure that they will come under pressure from businesses like Microsoft to bundle their software and conform to the way that they want you to sell Computers.
- Admirable because they are standing up for something which is good and noble and may not be the most profitable course for their company to take.
As many of you know we started a website some time ago called Naked Computers to track these Brave and Admirable suppliers around the world. It’s been useful to many but it has been quite quiet recently and it could definitely do with a revamp to make it look more appealing (any WordPress Theme designers fancy knocking up a new look and feel for the site?).
In the UK there is one computer supplier that, in my humble opinion, should be applauded for their attitude: Novatech. I think that every machine they sell from their website or retail outlets is offered with or without an Operating System; it’s your choice. It’s quite interesting to look on their site and see just how expensive Windows really is: ~£70 to ~£800 or more!
Recently I noticed Novatech making a few noises on Twitter and I commented positively on their approach to selling naked computers. This was their reply to me:
@opensourcerer Thanks for recommending us, we sell all systems without operating systems as we like to give our customers a choice.
So come on you lot! Let’s try and come up with a plan, ideas and suggestions as to how to go about fixing this problem once and for all… Our company, The Open Learning Centre can host a wiki or something if needed but please use the comments here to start the ball rolling.
Are there any lawyers out there who fancy a challenge? Want to fight for Freedom and allow consumers to make their own choice rather than be forced to pay for something they frequently neither need nor want?
Finally, for those naive souls who believe that an EULA gives you some protection or guarantees, think again…
How to remove Mono from Ubuntu 9.10 Karmic Koala [Updated]
I’ve been mildly intrigued as to why the volume of background noise and character assassination that has surrounded Mono has been on the wane over the last few months. Consequently, I started wondering if there were any obvious reasons for this outbreak of pacifism in what has sometimes seemed like a debating chamber for differing groups of religious fundamentalists.
Some of it is surely to do with Microsoft’s Community Promise made back in July 2009, but I doubt that is really the only reason for the attenuation. I do wonder if Mono might just simply be losing some of its lustre. In August Blackduck reported how the amount of code being written for FOSS projects using C# was pretty negligible at just 1.33% and that growth in C# usage over a 12 month period was virtually zero.
There were also some rather nasty and personal attacks which did nothing to help our community at large nor the reputation of the individuals concerned, so maybe people have consciously, or subconsciously, decided to just shut up for a while?
Quite recently Microsoft, along with Intel, announced that they will ship Silverlight on Linux as opposed to using the Microsoft/Novell sponsored Mono project called Moonlight. OK, admittedly this announcement was only for Moblin Linux, but hey, since when has Microsoft ever been transparent about its long term objectives or plans? Perhaps Mono and Moonlight were just too heavyweight for Moblin devices (netbooks and smart-phones typically), or maybe there is more to it. It could be a very good start to a typical Microsoft "Embrace, Extend & Extinguish" strategy. Who knows? But it certainly isn’t exactly a ringing endorsement of Mono and Moonlight, is it?
The awkward question: If it’s that easy to port Microsoft Silverlight to Linux, why does the Moonlight project exist at all?
“I’m really clear about our commitment to Moonlight. I see the work we’re doing with Miguel and Moonlight as core to our strategy for delivering implementations for Linux,” says Goldfarb, protesting, perhaps, a little too much. ®
Anyhow, my personal opinion of Mono hasn’t changed much. There are no Mono applications in Ubuntu that make me go weak at the knees and get all excited; far from it in fact:
- I’ve never really had any need for Tomboy at all, and since discovering Getting Things Gnome my jotted notes and todos all go in this great little Python task-keeping application anyway. If you have used, or ever wanted to use, Tomboy in the past, however, there is now a clone written in C++ called Gnote. This is in the Karmic “universe” repository and can be installed either from Synaptic, the new Ubuntu Software Centre (now spelt correctly if you use an en_GB locale) or by typing sudo apt-get install gnote.
- When I last used F-Spot, which was probably back in Gutsy or Hardy days I reckon, it annoyed me that the application wouldn’t automatically delete the pictures off my camera after importing. GThumb did and always has; so no big deal there then. There is also a new kid on the block called Solang that is in the Karmic repos too. I haven’t tried it in anger myself yet but I’ve heard good things from others.
- Media Players/Managers? “Banshee!” I hear you cry. Well, I’ve never tried it because I don’t have Mono on my Ubuntu desktop or laptops, so I can’t say whether I like it or not as an application. On my Ubuntu machines, the only music player I have tried and actually really liked is Songbird. There are still a few features missing, but the forthcoming 1.4 release is looking like it will plug some of these gaps. Songbird looks, feels and works fine for my needs.
On the 15th October a very important figure in our community penned his own contribution to this discussion. Jeremy Allison, of Samba fame, wrote a well considered letter essentially calling on the major GNU/Linux distributions to move Mono outside of their default and core repositories. It’s something others, including myself, have discussed before, but likely with a lot less weight than Jeremy’s comments will surely carry.
… I think it is time for the Mono implementation and applications that use it to be moved into the “risky” category, until the patent situation around it is deemed to be truly safe to use by default in Free Software.
Microsoft isn’t playing games any more by merely threatening to assert patents. Real lawsuits have now occurred and the gloves are off against Free Software. Moving Mono and its applications to the “restricted” repositories is now just plain common sense.
Anyway, back to the reason for this post.
In the latest, shiniest, bestest, release of Ubuntu to date, and it really is a cracking release, the desktop version of Karmic Koala (version 9.10) contains two Mono dependent applications in the default install along with the Mono VM and associated libraries etc.
Now, this time, we have 3 ways to go Mono free:
- Visit Jo Shield’s blog and get Chicken Little Remix (CLR), which provides a solution for users who wish to use Ubuntu but would prefer it to not contain any Mono-based software. This 2nd release of CLR, based on Ubuntu 9.10, comes as a livecd with its own unique desktop wallpaper and also features replacement applications where appropriate.
- Use the KDE based Kubuntu instead of Ubuntu, which uses Gnome. (Thanks Mark for pointing out my omission in the comments below)
- Install the regular Ubuntu distribution and then remove the applications and their supporting packages*. The simple command required goes like this [Update] Thanks to Jo who mentioned the 3 libraries that should also be removed [/Update]:
sudo apt-get purge libmono* libgdiplus cli-common libsqlite0 libglitz-glx1 libglitz1
Which should reply with something similar to:
The following packages will be REMOVED
cli-common* f-spot* libart2.0-cil* libflickrnet2.2-cil* libgconf2.0-cil*
libgdiplus* libglade2.0-cil* libglib2.0-cil* libgmime2.2a-cil*
libgnome-keyring1.0-cil* libgnome-vfs2.0-cil* libgnome2.24-cil*
libgnomepanel2.24-cil* libgtk2.0-cil* libmono-addins-gui0.2-cil*
libmono-addins0.2-cil* libmono-cairo2.0-cil* libmono-corlib2.0-cil*
libmono-data-tds2.0-cil* libmono-i18n-west2.0-cil* libmono-posix2.0-cil*
libmono-security2.0-cil* libmono-sharpzip2.84-cil* libmono-sqlite2.0-cil*
libmono-system-data2.0-cil* libmono-system-web2.0-cil*
libmono-system2.0-cil* libmono2.0-cil* libndesk-dbus-glib1.0-cil*
libndesk-dbus1.0-cil* mono-2.0-gac* mono-gac* mono-runtime* tomboy*
0 upgraded, 0 newly installed, 34 to remove and 0 not upgraded.
After this operation, 47.8MB disk space will be freed.
Do you want to continue [Y/n]?
NB: This command was tested on a default installation. The purge switch is designed to remove configuration data too. If you have any important information on your system that might be dependent on these applications, please do your research and backup or copy it first. I test the command in a clean Virtual Machine build before using it on a live system: YMMV.
* If you are aware of any other packages that can, or should be removed, please let me know and I will update the post.
Depending on your vigilance or need, you may wish to install the package called Mononono which will keep a look out for you and alert you if an application tries to install any Mono components.
For those of you who do not happen to be scholars of ancient Egyptian history, the picture at the top of this article is of the Egyptian Pharaoh Akhenaten regarded by some as the first Monotheist:
Akhenaten tried to bring about a departure from traditional religion that in the end would not be accepted. After his death, traditional religious practice was gradually restored, and when some dozen years later rulers without clear rights of succession from the Eighteenth Dynasty founded a new dynasty, they discredited Akhenaten and his immediate successors, referring to Akhenaten himself as ‘the enemy’ in archival records.
Image courtesy of Wikimedia under several free licences.