Cluster update

I am delighted to say that the Raspberry Pi cluster project is now fully funded to the first target of £2,500. This means the Indiegogo fees will be 4% of the total rather than the 9% that applies to partly funded flexible campaigns. The money received via PayPal has already partially cleared, so we have been out spending some of it; here is a collection of Raspberry Pi units doing some load testing.

Initial testing

There are many ways to build a cluster and many decisions to take along the way: how to power the units, which SD cards to use, whether to overclock them, how to do the networking, how to fix them together and so on. I will try to explain some of the reasoning behind what we are doing, and what we tried and didn't like so much.

Powering the Pis

The first two criteria for powering the cluster were that it must be safe, and it must look safe. These are not the same thing at all: it is quite easy to build something with bare wires all over the place that looks a bit scary but is entirely safe, and equally possible to build something that looks great while overloading components and generating too much heat in the wrong place, producing a good-looking fire risk.

A single large transformer was one approach. The difficulty there is handling the connection from a 20A cable or rail (basically like mains flex; the current decides the wire gauge, not the voltage) down to MicroUSB. Most electronics components like a USB socket or stripboard are rated for 2.5A max, so we would end up with chunky mains-grade connectors all over the place, which looks scary even if it is entirely safe.

After a bit of experimentation we found a D-Link 7 port USB hub with a 3A power supply and decided to see how many Raspberry Pi devices we could power from it. It turns out it can do all 7, which was a bit of a surprise. We know the Pi should be given a 700mA supply to be reliable, but that figure assumes two 100mA USB peripherals plugged in and the CPU and GPU running flat out. As we are not using the USB ports and won't be using the GPU at all, our little Pi units only draw about 400mA each. This simplifies the power setup a lot: we just need several of these high-powered hubs, giving us a neat setup that is both safe and safe-looking. The power supply for the hub does get a little warm, but I have tested the current draw at the plug and we are not exceeding the rated supply.
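To make the arithmetic above explicit, here is a quick power budget for one shelf. The 400mA per-Pi figure is our own measured draw rather than a specification value, so treat this as a sketch of the reasoning, not a guarantee:

```python
# Power budget for one shelf: 7 Raspberry Pi units on a single
# D-Link hub with a 3A, 5V supply. Figures are from our own testing.
PIS_PER_HUB = 7
DRAW_PER_PI_A = 0.4    # measured draw: no USB peripherals, GPU unused
SUPPLY_RATING_A = 3.0  # hub power supply rating

total_draw_a = PIS_PER_HUB * DRAW_PER_PI_A      # 2.8 A
headroom_a = SUPPLY_RATING_A - total_draw_a     # about 0.2 A spare
utilisation = total_draw_a / SUPPLY_RATING_A    # roughly 93% of rating

print(f"Total draw: {total_draw_a:.1f} A, headroom: {headroom_a:.1f} A "
      f"({utilisation:.0%} of rated supply)")
```

As the numbers show, the margin is thin; this is exactly the concern Brian raises in the comments below.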

Networking

Initially I wanted to find out if we could do it all with WiFi. This would cut out the wires, give us a decent theoretical peak speed, and could in theory be supported by a single WiFi router. After testing Pi-compatible Wireless N dongles, though, the performance just wasn't there: the most we could get was 20Mbit/sec, whereas 74Mbit/sec was achievable with wired networking. I am not sure whether this was a limitation of the USB system or the drivers, but it became clear that wired networking would be significantly quicker. Having decided that wires were the way forward, it came to choosing switches. One big switch or lots of little ones? The price/performance ratio of small home switches is just unbeatable, so we settled on some TP-Link 8 port gigabit switches. Obviously each Pi only connects at 100Mbit (its link speed), but the uplink to the backbone switch runs at gigabit speed. Choosing the 8 port switch meant groups of 7 Raspberry Pi units with one port left over for the uplink. This approach of multiple switches has the excellent side effect that the cluster is modular: every shelf can run as a self-contained cluster of 7 devices networked together, and we then join the shelves using a backbone switch to make a bigger cluster.
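One nice consequence of the seven-nodes-plus-uplink layout is that the gigabit uplink is never oversubscribed, even in the worst case where every Pi saturates its 100Mbit link at once. A quick check, using the figures above:

```python
# Uplink check for one shelf: 7 Pi units at 100 Mbit/s link speed
# behind a single gigabit uplink port on the 8 port switch.
NODES_PER_SWITCH = 7
NODE_LINK_MBIT = 100   # Pi Ethernet link speed
UPLINK_MBIT = 1000     # gigabit uplink to the backbone switch

aggregate_mbit = NODES_PER_SWITCH * NODE_LINK_MBIT  # 700 Mbit/s worst case
oversubscription = aggregate_mbit / UPLINK_MBIT     # below 1:1 is good

print(f"Worst-case shelf traffic: {aggregate_mbit} Mbit/s "
      f"({oversubscription:.1f}:1 against the uplink)")
```

In practice the measured 74Mbit/sec per Pi is below even the 100Mbit link speed, so real contention at the uplink should be lower still.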

Physical setup

Here is the first layout attempt. It uses a 30cm x 50cm shelf, with the Pi units screwed to wooden dowels pushed into holes drilled in the shelf. There are holes drilled through for the network cables, which were snipped and re-crimped on the other side.

Pi On a Board

The router and power setup were screwed to the underside of the shelf. This setup was a bit fiddly to build: crimping network cables is time-consuming, and the dowel arrangement wasn't as neat as I wanted.

pi on the side

The Raspberry Pi doesn't really have a flat side available for mounting. I was thinking of removing the composite video and audio out connectors to produce a flat side for fixing things to, then I noticed that if I drill holes of just the right size, the composite connector makes quite a reasonable push-fit fixing for a sideways-mounted unit. Here is the shelving unit they are going to be fixed to: an IKEA Ivar set with 8 30×50cm shelves. One design goal is to use easily available parts so that other people can replicate the design without sourcing obscure or expensive components. Wood is a great material for this kind of project: it is easy to cut, drill and fix things to, and it is a good thermal and electrical insulator. I wouldn't want to accidentally put a Raspberry Pi down on a metal surface!

shelving unit

More updates will follow as the build progresses. If you have any suggestions on different approaches to any of the decisions on power/networking/fixing then do leave a comment; the design isn't set in stone and we could end up changing it if a better idea comes along. Any further contributions to the campaign would also be gratefully received, and will go towards filling up more shelves!

6 Comments

  • Chris says:

    Hi Alan, it seems like some good thinking has gone into the design already. I agree that re-crimping those RJ45s was a bit of a pain. Way to go!!!
    Chris

  • Janne says:

    What kind of cluster or HPC software do you intend to use, and what benchmark task? That really affects what kind of network topology will work for you. With a tree topology you’ll have a substantial communications latency difference between nodes in the same local group and nodes in distant ones, and it will probably limit your total performance more than the raw processing speed of each node.

    • Alan Bell says:

      Building Ubuntu from source; it is a compile farm, mostly. Building is mostly an offline activity: the nodes download the source packages and build dependencies and then build them, so I am not worried about network latency (in fact with the Pi, network latency is the least of our performance worries, by some distance!)

  • Brian Gowland says:

    On the subject of power… the idea of using the D-Link hubs for power distribution is a neat one but I’d be slightly concerned about pushing the wall wart PSUs too far. Stating the obvious but 7 x 400mA is 2.8A which, from a PSU rated at 3A, is ~93% of capacity, leaving just 7% headroom. Also that assumes the USB hubs are happy (in an electro-mechanical sense) to distribute that amount of power for an extended period. I’d be tempted to reduce the number of RPis per hub (perhaps to 5) and power the hubs from one or two beefy switched mode PSUs.

    • Alan Bell says:

      Yeah, it is a bit close, but I have been soak testing it for a while without problems. I figure there will be an engineering safety factor on top of the PSU's rated output. If I could find a 5A rated 7 port hub I would be very tempted to move to that. I do have a 5A supply that will plug into the hub; I was going to try that if the 3A transformer didn't work (yes, I know that might overdrive some bits inside the hub). So far it seems to be working and within tolerances, and it means each shelf is self-contained for power.
      The transformer is drawing about 13-14W measured on the high voltage side, so it is certainly less than a 3A load on the low voltage side.
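That mains-side measurement can be cross-checked with a quick calculation. This is a sketch only: the 80% efficiency figure below is an assumed typical value for small switched-mode supplies, not something measured on this hub:

```python
# Cross-check of the mains-side measurement: if the supply draws
# about 14 W from the wall, the 5 V output current is bounded by
# input power * efficiency / 5 V. The efficiency is an ASSUMPTION
# (a typical figure for small switched-mode wall supplies).
INPUT_POWER_W = 14.0
ASSUMED_EFFICIENCY = 0.8   # assumed, not measured
OUTPUT_VOLTAGE_V = 5.0

output_current_a = INPUT_POWER_W * ASSUMED_EFFICIENCY / OUTPUT_VOLTAGE_V

print(f"Implied output current: about {output_current_a:.2f} A")
```

Even at an impossible 100% efficiency, 14W at 5V would only be 2.8A, so the wall-side reading is consistent with staying under the 3A rating either way.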

  • Mike Morrow says:

    Have you thought of using a distributed processing model for the compiles/builds? I don’t know why you need so many of these, nor four months to do it. Parcel out instructions and source code and let ‘er rip on lots of Pi units overnight at people’s homes. They could send the results back to you as they get them. Maybe I am being naive here. Also, why the worry about network speed? Aren’t they getting source, compiling it and sending the .o files back? Those could not individually be that big, I would think. The compile time would swamp the network time. Just blue-sky’ing here after many years in the computer industry. Let me know if I am not making sense as applied to this project. I would be glad to devote overnights to builds every night and frequently 24 hours a day. I expect others would as well. $5/year operating cost is trivial. Don’t mean to upset the cart, here. Just thinking about similar projects and how they ran and their similarity to this one. I would really like to have Ubuntu on Pi so I would only have one flavor of Linux running here. Having to learn both Debian and Ubuntu is a pain. So much to learn… Good luck with it. Mike.
