6

I have seen a bunch of articles about how Raspberry Pis can be joined together to make a cluster. I am basically a 3D artist, and, you know, rendering a 10-second animation can take hours.

So if I make a cluster of 20 or more Raspberry Pis, each with 1 GB of RAM, will the end result have 20 GB of RAM? I just want to know what role the cluster plays regarding RAM. RAM is what troubles me... I'm not focused on the processor, only the RAM.

Thanks for your patience. P.S. This is my first question here.

Rehan Ullah
  • Probably outperformed by an 8 GB video card – Jasen Jun 26 '16 at 09:06
  • As answered by others, no, it doesn't really make sense. Depending on how much rendering power you need, a high-end "Scooter Computer" (or several) may do nicely for CPU rendering. If you're doing GPU rendering, then it wouldn't be good. – Nateowami Jun 27 '16 at 03:12
  • Looks like you could use Thea Renderer for Cinema 4D on a Raspberry Pi. https://www.thearender.com/site/index.php/downloads/thea-for-arm.html – Superdooperhero Feb 26 '17 at 08:12

6 Answers

23

The general consensus is that Pi clusters are a waste of bandwidth. Yes, your cluster will have access to the sum of all the processing power and RAM, but you are introducing network latency into your performance equation. If you are focused more on RAM than CPU, you could build a RAM-heavy desktop for the same price as your Pi cluster. You mentioned 20 RPi 2 devices for your cluster: 20 × $35 = $700. If you go the AMD route (less expensive than Intel for the same performance level), you could build a desktop with 32 GB of RAM for that same dollar amount.

Also, the RAM on the RPi (LPDDR2) runs at 400 MHz and can be accessed at a rate of 800 MT/s, whereas an AMD-based desktop uses RAM (DDR3) that runs at 1066 MHz and can be accessed at a rate of 2133 MT/s, about 2.5 times faster.
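To make that gap concrete, here is a rough peak-bandwidth calculation using the transfer rates above. The bus widths are my assumptions (32-bit for the Pi's LPDDR2, 64-bit single-channel for desktop DDR3), and the wider desktop bus stretches the gap well beyond the 2.5× difference in transfer rate:

```python
# Rough peak-bandwidth comparison based on the transfer rates above.
# Bus widths are assumptions: 32-bit for the Pi's LPDDR2, 64-bit
# (single channel) for desktop DDR3 -- adjust if your hardware differs.

def peak_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s = transfers per second * bytes per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

pi_bw = peak_bandwidth_gb_s(800, 32)        # ~3.2 GB/s
desktop_bw = peak_bandwidth_gb_s(2133, 64)  # ~17.1 GB/s

print(f"Pi LPDDR2:    {pi_bw:.1f} GB/s")
print(f"Desktop DDR3: {desktop_bw:.1f} GB/s")
print(f"Ratio:        {desktop_bw / pi_bw:.1f}x")
```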

All things considered, yes, building a cluster of Pis is a cool project. But if your aim is better performance, a desktop with better specs is the better solution.

tlhIngan
  • Actually, DDR3 is usually 1333 MHz or 1600 MHz. 1066 MHz is only when you are running an older CPU (that supports the maximum speed of DDR2) and you put it on a board that supports DDR3. I had this setup with an Intel Q6600 (Socket 775), which had an FSB speed of 1066 MHz. – Ismael Miguel Jun 25 '16 at 21:06
  • @IsmaelMiguel Don't confuse clock speed with transfer rate :) https://en.wikipedia.org/wiki/DDR2_SDRAM https://en.wikipedia.org/wiki/DDR3_SDRAM – tlhIngan Jun 25 '16 at 23:10
  • Yeah, thanks for the info. I now realize that was foolish on my part, but you people guided me and now I am on track :) – Rehan Ullah Jun 26 '16 at 13:10
6

Short answer: probably

It really depends on whether the workload can be parallelized. Some processes simply can't be split across the RPis and would see no benefit from a cluster. Rendering animations, however, sounds like a task that can be split up (frame by frame, for example) and would therefore benefit from a cluster.
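As a minimal sketch of what that splitting might look like (the host names and the render command below are hypothetical placeholders, not any particular render manager's interface):

```python
# Minimal sketch: split a frame range across cluster nodes.
# The node hostnames and the "render" command are hypothetical placeholders.

def split_frames(first, last, num_nodes):
    """Divide frames [first, last] into roughly equal contiguous ranges."""
    total = last - first + 1
    base, extra = divmod(total, num_nodes)
    ranges, start = [], first
    for i in range(num_nodes):
        count = base + (1 if i < extra else 0)
        if count == 0:
            break
        ranges.append((start, start + count - 1))
        start += count
    return ranges

nodes = [f"pi{i:02d}.local" for i in range(20)]   # hypothetical hostnames
for host, (s, e) in zip(nodes, split_frames(1, 250, len(nodes))):
    # Each node would render only its own slice, e.g. dispatched over SSH:
    print(f"ssh {host} 'render --scene shot.blend --start {s} --end {e}'")
```

Each node only ever holds its own slice of the work, but note that every node still needs enough RAM to load the whole scene; the RAM is not pooled.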

@tlhIngan said that a cluster introduces a lot of network latency, which is true. I don't know too much about rendering, but I think the latency would matter little here, since when rendering, the different processes probably don't need to "talk" to one another all that much.

If you would like more insight into this, I'd recommend this question and this related forum thread from the official RPi forum (though they have less to do with graphics and more with general clustering), as well as How do I build a cluster?

If you'd like to buy a setup with minimal work on your part, Idein Inc. (http://idein.jp) is building a board that would make it easier to connect 16 RPi Zeros; it would probably take care of the connections and make your desk look a little less like a rat's nest (if you can find the Zeros, as they are extremely scarce right now).

sir_ian
  • Are you talking about multi-threaded processing? I think Maya and Blender must be multi-threaded... Gonna read about them now. – Rehan Ullah Jun 25 '16 at 18:53
  • Thanks for the input. I think I will have to go the alternative way. I searched more and found that the Raspberry Pi isn't a very good way to go. – Rehan Ullah Jun 25 '16 at 19:05
  • I think the word you want is parallelizable. You won't get 20 GB because some of that RAM will be used by the system, and likewise for the network connection. While @RehanUllah is not concerned with the CPU, a Pi is slower than most recent desktops. As he came to realize, this may not be the best solution. – Steve Robillard Jun 25 '16 at 19:16
  • @SteveRobillard Thanks for the word and you are right about everything else. – sir_ian Jun 25 '16 at 19:21
  • Raspberry Pis are slow and cheap, but they have a good GPU. If what you want to do fits in that, then perhaps. Otherwise, probably not. – Thorbjørn Ravn Andersen Jun 26 '16 at 09:32
  • @SteveRobillard Yeah, I have come upon an article which shows how to make a 24-core render farm at a reasonable price of about 3200 bucks... I think that one has good cooling and is relatively better than a Pi farm. – Rehan Ullah Jun 26 '16 at 13:09
  • @ThorbjørnRavnAndersen I don't know about Maya, but for Blender at least only the Cycles render engine can use the GPU, and it does so using CUDA. It's like using the GPU as a fast processor, kind of like mining on a GPU. Blender can only use NVIDIA GPUs, so the Pi's GPU will be worthless for rendering. The CPU will not be great, and the RAM will be spread between all the Pis, so you won't be able to render anything that needs more than 1GB of RAM (see Agate's answer). – Nateowami Jun 27 '16 at 02:07
6

Probably not. There are a few issues here.

The Raspberry Pi runs the ARM architecture, and I've never seen rendering software that runs on it. The best render farm is useless if your software won't run.

While pricier, x86 has better single-threaded performance and more available software. And while the Pi's on-die RAM might have lower latency, more and faster RAM would be handier.

"So if I make a cluster of 20 or more Raspberry Pis, each with 1 GB of RAM, will the end result have 20 GB of RAM?"

No. You would run X threads on each system, each doing part of a task with Y RAM. So you could set up your render manager to run 4 tasks with up to 512 MB of RAM each, and split a render over many systems, each handling one frame.

I'd start with the software. Check what it will run on. There's no point building a Raspberry Pi cluster for software that only works on x86, and you might end up going with a proper PC and a video card if GPU acceleration gives good results with your specific software. My previous job swore by many, many x86 cores, so my answer reflects that.

As for hardware, I think the "Scooter Computer" Jeff Atwood wrote about would be a good baseline. You could go even cheaper if you wanted to sacrifice some performance for cost.

350 USD (or the price of 10 Pis) gets you:

  • i5-5200U Broadwell 2-core / 4-thread CPU at 2.2–2.7 GHz
  • 16 GB DDR3 RAM
  • 128 GB M.2 SSD
  • Dual Gigabit Realtek 8168 Ethernet
  • 4 front USB 3.0 ports / 4 rear USB 2.0 ports
  • Dual HDMI out

You'd get more than 10× the RAM and a faster x86 core with HT.

You don't get a crappy 100 Mbps Ethernet connection bottlenecked by USB.

You get reasonably fast onboard storage (which would also be nice if you needed more swap).

You get fewer threads, but with better single-threaded performance (which is nice anyway!).

I've also personally had issues with RPi installs failing; these machines have actual drives (well, SSDs) rather than slow SD cards, and would be more reliable.

Looking at all this, the Pi cluster would be a terrible option compared to one decent low-end machine.

  • "I've never seen rendering software that runs on it" - http://blender.stackexchange.com/questions/33015/can-blender-run-headless-on-an-arm-processor/33062 but look at the speed: over an hour to render that one frame? – user253751 Jun 27 '16 at 09:58
  • Blender could run on the ARM platform using the Eltechs ExaGear product... – Marietto Aug 01 '17 at 09:34
3

Of course not! Each node in your cluster needs to be able to load all of the textures, geometry, etc. So it would limit the total size of your source data to (much less than) 1 GB, just duplicated in 20 copies.

Instead, consider renting an EC2 instance on demand: https://aws.amazon.com/ec2/pricing

For example, a c3.8xlarge at $1.68 per hour will render much faster than a cluster of Pis, and will be easier to configure and set up.

(Depending on your location, that may be in the same ballpark as the electricity cost of running 20 Pis.)
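If you want to check that ballpark for your own situation, here is a small back-of-the-envelope calculation; the per-Pi power draw and electricity price are assumptions you should adjust for your hardware and location:

```python
# Rough running-cost comparison; wattage and electricity price are assumptions.
PI_WATTS = 2.1            # assumed full-load draw of one Pi, in watts
NUM_PIS = 20
PRICE_PER_KWH = 0.12      # assumed electricity price in USD/kWh
EC2_PER_HOUR = 1.68       # c3.8xlarge on-demand price quoted above

pi_cluster_per_hour = NUM_PIS * PI_WATTS / 1000 * PRICE_PER_KWH
print(f"Pi cluster electricity: ${pi_cluster_per_hour:.4f}/hour")  # ~$0.005
print(f"EC2 c3.8xlarge:         ${EC2_PER_HOUR:.2f}/hour")
print(f"Ratio: {EC2_PER_HOUR / pi_cluster_per_hour:.0f}x")
```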

Agate
  • The Pi 2 takes about 2.1 watts at full load. Multiplying by 20, that's 42 watts. Average cost of power in the US is about 12 cents/kWh. So 0.504 cents/hour, or 333 times cheaper. Australia has the highest prices in the world at $0.49 USD, but that's still only 2.058 cents/hour, or 81 times less. Not saying Pis are good for rendering... – Nateowami Jun 27 '16 at 05:37
  • @Nateowami I also think power-wise the Pis consume less, but you also have to build a cooling system for them, which can sometimes boost electricity consumption. – Rehan Ullah Jun 27 '16 at 09:30
  • @RehanUllah Actually, Pis cool passively, and don't even need a heat sink. If you were running them at full load constantly, it would lower the clock speed, in which case a passive heat sink would be helpful. But Pis are still not a good way to go, because the 1GB of RAM is very limiting. – Nateowami Jun 27 '16 at 09:37
  • @Nateowami Yeah, you are right... I am surely not going that way now. I was just trying to find out whether it was possible and beneficial, and from all the replies and info here I realize it isn't :) – Rehan Ullah Jun 27 '16 at 09:49
2

If the speed of the new Pi 3 is such (looking at MIPS reports) that it takes about 26 of them to equal one Haswell Xeon or i7, I conclude that it's cheaper to use desktop processors. My desktop has 32 GB of RAM, which is more than you get from 26 1 GB nodes, and you need less of it since the code doesn't have to be duplicated 26 times.

For the clusters I've seen using older Pis, it would take 4× as many! I think that's the case for the Pi Zero as well. So it's pointless for actual use, but a cheap way to have a platform for testing clustering software, since it really is a cluster.

JDługosz
  • For a rendering farm, you should compare the GPU. – v7d8dpo4 Jun 26 '16 at 11:02
  • +1! All this rendering would probably be no issue at all with a proper compute-oriented GPU, like an NVIDIA Quadro. In fact, probably any of the newer NVIDIA gaming GPUs with compute support would be better for any CUDA-compatible rendering software. – Drunken Code Monkey Jun 26 '16 at 13:22
2

To be honest, it depends on what you are computing. Raspberry Pis are made to be versatile and to do a lot of different things: IoT, personal computers, supercomputers, servers, etc.

If you cluster, you increase the power of your Pi setup. There are supercomputers built out of Pis to hash and process data. There are also far more powerful GPU setups that will handle graphics and big data as well.

Take cloud computing, for instance: you can essentially create clusters and supercomputers within a cloud framework.

Understand, though, that adding GPUs on Google Cloud, AWS, Azure, or Bluemix increases the price of your running instance. Many times it's as expensive, if not far more expensive, just to add a GPU instance. In Google Cloud, for instance, you can have up to 8 GPUs attached to an 8-core VM instance.

Now, take all the money you would spend to purchase all those Raspberry Pis, plus the cost of electricity, and understand that in most circumstances you are probably better off running one Raspberry Pi and simply using it to connect to cloud compute services.

There are demos of cloud computing services to try out, but pretty much none of them will let you use GPU instances on a demo account.

So I would just use one Raspberry Pi running Ubuntu MATE and connect to IBM Bluemix and/or Google Cloud in order to create clusters.

The only thing that bites with that is that app development in the cloud sucks if you need to run Xcode, because you can dream on about finding a macOS image for the cloud without purchasing your own to upload to VMs.

Unless, of course, you are creating some sort of motorized robotic cluster for physical display purposes.

That's my 2 cents.

nicholas