@Rob: thanks =)
@Saker: I'm not sure I understand your schema, but let me rephrase a bit how the SMB proxy can be used in production.
The classic setup is that your studio has a central asset server, most likely an NFS or SMB share, from which all your render nodes fetch what they need.
Now, let's say you want to use remote render nodes outside of your facility. They can be Amazon nodes, or render nodes in other datacenters. Why would you want to do that? Usually it's because you can find cheap render nodes in some remote datacenter, or you simply need too many nodes and have to spread the load across multiple providers.
OK now let's say you have a high-bandwidth low-latency link to those remote datacenters. In this case you can just set up a VPN, and mount your central asset server on the remote render nodes. If the network link is good, it should work OK.
However, in practice you don't necessarily have a perfect network link between your main facility, where your asset server is located, and the remote datacenter with your extra render nodes. So if you simply set up a VPN and mount your local asset server on the remote render nodes, file-transfer performance will be extremely low and the render nodes will be unusable (SMB and NFS are made for local networks; they are not designed for high-latency links).
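To give a feel for why latency kills it: SMB-style clients tend to issue smallish, largely synchronous reads, so each block costs a full round trip. A back-of-the-envelope calculation (the block size and RTT values below are just illustrative numbers, not measurements of any particular setup):

```python
def effective_throughput(block_size_bytes, rtt_seconds):
    """Best-case throughput when every read block waits one full round trip."""
    return block_size_bytes / rtt_seconds

# 64 KB synchronous reads over a ~0.5 ms LAN round trip vs. a 100 ms WAN link.
lan = effective_throughput(64 * 1024, 0.0005)
wan = effective_throughput(64 * 1024, 0.100)

print(f"LAN: {lan / 1e6:.0f} MB/s, WAN: {wan / 1e6:.2f} MB/s")
# The WAN case caps out well under 1 MB/s no matter how fat the pipe is.
```

The point is that the bottleneck is round trips, not bandwidth, which is why a bigger pipe alone doesn't fix it.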
So at this point you start thinking, can I just replicate my asset server to the remote datacenter so that the remote render nodes can have a local copy of the files? Usually that's not possible because you have TBs of assets, and artists edit them all the time. So, your next option is: "OK for each frame let's send the render nodes only the assets needed for the frame". That is the ideal option. The problem is, most of the time, you don't know which assets a given job is going to require.
The SMB proxy is the solution to your problem. In the remote datacenter where you have your extra render nodes, you add an Ubuntu machine where you install the SMB proxy. You mount the resulting share on the render nodes. The SMB proxy then shows all assets from your central asset server as if they were there locally. The SMB proxy just connects to your central asset server, fetches the folder and file listings, and presents them to the remote render nodes. Of course, in the remote datacenter, the SMB proxy has no files locally, but the render nodes don't know that.
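On the render-node side this is just an ordinary CIFS mount. A sketch of what that could look like (the host name "smb-proxy", the share name, and the credentials are placeholders, not values from any actual deployment):

```shell
# On each remote render node: mount the SMB proxy's share as if it were
# the central asset server itself.
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/assets
sudo mount -t cifs //smb-proxy/assets /mnt/assets \
    -o username=render,password=secret,vers=3.0
```

Since the render nodes see a normal SMB share, nothing in your render pipeline has to change.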
Then, when you send a job to a render node, the render node starts opening files on the SMB share served by the SMB proxy. What the SMB proxy does is detect which file the render node is trying to open, freeze the call, import the file from the central asset server over a high-speed file-transfer protocol, and then release the call. The render node opens the file and is never aware that it was imported on the fly.
So, to sum it up, you have a runtime asset-syncing system, which lets you present your render nodes with a unified namespace for your assets, without having to deal with actual sync issues. All your render nodes, whatever their location, think all assets are local, and when they actually need them the SMB proxy imports them locally.