Ruby Fibers Vs Ruby Threads

Ruby 1.9 fibers are touted as lightweight concurrency primitives that are much lighter than threads. I noticed a sizable impact when benchmarking an application that made heavy use of fibers, so I wondered: what if I switched to threads instead? After some time fighting with threads I decided I needed to write something specific for this comparison. I wrote a small application that spawns a number of fibers (or threads) and then reports how long that operation took. I also recorded the VM size after the operation (all created fibers and threads are still reachable, hence no garbage collection). I did not measure the cost of context switching for either approach; maybe another time.
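
The benchmark script itself isn't shown here, but a minimal sketch of the idea might look like the following; the count and the use of Benchmark.realtime are my own choices, not the original code:

require 'benchmark'

# Create COUNT fibers, then COUNT threads, keeping all of them reachable
# so nothing is garbage collected, and report the creation time for each.
COUNT = 1_000   # arbitrary; 1.9 capped thread creation far below 100,000

fibers  = []
threads = []

fiber_time = Benchmark.realtime do
  COUNT.times { fibers << Fiber.new { Fiber.yield } }
end

thread_time = Benchmark.realtime do
  COUNT.times { threads << Thread.new { sleep } }
end

puts "fibers:  #{fiber_time}s"
puts "threads: #{thread_time}s"

The VM size can then be read externally (e.g. from ps) while the fibers and threads are still referenced.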

Here are the results for creation time:

[Chart: fiber vs. thread creation time]

And the results for memory usage:

[Chart: fiber vs. thread memory usage]

Conclusion

Fibers are much faster to create than threads, and they use much less memory too. There is also a limit on the number of threads in 1.9: I maxed out at 3,070 threads, while fibers did not complain when I created 100,000 of them (though that took 203 seconds and occupied a whopping 500MB of RAM).

Comments (15)

Are fibers equivalent to Ruby 1.8 green threads, in that they can only run on one processor and all fibers will stop processing if one of them does a blocking IO operation? It might be interesting to combine threads and fibers, so you spawn a thread that then creates the fibers, stopping your main thread from ever becoming blocked.

Fibers are not even preemptive. Inside a given thread, only one fiber can run at a time. Fibers need to yield to their caller to allow other fibers to run.
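
A tiny example of that cooperative behaviour (the names here are mine):

# Fibers are cooperative: nothing runs until the fiber is resumed,
# and control only comes back when the fiber yields (or finishes).
f = Fiber.new do
  puts "step 1"
  Fiber.yield          # hand control back to the caller
  puts "step 2"
end

puts "before resume"
f.resume               # prints "step 1", stops at Fiber.yield
puts "between resumes"
f.resume               # prints "step 2", the fiber finishes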

Delegating the fibers to another thread might save you when the blocking code is Ruby-scheduler aware. But most C extensions will block the whole interpreter anyway (unless you do all your IO in a non-blocking way).
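
A rough sketch of the "fibers inside a worker thread" idea from the earlier comment, assuming the blocking call is something scheduler-aware like sleep:

# The fibers still run one at a time, but a Ruby-level blocking call
# (sleep here) inside the worker thread lets the main thread keep going.
# A blocking C extension would still stall the whole interpreter.
worker = Thread.new do
  jobs = 3.times.map do |i|
    Fiber.new do
      sleep 0.1        # Ruby-scheduler-aware blocking call
      puts "fiber #{i} done"
    end
  end
  jobs.each(&:resume)
end

puts "main thread is still responsive"
worker.join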

As a note, it appears that running resume on a fiber is faster than creating a new one:

>> a = Fiber.new { loop { Fiber.yield } }
>> Benchmark.measure { 10000.times { a.resume } }
=> #<Benchmark::Tms:0x0d7adc @label="", @real=0.109291076660156, @cstime=0.0, @cutime=0.0, @stime=0.0, @utime=0.05, @total=0.05>
>> Benchmark.measure { 10000.times { b = Fiber.new {} } }
=> #<Benchmark::Tms:0x121dbc @label="", @real=0.298281908035278, @cstime=0.0, @cutime=0.0, @stime=0.15, @utime=0.13, @total=0.28>

So I almost wonder if one could actually reuse fibers [like a fiber pool] to save on speed.
GL.
-=R
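
For what it's worth, here is a minimal sketch of that fiber-reuse idea: one long-lived fiber resumed for each job instead of a fresh fiber per job. This is just an illustration (the class name and layout are made up), not how any particular library does it:

# Reuse a single fiber for many jobs instead of allocating one per job.
class TinyFiberPool
  def initialize
    @queue = []
    @worker = Fiber.new do
      loop do
        job = @queue.shift
        job ? job.call : Fiber.yield   # park the fiber when idle
      end
    end
  end

  def schedule(&job)
    @queue << job
    @worker.resume                     # wake the reused fiber
  end
end

pool = TinyFiberPool.new
5.times { |i| pool.schedule { puts "job #{i}" } }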

I have implemented a fiber pool as part of the soon-to-be-released NeverBlock library; check it out here:

http://github.com/oldmoe/neverblock

Thanks.

I assume you turned off the GC for these tests, then?
Thanks!
-=R

Also note that since Rails will 'become thread safe', I wonder if having a few threads thrown into the mix wouldn't help things somehow. [1] A mix of fibers + a few threads. Maybe the overhead won't be too intense.
Thanks.
-=R [Mormon] :)

[1] http://weblog.rubyonrails.org/2008/8/16/josh-peek-officially-joins-the-rails-core

Nice one. I didn't turn off the GC for those tests; I need to redo them sometime soon.

I am not sure about the fibers + threads mix. I am rather thinking of a master event loop with forked worker processes; still contemplating though. I should be writing code once this NeverBlock thing is officially released.

You can see that threads [even if sleeping] add a small slowdown to Ruby:

>> Benchmark.measure { 10000.times { Fiber.new {} } }
=> #<Benchmark::Tms:0x125a48 @label="", @real=0.270713090896606, @cstime=0.0, @cutime=0.0, @stime=0.15, @utime=0.12, @total=0.27>


Single threaded, this line took 0.27 seconds.

Now add some 1000 threads [just for kicks], all sleeping:

>> 1000.times { Thread.new { sleep } }
=> 1000
>> Benchmark.measure { 10000.times { Fiber.new {} } }
=> #<Benchmark::Tms:0x10c4a8 @label="", @real=0.588271141052246, @cstime=0.0, @cutime=0.0, @stime=0.32, @utime=0.26, @total=0.58>

0.58 seconds.

So I think you're right that threads do add at least a small bit of baggage and overhead.

-=R

I think having more than one thread in the system will add a sizable overhead: it kick-starts the thread scheduler, which would not be needed at all otherwise.

Yeah, single threaded is probably fastest [I once wondered whether it would be possible to rip the thread-handling code out of core, run everything async, and disallow thread creation].
That being said, the one reason it would make sense to have 'a few' worker threads is, as you mentioned once, that if one thread does something computationally intensive it would stall all other outstanding requests, so it may end up being useful. I haven't studied the effect of having a 'few' threads in depth [I know that having a lot of them is definitely bad, and that having one is best, but I'm unsure of the cost of a few. In my own tests it doesn't seem to hurt all that much, but I haven't researched it thoroughly.]
Good luck.
Let me know how or if I can help.
-=R

I am worried that having multiple threads will give me very little benefit with the GIL in action; I won't be able to use more than one processor anyway. That is why I am more inclined towards a worker-process model. I am wondering if I could use SysV IPC for communication: the idea is a master process with an event loop and a group of workers, each with its own event loop, communicating via pipes/shared memory/files depending on request/response size.
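
As a very rough sketch of that master/worker shape (the pipe-per-worker layout and the names here are my own guesses, not a design decision):

# Fork a few workers, each with its own pipe back to the master.
# Shared memory or files could replace the pipes for larger payloads.
# fork is Unix-only.
WORKERS = 2

pipes = WORKERS.times.map do
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    # ... a worker would run its own event loop here ...
    writer.puts "worker #{Process.pid} ready"
    writer.close
  end
  writer.close
  [pid, reader]
end

pipes.each do |pid, reader|
  puts reader.gets
  reader.close
  Process.wait(pid)
end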

I still haven't written a line of code; I'm only testing Francis' HTTP server for EM (which is so, so fast). I would like to use it as the basis for the master process, which should be the only process that listens on a socket.

I would love it if you helped out in any way. I would love to help with your generational GC myself, if only with testing.

hi folks,

What exactly is the technical difference between a thread and a fiber?

I thought threads were pieces of code with a shared address space, and that I could create threads at user level or at the OS level, such as with pthreads?

Sana

A fiber is basically a thread, but it doesn't switch execution with other fibers; you have to manually control when it passes control back to the "parent" fiber.
See http://www.infoq.com/news/2007/08/ruby-1-9-fibers
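
For a concrete example of what "passing control back" looks like: resume runs the fiber until it hits Fiber.yield, and the yielded value becomes the return value of resume in the caller.

# A fiber used as a generator: each resume runs until the next yield.
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a      # hand the next value back to the caller
    a, b = b, a + b
  end
end

3.times { puts fib.resume }   # prints 0, 1, 1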

hi oldmoe,

It's quite like Kilim, written for Java, or the actor-based model of Scala, if I got it right at first glance.

Or perhaps an advanced stage of NeverBlock would end up like the above two.

I think NeverBlock has been released now, or not? If not yet, I would like to collaborate.

Reach me at: abhishek.manocha@gmail.com