Matz is not a threading guy

Published on November 08, 2012 by Jesse Storimer

I attended my very first RubyConf last week. A few common themes kept coming up over the course of the conference: among other things, JRuby, threading, the GVL, and MRI 2.0.

We got to see a few talks about the new features in MRI 2.0. One of the oft-discussed features was refinements. If you haven't seen this yet, Magnus Holm wrote up a nice explanation. In a sentence, it allows you to monkey patch methods within a specific context only, rather than globally.
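To make that one-sentence description concrete, here's a minimal sketch of the feature (the module and method names are my own, invented for illustration):

```ruby
# A refinement: String#shout exists only where the refinement is active.
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

# Calling "hello".shout up here would raise NoMethodError --
# the monkey patch isn't global.

using Shouting  # activate the refinement for the rest of this scope

"hello".shout  # => "HELLO!"
```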

Some people are excited about refinements. However, it was brought up at RubyConf, more than once, that while MRI 2.0 is getting refinements, its concurrency story still hasn't received any attention. Brian Ford's talk gives a good overview of this, but I think he also captured it with this tweet:

Mike Perham also raised some good points about the possible gains that MRI 2.0 could get if it addressed this.

Enter Matz

At this year's RubyConf they opted to leave the Q&A with Matz until the very end. This left time for the issue to bubble up in people's minds, and Matz actually addressed it [1].

The future of threading in MRI

Early on in the Q&A, Matz was asked about the future of threading in MRI. He began by saying that his thinking has changed over the years and that he's not a fan of threads. He does, however, like the Actor model.

He continued:

I don't consider myself as the threading guy, so I don't think I can make the right decision about the Actor library or the threading library.

Matz said he'd prefer to see the future of concurrency in MRI decided by the community: discussions, gems, and a general consensus emerging from it.

This is interesting because he seemed very confident about the design of Ruby in other areas. But when it comes to threads, he's not sure. Perhaps this is why MRI is falling behind when it comes to threading and multi-core utilization.

Matz's answer led to this take on MINASWAN:

Gets me every time :)

The future of the GVL

A little bit later, he was asked about the future of the GVL, a hot topic of late in the Ruby community.

Matz said that Koichi (author of the MRI 1.9 VM) once removed the GVL and replaced it with fine-grained locks. That ultimately made MRI slower. Presumably, multi-threaded programs could win that speed back several times over, but the base case, single-threaded programs, got slower.

He also mentioned that the C extension API makes thread safety a big problem, and that having thread-safe data structures is very hard.
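To make the GVL's trade-off concrete, here's a small sketch (the prime-counting workload is just an illustration of my own): this code is correct on any implementation, but under MRI's GVL only one thread executes Ruby code at any instant, so the work is concurrent without being parallel. On an implementation without a GVL, the same code can use two cores.

```ruby
# CPU-bound work split across two threads. Under MRI's GVL only one
# thread runs Ruby code at a time, so adding threads doesn't add cores
# for this kind of workload.
def count_primes(range)
  range.count { |n| n > 1 && (2..Math.sqrt(n)).none? { |d| (n % d).zero? } }
end

threads = [(2..5_000), (5_001..10_000)].map do |range|
  Thread.new { count_primes(range) }
end

total = threads.map(&:value).reduce(:+)  # => 1229 primes below 10,000
```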

Matz <3 Processes

The conversation then moved to processes and the new CoW-friendly GC in MRI 2.0. Matz said he really loves the architecture of Unicorn.

He said:

Using multiple processes is the best way to do concurrency in MRI for the near future.

Personally, I <3 Unicorn. My RubyConf talk was about Unicorn. I think it's great, and it's good to know that Matz shares my opinion.
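In that spirit, here's a hedged sketch of the multi-process approach, a toy version of what a preforking server like Unicorn does at much larger scale: fork a child per chunk of work and pass results back over a pipe. (Fork is only available on Unix-like systems; the summing workload is a stand-in of my own.)

```ruby
# Fork one child per chunk of work. Each child is a full copy of the
# process with its own interpreter (and its own GVL), so the chunks can
# run on separate cores. Results come back through pipes via Marshal.
jobs = [(1..100), (101..200)]

children = jobs.map do |range|
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    writer.write(Marshal.dump(range.reduce(:+)))  # the "work"
    writer.close
  end
  writer.close
  [pid, reader]
end

results = children.map do |pid, reader|
  value = Marshal.load(reader.read)
  reader.close
  Process.wait(pid)  # reap the child
  value
end

total = results.reduce(:+)  # => 20100, i.e. 1 + 2 + ... + 200
```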

What about threads?

The other implementations have proven that threading in Ruby is viable. Both JRuby and Rubinius (2.0) run native threads without a GVL, offering real parallel threading.

By the sounds of it, MRI's threading has no improvements planned over the next few releases. Those releases will probably span the next few years. If threading is really the hot-button issue that it seems to be, what will the community do?

Matz seems open to a change in concurrency models for MRI (he said he'd be open to an Actor library in stdlib), if it comes from the community. Will someone step up and fix this for everyone? Celluloid is really gaining momentum; will it take this crown?

Or maybe in a few years' time the other implementations will have their kinks ironed out, and we'll all be using Rubinius or JRuby. Will MRI simply be a reference implementation in the future?

These questions will surely be answered over the next few years. Exciting times.

  1. You can actually watch the livestream of the Q&A on the confreaks channel. The relevant bits are at 2:53:30 and 3:18:40.