Steve Vinoski thinks to deflate my arguments with suppositions and presumptions, which I cannot simply let stand.
(Sorry, Steve-O, but I think you're out in left field on this one. I'm happy to argue it further with you over beer, but if you want the last word, have at it, and we'll compare scores when we run into each other at the next conference.)
Steve first takes aim at my comparison of the Erlang process model to the *nix process model:
First, Ted says:
Erlang’s reliability model–that is, the spawn-a-thousand-processes model–is not unique to Erlang. In fact, it’s been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) n number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of these processes fails (due to corrupted memory access, operating system fault, or what-have-you), the process can simply die and be replaced by a new child process.
There’s really no comparison between the UNIX process model (which BTW I hold in very high regard) and Erlang’s approach to achieving high reliability. They are simply not at all the same, and there’s no way you can claim that UNIX “offers that same kind of reliability and robustness” as Erlang can. If it could, wouldn’t virtually every UNIX process be consistently yielding reliability of five nines or better?
What Steve misses here is that just because something can, doesn't mean it does. Processes in *nix are just as vulnerable to bad coding practices as are processes in Windows or Mac OS X, and let's be very clear: the robustness and reliability of a system is entirely held hostage to the skill and care of the worst programmer on the system. There is a large difference between theory and practice, Steve, and whether somebody takes *nix up on that offer depends a great deal on how much they're interested in building robust and reliable software.
This is where a system's architecture becomes so important--architecture leads developers down a particular path, enabling them to fall into what Rico Mariani once described as "the pit of success", or what I like to call "correctness by default". Windows leads developers down a single-process/multi-thread-based model, and UNIX leads developers down a multi-process-based model. Which one seems more robust and reliable by default to you?
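To put some code behind that: the master/worker model I keep referring to is not exotic; here's a rough sketch of it in Java, for illustration. (The worker command line is a placeholder, not a real worker binary, and a production supervisor would loop forever respawning children rather than run a single generation.)

```java
import java.io.IOException;

// A rough sketch of the *nix master/worker model: a master spawns N
// child processes and monitors them. The worker command is a placeholder;
// a real server would spawn its actual worker binary and respawn any
// child that dies.
public class Supervisor {
    private final String[] workerCmd;
    private final int poolSize;

    public Supervisor(int poolSize, String... workerCmd) {
        this.poolSize = poolSize;
        this.workerCmd = workerCmd;
    }

    // Spawn one generation of workers, wait for each to exit, and report
    // how many exited cleanly. A production supervisor would restart each
    // child as soon as waitFor() returns instead of counting exits.
    public int runOneGeneration() throws IOException, InterruptedException {
        Process[] children = new Process[poolSize];
        for (int i = 0; i < poolSize; i++) {
            children[i] = new ProcessBuilder(workerCmd).start();
        }
        int clean = 0;
        for (Process child : children) {
            if (child.waitFor() == 0) {  // monitor: block until the child exits
                clean++;
            }
        }
        return clean;
    }
}
```

The point isn't that this toy is production-grade; it's that the spawn-and-monitor shape falls naturally out of the platform.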
(By the way, Erlang's model is apparently neither processes nor threads; according to Wikipedia's entry on Erlang,
Erlang processes are neither operating system processes nor operating system threads, but lightweight processes somewhat similar to Java's original “green threads”. Like operating system processes (and unlike green threads and operating system threads) they have no shared state between them.
Now, I grant you, Wikipedia is about as accurate as graffiti scrawled on the wall, but if that's an incorrect summation, please point me to the documentation that contradicts this and let's fix the Wikipedia entry while we're at it. But in the meantime, assuming this is correct, it means that Erlang's model is similar to the CLR's AppDomain construct, which has been around since .NET 1.0, and Java's proposed "Isolate" feature which has yet to be implemented.)
(Oh, and if the argument here is that Erlang's reliability comes from its lack of shared state between threads, hell, man, that's hardly a difficult architecture to cook up. Most transactional systems get there pretty easily, including EJB, though programmers then go through dozens of hoops to try and break it.)
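To show just how easy that architecture is to cook up, here's a shared-nothing "process" sketched in plain Java: all mutable state lives on the actor's own thread, and the only way in is the mailbox. (This illustrates the discipline, not how Erlang actually implements it; the `Counter` name and string messages are made up for the example.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A shared-nothing "process": its state is touched only by its own
// thread, and other threads communicate with it solely via messages.
public class Counter {
    private final BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(64);
    private int count = 0;  // touched only by the actor's own thread

    private final Thread loop = new Thread(() -> {
        try {
            while (true) {
                String msg = mailbox.take();    // block for the next message
                if (msg.equals("stop")) return;
                if (msg.equals("inc")) count++;
            }
        } catch (InterruptedException ignored) {
            // shutting down
        }
    });

    public Counter() { loop.start(); }

    // The only public surface: drop a message in the mailbox.
    public void send(String msg) throws InterruptedException {
        mailbox.put(msg);
    }

    // Wait for the actor to finish; Thread.join() provides the
    // happens-before edge that makes reading count safe here.
    public int result() throws InterruptedException {
        loop.join();
        return count;
    }
}
```

No locks around the state, no shared mutable anything, and about thirty lines of code.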
Next, Steve makes some interesting fundamental assumptions about "high reliability":
Obviously, achieving high reliability requires at least two computers. On those systems, what part of the UNIX process model allows a process on one system to seamlessly fork child processes on another and monitor them over there? Yes, there are ways to do it, but would anyone claim they are as reliable and robust as Erlang’s approach? I sure wouldn’t. Also, UNIX pipes provide IPC for processes on the same host, but what about communicating with processes on other hosts? Yes, there are many, many ways to achieve that as well — after all, I’ve spent most of my career working on distributed computing systems, so I’m well aware of the myriad choices here — but that’s actually a problem in this case: too many choices, too many trade-offs, and far too many ways to get it wrong. Erlang can achieve high reliability in part because it solves these issues, and a whole bunch of other related issues such as live code upgrade/downgrade, extremely well.
I think you're making some serious assumptions about the definition of "high reliability" here, Steve--building reliable software no more depends on having at least two computers than it does on having at least two power sources, two network infrastructures, or two continents. Obviously, the more reliable you want to get with your software, the more redundancy you want to have, but that's a continuum, and one man's "high" reliability is another man's "low" reliability. Often, having just two processes running is redundant enough to get the job done.
As for UNIX's process model making it seamless to fork child processes "over there" and monitor them "over here", I know other languages that have supported precisely that since 1995, SR (Synchronizing Resources) being one of them. (SR was later adapted to the JVM to become JR, and the reason I'm aware of them is because I took a couple of classes from both languages' creator, Ron Olsson, from UC Davis.)
Frankly, I think Steve's reaching with this argument--there's no reasoning here, just "I sure wouldn't [claim that they are as reliable and robust as Erlang's approach]" as persuasion. You like Erlang's ability to spin off child processes, Steve, and that's fine, but let's not pretend that Erlang's doing anything incredibly novel here--it's just taking the UNIX model and incorporating it directly into the language.
And that's part of the problem--any time we incorporate something directly as part of the language, there's all kinds of versioning and revision issues that come with it. This, to my mind, is one of Scala's (and F#'s and Lisp's and Clojure's and Scheme's and other composite languages') greatest strengths, the ability to create constructs that look like they're part of the language, but in fact come from libraries. Libraries are much much easier to revise and adapt right now, largely because we know how to do it, at least compared against what we know about how to do it with languages.
Next, Steve hollers at me for being apparently inconsistent:
There is no reason a VM (JVM, CLR, Parrot, etc) could not do this. In fact, here’s the kicker: it would be easier for a VM environment to do this, because VM’s, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture.
In your original posting, Ted, you criticized Erlang for having its own VM, yet here you say that a VM approach can yield the best solution for this problem. Aren’t you contradicting yourself?
I do criticize Erlang for having its own VM (though I think it's not a VM, it's an interpreter, which is a far cry from an actual VM), and yet I do believe that VMs can yield the best solution for the problem. The key here is simple: how much energy has gone into making the VM fast, robust, scalable, manageable, monitorable, and so on? The JVM and the CLR have (literally) thousands of man-months sunk into them to reach high levels in all those areas. Can Erlang claim the same? How do I tune Erlang's GC? How do I find out if Erlang's GC is holding objects longer than it should? How do I discover if an Erlang process is about to die due to a low-memory condition? Can I hold objects inside of a weak reference in Erlang? All of these were things developed in the JVM and CLR in response to real customer problems, and once done, were available to any and all languages that run on top of that platform. In order to keep up, Erlang must sink a significant amount of effort into these same scenarios, and I'm willing to bet that Erlang didn't get feature "X" unless Ericsson ran into the need for it themselves.
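For the curious, here's what a couple of those hooks look like from plain Java, with nothing but the standard platform libraries: heap usage via the MemoryMXBean, and weak references the collector is allowed to clear. (The `JvmHooks` class is just a name I made up for the sketch.)

```java
import java.lang.management.ManagementFactory;
import java.lang.ref.WeakReference;

// Two of the JVM facilities listed above, driven from plain Java.
public class JvmHooks {
    // Current heap consumption, as any JMX console would report it.
    public static long usedHeapBytes() {
        return ManagementFactory.getMemoryMXBean()
                .getHeapMemoryUsage().getUsed();
    }

    // A weak reference to a strongly-reachable object is never cleared,
    // so this always returns true; drop the strong reference, and the
    // collector becomes free to clear it behind your back.
    public static boolean stronglyReachableSurvives() {
        byte[] data = new byte[1024];
        WeakReference<byte[]> ref = new WeakReference<>(data);
        System.gc();  // only a hint to the collector
        return ref.get() == data;
    }
}
```

Every one of those calls ships with the platform, available to any language running on top of it.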
By the way, the same argument applies to Ruby, at least until you start talking about JRuby or IronRuby. Ditto for Python up until IronPython or Jython (both of which, I understand, now run faster than the native C Python interpreter).
Steve continues attacking my VM-based arguments:
It would be relatively simple to take an Actors-based Java application, such as that currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM constuction[sic]/teardown being handled entirely by underlying infrastructure) with little to no impact on the programming model.
Would it really be “relatively simple?” Even if what you describe really were relatively simple, which I strongly doubt, there’s still no guarantee that the result would help applications get anywhere near the levels of reliability they can achieve using Erlang.
Actually, yes, it would, because Scala's already done it. Actors is a library, not part of the language, and as such, is extensible in ways we haven't anticipated yet. As for creating and tearing down JVMs automatically, again, JR has done that already. Combining the two is probably not much more than talking to Prof. Olsson and Prof. Odersky for a while, then writing the code for the host; if I get some time, I'll take a stab at it, if nobody's done it before now. More importantly, the lift web framework is seeing some pretty impressive scalability and performance numbers using Actors, thanks to the inherent nature of an Actors model and the intrinsic perf and scale capabilities of the JVM, though I don't know how much anybody's measured its reliability or robustness yet. (How should we measure it, come to think of it?)
Best part is, the IT department doesn't have to do anything different to their existing Java-based network topology to start taking advantage of this. Can you say the same for Erlang?
As to Steve’s comment that the Erlang interpreter isn’t monitorable, I never said that--I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR both have gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what’s going on at the process level ("Is it up? Is it down?") but also what’s going on inside the system ("How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?" and so on). Nothing says that Erlang--or any other system--can’t do that, but it requires that the Erlang developer build that infrastructure him- or herself, which usually means it’s either not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.
I know what you meant in your original posting, Ted, and my objection still stands. Are you saying here that all Java and .NET applications are by default network-monitoring-friendly, whereas Erlang applications are not? I seem to recall quite a bit of effort spent by various teams at my previous employer to make sure our distributed computing products, including the Java-based products and .NET-based products, played reasonably well with network monitoring systems, and I sure don’t recall any of it being automatic. Yes, it’s nice that the Java and CLR guys have made their infrastructure monitorable, but that doesn’t relieve developers of the need to put actual effort into tying their applications into the monitoring system in a way that provides useful information that makes sense. There is no magic here, and in my experience, even with all this support, it still doesn’t guarantee that monitoring support will be done to the degree that the IT support staff would like to see.
And do you honestly believe Erlang — conceived, designed, implemented, and maintained by a large well-established telecommunications company for use in highly-reliable telecommunications systems — would offer nothing in the way of tying into network monitoring systems? I guess SNMP, for example, doesn’t count anymore?
(Coincidentally, I recently had to tie some of the Erlang stuff I’m currently working on into a monitoring system which isn’t written in Erlang, and it took me maybe a quarter of a workday to integrate them. I’m absolutely certain it would have taken longer in Java.)
Every JVM and CLR process is, by default, network-monitoring-friendly. Java 5 introduced JMX, and the CLR has had PerfMon and WMI hooks in it since Day One. Can't make that any clearer. Dunno what kind of efforts your previous employer was going through, but perhaps those efforts were back in the Java 1.4 and earlier days, when JMX wasn't a part of the platform. Frankly, whether the application you're monitoring hooks into the monitoring infrastructure is not really part of the argument, since Erlang doesn't offer that, either. I'm more concerned with whether the infrastructure is monitoring-friendly.
Considering that most IT departments are happy if you give them any monitoring capabilities, having the infrastructure monitoring-friendly is a big deal.
And Steve, if it takes you more than a quarter of a workday to create an MBean-friendly interface, write an implementation of that interface and register the object under an ObjectName with the MBeanServer, then you're woefully out of practice in Java--you should be able to do that in an hour or so.
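For anyone who wants to try the hour-long exercise, the whole thing looks something like this. (The `RequestStats` bean and the `com.example` ObjectName are made-up names for the example; the pattern itself, an interface whose name ends in "MBean" registered with the platform MBeanServer, is the standard JMX recipe.)

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// The standard-MBean naming convention: for a class named RequestStats,
// JMX looks for a public interface named RequestStatsMBean.
public interface RequestStatsMBean {
    long getRequestCount();  // surfaces as the "RequestCount" attribute
}

// A trivial implementation of the management interface.
class RequestStats implements RequestStatsMBean {
    private long count = 0;
    public synchronized long getRequestCount() { return count; }
    public synchronized void increment() { count++; }
}

class Monitoring {
    static final String NAME = "com.example:type=RequestStats";

    // Register the bean with the platform MBeanServer; from here on,
    // jconsole (or any other JMX client) can read the attribute.
    static void register(RequestStatsMBean bean) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(bean, new ObjectName(NAME));
    }

    // Read the attribute back through JMX, the same way a console would.
    static long readCount() throws Exception {
        return (Long) ManagementFactory.getPlatformMBeanServer()
                .getAttribute(new ObjectName(NAME), "RequestCount");
    }
}
```

That's the entire surface area: an interface, an implementation, and one `registerMBean` call.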
More to the point, though, if Erlang ties into SNMP out of the box with no work required by the programmer, please tell me where that's doc'ed and how it works! I won't claim to be the smartest Erlang programmer on the block, so anywhere you can point to facts about Erlang that I'm missing, please point 'em out!
Finally, Steve approaches the coup de grace:
But here’s the part of Ted’s response that I really don’t understand:
So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby’s or any other native-implemented language) is a non-starter for me becomes more clear.
Ted, first you state that an execution engine could (emphasis mine) “easily adopt the model that gives Erlang its reliability,” and then you say that it’s “a lot more work” for anyone to write an Erlang application that can be monitored and managed? Aren’t you getting those backwards? It should be obvious that in reality, writing a monitorable Erlang app is not hard at all, whereas building Erlang-level reliability into another VM would be a considerably complicated and time-consuming undertaking.
Yes, the JVM could easily adopt the multi-process model if it chose to. (Said work is being done via the Java Isolates JSR.) The CLR already does (via AppDomains). I mean what I say when I say it. If you have facts that disagree, please cite them. You've already stated that Erlang hooks into SNMP, so please, if you want to get the same degree of monitoring and management that the JVM and CLR have, write the full set of monitoring hooks to keep track of all the same things the JVM and CLR track inside their respective VMs and expose them via SNMP. If you're looking for the full set, look either in the javax.management package JavaDoc and take every interface that ends in "MBean", or walk up to any Windows machine with the .NET framework installed and look at the set of PerfMon counters exposed there.
If you really want to prove your point, let's have a bake-off: on the sound of the gun firing, you start adding the monitoring and management hooks to the Erlang interpreter, and I'll add the spin-a-process-off hooks to the CLR or the JVM, and we'll see who's done first. Then we'll release the whole thing to the world and both camps will have been made better for the experience.
Or maybe you could just port Erlang to the JVM or CLR, and then we'll both be happy.